Principles_Of_Baking
This_is_How_Milk_Affects_Bread_Dough_How_to_Use_Milk_in_Breadmaking.txt
how's it going everyone, i hope you're having an awesome day. welcome to the channel, i'm the ChainBaker, and in today's video we'll look at the effects that milk has on bread dough. i'll show you when, why, and how to use it, so let's go to the kitchen and have a closer look.

so here are the milks that we're going to use and compare against each other; let's see how they stack up. the first one is regular full fat cow's milk at about four percent fat; this is what you would normally use in your enriched dough. the second one is full fat cow's milk powder, or powdered milk, most commonly used in bakeries because of its long shelf life, but you can certainly use it at home, though it comes out more expensive in my experience. and lastly, i just wanted to see how plant milk compares; this is roasted unsweetened almond milk, but you can use any other plant milk with similar results. but before we start writing bread recipes that contain milk we need to consider some numbers, because sometimes it's not as simple as swapping out water. cow's milk can contain anywhere from 0.1 to 4 percent fat, skim milk of course being on the lower end and full fat milk on the higher end. same goes for sugar, which can be from 3 to 5 percent. but most importantly, milk is only 90 percent water. milk powder, on the other hand, is way fattier and has way more sugar; it also contains a little bit of water, but because we're diluting it in water, it in fact ends up contributing less fat and sugar than full fat milk. lastly, the almond milk: the one i had contains 1.1 percent fat, 0 percent sugar, and is made up of 98 percent water.

so with that out of the way, we'll make four breads. one with no milk, just flour, yeast, salt, and water. the second one will be made with flour, yeast, salt, and full fat milk; of course we had to raise the amount of milk because it is only 90 percent water. number three is made with milk powder; it's the same recipe as number one, but we are adding the milk powder at a 1 to 10 ratio against the water, so no adjustments in hydration are needed this time. finally, number four, the almond milk dough; i increased the amount of almond milk by 1 gram to compensate for the 98 percent water content, although i could have just left it, it's no big difference. all the breads will be made exactly the same way: a quick three minutes of kneading, then fermenting for one hour, a fold, another one-hour proof, then final shaping, a final proof of around two hours, and then baking. then we'll leave them to cool down and compare them. this is not a recipe video, so i'm not going to talk you through the steps of making these doughs; if you're watching this video i assume you already know how to do that. instead, while i'm making the dough, i'm going to talk about the use of milk and its effects.

before moving on i just want to mention that there are other kinds of milk out there as well, like goat's milk or buffalo milk; they all have different nutritional benefits, fat content, sugar content, and, not to forget, flavor. buffalo milk is practically unobtainable where i live, and goat's milk, although you can buy it, is not the most commonly used milk in bread making. so why use milk at all? it has, of course, nutritional benefits: protein, calcium, vitamin B12. but for most of us, when making bread, nutritional benefits are not the first thing that comes to mind. milk is most commonly used in sweet and rich doughs, be it your burger buns, sweet dinner rolls, cinnamon buns, and so on, and most of the time milk would be used alongside other ingredients like fat, sugar, and eggs; that's why we rarely see the effects of milk by itself. of course, there are regular loaves of white bread made with just milk and no other extra ingredients, and that's exactly what we're making today, just to see how milk itself affects the dough. that is why i looked at the fat and sugar content of milk earlier in the video. fat inhibits gluten formation; it weakens the gluten and makes the dough more extensible, which means it can puff up more evenly. sugar helps with caramelizing the crust of the bread, and both sugar and fat tenderize the crumb, making it softer and more springy. so even though the content of fat and sugar is relatively low in milk, it can still have a noticeable effect on the final bread, and powdered milk acts in a similar way.

but let's get back to nutrition for a bit and talk about plant-based milk. the almond milk i used was full of vitamins; that is because they can be added during the manufacturing process. so while it doesn't contain much fat or sugar, and it can't give us the texture and crust that cow's milk would, at least it's quite nutritious. and there are ways you can adjust a plant-based milk recipe to get a more uniform crumb and a more caramelized crust: you can simply add some plant-based fat and some sugar to it. let's talk about the powdered milk for a second. it is most commonly used in bakeries out of convenience, because of its longer shelf life; it also takes up less space. it can be mixed with water at different ratios; generally it's about one to ten, so one part milk powder to ten parts water. but not all powdered milk is created equal, so i'm sure different milk powders can affect the dough differently. the one i used was full fat milk powder, and i thought the results would be just like using fresh full fat milk, but as we'll find out later, it's not exactly the same. it is also more expensive than using fresh milk: 100 grams cost three pounds, and that makes only one liter; that's about four dollars a quart for my american friends, which is almost three times more expensive than fresh milk. perhaps you could find cheaper milk powder, but i couldn't.

let's get back to our four breads for a second. so far they have gone through two hours of bulk fermentation with one fold in between. the bread with milk is rising more slowly than the others, but that is not because of the milk; the dough came out a little bit cooler after i kneaded it, so let's not blame the milk for the volume of the final loaf. that's not the point of this video anyway; we're showing the effects of milk, we're certainly not trying to see which one of these breads will be best. they can all be adjusted and made in a hundred different ways; i only wanted to bake them more or less at the same time. so we are doing the final shaping of the loaves. no pre-shaping step is needed, because we're only making one loaf out of one dough ball and the dough is tight enough. they all go in the same size baking tins with a little bit of paper to prevent any sticking. now we're going to cover them and leave them for the final proof; it'll take around two hours. the two breads made with milk powder and with almond milk were rising more rapidly, so i preheated my oven to 180 degrees celsius, fan off, that is 355 fahrenheit for my american friends. the powdered milk loaf and the almond milk loaf are going in first; the other two were a little bit sluggish, my milk must have been extra cold today, but now these two are ready to go in the oven as well. and what took me the whole day has taken you only about seven minutes.

there we have it, all our breads have been fully baked, and they do look quite different from the top. i'll let these cool down for a couple of hours, then we'll compare them properly. and here they are, our four little loaves of white bread, made with no milk, full fat milk, powdered milk, and almond milk. just as i said earlier, the sugars in the milk help with caramelizing the crust, so naturally the fresh milk loaf and the powdered milk loaf were the darkest ones. the bread made with water is our benchmark; it looks exactly how you would expect it to look, an even color, not too dark, and a rather soft crust. next up, the full fat milk bread: the crust is clearly darker, more caramelized, and it feels a little bit more crispy. the powdered milk loaf also has a darker crust, but it's not as uniformly colored; that of course could be my oven's fault. the crust on this one is also nice and crispy. and finally the almond milk loaf: it looks quite like the loaf with no milk, it has a light colored crust, and although it has some cracking on the surface, it feels much the same. but what they look like from the outside is only part of the story, so let's cut these open and see what's in the middle.

of course, we shouldn't be expecting a massive difference between them; like i said, milk does not alter the bread that much. it's when you use it together with other ingredients, like more fat and sugar and perhaps eggs, that you really see a difference. the difference in the size of these loaves is not down to the ingredients we used; it is only due to fermentation. but let's have a closer look at the crumb and do the all-important taste test. the non-milk bread is nice and springy, and it tastes just like regular white bread, no surprises there. the crumb of the next bread looks clearly different: all the little air pockets are spaced out evenly. this is what we call a uniform crumb, and milk will do that; that's the kind of crumb you expect in your regular burger bun. when it comes to taste, it is a little bit sweeter. next up, the powdered milk loaf, and this has a more irregular crumb; it looks similar to the first loaf, but it does feel more tender, tastes a little bit sweeter, and it has more of a milky taste, a milky smell as well. that is because the milk powder itself smells and tastes stronger, still kind of concentrated even though we diluted it in water. but saying that, the flavor is by no means strong; it's very faint. finally, the almond milk loaf: the crumb feels and looks quite like the one made with milk powder, and it does not taste of almonds at all.

so what's the verdict here? milk by itself does not make a massive difference, but you can tell it apart; it makes for a more uniform crumb, but saying that, there are other methods of achieving this. when it comes to dairy milk, i think using fresh milk is the best option, and plant milk is a great alternative for people who don't eat dairy. so do you use milk in bread making? what's your favorite milk dough recipe? let me know down in the comments. if you want to see more videos like this, click right here; to subscribe to the channel, click over here. thank you for watching, i'll see you in the next one.
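The substitution arithmetic described above (full fat milk is roughly 90% water, almond milk roughly 98%, milk powder diluted at 1:10) can be sketched in a few lines of Python. The function name and the 300 g water figure are illustrative assumptions, not values from the video:

```python
def liquid_for_water(water_g, water_fraction):
    """Grams of a liquid needed to supply `water_g` grams of actual water."""
    return water_g / water_fraction

# a hypothetical recipe calling for 300 g of water
water = 300
full_fat_milk = liquid_for_water(water, 0.90)  # milk is ~90% water
almond_milk = liquid_for_water(water, 0.98)    # almond milk is ~98% water
milk_powder = water / 10                       # 1:10 powder-to-water ratio, hydration unchanged

print(round(full_fat_milk), round(almond_milk), milk_powder)
```

With 300 g of water this works out to roughly 333 g of full fat milk, 306 g of almond milk, or 30 g of milk powder stirred into the unchanged 300 g of water.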
Principles_Of_Baking
Which_is_the_Best_Slow_Fermentation_Method_Cold_vs_Room_Temperature_Compared.txt
greetings my bakers, hope you're having a great day so far. welcome back to another Principles of Baking episode; today we're comparing long fermentation methods, cold versus room temperature, so let's go to the kitchen and get started.

slow fermentation makes the tastiest bread, but which slow fermentation method is best, cold fermentation or room temperature fermentation? they both have advantages and disadvantages; let's find out which one produces the best result. we'll make two breads, both containing the same ingredients: white bread flour, whole wheat bread flour, salt, yeast, and water. the one on the left will be bulk fermented for 12 hours in the fridge; the one on the right will be bulk fermented for 12 hours at room temperature. that will be 5 degrees Celsius or 41 degrees Fahrenheit for the fridge, and 20 degrees Celsius or 68 degrees Fahrenheit for room temperature. the only difference when it comes to ingredients is the amount of yeast used in each recipe: in baker's percentage terms that'll be one percent for the cold fermented dough and only one tenth of a percent for the room temperature fermented dough. and this is where we can start comparing the practicality of these methods. most scales have a very hard time weighing such small amounts. I'm looking for 0.3 grams of yeast, but the scale still says zero until I get to half a gram, and who knows, is it half or is it not? as soon as I try to remove a bit, it jumps back to zero. so when you're making a small batch of dough, like most of us home bakers do, this can be quite an issue, because you really don't want to use too much in such a recipe; even if it's slightly over, you risk over-fermenting your dough. unlike with cold fermentation, at room temperature yeast multiplies exponentially, so the more time goes by, the faster it multiplies. when you combine that with room temperature inconsistencies throughout the day, you run a risk of something going wrong. fermenting your dough at room temperature for a long time can be more unpredictable and, of course, less practical. as you can see, I have wasted about two minutes trying to weigh this yeast out, and I'm still not sure if it's correct or not.

temperature control is the second crucial thing. right now my kitchen is around 21 degrees Celsius; during the night it will drop down to 20 or so, which is 68 degrees Fahrenheit. these are no-knead recipes that we're making, so I'm using slightly warmer water this time; I'm looking for a final dough temperature of around 25 degrees Celsius or 77 degrees Fahrenheit. as ever, this is not a recipe video, it's a comparison video, so I'm not going to talk you through the steps here too much, but here's a quick rundown. both breads are being made with the no-knead method. I'll mix the two doughs, pop the left one in the fridge for half an hour, and leave the right one at room temperature for half an hour. then I'll take the left one out of the refrigerator and give it a fold, and at the same time I'll give the room temperature fermented dough a fold too. folding replaces kneading in a no-knead recipe, and it equalizes the temperature of the dough. after the fold they will both be left to ferment for 12 hours, one at room temperature, the other one in the fridge. on the next day they will get pre-shaped and then left to rest for 30 minutes; then we'll do the final shaping and the final proof, followed by baking, and then we'll compare them. we'll see if they look different, if they taste and smell different, and whether they have a different texture. of course, the only way you can tell which one is best is by trying this test out for yourself; all I can do is give you my opinion.

okay, back to the methods. if your dough is too warm after mixing and you're fermenting it at room temperature for 12 hours, it will over-ferment. if your dough is too warm when you're fermenting it in the fridge, then there's no problem: refrigerate it, pull it out after 15 minutes, give it a fold, refrigerate it again, and give it another fold. the surface of the dough cools down rapidly in the fridge; when you fold the dough, the cold part gets folded into the middle and the dough gets cooled down evenly and quickly. and unlike room temperature, your fridge temperature will be more consistent. after you experiment a couple of times, you will find out the perfect amount of yeast needed for fermenting the dough in the fridge in a certain amount of time. I usually stick to one percent; I know that this amount of yeast will make the dough rise in around 12 hours, but if I really want to maximize on flavor, I can easily leave the same dough in the fridge for another day or two. at such low temperatures yeast activity grinds down to a halt, whilst organic acids and bacteria still keep fermenting the dough and developing flavor and texture. you definitely cannot do that at room temperature; even if I used the tiniest amount of yeast, this dough would over-ferment in two or three days for sure. as I said, at room temperature yeast grows exponentially, whilst in the fridge everything happens at a much slower pace. you can find a cold fermentation test video on the Principles of Baking playlist. I made four cold fermented doughs: one was fermented for two days, another one for five days, another one for seven days, and the last one for two weeks. even the two-week fermented one made a pretty good bread. also, in that video I go much deeper into the science of fermentation, so you should definitely watch it for context.

but back to the practical applications here. bulk fermentation was done, we pre-shaped the doughs and left them to rest for 30 minutes, and now we'll do the final shaping. this is where cold fermented dough gets another point in my opinion: a cold fermented dough is always easier to handle. it's less sticky, first purely because of the fact that it's cold, and when it's cold it's stiff, but secondly because the cold fermented dough is more acidic, and acidity has a tightening effect on the gluten. of course, that also means that the bread may not gain as much volume, but at the same time that allows us to add more water to the dough; we can increase the hydration while still getting the same volume. and of course, the more hydrated your bread, the longer it will take to stale, and I would say that those are all advantages. okay, it's final proofing time, and here's where you'll notice quite a difference in fermentation rate: the cold fermented dough rose up much faster. for one, that's because it contains much more yeast than the other one, but secondly, it's because fermentation accelerates when you take the dough out of the fridge. as it was sitting in the cold, the yeast was inactive, but the enzymes and acids in the dough still kept producing sugars. so basically, when we pulled the dough out of the fridge, it warmed up, the yeast woke up surrounded by loads of food, and it just started munching through it, fermenting faster and faster. it took an extra 45 minutes for the room temperature fermented loaf to be ready for the oven; by that time the cold fermented one was baked already. that could be another point for cold fermentation: it can be quicker in some cases. obviously I'm quite heavily leaning towards cold fermentation here, because all the advantages are on that method's side, but of course, in the end, it's all about the bread that these methods make. just because a method is easier and more practical doesn't mean it's better; in fact, quite often easier methods produce worse results.

and here we have our two loaves; they're still in the same order, on the left cold fermented, on the right room temperature fermented. the color of the crust is pretty much identical, but clearly the room temperature fermented loaf has gained more volume. that's what I explained earlier: the cold fermented dough is tighter, more acidic, and it does not puff up as much as it bakes. this can easily be adjusted by using more water. when I squeeze and press them, they feel pretty much identical; the one on the right is just slightly softer, and again, that can be fixed with more water in the cold fermented one. I expected the difference in texture and volume, but when it came to smell and taste, I was pretty surprised: they were very much alike. the cold fermented bread was just slightly more acidic; I expected there to be a bigger difference. I thought that the cold fermented bread would have a much stronger taste, but it was so hard to tell the difference that I had to taste them twice. of course, taste, smell, and texture are highly subjective, and I can't tell you which one you will like best. to me they are extremely similar, but I do prefer the cold fermented one, and when you combine that with the fact that it's a lot easier to make, more practical, and more predictable, then there is no good reason not to ferment your dough in the fridge. but I would love to hear your thoughts on this. do you think that there are times when room temperature fermentation makes more sense than cold fermentation? and I'm not talking about fridge space, I want to hear some better reasons. I know that I will be sticking to cold bulk fermentation, and of course cold proofing sometimes. so what do you think of these methods, which one's your favorite and why? let me know down in the comments. if you want to see more videos like this one, click over here; to subscribe to the channel, click right here. that's all I have for you today, thank you so much for watching, I'll see you in the next one.
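The yeast amounts compared above are plain baker's percentages, and they show why the room-temperature dose is hard to weigh. A minimal sketch, assuming a 300 g flour batch (the flour mass is my assumption; the 1% and 0.1% doses and the 0.3 g figure are from the video):

```python
def yeast_grams(flour_g, yeast_pct):
    """Baker's percentage: grams of yeast = flour mass * percent / 100."""
    return flour_g * yeast_pct / 100

flour = 300  # an assumed small home-baking batch, in grams
cold_ferment = yeast_grams(flour, 1.0)  # 1% instant yeast for the fridge
room_ferment = yeast_grams(flour, 0.1)  # 0.1% for 12 h at room temperature

print(cold_ferment, round(room_ferment, 1))
```

The room-temperature dose comes out at 0.3 g, below the 0.5 g resolution of a typical kitchen scale, which is exactly the weighing problem described above.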
Principles_Of_Baking
How_to_Calculate_Individual_Bread_Dough_Ingredients_for_a_Certain_Dough_Mass.txt
how's it going everyone, I hope you're having an awesome day. welcome to the channel, I'm the ChainBaker, and in today's video I'll show you how to calculate individual ingredients for a certain size of bread dough. let's go to the kitchen and check it out.

how to calculate the correct amount of dough, and the correct amount of ingredients for that dough, to fit a certain size baking tin has been quite a common question in the comment section, and today I will show you a formula which will work every single time. and it's not just about fitting a dough in a tin; this formula is universal, you can use it for calculating the individual amounts of ingredients for any amount of dough. but let's start with the tins. most baking tins come in standard sizes, in half a pound or 225 g increments. from left to right I have a 2 lb tin, a 1.5 lb tin, a 1 lb tin, and a half a pound tin; that is 900 g, 680 g, 450 g, and 225 g respectively. some of them have the weight stamped into the tin, but some of them are blank, and I'm sure we all have some tins at home whose sizes are a total mystery to us. if you have such a tin, well, there is one way to find out what size it may be. grab a pair of scales, place your tin on the scales, zero the scales out, and then fill the baking tin with water about 3/4 of the way up. I know this is a 1.5 lb tin because when I bought it that's what it said on the label, and filling it 3/4 of the way up with water confirms it: 680 g is 1.5 lb.

and with that out of the way, we can start with some calculations. the first thing you need to understand is baker's percentage. I made a very long video about this a couple of years ago, so I'm not going to go into too much detail here, but the main thing you need to understand is that every ingredient is calculated as a percentage in relation to the total amount of flour. for these calculations to work accurately, the amount of flour is always considered as 100%; no matter if you're using 250 g or 250 kg, it's always 100%. the water, or the hydration, depends on the recipe: it could be 50% or even less, and it can go up to 100% or even more. yeast can be as little as 0.1% for a dough that's fermented for a very long time, but it can go up to about 1.4%; this is instant yeast that I'm talking about, active yeast and fresh yeast are used at different percentages, and I have separate videos on this too. when it comes to salt, it can be as little as 0%, well, no salt at all, and it goes up to about 2.5%; any more than that would make the bread too salty. so the percentages of every other ingredient can vary massively, but the flour will always be 100%. and of course, other bread doughs contain many other ingredients, like eggs, butter, and sugar, and we'll look at that a little bit later in the video.

we'll start these calculations off with a very basic recipe: a bread dough made with 60% hydration, 1.2% yeast, and 2% salt. this is a bog standard white loaf recipe; there is absolutely nothing special about it, it's as simple as it gets, and it will make for an easy, understandable first example. so let's begin. let's say I want to make a sandwich loaf to fit my 2 lb or 900 g loaf tin. because the dough is low hydration and doesn't contain any enrichments, it will not puff up that much, so to fill the tin nicely we will make the suggested 900 g of dough. so we have decided how much dough we're going to make, and we know that the flour is 100%, the water is 60%, the yeast is 1.2%, and the salt is 2%. the next thing we need to do is add up these percentages: that's 100 + 60 + 1.2 + 2 = 163.2. then we divide the dough mass by that number: 900 / 163.2 = 5.51. now, what's this number and why do we need it? this number tells us that 1% of this dough weighs 5.51 g, and you might get where this is going right now. we know what percentage of each ingredient there is, and we know that 1% weighs 5.51 g, so all we need to do is start with the easiest one. the flour is 100%, so 100 × 5.51 is 551 g of flour; that gives us the amount of flour in this dough. and we don't need to calculate each percentage with that 1% value, because as I mentioned earlier, every ingredient is calculated in relation to the total amount of flour. so for the water, 60% of 551 is about 330 g (551 × 0.6); the yeast, 551 × 0.012 = 6.6 g; and of course the salt is 2% of 551, which is 11 g. you can of course multiply the 1% value by the individual percentage number of each ingredient; I've been calculating recipes for so long that I just find it easier to find out the amount of flour and then calculate every other ingredient in relation to it. after adding up all the ingredients, the flour, the water, the yeast, and the salt, we ended up with about 898.6 g. that's close enough to 900 for me, and of course you can round the numbers up or down a little bit.

this was a very simple example, so let's move on to something a little bit more complicated. this is a recipe for a dough which is made with eggs, butter, and sugar. it'll be light and rise high, so we need to use less dough in this tin, otherwise it's going to be too big; we'll go with 800 g. for this particular recipe the flour is of course 100%, water 47%, yeast 1.5%, salt 2%, sugar 4%, butter 9%, and 10% egg. if you add up all the percentages, the total percentage number is 173.5, and just as previously, we take the total amount of dough, which is 800 g in this case, and divide it by the total number of percent: 800 / 173.5 is 4.61, so 1% of this dough weighs 4.61 g. the first ingredient we need to calculate, of course, is flour: 100 × 4.61 is 461 g. that's the only ingredient you need to know; now you can calculate all the other ingredients as a percentage in relation to it. so as you can see, no matter how many ingredients there are and what percentage values they have, the calculations are still the same. adding these up, we end up with just about 800 g. you can find individual videos on my channel covering all the ingredients in this recipe.

but of course it's not always about fitting a loaf in a tin; sometimes we may need to make a certain number of breads which weigh a certain amount of grams, so we need to add another calculation on top, which is the number of breads times the amount that they weigh. let's say 10 burger buns at 125 g each equals 1250 g of dough. of course, the percentages are totally up to you, it's your recipe: flour, water, yeast, salt, sugar, butter, and so on; the calculations are exactly the same as I showed earlier. I would highly suggest checking out the baker's percentage video on my channel; it also covers dough hydration.

now, to help us visualize what we've done here, I have made the two doughs from the previous calculations. you might not be able to make this out, but this dough weighs 891 g; that's because we lose a few grams during mixing and kneading, and that's why you should always weigh your dough before dividing it. never assume that you can simply add up the amounts of ingredients and have that amount of dough, because it never works like that; you will always lose a few grams here or there. so if you want your rolls or breads to be the same size at the end, you should always weigh your dough before dividing it. by the way, in case you didn't know, every one of my videos is accompanied by a blog post or recipe on my website. you can find all the details there, even things that I've missed in the video, and most importantly you'll find the formulas which I showed you in this video; you'll find a link in the video description.

okay, so our lean dough is ready for the oven. it's filled the tin up nicely, so in this case following the suggested weight was the right thing to do. but we're not finished; we need to make another dough and compare it to the first one. this is the 800 g dough which contains the eggs, sugar, and butter. as you may be able to see, we have lost about 7 g of the dough, and I wish I could lose weight this easily. the eggs, the sugar, and the butter in this dough will make it puff up a lot more, but there's another ingredient that we're going to add which will bulk up the loaf just a little bit: it's a glaze. we are making a Dutch crunch loaf, and the glaze itself is close to 100 grams. it's not going to add too much volume to this dough, it's just worth considering; the main thing here are the enrichments. do check out my egg comparison video after you watch this one; eggs make a huge difference in the final loaf. oh, and of course I'm not going to leave you hanging here: the recipe for this Dutch crunch bread will be posted in a couple of weeks, and I will make an in-depth guide for it. and just like that, there it is. the crust is unbelievably crunchy and the interior is so, so soft; it's a great contrast, and you should definitely try this bread out. but let's let it cool down and compare these two side by side. even though there's a 100 g difference between the weight of the two doughs, they're more or less the same volume, and it's not really the glaze; it's just the fact that different ingredients and different recipes produce different results. that's what you've got to keep in mind when you're trying to fit a loaf in a tin. if it's a low hydration lean dough, you should aim for more weight; if it's a light and rich dough, well then use less; but if it's a super heavy rye bread, for instance, perhaps you should use more dough than suggested on the tin. I hope you found this useful and valuable. if you want to see more videos like this one, click over here; to subscribe to the channel, click right here. that's all I have for you today, thank you so much for watching, and I'll see you in the next one.
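The formula walked through in this video can be sketched as a small Python helper. The function name is mine, not the video's, but the arithmetic is exactly the one described: sum the percentages with flour implied at 100%, divide the target dough mass by that sum to get the weight of 1%, then scale every ingredient off the flour.

```python
def ingredients_for_dough(total_dough_g, percentages):
    """Split a target dough mass into ingredient masses.

    `percentages` maps ingredient name -> baker's percentage;
    flour is implied at 100% and is not passed in.
    """
    total_pct = 100 + sum(percentages.values())
    one_pct = total_dough_g / total_pct        # what 1% of the dough weighs
    amounts = {"flour": one_pct * 100}
    for name, pct in percentages.items():
        amounts[name] = amounts["flour"] * pct / 100
    return amounts

# the 900 g lean loaf from the video
lean = ingredients_for_dough(900, {"water": 60, "yeast": 1.2, "salt": 2})

# the 800 g enriched dough from the video
rich = ingredients_for_dough(800, {"water": 47, "yeast": 1.5, "salt": 2,
                                   "sugar": 4, "butter": 9, "egg": 10})

print({k: round(v, 1) for k, v in lean.items()})
```

For the lean loaf this reproduces the figures above: 551 g of flour, about 331 g of water, 6.6 g of yeast, and 11 g of salt; for the enriched dough it gives the 461 g of flour. For a batch of rolls, the target mass is just count × weight per piece (e.g. 10 × 125 g = 1250 g) fed into the same function.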
Principles_Of_Baking
Bakers_Percentage_Dough_Hydration_Explained.txt
so today we'll talk about bakers percentage now what is bakers percentage there's a way of working out the amounts of ingredients in each recipe now normally beginners or home bakers will not be using this because most of the time we just use recipes they're already written down but once you understand Baker's percentage you will be able to create your own recipes so normal bread recipe would have flour water yeast and salt as a standard every ingredients always calculated in relation to the flour so no matter how much flour you use it will always be 100% as a general rule dry yeast would be 1.4 percent or fresh yeast would be 4 percent salt would be normally 1.8 to 2% now some recipes may use more but 2% is normally to go to your mount a round number also makes it easier to work out the amount you need the main variable in a recipe will be the water percentage and this may change from as little as 50 percent to more than a hundred ending what though you are making for instance bagels could be 50 percent or high hydration focaccia could go up to more than 100 so I'll quickly show you how to work out the percentage of a set recipe you may have read it in a book or you may have seen it in one of my videos the main thing to work out here is the water percentage because that will tell you how wet dough will be and how easy or difficult it will be to work with so regular loaf of bread might have 500 grams of flour and as I said before that will be the 100% no matter how much flour you are using it will always be 100% now water might be 300 grams yeast 7 grams and then 10 grams of salt this is your regular quite low you get a calculator so I can't do these things in my head so to work out the amount of water in relation to the flour all you need to do divide the amount of water by the amount of flour you get no points X which is 60% same goes for the yeast you divide the amount of yeast with the amount of flour 7 divided by 500 forgive you North Point North 1/4 which is 
1.4 percent and of course the same goes for salt now let's say you decide to make a loaf of bread you'll normally be 500 grams of flour which will of course be 100 percent of the total amount but you don't know yet how much in weight water yeast and salt you're going to use but you have decided for example that the word content will be 68 percent and as we learn before dry yeast will be normally 1.4 percent salt we could do one point eight these are things we know so once again grab your calculator and to work out to be Mountain grams from a percentage all you need to do is multiply the amount of total flour with the percentage so 500 grams and no point six eight equals 340 grams I found grams x naught point naught 1/4 equals seven grams and of course the exact same applies for the salt now that's quite easy right but this formula here will really help you out and creating a recipe and this will work for any amount let's say for example you are making 20 burger buns when you want them to weigh 100 grams each and let's say this time you decided that you want the hydration to be 63% so once we decide that the hydration percentage we know we know the percentage all the elements the flour like I said will always be 100% no matter how much you are using 63 percent of water yeast will be 1.4 percent and salt will be 2 percent so each percentage is a part of the total dough so to work out each individual element need to add together all the parts so 100 percent plus 63 plus one point four plus two it'll give us one hundred and sixty six point four now we know how many parts we have now we work out the total doll weight she's quite simple twenty times 120 buns at 100 grams each give us two kilos or 2,000 grams so we know how many parts we also know the total dough weight now we need to work out the weight of a single part or a single cassette to do that divide the total dough wait by the amount of parts or two thousand divided by 1/6 6.4 which in this case will be twelve 
point zero one nine and so on so from here we can work out the amount of flour so we know that the flour is always 100% so you multiply one part by 100 which will give you 1201 grams you can round it up or down a little bit one gram won't make much of a difference so as soon as we have worked out the amount of flour in the recipe we can calculate all the other ingredients in relation to that amount so we know that the water will be sixty-three percent so 1201 times 0.63 which in this case will be roughly 757 grams and the same goes for the yeast and salt as before just multiply the amount of flour by the percentage and just to see if we are correct we can add up all the ingredients which in this case results in 1999 grams we lost that one gram because we rounded down some decimals earlier but that's not a big deal and that's how you write the burger bun recipe and this formula will work for any amount and any bread so let's have a look at hydration hydration just means the amount of water that you are using in relation to the flour normally I'd categorize this as low hydration medium or normal hydration and high hydration dough so a low hydration dough would be about fifty to sixty percent something quite easy to work with then normal hydration sixty to sixty eight it's a little bit more sticky and high hydration dough is 68 to 100 or even more than 100% to give you a few examples a low hydration dough would be like bagels pretzels burger buns something that has quite a tight crumb at normal hydration you have your sandwich loaf some flat breads also baguettes and when it comes to high hydration dough that would be ciabatta or rye bread because rye flour absorbs a lot of water and then again these rules are not set in stone you can make bagels with 65% hydration you can make pizza at 63 you can also make flat breads which are 68 69 70 it all depends on what end result you want the more water the more large bubbles you will have inside your dough the
less water the tighter the inside will be I'll show you some real life practical examples of dough hydration I will make three loaves of bread with the same amount of flour yeast and salt the only difference will be the water so let's start with the first one sixty percent hydration as you can see I've mixed it all up it's quite dry and flaky kneading 60 percent hydration dough is quite an easy job because it's not sticky so I just use a regular kneading method press down and forwards with my right hand using the fingers of my left hand to pull the piece of dough under the heel of my right hand and repeat this dough takes no more than five to seven minutes to work as you can see it's not very smooth because there's not a lot of water so pop it into the bowl leave it to proof I left it for 45 minutes and I'll do the same for all the doughs after the 45 minutes I'll give them a fold now this step can be skipped for 60% hydration dough normally folds are used to create more layers in the gluten but a dry dough like this doesn't really need it I'll do the folds just to keep things equal because I will fold the other doughs and as you can see we are not using any flour we are folding on the table and the dough is not sticking at 60% hydration now back into the bowl it goes another 45 minutes perfect then comes the pre shaping again we don't have to use any flour just take your dough out and when I shape a 60% hydration dough I don't fold it too much it is not as stretchy so if you fold it too tight you might rip the surface so after the pre shape cover it up leave it for 30 minutes to relax now it's final shaping time I'm gonna use this oblong basket a very minimal dusting of flour because this dough is not sticky at all as you can see it's quite easy to handle it's not sticking so I'll just flatten it out a little bit fold in the sides crossing over roll it up you don't have to roll this too tight the dough will keep its shape now it can go into the basket seam side pointing up I'll leave it to proof for 30
minutes as you can see even though the dough's been relaxing and puffing up it has kept its shape standing up a higher hydration dough will tend to spread out a little bit more I'll score it with my razor blade just one cut top to bottom and get it in the oven and the low hydration also makes it really easy to score bake for 15 minutes with the lid on then take the lid off and keep baking for 15 more minutes it's quite a small loaf and that's it that's a 60% hydration dough it's not too bad but let's try 65% hydration so just as before flour water yeast and salt just 5% more water this time I'll give it all a good mix then tip it out on your table and you will notice this dough is a little bit more sticky but we can still use the same method pressing it against the table and rolling it and as the hydration percentage increases the amount of time that you will spend kneading the dough will also increase so this may take 7 to 10 minutes once you finish kneading it you'll notice that the dough is a little bit more sticky a little bit more stretchy now just as with the previous dough collect it up into a bowl cover it up leave it to proof for 45 minutes then give it one fold to fold this dough I would use a light dusting of flour just to prevent it from sticking to the table but look how much softer and stretchier it is and as you increase the hydration percentage you can also change up the folding method generally high hydration doughs benefit from more folds and more layers so I'll just roll this up into a tight roll and then back into the bowl it goes for another 45 minutes of fermentation now after that's done we'll do the pre shape again using a very small dusting of flour now stretch it out fold it up cover it and we'll leave it to rest before the final shape and again you can feel the dough is a lot lighter a lot airier than the 60% hydration this time you may want to dust your basket with just a little bit of flour also dust your dough before shaping now you can really feel the gas bubbles
inside just stretch it out a bit fold it up roll it up like we did before and now we can roll it a little bit more tightly because the dough is a bit more stretchy it can take more tension I'll pop it in the basket for its final proof and we're ready for baking and this dough has increased in volume by a lot more than the previous one not only because we added five percent more water but also because there's more air inside it so that adds to the volume now scoring it the same way as before we can feel how soft it is I'll cover it up pop it in the oven halfway through we take the lid off as you can see this has risen a lot more then get it back in the oven to finish baking and that's your 65% hydration bread now let's look at 70% at this hydration you will really start feeling the difference in the dough and it will take a bit more skill to work this and master it it will be a lot more wet and sticky and if working by hand you will use a different kneading method as you can see this is a sticky mess at the moment so to knead a dough like this we use a stretch and fold method pick the dough up by one side stretch it against the table and fold it over and this may take anywhere from 10 to 15 minutes and it might even be sticky when you finish but that's okay get it in the bowl cover it up leave it to proof the most convenient way of folding this dough is doing it in the bowl this is called the coil fold using wet hands you pick the dough up and you roll it underneath itself then turn the bowl and repeat so what you might want to do for high hydration dough is give it an extra fold or two so I do shorter proofing intervals and more folds in this case one extra fold and when it comes to pre shaping I dust a little bit more with flour because the dough is a bit sticky then I use a different pre-shaping method as well to create more layers of gluten more tension in the dough I will use a stitching method fold the bottom up cross over the sides then pull the top over to the
bottom stitch it up roll it tight then cover it and leave it to rest when it comes to sticky dough you need to work quickly and with a very light touch now comes shaping time dust your basket with flour again you don't want it sticking also dust your dough a little bit more than the 60 and 65 percent you will really feel it being wobbly and light and full of air you should handle it gently but quickly now the shaping method changes for higher hydration dough we use the stitching method make sure your hands are floured so they don't stick and what you need to do is create more layers and more tension in the dough so fold the bottom up cross over the sides fold the top over to the bottom and then stitch it together this takes a little bit more skill but once you've done it a few times you'll get the knack of it so after the stitching roll it tight and then get it in the basket for the final proof and once it's in the basket you might want to stitch up the bottom just to help it keep its shape I will leave it to proof just as before and as you can see this bread is huge compared to the other ones it's nice and wobbly and really light so tip it out carefully into your pan you can see how soft it is when scoring you do have to be gentle though same as with the other breads I'll cover it up bake it halfway through take the lid off get it back in the oven to finish baking and that's the 70% hydration dough so let's check these all out side-by-side so here's the 60% 65 and 70 definitely a noticeable difference in volume and when handling them you will feel that and even though the 60% hydration bread has the least amount of ingredients by weight it will feel heavier in the hands than these larger high hydration doughs the more water there is the more it will expand and the more air there will be inside it let's cut them up and see what the insides look like now the 60% hydration dough has the denser crumb which is not a bad thing it's all up to you what you want to make with your bread now the 65% hydration will
be a little bit lighter there are more bubbles that are more spread out it's definitely softer and the 70% hydration just bumps it up a notch you'll get even bigger bubbles and a softer crumb and if you made it to the end of this extremely long video I hope you learned something if there are any questions or suggestions write them down in the comments if you're new to this channel subscribe for more videos and as always thank you for watching
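the burger bun arithmetic from this video can be sketched in a few lines of python the function name and the rounding to whole grams are just illustrative choices not anything from the video

```python
def bakers_percentage_recipe(total_dough_g, hydration, yeast_pct, salt_pct):
    """Work out ingredient weights in grams from baker's percentages.

    Flour is always 100%, so the total number of 'parts' is 100 plus
    the other percentages; the flour weight is 100 of those parts.
    """
    parts = 100 + hydration + yeast_pct + salt_pct  # e.g. 166.4
    flour = total_dough_g / parts * 100             # one part times 100
    return {
        "flour": round(flour),
        "water": round(flour * hydration / 100),
        "yeast": round(flour * yeast_pct / 100),
        "salt":  round(flour * salt_pct / 100),
    }

# 20 burger buns at 100 g each = 2000 g of dough at 63% hydration
recipe = bakers_percentage_recipe(20 * 100, 63, 1.4, 2)
print(recipe)  # {'flour': 1202, 'water': 757, 'yeast': 17, 'salt': 24}
```

the weights add up to 2000 g give or take a gram of rounding just like the 1999 grams worked out by hand in the video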
Principles_Of_Baking
How_to_Control_Bread_Dough_Temperature_Nail_It_Every_Time_Bread_Tips.txt
let's talk about temperature after all it is one of the most important ingredients in bread baking so a regular loaf of bread contains flour water yeast and salt because there is only such a small amount of yeast and salt they don't affect the temperature the main things that are going to affect the final dough temperature are the flour the water the air temperature in your kitchen and the temperature that you add by mixing whether using your hands or a mixer friction creates heat so your dough will warm up during mixing so with all that in mind let's do some calculations generally you want your dough to be around 24 to 26 degrees celsius if your kitchen is warm you aim for the lower end if it's cool then you want it higher there are three main variables that come into play two of those we can measure the other one we need to work out and it's quite a simple calculation really so the first variable that we can measure is the air temperature in our kitchen and this is one of the things that we normally can't control it just is what it is unless you have an air conditioner in your kitchen now the second variable is the temperature of the flour and because it's in your kitchen most of the time it will be more or less the same temperature of course you could stick it in the fridge or set it on top of your radiator but that's not very practical is it but the final variable that we can really control easily is the temperature of the water so taking that in mind let's have a look at what temperatures we have so the air temperature in my kitchen today is 25 degrees celsius it's quite warm it's the middle of the summer in the uk right now so let's look at the flour temperature by the looks of it the flour is a little bit cooler 24 and a half degrees one last thing to consider mixing temperature and this will be different for everyone i know by experience that using my hands kneading a dough for around 6 minutes will add 5 degrees celsius to that dough and the only way you can work
out this number is by taking a dough and mixing it either with a mixer or with your hands and then remember that number because that is important in working out the water temperature so using all this information let's say for example i want my dough to be 25 degrees celsius after mixing so that's 25 degrees c i know that during mixing i will add 5 degrees celsius so we need to say 25 minus 5 equals 20. that should be the temperature of the dough right when you mix the flour and water together that first second so we take that number 20 and multiply it by the number of variables which is three air flour and water that equals 60 and then we take that total number 60 and the rest is very simple we need to subtract the air temperature which is 25 degrees and also the temperature of the flour which is 24.5 and that will give us the exact water temperature that we need so 60 minus 49.5 equals 10.5 degrees pretty simple right using this calculation you will always be in 100% control of your dough temperature which is very important right so 10.5 degrees is quite a specific temperature right and you may think now how do i get to this temperature if i stick my water in the fridge i have to catch it once it goes down to that temperature or try to run my taps for half an hour and wait for the water to cool down so what i like to do is take a jug of water stick it in the freezer that makes it quicker and just let it cool down until it's like whatever five six degrees celsius then all you need to do is stick a probe in that water and add back some room temperature water until you get to the desired temperature and as you can see the temperature has gone up to 10 and a half degrees simple as that and now all you need to do is grab your scales and weigh out the amount that you need one thing to note is that if your bowl or if your table is really warm then the water will tend to warm up more quickly and the dough subsequently okay let's see how this works out i've got my water added some
yeast some salt add the flour give it a mix and after mixing this should be more or less 20 degrees celsius which is the final temperature minus the mixing temperature so 25 minus 5 and it is 20 just as we expected so now start mixing the dough or kneading it in other words and i'll knead this for around six minutes in total because that's how long this dough takes so as i said before it's quite simple three variables air flour water final temperature minus mixing temperature times three minus air and flour equals water temperature simple and as you can see we got it almost perfect 25.1 i can live with that and that's how you control dough temperature now put this into practice and let me know the results down in the comments check out my other videos in the principles of baking playlist and my channel is also full of recipes i post two videos a week about bread and all things baking so thank you for watching i hope to see you in the next one
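the water temperature formula from this video fits in a tiny helper the friction factor here 5 degrees for about six minutes of hand kneading is the number you have to measure for yourself in your own kitchen

```python
def water_temperature(desired_dough_temp, friction, air_temp, flour_temp):
    """Return the water temperature needed to hit the desired dough
    temperature, given the three variables: air, flour and water."""
    target_after_mixing = desired_dough_temp - friction  # e.g. 25 - 5 = 20
    # three variables (air, flour, water), subtract the two we measured
    return target_after_mixing * 3 - air_temp - flour_temp

# the example from the video: 25°C dough, 5°C of mixing friction,
# 25°C kitchen air and 24.5°C flour
print(water_temperature(25, 5, 25, 24.5))  # 10.5
```

if your kitchen runs cooler say 21 degrees air and flour the same function tells you to use noticeably warmer water instead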
Principles_Of_Baking
Over_Proofing_Under_Proofing_Explained_How_to_Tell_the_Difference.txt
how's it going my bakers i hope you're having a nice day welcome back to the channel in today's video we're going to be talking about overproofing and underproofing so let's go to the kitchen and have a closer look there are three main factors that affect fermentation rate and depending on one or more of them your dough could be under-fermented properly fermented or over-fermented and they're all equally as important time is one of them if you don't leave your dough to ferment for long enough it will not rise if you ferment it for too long it will overproof ambient temperature even if the temperature of your dough is correct if your kitchen is too warm or too cold it will greatly affect the fermentation rate and of course the temperature of the dough itself if it starts off too cold it will take a lot longer and if it's too warm it might over ferment to get a controlled and predictable fermentation there must be balance between all of these and the only way to learn about this balance is by practicing over and over again in time you'll start seeing patterns and you will master this in today's video we'll make three loaves of bread normally in my comparison videos i make several separate breads but today i'm just making one dough i'll divide it into three after bulk fermentation so that the only difference between all of them will be the final proof one will be under proofed one will be properly proofed and the final one will be overproofed and once they're all baked we'll cut them open and compare them quickly about the recipe here it is a white flour loaf it's made with a poolish and it has a hydration of about 63% and if you want all the written details you'll find them in the link below the video as ever this is not a recipe video this is a comparison video so i'm not going to talk you through the steps here what i will talk about is what to look for when fermenting your dough and how to tell whether it's risen enough we always hear this basic rule of fermenting it until
it's doubled and for the most part i would agree with that during bulk fermentation that's the initial fermentation stage we'll talk about final proofing in a minute the bulk fermentation stage is a lot more forgiving than the final proof even if your dough is a little bit under or a little bit over that can be fixed during the final proof so i would always aim to double the dough and that is why i always show you the time lapse of it rising so you can get a good idea of how much volume it should gain okay i'm just doing a quick temperature check here for reference the dough is about 23 degrees celsius my kitchen is around 25 i've made the dough a little bit cooler because it does contain that preferment so it will ferment more rapidly i'm going to let it ferment now halfway through bulk fermentation i'll give it a fold now you can find detailed videos about folding bulk fermentation and temperature control in the principles of baking playlist and in the steps of baking playlist in this video we're just looking at how big the dough should be and clearly it's gone through bulk fermentation and it's doubled in volume now i'm going to divide it into three pre-shape it leave it to rest then do the final shaping and the final proof now there are some cases when you don't want to wait for your dough to double during bulk fermentation this is especially true for low hydration dough which just doesn't puff up as much or a rye bread for instance which is dense and doesn't rise much at all high percentage whole wheat breads with normal hydration levels also have this effect but speaking of whole wheat there is another thing that affects fermentation rate which is the ingredients using whole wheat flour in bread dough generally will make it ferment more rapidly using ingredients like cinnamon mace or nutmeg may slow it down using a tiny amount of sugar may accelerate fermentation using too much sugar will slow it down you just have to keep an eye on your dough a very important
thing is not to rely on the times given in a recipe each of our kitchens is different we may live in different climates we may have the heating on or we may have our windows open in winter for some reason and that is when you might need to adjust the fermentation time or your dough temperature to begin with it's funny sometimes when i get comments saying that my recipes don't work the recipe is just a list of ingredients and basic instructions it is you who makes it i mean it did work for me in the video right okay so i'm doing the final shaping now and we'll be moving on to the final proof so in short when it comes to bulk fermentation you just let it rise until it doubles most of the time our final proofing is a little bit different from left to right i'm gonna ferment the first dough for 20 minutes the second dough for one hour and the third dough for an hour and a half as you can see this has barely risen up we'll get it in the oven and we'll leave the other ones to rise the second one has filled the tin and slightly risen over it but it has not doubled in volume and that's extremely important you always want to leave some room to grow if you let it rise for longer it will not puff up in the oven any further the yeast may run out of food and in the worst case the dough will break down and we'll see this in just a few seconds this dough is way over risen it has become quite sticky and the gluten structure has begun to break down there are even little holes forming on the surface letting the fermentation gas escape from the interior the dough has become very fragile it is basically turning back into the poolish that we used earlier to be fair this would make a nice focaccia but that is not the point of this video see it's very hard to tell you how long the dough should be fermented for and what exactly it should look like you must learn that for yourself because if you are an inexperienced baker and if i gave you each of these loaves separately you may think that they are
perfectly fine maybe it's a different kind of recipe and it's supposed to look like this right it is only when you bake the same loaf in your kitchen several times that you will learn to see when it's right and when it's not right and that is why instead of telling you all kinds of hacks in this video i will show you exactly what to look for especially in the crumb to distinguish between dough that's overproofed underproofed and proofed to the correct degree there are some differences on the outside of course the underproofed dough kinda looks like it has risen better than the other ones it has also torn open and that's most likely because there was too much tension the gluten had not relaxed enough before baking whilst the overproofed one had a flat top and you can see that it had started spilling over the sides of the tin it did not rise at all whilst it was baking whilst the one in the middle has puffed up nicely and has a nice shape but here is the most important part the telltale signs the dough that is underproofed will quite often have large air pockets on one side and dense dough on the other side people making high hydration sourdough bread often mistake this for success it is called fool's crumb now on the other side the overproofed bread has fairly large bubbles in the crumb with less dough in between if we look at the correctly proofed bread now it has a fine crumb with the bubbles spaced out quite evenly with a little bit denser dough in between it looks nice and smooth and it's a lot softer now you see why it's called fool's crumb the bread is puffed up and torn open there are big bubbles inside it it almost feels like it should be like that but when you see those large bubbles surrounded by dense dough that is a clear indicator of underproofed dough a fairly even crumb with random air pockets is what you're looking for whilst a crumb that is very bubbly with little dough in between may just be over fermented of course we can see it from the profile too the
overproofed one is literally hanging over the sides of the tin saying all that i would always take an overproofed dough over an underproofed one here's another quick example it is my 100% hydration whole wheat bread the first batch i made overproofed and that's on the right it over fermented it broke down and it deflated through the top it could not puff up anymore in the oven the yeast was not active enough and there was no gluten structure so there was no more gas being produced and all the gas that was built up inside just escaped and we can see the same thing with the crumb again it has a lot of bubbles evenly spaced out with a little bit of dough surrounding them whilst the correct one is a lot more random it's not about baking everything correctly the first time you have to make mistakes to learn from them and now you know exactly what to look for and how to tell your mistakes apart so go and make some mistakes and bake some better bread so what do you think of this topic what is your experience with overproofing and underproofing let me know down in the comments if you want to see more videos like this one click over here subscribe to the channel click right here that's all i have for you today thank you so much for watching i'll see you in the next one
Principles_Of_Baking
Why_Do_You_Have_to_Punch_Down_Bread_Dough_Degassing_Explained.txt
how's it going my bakers i hope you're having a great day welcome to the channel i'm the chain baker and in today's video i'll show you the importance of de-gassing your bread dough as it's fermenting let's go to the kitchen and check it out it is quite common to find instructions for de-gassing punching down or knocking back your bread dough as it's fermenting i would say most recipes include this step as the dough ferments it fills up with carbon dioxide expelled by the yeast which is feeding on simple sugars which are broken down by enzymes from the starch in the flour the gas accumulates in pockets inside the dough held together by the gluten structure as the dough ferments and fills up with gas it expands and that's one of the main purposes of fermentation we want the dough to rise up and become nice and light bread so why would we want to punch the gas out it seems that it would defeat the purpose of fermentation normally punching down is performed halfway through bulk fermentation and the punching down or degassing can be accompanied by folding in fact both are done in the same step and these two actions go hand in hand not only during folding but also when dividing pre-shaping and final shaping the dough and no matter how gently you try to handle your dough you will always de-gas it if only a little bit in today's comparison video we'll make 4 breads they will be made from the same dough but they will all be treated differently it is a 65% hydration dough containing only flour water yeast and salt we'll keep it simple so the results are clearer as i mentioned earlier degassing or knocking back goes hand in hand with folding pre-shaping and final shaping so the first one of the four breads will be left alone from the beginning of fermentation until it's baked the second dough will only get a final shaping the third dough will be pre-shaped rested and then shaped again before final proofing and lastly the fourth dough will get a fold during bulk fermentation
then a pre-shaping resting final shaping and then it will be baked so all the doughs will be handled progressively the first one won't be touched at all and the final one will be folded shaped and degassed three times and we won't be fermenting them for the same amount of time we will be fermenting them to the same volume this test is not to show what is better or worse it is there to show you the differences it is up to the requirements of the recipe how many times it's going to get folded or degassed and it is also up to the baker when it comes to the texture and shape that the bread should have understanding these principles will give you the power to create any recipe you like in your style okay we've got our four dough balls the first one is going straight into the tin and this will be the last time we touch it it goes in the tin ferments and then it will be baked and we'll fold them and degas them progressively from left to right starting with this one now i keep going on about folding and shaping but this is a video about degassing right like i mentioned earlier no matter what you do you will degas the dough folding pre-shaping and final shaping are just steps that we take during the process of bread making in fact you would almost never make a bread like the one on the left and there's a good reason for it that's why first we're going to talk about folding but i'll keep it brief since i've made a full video on this topic and you can find it in the steps of baking playlist folding achieves a couple of things first off it degasses the dough secondly it builds tension into the dough if the dough is weak and loose by folding it we can tighten it this makes it rise higher vertically instead of spreading out sideways a stronger dough will also be able to hold more gas and take more fermentation before deflating and falling flat and what you're seeing here on the screen is the first fold halfway through bulk fermentation the dough becomes tight but it's also deflated so it's
going to have to rise up again the folding step is similar to the pre-shaping final shaping step in that the dough gets more tension built into it now some breads benefit from this some don't if you have a very loose dough it's good to give it even more folds during bulk fermentation than one and it's good to shape it more tightly because the pre-shaping final shaping steps are not just there to make the bread look nice they also build more tension making the dough tighter and a dough like that will spring up better in the oven now we've gone through bulk fermentation now it's pre-shaping time and while i'm pre-shaping i can tell you another thing that folding achieves it can equalize the temperature of the dough making it ferment more evenly let's say your kitchen is cooler than your dough as it sits and ferments the outside of the dough will start cooling down and adjusting to the temperature of the kitchen while the middle may still stay warm so as we fold the dough we fold those outside layers into the middle distributing that temperature evenly throughout it and that is folding in short there are different folding methods for different breads it all depends on what the dough is like but let's get back to de-gassing why would we want to knock the fermentation gases out of the dough why don't we leave it alone and let it rise and it would never punch you so why are you punching it punching is actually quite aggressive and you should never punch your dough the best thing to do is deflate it by pressing it gently as the dough ferments the gas pockets inside it grow larger and larger and the membrane of dough between those pockets can tear and the pockets of gas can fuse into larger pockets of gas if this process keeps going undisturbed the crumb of the bread can end up with large bubbles surrounded by denser areas of dough this is of course not always a bad thing some high hydration breads are specifically made to have that texture as we punch down or deflate the
dough the gas pockets break down and split up resulting in a more tightly packed and even crumb structure what the texture of the crumb should be is up to the baker you can manipulate the dough to have a certain texture if you handle it gently during each step it will end up with a more open crumb with large air pockets if you deflate it a lot it'll be tighter now we are doing the final shaping here and the doughs are ready for the final proof i tried to shape them as similarly as i could so that the main difference between these breads would be the steps that we took or skipped now degassing folding and shaping is not just about what the crumb will be like it's also a way of controlling fermentation the loaf on the left is almost ready to be baked while the other three still need about an hour until they go in the oven as we know fermentation builds flavor and it helps develop the texture of the crumb and the crust by degassing the dough we are forcing it to rise back up again it basically has to start over so a bulk fermentation that can take three hours can be stretched out to take four even five hours of course there is a limit you may have noticed that after punching your dough down it starts fermenting more rapidly this is because the old built up carbon dioxide actually slows down fermentation so knocking the old gas out makes it ferment more rapidly okay it's time to bake these three and now we can compare the results and just on the outside there are some big differences for one the crust on this loaf separated that could be all those gas bubbles fusing together and finding the path of least resistance and we see a good progression of oven spring the more the dough was shaped de-gassed and handled the higher it rose that may seem quite counter-intuitive but as i mentioned earlier tension is what makes the oven spring happen and the dough on the right was the tightest one because we folded it pre-shaped it and final shaped it now let's have a look at the crumb
and of course as i explained earlier if the dough is left undisturbed it will result in a crumb with large holes surrounded by denser dough that's exactly what we're seeing here on the bread that wasn't de-gassed at all and like i said de-gassing pops those air pockets so the loaf that got the final shaping has a more even crumb and it rose higher of course resulting in a softer texture and as we keep going the breads become progressively softer and lighter and larger and as i said earlier we're not trying to prove what is better or worse even though the first bread is clearly the worst and you should not be making bread like that if you want to use this method it should be fermented for much longer and perhaps go in a higher tin at the end of the day it's all up to you make your bread the way you like it make it fit your style and taste experiment try different methods don't just follow recipes ask questions and if you ever get stuck check out more videos in the principles of baking playlist you might find some answers there so what do you think of degassing do you degas your bread dough let me know down in the comments and don't forget to read the blog post linked in the video description i always write down things that i forgot to say in the video to see more videos like this one click right here to subscribe to the channel click over here thank you so much for watching and i'll see you in the next one
Principles_Of_Baking
This_is_How_Eggs_Affect_Bread_Dough_How_to_Use_Eggs_in_Breadmaking.txt
how's it going everyone i hope you're having an awesome day welcome to the channel i'm the chain baker and in today's video we'll look at the effects of egg in bread making this has been one of your most common questions so let's go to the kitchen and have a closer look have you ever asked yourself why a bread recipe is calling for egg why even use it and what effect does it have on the final product because most commonly as far as baked goods are concerned eggs are used for cakes but more often than not some of our favorite and rich breads contain egg and as it turns out eggs can benefit our dough in a number of ways and today we'll compare four breads side by side and we'll find out what effect each of the parts of the egg has on our dough be it whole egg egg white or egg yolks let's start with some numbers generally whole egg is around 75 percent water nine percent fat the yolk alone is about 50 percent water 30 percent fat and egg whites are around 90 percent water and they have practically no fat these numbers will come in handy when you're trying to calculate your recipe using baker's percentage knowing this will help you adjust the amount of water and fat that you add to your dough and with the stats out of the way let's do some practical examples we'll make 4 breads they will all contain the same amount of flour yeast salt and water and the first one contains no egg the second one will again have flour yeast salt water and a whole egg because we are using egg i've reduced the amount of water that i'm adding to the dough to compensate for the amount of water that's contained in the egg onto dough number three once again flour yeast salt water but this time only egg white if you want the exact recipe for these breads you can find them in the link below the video let's move on to the fourth dough last but not least we're going to use egg yolk in this one the yolk itself is small but still contains 50 percent water so of course we adjusted the amount of water we're gonna use so let's get to mixing
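the water bookkeeping described above can be sketched in a few lines of code this is only an illustration of the baker's percentage adjustment using the approximate composition figures quoted in the video (whole egg about 75 percent water, yolk about 50 percent, white about 90 percent) the flour and egg weights in the example are hypothetical, not the video's exact recipe

```python
# Sketch of the baker's-percentage water adjustment described above.
# Composition figures are the approximate ones quoted in the video:
# whole egg ~75% water, yolk ~50% water, white ~90% water.

EGG_WATER = {"whole": 0.75, "yolk": 0.50, "white": 0.90}

def adjusted_water(flour_g, hydration, egg_g=0.0, egg_part="whole"):
    """Grams of water to add so total hydration stays on target."""
    target_water = flour_g * hydration            # total water the dough needs
    water_from_egg = egg_g * EGG_WATER[egg_part]  # water contributed by the egg
    return target_water - water_from_egg

# e.g. 130 g flour at 60% hydration with a hypothetical 55 g whole egg:
# 78 g total water minus 41.25 g from the egg = 36.75 g water to add
print(adjusted_water(130, 0.60, egg_g=55, egg_part="whole"))
```

the same function covers the no-egg dough (pass egg_g=0) so all four doughs in the experiment end up at the same effective hydration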
we're going to make all four breads at the same time more or less to the same temperature of course this is not a comparison of which one is best there is no right answer to that which is best is up to you and it's up to the requirements of the dough you're making some breads don't benefit from egg being added to them but what are the benefits to using egg well firstly they are one of the most nutritious foods you could eat packed with protein and vitamins they may as well be classed as a superfood but when it comes to bread making it goes a lot further than just nutritional benefits eggs of course add a nice flavor to the bread and they can greatly improve the texture and the crust the fat in the yolk acts similarly to other fats you would use in your bread making it inhibits the gluten formation weakening it that makes the dough softer and looser and that's what makes the crumb a lot more airy because it rises higher and puffs up more egg whites on the other hand act as a coagulant they will help with developing an even crumb with a texture that is more springy eggs can also extend the shelf life of your bread by lowering the ph and adding acidity let's not forget about the crust using egg yolks or whole eggs will make the crust of the bread caramelize more but also become nice and crispy that being said bread with egg in it should be baked at a lower temperature you don't want the crust going too dark too early of course not everyone can eat eggs there are a couple things you can replace them with if you want the texture of your bread to be nice and airy and fluffy you could use oil butter or other fats instead of the egg yolk even using sugar will make the crumb more airy and light but let's get back to our breads for a second here the one i just finished mixing is the one without egg i've decided to mix the one with the egg white next because the ones that contain egg yolk are a lot stickier leaving them to hydrate for a little bit longer makes them easier to knead
comparing the dough with no egg versus the one with egg white they're quite similar they almost feel exactly the same the next dough i'm going to knead is the one that contains a whole egg as i mentioned earlier the fatty yolk inhibits gluten formation that can make the dough more sticky more difficult to knead especially by hand i kneaded this for around the same amount of time as the other two which is about three minutes as you can see it's a little bit sticky but we'll move on to the next one the last dough only has egg yolk and kneading it feels about the same as the one that contains whole egg it is also sticky but manageable we'll also give this one around 3 minutes and you may think that 3 minutes is not very long for kneading a dough but this is a very small piece of dough let's clean down before moving on this dough is only made up of 130 grams of flour and the smaller your dough is the more of it you work every time you press it into the table that's why it takes less time okay so let's get to fermenting top left corner is the dough with no egg top right corner is the dough with the whole egg bottom left corner is the one with egg white and bottom right corner is the one with egg yolk i tried to make them all with more or less the same final temperature they're within half a degree of each other the first dough being the coolest and the last dough being the warmest because i want them to rise more or less at the same time the first dough had to be made cooler because it gets the advantage of being mixed first and the last one benefits from being slightly warmer because it needs to catch up with the first one i don't know about you but i normally use whole eggs when making bread that requires egg do you ever use just whites or just yolks and why let me know down in the comments i guess a good reason for using a whole egg would be so that you don't have to come up with something to use the other half for but then again you could use the white in the
dough and the yolk for glazing or you could use the yolk in the dough and the white for glazing both with very different results right so we proofed these for around an hour now we're going to give them a fold folding will help with equalizing the temperature it will also help with degassing and building some tension into the dough so it's not too loose the dough with no egg and the one with egg white don't really benefit from the extra tension i just wanted to keep things equal between all of these the one with whole egg and the one with egg yolk are still a little bit sticky so sometimes you may need to use some flour when performing this step but back to building tension the reason why the dough with no egg and the one with egg white don't benefit from it is because this is a relatively low hydration dough only at 60 percent and whilst kneading we already developed the gluten fully so the dough is already tight enough the ones with the yolk and whole egg were looser the gluten is weaker so the dough is more runny in a sense so folding the dough with no egg and with egg white will make it expand less because it's tighter folding the dough with egg yolk and with whole egg may make it expand less but it will still puff up nicely because it's looser regardless by folding them we are ensuring that the dough expands more vertically instead of spreading out sideways but if you want to learn more about folding i have a full video on that in the steps of baking playlist on my channel you can also find videos about using fat and adding fat to your bread dough in the principles of baking playlist in fact both of those playlists are full of useful information that you may find interesting right so what happened here is i pre-shaped the dough and i let it rest for around 20 minutes and now we're going to do the final shaping this is where you will really feel the difference between the four doughs the one with no egg feels nice and firm still full of air bubbles it's nice and tight i'm
going to shape them all the same way flatten them out fold the top two corners in the middle and then roll up that's how you shape a regular batard we'll be sure to pinch all the seams together and try to have them more or less the same shape they will all go in the same size baking tins lined with some nonstick paper the two doughs with no egg and with egg whites don't really need any flour in shaping but the two with whole egg and egg yolk are a little bit sticky so a light dusting helps along the way this is actually the second time i'm doing this experiment and both times the results were exactly the same now all the loaves have been shaped we can proceed to the final proof i'll give them a light dusting of flour to prevent the cling film from sticking so far the bulk fermentation took around two hours then we pre-shaped them rested them for 20 minutes and the final proof will take another two hours or so towards the end of proofing i'll preheat my oven to 180 degrees celsius fan off we'll bake all the loaves at the same temperature just to see the difference the ones with the whole egg and the egg yolk have risen more rapidly so they'll go in the oven first we'll leave the other two to ferment for around 20 more minutes at this point i wasn't really sure what the difference would be between all the loaves but as we'll see in a minute the difference is quite significant and these two are ready to join the other two in the oven right now there they are fully baked no egg whole egg egg white egg yolk and they look very different from each other the crust is clearly different in all of them and they're all different sizes but let's have a closer look first the one that doesn't contain any egg looks like a regular loaf of bread it hasn't popped up too much but has a nice uniform shape and the crust feels nice and soft next we've got the one with whole egg it has clearly puffed up a little bit more the crust is darker and crispier you can see the cracks all over it this could
be a desired result number three is the one with egg white it looks very similar to the one without egg but the crust looks more evenly coloured and they're about the same volume but the last one that's a big boy the one with egg yolk has puffed up massively it's got by far the darkest and crunchiest crust and this is a big discovery for myself because i've tried making bread with a crust like this in the past and i never knew how i guess egg yolk is the answer but looking at them from the outside is one thing let's cut them open and see what's in the middle of course we can kind of guess what the texture would be like just by looking at them the bigger ones of course will be the softer ones let's do the all-important taste test starting from the left the one with no egg it has a nice even crumb with little bubbles it's nice and soft it's just a regular old white bread it's just a benchmark for this test basically next up let's pick up the one with whole egg you can clearly see that the crumb is made up of larger bubbles they've spread out more pressing it feels nice and soft quite a bit softer than the bread with no egg this stuff would make a nice roll i like the crispy crust biting through this is effortless the third one was the most surprising one to me the one made with egg white is very similar to the one with no egg with one difference the crumb is a little bit tighter this is due to the coagulating effect of the egg white that's by no means a bad thing if you want a heartier bread that's easier to slice you may want to use egg white last but not least the one with egg yolk it is super airy and fluffy this is the lightest one of all it would make the best burger buns when it comes to flavor the one with egg white tastes almost the same as the one without egg the one with egg yolk is the eggiest one which of course is not a bad thing but the one with whole egg that one is slightly less eggy that concludes our test i hope you found this interesting so if you have
any questions or suggestions let me know down in the comments to see more videos like this click over here to subscribe to the channel click right here thank you for watching and i'll see you in the next one
Principles_Of_Baking
Which_Is_the_Best_Surface_for_Bread_Baking_Steel_Iron_Stone_Aluminium_Compared.txt
how's it going my awesome bunch of Bakers I hope you're having a great day welcome back to the channel today we have another episode of the principles of baking playlist we'll be comparing different surfaces for bread baking so let's go to the kitchen and check them out baking bread on a hot solid surface can be quite beneficial it helps the bread achieve better oven spring and the crust can get nicely browned and crispy cast iron is one of the best surfaces to bake your bread on it is extremely dense and heavy it holds and radiates heat really well it is not cheap but it's extremely durable if you look after your cast iron pan it will outlive you easily what's especially good about this is that it has a lid the lid will trap steam inside steam prevents the crust from drying out and that's another way of getting great rise there is a whole episode about steaming which you can find in the principles of baking playlist the only disadvantage of this pan is its size and shape it can't hold all the breads the size and shape of the next piece of equipment is quite good but this coated aluminium pan is extremely light such a tray does not hold the heat very well and it can cool down quite rapidly when you place a large piece of dough on it I prefer using this for smaller bakes like burger buns or dinner rolls which are proofed on the tray and then moved into the oven another common surface used in bread making is stone I think most commercial bread ovens have a stone floor and it can work really well if you have the right kind of stone this one is very light and porous it doesn't hold or radiate heat very well and it soaks up pretty much anything you place on it so after baking just a few pizzas it looks pretty nasty and there's no way to clean it next up we have a baking steel this is one of my favorite pieces of equipment it is extremely dense and heavy and it holds the heat just as well as the cast iron pan does and the best thing about it is that it is big of course compared to
the cast iron pot there's one downside it doesn't have a lid but there are some good options for steaming out there if I had to choose between this and a cast iron pot with a lid I would definitely go for this but let's run them against each other and see how they compare what I have here today is a cold bulk fermented dough it's made with white flour and it has a hydration of 70% it has been in the fridge for 24 hours since we're testing four baking surfaces I'm dividing this into four equal pieces I will pre-shape the dough then I'll let it rest for 15 minutes then I'll do the final shaping and the final proof which will only take about an hour once we start baking the first loaf the other three will be placed back into the fridge to slow the fermentation down and stop them from over proofing all the loaves will have a piece of nonstick paper underneath them it's just there to help me move them from the table to the oven it will not have any negative effects on the test all the baking surfaces will be preheated and the breads will be baked at 220° C fan off that is around 430° F okay with all the details out of the way and the final proof done we can get our first loaf in the oven and we'll start with the cast iron pan I will not be using the lid since we're only testing the surface but I will spray all the loaves with water before they go in the oven again you'll find a video about steaming in the principles of baking playlist while the cast iron pan is small and it can't hold very many breads the positive side of that is that it can be moved out of the oven this makes loading the loaf into the oven far easier you take the pan out make sure you don't burn your table place the loaf in the pan and then move the pan with the loaf to the oven and once it's done baking you just take the whole pan out again all four loaves will be baked for exactly the same time which is 30 minutes I will try to keep things as equal as I can here the first loaf rose up pretty well although it is a
little bit misshapen but that's not important here let's look at the bottom and predictably it has browned quite nicely we'll leave this to cool down and move on to the next surface which is the baking steel I really like using this because it can accommodate many breads at once and it doesn't stain like my stone does plus it is just as durable as the cast iron skillet my baking steel never leaves the oven I always keep it in there on the same rack it works perfectly together with a baking tray so when I'm baking some burger buns which are final proofed on a baking tray I can place the tray on the steel the steel transfers heat to the tray and I get a nicely browned crust on my buns and a good rise of course this looks like it's risen pretty well too and looking at the bottom we can see that it's browned quite nicely so let's move right on to the baking stone or pizza stone as it's sometimes known I bought this before I bought my baking steel and I didn't realize it was going to be so light and porous I definitely do not recommend a stone like this if you want to use a baking stone then I would suggest going with granite or marble those types of stone are heavy and solid they will heat up and hold the heat really well the only problem with the stone is that it can break and I personally don't see any advantage to using a baking stone instead of the baking steel we used earlier I mean I guess a stone doesn't rust but if you look after your steel it won't either and while a stone can break steel can't okay here's our pale bottom loaf let's leave this to cool down and move on to the final baking surface which is the coated aluminium tray or aluminum tray as my American friends would say it is the most basic baking surface and if you have ever used your oven you must have one of these trays at home it is perfectly fine to bake a loaf of bread on this but it's definitely not the best surface for baking on it works best for rolls and buns which must be fermented on the
tray before being moved to the oven it would not be the best for baking baguettes on for example or ciabattas or pizza those kinds of breads require a solid hot surface even though this time it worked out pretty well I would still go for the steel or the cast iron right let's compare these side by side not the most beautiful loaves I'll admit what's important is their bottoms they are still in the same order here from left to right number one is the cast iron pan number two is the baking steel number three is the baking stone and number four is the tray and I must say I was quite surprised at how well the aluminium tray did the thing is that I didn't expect it to be better than the stone maybe it was the coating on that tray that transferred the heat quite well still comparing all four of them the cast iron pan and the baking steel definitely did the best the first two breads have the darkest and most evenly colored crust and the crust on the stone baked one looks miserable it is clearly underbaked of course that would not be the end of the world if the bottom crust of your loaf looks like that you can simply flip it over let it bake for another 5 minutes to brown the bottom I have done that countless times but if you are using cast iron or steel you won't have to I spoke about oven spring at the beginning of the video of course I don't have the best examples for you here today but we can see some slight differences the first two loaves baked in the cast iron pan and on the steel rose up a little bit more vertically the ones baked on the stone and on the tray spread out a little bit more sideways the heat really makes the loaf jump up comparing the thickness of the crusts we can clearly see that the one baked on the stone is very very thin of course we don't want the crust to be extremely thick but it should have some thickness to it like a millimeter or two and it should be crispy not soft and I'm talking about a regular loaf of bread here not your soft dinner rolls we need to be
able to slice this smear it with butter all without breaking it and tearing it up so have we answered the question of which baking surface is the best it is the heavy baking steel in my opinion because it's the most versatile but a cast iron pan especially one with a lid is a game changer too so what's your preferred baking surface let me know down in the comments if you want to see more videos like this one click over here to subscribe to the channel click right here that's all I have for you today thank you so much for watching I'll see you in the next one
MIT_RES10S95_Physics_of_COVID19_Transmission_Fall_2020
Safety_guideline_for_COVID19_Transmission_rate.txt
PROFESSOR: So as an aside, for those of you who have an understanding of probability theory and stochastic processes, let me just explain why it is valid to bound the mean transmission rate in defining the indoor reproductive number when we might be more concerned about bounding the probability of a transmission. So let's let a capital T be a random variable which describes the random number of transmissions that occur in the room. And this is specifically for the case of one infector or infected person and n minus 1 susceptibles. So again, the situation of the reproductive number where for every person that comes in we want to know is there going to be transmission, and typically, only one infected person would be seen. So this is a random variable, and let's let f_T of n be the probability density function that gives the probability of little n transmissions. And then we can define the risk of a transmission, the risk of at least one transmission, as the probability that this random variable takes on a value which is greater than or equal to 1. OK. Well, in terms of the probability density function then, that would be a sum from 1 to infinity of f_T of n, basically. So we're just adding the probabilities of each possible number of transmissions, over those different numbers of people. Now I'd like to do a little calculation to get an upper bound on this quantity. So we can say that this is less than or equal to the sum from n equals 1 to infinity of n f_T of n. Now, this is just a mathematical trick here. So n refers to the natural numbers 1, 2, 3, 4, et cetera. Those are all positive numbers and they're all greater than or equal to 1. So if I take 1 in this expression over here and replace it with little n, I'm only increasing the value of that sum, because also, this f_T is a probability density that has to be positive.
And then now I can also say that this is actually equal to throwing in n equals 0, because that is a term that is actually identically 0. So I can change the summation. And then by definition here, this thing is the expected value of the number of transmissions, because I'm summing the number of transmissions little n times the probability of that event occurring. So that is the definition of the average. So what we're seeing here is that the risk of a transmission is rigorously bounded above by the expected number of transmissions. And so therefore, if we require that this is less than epsilon, our risk tolerance that we've just introduced, this is a conservative bound on the true risk, which let's say here is defined by rt. So if your goal is to control the probability of having at least one transmission, so basically to ensure that no transmissions occur, then you would do well to bound the expected number of transmissions, because that's an upper bound. It can also be shown that as epsilon goes to 0, so we're talking about very low probabilities of transmission, and oftentimes that is the case, then rt is asymptotically the same as the expected number of transmissions. So this overall risk of a transmission and the expected number are the same. And in fact, that's one way to understand sort of even the definition of a probability in terms of an expected number of events. And we are typically thinking of cases where epsilon is much less than 1. So for those of you that have some background in probability, you may recognize that what I've just done here is an example of a much more general result, which is called Markov's Inequality. So now we can safely proceed by continuing to work with average values of all the quantities of interest.
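As a quick numerical illustration of this bound (not from the lecture), suppose the transmission count T follows a Poisson distribution with mean lam, a common model for a random count of events. Then P(T >= 1) = 1 - e^(-lam), which always sits below E[T] = lam, and the two coincide as lam goes to 0:

```python
import math

# Check the bound P(T >= 1) <= E[T] for a Poisson-distributed
# transmission count T with mean lam (an illustrative model choice).
def risk_of_transmission(lam):
    """P(T >= 1) when T ~ Poisson(lam)."""
    return 1.0 - math.exp(-lam)

for lam in (0.01, 0.1, 1.0, 5.0):
    risk = risk_of_transmission(lam)
    # Markov's inequality: risk <= expected number of transmissions
    assert risk <= lam
    print(f"E[T] = {lam:<5}  P(T>=1) = {risk:.4f}")

# For small lam (a small risk tolerance epsilon), 1 - e^(-lam) ~ lam,
# matching the asymptotic equivalence mentioned in the lecture.
```

The printout shows the gap between the risk and the expected number shrinking as the mean gets small, which is exactly why bounding the expected number is both conservative and asymptotically tight.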
MIT_RES10S95_Physics_of_COVID19_Transmission_Fall_2020
Transfer_of_respiratory_pathogens_Wells_curve_derivation_ASIDE.txt
PROFESSOR: So now, let me make the first of our technical asides, which you can skip over if you're not interested in the mathematical details or for those of you that have a higher-level, say, upper-level-undergraduate or even graduate-level understanding of transport phenomena and fluid mechanics. I'd like to show you some of the equations that are behind the results that I've been quoting in all the lectures. So in particular, let's derive the Wells curve. So part of that was a theory of drop settling. Here, I will quote a certain result, because the derivation would be a lot longer. Actually, for that, you could refer to my online class 10.50x, which is that if you have a droplet or a particle of a radius R and it is settling under gravity -- so it has a mass m, and the gravitational force is m g, where g is the gravitational acceleration -- then there is a flow of fluid around this object. And relative to the moving object, the flow is going the other way. And if you solve for the viscous flow around an object being dragged through a fluid, then you arrive at the result of Stokes, which is the drag on that object. So if it's falling at a velocity v, then the drag force is -- f_d is -- 6*pi times the radius of the drop times the viscosity of the fluid times the velocity of the drop. So we're falling at a velocity v, which is the settling velocity. And this here is the Stokes drag coefficient, which comes from solving the fluid mechanics of viscous flow around a sphere translating at a constant speed. We can, furthermore, say that the mass of the droplet, of course, is 4*pi/3 times the density of the droplet liquid, times the radius cubed. And so given the mass of the droplet, there's a force balance between the gravitational force m g and the drag force when the particle reaches a terminal velocity.
So if we think of this v_s as the terminal velocity where it's accelerating until there's a balance between the forces and is moving at a constant speed, it's given by this force balance. And from that equation, we can solve for the settling velocity, which is m g divided by 6*pi*mu*R. And to use the same notation as before, I'll call this mu_a -- the air -- but generally, it's the viscosity of the ambient fluid around the particle as it's settling. And if we plug in the value for m g, so that's 4/3*pi*rho*g*R^3 over 6*pi*mu_a*R. And so if we simplify that, we end up with (2/9)*rho*g*R^2/mu_a. So that's the settling speed. And this is a pretty important concept, so I'll just sketch it here. So if we want to know what's the settling speed as a function of the radius of the drop, then you see it grows like R^2. So it's like this. And then to put a scale on that, if we have a particle that is 3 microns -- so that's really an aerosol particle -- then the settling speed is around 1 millimeter a second if we use the density of water and the viscosity of air for this formula. And so that's already a fairly slow settling speed, a millimeter per second. So you can already see the particles that are in the micron range will be suspended in the air for a long period of time, as long as they don't evaporate away. And so that is now the second part of the calculation. Oh, and I should finish the first part here. What we're left with is that the settling time is L over v_s. And that's the formula that we had before, which is 9*mu_a*L divided by 2*rho*g*R^2. So this is our first part of the Wells curve. So if I draw the Wells curve over here in the traditional way, where I plot on the horizontal axis the size of the particle, and on a downward axis, we draw the time, then we have a curve like this for settling.
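Plugging numbers into these two formulas is a quick sanity check on the settling branch of the Wells curve. This sketch uses standard property values for water and air (density 1000 kg/m^3, air viscosity about 1.8e-5 Pa*s, g = 9.81 m/s^2) rather than figures from the lecture, and takes a 2 m fall height as an example:

```python
# Stokes settling speed and Wells settling time from the formulas above:
#   v_s = (2/9) * rho * g * R^2 / mu_a,   t_settle = L / v_s
# Property values below are standard figures, not taken from the lecture.

RHO_W = 1000.0   # droplet (water) density, kg/m^3
MU_A = 1.8e-5    # viscosity of air, Pa*s
G = 9.81         # gravitational acceleration, m/s^2

def settling_speed(R):
    """Terminal (Stokes) settling speed for a drop of radius R [m]."""
    return (2.0 / 9.0) * RHO_W * G * R**2 / MU_A

def settling_time(R, L=2.0):
    """Time to fall a height L [m] at the terminal speed."""
    return L / settling_speed(R)

R = 3e-6  # a 3-micron aerosol droplet
print(settling_speed(R))        # ~1.1e-3 m/s, i.e. about 1 mm/s, as quoted
print(settling_time(R) / 60.0)  # roughly half an hour to fall 2 m
```

The R^2 scaling means a 30-micron drop settles 100 times faster, which is why large drops fall out of the air in seconds while micron-scale aerosols linger.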
And the reason it's drawn down, I guess maybe the feeling that as you're sort of falling down a particle a certain size, you hit this curve and that's when you've settled a distance L and fallen out of the air. So now, let's look at evaporation, which is our second topic. I've lost my blue [marker] -- here it is. So these droplets are getting very small as they're evaporating, and it's happening very quickly, as we shall show in a moment. And so a natural assumption is that the process is limited by the diffusion of water vapor away from the droplet, because essentially we have this little droplet here with a certain size R, which is now going to be varying with time. So it has a radius R(t). And it's really close to the surface. There is an equilibrium concentration -- we'll call it c_w -- of water, which depends on the temperature. So that's kind of the saturation concentration of water vapor in the air. But then, if the water is going to evaporate more, it would create more concentration, which would then re-condense on the particle. So in order for it to continue evaporating, that water vapor that is produced has to diffuse away. So there's going to be a gradient of water vapor going outwards from c_w -- [it] is the concentration at position R. And then far away, there is a sort of diffusion layer thickness, delta. And far beyond the diffusion layer thickness, the concentration is going to approach the equilibrium concentration in the ambient air. c at infinity is going to be c_w times the relative humidity. So that's the ratio of the concentration of water vapor in the air to the saturation concentration, c_w, by definition. Now, the math problem that we have to solve for this diffusion problem -- with a moving boundary in this case, though we'll assume it's pseudosteady -- is dc/dt is the diffusion coefficient of water times the Laplacian of c, so just the diffusion equation with these two boundary conditions. 
Now, an interesting aspect of three-dimensional spherical diffusion is that at first the diffusion layer grows, but it very quickly reaches a steady state. And if we assume that the diffusion time to reach this distance delta is fast, so that we reach a steady state, so it's a kind of quasisteady or pseudosteady shrinking of the droplet with sort of a diffusion layer around it that's always kind of at the steady value, then it turns out that this diffusion layer is on the order of the particle size. So as the particle shrinks, the diffusion layer also shrinks. But it has a well-defined thickness, as opposed to diffusion in one or two dimensions, where the diffusion layer just keeps growing out to infinity. For example, like the square root of time -- you don't reach a steady state in an infinite domain. So the bottom line of this calculation, which I will not go through right now, is that the flux of water on the surface is the area of the surface at a given moment, where the size is R, times essentially Fick's law, where the driving force, the change in concentration from the surface to the bulk, is c_w times one minus the relative humidity, times the diffusivity of water, and then divided by delta, the diffusion layer thickness. And it turns out that with these coefficients here, delta turns out to be exactly R. So this is not a scaling result, but actually an exact result for pseudosteady spherical diffusion of water vapor. So now, we have the flux on the surface. It's uniform on the surface. And it's [assumed] to be pseudosteady. And so then I can write down that the rate of change of the water droplet volume, which is (4*pi/3)*R^3, is equal to minus the volume of a water molecule times the flux of water. So that's basically my volume or mass balance of water. So if I plug this in here, then I get dR/dt is equal to -- let's see, collecting all the terms here -- so the derivative of R^3 is 3*R^2*dR/dt. So the 3's cancel. And then I have a 4*pi*R^2, which cancels this 4*pi*R^2.
So I just have dR/dt is -v_w*D_w*c_w*(1-RH)/R. If I put this R on the other side here, then I have -- I'll just continue the derivation here -- R dR/dt is equal to all this stuff: -v_w*D_w*c_w*(1-RH). And this expression here can be written as 1/2 the derivative of R^2. So what we find is that R^2 is linear in time. And then using the boundary condition that we start out at a certain initial value, R_0, I'm going to get that R(t) is the initial value R_0 times the square root of 1 minus t over a certain evaporation time. And that evaporation time is given here by (R_0)^2 divided by basically all these coefficients here, where I'll separate out the effective humidity factor (1-RH), and then a bunch of other coefficients, which you can see have units of length squared over time, because (R_0)^2 is a length squared. So it's effectively some kind of diffusivity. And what we get from this calculation is that this effective diffusivity that goes into this expression is -- there's a factor of 2 from this guy -- 2*v_w*D_w*c_w. And if you plug in values for water vapor -- for the saturation pressure, the diffusivity, and the volume of a water molecule -- then this coefficient turns out to be 1.2e-9 meters squared per second for pure water. And that's where you get the second part of the Wells theory, which is the evaporation, which gives you a curve looking like this. So there's, in this theory, a natural crossover between large drops, which in this case are ones that are large enough to settle out of the air before they evaporate, and small drops, which evaporate. On the other hand, for true biological fluids that appear in respiratory droplets, the evaporation is limited by solutes and salts, which stop the evaporation and, in fact, can attract even more water in some cases.
So that the evaporation part of it is not as accurate, and we tend to see that the settling part is more important to consider, given an equilibrium distribution of droplets that has been measured and is understood to come from different types of respiration.
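The evaporation law and the Wells-style crossover with settling can be checked numerically. Below is a minimal sketch (not from the lecture verbatim): it uses the lecture's effective diffusivity of 1.2e-9 m^2/s and assumes Stokes drag for the settling speed, with a standard value for the viscosity of air; the fall height and the relative humidities are illustrative choices.

```python
# A minimal sketch: compare the pseudosteady evaporation time
#   tau_e = R0^2 / (d_eff * (1 - RH))
# against the Stokes settling time tau_s = L / v_s, as in the Wells picture.
D_EFF = 1.2e-9   # m^2/s: the effective diffusivity 2*v_w*D_w*c_w from the lecture
RHO_W = 1000.0   # kg/m^3, density of water
MU_AIR = 1.8e-5  # Pa*s, dynamic viscosity of air (assumed standard value)
G = 9.81         # m/s^2
L_FALL = 2.0     # m, assumed fall height (roughly head height)

def evaporation_time(R0, RH):
    """Time for a pure-water droplet of initial radius R0 to evaporate fully."""
    return R0**2 / (D_EFF * (1.0 - RH))

def settling_time(R0):
    """Stokes settling time from height L_FALL."""
    v_s = 2.0 * RHO_W * G * R0**2 / (9.0 * MU_AIR)  # Stokes terminal velocity
    return L_FALL / v_s

def crossover_radius(RH):
    """Radius where tau_e = tau_s: tau_e grows like R0^2 while tau_s falls like 1/R0^2."""
    return (9.0 * MU_AIR * L_FALL * D_EFF * (1.0 - RH) / (2.0 * RHO_W * G)) ** 0.25

for RH in (0.0, 0.5):
    Rc = crossover_radius(RH)
    print(f"RH={RH}: crossover radius ~ {Rc * 1e6:.0f} um, "
          f"time scale ~ {evaporation_time(Rc, RH):.1f} s")
```

Drops larger than the crossover radius (a few tens of microns here) settle out before they evaporate; smaller drops evaporate first and linger as aerosols -- with the caveat, noted above, that solutes in real respiratory droplets arrest the evaporation.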
MIT RES.10-S95 Physics of COVID-19 Transmission, Fall 2020
Safety guideline for COVID-19: Transient aerosol buildup
PROFESSOR: In most of our calculations of safety, we're going to be interested in the steady state average transmission rate between individuals in a room, after the aerosol particles have built up to a steady state. But let's briefly talk about the transient buildup, and how to take that into account, just as an aside in this board here. So here is the general expression for the transmission rate that we described, which depends on the breathing rate squared and the volume of the room. It's an integral over all the different drop sizes, where n_q is a lumped distribution of the number of infection quanta per volume per radius. And then p_m is the mask transmission factor, which depends on radius. And lambda_c is the total relaxation rate, involving sedimentation or settling, viral deactivation, and filtration. And that also depends, of course, on R. And that lambda_c of R also ends up in the exponent here. So that rate of relaxation of the concentration in the air is also setting the time scale, lambda_c inverse, for the buildup of those aerosol droplets in the air. And so that's this factor here -- this is basically the transient term. And then this term, the one, is the steady term. So we're interested now in the effect of the transient. Now, before we get to that: if we forget about the transient and we just have the steady state, then we introduce beta bar as the sort of constant steady state value of the transmission. And through this definition here, by doing these integrals, we have defined an effective radius R bar, which is sort of where you evaluate the mask transmission factor, and also the relaxation rate, in order to make these two values equal. So that's actually our definition of effective radius. And so now, looking at the transient term, let's ask ourselves, what is the average transmission rate up to a certain time tau.
So that would be: we divide by a time tau, and we ask ourselves, up to that time, what is the average transmission rate? So we integrate beta dt from 0 to tau, then divide by tau. So what would that be? Well, we can take this time integral and bring it inside the radius integral, and write this as Q_b squared over V, times the integral from 0 to infinity of p_m squared times n_q over lambda_c -- keep in mind all those factors depend on R -- times, now, an integral from 0 to tau -- so I'll put this in brackets -- of 1 minus e to the minus lambda_c (which depends on R) times t, divided by tau, dt. And then dR. So we're switching the order of integration, where we're going to do the time integral first. And so if we look only at this expression right here, we can write it as a sum of a steady state term -- so when it's just the 1, this is the integral of 1 over tau from 0 to tau, so that's just 1. So that's the steady state contribution. 1 plus, and then there's a transient contribution where I have to do this integral here. So that's e to the minus lambda_c t over lambda_c tau, evaluated from 0 to tau. And so we'll come back to this in just a moment and evaluate it. But first, just to draw a picture of what we're looking at here: the average transmission rate as a function of this averaging time tau, well, eventually of course, it tends to the steady state value. But it does so in a certain way we're going to calculate, like that, where the time for that transition is the inverse of the relaxation rate. Although there's not a precise value of that, if we want to keep a scale for it, it's going to be evaluated at that value R bar that I mentioned. That gives you a rough sense of the overall relaxation. So there's this buildup of the aerosol concentration in the room once the infected person has entered, and eventually, there's sort of a steady transmission rate to everyone else in the room. So let's continue calculating this right here now.
So this is the transient. And I can write this as 1. And if I evaluate here, I can put it this way, as minus, and then I evaluate first at the lower limit, which gives me another 1, minus, and then evaluating at the upper limit, which is tau, e to the minus lambda C tau over lambda C tau. And now, I'll use an approximation that helps me get a simple analytical results. So I should mention, as soon as we have exponential and polynomial factors, it can be difficult to solve equations. For example, what is the bound on the occupancy or the time in the room, or the ventilation. We like to get a simple formula. And so if there's a nice approximation I can make, which is that 1 minus e to the minus x over x is not too far off from 1 over 1 plus x, it turns out. So it's not a perfect match. You can try plotting these two functions. But it's a reasonable approximation, given that everything we're doing in this calculation, when applied to a real situation, is going to be off by some uncertainty, which could be a factor of 2 or 3, this is actually going to be more than good enough of an approximation for us. So if I make that approximation, then what I have here is that this thing is 1 over 1 plus x here. And so we end up with 1 minus 1 over 1 plus lambda C tau. And when I combine those two terms, I end up with lambda C tau over 1 plus lambda C tau. So this is my approximation. In fact, I can further then write that as 1 over 1 plus lambda C tau inverse, dividing the numerator and denominator by lambda C tau. So I'm just making some approximations here that allow me to get a very simple expression in the end for my safety guideline, taking into account this transient build up here. So remember that the bound we have is on the indoor reproductive number, which is N minus 1 times the integral to tau of beta dT. So what is that? That's just the sort of time average beta times tau. So this bound is actually N minus 1, time average beta times tau. 
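The quality of this approximation is easy to check numerically. A quick sketch comparing (1 - e^(-x))/x against 1/(1 + x) over a range of x:

```python
# Quick numerical check of the approximation used above:
#   (1 - exp(-x)) / x  ~  1 / (1 + x).
import math

def exact(x):
    """The transient factor that appears in the time-averaged transmission rate."""
    return (1.0 - math.exp(-x)) / x

def approx(x):
    """The simple rational approximation used to get a closed-form guideline."""
    return 1.0 / (1.0 + x)

for x in (0.1, 0.5, 1.0, 2.0, 5.0, 10.0):
    print(f"x={x:5.1f}: exact={exact(x):.4f}, approx={approx(x):.4f}, "
          f"ratio={approx(x) / exact(x):.3f}")
```

Both sides go to 1 as x approaches 0 and fall off like 1/x for large x; the worst mismatch is roughly 20-25% near x = 2, comfortably inside the factor-of-2-or-3 uncertainty mentioned above.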
And then our guideline, of course, is to make this less than our tolerance, epsilon. And so what that means, then, using this result -- you can see that if I look at the expression for the time-averaged beta, it's just the steady state expression times this factor. So basically, this is kind of the factor that corrects for transient effects, again with just a simple approximation. So I can then write that my guideline now has a modified form, which is that N minus 1 times tau is less than epsilon over the time-averaged beta up to time tau. And this is approximately equal to epsilon over beta steady state times this factor here: if I multiply that to the other side, I just get 1 plus 1 over lambda_c of R bar times tau. So basically, this right here is the transient correction, or modification, and this is the steady state formula, which we will more typically be using. Now, why do we care about the transient? Well, first of all, you can see that by using the transient, we are being less conservative. So if we want to be very conservative, we can say, you know what, let's just assume that the second the infected person enters the room, boom, the transmission rate goes right to the maximum value. That's the most conservative. So generally, using the steady state is more conservative. So that's one reason we like to use it. Also, it gives you a simpler formula -- why add a bunch of factors to a formula that only make it less conservative? And we've made a lot of assumptions in this model, so it makes sense, maybe, not to worry about it. However, I do actually like to include it for certain examples, because it allows you to capture the intuition we all have that when the time goes to 0, the risk also has to go to 0, which you don't get from the steady state. If this bumps up right away, then you could be spending like 2 seconds in a room and have a chance of getting infected right away, which is actually not right.
There has to be some time, physically, for transmission to happen through these droplets from one person to another. So the effect of the transient correction, as you can see here, is that when lambda_c tau is larger than 1 -- so that's times that are kind of out here -- that term is gone. But when you get to earlier times, or very short times, where there hasn't been time yet for the buildup of the airborne concentration, then as you see, as tau goes to 0, this term actually diverges. So, for example, one thing you can get from this guideline is the maximum occupancy versus time, or the maximum time in the room for a given occupancy. We're going to be looking at lots of plots like this. I would have something that might look like this for steady transmission. And when this time gets larger than lambda_c of R bar inverse -- there's some critical time scale there, which is this one right here -- then when you're past that time scale, you've got the steady state. But if you go to smaller times, then this thing can sort of blow up a lot faster. So what it kind of helps to capture is, again, this intuition that if I put in a tau that is extremely small, then of course, the risk goes away, and I can have larger numbers of people in the room, or I can tolerate smaller times and actually be safe. So anyway, that's one reason we do that. On the other hand, for a conservative guideline -- and this is the most important message of this course, really -- think about that steady state transmission rate, which we will mainly be focusing on in all of our examples.
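The transient-corrected bound can be turned into a maximum-occupancy curve. The following is an illustrative sketch: epsilon, the steady-state beta, and lambda_c below are made-up numbers chosen only to show the shape of the curve, not values from the course.

```python
# Illustrative sketch of the modified guideline derived above:
#   (N - 1) * tau < (epsilon / beta_ss) * (1 + 1 / (lambda_c * tau)).
# EPSILON, BETA_SS, and LAMBDA_C are assumed illustrative values.
EPSILON = 0.1    # risk tolerance (assumed)
BETA_SS = 0.5    # steady-state transmission rate, 1/hour (assumed)
LAMBDA_C = 2.0   # relaxation rate lambda_c(R_bar), 1/hour (assumed)

def max_occupancy(tau):
    """Largest N allowed in the room for an exposure time tau (hours)."""
    bound = (EPSILON / (BETA_SS * tau)) * (1.0 + 1.0 / (LAMBDA_C * tau))
    return 1 + int(bound)

for tau in (0.05, 0.1, 0.2, 1.0, 8.0):
    print(f"tau = {tau:5.2f} h: N_max = {max_occupancy(tau)}")
```

For tau much longer than 1/lambda_c the transient factor is negligible and the curve follows the steady 1-over-tau trade-off; at very short times the bound blows up, capturing the intuition that a couple of seconds in the room carries essentially no risk.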
Epidemiological models: Disease spreading in a population
PROFESSOR: So now, let's begin talking about the spread of disease, initially focusing on a population of individuals. The type of modeling which we'll be discussing is sometimes referred to as compartmental modeling, because a population is divided into different compartments: such as, for example, a set of susceptible people that have not yet been infected, a set of infected people, and finally a compartment of recovered people. Additional compartments could be added, for example, for different age groups or sub-populations, for incubation prior to infection by the disease, or for death as separated from recovery. This type of modeling based on population compartments was introduced by Kermack and McKendrick in the simplest case of what is called the SIR model, where S is the number of susceptible, I the number of infected, and R the number of recovered individuals in a population. And the dynamics take the simple form of a set of coupled nonlinear Ordinary Differential Equations. So first of all, the rate of change of the number of susceptible people is minus the transmission rate beta times S times I, because S times I basically counts how many pairs of susceptible and infected persons there are, and beta is the transmission rate per such pair. And then, next, we look at the dynamics of the number of infected persons. That starts with a conversion from susceptible to infected, so beta*S*I is the rate of producing new infected persons. And then, we introduce another rate constant, gamma, which is the removal rate. So people are removed from the infected compartment either by recovering or, potentially -- we could lump this in -- dying. We finally complete the balance here by writing the number of recovered changing as gamma*I. So basically, we have a model here with three compartments and two rate constants. Now the important aspect, really, is the rate of change of the number of infected individuals.
So I'll write that equation over here, which is dI/dt equals -- and let's factor out gamma and write the prefactor here on the rate as -- (beta*S/gamma - 1) times gamma*I. And if we look now at early times, then the number of susceptible people is approximately equal to the initial number, at t=0. So that's essentially the size of the entire population, typically. And so we would then write this as (beta*S_0/gamma - 1) times gamma*I. So now this is just a formula that at early times gives you an exponential increase. So we would then find that I grows like the initial number of infected persons -- it could be, for example, just 1 -- times e to this factor here times t. And I'm going to write it in a certain way, which is (R_0-1)*gamma*t. So this is the early times. And we have put a little squiggle here to indicate that's the initial growth rate or the initial dependence, where R_0 is beta*S_0/gamma. And this is called the reproductive number of the disease, or of the epidemic, because we can see here that if R_0 is bigger than 1, then we have an exponential growth of the number of infected persons. So then we essentially have an epidemic, starting from an initial index case, or some set of cases, numbering I_0. Of course, if R_0 is less than 1, then we have no epidemic. In other words, there may be an infected person or two, but the number will exponentially decrease, and there won't be any growth. So the reproductive number is an important concept in epidemiology that comes directly from these models. Related to that is the concept of herd immunity, which is the point where enough members of the population are immune that the epidemic starts to die out and eventually disappears. Let's make a plot of the typical predictions of the SIR model. So as a function of time, the number of susceptible people starts at some value S_0, and it decreases.
Initially, the number of infected persons starts at some small number I_0, which might even just be one index case, and as we showed here, it exponentially increases. The number of recovered starts at 0 and increases as well, with some delay given by the recovery time. And what we then see is that the number of susceptibles comes down -- let's look at this equation here. We can write this as (beta*S-gamma)*I. So initially, (beta*S-gamma) is positive. It starts, in fact, at the value (R_0-1)*gamma. But eventually, as the number of susceptible people comes down, there's a certain point where this factor goes to 0. And that would lead to dI/dt equal to 0 -- in other words, a maximum of the number of infected people. So at some point, there's going to be a value where this will turn around, where dI/dt is equal to zero. And where that happens will be at a certain value of S. I'll call that S_h, for the value of herd immunity in the susceptible number. And that is when this factor beta*S-gamma is equal to 0. Or in other words, S_h is gamma/beta. OK, and once we get to that point, then dI/dt is going to change sign and will only be negative. So the number infected will only be decreasing. Notice S is strictly decreasing, because this is a negative rate here. So S only continues to go down, which means that the prefactor here is always negative. So the number of infected people, from this point, must also necessarily continue to go down, and ultimately the number of susceptibles will, from that point, tend towards 0. The number of recovered will tend, of course, with some lag, to S_0. So this is the number of recovered, this is the number of susceptible, and the number of infected, of course, also goes to 0, something like this. And ultimately, in the long run -- because S is going to 0 -- dI/dt is approximately minus gamma*I. So the final decay here goes like e to the minus gamma*t.
So basically, the recovery rate is really dominating how quickly people are being converted to create the recovered and remove the infected population. So the final result here that comes from these models, which is quite interesting, is to ask: what fraction of the population needs to become immune -- what is this threshold of susceptibles -- in order to achieve herd immunity? So if we look at what S_h is over the initial value -- how far do we have to go down? -- well, that would be gamma/(beta*S_0), and you'll recognize that that is nothing more than the inverse of the initial reproductive number. So essentially, herd immunity is reached when the susceptible fraction becomes 1/R_0. And this is an interesting prediction of this very simple model: more infectious diseases that have a very high value of R_0 -- for example, smallpox -- lead to a situation where, when the epidemic finally ends, the number of susceptibles when you've reached herd immunity -- when this starts to turn around -- is actually quite low. So, in other words, you have to go very far in infecting the population to start to end the epidemic. Conversely, for a disease that has a small value of R_0 -- even COVID-19 might have a value of, say, 3.5 -- this fraction here might not be so low. It might be, maybe, only say, 20% or 30%, where you can start to see herd immunity being reached and the number of infected people going down dramatically.
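The SIR behavior described above is easy to reproduce with a forward-Euler integration. Below is a minimal sketch, using R_0 = 3.5 as in the COVID-19 example; the recovery rate gamma, the initial infected fraction, the step size, and the horizon are illustrative choices, not values from the lecture.

```python
# Minimal forward-Euler integration of the SIR equations above:
#   dS/dt = -beta*S*I,  dI/dt = beta*S*I - gamma*I,  dR/dt = gamma*I.
def simulate_sir(S0=1.0, I0=1e-6, gamma=0.1, R0=3.5, dt=0.01, t_max=400.0):
    beta = R0 * gamma / S0          # so that beta*S0/gamma = R0
    S, I, R = S0, I0, 0.0
    S_at_peak, I_peak = S0, I0      # track S at the maximum of I
    t = 0.0
    while t < t_max:
        dS = -beta * S * I * dt
        dI = (beta * S * I - gamma * I) * dt
        dR = gamma * I * dt
        S, I, R = S + dS, I + dI, R + dR
        if I > I_peak:
            I_peak, S_at_peak = I, S
        t += dt
    return S, I, R, S_at_peak, I_peak

S_end, I_end, R_end, S_peak, I_peak = simulate_sir()
print(f"susceptible fraction at the epidemic peak: {S_peak:.3f} "
      f"(herd-immunity prediction 1/R_0 = {1 / 3.5:.3f})")
print(f"final fractions: S = {S_end:.3f}, I = {I_end:.2e}, R = {R_end:.3f}")
```

The peak of I indeed occurs as S crosses gamma/beta = 1/R_0, roughly 0.29, the herd-immunity threshold derived above; in this run a small susceptible fraction also remains at the end rather than S reaching exactly 0.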
Transfer of respiratory pathogens: Modes of transmission
PROFESSOR: So now that we understand the types of droplets that are emitted by different types of respiration, and their evolution in the environment as a function of humidity and other factors, let's talk about the different ways that transmission of a respiratory pathogen can actually occur between two individuals. So the first is contact transmission through so-called fomites. These are residues of those infectious droplets that have been emitted by the breathing of an infected person, which can build up on surfaces such as tables, floors, even on people's clothing and hands. A susceptible person then touches those surfaces, and then touches their eyes or their nose, and in various ways gets the pathogen into their body. That is not likely to be the dominant mode of transmission for COVID-19, as the evidence is building up for other modes of transmission. Nevertheless, it is still recommended to be disinfecting and washing surfaces to protect against this potential mode of transmission. Another mode of transmission is through large ballistic drops. So these are the droplets we talked about at the beginning, some of which are large enough to sediment to the ground. Others will eventually settle, but can be transferred by the momentum of a cough, or a sneeze, or some violent exhalatory event to another person, and that other person can directly breathe in those droplets. So this is especially important when dealing with symptomatic individuals. On the other hand, for COVID-19, it's well established that the majority of transmissions are, in fact, asymptomatic. So people are not coughing or sneezing, and yet are still managing to transmit this highly infectious virus. Which brings us then to the third mode of transmission, which is through aerosol droplets.
These are the droplets which do not settle quickly on the timescale of occupancy of the room, or of ventilation, or of other factors that remove those droplets, such as deactivation. And those droplets are emitted even in normal respiration -- normal breathing. So simply the puffs of breathing and speaking put those aerosol droplets in the air, and then they're carried by air currents in the room, which we will analyze in greater detail later in this course, and essentially fill the room. As a first approximation, those droplets are spread throughout the room, in a well-mixed space of air. And within that context, there are still two modes of transmission we can talk about. If people are not wearing any masks or face coverings, then those puffs of respiration can directly impinge upon a susceptible person, who then can breathe them in. And those aerosols will be at a higher concentration than the background ambient air of the well-mixed room. We refer to that as short-range aerosol transmission, and we will return to it in the last part of the course. But what we're going to focus on first is long-range airborne transmission. So these are the droplets that end up in the air. They become well mixed throughout the space. And anyone in the room, even very far away, can breathe those droplets in and, over time, can inhale an infectious dose and become infected. It's important to recognize the role of face coverings, especially in the context of these respiratory droplet transmission modes. So whether it's a mask or a face shield, those facial coverings can essentially eliminate short-range aerosol transmission. Of course, they also completely eliminate large drop emission, because the droplets don't make it through those ballistic protections, if you will. And even the momentum generated in the air from the breathing is largely eliminated by shields or masks. On the other hand, these small aerosol droplets can pass through masks.
They certainly can pass around face shields, or even the plastic shields that you see in various public spaces these days. And those droplets then are quickly spread around the room, and we're left with the airborne mode of transmission. As we will see later, there is an important distinction between a mask and a shield, however. While both of them block the momentum of the fluid that leads to puffs and respiratory jets and plumes of transfer, the mask additionally provides filtration, which can actually block many of the droplets -- not necessarily all, but a significant fraction. And that will be an important aspect of understanding how to make spaces safe from airborne transmission.
Safety guideline for COVID-19: Case studies
PROFESSOR: So now that we have a fully parameterized safety guideline for indoor spaces to limit the transmission of COVID-19, we can go through some case studies. And I encourage you to use the guideline, in the form of the spreadsheet or the online app which are provided, to check out your own space to see how you might mitigate transmission there. So here are two representative examples of great interest today. The first is a classroom in the United States, which is very relevant for discussions about closing or reopening or partially reopening our schools during the pandemic. And the second example will be that of a nursing home, which is a tragic situation of great interest, because a large fraction -- in fact, almost half -- of all deaths have occurred in nursing homes and eldercare facilities here in the United States, and a very significant number around the world. So first, let's look at the classroom case. If we apply the guideline, first of all, we see the typical shape of what is predicted in a plot of occupancy versus time, which is a curve with roughly a 1 over x type behavior, because it is essentially the product of occupancy and time that is limited by the guideline. So you trade one for the other. So we have here curves representing natural ventilation, which we estimate to be an air change rate of around 0.3 per hour -- so a 3-hour air change time -- and also mechanical ventilation, in this case with 8 air changes per hour, so reasonably good mechanical ventilation, which is the red curve -- the blue curve being the natural. And this is in a typical classroom space of 900 square feet and 12-foot-high ceilings, for the United States. And in such spaces, the typical occupancy might be around 20 or 25 students. And so 20 students is actually indicated here as the normal occupancy for this space. And so what we see is that the normal occupancy is safe for a certain amount of time. But then eventually, it becomes unsafe.
And so in the case of lower ventilation, of course, that transition happens sooner. And if you have better ventilation, you can extend that time. Also shown, in the dotted line here, is the transient solution in the guideline, which accounts for the buildup of infectious aerosols when an infected person first enters the space. And you can see in this case that it buys you just a little bit of extra time in the case of the natural ventilation, but really not much in the case of mechanical ventilation, where the transient and steady state curves essentially overlap. We can also compare this with some typical official guidelines here in the United States and also elsewhere. So first, we have the 6-foot rule. And what we see is that after a fairly short time, the 6-foot rule becomes inadequate in the case of natural ventilation. In the case of mechanical ventilation, still at some point, the 6-foot rule is unsafe. However, before that transition happens, it's actually overkill. So by enforcing the 6-foot rule, we are keeping people at a fairly low density when perhaps that's not necessary for airborne transmission, especially if masks are being worn, because that cuts down the short-range transmission and droplet transmission that we've discussed. And airborne transmission, analyzed by the guideline, is expected to be the dominant mode of transmission. The way this plot is made also allows for rescaling by the use of masks -- that's the horizontal axis. So what is plotted here is the mask-adjusted time, which is p_m squared times tau, where p_m is the mask transmission factor. So a very good mask has p_m near 0 -- for example, maybe 0.05 or even 0.01 for a good surgical mask, maybe 50% or 30% for a decent cloth face covering. And so that comes in squared. And that factor rescales the time.
So for example, we see here with ventilation, we might expect to get 40 hours in this case with the 6-foot rule, or if we're at normal occupancy, maybe 20 hours. But if we have good masks, for which 1 over p_m squared is maybe on the order of 100, let's say, or even 1,000 -- so depending on what that value is -- you see you can turn that 20 hours into thousands of hours. And so it actually becomes quite safe to stay in that space. Now, on the vertical axis, we have the occupancy limit scaled by epsilon, the risk tolerance. So what's shown here, then, is effectively a risk tolerance of 1, which we would not want -- we don't want any transmissions. But if you reduce that to an epsilon of 0.1 or even 0.01, then you again pick up a factor of, let's say, 10 to 100 reduction in the time or the occupancy. But again, with masks, that's offset by a factor which is typically larger than that. So what we're seeing here is that even from a very conservative viewpoint, if we analyze a classroom space -- where, by the way, there is no filtration or other mitigation measure occurring except for ventilation at either of these rates -- we see that if masks are worn consistently, we should be able to get tens or even hundreds of hours of shared use of that space, even at normal occupancy. Now, how should we interpret that time? The time could be a continuous occupancy time. But especially if we're looking at the steady state value, which is more conservative than the transient, then we can simply add up the cumulative time that people spend together in the presence of an infected person. And so if we get to a number like, for example, 40 hours, which we can easily achieve wearing masks with normal occupancy and decent ventilation, then that might correspond to one week in the classroom with an infected person. That's a very good number, because the time to symptoms is around five or six days with COVID-19. Now, of course, some infections may be symptomless.
But the recovery time is also on the order of maybe two weeks. And so if you can stay for that period of time in the presence of an infected person, that infected person essentially will be removed, either by recovering or by showing symptoms and hopefully removing themselves and going home. Also, this kind of analysis can inform testing. If the guideline says that you're safe for a week or even two weeks of occupancy, given the number of hours per day and the other conditions, and the testing frequency is, let's say, once per week -- and we also know that the most infectious period only begins, say, five or six days after becoming infected -- then with weekly testing and mask use, this becomes an extremely safe situation from the perspective of airborne transmission, if people are wearing masks carefully during that time. So that's one way to think about these guidelines. The guideline can also tell you the relative risk improvement if you, say, introduce filtration, or change the humidity of the room, or change the ventilation rate from the values shown here. And you can play with the spreadsheet or the online app to see how those changes play out. Our next example is a nursing home. Here, instead of scaling by epsilon, the risk tolerance, I've just chosen a risk tolerance of 0.01, which is a 1% risk of transmission if an infected person were to enter this space. You may want to take an even smaller value, to be even more conservative, given the grave danger that persons in nursing homes are facing, partly due to their age and partly due to the common situation of pre-existing conditions, given that in many cases, there may be only months or years left of life expectancy for those people. And so they are, in fact, at the highest risk from COVID-19. So you want to pick a very small epsilon. And now, if you look at these curves, you see the horizontal axis has changed to minutes.
So if no one is wearing a mask, then on the order of minutes -- something like the 15-minute rule -- you can see there becomes a risk of infection, although ventilation can help and can turn the 15 minutes, say, into 30 minutes. If we look at, say, the 6-foot rule, which is shown here, it would place only two people in the typical nursing home room that we have just shown -- so that would be two beds -- although in some cases, as in the example shown here, which is based on New York City nursing homes, the size of the space that we've assumed would actually have a normal maximum occupancy of three. So there could even be three beds in this space. And the point is that even if these beds are 6 feet apart, if the people are not wearing masks -- or if a person enters the room who is infected and is not wearing a mask -- the risk of transmission is actually very high. And if people spend long periods of time without masks, as many nursing home patients must do because they have trouble breathing already -- so unless they're on a respirator, if they're in relatively good shape, they will often be breathing for long periods without masks -- that can be a serious problem and lead to very high risk. So the guideline essentially helps to sound the alarm for those sorts of situations, and helps those facilities and the caregivers design the space and the time spent in it by different people in a way that can help to protect the residents -- and, when combined with testing and other mitigation measures, can hopefully reduce the spread of the disease where it matters the most and has led to the most deaths.
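The mask rescaling used in the classroom example above amounts to a one-line calculation. A minimal sketch: the 20-hour baseline is the figure quoted for normal occupancy with decent ventilation, and the p_m values are the rough ranges mentioned in the discussion.

```python
# Sketch of the mask rescaling described above: the guideline bounds the
# mask-adjusted time p_m^2 * tau, so a baseline safe time (no masks)
# stretches by a factor 1 / p_m^2 when everyone wears masks with
# transmission factor p_m.
def safe_time_with_masks(tau0_hours, p_m):
    """Rescale a no-mask safe occupancy time by the mask factor squared."""
    return tau0_hours / p_m**2

BASELINE = 20.0  # hours: the classroom figure at normal occupancy, no masks
for label, p_m in [("no mask", 1.0), ("cloth, p_m = 0.5", 0.5),
                   ("cloth, p_m = 0.3", 0.3), ("surgical, p_m = 0.05", 0.05)]:
    print(f"{label:20s}: ~{safe_time_with_masks(BASELINE, p_m):,.0f} h safe")
```

Even a modest cloth mask multiplies the safe cumulative time by a factor of 4 to 10, and a good surgical mask by a factor of hundreds, which is how 20 hours turns into thousands of hours in the discussion above.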
MIT_RES10S95_Physics_of_COVID19_Transmission_Fall_2020
Transfer_of_respiratory_pathogens_Indoor_airborne_spreading_of_COVID19.txt
PROFESSOR: So besides our physical expectation that a virus such as SARS-CoV-2, coronavirus, could be transmitted through respiratory droplets, especially aerosol droplets through the airborne route of transmission, there is substantial evidence-- both epidemiological evidence and some physical evidence-- to support this hypothesis. So first, let's go through some of the epidemiological evidence. This is a very small fraction of what is available to date. So one of the first incidents that gave a sense there might be airborne transmission was a religious event that took place at the Tiantong Temple in Ningbo China, where there were hundreds of people in attendance. But in particular, there were two buses that brought worshippers in one-hour bus rides to this location. And on one of the buses was what was known to be the first infected person with COVID-19, entering this region after having had contact with others from Wuhan China, the initial source of the outbreak. And on the bus where the infected person was, 23 out of 68 passengers became infected during that ride. And despite the fact that there was also contact in the larger temple building with many other people, very few there were infected. And on the second bus, where they kept the same seating, there were no infections. So that gave a sense that on public transportation, there could be substantial superspreading going on. So shortly afterwards, there were a number of incidents, including a case in a restaurant in Guangzhou, China, where an infected person was sitting at a table having dinner with a party there. And there was a documented transmission to a far corner of the room with a person who had not been within a short distance such as six feet of the infected person, had not touched anything that could have led to contact transmission. And it was concluded that it could only be explained by airborne transmission. 
Then there was a well-known case involving a cruise ship-- one of several such cases-- the Diamond Princess cruise ship, where a few infections were detected. And on February 3, 2020, the ship was quarantined in the harbor of Yokohama, Japan. And during that quarantine, out of 3,011 passengers and crew on the ship, there were a total, at the end of 12 days, of 354 infected persons, starting from an initial estimate of-- well, several known cases. And by our own analysis, we will be estimating there were perhaps around 20 initial cases, so a dramatic increase in the number of infected persons in 12 days, despite the fact that the passengers were mostly confined to their cabins with very little movement between them. And also, later analysis showed that there was very little statistical correlation between contact with an infected person, even being in the same room, and getting a transmission. In other words, transmission was happening between people in different rooms, even on different floors, presumably through the air handling system. And among many other events that followed, another famous case, one of the initial sources of spreading in South Korea, was the Shincheonji Church, which held services over the period of February 16 to 25. And a significant fraction of thousands of churchgoers were infected, which led to the initial outbreak in Korea, again coming from contact of a large number of people sharing an indoor space, where it's not possible for each of those people to have been touching each other or within 3 or 6 feet. But rather, the only plausible explanation is transmission through the air. Another such example in Korea had to do with a call center, where there were hundreds of people working in a building with many floors. And there was an infected person in one large room of one of the floors.
And a significant fraction of the co-workers were infected in that call center, and yet a relatively small number in other parts of the same floor or on other floors, again pointing to airborne transmission after subsequent analysis. And then one of the first cases in the United States famously occurred in the Skagit Valley Chorale, which was holding a choir practice in Mount Vernon, Washington, USA. We will be analyzing this case in more detail. But it very dramatically showed the evidence for airborne transmission, because in a 2 and 1/2 hour choir practice, one infected person managed to infect 53 out of 61 others, two of whom later died, when it could be documented that there was no direct contact, short-range contact, or touching between all of those people. But rather, they simply shared the same indoor space. Also there was a hint that respiration and the type of respiration were important, in that these people were singing. And that led to a dramatic increase in the rate of transmission compared to other airborne events. There are many other examples one could go through. There have also been, recently, some meta-analyses of large numbers of cases, which further point towards indoor airborne transmission of COVID-19. One recent such study looked at 7,300 cases of initial spreading of COVID-19 out of the epicenter of the outbreak in Hubei Province, China. So they looked at 320 cities, tracing the first known cases in those cities outside of Hubei Province. And they identified all the clusters of two or more transmissions. And of those clusters-- there were 72 of them-- they all occurred indoors. And out of those, 80% were at home in people's apartments. And 34% also included some public transportation. And out of all those clusters of transmission, only one was documented to be occurring outdoors, consistent with a wide range of other evidence. Moreover, there has been a cataloging of superspreading events such as the ones I've listed here.
And that list now numbers well over 1,000. And out of all the superspreading events that have been documented, all of them have occurred indoors and involved large enough numbers of people that airborne transmission is the most plausible explanation. So besides the overwhelming epidemiological evidence for airborne transmission of COVID-19, there is also growing physical evidence. So first of all, other diseases that we've discussed in this course, such as tuberculosis, a bacterial disease, and measles and the original SARS-CoV coronavirus, which are viral diseases, have all been established and believed to be transmitted through respiratory aerosols. SARS-CoV-2 is a very similar coronavirus to SARS-CoV-1. And so it's plausible that its mechanism of transmission would be similar, if not the same. And indeed, there has been recent work demonstrating that infectious aerosol droplets could be isolated from infected patients with COVID-19. And in particular, the most infectious droplets observed were in the aerosol range, with radii less than 2 microns. So the evidence is growing and frankly becoming overwhelming that the airborne route of transmission is important, if not dominant, for COVID-19. So we will continue now by analyzing how to mitigate and model the transmission of a respiratory pathogen indoors.
Beyond_the_wellmixed_room_Shortrange_transmission.txt
PROFESSOR: So the next important part of turbulent plume theory that we need is the distribution of concentration of particles or droplets, in this case, that are injected with the fluid at the source. So as we've just derived, the concentration C in this case, which we could refer to as infection quanta in infectious aerosols relative to that leaving the mouth, which we've called C_q, scales as the square root of the area of the mouth divided by alpha x, where alpha is the turbulent entrainment coefficient, around 0.1 or 0.15. And that leads to a jet which grows in size and grows in fluctuations as you see more and more eddies, and eventually might even bend due to flows in the room or thermal buoyancy effects. And I'd like to talk about the difference between short range transmission due to really placing yourself in this jet and breathing that air directly, which is more concentrated than the background, and then compare that with the transmission in the well-mixed room that we've been talking about all along. So obviously if we're at position 0 here right at the mouth, this is the worst case scenario. So if we're here, let's say we're only 1 inch or 1 centimeter away, and we put our mouth on top of the other person's mouth, that is the worst case scenario of short range transmission. That is, relative to the background room, we're getting a much worse situation there. But if we ask ourselves how much worse it is, we know that fd is the dilution factor, that is the concentration at the source relative to C at infinity in the well-mixed room, so let's just say far away. Actually, instead of C infinity, I should call that C average, because it is the average concentration in the room. And we've seen that the dilution factor can be written as the flow rate of the breath divided by the decay rate of the concentration field at the appropriately defined mean radius, divided by the volume of the room.
So that is telling us how much more concentrated the infection quanta or viruses are here versus the well-mixed room, where they're really spread out. How big is this factor? Consider this factor for the Skagit choir, which we've analyzed, the Skagit Valley Chorale. That was a fairly large room, 4.5 meter ceiling, but it didn't have very good ventilation, so that's sort of similar to smaller rooms with better ventilation. And in that case, this number was 10 to the minus 3. And in general, fd for typical indoor spaces-- for offices, classrooms, and homes-- is on the order of 10 to the minus 2 to 10 to the minus 4. So there's quite a significant difference. If you are right at the point of somebody's mouth and breathing in their air versus being far away, there really is a big difference. To put it in perspective, we can ask ourselves, well, how long would you have to stay in the well-mixed room, far away, breathing the air to have the same exposure and dose of infection quanta as if you put your mouth on top of somebody else and breathed in one lung-full of air. If we calculate it as a timescale, taking the volume of a single breath and then dividing that by Q breath and the dilution factor, that is the time you would have to spend breathing the background air in order to achieve the same dose. And for the Skagit choir, this quantity ends up being around one hour. So if 63 people were in the room and 53 or so were infected, then that most likely happened through the airborne route, as we've discussed, because 53 people were mostly breathing the background air that was, perhaps, well-mixed. On the other hand, you also could infect 53 people by having them take turns, one at a time, putting their lips against the infected person and just breathing in one full lung-full of air. Then you would get a similar number of people infected on the order of an hour or two, which is the length of time of that choir practice.
But we know that didn't happen. So that already tells us that short range transmission really can't explain what happened in the Skagit choir. And as we've discussed, it has to be longer range airborne aerosol transmission. But how much longer range? We can also sample at different positions here. So that's the absolute worst case scenario. So let's consider as important numbers 3 feet and 6 feet. So 3 feet corresponds to 1 meter, which is the social distancing guideline of the World Health Organization today. I would also argue that 3 or maybe 2 feet is kind of close to what you might call natural social distancing. So if you don't impose social distancing, most people prefer to have a little space bubble around them. They don't want to be right up against somebody if they don't have to. And they tend to stand 2 or 3 feet apart. So somewhere in here is what I would call natural social distancing, except in cases where you're in a crowd. So if you're in a nightclub or a bar or some crowded space where you're starting to press against people, then as you come closer, you might be 1 foot or 1/2 foot away, and you might start to get closer to this worst case. But people tend to be about 3 feet apart. So we can ask ourselves what happens there. We can also look a little further. The Centers for Disease Control in the United States has imposed a 6 foot rule, as we have discussed. In fact, it's been interpreted so strictly in the United States that you can find floor stickers exactly 6 feet apart in all sorts of indoor spaces, even when people are wearing masks and when we aren't sure exactly what the flows are like. And we can ask ourselves, what is the level of concentration there. It's also worth noting that we can also look at a negative value. How about minus 3 feet? Because it's important to note that respiratory jets do not only increase your risk relative to the well-mixed ambient; there must be regions where the risk is actually lower than the well-mixed ambient.
Because in the end, the well-mixed solution was obtained by mass balance. So that means we've essentially counted all the infection quanta or virions in the room. And if there is a higher concentration here, there must be a lower concentration somewhere else to account for that. So perhaps if you're standing behind somebody at a reasonable distance, you actually have a lower concentration, although there is still mixing going on, and you will be exposed to the well-mixed room. Which then brings me to the way that we should really think about the role of short range transmission versus long range transmission. And that is to compare the concentration of the respiratory jet to that of the well-mixed ambient that we calculated. That's essentially the definition of when there's a transition from short range to long range behavior. So where will that occur? It will typically be somewhere here. So there's a certain position, which we'll call xC, and this will be defined by saying that C over the initial value is equal to the dilution factor. So this is the point where, if I follow this orange curve, I've just hit the concentration that is predicted for the background well-mixed room, which we've already calculated before. So if you use our formula for C there, you then get a formula for this xC, which is the square root of the mouth area times lambda_c of r bar, the decay rate of the concentration field, times the volume of the room, all divided by alpha times Q_b. So I would argue that this is really the boundary which separates long range airborne transmission by aerosols from short range transmission, which also includes aerosols, by the way. So some of these aerosol droplets, you definitely could be inhaling anywhere in here. But if you go long range, you're only talking about aerosol droplets. So this is really where the dividing line is. And if you plug in numbers for different settings, you'll find this is often larger than 6 feet.
In fact, in some cases, many cases, it's actually larger than the room. It could be tens of meters even, because this dilution factor can mean a factor of 100 or even 10,000 lower concentration in the well-mixed room than at the person's mouth. So you have to go pretty far away to see it actually drop back down. What that means is that when people are breathing in a room, these breaths are crossing all over the place. They're mixing. And you really can't think about a turbulent plume lasting out to infinity, because there is another person standing here, there's ventilation flow, there are thermal flows. All of the mixing mechanisms we talked about will take this jet and start to mix it around the room, so it won't look like a perfect jet all the way. But at least this gives us an estimate, for these closer spaces here, of what our risk might be. So using this concept-- I'll just mention this is typically much bigger than 6 feet-- we could estimate how much worse it is to be 6 feet or 3 feet away with yourself in a worst case scenario, perfectly placed in a respiratory jet. So what we're asking here is, what if we were unlucky enough to be right here and breathing in for a long period of time. So not just for a fleeting second, but you're sitting there and just over and over you're breathing in that person's breath. It's not an unreasonable situation. You could imagine at a meeting when two people are sitting at a table, or maybe they're having dinner. Masks are off, they are facing each other, talking to each other. And in fact that is a high risk situation in terms of this kind of transmission. So let's see how much worse it actually could be. Well, if you plug in the numbers using a typical mouth area, then at 3 feet the concentration drops to about 6%. At 6 feet, since the concentration goes as 1 over distance, it's more like 3%. So it's a very rough calculation.
And I should say, relative to the highest concentration, the one reaching the mouth, the plume has been diluted down to 6% or 3% as you go these distances. Notice there's not a massive difference between 3 and 6. And it does make a very big difference in terms of decisions to reopen spaces and how people interact, whether you're strictly 6 feet apart or perhaps you can be 3. So keep in mind that 3% to 6% dilution. But what we're really interested in here is now, how do these fractions compare with the well-mixed room. And so for that I would like to do a different comparison: I'd like to look at C over fd, because fd, relative to here, is the concentration far away. So I want C over C_q, I should say, over fd. And so this also can be written as xC over x. So it's basically how much farther that crossover point is relative to where you're standing. That's another way to think about it. And this is then six to 600. So this is the excess risk. If you were to stand 3 feet away, that's the excess risk you face from short range transmission if you are 100% of the time breathing in the jet of the infected person pointing right at you all the time. If you take this 6 foot rule, it's a little bit less strong. You have a C over C_q divided by fd of more like three to 300. It, again, depends on the details of the room. But it still can be a significant factor. So we will come back to thinking about how to handle short range transmission in light of its interaction with long range. But let me make one final comment on this discussion, which is, where is this crossover point. So notice it scales with volume. So something you may be wondering is, don't we have airborne transmission and short range transmission outdoors also? Yes. And that is actually covered by the theoretical arguments that we're making here. Although, as I said, these jets really can't last forever. But we can at least ask ourselves, what happens as V goes to infinity.
So I like to call this the outdoor limit. If you apply our safety criterion for a well-mixed room to a very big room-- think of, for example, a gymnasium or a sports stadium, a very, very big room-- then if there's only one infected person, the dilution of those droplets in a massive space is like being outside. You might as well be outside. And what you find is that the crossover is very far away, which means the long range analysis is not even helpful. It's not even valid. Really, it's the short range analysis you'd be thinking about. Now unfortunately, the short range analysis depends on many assumptions about how people are positioned, how they're interacting, and what the local flow fields are. But you have to deal with short range transmission. And social distancing can play a role in how you address that threat. But once you get to smaller rooms and longer periods of time, then you actually find that this xC is on the order of the room size. And then you really are going to be having significant effects of long range transmission. And I will argue that for most typical rooms that we inhabit indoors, the long range airborne risk is the leading-order and best first approximation. And the short range risk must be considered, but it's a correction to that.
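As a rough numerical sketch of these scalings, the snippet below evaluates the jet dilution C/C_q = sqrt(A)/(alpha*x), the well-mixed dilution factor fd = Q_b/(lambda_c*V), the crossover distance xC, and the equivalent well-mixed exposure time. All parameter values here are assumptions chosen to be loosely representative of a large, poorly ventilated hall like the one discussed; they are not the lecture's exact inputs.

```python
import math

# Illustrative parameters (all assumed, not the lecture's exact values).
A_mouth  = 2e-4   # mouth area [m^2]
alpha    = 0.10   # turbulent entrainment coefficient (lecture: 0.1 to 0.15)
Q_b      = 1.0    # breathing flow rate [m^3/h]
lam_c    = 1.2    # concentration relaxation rate lambda_c(r_bar) [1/h]
V        = 800.0  # room volume [m^3]
V_breath = 1e-3   # volume of one full breath [m^3] (~1 liter)
FT       = 0.3048 # feet-to-meters conversion

def jet_dilution(x):
    """C/C_q at a distance x [m] along the respiratory jet."""
    return math.sqrt(A_mouth) / (alpha * x)

# Dilution factor of the well-mixed room relative to the mouth.
f_d = Q_b / (lam_c * V)

# Crossover distance where the jet concentration equals the well-mixed value.
x_c = math.sqrt(A_mouth) / (alpha * f_d)

# Time breathing well-mixed air that matches one lung-full taken at the mouth.
t_eq = V_breath / (Q_b * f_d)  # [h]

print(f"f_d = {f_d:.1e}, x_c = {x_c:.0f} m, t_eq = {t_eq:.2f} h")
print(f"jet dilution at 3 ft: {jet_dilution(3*FT):.2f}, at 6 ft: {jet_dilution(6*FT):.2f}")
```

With these assumed numbers, fd comes out near 10 to the minus 3, the equivalent exposure time is on the order of an hour, and the crossover lands far beyond 6 feet, consistent with the discussion above. And because C goes as 1 over x, moving from 3 feet to 6 feet always just halves the jet concentration, whatever the mouth area and entrainment coefficient.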
Beyond_the_wellmixed_room_Chapter_5_overview.txt
PROFESSOR: So in the last part of the course, we'll discuss the fluid mechanics of indoor spaces, in particular, going beyond the approximation of a well-mixed room when considering airborne disease transmission. So for example, we will consider the occupants of a room and their respiratory activity. So when they breathe, especially if they're not wearing a mask, or if they cough, they're going to have turbulent plumes emitting from their mouth, which will then be convected throughout the room and will influence other people. Also, the heat of the body relative to the room can lead to warmer air, which is rising, so that there is essentially a sort of chimney, a thermal plume rising from an individual. Because of those heterogeneities in the flow and in the concentration of aerosol particles, the room may not always be well-mixed. We'll talk about different types of ventilation, how, for example, cold air will sink from the place where it's inserted and warm air will rise. And that may or may not lead to sufficient mixing to remain well-mixed. And we'll talk about how to modify our criterion for airborne disease transmission in a well-mixed room to account for some of these changes, in particular the heightened risk of short-range transmission from being directly in the path of respiratory puffs and plumes emanating from an infected person.
Transfer_of_respiratory_pathogens_Escape_time_of_virions.txt
PROFESSOR: So let's think of a specific virus, the coronavirus, including the case of interest today, which is the SARS-CoV-2 novel coronavirus. So this virus comes in the form of a virion, which is the capsid containing RNA, which is then going to infect a cell. And the size of that virion is around 120 nanometer diameter, and it's nearly a perfect sphere. Now, the size here of 120 nanometers is about 1,000 times smaller than [some] bacteria. So this actually has a very big implication in terms of how a virion can be spread from one organism to another. So the bacteria, if you recall, are the size of several microns. That's the scale of large droplets, which sediment -- or larger than that, they would begin to sediment out of the air. And we can think about transmission through coughing. Also, besides the fact that the virus is much smaller, it cannot swim. So bacteria have various means of locomotion -- cilia, flagella, et cetera, whereas the virion essentially is a little hard sphere. So how can a virion actually transmit itself? Well, if we draw a droplet, which is released by respiration, then the virus is actually extremely small, at the scale of typical droplets that come from respiration, which, as we've discussed, have a typical, most probable size around half a micron, and then have mostly droplets that are larger than that. So at that scale, if we think of this as a micron scale droplet, this little virus is extremely small. It's like a little point, essentially. And so a single droplet may contain a couple viruses, a couple virions. And so in order for the virion to escape the droplet and make connection with a cell in a host, let's say it's been breathed in, and in the lungs, it's going to try to meet with some cell and begin to infect it.
Or conversely, if the virion is being shed and released from an infected cell and then needs to go into a droplet to then be exhaled and spread to somebody else, either way, the virion has to get in and out of this droplet. And since it cannot swim, the only way it can do it is by essentially a random walk or a diffusion process, where it's just bouncing around due to thermal fluctuations. And eventually, at some point, it gets out. So the diffusion process -- we can estimate the typical time to escape. I'll write it this way -- it can be shown that if the radius of the droplet is R, the average time for a randomly placed virion anywhere in this droplet to escape is of order R squared over D, where D is the diffusivity. And if you do a precise calculation for a sphere, then there's a factor of 15 here. And so that is basically the average escape time by diffusion. So that gives you a sense of how quickly the virion is able to get out of the cell -- or out of the droplet, excuse me. Now, how big is the diffusivity? Well, for the diffusivity of the virus, if you think of it just as a fluctuating sphere in a viscous medium, then we can use the Stokes-Einstein formula for the diffusivity, which is k_B*T, where k_B is the Boltzmann constant and T is the temperature. So k_B*T is the thermal energy of the fluctuations. And that's divided by 6*pi times the radius of the virus, and then the viscosity of the liquid making up the droplet. So essentially, this denominator here is the Stokes drag coefficient for a sphere fluctuating in a viscous medium. So that's the diffusivity. And if you work it out for the size of 120 nanometers, if we assume the droplets are just water and use the viscosity of water, then this is 3.6e-8 centimeters squared per second. But the droplets are not just water. In fact, they cannot be.
As we've already discussed, water droplets at this size range would very quickly evaporate and disappear, or they would leave the virus behind, essentially a virion no longer contained in such a droplet. So what's more typical is that the droplet in fact contains many macromolecules and is coming from mucus in the pharynx, in the vocal cords, coming from the lungs directly. And mucus has a much higher viscosity. So notice, here, we have the viscosity of the liquid coming in. And if we take into account the fact that the viscosity of mucus relative to the viscosity of water is roughly -- it depends on where the samples are taken and also on the shear rate. So we're talking about low shear rate. These are sort of moving slowly. The viscosity of mucus is dependent on how quickly you're shearing it. But if it's a low shear rate, then this ratio is on the order of 1e3 to 1e5. And because the viscosity has that factor, the diffusivity is then divided by that factor. So what that's telling us is instead of being around 1e-8 centimeters squared per second, we're really looking at more like 1e-11 to 1e-13 centimeters squared per second in mucus. So if we assume these are actually mucus droplets, which are not fully evaporating and remain in aerosol form, then this is the kind of diffusivity. And if we plug into this formula here, we can get a sense of what the average time is for the virion to actually escape. So why don't we make a little table of that result. So let's look at the radius. First, let's consider here 0.5 microns, or 500 nanometers. So that would be a 1-micron diameter droplet. So that would be kind of a typical aerosol droplet coming from breathing. Let's also consider larger droplets. So let's look at 5 microns, which is kind of on the upper end of the aerosol range. And then we could look at 50 microns. And I should say this is R of the drop.
So the R here, just to be careful, is the R of the drop, as opposed to the R of the virus, which is R_v = d/2, so that's 60 nanometers for the 120-nanometer diameter. So these are the different sizes of the drops. And just for comparison, let's look at water versus mucus. And for mucus, why don't we take the viscosity of the mucus to be 1e4 times the viscosity of water? So we just pick something kind of in the middle of this range. OK, well, if we plug in the numbers and compute the average escape time that I've just written here, R^2/(15*D), then in water, this turns out to be about 5 milliseconds for an aerosol droplet. So we know the aerosol droplets are evaporating quickly and also that a virus can diffuse out of them relatively quickly, because the water is not really that viscous. Now, if we move in this direction, we're multiplying R by 10. And notice, the time scale goes like R^2. So there is a pretty strong size dependence. So as we think of a 10 times larger droplet, it's a 100 times longer time. So that would be 500 milliseconds, or 0.5 seconds. And if we go another factor of 10, that's another factor of 100 in time. And if we convert seconds to minutes, it turns out to be around 8 minutes for a fairly large drop. Now, what if we're in mucus? So now, we go in this direction. The timescale goes like 1/D, and D goes like 1 over viscosity. So the timescale is proportional to viscosity. So we're getting this factor of 1e4. That's a pretty big factor. And so, for example, this 5 milliseconds in water can turn into 1 minute in mucus. So from an aerosol droplet, which is sort of at the most probable size from respiration, which is on the order of a little bit below 1 micron, a typical virion would take about a minute to diffuse out of that droplet and infect a nearby cell or tissue. Now, if we look at a little bit bigger droplets, this 1 minute, we multiply by 100, it's 100 minutes, which is on the order of 1.5 hours.
And what if we keep going another factor of 100? That turns into around 7 days. So if the virion is contained in one of the larger droplets that comes from coughing or sneezing, it could take hours to days to escape from that droplet and have any chance of infecting a host cell. So you can immediately see the problem for a virus in terms of transmitting: it can't swim, and it's extremely small. And so it's not a good way to transmit itself to be sitting in a large droplet or a pool of liquid -- imagine a pool of saliva or some phlegm that you've just coughed up -- which is very common for bacterial transmission. For a virus, it's much more difficult for a significant amount of virus to actually get out. So if the virus is going to transmit itself, it makes a lot more sense to be in aerosol droplets. And so it's really here, the aerosol droplets, that are the most infectious, because basically, the virus is able to get out. And based on this calculation, roughly speaking, if the R of the droplet is less than around 5 microns, those are the ones we would expect to be the most infectious. And interestingly for SARS-CoV-2, recent experiments have sampled droplets from sick patients with COVID-19 at different sizes. And it was found that the droplets that had a diameter less than about 4 microns were the most infectious, and clearly you could see replication of the viral RNA in samples of those droplets, whereas larger droplets in this range here, kind of larger than a few microns, were less infectious. And in fact, the virus was less able to replicate. So it's kind of consistent with this physical argument. So basically what we would probably say is that in this range, the virions are mostly trapped. They have a hard time getting out of those droplets and can deactivate over time, because that's also happening. They are believed to have a certain finite lifetime.
And so this is the case that the virions are trapped and deactivate in large drops or the fomites, which are infectious residues on surfaces that are left over from those droplets. So this really shows us that our focus should be on looking at aerosol droplets for viral transmission.
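The escape-time table above can be reproduced with a few lines of code. This is a sketch of the lecture's estimate, t = R^2/(15*D) with the Stokes-Einstein diffusivity; the temperature and the factor of 1e4 between mucus and water viscosity are assumed values, as in the discussion.

```python
import math

k_B  = 1.380649e-23   # Boltzmann constant [J/K]
T    = 293.0          # temperature [K] (assumed, room temperature)
R_v  = 60e-9          # virion radius [m] (120 nm diameter)
mu_w = 1e-3           # viscosity of water [Pa*s]
mu_m = 1e4 * mu_w     # mucus viscosity (assumed mid-range factor of 1e4)

def diffusivity(mu):
    """Stokes-Einstein diffusivity [m^2/s] of the virion in a fluid of viscosity mu."""
    return k_B * T / (6 * math.pi * mu * R_v)

def escape_time(R_drop, mu):
    """Mean time [s] for a virion to diffuse out of a droplet of radius R_drop."""
    return R_drop**2 / (15 * diffusivity(mu))

# Tabulate the three droplet radii from the lecture, in water and in mucus.
for R in (0.5e-6, 5e-6, 50e-6):
    print(f"R = {R*1e6:4.1f} um: water {escape_time(R, mu_w):10.3g} s, "
          f"mucus {escape_time(R, mu_m):10.3g} s")
```

This reproduces the orders of magnitude quoted in the lecture: a diffusivity near 3.6e-8 centimeters squared per second in water, about 5 milliseconds for a 0.5-micron-radius droplet in water, about a minute for the same droplet in mucus, and days for a 50-micron droplet in mucus.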
Safety_guideline_for_COVID19_Role_of_prevalence_of_infection.txt
PROFESSOR: So until now, we based the safety guideline on the indoor reproductive number, which is essentially the effective number of new infections from a single infected person, or per infected person, in the room. And in many cases, that is the right variable to think about. In fact, it's essentially the most conservative definition that allows us to limit the spread of the disease at the level of the population. If every room were doing that, we would control the spread of the epidemic. But we also should think about the role of the prevalence of infection in a given region. In particular, as the number of infected people in the population goes up, we should be increasing restrictions to a certain point. And also, as the pandemic recedes, we should then be decreasing those restrictions. So the prevalence of infection also has to play a role in using the guideline. So to describe this, let's think of a random number of transmissions that's going to occur. So this T here is going to be a random variable, the random number of transmissions in the room, with all the usual features, so in time tau and with all the other assumptions about this indoor space that we've been talking about. But the important thing is that this is random because-- what we're going to focus on here is that we don't know how many people are in the room. So there are I infected people, and I here is the random number of infected people. There are S susceptible people, which also is a random number. And then there is a transmission rate, which describes the random number of transmissions per pair. So if you take an infected person and a susceptible person in this time, there's a certain probability of transmission, which is described by this random variable T_MN.
And so what we're going to assume here-- I'll just mention some technical assumptions-- is, first of all, that this T_mn is a Poisson process with a mean rate calculated by the previous model that we've been dealing with, so with a mean of beta bar times tau. So up to a certain point in time, there's an average transmission rate, beta average. And that's a Poisson process. So what that actually means is that the occurrence of transmission between a pair of individuals can happen randomly in the time sequence. At any infinitesimal time step, it has no memory of the past. And it's an independent random event with fairly low probability in a given small time interval, but which achieves this certain mean rate here. So in probability and statistics, we refer to that as a Poisson process. We also assume that the T_mn are independent and identically distributed Poisson processes. So in other words, if I take two different pairs of individuals, and I consider the transmission, they're not correlated. That is an assumption. Because, of course, if the infected person is sitting in one place, you might expect the people nearby, say, within six feet, might be more likely to be infected. We are leaving that out because we are considering airborne transmission in a well-mixed room, where there should not be any such correlations as a first approximation. Furthermore, we assume not only that each transmission is independent but also that the number of infected people I and all these transmissions are independent, are uncorrelated. So basically, the arrival of infected people is uncorrelated with how they're transmitting. So for example, if we have a pack of infected people that arrive, we're not somehow changing the transmission rate to reflect that. And we can partly make that assumption because we are interested in the limit of a small number of infected people.
In fact, it's almost always going to be 0 or 1 because the prevalence is not going to be that high in the population generally. And so we can make that assumption. And so if we do that, then what we're really interested in is the expected number of transmissions, so the expected value of this T here. So if you have a random sum of random variables, then the expectation is easy to calculate. If the random number is also independent from the variables you're adding up, so there's no correlation between them, this would just be the expected value of the total number of those variables, which is IS, just the total number of infected-susceptible pairs, times the expected value of this T_mn, which is beta bar tau. And just for completeness, let me also remind you what this beta bar is, just so that when I keep writing it, we're clear on it. It's 1 over tau times the integral from 0 to tau of beta dt. So it's the time-averaged beta. And we have further approximated this by writing that the inverse of beta bar is approximately the steady-state value inverse times 1 plus (lambda_c tau) inverse. So that was a convenient approximation. So in our subsequent calculations, whenever you see this expression, average of beta times tau, you could imagine substituting this expression, where beta bar is given by all the physical parameters that we've been discussing. So this is a very simple model of the random transmission that can occur when you take into account the randomness in the number of infected people. So now let's start to write down a model for the number of infected people. So the simplest thing there is that-- I'll write here the random number of infected persons-- is that this should be a binomial random variable.
And that means that the probability that the infected number is equal to some value n is the number of ways you can choose little n infected people out of N total people in the room, times the probability that any one of them is infected, which we'll call pI, and then qI is the probability that the others are not infected. So the important new variable that we have here is pI, the probability that a person randomly selected from the population and placed in this room is infected. And this is also sometimes called the prevalence of the infection in the population. And then this qI, of course, is just 1 minus pI. So the standard notation for a binomial distribution is to use q as 1 minus p. And so let's make some further assumptions about this random number of infected people. So first of all, by assuming this binomial distribution, we are assuming that at any moment in time, the number of infected people is somehow refreshed continuously to reflect the same kind of distribution that you find in the population. So the variable I is refreshed continuously in time to reflect the population prevalence. So people are coming and going from the room. But there's always a certain number of infected people that reflects the chance of running into an infected person in the population. So that's a reasonable assumption. A more sophisticated model for a given space might take into account the actual probability or rate of arrival and the rate of removal of people and, similar to models from queuing theory, might derive a distribution which is more complicated and depends on those other parameters for the fluctuations in the number of infected people in a room. But this is the simplest way to start: the room essentially reflects the population statistically.
Another assumption, though, which we're going to make, which gives us some more simplicity and also allows us to be more conservative, is that we neglect exposure and essentially allow infection to happen at the same rate more than once, and allow transmission to proceed at the same rate, where we essentially are assuming that S is N minus I. So in other words, the susceptibles never get converted into an exposed group that can no longer be infected again. We'll just say that the rate is still always going to be I times S, where S is just N minus I. So we're letting the number of infected people fluctuate. But everyone else in the room is considered susceptible. That's a conservative approximation. Because in reality, the number of susceptible people would go down as they were converted to the exposed group. Over very long times, eventually there would be new infected people generated. But that's really relevant for situations like the Diamond Princess quarantine, where the same people are confined to the same space for a longer period. Here, we're really thinking of just this indoor space that is reflecting the statistics of prevalence in the population. Let me now do a couple of quick calculations based on this model of some quantities that we're going to need. So the first is that given a binomial distribution, the expected value of this random variable I is just the number in the room times pI, the prevalence. And furthermore, the variance of I, which is the expected value of I squared minus the expected value of I quantity squared, is NpIqI-- a basic result for a binomial random variable. From there, if you then take these two relationships, you can solve for the expected value of I squared and find that that is equal to NpIqI plus the NpI quantity squared. And so that can be written as the expected value of I times-- well, let's see here-- a qI plus the expected value of I.
And now using this relationship, if we look at the number of susceptibles, S, the expected value is just N minus the expected value of I, which is NpI. And finally, to calculate transmissions, what we're really interested in is the expected value of I times S-- so if we substitute, that would be N times the expected value of I minus the expected value of I squared, which we have right here. And if we substitute this expression here, you see we have a factor of the expected value of I that we can factor out, and then we get N minus the expected value of I minus qI. And then, finally, substituting the expected value of I as NpI, and 1 minus pI as qI, you can find that this is pIqI N times N minus 1, which can also be written sigma I squared times N minus 1. So now we basically have an expression for the transmission rate in terms of the number of people in the room. So essentially, it's N minus 1, which, in fact, you may recall, was the transmission rate when there's one infected person and N minus 1 susceptibles. But now times a factor of sigma I squared, which is actually the fluctuation in the infected number.
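The chain of identities in the derivation above -- E[I] = N pI, Var(I) = N pI qI, and E[IS] = pI qI N(N minus 1) = sigma I squared times (N minus 1) -- can be checked by brute force against the binomial distribution. A minimal sketch in Python; the function name and example numbers are my own, not from the lecture.

```python
from math import comb

def expected_transmission_pairs(N, p):
    """E[I * S] with I ~ Binomial(N, p) and S = N - I,
    computed by direct enumeration of the binomial pmf."""
    q = 1.0 - p
    return sum(comb(N, n) * p**n * q**(N - n) * n * (N - n)
               for n in range(N + 1))

# The lecture's closed form: E[IS] = p*q*N*(N-1) = sigma_I^2 * (N-1),
# where sigma_I^2 = N*p*q is the variance of the infected number.
N, p = 25, 0.02                       # 25 occupants, 2% prevalence
direct = expected_transmission_pairs(N, p)
closed_form = p * (1 - p) * N * (N - 1)
```

Multiplying E[IS] by beta bar times tau then gives the expected number of transmissions E[T] from the earlier step of the derivation.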
Beyond_the_wellmixed_room_Airborne_transmission_indoors.txt
PROFESSOR: So now let's summarize where we've come to in our analysis of short-range effects from respiratory jets relative to the long-range airborne transmission which we previously calculated. And let's also think about what the implications are for policy, and for personal decisions about safety during the pandemic. So first of all, when masks are worn, we've already discussed the filtering effect of masks. We've talked about how that brings in a squared factor of the mask penetration factor, which leads to a very substantial reduction in the transmission rate. But there's another important factor when you have a mask. Even a fairly poor mask, a cloth mask, may not be so great at blocking aerosol particles, but it's very good at blocking momentum. So the fluid momentum that comes from just normal breathing is almost completely stopped by a mask. Even a vigorous cough only lets some momentum through, along with a few droplets from more violent activities such as that. So what's more typical in a case where people are wearing masks is that there is some exhaled air slowly building up around you without very much additional momentum. And as we've discussed, that air is warmer and tends to rise. And so typically there is a thermal plume of the exhaled air rising above you, a bit like a chimney, and that's where the droplets initially reside. We're not likely to directly infect anybody by that method, either because people aren't close enough or because they aren't above the other person. And then, of course, there are other flows in the room that we've discussed, which are often turbulent. And those will transport the droplets everywhere and will lead to the well-mixed room as being the most natural and most accurate first approximation for the infection risk posed to another person who is also wearing a mask. So I would argue that no strict social distancing is required in a mask situation.
So whether we're six feet or three feet apart, it's not going to give us substantially more safety if people are wearing masks. And instead we should pay much more attention to the formula that I boxed here, which gives you an estimate of the long-range airborne transmission risk, which is equally there for everyone in the room. And that must be considered for safety and also for contact tracing. Now the situation is very different without masks. As we've seen, without masks we are imparting a lot of momentum to the fluid, the air. Simply by breathing, and certainly by talking or singing or exercising, we're really pushing the air around and launching those particles into the air. And even at three feet or six feet there can still be a substantial risk. Although, as we said, the difference between three and six is not so great. So one might consider switching to three. On the other hand, even six may not be enough protection; you might need to be thinking about longer distancing. And of course, there is always the airborne risk. So at some point, you hit that point x_c where the airborne risk is comparable to the short-range risk. And also there are people, such as the guy I sketched back here, who are not in the respiratory jet and probably have a lower risk than the average. So it's definitely important to still consider the average risk coming from the long-range guideline. But one could add a correction, to be very conservative, for cases such as sketched here, when this poor person is finding himself or herself in the respiratory jet for long periods of time. So let's imagine there's a typical spacing, x bar, which would be sort of the worst-case but still typical spacing one might expect of the people in the room.
If you think there's a certain situation where maybe two of the people might be five feet apart, or four, or three feet apart, and facing each other for significant amounts of time, then p_jet is the probability that they are encountering each other's respiratory jets. Or, put another way, the fraction of the time one person spends in another person's respiratory jet. When that happens, the transmission is just between two people. It's not to the entire room. So that's another very important factor to keep in mind: this factor of n, the occupancy, doesn't really come up in the social distancing model here when we're talking about the indoor reproductive number. So to make a modified guideline, we can take the long-range indoor reproductive number and add a short-range correction. We've already calculated what this looks like. So if I write the guideline as n minus 1 times tau is less than epsilon over the average beta, I can also restore what that means here. Average beta times tau is, basically, the time integral of the transmission rate. And we've already calculated that the ratio of the short-range term to the long-range term is x_c over x bar times p_jet. It's a very simple correction. If we plug back in our estimates for this average beta, then we arrive at this formula here. The first term is the same guideline we already have. And there's a new term now which, notice, does not involve n and doesn't have the mask factor, of course, as well. But it involves the length, so social distancing. And it involves the mouth area and the breathing rate. The breathing rate now is also not coming in squared, because we're assuming that the breath is already providing this concentration. And we're just looking at another distance downstream. And the Qb is for the receiving person's inhaling breath rate, whereas the Qb squared comes from inhaling and exhaling and finding the steady state in the room.
And so we've already estimated how big this term x_c over x bar might be: anywhere from 1 to 100, or maybe even 1,000 if you let p_jet be 1. So if you stand facing another person continuously, this can be a large term and can actually dominate the other. You'll notice there's an n there. So still, there could be a balance. But when you take into account that people are not always directly facing each other so much, then this number might be a lot smaller. And I would argue that in many cases the dominant term is the long-range one. And the short-range one is the correction. But one should be aware of situations where the short-range effect could be dominant. And so that must also be considered. And this modified guideline here would be one way to take that into account.
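The modified guideline described above can be sketched numerically: bound the sum of the long-range term, n minus 1, and the short-range correction, (x_c over x bar) times p_jet, multiplied by beta bar tau, by the risk tolerance epsilon, then solve for the safe time. The function name and all example values below are illustrative assumptions, not numbers from the lecture.

```python
def safe_time_hours(n, beta_bar, eps=0.1, xc_over_xbar=0.0, p_jet=0.0):
    """Safe cumulative exposure time tau from the modified guideline
    (n - 1 + (x_c / x_bar) * p_jet) * beta_bar * tau < eps,
    where beta_bar is the time-averaged airborne transmission rate (1/hour)."""
    return eps / (beta_bar * (n - 1 + xc_over_xbar * p_jet))

# Long-range-only guideline for a 10-person room (illustrative beta_bar):
tau_long = safe_time_hours(n=10, beta_bar=0.01)
# Now add a short-range correction: suppose x_c / x_bar = 20 and a person
# spends 10% of the time in another person's respiratory jet:
tau_short = safe_time_hours(n=10, beta_bar=0.01, xc_over_xbar=20, p_jet=0.1)
# The correction term adds 20 * 0.1 = 2 to the n - 1 = 9 pairs, so the
# safe time shrinks accordingly (tau_short < tau_long).
```

Note that the short-range term carries no factor of n, so in a crowded room the long-range term usually dominates, exactly as argued in the lecture.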
Beyond_the_wellmixed_room_Ventilation.txt
PROFESSOR: Now, let's talk briefly about different types of ventilation and different ways that they can mix or not mix the air in a room. So the first type, which is what we've mainly been talking about, is so-called mixing ventilation. So this is when the colder air is forced from above-- or in other cases, the warmer air is forced from below-- in such a way that there's an unstable thermal gradient, so that the buoyancy forces end up contributing towards mixing. And then you can see that if a person's in the room, the natural thermal plume of warmer air rising around that person, as well as the respiratory plume sketched here in yellow from all the breathing-- both of them are kind of rising and then well mixed in the room before they're finally removed by the outlet of the ventilation. And so that is the situation we've been mainly talking about of the well-mixed room. But it's worth considering that in some other cases, the ventilation is specifically set up to reduce mixing. So especially in rooms that have high ceilings, it can be advantageous to set up a stratified ambient, where the thermal plume from the body, as well as the respiratory plumes, are rising and collecting in the upper area of the room, where they're being gently sucked out as the colder air is being pumped in now from below. So that's actually a stable thermal gradient. And so at least the buoyancy forces are not leading to any large-scale convection, although there certainly are still boundary layer flows and plumes from the people and from their motion, as I have described earlier. One advantage here is that if a mask is worn, then the respiratory droplets coming from breathing, which are sketched here in yellow, are kind of brought closer to the thermal plume and forced to rise more.
So it basically causes the little droplet-- let's say, the infectious aerosols from the breath to be more likely swept to the upper reaches, where they might sit at the higher levels of the room and then be sucked out. So that can actually be a situation where a less well-mixed room can be better. On the other hand, the advantage of a well-mixed room is that the pathogen in the droplets is diluted throughout a larger space. And that can be more advantageous. But in any case, this situation with high ceilings and especially with natural ventilation, where you set up a stratified ambient, is a strategy that's been used for a long time. In fact, Florence Nightingale, in the 1800s, recommended that hospitals should have high ceilings and good ventilation for protection against airborne pathogens.
Safety_guideline_for_COVID19_Safety_guideline_as_a_simple_formula.txt
PROFESSOR: So now, we can summarize the safety guideline in a very simple formula, using the steady-state transmission rate as a more conservative estimate than worrying about the transient buildup of aerosols in the room, and using the simplification of introducing an effective size for particles, R bar, that includes and encodes all the information about the radius-dependent mask penetration factor, sedimentation rate, infectivity, et cetera, in the droplet size distribution. And the safety guideline looks like this. It basically bounds the cumulative exposure time, which is n minus 1-- the number of susceptible people when one infected person enters a room of maximum occupancy n-- times the time tau they stay. And I should mention the time tau does not need to be continuous. So if you think of, say, a classroom or an office, you will be adding up the total time. So if this safe time is 100 hours and you spend 10 hours a day, you might get 10 days, according to this calculation. And so that relationship is plotted as this yellow curve. And if you're below that curve, you're safe. So you have either a smaller time than that or a lower occupancy. And that's how you can decide the safety. Again, this is a safety exposure guideline for indoor airborne transmission. And we will talk about other types of transmission later. What's nice about this formula is it allows you to immediately see the effects of the scaling, some of which we've already talked about before. So if we would like to move it in this direction, obviously, we would like the curve to be up here somewhere. So this is obviously making the room more safe. If I can push it in this direction, I can either get more people or more time or whatever combination I'd like. How can I get there? So Cq is a disease-specific parameter. But we will see that Cq does depend on the type of respiration and activity; singing or heavy exercise or loud speaking are much worse than resting or breathing.
So to get in this direction, we could have a smaller Cq, which could be resting versus speaking or singing. And we'll come back to characterizing that more carefully. So we can at least change the type of activity in the room. Well, for physical parameters, we can increase V, the room volume. So if you take everything the same, same number of people, and you make the room much bigger, obviously the air gets more diluted. And hence, there's less chance of transmission if people are sort of scattered about the room. What else can we do? We can lower Qb, which means breathe slowly or in a more relaxed way. So again, coming back to the activity in the room, if you have heavy physical exercise, your Qb might increase, although it doesn't increase that much. Resting breathing is around 0.5 cubic meters per hour. And even heavy exercise doesn't get much higher than about 3 cubic meters per hour. And so that's a total factor of 6 or so. But it comes in squared. So that could actually be significant. So definitely heavy exercise is not as good as resting. And what else do we have? We have masks. Well, obviously, we can wear masks. And that is actually very helpful, because as we've already discussed, it comes in squared. And that is a very simple fix. If the mask penetration factor is 10% or even possibly less, it comes in squared. So that can have a very big effect of pushing this over, as we will see. So this is basically a lower P mask. So when there's no mask, Pm is equal to 1. Finally, we also have a lambda_c, because, as we had written out earlier, that also includes several effects, such as better ventilation. So increase ventilation-- the flow rate of outside air, so the air change rate, should be increased. And that gives us a larger lambda_c. We can introduce air filtration. And that's sort of buried within lambda_c. That's pF lambda_F.
Although, as we discussed earlier, because you have to have a certain amount of fresh air coming into the room, air filtration doesn't give you as much benefit as you might hope, but it can still give you a factor of 5 or something like that if you have 20% outdoor fresh air coming in. And so those are some of the key scalings that we can understand. And just simply by plugging into a formula like this, we can get a sense of how different mitigation strategies can be compared and also how different types of rooms can be compared in terms of the occupancy time and the number of people that could be allowed as a maximum occupancy. And what remains now is to parameterize this for COVID-19 specifically, which boils down to understanding this parameter Cq.
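The scalings just discussed can be made concrete with a small calculator. This is a sketch using the steady-state transmission rate implied by the lecture, beta = Cq times Pm squared times Qb squared over (lambda_c times V); the function name and all parameter values are illustrative assumptions, not course data.

```python
def safe_cumulative_time(n, V, lambda_c, Qb, Cq, p_mask=1.0, eps=0.1):
    """Safe cumulative exposure time tau (hours) from the guideline
    (n - 1) * tau < eps / beta_ss, where
    beta_ss = Cq * p_mask**2 * Qb**2 / (lambda_c * V)."""
    beta_ss = Cq * p_mask**2 * Qb**2 / (lambda_c * V)
    return eps / ((n - 1) * beta_ss)

# Illustrative classroom: 20 people, 300 m^3 volume, 3 air changes per
# hour, resting breathing at 0.5 m^3/hour, and a guessed Cq of 30:
base = safe_cumulative_time(n=20, V=300, lambda_c=3, Qb=0.5, Cq=30)
# Masks with 10% penetration enter squared, so they buy a factor of 100:
masked = safe_cumulative_time(n=20, V=300, lambda_c=3, Qb=0.5, Cq=30,
                              p_mask=0.1)
# Doubling the room volume doubles the safe time, as the lecture notes:
bigger = safe_cumulative_time(n=20, V=600, lambda_c=3, Qb=0.5, Cq=30)
```

Because Qb and p_mask appear squared while V and lambda_c appear linearly, the formula makes it easy to rank mitigation strategies: masks and calmer breathing help quadratically, ventilation and room size linearly.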
Transfer_of_respiratory_pathogens_Release_of_viral_load_from_a_drop_ASIDE.txt
PROFESSOR: So as a more technical aside, let's analyze more carefully the problem of release of viral load from a drop by the process of diffusion. So here, again, I sketch a droplet, which would typically be an aerosol droplet in the size range of, let's say, microns. And the virion of interest has a size that is much smaller than that, on the order of, let's say, 100 nanometers. And this white path is showing how such a virion would go from its initial position r, let's say, as a radial position, to the boundary. Now, the general problem of finding the expected first passage time from a point inside a domain to a boundary is a classical problem in the theory of stochastic processes and random walks. And it has the following representation. So the mean or expected first passage time from a point to an absorbing surface solves the following problem: the Laplacian of that time, with a minus sign, is 1 over D, where D is the diffusivity. And so this is the Dv that I described before, the diffusivity of the virion. And the boundary condition for this equation is that it's an absorbing boundary, basically. So when the virion gets to the surface, it's gone. And that's when the stochastic process finishes. So it's tau equals 0 at radius capital R, which is the radius of the droplet. So that's on the boundary. So in the case of a spherical drop, we can write this equation in spherical coordinates. So that's minus 1 over r-squared times the derivative with respect to r of r-squared d tau dr, and that's equal to 1 over D. And then again, our boundary condition is that tau of capital R is equal to zero. Another boundary condition we might mention is that d tau dr at r equals 0 is zero. That's a symmetry boundary condition. So when you're right in the middle, basically, there should be no favored direction for the diffusion process. And so, therefore, the derivative of this time with respect to r must be 0 at the center, because it's a symmetry boundary condition.
OK, so we can now go ahead and solve this problem. Let me put the r-squared on the other side and use primes to denote derivatives. So let's write this as r-squared tau prime, prime, equals minus r-squared over D. And if I now integrate both sides, I get r-squared tau prime equals-- and then here we get minus r-cubed over 3D plus a constant. And that constant of integration, according to the symmetry boundary condition, has to be 0, because tau prime is 0 at r equals 0. So we can simplify this and write tau prime is minus r over 3D. And so then I can integrate again. And I find that tau of r is-- well, integrating this, the integral of r is r-squared over 2, so that gives minus r-squared over 6D, plus a constant of integration. And in order to satisfy this boundary condition of vanishing at capital R, you can see that the constant of integration has to be capital R squared over 6D. So tau of r is capital R squared minus r-squared, all over 6D. So basically, the profile of the mean first passage time is essentially a parabola. So as a function of distance r from the center of the droplet, you have this shape for the mean first passage time. And the maximum here, the maximum value, is tau 0. And that is capital R squared over 6D. So that's if you happen to be unlucky and right in the middle. That's the longest you would expect it to take from a particular point. Now, on the other hand, as I've sketched here, in a typical droplet, if the virions are randomly distributed, some of them, like this guy over here, happen to be very close to the surface. So you're not going to have to wait this long for them to escape. So now we can ask the question, what is the average escape time over all the initial positions of the virions, assuming that the virions are uniformly distributed at random in the initial condition. So if we do that, then we're solving for the-- I'll put it over here.
The average escape time, or first passage time, for the virion will be tau bar. Well, what I'll do is I'll integrate over all the positions and then divide by the volume, because of the uniform distribution. So I'll write the integral over the volume as 4 pi, the solid angle, times the integral from 0 to capital R of tau of r times r-squared dr. And I'll divide by the total volume, 4/3 pi capital R-cubed. OK, so if we do this integral here-- so when we plug this in, first of all, we can see that for the constant term, capital R squared over 6D, we just have a ratio of volume over volume, which is one. So we get capital R squared over 6D as the first term, just from integrating that constant. But then it's 1 minus-- and then instead of r-squared, you have r-squared times this r-squared, so that's r to the fourth. So you integrate that, you get r to the fifth over 5. And so then you get 1/5 over this 1/3 here. And that gives you 3/5. And so, when I subtract 1 minus 3/5, I get 2/5. And 2/5 times 1/6 is 1/15. And so this ends up being capital R squared over 15D, as we had previously quoted. So it's worth going through the calculation just to see. This number 15 is an order of magnitude. It's larger than 10. So while we normally estimate diffusion times to be of order length squared divided by D, so capital R squared over D, this calculation shows that the actual average time is much smaller than that, by a factor of 15. And that's precisely because the virions sometimes find themselves initially near the surface. So they get out more easily. So you don't have to wait for diffusion across the entire drop. That sets the overall scale. But the average drop diffusion time is actually less than that.
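The averaging step above can be checked numerically: integrate tau(r) = (R squared minus r squared) over 6D against the uniform distribution on the ball and compare with the closed form R squared over 15D. Pure standard library; the midpoint quadrature rule and parameter values here are my own choices, not from the lecture.

```python
def mean_escape_time(R=1.0, D=1.0, n=100_000):
    """Average of tau(r) = (R^2 - r^2)/(6D) over a uniformly filled
    sphere, via the radial integral
        tau_bar = (3 / R^3) * integral_0^R tau(r) * r^2 dr,
    evaluated with the midpoint rule on n subintervals."""
    dr = R / n
    total = 0.0
    for i in range(n):
        r = (i + 0.5) * dr          # midpoint of the i-th subinterval
        total += (R**2 - r**2) / (6.0 * D) * r**2 * dr
    return 3.0 * total / R**3

# The lecture's closed form is R^2 / (15 D); for R = D = 1 that is
# 1/15, a factor of 15 below the naive diffusive estimate R^2 / D.
```

The numerical average lands on 1/15 for R = D = 1, confirming that virions starting near the surface pull the mean escape time well below the naive R squared over D scale.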
Transfer_of_respiratory_pathogens_Droplet_formation.txt
PROFESSOR: So the transfer of respiratory pathogens is primarily accomplished through droplets that are emitted by an infected person and then either breathed in or ending up on surfaces and touched and incorporated into the body in some other way by a susceptible person. So let's begin by talking about the formation of droplets during respiration. So these droplets can form in different parts of the respiratory tract. So the respiratory tract refers to the whole system of your breathing apparatus in your body. So that includes, of course, your lungs, which involve a network of passages going from the large bronchus down to the bronchioles and ultimately to the alveolar sacs, where the air is exchanged, or oxygen gets into the blood and carbon dioxide is picked up, and then you exhale. In the upper respiratory tract, we have of course the mouth and the nose and the larynx, the voice box where sounds are made. The nasopharynx is sort of the region behind the mouth and the nose where the passages are connected. And in all of those different regions of the respiratory tract, when we breathe in, air is coming through in one direction, and of course when we exhale, it's coming back out. And there is a lot of fluid in the lungs. So the airways are lined typically with surfactant film and mucus, which is a thick substance we're all familiar with. It can vary in composition but generally has some large macromolecules and in particular proteins that are called mucins, which give it its sort of thick consistency. There are also ions such as sodium and chloride, which are dissolved in the liquid. And even liquids such as saliva in your mouth have a similar composition but less of the sort of thick mucin proteins that I mentioned, compared to the deeper parts of the respiratory tract. Of course, when someone gets sick, there can also be more of the sort of mucus and phlegm that's generated to help the body deal with the pathogen.
So all those liquids and fluids are present in different parts of the respiratory tract. And so there are a number of mechanisms, which are still subject to scientific research and debate, by which droplets are created and ultimately emitted when a person is breathing out. So let's begin by thinking about such processes in the upper respiratory tract. So in the upper respiratory tract, we can imagine, first of all, that the passages have a little bit larger spacing. So for example, your mouth might be open by centimeters or millimeters. If you go into your nose, there are of course various hairs and smaller structures which are often covered with mucus and liquids, which, as the air is passing by, could be leading to some breakup of droplets. And then of course also in the voicebox and other areas of the upper respiratory tract. So the main mechanism here for generating droplets would be the breakup of viscoelastic filaments in a fluid flow. So another word for breakup is fragmentation of viscoelastic filaments. And by that, I mean that the mucus especially is a fluid. So it has a viscosity, a resistance to shear flow. But it also can have some elasticity. If you pull on it, it can pull back a little bit because there are these macromolecules present. So in general, we have a somewhat complicated rheology of that liquid or that fluid. And a filament refers to the fact that the fluid can be stretched out, and as the air is then blowing past those filaments, it can start to break up. So this is our basic mechanism. And this is mainly going to be happening while a person is exhaling, at least in terms of emissions. It's also possible that when you inhale, there'll be some of those droplets created. They go into your lungs or get deposited on the surfaces and then manage to somehow come back out again. But certainly during exhaling, you would imagine more-- or you could see actually that more droplets are created.
So if we think of some examples of that, we might have, for example, when I'm speaking or breathing and my mouth is a little bit open, if I imagine drawing, kind of, let's say a person's lips and mouth might look something like this. So I'm kind of exaggerating here, but of course there's saliva present, and there may be little filaments that form. Of course, we can see this. And then as we're inhaling, and especially as we're exhaling, then these filaments will kind of bend and they can break, and some of them will be emitted. And in fact, these have been recently visualized in great detail. And anyway, so that's one mechanism. So these filaments of saliva in this case could be forming around-- I'll just mention this picture might be, for example, the mouth. We could also look at the act of speaking. We will discuss in detail later in this course that the emissions of infectious droplets are very strongly correlated with vocalization. If you're speaking, there are many more emissions than when you're just simply breathing, and when you're speaking at a louder volume or when you're singing, that rate of emission goes up very significantly. So there are clearly emissions related to the vocal folds in the glottis, which is basically the voice box. So what that looks like is if you take kind of a side view, there are these-- as a cross-sectional view, there are these folds where the air is flowing through, let's say, in this direction. And these are kind of waving together. They're vibrating, where the frequency could be, for example, 100 hertz depending on the tone of your speech and the type of vowels you're making or other sounds. And again, what we have here: this might be the glottis. This could be the vocal folds. And this is basically the voice box, to use the more colloquial term. And as the air is flowing through there, this part is vibrating. So there's some kind of maybe motion.
I'll just kind of indicate like this just that this is kind of shaking and vibrating and coming together. And of course, there's also mucus and other liquids that are here lining all these things. And when those folds come close together, they touch each other, and they can pull apart and again form these filaments that can break up and generate droplets that will be emitted of different sizes. Now, one thing to notice is the length scale, so the mouth when it's opening might have a length scale obviously on the order of maybe centimeters but more likely millimeters in the regions where there could actually be emissions of droplets. If we look at the vocal cords, that scale is also going to be millimeters, but when the vocal cords really come together and pull apart, we might be looking at scales that are much smaller than that. So some of these filaments that are breaking up could be significantly smaller, and so vocalization may lead to droplets that are quite a bit smaller. In fact, in the case of the mouth, as I just mentioned, the sort of length scale might be of order of millimeters for the filaments that are breaking up. And the size of the droplets R might be on the order of 10 to 100 microns or even bigger, actually. In fact, it can even go up to-- well, maybe not quite millimeters, but in the case of, let's say when you're coughing or spitting, certainly you are spitting out millimeters, but it could be even-- maybe I'll put even here 1 millimeter as sort of a kind of upper bound on the types of droplets that you could be emitting. In the case of the voicebox, our length scale's a bit smaller. It might be on the order more like of 100 microns for these filaments that are breaking up. And the radius of droplets that you're going to form are going to be smaller, and they might be ranging more in the 1 to 10 micron range or possibly larger, again, depending on the details. 
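As a quick way to keep these orders of magnitude straight, the scales just quoted can be collected in a short sketch. The numbers below are only the rough ranges stated in this discussion, not measurements:

```python
# Order-of-magnitude scales for droplet formation in the upper respiratory
# tract, as quoted in the discussion above. Rough ranges, not measured data.

MICRON = 1e-6  # meters

# site: (filament length scale [m], (min, max) droplet radius [m])
sites = {
    "mouth (saliva filaments)": (1e-3, (10 * MICRON, 1000 * MICRON)),
    "voice box (vocal folds)": (100 * MICRON, (1 * MICRON, 10 * MICRON)),
}

for site, (filament_scale, (r_min, r_max)) in sites.items():
    print(f"{site}: filaments ~{filament_scale * 1e6:.0f} um, "
          f"droplet radius ~{r_min * 1e6:.0f}-{r_max * 1e6:.0f} um")
```

Note the roughly tenfold smaller scales at the vocal folds, which is why vocalization tends to produce the smaller droplets.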
If you're coughing and there's a lot of mucus here, certainly you could get maybe larger than that as well. So breakup of filaments is a primary mechanism of drop formation, especially in the upper respiratory tract. Now what about in the lower respiratory tract? So that's really referring to your lungs. So in the lower respiratory tract, there is significant evidence, and also at least qualitative theories and to some extent quantitative theories, showing that the main mechanism is not so much the breakup of filaments in a flow, but rather the bursting of films of mucus in much smaller domains, where it's not so much that the fluid is whipping by and breaking apart the droplets, but it's simply breaking up due to surface tension. It's this instability, kind of like in a dripping faucet or a stream of liquid: when you start to stretch it out and let surface tension act, it kind of squeezes down and eventually wants to make droplets. So the rupture of a film under surface tension is more likely to be the mechanism. And so this can maybe more generally be thought of as an elastocapillary instability of mucosal films, specifically in the deepest part of the lungs and in the smallest passages, during inhaling, in the bronchioles and also, to some extent, in the alveoli. During inhaling, that's when the breakup is happening, and then of any droplets that are created, some may deposit on the walls of the respiratory tract, but some fraction of them will be swept back out again. So let me explain this in a little bit more detail. I should also mention this mechanism is also referred to as the bronchial film burst hypothesis. And I say it's a hypothesis because despite the fact that there's been a lot of study of the droplets that are produced by different forms of respiration and some theoretical modeling, it's difficult to actually observe this process occurring in the body.
And so it's still-- it's a hypothesis that people are still studying. So what we're thinking of here is if we zoom in to a bronchiole, which is a passage that looks maybe something like this, it's basically a flexible tube. And the smallest ones of these now are getting down to the scale of 100 microns or so. So a typical length scale here, let's say for the radius, might be 100 microns or less. And these of course are lined with mucus as well. And in some places, there's a bridge. So it's kind of like there's almost like bubbles of air with sort of little bridges of mucus. In fact, you may actually have even some places where the passage after exhale has completely collapsed. And so maybe some parts of it are touching. Others are not touching. But there's kind of these little bridges of liquid or films, bronchial films, that are kind of extending across at least part or even all of those channels. Now, imagine we start in this situation, and we start inhaling. And let's just say this is the direction of inhaling. Let's imagine that the alveoli are kind of at the end of this tube, and let's see what happens if we start inhaling. So for inhaling, then the air is flowing in. And so the first thing that happens is that these bubbles are going to start essentially-- this film is essentially going to be pushed. As that liquid is being pushed, we have some flows occurring. There's some recirculation flows in there. Also, there's interaction with the elastic or stretchy walls, which are soft, of the bronchiole, and so it can expand. So if you go to the next step, you may find as you continue inhaling that now this tube has expanded a bit. So it might look more like this. And then this, to some extent, this film would start to get stretched. And then at some point, as this thing is trying to open, and also it's under some flow, it's going to burst.
And this bursting again is not quite the same as this situation because the flows are much slower. So here, these flows are often at so-called high Reynolds number, as we'll talk about later in this class. High Reynolds number refers to the tendency of the flow to become unstable and for inertial effects and momentum of fluid to become important. At the scale of the mouth or the nose or even the vocal cords, there can be significant inertial effects and very complex flows. On the other hand, when we get down to the smallest channels in the lungs, and especially when we reach kind of the dead end, these sort of-- the alveoli, which are basically a bunch of little sacs at the end here, then it's kind of a dead end. There can't be any very fast flow through that system. And so it's actually a low Reynolds number situation. So we're not talking about turbulent flows or sprays of liquid at high Reynolds number. Instead, we're talking about films that are getting stretched out, and then they simply break up under the effect of capillarity, which refers to surface tension. So basically, when you expose a surface and stretch out a liquid film, it just tends to break up into little droplets, basically in order to minimize its energy. So what we'll see here is that maybe one of these films over here has already burst and will lead to some droplets that are being created. So this bursting of the film is what leads to the droplets. And when you're inhaling, those get swept a little further downstream. Some of them may deposit on the walls and go back into the film and coalesce into the film, but others will remain suspended in the air. And now when you exhale, you start pushing back the other way. The tube is more open now, and we may have a situation like this where there are no more sort of spanning films left, but there's some fraction of these droplets.
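To make the high- versus low-Reynolds-number contrast concrete, we can form rough estimates of Re = U L / nu for the two regimes. In this sketch, the length scales are the ones discussed above and nu is the standard kinematic viscosity of air, but the flow speeds U are illustrative assumptions, not values from the lecture:

```python
# Rough Reynolds number estimates contrasting the upper airway with the
# smallest bronchioles. The velocities U below are illustrative assumptions.

NU_AIR = 1.5e-5  # kinematic viscosity of air, m^2/s (standard value)

def reynolds(U, L, nu=NU_AIR):
    """Reynolds number Re = U * L / nu for speed U and length scale L."""
    return U * L / nu

# Upper airway: assume air speed ~1 m/s over an opening of ~1 cm (assumed).
re_upper = reynolds(U=1.0, L=1e-2)

# Smallest bronchioles: assume ~1 cm/s over ~100 microns (assumed).
re_bronchiole = reynolds(U=1e-2, L=1e-4)

print(f"upper airway: Re ~ {re_upper:.0f}")       # inertia matters
print(f"bronchiole:   Re ~ {re_bronchiole:.2f}")  # viscosity and capillarity dominate
```

With these assumed speeds, Re is in the hundreds at the mouth but far below 1 in the bronchioles, consistent with filament breakup in a fast flow up top versus quasi-static capillary film rupture deep in the lungs.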
A few of them may have deposited and coalesced on the surfaces, but they're going to start getting blown out the other way now. And so these are the droplet emissions right here. I'll just say it'll eventually do that. So some fraction of these will make it all the way out. Of course, those droplets can deposit anywhere in the respiratory tract. In fact, some of them, if you're breathing through your nose, may end up getting caught in your nose, actually. And so there's an exchange of fluids between the different parts of the respiratory tract, but some fraction of those droplets will get out. And then ultimately, when you're finished exhaling, now the pressure is released. And this tube kind of relaxes back to its original state, where there's some mucus here and there's some places maybe where it's closed, and there's these possibly spanning films in some places where it's almost touching. So these are some of the basic processes by which droplets are emitted. As you can see by the range of different processes that are possible in the human physiology that we've just described, there's a range of droplet sizes that will depend on the respiratory activity. Are you breathing lightly because you're sleeping? Are you breathing heavily at high speeds because you're exercising? Are you vocalizing and generating droplets in a different way in the larynx? All those activities play a role. And also there are variations between individuals. And finally, if a person is sick and all these fluids I've sketched here as mucus contain pathogens such as viruses or bacteria, then of course the degree of infection, the viral load or the total amount of pathogen, the total amount of bacteria, plays a role as well in sort of how infectious the emissions are from breathing. But these are some of the basic principles. And now we'll move on to ask, what happens to those droplets after they leave the mouth of the infected person?
MIT_RES10S95_Physics_of_COVID19_Transmission_Fall_2020
Airborne_disease_transmission_in_a_wellmixed_room_Airborne_transmission_rate.txt
PROFESSOR: So now that we've done a very simple calculation of C of t, the concentration of virions in the air per volume, which is coming from just a mass balance in a well-mixed room, let's zoom in now and think about transmission between an infected person and a susceptible person who is not yet infected. And the transmission is occurring by breathing the infected droplets in, and then the virus has to get out of those droplets and interact with the host tissues. So if we let beta be the transmission rate, this is basically new infections per time, the rate at which another person may become infected. Then how will we write this? Well, we could write it as-- so we're now forgetting about the infected person. We care about their breathing only because it's producing this concentration C that we just talked about. But now we're going to focus on the susceptible person. The susceptible person is now breathing in at the same flow rate Qb, because the breathing in and breathing out are the same. And so Qb is the volume per time, around, let's say, 0.5 meters cubed per hour, with which they're sampling the air. And the air comes in, and then C of t is the concentration of virions per volume. So this combination now tells me how many virions per time they're actually taking in. There'll be a quantity Ci, which we mentioned earlier, which is the infectivity, and that's the probability that an individual virion actually causes this person to get sick and to become infected themselves. So this is the infectivity of an individual virion. And then on top of that, we also want to put in-- I'll just change the color to highlight it-- the mask factor that we talked about earlier. So that's the transmission or penetration probability for the droplets of interest through the mask.
These quantities, many of them will depend on size, and we'll come to that, the size of the droplet, but just as a rough approximation, this is the starting point. So this is the number of, basically, new infections per time, and there is a useful notion in epidemiology, which is that of the infection quantum. So transmission rates are often written as infection quanta per time, and that is the rate at which a person who is susceptible will get infected. What we have not yet captured is that if you have a finite number of people in the room, when someone gets infected, they can't get infected again. So the number of susceptible people is changing. So we have to model the progression of the disease in the room, which we have not done yet. So that's why the beta is not the number of infected people, because eventually you run out of people to infect. So we have to account for that later. But a useful way to think of it is that beta is sort of the rate at which this person is sending infection quanta over here. Those quanta may not actually lead to an infection, because the person receiving them might already have been infected. But if they're susceptible, that tells you the rate at which that person would become infected. So that's the notion of infection quanta; it's essentially defined by the transmission rate beta. Now, this infectivity is something we'll come back to. We will actually go through the calculation for SARS-CoV-2, but it's been estimated before to be at about 2% for the original SARS virus in 2003. And in fact, I will argue that it's greater than 10% for SARS-CoV-2. And we'll do that by analyzing spreading data with the model. And, of course, that helps to explain why SARS-CoV-2 has led to the COVID-19 pandemic, and SARS-CoV-1 was not able to spread as much. So now we have here our transmission rate. And we can ask ourselves, this is a transmission rate which is time-dependent, but what if now we calculate the steady-state transmission rate?
So the transient would be when the infected person first enters the room. The concentration is changing in time in the air, but eventually, there's kind of a steady-state where there's a balance of the production and then the flow rate through the room of refreshing the air with outdoor air. And in steady-state, we have the transmission rate is going to go to a constant value, which I'll call beta bar, and that is given by the steady-state concentrations. So here I'll just write it-- I'll rewrite this expression here Qb Ci Pm and then the steady-state C, which is the production rate P divided by the outdoor airflow rate Q. And another way we can write that is that remember Q we can write as lambda a times V. So this is Qb, and in fact, let me-- well, here, I'll write it one more time. That's my Ci Pm capital P over lambda A V. Now, recall that the P we had written as, that's the production rate, also depends on Qb, that's the rate at which the infected person is exhaling infected air. So that was Qb times nd, the number of droplets per volume, Vd, the volume of liquid in a droplet, so this nd Vd's the volume fraction of liquid. And then we needed Cv, which was the concentration of virions in the liquid or in the fluid. And there's also a factor of Pm if that person is wearing a mask. So we put all this together, we get an important result here, which is that the steady-state transmission rate can be written, when I plug-in here, as Qb squared times Pm squared. So the mask factor comes in twice because if they are wearing masks, there's two masks. You have a mask at the source. You also mask at the target, and the fluid has to go through both of those filters. So that's one reason, as we'll see, that masks can be, actually, very effective. And then we'll lump all the parameters in something I'll call Cq, which I'll come back to, and then we'll leave lambda a V in the denominator. 
So this is the main transmission rate, where I've defined this important parameter Cq, which has all the information about the specific disease. And what is it? It's everything else that is left. It's nd Vd, so that has to do with respiration, so the distribution of droplet sizes and the size of the droplets is something that's coming from the type of respiration that the infected person is engaging in. Cv is their viral load, so it has to do with the progression of their disease and how many viruses or virions are found. And then we have Ci, which is this infectivity, the probability that any one of those virions will actually infect this susceptible person if it manages to get in there and diffuse out of the droplets. So coming back to this notion of infection quanta, while beta is in infection quanta per time, which are being kind of transmitted from one infected person to one susceptible person, the way I've written it here is I've reexpressed it as infection quanta per volume of air exhaled. So while C is the concentration of virions in the background, there is this Cq, which is essentially the infection quanta that are being released, and the factor of Cv Ci is actually what connects those two. In fact, sometimes we write a little cq = Cv Ci, and this is infection quanta per liquid volume in a drop. So from the mucus or material that's being released, there is a certain concentration of infection quanta, which is the physical concentration of the virions Cv times the probability that, if they were to be exposed to the susceptible person's cells, they would actually infect those cells and cause a transmission of the infection. So that's another important quantity, and this here is really the primary sort of lumped or combined disease and physiological parameter in the model.
So what's nice about separating this way is that Qb is something which has to do with people's activity, it's how fast they're breathing, and that's something we know very easily whether they're at rest or they're exercising. Pm is also known if we know the kinds of masks people are wearing. There's various studies of transmission factors and filtration efficiencies of masks. And so we can put reasonable estimates there. And then here we see the importance of lambda a, which is the air exchange rate. So how quickly is fresh air coming in the room? That's a physical parameter of the room, has nothing to do with the disease. And V, of course, is a geometrical parameter, the room, which is the volume. And so all the disease aspects are kind of lumped into the Cq. So if I want to apply this to actual spreading of COVID-19, I have all these parameters that I know that come from the physics and fluid mechanics of the room. And then, I have a single parameter Cq that I need to obtain from an understanding of disease transmission and looking at spreading events in indoor situations.
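As a numerical check on the algebra in this section, the steady-state rate can be computed two ways: directly as Qb times the steady-state concentration C = P / (lambda_a V) times Ci and Pm, or from the lumped form Qb squared Pm squared Cq / (lambda_a V). A minimal sketch, where every parameter value is an illustrative placeholder (only Qb = 0.5 m^3/h is quoted in the lecture), and the two forms should agree:

```python
# Steady-state airborne transmission rate in a well-mixed room, following
# the formulas in the lecture. Parameter values below are illustrative
# placeholders, not measured disease data.

Qb = 0.5        # breathing flow rate, m^3/h (value quoted in the lecture)
Pm = 0.3        # mask penetration probability, dimensionless (assumed)
nd_Vd = 1e-12   # volume fraction of liquid droplets in exhaled air (assumed)
Cv = 1e15       # viral load: virions per m^3 of respiratory fluid (assumed)
Ci = 0.1        # infectivity of a single virion (order suggested in lecture)
lambda_a = 3.0  # air exchange rate, 1/h (assumed)
V = 100.0       # room volume, m^3 (assumed)

# Production rate of virions by the masked infected person:
P = Qb * nd_Vd * Cv * Pm            # virions per hour
C_ss = P / (lambda_a * V)           # steady-state virions per m^3 of room air

# Direct form: susceptible person breathes the steady-state concentration.
beta_direct = Qb * C_ss * Ci * Pm   # infection quanta per hour

# Lumped form with Cq = nd*Vd*Cv*Ci, infection quanta per exhaled volume.
Cq = nd_Vd * Cv * Ci
beta_lumped = Qb**2 * Pm**2 * Cq / (lambda_a * V)

print(f"beta_direct = {beta_direct:.4f} /h, beta_lumped = {beta_lumped:.4f} /h")
```

Note how the mask factor enters squared: the droplets pass through both the source's mask and the receiver's mask, which is one reason masks are so effective in this model.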
Transfer_of_respiratory_pathogens_Viruses.txt
PROFESSOR: So now let's talk about viruses, which is our main focus in these lectures. So viruses are very different from bacteria, because they are pathogens which infect the cells of your body. And so they themselves are not large cellular organisms, but in fact have a different biology. So they consist of virions, which are a capsid form of the virus containing a strand of RNA, genetic material such that, when the capsid is integrated into a host cell, the RNA can basically activate to infect that cell, and also replicate itself and make some more virions, which can spread out and infect additional cells. So the basic entity that we're worried about here is really a small object, which is often shaped like a sphere, or maybe it's an ellipsoid: the virion. And it's much, much smaller. So the typical size -- the radius, let's say -- is on the order of 100 nanometers. So that is 0.1 micron or less, so much smaller than even the smallest aerosol droplets that we can easily observe. So they're extremely tiny. There's a couple of examples we can think about, which are famous examples. So one class of viruses causes the disease measles. And measles has a virus shape, as you can see here, which is not quite spherical but usually a little bit elongated, like an ellipsoid, and has a typical size of 100 to 300 nanometers in diameter. So measles is still a very active disease, which we have been controlling for a long time with vaccinations. But despite that, there were still, in 2018, 114,000 deaths worldwide from measles. And it's estimated that in the 20 years before that, the vaccine saved around 23 million lives that would have been lost otherwise if this virus were allowed to propagate. And measles -- it is known to be airborne. So this is the classic example of an airborne transmission.
It's been studied by, for example, Riley in the 1970s, who demonstrated measles transmission in schools and in other settings. And from our perspective, in this lecture, it's not surprising. These viruses or virions are so small, they can be contained in the smallest droplets, which are easily present in the air for hours, and so they are not settling out. So what's of much more interest to us today is the family of coronaviruses. So coronaviruses look very similar to this one, but they have these proteins that stick out, which we've all seen, that look a bit like a crown, or a corona. And they still have RNA on the inside. And there's lots of different human coronaviruses. So there are the standard human coronaviruses, which cause the common cold. And there are four typical human coronaviruses that cause common colds that we all experience, and more serious pneumonias, but generally not life-threatening illnesses. But there are variations of the coronavirus -- mutations -- which are constantly coming into contact with humans and can cause much more serious diseases. So in recent memory, we've had the Severe Acute Respiratory Syndrome coronavirus. And there was a reasonably big outbreak around 2003, which started also in China. And it infected around 8,000 people, and about 800 died. So it was a fairly lethal disease, about 10% mortality, but fortunately, it didn't spread too widely. More recently, there was the Middle East Respiratory Syndrome coronavirus, which was in 2012. And this led, over the couple of years after that as well, to around 2,500 cases and around 850 deaths. So again, an outbreak that was potentially very serious but remained controlled and was primarily in Saudi Arabia. And then we come to SARS-CoV-2, which, we all know the story. So this is the novel coronavirus, which appeared in Wuhan, China, in December 2019, and then led to the present global pandemic, which so far has claimed almost a million deaths.
So as of today, which is September 24, it's around 976,000 deaths worldwide. And the confirmed cases are currently at about 3.8 million, which is a reasonable fraction of the world's population, around 7 billion. So what we can understand from all this is that these are very small objects. They cannot swim. And so they are just sort of floating in droplets. And so, of course, they can be in small droplets. And as we'll discuss next, you can imagine that they're actually more infectious in those small droplets, because they can more easily get out and reach host cells when they're transmitted. Moreover, it is much easier for small droplets to get deep into your respiratory system, into the smallest cavities of your lungs, where there's a high surface area for that interaction to take place.
Beyond_the_wellmixed_room_Natural_convection.txt
PROFESSOR: So another important source of convection in an air-filled room is buoyancy due to differences in the density of the air as the temperature varies. Even relatively small variations in temperature can lead to significant flows. There's another dimensionless number which controls the appearance and strength of such flows, which is the Rayleigh number, written Ra. And this is a combination of physical properties of the fluid plus the geometry. So in this case, the relevant geometrical scale is the height, because this is a gravitational instability. So in the Rayleigh number, we have gravity. I'll just define all these-- the gravitational acceleration, which is 9.8 meters per second squared. We have-- well, if we define the change in air density relative to the initial air density that's caused by changes in temperature, if the temperature changes aren't too big, there is a linear response, which is defined by the thermal expansion coefficient beta, so beta delta T. So beta is the thermal expansion coefficient. And this, for air, is something like 3.1 times 10 to the minus 3 inverse Kelvins. OK, so basically we have gravity times the change in density. That's the buoyancy force per volume. And so we can write that as beta delta T. Or I should say, delta T is T maximum minus T minimum. So there's some temperature change. So the example we're going to consider is a cold plate above a hot plate with a fluid in between. And then we have also some other parameters. So we have-- the length comes in cubed now. So it's very sensitive to the size of the system. And then we also have the kinematic viscosity of air, which I'll write again here. This also appeared in the Reynolds number. And for air, this is 1.5 times 10 to the minus 5 meters squared per second, roughly. And then finally, we have alpha T, which is the thermal diffusivity.
So this parameter gives a sense of how quickly heat energy is transmitted by conduction and diffusion through the fluid. And the thermal diffusivity, it turns out, is pretty close to the kinematic viscosity for a gas like air. And the reason is that the ratio of the kinematic viscosity to the thermal diffusivity for air is around 0.7. And this ratio, by the way, is called Pr-- the Prandtl number-- which is also very important in these sorts of flows. For gases, basically, the kinematic viscosity refers to the diffusion of momentum in the air, whereas the thermal diffusivity alpha is the diffusion of heat energy. And in a gas, both those processes occur by collisions of molecules. And since it's the same mechanism, you have roughly the same order of magnitude of those quantities. So basically, all these quantities enter the Rayleigh number. And the way to think about, qualitatively, what the Rayleigh number is telling us is that it is the ratio of the buoyancy force to the viscous stress, which is trying to fight that motion as we talked about before, but also to heat diffusion or thermal diffusion, which is also kind of fighting it, because it spreads out the temperature gradient; this is a motion that is naturally driven whenever an unstable temperature gradient exists. So because this beta is typically positive-- so when you heat the fluid, it expands. That's certainly the case for most gases and even for many liquids. Then what I've sketched here is an unstable density gradient, where if the cold is above the hot, there's a heavy fluid above a light fluid. And at conditions of low Rayleigh number, this is stable. And in particular, for this case of two fixed plates and an infinite layer of fluid, if the Rayleigh number is less than 1708, then we have a stable situation. Or it can be at least meta-stable.
It won't go spontaneously unstable, so at least local-- stable to small perturbations. But then at this critical Rayleigh number of 1708, we start to get some spontaneous flows, because what's happening is that the heavy fluid above, which I've sketched in blue-- the cold fluid-- it wants to sink to the bottom, whereas the red, warmer fluid is lighter and wants to rise to the top. And so it has to find a way to do that. And eventually, it breaks symmetry and just starts forming convection. And that is so-called natural convection. So you have plumes of hot fluid rising and cold fluid sinking, driven at first by fairly regular arrays. But as you increase the Rayleigh number even further, and if you increase it a lot, then you eventually get to a complicated turbulent flow. So if you increase the Rayleigh number on the order of 10 to the 4, you may have some unsteady situations as we saw with the vortex shedding. But here, if you go to a very high Reynolds number-- Rayleigh number, I should say-- greater than about 10 to the 9, then you again get turbulence. So simply these temperature variations are strong enough-- those buoyancy forces-- to completely destabilize the fluid and generate a turbulent mixture where the hot and cold are very quickly mixing. And I've sketched here the hot and cold as still being separate. But in fact, due to diffusion, they will kind of also be re-equilibrating all the time as well, although the temperature gradient is needed to kind of maintain that flow. So let's see how big the Rayleigh number is in different situations of interest for indoor air now, as well. So if we look, let's say, in a room-- so let's have-- this is another room. And let's imagine again, we have our heating and ventilation air conditioning. Let's say, it's an air conditioning unit on the top, which is dumping in some cold air. And it's giving it some velocity. So the velocity is associated with the Reynolds number. 
That inertia will lead to destabilization and vortices and mixing by itself. But let's see what the effect is of the temperature difference, OK? So we're injecting cold air. And we usually put it on top, because we actually want good mixing in the room. That's how these systems are designed. We also inject heat normally from below, for example, from the lower sections of the wall or from the floor. And so if we were-- and in fact, I could just mention, if we were to heat, we would do that. And if we're doing air conditioning or cooling, we would do it from above. And so basically, there is an unstable gradient like this. And let's actually put some numbers in here. So what if we say that the temperature difference between the fluid we're injecting, whether it's heating or cooling, relative to the sort of background air in the room, which is, let's say, closer to the target temperature-- is approaching target temperature-- let's say, it's only 10 degrees C. So that seems like not a very big difference. But then we go back to our height, which is our length scale. And we say it's 2.7 meters, just as sort of a standard ceiling height. If you plug in the properties of air with these numbers, the Rayleigh number is actually 10 to the 10. It's enormous. So that tells you that if you-- and that's partly because of the huge scale here, right? So H comes in cubed. So if you have 2 or 3 meters of height, that's a lot of height. And so if you are maintaining that kind of temperature difference across such a height, you're going to be generating very serious convection in that system. So what's happening is that besides the fact that you're blowing and generating flows by inertia, you also have these thermal flows going on, which can be very, very strong in a system when there's even just a few degrees of temperature difference. You've probably seen dust in the air near a sunny window, which allows you to visualize the flow. 
And even if nobody's moving-- the air is fairly still-- you might see plumes of rising air in one location or sinking air in another. And if you look closely, those plumes may actually have very complex convective instabilities and turbulence even, even when the temperature differences are not so great. And in fact, we can see such things. For example, if we have, let's say, a window-- and let's say that it's cold outside. And it's warm inside. Then just simply that temperature gradient means that there's colder air near the surface that wants to sink. And so the flow is going to look something like a boundary layer flow of fluid that is sort of falling near the surface. And if the Reynolds number gets high enough, this flow can actually become itself unstable as it kind of goes down the surface. So you can see that these rising or falling plumes of natural convection near vertical surfaces that are heated or cooled relative to the environment can also lead to complex flows. And actually, a good example of that is the flow that occurs around a person just simply due to the temperature. So if you look very closely at a person-- I'm not going to draw it very well here. But let's just say, we have a person. And that's supposed to be a head, OK? If you look very closely, the body has a temperature which is usually higher than the ambient by at least 10 degrees if not more. And if you now plug in a little bit smaller size-- let's say, we plug in 30 or 40 centimeters. So if we say that here, maybe, H would be on the order of-- well, since this was 3 meters, we'll go down to 0.3 meters. So we'll drop the size roughly by a factor of 10. But it comes in cubed. So that drops the Rayleigh number by a factor of 1,000.
So if we still keep our delta T at 10 degrees-- and it might actually be much more than that-- the Rayleigh number around this person's head, just simply by virtue of the heat generated by the body, can be of order 10 to the 7. So it may not be quite into the turbulent regime. But it's certainly in the regime where there'll be some unsteady complicated flows due only to natural convection. And we're not even talking about the person moving, which gives you even more flow. And so what it actually looks like if you look closely-- the air around a person is actually rising almost like a chimney, driven by these thermal flows. And those flows even can go turbulent or at least generate some vortical structures. And of course, these kinds of flows are just due to temperature. In all these cases due to the HVAC and also due to the person, there are these convective flows that we've talked about from inertia, which also contribute to mixing. So as we'll talk about shortly, we also know that this person is breathing. Let's say, they're just breathing through the nose, even. Then there's some puffs that are generated. And you've got the thermal stuff going on. Also, the air that you're breathing out is warmer than the ambient. So it tends to want to rise as well. So I hope I've convinced you here that-- I'll write this as, these could be buoyant respiratory jets and puffs. So I guess the first part of this section is just to convince you that the conditions in a room are such that we have good reason to believe that there is significant mixing of the air, either due to inertial effects from movement, from ventilation air flows, or from thermal effects, as we have sketched here. And so at least, that gives us a beginning of a justification for our assumption of a well-mixed room. So in this video from Linden's group at the University of Cambridge, we can see a visualization of the airflow around a person who is speaking or just breathing. 
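These order-of-magnitude estimates are easy to reproduce. A minimal sketch, using standard textbook properties of air near room temperature (the specific property values are my assumptions, not quoted in the lecture):

```python
def rayleigh(delta_T, H, T_ambient=293.0,
             g=9.81, nu=1.5e-5, alpha_th=2.1e-5):
    """Rayleigh number Ra = g*beta*delta_T*H^3 / (nu*alpha_th),
    with thermal expansion beta = 1/T for an ideal gas (air)."""
    beta = 1.0 / T_ambient
    return g * beta * delta_T * H**3 / (nu * alpha_th)

# 10 C difference across a 2.7 m ceiling height: Ra ~ 10^10
print(f"room: Ra ~ {rayleigh(10.0, 2.7):.1e}")
# same 10 C around a ~0.3 m head: H enters cubed, so Ra drops ~1000x
print(f"head: Ra ~ {rayleigh(10.0, 0.3):.1e}")
```

Both values sit far above the convective onset at Ra of about 1708, and the room-scale value exceeds the rough turbulence threshold of about 10 to the 9 quoted in the lecture.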
And the videos are taken by a differential synthetic schlieren imaging method, which allows you to see basically the changes in density in the flow. And what we see are these thermal plumes of warm air rising past the body due to the difference in the body temperature and the ambient air temperature. And we also see, on top of that, repeated puffs coming from the breathing, which interact with those plumes and also themselves have buoyant and turbulent flows therein. In the next video, we see how different the flows are when masks are worn. So we can still see the thermal body plume rising vertically past the person's face. But now the mask is preventing the transfer of momentum to the fluid to push forward these puffs. And instead, we see the leaking of some of the breathed, exhaled air rising, almost entrained in the turbulent thermal plume rising upwards rather than being ejected forward. And so this helps to eliminate short-range transmission due to those puffs and really brings us closer to an airborne model of a well-mixed room.
MIT_RES10S95_Physics_of_COVID19_Transmission_Fall_2020
Safety guideline for COVID-19: Steady transmission rate
PROFESSOR: So now let's focus on the steady-state transmission rate, which is really the most useful in designing a safety guideline. It's also the most conservative, because the transient transmission rate is always smaller than the steady-state one. So, our formula for the steady-state transmission rate is shown here, in terms of the relaxation rate, lambda_c(r), of the aerosol concentration in the air, and also n_q(r), which is the density of infection quanta in the air, per radius. So let's sketch some of the important functions here as a function of radius and try to get a sense of how we can maybe simplify this expression. First of all, we've already defined C_q, which is the integral of n_q(r), dr. This is the critical disease parameter, which is the infection quanta per exhaled air volume. So C_q is a very important quantity for us and we will return to that. That's the quantity that we're going to want to fit to disease data for COVID-19, specifically. And this will be the exhaled infection quanta per air volume, and we'll typically want to measure the sort of peak infectivity of an individual in order to design the conservative criteria. So what is C_q? If I plot this n_q, it has a bunch of factors in it. So it has the roughly constant assumed viral load per liquid volume, it has the infectivity, which we've already argued should be smaller in larger droplets because it's more difficult for the virions to diffuse out of those droplets once you get above, say, 5, 10 microns, if not less. There's the droplet distribution itself, which depends on the type of respiration but often has a peak which is submicron and then sort of a fairly broad tail at the higher end, with smaller amounts of larger droplets, and then V_d is (4/3) pi r^3, which is just the volume of a drop. So this net quantity, n_q, has some kind of peak around 1 micron or less and then a tail.
And then in the integral here, we have the integral of n_q over r is C_q, so that's very important. But there are these other factors, p_m and lambda_c, or 1/lambda_c. Each of those quantities gives us a cutoff which makes the larger droplets less important for this problem of airborne transmission in a well-mixed room. So lambda_c, as you can see, is a bunch of constant factors except for the sedimentation rate. So this lambda_a times (r/r_c)^2, that is the sort of radius-dependent change of the sedimentation rate relative to the ventilation rate, lambda_a. So as you can see, this goes like a constant plus r^2. So as you go to large r, the inverse of that, 1 over a constant plus r^2, goes to 0. So it provides a cutoff, and the scale for that cutoff is what we called r_c. That's the critical size of a droplet, which is just sedimenting at a rate comparable to the ventilation rate because, really, it is ventilation and sedimentation which are compared when you define r_c. In addition to that, we have (p_m)^2, which is the mask penetration, or transmission factor. So while masks are very efficient at filtering large droplets, which don't fit through the fabric or the mesh, they're not as good at filtering smaller droplets. So if you look at the transmission probability when you're down well below a micron, most masks are not doing a great job filtering; they may get 5%, 10% if you're lucky, depending on the quality of the mask. But then it comes down, because you start to have better and better blockage of particles by the masks. All these factors serve to cut off this distribution so that we're not worried about the large drops, and we're interested in aerosols. But it does so, here, in a way which is quantitative. So we're not just arbitrarily saying, as is sometimes said in the field, that 10 microns or 5 microns is the limit of the aerosols, but rather, we actually have a well-defined characteristic size that can emerge here.
And the way we can define that is by taking the full expression for the steady-state transmission rate that has the radius-dependent terms in it and writing it as (Q_b)^2/V times-- and then we'll keep the C_q from the integral of n_q-- C_q times-- and then we'll imagine that the remaining radius-dependent factors, p_m and lambda_c, are sampled at a certain value, r-bar. So what is r-bar? Well, if you know the functions p_m and lambda_c as functions of r, there is a value of r, which we call r-bar, at which, when you actually do the full integral, you would get that value. So that has to be determined. It can be done numerically, but you can kind of see graphically where it ends up. What we're asking here is, what is the typical value of the mask penetration factor and the relaxation time? Well, it's going to be where the most weight is here, keeping in mind, also, that there's more volume at the higher side than at the lower side. So if we look at how much activity there is, we might want to emphasize that. So depending on the details here, somewhere over here is going to be r-bar. What we're saying here is that even though our theory has all of the radius dependence in it-- so if you know exactly the type of masks you have and you know p_m(r) from experimental measurements, maybe a lot about the virion and how infectious it is in different-sized droplets, or you've studied sedimentation-- you have all these functions. There's a well-defined r-bar at which you can just use this simple expression in place of actually doing those integrals. So that's actually a useful simplification. And in addition to that, we can also write this another way, which can be useful, which is to take the mask factor out and write it as a quanta emission rate, lambda_q, where lambda_q is Q_b*C_q. This is the quanta emission rate by an infector.
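To see how such an r-bar can be extracted in practice, here is a minimal numerical sketch. The functional forms and every parameter value (a lognormal droplet distribution, a toy mask-penetration curve, rates in 1/hr, radii in microns) are illustrative assumptions, not the lecture's fitted data:

```python
import math

def n_q(r):
    """Toy infection-quanta density per radius: lognormal near 0.5 um."""
    sigma, r0 = 0.7, 0.5
    return math.exp(-(math.log(r / r0)) ** 2 / (2 * sigma**2)) / r

def p_m(r):
    """Toy mask penetration: leaky for submicron, tight for large drops."""
    return 0.3 / (1.0 + (r / 3.0) ** 2)

def lam_c(r, lam_a=3.0, r_c=2.0):
    """Relaxation rate: ventilation plus (r/r_c)^2 sedimentation term."""
    return lam_a * (1.0 + (r / r_c) ** 2)

# trapezoidal integration over radius, roughly 0.05 to 20 microns
rs = [0.05 + 0.01 * i for i in range(2000)]
def integrate(f):
    return sum(0.005 * (f(a) + f(b)) for a, b in zip(rs, rs[1:]))

weighted = integrate(lambda r: n_q(r) * p_m(r) ** 2 / lam_c(r))
C_q = integrate(n_q)

# r_bar: radius where p_m^2/lam_c, sampled once, reproduces the integral
target = weighted / C_q
r_bar = min(rs, key=lambda r: abs(p_m(r) ** 2 / lam_c(r) - target))
print(f"effective aerosol radius r_bar ~ {r_bar:.2f} um")
```

Because p_m(r)^2/lam_c(r) is monotonically decreasing here, r-bar is unique, and it lands near the peak of n_q weighted toward slightly larger radii, as the lecture's graphical argument suggests.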
So if you don't like this notion of infection quanta per volume, when you multiply by the breathing flow rate, you're actually getting how many quanta per time are being emitted by the infector. And then what's left over is another factor, which I'll call f_d, which is something we'll come back to later, which is what I call the dilution factor. If we take the breath of an infected individual, it ends up being diluted into the room; the ratio of the concentration of infection quanta, or virions, in the breath compared to that which emerges in the well-mixed room, that's the dilution factor. This will become important later when we look more closely at respiratory fluid mechanics and we look at the plumes, or clouds, of droplets that are being emitted by a person when they're breathing: very close to the person's mouth it's a much higher concentration, and eventually it gets sort of swirled around and mixed in the room, and it reaches the steady-state values that we calculate. This f_d gives you that ratio in some sense and gives you a sense of how bad the risk is from short-range transmission versus the well-mixed room, so we'll come back to that. But this is a nice simplification for how we can think about the steady transmission rate in terms of several key variables, which I've boxed here. And so we will now move on to applying this to COVID-19.
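Putting the boxed variables together, a back-of-the-envelope sketch (every number below is an illustrative guess, not a value from the lecture):

```python
# Steady-state transmission rate in the two equivalent forms:
#   beta = (Q_b^2 / V) * C_q * p_m^2 / lam_c  =  lam_q * p_m^2 * f_d
Q_b = 0.5        # breathing flow rate, m^3/hr (light activity, assumed)
C_q = 30.0       # infection quanta per m^3 exhaled air (assumed)
V = 100.0        # room volume, m^3 (assumed)
lam_c = 3.0      # relaxation rate sampled at r_bar, 1/hr (assumed)
p_m = 0.3        # mask penetration sampled at r_bar (assumed)

lam_q = Q_b * C_q            # quanta emission rate, quanta/hr
f_d = Q_b / (V * lam_c)      # dilution factor of exhaled breath
beta = lam_q * p_m**2 * f_d  # transmission rate per infector, 1/hr

print(f"lam_q = {lam_q:.1f}/hr, f_d = {f_d:.2e}, beta = {beta:.2e}/hr")
```

The second form makes the roles explicit: lam_q is a property of the infector, p_m of the masks, and f_d of the room.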
Beyond the well-mixed room: Turbulent jets (aside)
PROFESSOR: So as a technical aside, let me go through and sketch the derivation of the structure of a turbulent jet, in particular the conical shape that we have when the flow is turbulent. So in order to study the mean flow profile we begin with the Navier-Stokes equations, which describe the momentum conservation and mass conservation, or continuity, of an incompressible so-called Newtonian fluid. So this is a complicated set of equations. In particular, we have this nonlinear term here, which is the inertial term. And we've already said that we're at high Reynolds number and turbulence results because the inertia is very strong compared to the viscous term, which is here. So that's the divergence of the viscous stresses, or the viscous forces on the fluid. So these two terms we know are important. They have to balance, and the inertia is particularly strong, and it is what leads to the very complicated flows that we see. So you can solve these equations numerically on a computer and generate simulations that look a lot like experiments on turbulent jets. What I'd like to do here is just to derive by simple scaling arguments what the structure of the solutions could look like. So these two terms, as we just indicated, are the ones that are most likely to balance in the time-averaged flow. So let's consider a time-averaged steady flow which has a velocity component v_z that depends on r and z. So it's basically something like this, which is basically expanding, but has a certain sort of localization of the flow in the middle. And it's smooth because we're averaging over all the complexity of the jets. So the jet looks something like this with all kinds of vortices and eddies that are getting bigger as it goes, as you're entraining more and more air from the outside. So we're going to look at the time-averaged flow. And we're also going to, importantly, assume that we have an eddy viscosity.
So the kinematic viscosity, nu, in the equations as I've written them here, represents the diffusion of momentum. If a parcel of fluid is moving with a certain momentum, it has a chance of passing that momentum to the neighboring fluid and moving it along with it. And that is accomplished through viscous stresses. So the eddy viscosity basically assumes that that diffusion process of momentum happens at the scale of the largest eddy in the flow. And so we've talked about the assumption of eddy diffusivity. But for eddy viscosity, what I'll write is that the eddy viscosity is a typical velocity, which is v_z, times a length scale, which is delta. So what I'm saying here is that if I go out to a certain position z and ask myself, what is the sort of width of the jet at that z, then there are all kinds of eddies, but the largest eddy is kind of at that scale. And so if I write down an eddy viscosity, it's going to be the sort of average velocity there times that scale. So that's going to be the eddy viscosity. And when I do my time averaging, I'm going to replace the microscopic kinematic viscosity of the fluid with the eddy viscosity. So that's an important modification. And so if I do this time averaging with the eddy viscosity and take these two terms and balance them, I'm going to get v_z-bar-- so that's my average v_z. I'm looking at the z component of momentum here of that first Navier-Stokes equation. And I get v_z times the derivative of v_z with respect to z, plus v_r times the derivative of v_z with respect to r. So there is also an r component of velocity, some velocity which is in fact coming in from the sides. And I'm going to balance this against the eddy viscosity, nu_e, times the Laplacian, which is (1/r) d/dr (r dv_z/dr). So that's just the Laplacian in cylindrical coordinates.
And now I'm going to make the assumption that this nu_e scales as v-bar_z times delta. And so now I'm going to do a scaling analysis on this equation. I should say that the two advection terms will be of comparable size because of incompressibility-- the second equation-- so I won't go through the details of that; we'll just do a scaling argument balancing the inertial term against the viscous term. So if I look at v_z over z times v_z-- that's the scaling of the inertial terms-- I can balance that against nu_e, which is v_z times delta, times the Laplacian, where the scale for r is delta. So I have 1/delta for the 1/r, 1/delta for the derivative, times delta, times 1/delta, times v_z. So there's a lot there. But notice the v_z's all cancel. And because of the eddy viscosity, all the deltas on the right-hand side combine to just 1 over delta, while the left-hand side is 1 over z. And so we find that delta scales as z. In other words, we have a conical shape. So the jet width is a constant times z. And what we write is that delta is equal to alpha z, specifically. And we define the turbulent entrainment coefficient alpha that way. And then once we've done that, we've already shown from the momentum flux that v_z scales as the square root of K/rho_a times 1 over delta. So this basically now gives me the scaling of the problem. In fact, there is a similarity solution for the shape of this profile that one could solve for. And it has the form that, for example, v_z is the square root of K over rho_a, times 1 over z-- because delta is proportional to z-- times some function of r over alpha z. And then there's a similar expression for the other velocity component. And the function F looks very much as I've sketched here. It's essentially a Gaussian-type profile, or a bell curve, that localizes the velocity across this distance delta. The second thing that we're interested in is the mean concentration.
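Written out, the scaling balance just described is (this is only a compact restatement of the argument above, using the same symbols):

```latex
\bar{v}_z \frac{\partial \bar{v}_z}{\partial z}
  \sim \nu_e \, \frac{1}{r}\frac{\partial}{\partial r}
       \!\left( r \, \frac{\partial \bar{v}_z}{\partial r} \right),
\qquad \nu_e \sim \bar{v}_z \, \delta
\quad\Longrightarrow\quad
\frac{\bar{v}_z^{\,2}}{z} \sim
\frac{(\bar{v}_z \delta)\,\bar{v}_z}{\delta^2}
  = \frac{\bar{v}_z^{\,2}}{\delta}
\quad\Longrightarrow\quad
\delta \sim z \quad (\delta = \alpha z).
```

The velocity factors cancel on both sides, so the jet width grows linearly with distance, which is the conical shape.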
And that would be the concentration of, let's say, virions contained in infectious aerosol droplets. So there's a mean concentration profile in the jet, assuming that we're injecting a fluid of a constant concentration at the source of the jet. So again, we can do some scaling arguments here. So if we ask ourselves, what is the volumetric flow rate through a slice-- that is what is called Q-- it'll just be the average velocity times the cross-sectional area at a given position. So the area scales like delta squared. And then the velocity scales, as we showed, like 1 over delta. So this ends up scaling as the square root of K over rho_a times delta. So the volumetric flow rate is increasing with z, and that's a sign that we are actually entraining fluid, as I indicated. This is not just the fluid we're injecting, but as it moves forward, it's sucking more fluid in. And all that fluid is kind of becoming part of the turbulent jet as it grows. Now if we ask ourselves, what is the flux of virions-- well, that would be the average concentration times the average flow rate, because flow rate is volume per time and concentration is number per volume. So this is a total number per time. And this we will assume should be a constant, because as you can see from this picture, if we're injecting a bunch of, let's say, droplets here, they will spread out in the turbulent flow, but they don't really have a good mechanism to get out of the turbulent flow. The turbulent flow is sucking fluid into the plume, and so the particles are just kind of well mixed in that plume, and we can assume they have a roughly constant concentration. And so, if that's the case-- and in fact, this constant would be lambda_q if we're thinking of c as the concentration of infection quanta.
Then lambda_q is the rate of emission of infection quanta from the mouth. We've already talked about that quantity. And this is now telling me how the concentration of infection quanta decays with distance. And so what we find if we substitute is that the concentration of infection quanta at a position z scales as-- I have to divide by Q, so I get the inverse of this-- the square root of rho_a over K, times lambda_q over alpha z. So this tells me that if I plot, as a function of distance from the mouth in the direction of the jet, the concentration of infection quanta that are carried by virions in aerosol droplets, then at z equals 0, which is the mouth where I'm exhaling, the concentration is actually C_q. In fact, that is something we've talked about before: that's the key disease parameter-- the concentration of infection quanta in the exhaled breath of an infected person. So we know at the mouth, that's what we start with. And what the turbulent theory is telling us is how that concentration decays with distance, and it's decaying like 1 over z. And so that tells us our relative risk of infection in different positions relative to being mouth to mouth with the infected person.
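A quick sketch of what this 1/z dilution implies numerically. If we write the momentum flux as K = rho_a Q_b^2 / A for a mouth of area A, the prefactors combine to give C(z)/C_q = sqrt(A)/(alpha z); the mouth area and entrainment coefficient below are assumed typical values, not lecture data:

```python
import math

A = 2e-4      # mouth opening area, m^2 (assumed)
alpha = 0.12  # turbulent entrainment coefficient (typical, assumed)

def relative_concentration(z):
    """C(z)/C_q along the jet axis, valid beyond the near-mouth region."""
    return math.sqrt(A) / (alpha * z)

for z in (0.5, 1.0, 2.0):
    print(f"z = {z:.1f} m: C/C_q ~ {relative_concentration(z):.3f}")
```

So by about 1 meter the exhaled concentration has already been diluted roughly tenfold, and each doubling of distance halves it again.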
Safety guideline for COVID-19: Chapter 4 overview
PROFESSOR: So now we're ready to synthesize our knowledge of airborne transmission in a well-mixed room and epidemiological models to arrive at a safety guideline to limit the indoor airborne spread of COVID-19, or, more generally, other respiratory pathogens. Existing guidelines limit one parameter that might be important. For example, social distancing limits the spacing between people, which can be defined by the average area per occupant, for example, to a minimum of, say, 6 feet or 1 meter. Other rules, for example, adopted here in the state of Massachusetts, limit all gatherings to be no larger than 25 persons indoors. There are also recommendations from heating and ventilation societies that may recommend we increase the fresh air change rate to six air changes per hour or higher. In the UK, for example, the fresh air flow rate might be prescribed to be 10 liters per second per person, or other such numbers. Also, the time that an infected person is in the presence of susceptible people might be limited to, say, 15 minutes. That's involved in the definition of a contact here in the United States. So what we will see is that it's really not possible to write down a realistic guideline that bounds any one quantity, because there will always be situations where you either are too conservative or are not careful enough in bounding that quantity, because these quantities are all related. So, for example, 25 persons might be perfectly safe in a very large space for a very short amount of time with very high ventilation. But take those 25 persons and seal them into a small tent, breathing each other's air for 24 hours with very little ventilation, and it's a completely different situation. And the same holds for the distance, or the time, or the flow rates; everything is mixed. And then what about other variables that we don't explicitly control? Like relative humidity, or filtration efficiency, if you're using filters to filter the air. Or if you're wearing masks.
The quality of the masks-- how well does that come in? How about the volume of the space? It's not just the flow rate, but it's also the geometry: even the area, or the length of the space. So somehow all these variables must be related, and the goal of this chapter is to derive that relationship for the case of a well-mixed room, where there is a unique, universal guideline. And to parameterize it specifically for COVID-19, by looking at a variety of superspreading events for which enough data is available that we can make a reasonable and consistent inference of the infectiousness of exhaled breath.
Epidemiological models: Incubation-enhanced spreading
PROFESSOR: So in the majority of cases of indoor spreading that occur over the space of hours or even a few days, the Wells-Riley model of slow incubation, where the number of infectors is held constant at the initial value, is a reasonable approximation. For COVID-19, the incubation period is estimated to be on the order of several days. For example, a number of 5.5 days is often quoted as an estimated mean incubation period. On the other hand, in some cases there are spreading events that involve people interacting over much longer periods of time than that. A famous example, which is a little bit longer than the time of incubation, is the case of the Diamond Princess cruise ship, which was quarantined in Yokohama Port, Japan in early 2020 when it was detected that there were a number of cases onboard. The exact number wasn't known because they hadn't tested the entire population, but there were several cases. And so they decided to quarantine the ship for 12 days. And if you look at the number of infections versus time once they started testing and felt they had a good sense of the numbers, you can see that it rose from a value which we may estimate to be on the order of 20 possible cases initially. And it rises, but in a very non-linear fashion. In fact, it starts to accelerate and go steeper and steeper. Now recall, the Wells-Riley model cannot possibly predict this behavior, because if you have a fixed number of infectors, then at first you have a certain rate of transmission, but it always has to slow down: if there are no new infections, the same infectors are running out of susceptible people to infect. The number of susceptibles is going down, and so you kind of slowly saturate until eventually you've infected everybody-- when it's just one person, or a fixed number of people, who are basically transmitting at a constant rate of infection quanta to everybody else. But that's not what happened. In fact, it's a very steep rise.
It went-- just in the last few days, it jumped by several hundred people. And in fact, if they hadn't stopped the quarantine at 12 days-- there were about 4,000 people on that ship, 3,711 I believe-- so way higher. And you see how this is accelerating. If they'd gone with the quarantine a couple more days, they might have had thousands of people infected. So in fact, this is an interesting warning for quarantines: just cooping up some people for 14 days doesn't make everybody safe. If many of those people are susceptible and are not yet infected, they can become infected. And one of the many interesting things about this incident is that the people were not in direct contact, obviously. Thousands of people were not six feet apart from each other. They were often in different rooms, different floors of the ship. And yet, very large numbers became infected in a short time. So the important thing I'd like to emphasize now, coming back to our SEI model, is that this sort of non-linear increase can only be explained if you have some accelerated spreading due to newly infected people. And it makes sense after about five days. And we also don't know when people got infected. So those initially infected people may have been infected five days earlier. So there may have already been an increase in the number of infected people at time equals zero of the quarantine. And so we should now account also for the exposed people. Those are people that may not be showing symptoms yet, but have been exposed enough that they can then pass it on. And so the rate of an exposed person becoming an infectious person is alpha. And the inverse of alpha is the incubation time, which might be something like 5.5 days for COVID-19. Of course it can vary, but it's roughly on that order. And so it makes sense to look at the Diamond Princess and consider what would be the effect of accounting for incubation.
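To see the accelerating growth that the Wells-Riley model cannot produce, one can integrate the SEI equations numerically. This is a minimal forward-Euler sketch; the transmission rate beta and the initial conditions are illustrative guesses, not a fit to the Diamond Princess data:

```python
# SEI model: dS/dt = -beta*S*I, dE/dt = beta*S*I - alpha*E, dI/dt = alpha*E
n, I0 = 3711, 20        # occupants and assumed initial infectors
beta = 5e-5             # transmission rate per pair, 1/day (assumed)
alpha = 1.0 / 5.5       # incubation rate, 1/day
dt, days = 0.01, 12.0   # time step and quarantine duration, days

S, E, I = float(n - I0), 0.0, float(I0)
history = []            # exposed + infectious over time
for _ in range(int(days / dt)):
    new_exposed = beta * S * I * dt
    newly_infectious = alpha * E * dt
    S -= new_exposed
    E += new_exposed - newly_infectious
    I += newly_infectious
    history.append(E + I)

print(f"exposed + infectious after {days:.0f} days: {history[-1]:.0f}")
```

The growth in the second half of the interval exceeds that in the first half: the curve steepens as newly infectious people add to the transmission, unlike the saturating Wells-Riley behavior.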
So these are non-linear equations that don't have a simple solution in the full model. But the Wells-Riley model is the limit of slow incubation, where basically I stays at its initial value, I0, and new infections simply accumulate as exposed people. Now, we can consider the opposite limit of fast incubation. And this would be alpha t much greater than one. So basically, we want the time t to be much bigger than the incubation time. And as I said, the incubation doesn't necessarily start at time zero. The infected people in this case may have already been infected five days earlier. In fact, the cruise ship had been going for weeks before that. So no one knows exactly when the infection began. So it may even be likely that that was happening. So let's consider the fast incubation limit. What this tells us is that the exposed portion is roughly zero. So if alpha is very quick, then you pretty much immediately go through the exposed compartment, and you end up being infected. And so this is actually a much simpler model, where the number of susceptibles is just n minus I. So there's no exposed compartment anymore. So the Wells-Riley model in some sense is the SE model, where there are only exposed people and susceptibles, but the number infected doesn't change. This is really the SI model, where we don't worry about the exposed compartment, OK? So the equation we want to solve then, if we realize that S is n minus I, is that dS/dt is minus beta of t times S times I. So that's the same equation I wrote down earlier. But now let's substitute and derive an equation for the number of infected. So from here, dI/dt will be beta of t times I times n minus I. Because again, if we go straight through the exposed fraction, then this rate of losing susceptibles is equal to the rate of creating new infected people. Again, because n is fixed. The number of people is fixed.
So dS/dt is minus dI/dt. So this is the equation now that we can solve for this limit of the model. And fortunately, this is a simple equation to solve. It's a first-order separable differential equation. So we can write this as dI over I times n minus I, equal to beta of t dt. On this side, I can use partial fractions: I can factor out a one over n, and write this as one over I plus one over n minus I. So when I combine these two, I get I times n minus I in the denominator, and in the numerator I get n minus I plus I, so just n. But then that divides by the n I factored out, so I do come back to what I started with. And this times dI. And so now I can integrate both sides of this equation, taking into account the initial condition that at t equals zero, the number of infectors is I0. So basically, I can integrate time from zero to t. And on this side, in terms of the infectors, I'm going from I0 to the current number of infectors, I. OK, so now we're ready to integrate this equation. So let's multiply the n to the other side and do the integrals. So basically, the integral of one over I is log I. So we have log I minus log of n minus I. And that minus sign appears because there's a minus I there. On the other side, multiplying through by n, I have n times the integral from zero to t of beta of t dt, plus a constant of integration. The initial condition I need is that I of zero is I0. So at t equals zero, the integral term vanishes, and this expression must be evaluated with I equal to I0. So therefore, there must be on this side of the equation a constant: log of I0 minus log of n minus I0. So now we satisfy the initial condition at t equals zero. Now, I can also write this in terms of the quanta emission rate for the initial infectors, which we defined earlier. So for the Wells-Riley model, we talked about writing q of t as the number of quanta emitted by the initial infector.
So if we just define q of t as I0 times the integral over time of beta of t dt, this is the infection quanta emitted by the I0 initial infectors. So if we take that here, we can express the solution a somewhat different way. If I take an exponential on both sides, the difference of two logs is the log of the ratio. And when I exponentiate, I get rid of the log. So this side turns into I over n minus I. And the other side, we have, from the similar expression, I0 over n minus I0, but times, now, the exponential of n times the integral of beta dt, which, in terms of q, is n over I0 times q of t. So this is the solution in this case. Now, if we look at early times-- that would be less than the incubation time, so basically alpha t much less than one, so not much incubation has occurred, and during that time I is approximately still equal to I0; that would be in the early, early stages here of this dynamics-- then we could write that I of t minus I0 is approximately n minus I0, times I0, times the time integral of beta-- or n minus I0 times q of t-- which is the number of susceptibles times q of t. So this is a result that can come directly from analyzing this expression. We can also see it here: if I is not changing, then we have I times n minus I, and we can just integrate both sides in time to get to this equation here. So it's basically the same as the Wells-Riley model at early times. And that's an interesting observation, which is that there is a universal small-transmission limit in both limits of this model. And what that is: if we take, at least for the SEI model, the number of exposed plus infected relative to the initial number of infectors, OK? So that is telling us how many transmissions there are, either to make someone exposed or to make them infectious, from the initial number of infectors.
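The closed-form solution just derived is easy to check numerically. A minimal sketch (the numbers are illustrative):

```python
import math

def infected(n, I0, B):
    """SI-limit (logistic) solution: I/(n-I) = I0/(n-I0) * exp(n*B),
    where B is the integral of beta dt, so q(t) = I0*B."""
    R = (I0 / (n - I0)) * math.exp(n * B)
    return n * R / (1.0 + R)

n, I0 = 100, 1
# early times: I - I0 ~ (n - I0) * q(t), the universal linear regime
B = 1e-4
print(infected(n, I0, B) - I0, (n - I0) * I0 * B)
# late times: the logistic curve saturates with everyone infected
print(infected(n, I0, 1.0))
```

For small total exposure the two printed early-time values nearly coincide, confirming the universal linear limit; for large exposure the solution saturates at n.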
At early times, we get this same result that we had before, because we got the same thing for the Wells-Riley model for E; here it's for I. And we find that this is what we call Rn, the indoor transmission number, which is the initial number of susceptibles times the number of quanta transferred. And in the case where S0 were equal to n minus one, and I0 were equal to one, this would be n minus one times the integral over t of beta dt. That would be that case that we talked about before for the indoor transmission number. But more generally, it would look like this. So that is at early times, before many transmissions have happened. It really doesn't matter what the details of the model are in terms of the non-linear response. So even if after a longer amount of time more and more people get infected, the initial moments are always universal and are really just governed by this transmission rate beta and the number of susceptible people and number of infectious people initially in the room. So it's kind of independent of all these details here. And so then to kind of summarize that picture, we could plot versus time here what happens in terms of the rate of transmission, where we look at the total number of exposed plus infected people, OK. And then here's the total number of people in the room, n. So in the Wells-Riley model, everyone's exposed, but nobody becomes infectious. And we know that we get this kind of exponential relaxation as we eventually run out of susceptible people. And the timescale for that is beta inverse, OK. And that gives you the transmission time for just a fixed number of infectors to slowly infect everybody else. But in a case like we described here, where we have some non-linear acceleration, that has to start out the same. That's what I'm trying to say here: the initial transmission rate is the same whether or not there is incubation, until you reach the incubation time.
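The universal early-time limit described here can be summarized in one line (a sketch; $S_0 = n - I_0$ is the initial number of susceptibles, and $q(t) = I_0\int_0^t \beta\,dt'$ as defined earlier):

```latex
% Early times (\alpha t \ll 1, I \approx I_0): linearize dI/dt = \beta I (n - I)
\frac{E + I - I_0}{I_0} \;\approx\; (n - I_0)\!\int_0^t \beta(t')\,dt'
\;=\; S_0\,\frac{q(t)}{I_0} \;\equiv\; R_{\mathrm{in}},
\qquad
R_{\mathrm{in}}\big|_{I_0 = 1,\;S_0 = n-1} = (n-1)\!\int_0^t \beta\,dt'.
```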
So there's kind of an alpha inverse here, which is the incubation time. This one here is the transmission time. And at this time scale, you start to see an exponential increase until it saturates basically once all the transmission has occurred because now there's more and more infectious people. And so this is the fast incubation and slow incubation. And this one is the Wells-Riley model, which is widely used. But if you're fitting spreading data where there may actually be some incubation going on, and also potentially removal-- we could add another equation for the removal of people. We need to be careful on how we fit data in order to extract information about the transmission rate, which is what we're interested in when we're trying to interpret the data. So maybe an important conclusion from this is that infection quanta, this notion-- or the infection quantum, I guess you can say, one of them. Which is a quantity that was introduced by Wells, really based on this Wells-Riley model, is basically-- think of this exponential relaxation, and saying when let's say 63% of people become infected, that's what you say is one infection quantum has been transmitted to each of those people. Then really, it's better defined-- I'll just write here it's defined by the transmission rate and not by the number of people that actually get infected. So if you look at some data like the Diamond Princess or other data that we're going to look at later, you are seeing spreading happening and there could be lots of contributions to the number of people that actually get sick. For example, there could be incubation going on. So what's sometimes called the secondary attack rate is E plus I divided by S0. So the secondary attack rate is sort of the fraction of people that are susceptible that got infected. And the Wells-Riley would say, when that reaches 63%, then you've transmitted a quantum to each of those people.
Whereas, as we see in the case of the fast incubation model, that's not how you would interpret that data. But on the other hand, beta is well-defined. It's just each person is transferring quanta at a certain rate and has the potential to infect other people. Now, I don't want to overstate the relevance of this model for a particular case like the Diamond Princess cruise ship. We'll come back to this later. But just simply to illustrate that it has this kind of non-linear feature which is suggestive of incubation occurring and an increase in the number of infected people. And just to point out that this sort of simple modeling leads us to kind of a universal expression for the initial transmission from the initial infectors, for each of them to transfer to sort of one other person, which is this indoor reproductive number. And that's really what's valid at the early times here. And that's where we have kind of a universal behavior. And that's useful in formulating safety guidelines, which we will do next.
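The two limits sketched in this lecture can be compared side by side. A minimal sketch with illustrative parameters (n = 60, I0 = 1, beta = 0.1/h; not fitted to any real event), showing that the saturating Wells-Riley curve and the accelerating fast-incubation curve start out at the same rate:

```python
import math

N, I0, BETA = 60, 1.0, 0.1  # people, initial infectors, 1/h; hypothetical values

def wells_riley_exposed(t):
    # Fixed infectors (I0 = 1 here): exposed saturate at S0 = n - I0
    # with timescale 1/beta.
    s0 = N - I0
    return s0 * (1.0 - math.exp(-BETA * t))

def fast_incubation_infected(t):
    # Logistic solution from the SI limit: I/(n-I) = I0/(n-I0) * exp(n*beta*t).
    r = I0 / (N - I0) * math.exp(N * BETA * t)
    return N * r / (1.0 + r)
```

The Wells-Riley curve decelerates as susceptibles run out, while the incubation-driven curve accelerates exponentially before saturating; only their initial slopes coincide, which is the universal early-time limit.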
MIT RES.10-S95 Physics of COVID-19 Transmission, Fall 2020
Transfer of Respiratory Pathogens: Drop Size-Dependent Infectivity (Aside)
PROFESSOR: So let's build on this concept of diffusion of virions in droplets to understand how we would expect a size dependent infectivity of virions in different sized droplets. So an important concept in epidemiology that we will come to later is the infectivity, which is the probability that if a virion is transferred that it actually causes an infection in the host. That can be further broken down into a product of two probabilities. The first is that if the virus has escaped from the droplet, it actually causes an infection. And that's perhaps something which is roughly constant. It has to do with the physiology of the host. But then there is the escape of the virion from the droplet. And as we've already discussed, that's a strongly size dependent quantity. And from very large droplets, it's very difficult in a mucus droplet, especially, for the virion to diffuse out in a reasonable amount of time. And in fact, virions are typically found to have a period of deactivation where after a certain amount of time, they are no longer viable and able to basically cause further infection. And so if we assume there's a certain time t, or tau v, for the virus deactivation, then we can ask ourselves if the virus has had a chance to escape or not as a function of size. So basically, to solve this problem, we think of the droplet here. And we actually want to solve a diffusion problem where C here is the concentration of viruses in the domain, D is the diffusivity of the viruses, and this is the diffusion equation in the sphere. And our conditions are that C of r and 0-- the initial condition-- is 0, and then C at R and t-- on the boundary-- is going to be one. So basically what we're imagining here is that we're trying to figure out-- C will be the concentration of viruses that has left the system, actually. So what we have is, if we look as a function of the radius-- the radius of this thing is R, capital R.
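The boundary-value problem being set up here can be stated explicitly (a reconstruction from the spoken description; $C(r,t)$ is the removal-probability variable, $D$ the virion diffusivity, $R$ the droplet radius):

```latex
\frac{\partial C}{\partial t} = \frac{D}{r^2}\frac{\partial}{\partial r}\!\left(r^2 \frac{\partial C}{\partial r}\right),
\qquad 0 \le r \le R,
\\[4pt]
C(r, 0) = 0 \ \text{(initial condition)}, \qquad
C(R, t) = 1 \ \text{(boundary condition)}, \qquad
\partial_r C(0, t) = 0 \ \text{(regularity at the center)}.
```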
So in that distance, we have this constant-- what I'm calling concentration here is just going to jump up to one. And then it's going to diffuse inward like this. OK, and then eventually the final state is that it's entirely basically one everywhere. And that's when basically the probability of removal has hit every part of the drop and all of the virus has been removed. So C is the time dependent fraction of the virions in the droplet which have been removed at time t. So this spherical diffusion equation can be solved analytically in various ways. But there's not a simple closed form solution to this problem. And what we're really interested in here is just a rough approximation of what the solution might look like. So let's pull out an approximation for this. So I'll sketch the droplet again here. Now at early times, when there hasn't been a chance for the viruses, the virions, to diffuse very far, then only those which are close to the boundary actually have a chance of leaving. That's this initial boundary layer that I sketched here, which is working its way in. So why don't we sketch the central region and give that a distance delta, which is the boundary layer thickness. So basically this outer annulus is really where virions had a chance to leave. And that's where C is jumping to one. And if this were just a plane with semi-infinite diffusion towards the center-- so in other words, this delta is much less than R, capital R, the radius-- then it's almost like diffusion from a planar source. And then we actually know that this distance is well approximated by the square root of 2Dt. So that just comes from solving the diffusion equation in one dimension, which leads to that scaling of the diffusion layer thickness. So that's the thickness of this blue region; as it goes that way, it's delta. And it's approximated by the square root of 2Dt. And now let's ask ourselves then, what is this concentration here?
Well, what I'm really interested in actually is this escape probability PE. And that's going to be the integral of C dV over the volume. So this is the integral over all the r's that are less than capital R-- so basically inside the drop-- of this concentration field. So that concentration field starts at 0 and eventually goes to 1. And that is basically giving me this total escape probability. So to calculate this integral of the concentration field, I basically have a domain at the outside here where this concentration variable is near one, and a central region where it's C approximately zero. And here C is equal to one on the boundary-- this variable I've defined here. So therefore, I can write that this PE is, roughly speaking-- if we think of just what is the volume of that spherical annulus relative to the total volume-- that would be R cubed minus R minus delta quantity cubed, divided by R cubed. So each of the volumes has a 4/3 pi, which I've canceled off. So this is basically the volume of the total sphere minus the volume of the inner sphere. So that's just the volume of the shell. Then I normalize it properly here. So this is one minus the quantity one minus delta over R, cubed. And I have this expression here. So now I have at least an approximation for what this might look like. We can also further say that this approximation here was valid for delta much less than R. And when that's the case, then I also can say that this quantity is small. So at early times that's small. And I can expand. All right, this is 1 minus-- and then 1 minus something cubed, where that something is small, is 1 minus 3 times that something. That's basically a Taylor expansion. So that when I work this out, the ones cancel, and I get three delta over R. So what we find is that this PE, which we're trying to calculate, has two limits that are easy to calculate. One of them is this 3 delta over R. And if that's our delta, then we get 3 square root of 2Dt divided by R.
And specifically, the PE is defined up to a certain time tau v. So I'll now replace t with tau v because that is my timescale for virus deactivation. And so this would be in the case where this quantity-- basically, this ratio here-- is much less than one. And then in the opposite limit, where this diffusion has completely spanned the particle and is getting much bigger than R, then this obviously has to tend to 1. OK, now, I can write down a function that makes this transition right about when this thing is of order one in a variety of ways. One way we could do that would be to write that PE is approximately given by 1 minus the exponential of minus this quantity-- so minus 3 square root of 2 D tau v divided by R. And you can see there we have a-- you could either write this in terms of a time, where the critical time is-- so just to get more insight into it, we could write PE is approximately 1 minus e to the minus tau v over some timescale-- I'll call it tau d for diffusion-- where we see here that tau d is R squared over 18D. To bring the 3 inside the square root, it becomes 9, and then times 2 is 18-- so 18D. Now, you may recall from our last calculation, the average first passage time in the sphere calculated exactly was R squared over 15D. So this very simple calculation is clearly giving us roughly the right order of magnitude for that time. But we're actually not interested so much in writing this in terms of time. We'd actually like to write it in terms of radius. So I can also write PE is 1 minus e to the minus Rd over R-- I'll call it Rd for diffusion-- where Rd is basically all this stuff, 3 root 2 D tau v. OK, so this is maybe another useful way to write that. And what does this function look like as a function of R, this one right here? So maybe if I sketch that out, I'll look at this a little bit more carefully. Let's plot this. So as a function of R, here is this Rd, this critical size.
When we are smaller than that critical size, then basically we have that PE, the escape probability, essentially is very close to 1, OK, because that's basically just what we were arguing-- it's this limit right here. But then it's a function that, when it gets much larger than Rd, decays as we suggested here as sort of 1 over R. So it's actually a fairly slow decay in the long run. So basically, there's this limit here. And I just wanted to get to this picture. Just to point out that even though there are obviously physiological characteristics having to do with the way a virion would actually get into a host cell and whether the host would get infected, a lot of those properties should be independent of the delivery of the virion in a droplet. It's really more that once the virion gets out, there's some process. But what this calculation shows is that we would expect a fairly strong dependence of the infectivity on the size of the drop. Well, in particular, if we calculate this Rd, we'll have some idea that droplets that are smaller than that are highly infectious, because every virion in those droplets can get out and infect the host cell. Whereas if the droplet is much bigger than that, then you have this problem of this dead region in the middle, and those virions are not going to be able to get out in a reasonable amount of time, which is set by this tau v. So for example, a tau v for SARS-CoV-2, the coronavirus, is estimated to be anywhere from one hour-- there was one study in aerosol droplets finding that kind of decay-- but another study found that after 16 hours, it was still viable. So there's not quite a consensus forming yet. But it may be a time on the order of hours, or even days, over which the virion needs to get out of the droplet in order to be able to cause infection.
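The interpolation formula above is easy to play with numerically. A minimal sketch (the formula itself is from the lecture; the sample Rd value is arbitrary):

```python
import math

def escape_probability(R, Rd):
    # PE ~ 1 - exp(-Rd / R), interpolating between PE -> 1 for R << Rd
    # and PE -> Rd / R for R >> Rd.
    return 1.0 - math.exp(-Rd / R)
```

Evaluating it shows both limits: droplets much smaller than the critical radius release essentially all their virions, while for large drops the escape probability falls off slowly, like Rd over R.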
And this calculation shows you that as a result, you would expect a size dependent infectivity. And roughly speaking, if we plug in the numbers, these are the aerosol droplets, and these are the large drops. We've already done that. That was our previous calculation based on this time here, which is what I'm calling tau d here, and which is also pretty close to what we called tau bar, R squared over 15D-- the average escape time. So basically, we've already shown that that average escape time starts to become days or even months when we get to large drops. But for the aerosol droplets in mucus anyway, this timescale is of order minutes to hours, which is reasonable. And you would expect those to be very infectious droplets.
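To get a feel for the critical radius Rd = 3·sqrt(2·D·tau_v), here is a back-of-envelope Stokes-Einstein estimate. The mucus viscosity, virion radius, and one-hour deactivation time are all assumptions for illustration (the lecture notes that measured tau_v ranges from about an hour to more than 16 hours):

```python
import math

K_B = 1.380649e-23   # Boltzmann constant, J/K
T = 310.0            # body temperature, K
ETA = 0.1            # Pa*s; assumed mucus viscosity, ~100x water (rough assumption)
A_VIRION = 50e-9     # m; virion radius (~100 nm diameter)
TAU_V = 3600.0       # s; assumed 1-hour deactivation time

# Stokes-Einstein diffusivity of the virion in the droplet liquid.
D = K_B * T / (6.0 * math.pi * ETA * A_VIRION)

# Critical droplet radius below which most virions escape before deactivating.
R_D = 3.0 * math.sqrt(2.0 * D * TAU_V)
```

With these numbers Rd comes out at tens of microns, consistent with the picture that aerosol droplets are highly infectious while large drops trap most of their virions.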
Safety Guideline for COVID-19: Analysis of Superspreading Events
PROFESSOR: So let's go through, now, how we can parameterize our guideline specifically for COVID-19 by looking at specific superspreading events. So the event that was analyzed in great detail first, and which is going to be of most use for us initially for parameterizing the guideline, is the Skagit Valley Chorale superspreading incident. So this was, again, a 2 and 1/2 hour choir practice involving 61 people, one of whom was known to be infected. The practice lasted 2 and 1/2 hours. And in the end, there were 53 infected people, two of whom later died. So this was a large room with a height of 4 and 1/2 meters and an area of 180 meters squared. It was poorly ventilated. The heat was on for a certain amount of time and then taken off. And the average air change rate was estimated at 0.65 per hour. And of course, the people were singing for much of the time, leading to much higher rates of droplet emission. And in fact, I think we can assume for this event, given the huge number of infections that occurred in such a short time, that the index patient-- the index case-- was near the peak viral load, peak of infection. And also, that allows us to make a conservative estimate by using that for calibrating our guideline. So if we look at the figure here, we can see the droplet distributions taken from experimental measurements of Morawska et al in 2009. And those droplet distributions have been fed into the model that we just described and evolved in time in such a way that corresponds to the conditions of the Skagit Valley Choir itself. And so what we can see is that the droplet distribution corresponding to singing-- or the closest approximation of singing, which are measurements of voiced aahs from the original experiment-- that distribution is much bigger than all the others. It has a very broad tail to somewhat larger sizes. But it has a peak just below 1 micron.
Similarly, the other types of activities measured in the original study, which correspond to, for example, whispered ahh or speaking or counting of numbers, for example, or various forms of breathing through the nose and the mouth or only through the nose-- all those distributions have much lower sort of magnitude or number of drops. And this is also drop volume. We have accounted for the size of the drops as well. So it's the total droplet volume per total volume, or volume fraction. And the peak of all the distributions is in a similar place, just below 1 micron, so again, corresponding to the aerosol range. And it's important that we take those droplet distributions and evolve them in the Skagit Valley Choir space. So then, we can figure out which of those droplets survived and would be in the air and would then correspond to the airborne transmission, and can be compared with the actual spreading events that occurred using the Wells-Riley model. Since it is a short amount of time, we're going to assume there was no delay caused by incubation, but rather, people were getting infected but not passing it on to anybody else. And so we will use the Wells-Riley model. So when we do that fitting, we come out with a value of Cq, the number of infection quanta per volume in the exhaled breath of the infected person, around 900 quanta per meter cubed. A published study of Miller et al came to a similar conclusion of 870 quanta per meter cubed. And so we can take that to be a reasonable value for singing. Now, if we go to the next figure, we can include all of these estimated total quanta concentrations corresponding to different hypothetical forms of respiration in the Skagit Valley Choir and use that for rescaling. So we can say, the choir was actually involving singing. And that gave us a number around 900. And then if we rescale, the other amounts of respiratory droplets corresponding to different activities would have correspondingly scaled values of Cq.
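As a sanity check on the order of magnitude, one can do a stripped-down Wells-Riley inversion for this event. This sketch ignores the transient build-up of aerosol, lumps all removal into ventilation alone, and assumes a 1 m³/h breathing rate, so it recovers only the rough scale of Cq, not the full fitted value of ~900 quanta per meter cubed from the size-resolved model:

```python
import math

# Event numbers from the lecture: 61 attendees, 1 index case, 53 total infected
# (so 52 secondary cases among 60 susceptibles), 2.5 h practice, 180 m^2 floor
# area, 4.5 m ceiling, 0.65/h air change rate.
S0, SECONDARY = 60, 52
T_EVENT = 2.5                 # h
VOLUME = 180.0 * 4.5          # m^3
LAM = 0.65                    # 1/h; ventilation-only removal (an underestimate)
Q_B = 1.0                     # m^3/h breathing rate -- an assumption

attack_rate = SECONDARY / S0
dose = -math.log(1.0 - attack_rate)        # quanta inhaled per susceptible (Wells-Riley)

# Steady-state well-mixed dose = q * Q_B * t / (LAM * V)  =>  emission rate q:
q_emission = dose * LAM * VOLUME / (Q_B * T_EVENT)   # quanta per hour
c_q = q_emission / Q_B                               # quanta per m^3 of exhaled air
```

This crude version lands at a few hundred quanta per meter cubed; accounting for the transient (the room starts clean) and for settling and deactivation raises the inferred Cq, toward the ~900 quoted above.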
And as a further calibration, we can compare it with another recent study of Asadi and Ristenpart, where again, different types of respiratory activities were measured for their aerosol size distributions, including speaking at different levels of volume and also breathing in different ways. And if we line up two values that correspond to sort of intermediate speech as a calibration, then we find that the quanta values that we infer-- the Cq values-- for both of those independent studies of respiratory droplets really correlate nicely across different types of activities, from breathing to speaking to singing, allowing us a consistent definition of Cq, again, for a situation corresponding to most likely peak infectivity. So we are talking about sort of the worst case scenario in order to derive a conservative guideline. It should also be noted that the median age of the choir members was 69. And by using this spreading incident, we are again being conservative, because it is well established that elderly persons have an elevated risk of complications and even death from COVID-19, and perhaps also some evidence showing increased risk of transmission. So therefore, when we apply the guideline to a general population, including younger and healthy people, we will find that we are making a conservative estimate, which is our goal. So at this point, we have a fully parameterized guideline. And we have consistent values of Cq across a range of respiratory activities involving two different studies of respiratory aerosols, all coming from the Skagit Choir incident. So now, let's look at some other spreading incidents to see if we can get consistent values of Cq in cases where people were not singing and where the size of the room was different, and sort of see if we really have a transferable inference here. And if we do find consistent numbers, it provides further evidence to support the hypothesis of airborne transmission in all of these cases.
So the next example we'll look at is the incident of the Tiantong Temple religious ceremony and, in particular, the buses that went back and forth from that ceremony. One bus, in particular, was similar to this Dongfeng tour bus luxury liner, which underwent a 100-minute trip to Ningbo and then back in the same seating. The bus had seating for 68, or had 68 persons in it. The total time was 1.7 hours. And the one infected person managed to infect around 21 others, when we account for some that may have been infected at the event, given the low rate of infection to people outside of the bus. Using those numbers and taking into account the size of the bus and the fact that there was no forced ventilation-- this was a winter ride. And there was only natural ventilation-- and if we use a value that's been measured for other types of public transit buses where no forced ventilation is occurring, then we conclude that the Cq for this event is around 72 quanta per meter cubed, which is a very consistent estimate with what we obtained before for a situation where people are perhaps speaking in an intermediate tone on a noisy bus over that period of time. It's also important to note that a recent analysis of the incident involving interviews of all the people involved established that there was no correlation between the position of a person with respect to the infected person in the seating chart of the bus relative to whether they got infected or not. So in other words, it was not short-range transmission through puffs or respiratory jets. But instead somehow, there was a circulation throughout the bus of infected air as the most plausible explanation. Our third example is the Diamond Princess. So this was the quarantined cruise ship in Yokohama Port, Japan. There were 3,711 passengers and crew onboard. And the quarantine lasted for 12 days, or around 288 hours, at which point people began to leave. And we won't use any data from that point.
The quarantine is a good chance for us to study airborne transmission, because people were largely confined to their room. So of course, some of the crew were going back and forth, bringing food and checking on the passengers. But the vast majority of people were essentially cooped up in their room with their fellow travelers or family members in small groups, typically with the windows closed because this was in the winter, and with ventilation which was doing a significant amount of recirculation between the rooms. And in fact, transmission occurred across different rooms, where people did not have direct contact with a known infected person and yet still managed to get infected. In those 12 days, the number of infections grew very rapidly and, in fact, had sort of an exponential increase. So in the end, there were 354 infected persons when they began releasing passengers after 12 days. And the fact that the shape of the infections versus time is an increasing exponential-like curve suggests that this cannot be modeled by the Wells-Riley equation, where instead, the number of infected people has to saturate as you run out of susceptibles. So this acceleration of the number of infected people with time is best attributed to incubation. And it is known that the incubation time for COVID-19 is around 5.5 days. Some people may have been infected, and likely were infected, before the start of the quarantine. So there definitely were infected people generating newly infectious people during the time of the quarantine. So as a simple analysis of this incident, we can use-- or let us use-- our model for fast incubation. We have an analytical solution for the trend in the number of cases versus time. And as you can see in the figure, this model has a pretty good fit to the growth in the number of cases. 
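A crude illustration of why the exponential shape implicates incubation: treating the case counts as pure exponential growth over the 12-day quarantine gives an effective doubling time. The starting count of 10 infectious people is a hypothetical seed for illustration, not a number from the lecture:

```python
import math

I_SEED = 10.0      # hypothetical infectious count at the start of quarantine (assumption)
I_END = 354.0      # infected persons after 12 days (from the lecture)
HOURS = 288.0      # 12 days

# Assuming I(t) = I_SEED * exp(g * t), solve for the growth rate g.
growth_rate = math.log(I_END / I_SEED) / HOURS     # 1/h
doubling_days = math.log(2.0) / growth_rate / 24.0
```

A doubling time of a couple of days is faster than a fixed set of infectors could sustain under the saturating Wells-Riley dynamics, which is why a model with incubation generating new infectors fits the shape of the data better.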
And if we fit that model and infer what is the value of Cq, then we come out with a number around 30 quanta per meter cubed, again, very consistent with all the other inferences and basically consistent with light activity, light normal speech, and sort of resting breathing that was going on in the ship. Now, there definitely could be some debate over the way that we've just analyzed the ship, in the sense that we had analyzed it from the perspective of a well-mixed ship. So we're assuming that the infectious aerosols were spread throughout the ship uniformly. So that is obviously a gross estimate, very crude. On the other hand, we get a very reasonable result. And there is evidence that transmission was occurring through the vents, through the hallways, and through the air to large numbers of people. And so the fact that we get a reasonably consistent number of 30 quanta per meter cubed compared to all of our other estimates does support the idea of airborne transmission occurring in a somewhat uniform and well-mixed fashion across the ship. Yet another inference we could make would be to look at the initial spreading of the epidemic in Wuhan, China, where it first originated. So there have been a number of studies of the initial spreading. And the reproductive number of the spreading of the disease, R0, has been estimated to be around 3.5. In fact, there's a range of estimates from that time period, given the sort of somewhat limited data. But that's the agreed upon average number. Now, there is an interesting thought exercise we can do looking at that number if we make the assumption that the majority of transmissions occurred indoors, in people's family homes or apartments. 
So if we take the time period for the spreading of the infection in our analysis to be 5.5 days, which is the average incubation time, and we use the average size of a Chinese household in that region of 3.03 people, and we assume an average size of a Chinese apartment for that size family of 90 meters cubed, and we also assume measured typical natural ventilation rates for this time of winter of around 0.3 per hour, or a 3-hour air change time roughly, then interestingly enough, from that analysis, we find Cq again is 30 quanta per meter cubed, the same as the number that we got for the Diamond Princess. So again, this is a very crude estimate. This analysis is even more crude than the analysis of the ship. We're looking at the entire population of a city. And we're assuming that the spreading is happening in people's homes when they spend time together for long periods of time, sharing indoor air which is typically not very well ventilated, and not wearing masks, importantly. So at that time, people were absolutely wearing masks outside the house. And in fact, for much of that time, people were confined to their apartments even under threat of force from the authorities. So people were definitely spending a lot of time in their homes with their families. And it's interesting to observe that despite that quarantine that the spreading still occurred fairly rapidly. And it occurred in a way which is consistent with indoor transmission in people's homes. So if we take all of this analysis and come back to our figure of Cq values-- again, that's the number of infection quanta per meter cubed of exhaled breath for an infected individual-- then we can put our inferences for the Ningbo bus, the Diamond Princess, and the Wuhan outbreak on the same plot as the values we inferred by rescaling the value of 900 for the Skagit Valley Chorale.
And what we find, again, is a very consistent set of estimates over a range of respiratory activities, which tells us that the Cq is around on the order of 10 or so, or tens, for light activity and resting breathing. It's in the range of 10 to 100, or maybe several hundred, for speech at different levels of volume, which roughly-- the number of droplets is known to increase roughly linearly with the decibel level of speech. And then singing has a more-- obviously a much, much greater release of particles and aerosols. And that's at a much higher level, in the many hundreds. So I think those are fairly consistent numbers, which again, looking at most of these cases, are conservative and could be applied in the guideline to a wide range of examples involving other types of populations, which may be healthier, younger, less likely to transmit, compared to all of these super spreading incidents.
Airborne Disease Transmission in a Well-Mixed Room: Sedimentation and Deactivation
PROFESSOR: So now let's add to our theory ways in which droplets can be removed, or infectious virions can be removed, within the room in addition to ventilation and filtration effects that we've already described. So the primary way that can occur is simply by settling of the droplets. So we've already talked about the Stokes settling speed, which scales as the radius of the droplets squared. So basically, larger droplets fall fairly quickly. And in fact, we've already discussed how they can fall to the ground in a fairly short time. It could be on the order of minutes or less for large droplets, such as those which are spewing out of your throat when you cough or when you sneeze-- out of your nose. But also, the smaller aerosol droplets we've already calculated can stay in the air for a very long time. So they settle much more slowly, but they do still settle. So we can try to see how this enters in. And the second topic is also deactivation. We've mentioned that the virus doesn't live forever. So the virions in these droplets do need to find a target and get out of those droplets and into some healthy tissue to infect it within a certain period of time. So there's a notion of a viral deactivation rate which can also be a parameter in our models. So if we now add those two effects to our existing model-- I'll just keep rewriting our mass balance equation. And this is the mass balance for the concentration of virions in the air. I've also used the terminology in chemical engineering. We're going to call this kind of approximation the CSTR, or the continuously stirred-tank reactor approximation. And now with all the effects we're including, it's starting to look more like actual modeling of chemical reactors and chemical plants by this method. 
So the mass balance tells me that the volume of the room times dc/dt-- where again, c is the virion concentration per volume in the air-- is the production rate, P, minus-- and then we have a flux, which is the flow rate times the concentration. And there are several flow rates here. There's Q, which is ventilation. There is filtration, which is PF QF. And then we have now a new term, which I'll write in another color-- plus vsA. So that's the settling here. So that idea is, in a well-mixed room, there is a complex flow profile which is leading to the mixing by convection of the air. And you might say, well, OK. That flow is very quickly carrying the particles up, carrying them down-- but on average, the particles go down just as much as they go up. And if it's well-mixed, then the particles essentially are sampling the whole space. And relative to that well-mixed flow, which averages to 0, they are slowly settling. And so a reasonable approximation is to say, well, the removal is basically happening with a flux rate, which is that velocity of falling times the area. That's how quickly those particles are falling through any horizontal surface, relative to their average zero motion from convective mixing. So this is the new effect of sedimentation. And then actually, I will also add to that another new term, which is the deactivation. And so here, we will also add lambda v times the volume. So this is just saying that throughout the whole volume of the room, there is a rate at which every virion is just slowly deactivating. That will be lambda v. Also, if you have any volumetric treatments of the air, such as chemical disinfectants or even UV light, that may also slowly deactivate the virus or the virions in the air with a term that goes like this. So I'll just mention that maybe briefly here. So lambda v is the virion deactivation rate. And if we look at tau v, which we've talked about before, which is lambda v inverse, this is the deactivation time.
This thing has been measured to be of order 1 hour in some studies, but also even greater than 16 hours in aerosol form in other studies for SARS-CoV-2. So it could be potentially long. Also, this could include effects such as I mentioned-- UV light treatments, which might be operating in a certain part of the room, but then the air circulates and we're essentially treating a significant part of the volume. It could also be chemical disinfectants. So there are various chemicals that can be sprayed in the air which are believed to essentially kill the virus or deactivate the virions, although they may cause other harmful effects, and so they're not so widely used. But in principle, that would also appear in our simple model, lumped into lambda v. So let's put all these effects together now. So again, we haven't really changed the calculation much. We're just building it up and making it a little more complicated each time. So let's see here. So one thing we did with this equation is we divided both sides by V. And so let me write this equation again after such a division. That would be dc/dt is equal to-- well, there's P/V. But then we have over here-- Q/V is our lambda. But notice, all these things are essentially giving us a correction to lambda, the relaxation rate. And actually, I should say this is a minus sign. So we get minus lambda. And I'll say lambda c, just for the relaxation rate of the concentration field. So we can lump all these parameters and we can write lambda c is-- Well, from the first one, Q/V is lambda a. That's the air change rate of outdoor fresh air. There is PF lambda F, which is the rate of filtration times the filtration efficiency, PF. And then we have another term which we can write as-- well, we have lambda v. That's an easy one. And then the sedimentation term is the one I want to focus on right now. That is vs. We can write it as vs/H, where I've divided by V and I'm writing V/A as H. So I'm writing H equals V/A.
So if we have a rectangular box of a room, then H is the ceiling height. But this is some kind of effective ceiling height if it's not a perfect box shape. But if you take the volume and divide by the projected area of horizontal surfaces, then that's giving you a sense of the typical height. And that's the typical distance by which particles have to fall. And notice, velocity is distance per time. So when I take vs and divide by H, I am getting something with units of inverse time. So it's just like all the other lambdas. It is basically a rate-- something per time. So this is the concentration relaxation rate. It is the virion concentration in the air which is relaxing at this rate, lambda c. And then we come back to solving the same simple first-order ordinary differential equation that we've done all along. And the solution's just c of t is a steady state value times 1 minus e to the minus lambda c t, assuming that this thing is a constant for the moment. And also, we know that the c_s is P over V lambda c. So basically, if lambda c is high, if all these removal rates are high, then that makes c_s low. So the background concentration of the room is much smaller if these lambda rates are all high. Also, if the lambda rates are high, then the relaxation is very fast, so you very quickly get to the final value. And so that's actually worth sketching what that looks like. So if I plot what is the concentration, c, as a function of time, we have here this lambda c inverse is the overall concentration relaxation time. So it looks like an exponential relaxation to a value c_s. But I just want to emphasize what I just said verbally, looking at these equations, is that as I vary lambda c-- so if I have a fast relaxation. Let's say lambda c is a large value. Then I start out at the same rate, but I saturate a lot faster and at a lower value. So if it's here, this is fast lambda c. So now if I increase lambda c relative to the blue curve, the whole thing comes down.
But also, it relaxes more quickly. On the other hand, if I have slow relaxation, because any of these processes here are slow, then I get something which relaxes much more slowly and ends up at a higher value. So if an infected person is exhaling infectious air, and there are only very slow processes in the room which are removing that infectious air, then it's slow to build up, but it keeps building and building until it finally saturates. So basically, whenever lambda c is not a large value, you have a slow process, but the concentration also builds up a lot higher. So this effect of lambda c is very important to keep in mind, especially because these parameters here are not necessarily constants. In particular, vs has a very strong dependence on size. So these different kinds of saturation curves are, at the very least, dependent on the size of the droplets that we're talking about. And there's not just one size, so we will come to that point.
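The full relaxation model above can be collected into a few lines of code. This is a minimal sketch, not a calibrated model: the emission rate P, the room parameters, and the settling speed are all assumed round numbers for illustration.

```python
# Sketch of the well-mixed (CSTR) model with ventilation, filtration,
# settling, and deactivation. All parameter values are illustrative
# assumptions, not measured data.
import math

V = 100.0              # room volume, m^3 (assumed)
P = 1e6                # virion emission rate, virions/hour (hypothetical)
lam_a = 3.0            # outdoor air change rate Q/V, 1/hour
pF, lam_F = 0.9, 2.0   # filtration efficiency and recirculation rate, 1/hour
lam_v = 0.6            # viral deactivation rate, 1/hour
v_s = 0.5              # settling speed of ~1 um aerosols, m/hour (order of magnitude)
H = 2.5                # effective ceiling height V/A, m

# Total concentration relaxation rate: lambda_c = lam_a + pF*lam_F + lam_v + v_s/H
lam_c = lam_a + pF * lam_F + lam_v + v_s / H

def c(t):
    """Virion concentration c(t) = c_s * (1 - exp(-lambda_c * t)), t in hours."""
    c_s = P / (V * lam_c)   # steady-state concentration, virions/m^3
    return c_s * (1.0 - math.exp(-lam_c * t))

print(f"lambda_c = {lam_c:.2f} /h, steady state = {P / (V * lam_c):.1f} virions/m^3")
```

A larger lambda_c both lowers the steady-state concentration P/(V lambda_c) and shortens the time 1/lambda_c needed to reach it, exactly as in the sketched curves.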
MIT_RES10S95_Physics_of_COVID19_Transmission_Fall_2020
Transfer_of_respiratory_pathogens_Airborne_droplets.txt
PROFESSOR: So let's talk about the transfer of respiratory pathogens, and in particular, contagious pathogens such as viruses and bacteria, that infect the respiratory system. The way that such pathogens are normally transferred is through droplets which are emitted by respiration, which could be by just normal breathing, coughing, sneezing, et cetera. And so here is a sketch of an infected person who is undergoing respiration and is emitting droplets into the air. And so let's think about what is the fate of those droplets, what it could be. So one possibility is if the droplets are very heavy, they're just going to settle to the ground. And then they may collect on the ground or on some other surface. And then somebody else could touch that surface and transmit it, perhaps by touching their eyes or some other-- or their nose or some bodily entrance point. And that sort of transmission is called fomite transmission. So these dried up bits of droplets on the surface are called fomites. And this mode of transfer would involve settling of those droplets to the surface, to a surface, OK? Now, another possibility is that the droplets kind of float around, and if they're small enough, they might actually evaporate and they might disappear. So they might evaporate. And at that point, if there's a pathogen in them, that pathogen may still be around, but perhaps if it loses enough fluid it's going to lose its viability. And so perhaps those droplets would be eliminated. And then finally, there are droplets which undergo neither of these and remain floating indefinitely, or at least for long periods of time, let's say for hours, in the space. And these are called aerosol droplets. So these are droplets that are very small. They don't really settle in a reasonable amount of time. But they're not necessarily evaporating either. And so they are present. And if another person is here, they can very easily breathe in those droplets, OK? 
Now, how do we know which of these outcomes is possible for droplets that are emitted from respiration? So what it really depends on at the simplest level is the size of the droplet. So the droplet fate depends on its size. So why don't we do some simple estimates of these different processes? So the first would be looking at settling. So the settling time from a height L, which might be the height of a person, a typical number that's taken is 2 meters for a settling problem like this. It's given by the following formula, assuming the so-called Stokes law of settling is valid, which it usually is for small droplets. I'll just write the formula first: tau_settle = 9 mu_a L / (2 rho g R^2). L is the height through which they're going to fall. So basically this is L. And mu_a is the viscosity of the air, rho is the density of the droplet-- of the liquid, that is. And g is the gravitational acceleration, and R is the size of the droplet-- that is, the radius. So what you see here is that when the radius gets bigger, the drops fall faster, and hence the time goes down. So very large droplets will very quickly settle out. Others-- as R goes to be smaller and smaller-- they might be suspended and become aerosols. We also might worry about evaporation, for the smaller droplets especially. And the evaporation time, again with a fairly simple approximation of a pure liquid which is evaporating. Basically, just as it's getting more highly curved, the molecules will have a bigger driving force to be removed. And if it's a diffusion-limited process, which basically means water vapor has to diffuse away into the environment, then you can show the evaporation time is set by the initial size of the droplet, R_0-- so let's just say R_0 is the initial size. And maybe here, when it's settling, it could still be evaporating. So R could be varying. But why don't we just neglect that for droplets settling quickly.
Maybe that's roughly the initial size. And here for evaporation, there is a constant, which I'll call D_bar, which is just something that has units of diffusivity. So length squared per time. And then (1-RH), where RH is the relative humidity. So the evaporation time is tau_evap = R_0^2 / (D_bar (1-RH)). So basically, the tendency for water molecules to be removed from a liquid droplet and end up in the air has to do with the relative humidity of the air. So that's another factor that comes in here. So if we plot these two results, we arrive at the so-called Wells curve, which was first formulated by epidemiologist Wells in 1934. And I'll draw that over here. And the Wells curve is sketched like this. It says that if we have the drop size R_0 on one axis, and on the other axis we have the time of settling-- the time since the droplet has left the mouth, then you have basically two expressions here. So the settling is something like this, where it's a function that goes to 0, like 1/R squared. On the other hand, evaporation is the fastest for the smallest droplets. You see that it goes like (R_0)^2. So it has a dependence more like this. And so basically, these curves intersect at a certain point here. And if you ask yourself, if I am a droplet of, let's say, this size here, then as time goes on, I hit this point, and this is where I evaporate. So for just a pure liquid droplet, at that time, that droplet would disappear. On the other hand, if I have a larger droplet that's going to hit this other curve first, then these droplets will settle, because before they have time to evaporate, which would require going all the way to here, they've already fallen to the ground. They may continue evaporating on the ground, and you're eventually left with a dried up residue of some of the material that may have been contained in the droplet. And then there's a crossover. And so generically, you expect this kind of behavior for droplets that are evaporating and settling. So the Wells curve was first formulated in 1934.
And if we just want to put some numbers on here, if we're talking about pure water, then this crossover happens around 70 microns. And the time is around 3 seconds. So that gives you a sense, basically, of how quickly the larger droplets are settling, in faster than 3 seconds, and then the small droplets are evaporating a lot faster. And by the way, to get a sense of how fast they are, if we look at the dependences, each of these is squared in R. So if we want to go by a factor of 100, if you go to, let's say, 0.7 microns, which is 700 nanometers, it's a factor of 100. But the time comes in squared. So it's 3e-4 seconds. So we're talking 0.3 milliseconds. So basically, droplets that are in the 1 micron or below range, if they're pure liquid, they'll evaporate extremely quickly. And conversely, if we consider much larger droplets, let's say that are bigger by a factor of 10 or 100, that also comes in squared in terms of the settling time being reduced. And so we would then end up with 100, or up to even 10,000 times smaller settling time. Although it won't be quite as small, because also, the particles need to accelerate to that speed. This settling speed here is the terminal velocity of a drop. And there is a short acceleration time for very small particles. And for very large particles, you may actually still be in that acceleration phase when you hit the ground. So basically, the reduction might not be quite that large. But the settling time is still quite a bit reduced for large particles. Now, there's also the humidity effect, which can be seen here. So for example, if we're at 90% relative humidity, this factor here is a factor of 10. So what was on the order of a few seconds, if we're at higher humidity, then this curve ends up looking more like this. And we may follow this curve a little bit further and end up with something like this. This would be high humidity. I'll say higher, because I haven't gone that far.
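These estimates can be checked numerically. Here is a minimal sketch of the two Wells-curve timescales: the physical constants are standard values for water and air, but the evaporation constant D_bar is an assumed order-of-magnitude value, chosen so that the dry-air crossover lands near 70 microns and about 3 seconds as quoted above.

```python
# Wells-curve estimate: settling time versus evaporation time for a water drop.
mu_a = 1.8e-5   # air viscosity, Pa*s
rho = 1000.0    # liquid water density, kg/m^3
g = 9.8         # gravitational acceleration, m/s^2
L = 2.0         # fall height (roughly a person's height), m
D_bar = 1.5e-9  # effective evaporation diffusivity, m^2/s (assumed value)
RH = 0.0        # relative humidity (dry air)

def t_settle(R):
    """Stokes settling time from height L: 9*mu_a*L / (2*rho*g*R^2)."""
    return 9 * mu_a * L / (2 * rho * g * R**2)

def t_evap(R0):
    """Diffusion-limited evaporation time: R0^2 / (D_bar*(1-RH))."""
    return R0**2 / (D_bar * (1 - RH))

R = 70e-6  # ~70 micron drop, near the crossover
print(t_settle(R), t_evap(R))  # both come out to a few seconds
```

Below the crossover, t_evap < t_settle (small drops evaporate before they can fall); above it, t_settle < t_evap (large drops reach the ground first), reproducing the two branches of the sketched curve.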
There's another curve that I could draw where this even goes further this way, and where this could start turning into, say, 30 seconds where that crossover occurs. But in any case, there is a crossover at some point. And at high relative humidity, the evaporation is slower, and so we are more following the settling droplets. And so this is an important set of concepts in the field of aerosol science involving droplets, and especially for respiratory diseases. But it's still oversimplified. So recent research has shown that, in fact, many droplets that are present from respiration do not evaporate on these kinds of fast timescales. And in fact, they can linger and can be way into this small size range of aerosols and not disappear. And it's possible, then, to breathe them in and transmit disease with them. So what's missing here is that the droplet fate depends not only on the size of the droplet, but also on solutes. So what I mean by that is that, of course, a droplet coming out of your lungs and passing through your pharynx, your vocal cords, is not just pure water. It's not even pure saliva. In fact, it contains many other molecules. So of course, it contains the pathogens themselves, which are solids, and they don't evaporate. So whether it's bacteria or virus, some of that material has to stay behind. There are all kinds of organic molecules, because in fact, the mucus that comes out of your lungs is a non-Newtonian fluid that's full of macromolecules of different types. Those molecules are usually charged, as are, in fact, the viruses and other pathogens as well. And so there could also be hydration water, meaning water molecules which are not freely in solution but are strongly interacting with charged surfaces or charged molecules, and form so-called hydration shells around those molecules.
And finally, there could also be salts, because we all know that our body is, in many cases, similar to seawater, and has fluids which contain a large number of salts. For example, sodium chloride or calcium. And salts love water. So in fact, it's been shown that some respiratory aerosols are actually observed to be growing after they're emitted from the body. In a humid environment, water may actually be condensing onto those particles and causing them to grow, because it has molecules that love water. And in fact, these kinds of molecules or particles that attract and hold water are so-called hygroscopic materials. And many respiratory-- a significant number of respiratory droplets are, in fact, hygroscopic. So this whole picture of evaporation and settling really needs to be modified. The settling part is always going to be there. Even a solid particle which is settling in air is going to obey this Stokes settling velocity. But the evaporation part of it is certainly true for pure water, but is not necessarily the right way to think about respiratory aerosols. So finally then, I'll just sketch what happens when people have measured respiratory distributions of particles, and focusing on the aerosol range of the really small particles that might remain suspended. So these are particles like this guy right here, which are around 1 micron, and will have settling times that are on the order of hours. So those are particles that can linger in the air for long periods of time. And so if we look at the number of droplets that we have at different sizes, and this is for different kinds of respiration-- and I'll draw this to sketch what it would look like on a log scale. So here I'll put 0.1 microns, which is 100 nanometers. And then I'll put 1 micron, and then 10 microns, and then 100 microns. So when you breathe, speak, cough, sneeze, you're letting out a distribution of particles of all these different types of droplets.
And those droplets will typically contain pathogens, such as bacteria or virus. And the way these things look is because of these hygroscopic solutes, in fact, we don't see that all the little ones are evaporating away on a tiny timescale like milliseconds, but in fact, they do linger. And you do have respiratory aerosols that can be observed. And so what these distributions actually look like, they tend to have a peak around half a micron in diameter, or a quarter micron in radius. And so they look something like this. And if you're breathing at rest, it might look something like that. And actually, the volume fraction, if we were to convert this to a volume fraction, ends up being around 1e-16 parts of liquid per volume of air. So these are very small droplets. And you can't see them, but they're there. If you're breathing at rest, you might have something like that. There's also an important effect of the type of respiration. So if I start talking, then it turns out I'm still releasing quite a few of these aerosols, but now I'm also releasing some much larger droplets. Depending on how I'm speaking-- and in fact, personal physiology may vary from person to person-- I might be emitting even more of these larger droplets. Or also, the aerosol droplets as well. And then there are other activities, such as singing or exercise, where you're breathing very heavily, where you emit even more droplets. And you can see that vocalizations-- singing, and this can be, for example, talking-- lead to more emissions. But the important thing is that there is a big population of particles. In fact, the majority of particles, by number or even by volume, is over here in the aerosol range. So these are particles that do hang around. And they float around the room. And they can do so for minutes or even hours, depending on their size and the conditions of the room.
These here are the large drops which will sediment out according to this formula here. So the Stokes formula is still going to be valid regardless of evaporation. If we know the size, we have a sense of how quickly the droplets are falling. But the ones we're really going to want to focus on are these aerosols for viruses, because viruses are small. Whereas bacteria are big, and they might have to be transmitted more in the large drops. So now let's talk a bit about the biology of viruses and bacteria and see how that might connect to the physics of droplet transmission.
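To see this size dependence concretely, here is a short sketch of the Stokes terminal settling speed and the corresponding 2-meter fall time for a few representative radii. The radii chosen are illustrative, not measured distribution values.

```python
# Stokes settling speed and 2 m fall time for representative droplet radii,
# illustrating why ~0.5 um aerosols linger for hours while large drops
# fall out in seconds.
mu_a = 1.8e-5   # air viscosity, Pa*s
rho = 1000.0    # droplet density (water), kg/m^3
g = 9.8         # m/s^2
L = 2.0         # fall height, m

def settle_hours(R):
    """Time to fall L meters at the Stokes terminal speed, in hours."""
    v_s = 2 * rho * g * R**2 / (9 * mu_a)  # terminal settling speed, m/s
    return L / v_s / 3600

for R in (0.5e-6, 5e-6, 50e-6):  # radii: aerosol, intermediate, large drop
    print(f"R = {R * 1e6:4.1f} um: 2 m fall time = {settle_hours(R):.3g} h")
```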
Transfer_of_respiratory_pathogens_Aerosolized_pathogen_deactivation.txt
PROFESSOR: So we've discussed the extent to which the size of a droplet can influence the infectivity or the ability of a virion to escape from that droplet and, also, to be transmitted to the deepest, smallest passages in the lungs as a function of its size. There is also a dependence on the relative humidity of the air, which is related to size. And so, as we've seen, humidity does vary the size, but there's believed to be also a more direct effect of humidity, as I will now try to explain. So I'm relying here on the recent work of the group of Linsey Marr, two papers cited here. So we can distinguish between two different types of pathogens. The first are the bacteria. And here there's a monotonic dependence of the relative viability of the pathogen, of the bacteria, after a certain time period. Let's say one hour. And what is found is that, above a certain threshold of humidity, around 80% relative humidity, that there's, essentially, no change in the viability of the bacteria. They're alive. They're infectious. But, as the humidity, relative humidity, is reduced, then there's a significant drop off in viability, which depends on the specific type of bacteria, but it's a fairly general trend that it comes down significantly as you approach more dry air. Now what's happening is the size of the droplets is shrinking. In the case of the bacteria, we can understand, to some extent, why this dependence might be here by thinking about solutes that are present, especially salts, in the system, but, also, mucus-- mucosal proteins that we've also discussed. And, when the particles become more dry, then what happens is that the concentration goes up, and there's an increase in the osmotic pressure of the fluid around the bacteria relative to the inside. And, as with many other kinds of cells, when exposed to such high osmotic pressures, that can cause stress on the cell and, potentially, even rupturing of membranes or other structures within the cell. 
And, obviously, then it is not good for the viability of that cell and leads to deactivation. The case of viruses is a bit more complicated. So some old data of Harper from the 1960s on the seasonal flu, in particular, human influenza virus A, which was recently analyzed by Marr's group, showed that there was a viral deactivation rate that, essentially, was scaling linearly with the relative humidity. So there's a faster deactivation rate in more humid air, less in dry air. This is one way we can understand the seasonal nature of the flu: in more dry, wintry environments in the northern or southern hemispheres, we can expect that the virus would be deactivating less. But, of course, that's compounded by the effect that, in the winter, people spend more time indoors, and so that's also leading to more seasonal transmission. Now, if we convert the deactivation rate into relative viability again, then we see an interesting dependence in recent experiments, which were done using bacteriophages, which are models of different kinds of human pathogens, including the seasonal flu and influenza viruses. And, in particular, there's a non-monotonic dependence where, essentially, there's a maximum rate of deactivation around the range of 70% or 80% humidity, or 60% to 80%. And, similarly, the viability was the lowest in that range. And the way the authors proposed to explain that was a hypothesis that there are solutes present, which may be, for example, sodium chloride or, in particular, chloride ions, such that, when we reach the higher concentration in the shrunken droplets, there is, again, a stress on the virus. But, in this case, regardless of the details of the mechanism of deactivation for these encapsulated viruses, the idea is that the cumulative dose or exposure of those solutes is what's important.
So, if the shrinking happens very fast, and we end up with a droplet nucleus of, mostly, bound water, and it happens over a short period of time, the exposure to those solutes is limited. And, hence, we end up with high viability, low deactivation rate in dry conditions. Conversely, in very humid conditions, the droplets stay big. In fact, they may even grow because of the hygroscopic solutes. And, in that case, there's plenty of solutes present, but they're very dilute. And so, again, the effect on the virus is minimal. And the greatest deactivation and, also, the maximum-- the sort of minimum viability is actually at an intermediate range of humidities. So this tells you that maintaining a comfortable humidity in the range of 50% to 80% may, actually, be the best for minimizing the viability of viral pathogens.
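The linear rate law inferred from the Harper data can be sketched in a few lines. Note that this captures only the monotonic, influenza-style dependence (faster deactivation in more humid air); the non-monotonic bacteriophage result would need a different functional form. The rate scale lam_max is a hypothetical round number, not a fitted value.

```python
# A minimal sketch of humidity-dependent viral deactivation, assuming the
# linear rate law lambda_v = lam_max * RH. lam_max = 1/hour is an assumed
# scale for illustration only.
import math

lam_max = 1.0  # deactivation rate at RH = 100%, 1/hour (assumed)

def viability(RH, t_hours=1.0):
    """Fraction of virions still infectious after t_hours at humidity RH."""
    lam_v = lam_max * RH        # linear scaling with relative humidity
    return math.exp(-lam_v * t_hours)

for RH in (0.2, 0.5, 0.8):
    print(f"RH = {RH:.0%}: viability after 1 h = {viability(RH):.2f}")
```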
Course_overview_of_physics_of_COVID19_transmission.txt
PROFESSOR: So welcome to our Massive Open Online Course, 10.S95x, Physics of COVID-19 Transmission. The course is organized into five chapters. First, transfer of respiratory pathogens, where we will learn about how viral and bacterial diseases are transmitted through the air through aerosol and other types of droplets. In the next chapter, part two, we will study airborne transmission in a well-mixed room. We'll analyze the fluid mechanics of droplet transfer between individuals breathing the same air in a room. And then in the next chapter, we will describe basic models in the field of epidemiology to describe the spread and transfer of disease. In the next chapter, 4, we will integrate the disease modeling with airborne transmission to arrive at a safety guideline to limit the indoor airborne transmission of COVID-19. And we will apply it to COVID-19 through analysis of various spreading events. And then finally, in Chapter 5, we will go beyond the well-mixed room to discuss the fluid mechanics of ventilation and thermal flows and respiratory flows and understand the limitations and extensions of the analysis leading to the safety guideline.
Beyond_the_wellmixed_room_Respiratory_puffs_and_jets.txt
PROFESSOR: So while I've argued that there are many ways in which a room can become well mixed, hopefully well enough to apply our well mixed criterion for airborne transmission through long range aerosols, there is one very important way in which the transmission problem is never well mixed. And that is taking into account the source of the particles, which is the exhaling of an infected individual which leads to a very high concentration of particles leaving the mouth, which then ends up being dispersed throughout the room. And there is a sort of space and time dependence of that process which we must consider in looking at other possibilities of transmission than just the well mixed background air. So in particular, I've already indicated that the flows generated by breathing tend to be at a high enough Reynolds number to generate turbulence. And in particular, they generate a turbulent jet which ends up taking the form of a cone. Not perfectly but approximately a cone, which means that the radius of the jet is alpha times the distance along the jet. So if I write a coordinate system, a cylindrical coordinate system where z is the direction of the flow and r is the radial coordinate, then that's the cone and the cone angle is alpha. The parameter alpha also has a physical interpretation as the air entrainment coefficient. And for respiratory jets in the air, this coefficient is usually around 0.1 to 0.15. So that gives you basically the opening angle of the cone. So I will come to explaining and deriving why the jet has the shape of a cone. But let's just assume that. And let's continue with the assumption that the flow is at high Reynolds number, which means it's dominated by inertia, by kinetic energy, and by the tendency for the fluid to keep wanting to move in the same direction. Now if the fluid coming out of the mouth were just being spit out at a very high rate, you might imagine a very narrow kind of stream of air.
But the reason it's widening is that air is actually being entrained-- some of the ambient air is being sucked in. And this is making the jet have more and more fluid in it. But then that fluid is sharing the momentum. It's also spreading and diffusing the particles. And the wider it gets, the more it slows down because now that momentum is being shared through the sort of turbulent exchange going on. So let's do that calculation. So just to remind you, we are in a situation of very high Reynolds number typically. And we will assume that there is roughly-- and it's actually a good assumption-- roughly a constant momentum flux. What that means is if I take a slice here and I look at, essentially, the kinetic energy density, or the momentum per time, that is crossing a slice of the jet, that that actually should be conserved. So if I write that momentum flux as capital K, there's the area of the cross section, pi r^2, if I look at a given position along the jet here. And then I have the momentum density, which is the density of the fluid-- the air-- times the velocity. Momentum flux would be momentum times velocity, or rho v^2, which is also the kinetic energy density. This quantity should be roughly constant. So we can now solve for the average velocity, v bar, which would be the square root of K over pi rho_a, times 1/r, after we take the square root. But because we have a cone, r is alpha z. So this is the square root of K over pi rho_a, times 1 over alpha z. So we can see the velocity is decaying like 1 over distance from the mouth. So the jet is slowing down. But it's still, of course, continuing to advance as the momentum is being shared across a larger and larger area of entrained, turbulent flow. So we can now use this result to figure out what is the rate of progress of the front.
So if I call this z of the front, so let's say when I first start exhaling it is kind of a wall of droplets, and I can sketch that this flow is actually full of droplets that we're interested in tracking, I'd like to know how those are first leaving the mouth. Well, I can write that this velocity, at the position corresponding to the front, is dzf/dt. So if I apply this at the front, then I can put the zf on the other side, and I have zf dzf/dt is equal to the square root of K over pi rho_a, times 1 over alpha. And the left-hand side can be written as 1/2 times the derivative of zf squared. So I can then solve for the position of the front and I find zf is equal to-- well, let's see what I get. I put the 2 on the other side. I take a square root. So I get 2 over alpha to the 1/2; K over pi rho_a was under a square root, and then I take another square root, so I get a 1/4 power. And then I get t to the 1/2. Because when I integrate this equation, I get zf squared is all this stuff times t, starting from t equals 0. So what we find is that, initially, the jet starts to progress like square root of time. And so this coefficient here is some kind of-- has the units of a diffusivity. So it kind of appears that the jet is sort of diffusing. You could call this D_effective, sort of for the front of the jet. But that doesn't last forever, because at some point the person closes their mouth and starts breathing in again, and maybe pulls the fluid back a little bit. But not too much. Because it has momentum and it keeps moving forward. So when somebody breathes, it starts out looking like that. But then at a later time if I draw this same person again, then-- in fact, actually, let me draw him with a closed mouth. Because he's just finished exhaling, let's say, and is getting ready for his next breath. So now this blob of fluid has kind of worked its way out and it's now somewhere out here. And this is what we call a puff.
So if you're smoking, for example, a cigarette, you know that you can create a puff of smoke by just releasing a finite amount of fluid. And then it kind of goes out and it makes some interesting patterns and usually is very turbulent unless you're very careful in trying to control it. And of course this contains lots of these droplets that we're interested in. And so now we can briefly ask, how fast does the puff move. Well, you see here, the fluid keeps moving because it's constantly being given momentum. So if it's a steady jet as you're breathing, you're pushing, pushing, pushing, and so this thing keeps moving like the square root of t, and it keeps entraining more. But as soon as you stop giving it more momentum, you give it just a finite amount of momentum, then the puff actually slows down. It doesn't keep pushing ahead as quickly. It also doesn't entrain more air as quickly either. And we can do a very simple argument to see what happens to the scaling. So this momentum flux here was a constant momentum flux in the case of a jet. But here in the case of a puff, we could, maybe as a very crude estimate, just say that the momentum flux is now replaced by some kind of constant value that is maintained only for a time, tb, which is the time of the breath, and then averaged over a longer period of time. If you think of this as kind of like an average momentum flux, we've injected some momentum flux but then we took it away. And so if I want to find out the average over a period of time t, I have to divide by t. So you see, if I do that, then I arrive, in the puff case, at the position of the front of the puff being a lot of these same constants here, but now scaling as t to the 1/4. Because my K has a 1 over t and is raised to the 1/4 power, I end up with zf going like t to the 1/4. So it slows down. So first it was sort of square root of time. Now it's become more like the fourth root of time.
And it's almost just sort of sitting there; it's not progressing that much. And then a new breath comes. And so that's what happens next. And especially if one is speaking or singing, then the exhaling is a much longer and more continuous process than the inhaling, which is very sudden. So you talk and take a breath, and you talk. And so there's a lot more of this going on than this, even in normal breathing to some extent, but especially when one is speaking. And so an interesting development then is that if we have a person who is speaking, then we can generate what has been called a puff train. So what we have here is the most recent breath getting exhaled, and then the previous one is here, still kind of floating around, and the one ahead of that is dispersed a bit more. But it hasn't progressed as far, and so it's a little bit thinner. And if we color each breath a different way, then what we're left with is something that actually looks an awful lot like a continuous cone again. So this is a puff train. This could be, for example, from speaking or singing. And overall, the behavior is quite similar to a jet, just where the K is replaced by the time-averaged momentum flux. So I should mention that this way of thinking was recently introduced and verified experimentally in a paper by Abkarian, Howard Stone, and collaborators, which also introduced this notion of the scaling and showed that the puff train has a scaling which is very similar to the initial jet, with the square root of time. And so that's an important concept to come to now, because we want to ask, if somebody is speaking or even just breathing continuously in a certain direction and generating aerosol particles, how does their concentration evolve in this sort of respiratory plume or jet?
MIT_RES10S95_Physics_of_COVID19_Transmission_Fall_2020
Airborne_disease_transmission_in_a_wellmixed_room_Air_filtration_versus_masks.txt
PROFESSOR: So we've just discussed the mass balance that results from considering ventilation and breathing of an infected individual in a room, leading to a build-up and then a steady state of infectious aerosol droplets in the indoor air. We've also briefly talked about masks. I'd now like to introduce the possibility of filtration, which is actually very common in mechanical ventilation systems, and then arrive at a very simple comparison of filtration versus mask use as strategies for preventing indoor transmission. So here's a schematic sketch of the air flows. So basically, we still have this Q, which is the outdoor air flow. So that is the air which is coming in from the outdoors. And for mass balance, it also has to be leaving. And the air change rate is based on that Q. In fact, if you remember, if there's a volume V of the room, then the air change rate was Q/V. So that has units of inverse time. Its inverse is, roughly, the residence time for the outdoor air, and also the time scale for the changing of the air. Now, in addition to that, though, there could be significant recirculation of air. Now, if the air is recirculating but we're not filtering the droplets in any way, then it really doesn't concern our calculation, which is basically just a mass balance. So if I'm just swirling the air around, or even taking it outside and back in, as long as I'm not losing some droplets in the ducts, then it really doesn't matter if it's recirculating. But what is of interest to us is if in that recirculation we lose some droplets. So some droplets could be lost simply to settling on the walls of the ducts or the heating, ventilation, and air conditioning, HVAC, unit. But more interesting is to consider what would happen if we actually placed filters, which is very common in HVAC systems. So we could have one of those big filters that you stick into the heating or ventilation ducts.
And those typically are given MERV ratings, which are Minimum Efficiency Reporting Values. And we're interested in the filtration of aerosols. And so if Pf is the fraction of the aerosols that are blocked-- and we roughly would look at a size range, for example, less than 5 micron radius-- then the different ratings of filters give you anywhere from 20% to 90% filtration. And then a very high quality filter would be a so-called HEPA filter, which is a High Efficiency Particulate Air filtration unit. And that would give you even as high as 99.97%. And also, it's worth emphasizing that you could have a freestanding filtration unit. There are a variety of different kinds of filtration units. It could be a fan which is blowing through a HEPA filter, and it could be a freestanding unit in the room. It could also be more exotic systems, such as electrostatic precipitators, where the air is flowing into a chamber that has a very high electric field, and the droplets are basically projected and deposited onto a surface. It could be flowing the air through a concentrated UV light treatment to kill the virus. So you're not actually removing the droplets, per se, but you are removing the virus. And since we've been keeping track of the concentration of virions, they could be deactivated in that way. And so all of that can essentially be lumped into this simple picture of a recirculating flow rate Qf and a filtration efficiency Pf. So let's revisit our equations from last time. So our mass balance is V dC dt is P, which is the production rate of infectious air by an infected person, minus the removal of air by the outdoor flow rate, which we've already discussed. And now we have the new term, which is Pf Qf. So if Pf is 1, then the entire filtration flow is removing droplets. On the other hand, if Pf is 0, this term goes away. And that's the limit where you're simply recirculating air, but you're not actually removing any infectious droplets, and it's playing no role.
So this is basically our only new term here. And so what this means is we get the same calculation, same as before, but we just have to make a little replacement. Oh-- I've forgotten, actually, there should be a C here in each removal term, because the flux of infectious air is the concentration of infectious air, the concentration of virions per volume, times the flow rate; that gives me the correct value. And Pf is a dimensionless number between 0 and 1. So anyway, Q gets replaced by Q plus Pf Qf. And also, wherever we saw a lambda a, that gets replaced by lambda a, just from the outdoor air, plus Pf lambda f, where lambda f is Qf over V. So that's the recirculation or filtration air change rate, whereas lambda a is the outdoor air change rate. And we can also talk about the balance between these two. And that's also something which is reported or measured for ventilation systems; it's sometimes called Z, which is the outdoor air fraction. So that would be lambda a over lambda a plus lambda f. Or we could also write it as Q over Q plus Qf. This is the outdoor air fraction of the ventilation system. So another way to talk about ventilation is you give the total flow rate. So as I sketched here, there could be a fan in the heating, ventilation, and air conditioning unit which is just pumping out air at a certain rate. That air is mixing the recirculated air and the fresh air with a fraction Z, which is the outdoor air fraction. And a typical value might be around 20% for many indoor spaces, such as businesses or classrooms. And basically, you have to tune this number to make sure that enough fresh air is coming in for the occupants that are present. And also, you don't want this number too high, because typically the outdoor air is not comfortable air.
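With the replacement described above, the mass balance V dC/dt = P minus (Q plus Pf Qf) C has the steady state C = P over (Q plus Pf Qf). A minimal numerical sketch (all flow and production values are hypothetical round numbers):

```python
def c_steady(P, Q, Qf, Pf):
    """Steady-state virion concentration: production P balanced against
    outdoor air removal Q plus filtered recirculation Pf*Qf."""
    return P / (Q + Pf * Qf)

# Hypothetical example: P in virions/hr, flows in m^3/hr.
P, Q, Qf = 1000.0, 100.0, 400.0
print(c_steady(P, Q, Qf, 0.0))  # Pf = 0: recirculation removes nothing
print(c_steady(P, Q, Qf, 1.0))  # Pf = 1: the whole recirculated flow is cleaned
```

Setting Pf to zero recovers the earlier no-filtration result P over Q, which is the limit described at the end of the previous paragraph.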
It either is too cold or too hot or has the wrong humidity. And so you save a lot of energy in your heating and ventilation system by keeping this number as low as it can effectively be, so that you're just recirculating quality air that is at the right temperature and the right humidity. However, that desire to save energy, and hence also save carbon emissions and other related things, goes against what we're talking about here: when you recirculate the air and you don't take in fresh air, it's actually much worse for any airborne pathogens that might be transferred. So there's a competition here, where you design an air ventilation system to make sure that you're saving energy, but also that you're keeping the place safe against potential airborne transmission of disease. OK, so if we just take this change here, we can replace these quantities in the result that we already derived. So I don't have to go through all that again, but I can just say that the disease transmission rate-- and this would be in steady state, just to give us something concrete to look at-- that's what I called beta bar. And let's now write down the same result, but just make this change here. So we have Qb squared; that's the breathing rate. We have Cq; that's the concentration of infection quanta per volume in the exhaled breath of a sick individual, an infected individual. There was Pm squared. I'll write that in a different color just to emphasize it. And then in the denominator, we had lambda a, but now it's going to be lambda a plus Pf lambda f-- and I'll write that in a separate color in just a moment-- times volume. So here's Pf. So now, in this formula, we have the effects of masks and the effects of air filtration together. And so we can ask the question that I posed at the beginning: which one is more effective in stopping airborne transmission of disease? So you can see these two factors do come in differently in the equations.
So let's first consider the effect of air filtration. So there's a lot of interest today in installing HEPA filters or other high quality filtration systems into vulnerable spaces, such as nursing homes, and also in spaces such as classrooms, where people fear transmission of COVID-19. So we can ask how effective that can be. So one way to look at that would be to say, well, how does air filtration affect the steady transmission rate? So that would be looking at the transmission rate with the filter compared to the transmission rate without the filter. So if we do that, you can see all the terms here cancel except for this one that has Pf in it. And so, because when Pf is equal to 0 the denominator is just lambda a-- that's in the denominator, so it ends up in the numerator-- what we're left with is just lambda a over lambda a plus Pf lambda f. So the benefit of filtration can be immediately seen right here, if you know the ratio of lambda a to lambda f. Because I can also write this as 1 plus-- well, actually, I won't do that right now. But you can see how this is related. It's related to this quantity Z. Basically, the ratio of lambda a to lambda f is related to Z, and that obviously comes in here. And Pf comes in as well. But an interesting question we can ask is, well, what if you have a perfect filter? So let's say we have something even better than HEPA. Let's say it's 100% filtration. That's the best case scenario. So this fraction here is never smaller than what I get if I just set Pf equals 1. So this is a perfect filter. So we have the most expensive filter on the planet, and it's going to definitely filter everything. And so what we're left with, then, is just lambda a over lambda a plus lambda f. Now I can make the connection here. That is simply Z. So basically, the effect of air filtration is never better than the outdoor air fraction itself.
But I already told you, the outdoor air fraction here cannot be too small, or you're not delivering enough oxygen to the occupants of the room. So basically, there are standards for that as well. For example, for outdoor air there is an additional requirement that you need, at least in the United States, 15 cubic feet per minute per person. So basically, this Q can't be too small, because you have to be delivering essentially enough oxygen for a person to breathe. And so this is one way this standard is decided. And so basically, the Z can't be that small. And so if I ask myself what's the effect of filtration, an interesting conclusion to this calculation is that this ratio is not really smaller than about 10 to the minus 1. So if we just think of an order of magnitude, we're gaining, roughly speaking, a factor of 10, maybe, out of air filtration, but not that much more. Now, you might say, how is that possible? My filter is a perfect filter. I'm filtering everything. The problem is seen right here: in reality, what we really want is to protect a person who's over here, who is susceptible, from another person over here who is infected. And you see, the problem is that if it's a well-mixed room, then the infected person is breathing all these infectious droplets everywhere, and this other person is breathing them in. And even if you perfectly filter the piece of the flow that is going through the filter, there's still lots of other air in the room, and it's not all being removed. So unfortunately, you don't get such a big benefit from that, unless you could completely choke off the outdoor air and just keep recirculating until you remove all the virus. But then a problem is, you also run out of oxygen, and you don't have very good air in the room. So that's not really a solution either. So basically, air filtration is helpful, but I would say it's not super helpful. A factor of 10 is good; we like factors of 10.
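The bound just described is easy to check numerically. Using a hypothetical outdoor air fraction Z of 0.2, even a perfect filter (Pf equal to 1) only reduces the steady transmission rate by that factor Z:

```python
def filtration_reduction(lam_a, lam_f, Pf):
    """beta(with filter) / beta(without) = lam_a / (lam_a + Pf * lam_f)."""
    return lam_a / (lam_a + Pf * lam_f)

# Hypothetical air change rates [1/hr]: outdoor 1, recirculated 4, so Z = 0.2.
lam_a, lam_f = 1.0, 4.0
for Pf in [0.2, 0.9, 1.0]:
    r = filtration_reduction(lam_a, lam_f, Pf)
    print(f"Pf = {Pf:.2f}  ->  transmission reduced to {r:.3f} of baseline")
```

With Pf equal to 1 the reduction factor is exactly Z, 0.2 here, confirming that filtration can never do better than the outdoor air fraction.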
But it's not a factor of 100. And we might actually rather have a factor of 100. So then we can ask ourselves, well, what do we get from masks? And by this, I mean everybody's wearing a mask. So keep in mind, we never know who's the infected person ahead of time. So we want to consider a case where everybody's wearing a mask. And so then, by assumption, the infected person is wearing a mask, and any susceptible person is also wearing a mask. So that's our assumption here. OK, so this is good mask compliance. And so we can ask ourselves again, what is the transmission rate with masks compared to the transmission rate when there are no masks? And the way my notation works here, Pm is the mask penetration factor. So Pm equals 1 would be the case where, basically, there is no mask, because all the airflow goes in and out of the mouth without any kind of filtration. So this ratio is just Pm squared. And the important thing is that it's squared. So one argument you will hear, which even influenced the World Health Organization back in January at the beginning of the pandemic, is that the virus is so small, as we've already discussed, and probably is going to be carried in very small droplets, and therefore any filtration, even a mask, is not really that helpful, because masks are not going to be filtering at the scale of 120 nanometers, which is the size of the virus. And even the smallest aerosol droplets-- many of them do get through. In fact, when you get to about 1/10 of a micron, or 100 nanometers, most mask materials don't have good filtration. The very simple reason is they also have to allow good airflow. So if you have such tiny pores that you're catching things at that scale, the problem is there's a very high resistance to flow and you can't breathe. In fact, you've probably experienced that if you're wearing a good mask, you can't wear it for too long, because after a while you can't breathe anymore.
So you need to have some big enough pores to allow flow. And that makes it really hard to filter the smallest particles. But the interesting thing is this thing comes in squared. So we can ask ourselves, well, what about different qualities of masks? So a really good mask material that is perfectly fitting-- like a high quality surgical or N95 mask, where you don't have any leakage of air, you have a really good fit, and the air is going right through the material-- this could be as high as 99% filtration for the aerosols that we care about, not even factoring in the fit. So if it's 99% filtration, what that means is Pm is 0.01, but it comes in squared. And so this factor is actually 10 to the minus 4. Four orders of magnitude, compared to one order of magnitude you get from the filter. Now, that's for really good masks. What if our mask is not so good? So we have the N95, which is guaranteed to filter about 95% of the aerosols. And so if you take 0.05 and square it, you're getting a number of order 10 to the minus 3 for Pm squared. And what about cloth masks? So this is where the debate comes in, because these days people are wearing cloth masks, which makes a lot of sense. It's way better than nothing, as we'll see in a moment. But one could argue, well, a lot of the cloth masks might be letting through these aerosol droplets; they're not really that great. In fact, if you look at the cloth masks, there's a range. A really bad cloth mask is maybe 10% filtration-- that would be like a single, thin, very loose weave of cotton, like a bandana. That's pretty bad. But most of the multi-layer cloth masks, or the silk masks, can actually do pretty well. They can even get as high as 90% filtration.
And maybe a lot of them, depending on fit, might be more like, let's say, 50%. But the thing is, this comes in squared. And interestingly, even in the 90% case-- that's 10% getting through, so Pm is 0.1-- when you square 10%, it's still 10 to the minus 2. And if you take these lower numbers and square them, you're still talking about something which is on the order of 10 to the minus 1. So what this calculation tells you is that even with some of the worst masks you can wear, as long as there's good compliance and everybody is actually wearing those masks indoors, the fact that this comes in squared means that you get at least a factor of 10 reduction in the transmission compared to having no masks in a well-mixed room. And in fact, decent masks will do way better. You could get a factor of 100, 1,000, or potentially 10,000 in reducing the transmission rate by wearing masks, whereas with filtration, even with the most expensive and highest quality filters, you can barely get better than a factor of 10. So this is a really important concept to keep in mind. And it comes out of a really simple calculation. And the simple reason, as I mentioned before, is that air filtration is only filtering part of the air, and the rest is just out there for you to breathe. But masks are much better, because they capture the source and they also block the target. So basically, every droplet has to go through the mask on one end, and it has to go through it again on the other end, whereas the filter is missing most of the droplets. They're floating around the room; they're not going through the filter, unless you choke off the outdoor air, and you can't do that.
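Since the penetration enters squared, the reduction factor for universal mask wearing is Pm squared. A short sketch tabulating the mask qualities mentioned above against the roughly factor-of-10 best case for filtration:

```python
# Mask penetration probabilities Pm, taken from the discussion above.
masks = {
    "loose bandana (10% filtration)": 0.90,
    "typical cloth (50%)":            0.50,
    "good cloth (90%)":               0.10,
    "N95 (95%)":                      0.05,
    "surgical-grade (99%)":           0.01,
}

best_filter_factor = 0.1  # ~ perfect filter with outdoor air fraction Z ~ 0.1
for name, Pm in masks.items():
    # Transmission with universal masking scales as Pm squared.
    print(f"{name:32s} Pm^2 = {Pm**2:.1e}")
print(f"{'best-case air filtration':32s} factor = {best_filter_factor:.1e}")
```

Even the worst entry lands near 10 to the minus 1, matching the best that filtration can do, while the good masks reach 10 to the minus 3 or 10 to the minus 4.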
Airborne_disease_transmission_in_a_wellmixed_room_Respiration_and_ventilation.txt
PROFESSOR: So now, let's start talking about airborne disease transmission, thinking especially of viral diseases where we expect the transmission to occur through aerosol droplets, or at least smaller droplets, which may sediment but are going to be suspended in the air for a significant amount of time. So the first approximation of such a situation is to assume a well-mixed room, meaning that the air in the room is well mixed. And if there is, let's say, one person who's infected, and that infected person is breathing, talking, singing, respiring, and exhaling infected aerosol droplets, then these droplets start to spread around the room. And they do so because-- and this is why we assume the room is well mixed-- even though, as I've sketched, there's certainly going to be a higher concentration of the droplets near the infected person, there are significant airflows in the room which induce mixing. That is partly driven by the flow of fresh air through the room. Typically, we may have some forced ventilation coming through ducts with fans. There could be open windows. And even when a room is not open, meaning that the windows are closed, maybe even the door is closed, there's always some leakage and exchange of air with the outside, which typically happens on the order of hours. So there's always at least some kind of flow rate. Also, there's movement of people in the room, and there's also breathing itself, which basically imparts momentum to the fluid and causes swirling motions. So the situation actually is more like this, where there are these flows that are generated either by the ventilation, by the movement and respiration of the people, or by thermal flows. So when you're breathing, the air coming out of your lungs is warmer, and that tends to rise. But if it's very humid and warm air, then the weight of all the humidity and droplets that come with the air could actually cause some settling as well.
So you have all these different processes going on. And as a first approximation, we just assume that those lead to a well-mixed situation. And so we will proceed to analyze disease transmission from that assumption, and then we'll come back at the end to consider what would happen if there are actually fluctuations, and think about the distance from an infected person and how that might play a role in departures from the predictions of a well-mixed room. OK. So let's define some variables, then. We'll let C of t be, basically, the infectiousness of the air, if you will. But more specifically, if we think of virus, it'll be the virions per air volume. So they're contained in droplets. And later we will consider what size droplets they may be contained in, but for now, let's just average over everything and say there's some concentration in the air. We have a production rate of these infectious droplets, which can be broken down into many terms. So this is the production rate P, the number of virions per time that are produced. The first factor is Qb, our breathing flow rate. So that's basically the volume of exhaled air per time, how much air is being pushed out by your breathing. And this typically ranges from 0.5 up to around 3 meters cubed per hour. So 0.5 is a typical resting breathing rate. And, in fact, even if you're just kind of calmly speaking and sitting in a room breathing through your nose, that'll be your typical breathing rate. But if you start exercising and exerting yourself, then that could go up to maybe around 3. So that's the typical range of the breathing rate. The next thing we need is Nd, the droplet concentration. So this is the number of drops per air volume, the number density of drops. So if I have various visualization techniques, I can actually see all the droplets and count them. I can say how many droplets are in a given volume. The next thing is I need the volume of a drop.
So as I mentioned, of course there are different drop volumes, and we'll come back to assessing the effects of a droplet size distribution. But for simplicity, why don't we just take the sort of average size drops that we discussed coming from respiration. And that might be a number around 1 micron in size. And there's a volume that corresponds with that, which, of course, is 4/3 pi r cubed, where we assume for now only one size r for all drops, just for the moment. So now we have the volume, so now we sort of know the total amount of drops. The next thing we need is Cv. Cv will be the number of virions per liquid volume, or per drop volume. So if I could take the pure liquid in the drop-- let's say it's mucus, which has been coming out of your pharynx and has sort of fragmented and taken some virions with it-- then that would be the viral load, essentially. That's another word for this. And the viral load varies with time. So when you first get infected, at first the viral load is very low in the fluids that you're breathing out, and then it rises up. And during the period when you're most infectious-- which for COVID-19 and the SARS-CoV-2 virus ends up being around a week or so; within a few days, you reach the peak infectiousness-- when you're at the peak viral load, this ends up being about 10 to the ninth virions per milliliter of fluid for SARS-CoV-2. OK. So that's just to give you a sense; of course, a lot of times it might be less than that. But if you're a very infected individual, that's kind of a worst case. And we're going to be interested in calculating safety guidelines and probabilities of transmission, so to be conservative, it's good to have an idea of how big this number can actually be.
So what we just calculated here-- basically, Nd times Vd is the amount of drop or liquid volume per air volume, so that's essentially the volume fraction of liquid. When we multiply by Cv, we're essentially getting the number of virions per volume of air. And then Qb gives the volume per time, so the product is basically virions per time. And then one other factor that we should also consider is, what if we're actually filtering those droplets right at the source? And that'll be the case if you're wearing a mask. So this is an important quantity we'll come back to: the mask penetration probability for a droplet. This is of course size dependent, but we'll come back to that; for the moment, we're just saying it's one drop size. And so for the size of drops of interest, we're asking, do they go through the mask? So 1 minus Pm is also called the filtration efficiency. So a very good mask might be 99% of droplets are filtered. A very poor cloth covering might be 10% of droplets are filtered, and we'll come back to that in just a moment. So this here is our production rate, capital P. And already with the variables that we've written down here, we can write down a mass balance for the virions. So virions are being produced, they end up in these droplets, the droplets are being swept out of the room at a flow rate Q, and the room has a volume V. So I write down just the conservation of mass, making sure that I'm not losing any virus yet-- I'm not allowing them to stick to the walls or do anything else just yet-- but just looking at the mass balance of one infected person breathing out. This is, I should say, the production rate per infector, or infected person. So if there are more infected people, then you'll have this production rate for each person. They also might be at different stages of the disease, so maybe the viral load will be a little different for each person. But let's not worry about such details right now.
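Putting the factors together, P = Qb times Nd times Vd times Cv times Pm. Here is a back-of-the-envelope sketch; the droplet number density Nd is a hypothetical round number, while the breathing rate, droplet size, and peak viral load are the representative values quoted above.

```python
import math

# Representative values from the discussion, plus one assumed quantity (Nd).
Qb = 0.5                        # breathing flow rate [m^3/hr] (resting)
Nd = 1e6                        # droplet number density in exhaled air [drops/m^3] (assumed)
r  = 1e-6                       # droplet radius [m] (~1 micron)
Vd = 4.0 / 3.0 * math.pi * r**3 # single droplet volume [m^3]
Cv = 1e15                       # viral load [virions/m^3 of mucus] (= 1e9 per mL, peak)
Pm = 1.0                        # mask penetration probability (1 = no mask)

P = Qb * Nd * Vd * Cv * Pm      # virions exhaled per hour per infector
print(f"P ~ {P:.0f} virions per hour per infector")
```

With these numbers P comes out on the order of a couple of thousand virions per hour; the point of the sketch is the structure of the product, since Nd in particular varies enormously between breathing, speaking, and singing.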
We want to keep things general. So for the mass balance, we write down the total number of virions in the room and how that changes in time. So that'll be the concentration C times the well-mixed room air volume V, differentiated in time. So this is the change in the number of virions per time, and that can change in two ways. One, we have the production P. So for every infected person, we have production P-- virions per time per infector. So if we want to think about having multiple infected people in the room, we can basically just scale up this production. In a well-mixed room, it doesn't matter where people are placed; you're just getting more and more droplets, and it's assumed to be mixed. So we produce at a rate P, but then the outdoor flow is taking away droplets, and hence removing virions, at a rate Q times C. So this is our equation. So let's divide through by V. We can write this as dC dt is P/V minus lambda a C, where lambda a, which is Q/V, is the outdoor air change or exchange rate. So that is the rate at which the entire volume of the room is replaced with outdoor air. So the outdoor air is refreshing the air in the room at this rate Q/V. OK. And so that's what appears here. And if you compare these two terms, you can see that this is dC dt, and this is C times lambda, so lambda has units of 1 over time; it's a rate. This is also sometimes called the ACH, air changes per hour, if you write it in per hour. So that's a typical way that this is written. And, in fact, while we're at it, this is a very important concept. So lambda a is around 0.3 per hour-- roughly one air change every 3 hours-- for a closed room, or what you might call natural ventilation, where there's no attempt to deliver air to that room.
Of course, this number depends on the tightness of the construction, whether there are cracks in the windows, and whether doors are being opened to the hallway. So, of course, that's not a perfect number, but that's a rough estimate of how quickly air escapes from typical construction. But then it can also be in a different range. And this is a very important parameter for the theory, so let's pause just to look at some of the numbers. So it's typically 3 to 8 per hour for mechanical ventilation. So this could be open windows with fans blowing in and out, which might give you 3 or even 6 on this number. It could also be a ventilation system which is delivering fresh air to the space. And for typical classrooms, offices, and even homes, this is a typical range. So for example, if it's 3, then every 20 minutes the room gets its air fully exchanged. But of course, if you have situations where you need better air quality and you have more risk of, say, transmission of disease or passage of pollutants or contaminants, then you need higher values. A typical number in the United States for hospitals is 18 air changes per hour. And then it can be even larger. So if you have a laboratory which is dealing with toxic chemicals, or even, let's say, viruses and pathogens, then you need even higher air change rates, and typical rates can be as high as 20 to 30 for labs that are dealing with toxins of various types, because any airborne toxins have to be quickly removed, so that if they happen to be leaked into the air from your experiment or from your hood, they are quickly sucked out. Also, parking lots, where you have cars in an enclosed space generating carbon monoxide and other fumes which have to be quickly rushed out, tend to have this number around 30. So that's a full air change of the entire room in 2 minutes. That's a very fast flow rate.
So this is kind of the range of this lambda a. That's, obviously, a very important parameter. So let's now solve this equation here. So, first of all, you can see when dC dt is 0, then the steady state is just P over lambda a V. And if I write Q equals lambda a V, I can see the steady state is just P/Q. So I can write the time-dependent solution like this: P/Q is the steady state, and my initial condition is C of 0 equals 0. So let's say time equals 0 is when the infected person enters the room and starts breathing. And then there's a mixing process and a build-up of concentration until there's a balance between the production of infectious air and the removal of infectious air by the ventilation. And that gives you this ratio P/Q. The way the relaxation happens, though-- if you balance these two terms here-- is just an exponential decay with a decay rate lambda a, so e to the minus lambda a t. So this is basically the way the concentration builds up. If I plot this, then there's a certain time here, which is sometimes called Tres. In chemical engineering, these kinds of models are commonly used to design chemical reactors. In fact, this kind of model is, in chemical engineering, called a continuous stirred-tank reactor: we have a flow of some reactants and various chemical species going into a tank, we assume it's well mixed, and then it leaves, and this is the mass balance that we use. And the residence time, Tres, is the inverse of lambda a in this case. So as soon as you know the volume and the ventilation flow rate, there's a typical time scale here: the time that fresh air spends in the room, interacting with all the droplets and people, before leaving and carrying some of those droplets with it.
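The build-up just described can be written down directly as C(t) = (P/Q)(1 - e^(-lambda_a t)). A minimal sketch, with a hypothetical room size, flow rate, and production rate:

```python
import math

def concentration(t, P, Q, V):
    """Well-mixed room build-up: C(t) = (P/Q) * (1 - exp(-lambda_a * t)),
    with lambda_a = Q/V and initial condition C(0) = 0."""
    lam_a = Q / V
    return (P / Q) * (1.0 - math.exp(-lam_a * t))

# Hypothetical example: 100 m^3 room, Q = 300 m^3/hr (3 ACH), P = 1000 virions/hr.
P, Q, V = 1000.0, 300.0, 100.0
for t in [0.0, 1.0 / 3.0, 1.0, 3.0]:  # hours; 1/3 hr is one residence time here
    print(f"t = {t:5.2f} hr   C = {concentration(t, P, Q, V):.3f} virions/m^3")
```

After one residence time (Q/V inverted, 20 minutes here) the concentration has reached about 63% of the steady state P/Q, and after a few residence times it has essentially saturated.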
And in a simple model like this, it's just an exponential relaxation at that time scale approaching the steady state, which is P/Q. So there's always that kind of balance which is reached. Now let's also ask ourselves, briefly at this point, how reasonable is the well-mixed approximation? We will come back to this and analyze it much more carefully, taking into account all the different processes I described at the beginning, including breathing and the motion of people. But let's just think about the motion caused by the airflow itself. So in the case of mechanical ventilation, where that flow rate can sometimes be rather high, that can be a significant source of mixing in the system. And the way we'll think about that is by writing down the typical velocity of the air due to the outdoor airflow, Q. We can write it as Q divided by the area -- some kind of representative area of the room. It could be, let's say, the floor area, but depending on the shape of the room, it might be a little bit different value than that. We can also write this as Q times H over V, where V is the volume of the room and H is some characteristic height, like, for example, the ceiling height of the room. OK. So this is the mean airspeed due to ventilation. And we'll come back to a deeper investigation of the fluid mechanics of the room. But as a first example of what we'll be interested in, we'd like to calculate the Reynolds number due to the airflow. So I'll put a subscript a there. And that is the typical velocity times a length scale, which could be, let's say, the height of the room or some linear length scale of the room, depending on the direction of the airflow, divided by the kinematic viscosity of air, which I'll write as nu_a. So this is the Reynolds number. This is basically telling us how important the inertia of the fluid is compared to the viscous stresses that slow the fluid down.
So, basically, it tells us how quickly -- how much of a tendency there is for momentum to be carried in the fluid, which then leads to sort of swirling motions and complex flows, as I've sketched here. And this nu_a is the viscosity of the air that we've already talked about when looking at Stokes flow, but divided by the density of the air. OK. So that's the kinematic viscosity. And for air, the kinematic viscosity is 1.5 times 10 to the minus 5 meters squared per second. And so if I plug these numbers in and I pick a typical ceiling height -- if H is of order 3 meters, or 2 meters might be a typical scale -- just to get an approximate sense of the scale here, the Reynolds number will be varying from around 50, or tens, up to 5,000, which would be the case of very fast ventilation like the 30 ACH. So this would be if we have 0.3 ACH up to around 30 ACH. That's just a rough number. And from fluid mechanics, we know the significance of these large Reynolds numbers is that the flows really do look a bit like I've shown here. So, basically, when the air is sitting in the room -- you know this from looking at the smoke from a candle or other flows that you can visualize -- it's not just a sort of uniform flow. Instead there are all these sort of plumes and swirls and vortices. And when the Reynolds number is on the order of tens, when there is a motion, it tends to lead to the shedding of vortices and to some kind of swirling flows. And when the Reynolds number gets up as high as several thousand, then in most geometries that starts to lead to a transition to turbulent flow. And that's when the flow is getting so complicated that there are eddies of different sizes and very rapid mixing. So I just wanted to show this right at the beginning of the discussion to point out that for typical flows we would expect that there is going to be a decent amount of mixing occurring just because of the ventilation.
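The Reynolds number estimate Re_a = u H / nu_a, with mean airspeed u = lambda_a H, can be reproduced directly. A small sketch using the lecture's numbers (nu_a = 1.5e-5 m^2/s); the choice H = 3 m and the function names are mine:

```python
# Reynolds number of the ventilation flow, Re_a = u * H / nu_a,
# with mean airspeed u = lambda_a * H (since u = Q*H/V and lambda_a = Q/V).
NU_AIR = 1.5e-5              # kinematic viscosity of air (m^2/s)

def reynolds(ach, H=3.0):
    lam = ach / 3600.0       # air exchange rate lambda_a (1/s)
    u = lam * H              # mean airspeed due to ventilation (m/s)
    return u * H / NU_AIR

for ach in (0.3, 3.0, 30.0):
    print(ach, round(reynolds(ach)))   # 0.3 -> 50, 3 -> 500, 30 -> 5000
```

So the slow-ventilation end gives Re on the order of tens, and the 30 ACH end gives several thousand, matching the range quoted in the lecture.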
And now if you add the fact that people are breathing, imparting momentum to the fluid, and that we have people moving and other kinds of activities in a room, all of those processes make the well-mixed room a reasonable assumption.
MIT RES.10-S95 Physics of COVID-19 Transmission, Fall 2020
Transfer of Respiratory Pathogens: Equilibrium Size of Respiratory Aerosols
PROFESSOR: So let's look in a little more detail at the equilibrium size of respiratory droplets that are emitted during breathing, or coughing, or speaking. And the key idea is that these droplets are not pure liquid. As explained with the Wells curve, pure droplets that are small enough will shrink completely and evaporate. However, these droplets contain a significant amount of solutes. And those solutes, in the case of mucus coming from your lungs or from your vocal cords or your nasopharynx, are full of proteins and other macromolecules, carbohydrates. And also there are always in bodily fluids plenty of dissolved salts, such as sodium and chloride or calcium or potassium ions. Also, in saliva, many of these species are present, although it's not quite as thick of a liquid. And of course, virions as well will find themselves in here, and they also constitute solutes. So the idea is that we don't just have a pure liquid. So there is some initial volume fraction of solute in the liquid. And in addition to that, most of these -- sorry, the solutes, I should say -- are charged and thus hygroscopic. What that means is that, of course, the salt ions are literally charged species, but also the proteins and other macromolecules have many charged residues and sites along the molecule, which attract water. And essentially, there is a layer of bound water solvating all of these species I just mentioned, including the virus. So I'll just sketch that there's lots of bound water, which is surrounding each species, including the virus. There's essentially a layer of water mostly around these molecules here and other species. So it's solutes plus the bound water that's coming from solvation of these molecules in liquid. So that water is pretty firmly attached. And even if you dry the material, a lot of that water will still be left over. It takes a significant amount of energy to remove it.
And so if we say there is an initial volume of the droplet, V_0, and a radius R_0 -- so let's say it's initially a spherical droplet -- there is an initial amount of solid, V_s, which is phi_s^0 times V_0. So there's a certain amount of solutes in there which cannot be removed. So the water can evaporate, but the solutes will not. So let's think a little bit about what the consequences of that are. So let me do a brief derivation here, looking at the thermodynamics of this system. And the key idea is just to get to the final result. I don't want to dwell on the details of the thermodynamics. But an important concept here is the relative humidity of the air. So there's moisture in the air. There's water vapor. And it's at a certain level. So we often write that RH for relative humidity. And that can be defined as the concentration of water vapor in the air relative to the vapor concentration that would be in equilibrium with pure liquid. So when the concentration of water vapor gets high enough, eventually you start to nucleate water droplets. And you start to have condensation of water. That's essentially how rain forms from the clouds. So that's that ratio. So relative humidity is telling you how close you are to basically having liquid water come out of the air. OK, now, the relative humidity also tells us something about how far you are from that phase transition point. And there's a very simple approximation. I'll put approximate here. We can also write that this scales with, and can in fact be close to, the liquid volume fraction in equilibrium inside the drop. So that'll be the volume fraction of liquid water inside the droplet. You can see in this relationship that when this volume fraction is one -- in other words, we have pure water -- then the relative humidity is 100%. And on the other hand, when you have, let's say, only 50% water over here, that's like having relative humidity 50%.
This can be derived by more careful consideration of the ideal entropy of mixing, where essentially this term here takes into account the excluded volume and the fact that all the sites in this droplet are not available for the water. So they're being excluded by all the solutes and the bound water that are present. And similarly, we have a buildup of free energy in the bulk as well. So basically, this comes from some thermodynamic considerations of equilibrium between water vapor and liquid water. We can write this as 1 minus the equilibrium volume fraction of the solid. OK, now the thing is that we can write this out. So what is that volume fraction? When we get to equilibrium, this droplet is going to change its size. It's going to reach a new volume, which we're going to calculate, V_equilibrium. And so the equilibrium solid fraction would be V_solid, which is phi_s^0*V_0, divided by V_equilibrium. So there's going to be a new volume, V_equilibrium, which will be achieved, and then we'll end up with the equilibrium volume fraction. So if I take these equations here and I solve, I get a fundamental result, which is that the equilibrium volume of the droplet relative to the initial volume is equal to -- well, we put this on the other side. That'll be (1-RH). And we divide that out. And we find that it's the initial volume fraction of solutes divided by (1-RH). That is our key result. And let's plot what this looks like. So we plot the relative humidity on the horizontal axis, from 0 to 100%. At 100%, the water vapor is saturating the air, and you would start to nucleate and condense liquid water from it. At 0, the air is completely dry, and there is essentially no water vapor present. So that's the range. And typical comfortable rooms have a relative humidity around 50%. This is a typical number. And let's plot on this axis the equilibrium volume.
It could also be the equilibrium radius, because I should say that if these are spheres, this is also equal to (R_equilibrium / R_0)^3. So I can also take a cube root of this, and I would have the ratio of radii. So we would know, if we started at a certain radius, what's the final radius. OK, so we can talk about volume. We can talk about radius. So here is the initial size of the drop. And somewhere down here is V_s, which is the solute volume, which is phi_s^0*V_0. Now what is this value? It depends on the kind of liquid. So saliva is mostly water with some salt and a few other molecules. But in saliva, the volume fraction phi_s^0 is 0.5%. OK, so that just gives you a sense. So this is quite far down, right? But then, if you look in mucus, it depends which mucus you're talking about. But for the mucus that comes from the lungs or from the pharynx, it can vary. But what has been measured in droplets that are emitted by breathing is that this can range anywhere from 5% to 10%. So a fairly significant amount of the volume of the droplet is containing all these molecules and the bound water around them. Now we know that because mucus is very sticky. It's a non-Newtonian fluid. It doesn't even maintain a nice round shape. It can have an irregular shape. It flows slowly. It has a high viscosity. And that's because it has a large amount of these hygroscopic solutes. So mucus might be a little bit higher up. But in any case, what you then find is we can sketch different regions of this plot now. So this curve, this formula derived here -- when the relative humidity is zero, we start here. So that's saying when there's no water in the air, you completely dry the droplet. And you're left just with the molecules, the solid molecules, and possibly the bound water around them, depending on how dry the air actually is. And then it rises up and blows up at 100%. So when you get to 100%, then droplets are getting really large.
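The key result V_equilibrium/V_0 = phi_s^0/(1-RH), with the radius ratio as its cube root, is easy to evaluate. A small sketch using the saliva and mucus volume fractions quoted above (the function name is mine):

```python
def equilibrium_ratio(phi_s0, rh):
    """Equilibrium droplet size relative to initial size.

    V_eq / V_0 = phi_s0 / (1 - RH); the radius ratio is the cube root.
    Valid for RH < 1 (at 100% humidity droplets grow without bound).
    """
    v_ratio = phi_s0 / (1.0 - rh)
    return v_ratio, v_ratio ** (1.0 / 3.0)

# Saliva (phi_s0 ~ 0.5%) vs. lung mucus (phi_s0 ~ 10%) at 50% relative humidity:
v_sal, r_sal = equilibrium_ratio(0.005, 0.50)   # volume shrinks to 1% of V_0
v_muc, r_muc = equilibrium_ratio(0.10, 0.50)    # volume shrinks to 20% of V_0
print(v_sal, r_sal)   # 0.01, radius ~0.215 of R_0
print(v_muc, r_muc)   # 0.2, radius ~0.585 of R_0
```

This reproduces the lecture's point: at 50% RH the equilibrium volume is exactly twice the solid volume fraction, so a saliva droplet shrinks by a factor of 100 in volume while a mucus droplet keeps roughly a fifth of its volume.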
And if you actually hit 100%, then you can't really speak of an equilibrium size because you'll just start to get lots and lots of water. So that's that limit. And so now we can look at three different regimes of the kinds of droplets that we'd expect to see. So down here at 0% or close to zero, we have a dried droplet nuclei as they're called in the public health field. These respiratory aerosols, if they completely dry out, and you're left with just these solutes, then that's called a droplet nucleus. So it doesn't necessarily mean it's a nucleus for phase transformation as we use that term in, say, engineering or in physics. But it's really just refers to the core of just the hydrated solutes. So if I could sketch what that looks like, that would be-- for example, all those molecules I just sketched there might be condensed into some little blob, which, by the way, could include a virion. In fact, it could even be just one virion if that were all that were in there. And you would have a little bit of bound water around it. But you essentially have a dried up blob of just the solutes. OK, and so that -- and then, of course, the smallest volume you can get is just the initial solute volume that you started with, plus the bound water. On the other end, if we are near 100% relative humidity, then the fact that these are hygroscopic solutes, which like to have water near them, will form as a nucleation site to actually cause more and more water to grow and be absorbed into this droplet. And not only do the droplets not shrink, as predicted by the Wells curve for a pure liquid, but they can actually grow. So if the size here is small enough to begin with, let's say it were a several-micron droplet to begin with, but it contains a lot of solutes, and we're at very high humidity, actually the particle can grow. 
So over here, we could end up with an even larger droplet than we started with where now because the humidity is so high, and we have the same number of molecules in there that I sketched before, that's more dilute now. And there's maybe a virus or a virion here and there. And of course, there's also some salt. But basically, the droplet is growing. So here we have hygroscopic growth and also we have what's called deliquescence, which refers to water that's absorbing around these salt molecules and even causing some other molecules or charges on these macromolecules to dissolve into solution because it's more and more water present and it can solvate more species. And so whereas hygroscopic growth refers to water being absorbed into a more solid-like framework, you can also be generating more aqueous solution, which is deliquescence. So basically, the droplet can actually grow. And that would be like when you're here, let's just say. And this might be when you're here. And then, of course, when you're at 50% relative humidity, you can see the droplet has shrunken but not all the way down to the initial solute volume fraction, but something larger. And in fact, if the relative humidity is 50%, you end up at exactly twice the solid volume fraction. So if the solid volume fraction of mucus is 10%, you may end up with a droplet that is maybe 20% of the volume. So maybe it looks something like this. OK, and so we have a little bit of shrinking going on. And maybe there's even a virus in there as well. But there is still plenty of water. And so you can see also now the value of having solutes in mucus in terms of making the virions more viable and more easily transmittable because they hold onto the water. So that the virion is in a stable environment. So that when it ends up being inhaled into someone else's lungs that it can then more easily diffuse out of that region and infect the host cells. 
In contrast, if you have a nearly pure liquid that the virion is in, let's say pure water or even saliva, which is actually mostly water, then the droplet will shrink by a factor of 100. And it might be just literally a virion with a [couple of ions] just enveloped with a tiny bit of water. And maybe that, in some cases, would be not as viable of a situation for the virions. So basically the mucus fragments are likely to be the more common source of the aerosols that will stay in the air and remain infectious.
Beyond the Well-Mixed Room: Social Distancing
PROFESSOR: So when we start considering short-range effects of respiratory jets and plumes, it raises the interesting question of what is the role of social distancing, which is our primary means for fighting the pandemic at the moment. So as we've discussed, this is especially important when masks or face shields are not worn, so that the jets and plumes that come from breathing and speaking and singing and other respiratory activities are not blocked, in the sense that the momentum is not blocked. And furthermore, there's no filtering going on. So then we have our long-range airborne risk coming from respiratory aerosols that have become well mixed into the background air. But we have a new term that comes from short-range transmission. And in a worst case scenario, we can imagine that as a wedge-like or cone-like plume, which we have just described. In reality, there is going to be some background flows in the room, which will sweep that jet in different directions and break it up. There'll be motion of people, turning of heads. So it'll never be this simple. But the worst case scenario is really a well-formed respiratory jet and somebody essentially standing right in it at a certain distance, X bar, which is our sort of average social distance. As we've discussed, we can also take the ratio of these two terms and speak of fs, which we define here as the short-range risk enhancement factor. So that comes down to two new parameters which we've been discussing-- P jet, the probability that a susceptible person is in the respiratory jet of an infected person, and X bar, the typical distance over which that occurs. Xc is this transition distance, where the respiratory jet concentration starts to match that of the background ambient air, which we substitute and form this expression shown here. So there are a number of situations we can think of to estimate what would be these parameters. So the first would be just to consider random occupant placement. 
So in this situation, if we view from above, we have, let's say, a person. This is now viewing from above. And they have a respiratory jet, which is going off like this in one direction. And then at any given distance, we can imagine there's sort of a typical social distance here. Now, I'm kind of exaggerating how far that is compared to the size of the head. But if this distance here is X bar, then at that distance, we'd like to say, what is the probability that the susceptible person is in the jet? And that would simply be formed by taking the angle of the jet relative to the full circle. So we could imagine, what is the probability of putting a person right here versus over here or over here where they're not going to be in the jet? So in that situation, we could write that P jet is approximately the inverse tangent of alpha. So alpha, remember, is the ratio-- is the entrainment factor. And it's sort of the slope of this line defining the cone of the respiratory jet. And then we can divide that by pi. So that basically tells us the probability of being in this yellow region versus somewhere else at a given distance. And then, now, if we think about the distance factor-- if it's completely random, then if we just imagine lots of people scattered throughout a room-- imagine a busy space, for example, in a bar or a nightclub, a place where it's somewhat crowded but people are kind of moving around. Then we might have an average social distance, which is the area of the room divided by N. So that's the area per person. And then take a square root. That's an estimate of the distance. And the square root of area is the length L of the room. So it's divided by square root of N. And so with these two assumptions, we would then find that this short-range risk enhancement factor-- well, what does it look like? It then is this factor P jet times Xc over L times-- and then we have this N minus 1 factor. 
So if N is larger than 1, we could just estimate that as a square root of N. So what that shows is as more and more people pack into the room, we have an increase in the importance of long-range versus short-range transmission. And this pre-factor here is something which typically will be much less than 1, because the probability of being in a jet-- if alpha is around 0.1, 0.15, this turns out to be around 3%. OK, so that's-- it's not quite as wide as I've sketched here. The jet is a little bit more narrow. And we've already discussed that Xc-- the distance where the concentration in the jet starts to match that of the ambient-- it does depend on the conditions of the room and the ventilation and other factors. But roughly speaking, Xc is typically smaller than the size of the room. And so this factor here is quite a bit less than 1. So you see that in a random situation, the short-range transmission starts to become important only when the number of people becomes large. And it grows like square root of N. And there's a certain point, which might be at N equals 20 or N equals 50 or N equals 100, depending on the space, where the short-range risk is larger than the risk of the background airborne transmission. And that's, again, just coming from random placement. So on the other hand, the random placement situation is not quite the best case scenario. But it's sort of a fairly optimistic scenario. The best case scenario is if people are always kind of having their backs to each other, never breathing on each other. That never really happens. In fact, it's more typical in human interactions for people to face each other. They may be talking to each other, looking at each other. And so hence, we tend to find ourselves more typically in the way of breathing. And so it might be better to think about this estimate in other ways. So a second way that we can think about it is by using a social distance guideline, or a social distancing rule, let's just say. 
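The random-placement estimate above, fs ~ P_jet (Xc/L) sqrt(N) with P_jet = arctan(alpha)/pi, can be sketched numerically. The values of alpha, Xc, and L below are illustrative choices, not the lecture's:

```python
import math

def p_jet(alpha=0.1):
    """Probability that a randomly placed person sits inside the respiratory
    jet cone of half-angle ~arctan(alpha), relative to the full circle."""
    return math.atan(alpha) / math.pi

def fs_random(N, Xc, L, alpha=0.1):
    """Short-range risk enhancement for random occupant placement:
    fs ~ p_jet * (Xc / L) * sqrt(N), using sqrt(N) for N - 1 when N >> 1."""
    return p_jet(alpha) * (Xc / L) * math.sqrt(N)

print(round(p_jet(0.10), 3))             # ~0.032, i.e. about 3%, as quoted
print(fs_random(N=100, Xc=5.0, L=10.0))  # grows like sqrt(N) as the room fills
```

With alpha around 0.1 the jet probability indeed comes out near 3%, and the prefactor P_jet (Xc/L) is well below 1, so short-range transmission only starts to compete with the well-mixed background once N is large.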
So let's say that instead of a random situation where people are keeping sort of the average distance between each other, that no matter how many people in the room up to a certain maximum occupancy, they're not going to come within, let's say, a 6-foot radius or a 3-foot radius, 1-meter radius. Depending on what the social distancing guideline is in that region, there's going to be a certain sort of minimum X. And let's just think of that as the worst case scenario. For the people that are that close to each other, that's the distance we'll choose. And so this might be something like 6 feet, 1 meter. You can pick. And in that case, the fs is P jet over N minus 1 times Xc over X. And we've already estimated that at 6 feet, Xc over X, it does depend on the ventilation and the size of the room and other factors. But we've already said that this factor here might be something like 6 to 600 for a certain set of examples that we've just considered. So when we put all this together, if we take into account also that P jet is around, let's say, 3%-- if we imagine having random placement of people but at a fixed distance, minimum distance, of 6 feet, then what we'll find is this ends up being something like 0.2 over N ranging up to, let's say, 20 over N. And again, this is just very rough. So this is with random angle. So in other words, we're still thinking now of just this kind of random orientation of somebody's head, not necessarily facing one person all the time. And so you can see here that if N is-- if we're in the situation of the 20, which is-- by the way, that's the case where this is getting large. So that would be a case of a large room. Or it would be of low breathing rates, et cetera. Then we-- you can see that when N gets-- if N is small, then this number can actually be quite large. 
So in other words, if we just have a few people in the room, and they can get this close to each other-- like one minimum social distance, like 6 feet-- then the primary risk is coming from airborne transmission, especially if the room is very big. If two people are by themselves in an enormous room, then the risk from background airborne transmission is minimal compared to the risk of direct short-range transmission. So that kind of makes sense, right? On the other hand, notice the effect of N. If you have more and more people in the room, then even though most of those people are a lot farther away, there's so many of them that are potentially going to get infected that it becomes worse, even in the case of 20 here, which is sort of the more conser-- the case where short-range is more important. It starts to switch to where the long-range transmission becomes more important when N is larger than 20 in this particular example. So basically, so significant for small N and basically large V. So basically, low-occupant densities-- if people are to come close to each other, then it makes sense short-range is important. In fact, a limit that we haven't talked about yet is, what if we're outside? What if we don't even have-- what if the V goes to infinity or V is very large, so effectively it's infinity? What this is telling us is that we have to stop worrying about the long-range background well-mixed concentration of infectious aerosols. And instead, we have to focus on the short range. Just whenever people are coming close to each other, make sure they're not breathing directly on each other for long periods of time. That's the key. Now, the last thing we can think of, which is really kind of like a worst case scenario-- that would be where a person is at the closest distance that's allowed or expected-- let's say, 6 feet, maybe even 3 feet, 1 meter. And also, they're not randomly placed and angled. But they're really just constantly in the jet of the other person. 
So let's say, for example, P jet is 90%. That's a pretty high number, keeping in mind that there's also turbulent and chaotic flows in the room, such that even if you're facing somebody all the time, the stable respiratory jet and puff train is going to be swept away by other currents. And so it's really not as though the other person is constantly going to be in-- fully exposed. But let's just say, we pick a number like 90%, which is kind of like a worst case scenario. And we're always facing each other. So let's, again, pick-- X bar is, let's just say, 6 feet. Let's just say, maybe we're sitting across a table at a distance which is sort of an acceptable social distance according to the 6-foot rule. Then we're going to find that this enhancement factor is around 6 over N to 600 over N. So we can see that the short-range effect can be a lot larger. So notice, even if N gets to be really large, like a lot of people in the room-- let's say, it's a restaurant with 100 people in the room-- still, it's 6 times worse to be in a single person's respiratory jet 90% of the time at 6-foot distance. So that's actually fairly alarming if you think about situations such as restaurants or office meetings or any other normal activity where people are facing each other and breathing on each other continuously without wearing masks or face shields. So this would be-- I'll just mention here, short-range dominates in this situation. So then, how do we mitigate against this short-range transmission risk, which is going to be worse, again, when we have people that are not wearing masks or face shields and are in close proximity and facing each other for long periods of time, and especially in a larger room and a lower occupant density where the N is maybe not so big, and we don't have to worry as much about long-range transmission? 
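The worst-case, always-facing estimate can be sketched the same way. The 90% jet probability and the Xc/X range of roughly 6 to 600 are the lecture's illustrative numbers; the choice N = 11 below is mine:

```python
def fs_facing(N, xc_over_x, p_jet=0.9):
    """Worst case: a susceptible person sits in the infected person's jet
    ~90% of the time at the minimum social distance X:
    fs = p_jet * (Xc / X) / (N - 1)."""
    return p_jet * xc_over_x / (N - 1)

# With Xc/X between ~6 and ~600 (depending on room size and ventilation):
print(fs_facing(N=11, xc_over_x=6))    # ~0.54: short and long range comparable
print(fs_facing(N=11, xc_over_x=600))  # ~54: short-range risk dominates
```

This matches the rough fs ~ 6/N to 600/N scaling in the lecture: even with many people in the room, sitting directly in one person's respiratory jet can dominate the airborne background risk.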
Well, one way to proceed is to continue using the universal guideline that we've derived right here for long range, and simply choose a small epsilon, much less than 1. And you can see here, it doesn't have to be that tiny. From this simple estimate here, maybe an epsilon of 0.01 might not be so bad because, see, this number 600 is there. And if N is maybe 5 or 10 in the room, then that might be already helping you. So remember, we use the guideline with a fudge factor, or a tolerance, epsilon. And some of the uncertainties in different modes of transmission are already kind of included in there. But certainly, if we're not wearing masks, the risk is pretty high. And so that may not be enough. And that's why social distancing can help. But I want to emphasize from our discussion of these respiratory plumes, there's nothing really that special about 6 feet. There's unfortunately today a feeling in the general public that if you're closer than 6 feet, you're at extremely high risk. You've penetrated somebody's bubble. But when you're a little bit more than 6 feet, you can breathe easy, because you're safe. And you can see that's really not true. If you are not wearing masks, these respiratory jets and plumes can travel very long distances. Of course, airborne transmission is everywhere in the room at any distance. But even the elevated risk of short-range transmission can extend further than 6 feet. On the other hand, when people are turning their heads, and there's sort of convection in the room and thermal convection as we've been discussing, in fact, 6 feet might be overkill. Maybe being even a little bit closer might be OK. But the important thing is that distance is not so special. And also, the details matter. So as you can see here, we can get very different estimates based on where the occupants are placed, how they're facing each other, what kind of activities they're engaging in, what kind of movements. So there's really no universal guideline. 
It makes more sense to start from a universal guideline for long-range transmission and then enforce mask use in situations where we are worried about transmission, rather than trying to guess how people are going to behave and what kind of distance they're going to keep not wearing masks. So the safest thing is, if you're worried about coronavirus transmission, just wear a mask, indoors especially. So that brings us now to thinking about summarizing this entire chapter, going beyond the well-mixed room, where we've been discussing the fluid mechanics of indoor spaces and of human occupants and respiration within those spaces. So I think I'd like to leave you with the picture of people who are smoking in the room. So we're all familiar with the situation where you have a room-- let's say, could be a restaurant or some other space where there is, let's say, a table or a few tables, where there's maybe 5 or 10 people in the room. And there's one person or two people that are smoking. Now, we know that when the person who's smoking breathes in on the cigarette and exhales, there's a very dense plume of smoke that comes out. It gets carried by thermal currents and rises. And you know that if you stand right in that space, it will be a very significant amount of smoke you're going to have to breathe. In fact, it's courteous for the person who's smoking to [EXHALES] breathe away and not breathe directly in other people's faces. That would be sort of rude. And maybe, as we're thinking about respiratory transmission of viral disease, we should think in the same way. Each person should be worried about, am I really breathing directly on another person? If I'm not wearing a mask, why don't I breathe somewhere else? But we also know from our experience with smoke-filled rooms that there may only be one or two people smoking. And you can occasionally see these puffs or burst of smoke when they're exhaling after drawing on a cigarette. 
But if you look around the rest of the time, the smoke is kind of swirling around. It's a little more concentrated in some places. But you can see very quickly, it's uniformly spread throughout the room. And somebody who is on the far side of the room is at essentially the same risk as somebody who's very close -- even 6 feet or 3 feet away, whatever that special distance may be -- because the air is typically well mixed. And those smoke particles end up throughout the room. And you're at just as much risk at 60 feet as you are at 6 feet under those circumstances. And so that's really the main message of this course, while we also keep in mind that there are these sort of details of short-range transmission that I'd like you to be aware of when applying the guideline for long-range aerosol transmission.
MIT_RES10S95_Physics_of_COVID19_Transmission_Fall_2020
Course_conclusion.txt
PROFESSOR: So this brings us to the conclusion of our Massive Open Online Course, 10.S95, Physics of COVID-19 Transmission. So let's briefly review what we've learned. We began by talking about respiratory pathogens, both bacteria and viruses. And, in particular, we focused on viruses and understood how they can be transmitted through aerosol particles through infected indoor air. And this includes the important case of SARS-CoV-2, the virus that causes COVID-19. We then went on to use epidemiological models and fluid-mechanical analysis of a well-mixed room to arrive at a universal indoor safety guideline to limit transmission of the disease. A very important conclusion is that it's not possible to bound a single variable, as in all of the current safety guidelines from various official organizations. So, for example, you cannot only bound the distance between people, for example, 6 feet, 1 meter. You cannot bound only the occupancy, say, 25 persons. You can't even bound the ventilation rate, say, a minimum of 15 cubic feet per minute per person. Or you can't bound only the time-- let's say 15 minutes or one hour-- because all of these variables are, inevitably, linked. And the simplest way to see that is through our universal safety guideline, which shows you how to limit the cumulative exposure time, (N-1)tau, which is the product of the number of susceptible people in a room times the time they're together with an infected person. And there are a number of factors that come in. So epsilon is a tolerance you can choose. And then there are these factors: lambda_a V over Q_b^2 P_m^2 C_q. And we can discuss, based on that formula and our analysis throughout this course, the most important ways of mitigating transmission based on this formula. So I've, roughly, put them in order here. So the first thing is to wear masks and, in particular, try to wear good masks.
So these might be surgical masks, N95s, but even various cloth or silk masks, especially double-layer fabrics, can be quite effective because, as P_m, the mask penetration factor, goes to 0, you can see this bound gets larger and larger, like 1 over P_m^2. So even a mask penetration of 10% can still give you a factor of 100 compared to not wearing masks in terms of filtration. That's a very significant amount. Secondly, we can improve ventilation. And this can be by imposing faster mechanical ventilation with more fresh air coming in. It could also be by opening a window and turning on a fan. And that's increasing lambda_a. We can also try to spend more time in larger rooms or even outside, which is, basically, increasing V and, thereby, diluting the air that is present and all the infectious aerosols. We can also look at imposing air filtration. We've shown that there is some benefit there. Although, it might not be as large as you think. Even very good air filtration doesn't buy you many orders of magnitude because it's only filtering some of the air, but not all of it, compared to masks, which are filtering at the source and at the target and, thereby, are much more effective. We can also try to make sure the occupants of the room maintain lower activity levels if possible. So they're breathing less heavily. So they're exchanging air with the space and with other people less frequently and at a lower rate. We can also try to avoid vocal exertions, which tend to lead to much larger emissions of droplets, for example, singing being a particularly bad case, but even loud voices can be a lot worse than quiet voices. So, generally, keeping the noise in the room down-- I know this will be welcome news for many teachers-- but, in general, that is a good way to try to limit transmission to keep people calm. We can also take measures to try to enhance the deactivation, the natural elimination of the infectiousness of the virus.
One way to do that is to maintain an intermediate, comfortable range of humidity from 50% to 80%. So very dry air turns out to be worse, and that is one of the reasons that viral diseases tend to be seasonal, like the seasonal flu, typically, worse in winter, in addition to the fact that you're spending more time indoors. There's also ultraviolet treatments that might be used, which is, effectively, like another form of filtration. And then, finally, we spent a lot of time talking about the fluid mechanics of indoor spaces and of human respiration and movement, and those considerations take us beyond the well-mixed room. And the main thing to remember there is, thinking back to our example of people who are smoking, if someone is exhaling right after breathing in a cigarette, there's sort of a narrow plume of turbulent, very smoky air, which you want to avoid. And the same thing is true when dealing with a respiratory pathogen. You don't want to spend a lot of time in the respiratory jet of a person who is not wearing a mask if you don't know if they are sick, potentially, even asymptomatic. So that's an important just general piece of advice, and we've given some insight into how to quantify that. Although, any treatment of short-range transmission through respiratory jets is, inevitably, dependent upon assumptions about the activity of the room. How much are you turning your head? Where are people placed? And so, hence, you can't really get a universal guideline, as opposed to this boxed formula, which is, essentially, the mass balance for the whole room. And that is a universal guideline. We've also talked a bit about types of ventilation. 
And, as opposed to ventilation that seeks to mix the space, there may be situations where having high ceilings and trying to take advantage of buoyancy-driven thermal flows that you can sort of target the airborne aerosols to be sitting higher in a room where they could then be removed by ventilation at the top, which is displacement ventilation. That's another strategy that may be useful. So these are all different strategies one can use. And which one is most effective or makes the most sense in a given space really depends on the details. And, in order to facilitate the application of the guideline, we have provided an online app and, also, a spreadsheet, which you can use to adapt this to your own space. And I hope that you will find the principles you've learned in this class useful and that, perhaps, even you'll find these tools useful, specifically, to combat the transmission of COVID-19 and, in the future, other respiratory diseases.
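As a rough illustration of how the boxed guideline ties all of these variables together, here is a small calculation in the spirit of the course's app and spreadsheet. The formula is the one stated above; the particular parameter values (room volume, air change rate, breathing rate, quanta concentration) are illustrative assumptions, not the course's calibrated numbers.

```python
def max_cumulative_exposure(eps, lam_a, V, Q_b, p_m, C_q):
    """Right-hand side of the guideline (N-1)*tau < eps * lam_a * V / (Q_b**2 * p_m**2 * C_q).

    eps   -- risk tolerance (dimensionless)
    lam_a -- total air change rate (ventilation + filtration), 1/hr
    V     -- room volume, m^3
    Q_b   -- breathing rate, m^3/hr
    p_m   -- mask penetration factor (1 means no mask)
    C_q   -- infection quanta per volume of exhaled air, quanta/m^3
    Returns the bound on the cumulative exposure time (N-1)*tau in person-hours.
    """
    return eps * lam_a * V / (Q_b**2 * p_m**2 * C_q)

# Illustrative classroom: 300 m^3, 3 total air changes per hour, resting breathing
unmasked = max_cumulative_exposure(0.1, 3.0, 300.0, 0.5, 1.0, 30.0)  # ~12 person-hours
masked = max_cumulative_exposure(0.1, 3.0, 300.0, 0.5, 0.3, 30.0)    # ~133 person-hours
```

Note how a mask with 30% penetration relaxes the bound by 1/0.3^2, roughly a factor of 11: this is the P_m^2 effect described above, acting at both the source and the target.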
Epidemiological_models_Indoor_disease_spreading.txt
PROFESSOR: And so now let's consider the spreading of infectious disease indoors. So the Kermack-McKendrick models and various compartment models that we just described start from the assumption essentially of a well-mixed population, where there's no spatial dependence, no network dependence of connectivity of people with each other but rather sort of everyone is interacting with everyone in some sort of average sense. So that philosophy has also been brought to bear on spreading events indoors, where a little bit stronger assumption is made, which is that the transmission is occurring through the air and the air is well-mixed. So not only are the people well-mixed in the sense that they can interact with each other regardless of their distance apart or wherever they are placed in the room, but the reason it's a valid approximation is because the air itself is well-mixed. And so we've already been talking about the air, the well-mixed air, and we've calculated the transmission rate. Now let's start to connect that to disease-spreading. So one concept there would be to consider that the initial number of infected people, i0, enters a room or an indoor space with n persons, including the infectors, for a time, tau. So we now have introduced a finite timescale, which is the time spent in the room. So you have a room. Some people are in it, and we're imagining an infected person comes in, and they spend a certain amount of time, and then they leave. And during that time we'd like to ask ourselves what happened with all the other people. How many people were infected? OK? So this tau kind of acts like the inverse of the removal rate, but we also will assume that gamma tau is much less than 1. So if we think of gamma as the recovery rate or the death rate from the disease, so that's the removal quantity at the population level, then we're assuming the time spent in the room is small compared to that. So in other words, while you're in the room, nobody is going to be recovering or dying.
We're just simply going to be transmitting the disease for a certain amount of time. So the r is gone. On the other hand, we can also consider an exposed compartment of the population, the group in the room. So the exposed group is one which has essentially been exposed to the pathogen, so let's say they've inhaled a critical quantity of the virus, but they have not yet become themselves infectious. So they are already exposed. They are going to come down with the disease. So they are going to be eventually symptomatic and potentially infectious, but they may not be infectious yet. So the exposed group basically has received the disease. So it's been transmitted to them, but they are not yet themselves infectious. So that's another sort of group that we want to think about. So another way to say it, they're not yet contagious. So they're going to get sick, but they're not contagious yet. So I should put here contagious is another word you could think about using. So we've gotten rid of r, the recovered compartment, but we've added e. So instead of the SEIR model, we have the SEI model, where we can write down the same kinds of equations as before: ds dt is minus beta-- and I'll write beta of t-- times s i, because another complexity here is that we know that as soon as the infected person comes in, if it's airborne transmission, it takes time, through the fluid mechanics that we were just describing, for the concentration to build up. So the transmission rate is not constant. We've actually already calculated that transmission rate, but it now enters this equation. So this is now the susceptible people getting exposed. And so when that sort of reaction happens, if you will, the exposed population then grows, and then there's a certain rate with which exposed people can become themselves infectious to others. And that rate we'll call alpha. So we can remove exposed people at a rate alpha, and when they're removed they become infected people.
And so then the number of infections grows. So this is a very closely-related model that we could solve. And in order to couple this to airborne disease-spreading, we need to go back to our model of the airborne pathogen. So, for example, the virions per air volume, which we called c. And, in fact, I'll even write this with a partial derivative: partial c partial t is the production rate, p, which depends on the size, r, divided by the volume, v, minus lambda_c of r times c. So that was the model we just talked about, which we have to solve. And then the beta of t is the breathing rate times the integral over all these sizes of c of r and t, and then c_i, and of course there could be mask factors, which might also be size dependent, dr. So we tie it together this way. This kind of model, which is typically run without accounting for the size dependence, but that has been done by some authors as well, generally falls under the category of Wells-Riley models. And I mentioned here that Wells actually was a real pioneer here, already starting in the 1930s. He did careful studies of disease transmission, including for flu and other viral diseases, and really was one of the initial proponents of the idea of infectious air, that there are particles suspended in the air which can become infectious. He also was the one who pioneered the Wells curve, which explored evaporation versus settling. And in 1955, he introduced this kind of a model where he was taking into account the balance of the production rate, p, and then the ventilation rate in order to describe the buildup of the infectious droplets in the air. And then the transmission is connected in this way here. And I should say that in Wells-Riley modeling, usually e is ignored, and so we normally just go straight from s to i. So I'll say e is neglected, and what that really means is that this alpha, the incubation rate, is small.
That is, alpha tau is much less than 1. So the Wells-Riley model is essentially the slow incubation limit. But we've written down something more general here, which does also allow for the possibility of exposed people that have not yet-- that could become infectious, so we'll come back to that in just a moment. So let's kind of summarize the results of the Wells-Riley approach. So we'll consider the slow incubation limit, which is alpha tau much less than 1. And so in that case, if there's slow incubation, during the time of the infected person being in the room, there is no time for new people to become infectious. So it's really just like a fixed source. So it's essentially saying that i is equal to i0, which is a constant. So even though the disease is being transmitted, there is no kind of amplification effect because the people who have been exposed are not able to themselves infect others because the incubation is too slow. And this model here, where you sort of couple these equations with the si equations basically, that was done by Gammaitoni and Nucci in 1997. So you can see starting from Wells, we start to move towards this picture of actually coupling the dynamics of the pathogen to the transmission. And there have been a number of models since then which have even taken into account some of these aspects of the different drop sizes and other effects that we've been discussing that were not initially considered by Wells. So that's this. So the first thing to do is to deal with beta of t. So how are we going to do that? So if I put beta of t on the other side here, if I kind of divide it down, you can see that I really would like to define a new differential of a variable t hat, which is beta of t times dt. That would be my definition of a new variable. And so when I integrate this, it tells me that I should really switch to a new time variable, which is not just t, but it's the integral of beta dt. So it's a time-like variable.
And once I do that I'm essentially sweeping this guy into the derivative there. And if I also assume that i is roughly equal to i0, then what I have now is a much simpler equation here. I go from this non-linear equation with time-dependent coefficients to something which is now linear. And we have ds d t-hat is approximately equal to minus i0 s. So, again, our i is roughly constant. Our beta got swept into the time. So this is just kind of a simple exponential relaxation. And the initial value is that s at time equals 0 is n minus i0. So there are n people. i0 of them are the initial infected ones. And so this is a pretty easy equation to solve. It's just an exponential relaxation, which we've already seen a few times before. And so the solution is that s is n minus i0, the initial value of s, times e to the minus i0 t hat. So it's i0, integral 0 to t of beta dt. And then we can also write-- so that's the s, and for the e, since we have slow incubation, this term is gone, and we basically just have that s and e have to add up to n minus i0. And so we're left with e of t is n minus i0 times 1 minus e to the minus q of t, where little q of t is i0 times the time integral of beta dt. And what is this q here? This q can be thought of as the number of infection quanta that are released by the infectors up to time t-- there are i0 infectors. Each one of them is infecting another individual person at a rate beta, and that beta is time-dependent, so you have to integrate in time. And so this is telling me the total number of people that would be infected by those infected people if you weren't running into all of the limits that are described by this model.
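The exponential relaxation above can be checked numerically. A minimal sketch, assuming the transmission rate relaxes as beta(t) = beta_bar * (1 - exp(-lam_c * t)) while the aerosol concentration builds up in a well-mixed room (beta_bar and lam_c are placeholder names for the steady-state transmission rate and the concentration relaxation rate):

```python
import math

def quanta_transferred(i0, beta_bar, lam_c, tau, steps=100000):
    """q(tau) = i0 * integral_0^tau beta(t) dt, evaluated with the midpoint rule,
    assuming beta(t) = beta_bar * (1 - exp(-lam_c * t))."""
    dt = tau / steps
    total = 0.0
    for k in range(steps):
        t = (k + 0.5) * dt
        total += beta_bar * (1.0 - math.exp(-lam_c * t)) * dt
    return i0 * total

def susceptible(n, i0, q):
    """S(tau) = (n - i0) * exp(-q(tau))."""
    return (n - i0) * math.exp(-q)

def exposed(n, i0, q):
    """E(tau) = (n - i0) * (1 - exp(-q(tau))); note S + E = n - i0 at all times."""
    return (n - i0) * (1.0 - math.exp(-q))
```

For this beta(t), the integral has the closed form i0 * beta_bar * (tau - (1 - exp(-lam_c*tau))/lam_c), so the numerical q can be checked against it directly.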
So what is happening is that if someone gets exposed, they can't be exposed again, so there's some numbers there that are changing, so you can't keep passing an infection quantum to somebody and have them get infected over and over. So that doesn't happen. But what this is measuring is the number of times that there's been an attempted infection, or an expected infection if a person were susceptible. So this q of t here is the infection quanta, as defined by Wells, transmitted in time t. So up to time t, there are i0 infectors-- that number's not changing-- and the way you can think about this kind of airborne transmission is they're essentially spewing out infection quanta. Each quantum, if it hits a potentially susceptible person, will infect them, but the number of susceptible people is changing. So it's not like you keep getting more infections. Eventually you run out of people, and you can't keep infecting them, but that's one way to think about this. And we've already calculated this concept of infection quanta in the context of the breathing. We've defined a quantity, which is the number of infection quanta per volume of exhaled breath. We've also talked about the quanta emission rates for people and connected that back to the droplets. And so here's how you see how that quantity enters into the disease-spreading models. And in fact, we can sketch what happens with s and e here. So basically s, for example, starts at the value n minus i0, and as you go in time it decays-- well, the decay rate is basically set by the average beta. So if you're getting close to steady state, then beta inverse is sort of what this timescale is. But what's actually happening is you cut it off at a certain time, tau. So that's the time that you're in the room. In that time, the number of susceptible people has gone down, and the number of exposed people goes up.
But then because the incubation rate is slow, the exposed people never become infectious in this model. So the number of infectors is fixed. And so since we have slow incubation and since the number of infectors is fixed, another way to look at this is notice this non-linear equation, which had s times i, it became i0 times s. So it became a linear response. And so another way to think about this limit is this is the limit of linear response. So the Wells-Riley models are basically linear response models, which are typically not taking into account a growth in the number of infectors, which would lead to kind of an amplification of disease-spreading in a room. And that's often justified because the time someone spends in a room is oftentimes a lot less than the incubation time, which could be on the order of days. People might only spend a few hours in the room. On the other hand, there are situations such as classrooms, long-term care facilities and homes, prisons, workplaces, where people are exposed to each other for days or weeks or months actually. They may go home in between, but there's a constant exposure. And so you may need to worry about these other sorts of dynamics, even in an indoor setting actually. So we'll come back to that. Now another important concept I'd like to get here is what is the early rate of infection. So this quantity here is q. I wrote n minus 1 here. That was the case of a single infector. Actually more generally this should be n minus i0 because oftentimes we're thinking of one infector, so I kind of jumped to that, but more generally it'd be n minus i0. And this quantity here is also s0. That's the initial number of susceptible people. So basically you take the initial number of susceptibles, and they slowly become exposed based on how many quanta they've consumed. Now, the definition of quanta of infection often is tied to this equation. This essentially is the Wells equation, derived in 1955. And Wells actually defined quanta from this.
He said that one quantum of infection corresponds to a probability of transmission of 63% because if you put q equal to 1, 1 minus e to the minus 1 is 63%. So basically he said a quantum-- so if someone asks you what is a quantum of infection-- a quantum is basically a 63% chance of infection. But the problem with that thinking though is that in some other situation where there is maybe incubation going on, I might get more infected people. So I can't just count the infected people in a room and assume that I'm getting a measure of infection quanta. Infection quanta actually are defined by beta. That's an important thing. So really what is beta? It's actually the rate of infection quantum transfer from infectious to susceptible. It's the rate of basically becoming exposed-- so it's from susceptible to exposed. So that rate basically defines beta, and that is giving you the rate of infection quanta transfer. How many people actually get infected or exposed involves solving some set of equations like this, which might not be the same as the Wells-Riley model. So that's an important thing to keep in mind, and we'll come back to that. So Wells' definition of a quantum is basically that there's a 63% chance that it's going to infect somebody, but that's completely tied to this linear response exponential relaxation. It's not really, I don't think, the appropriate definition. Now, another thing we can ask is, what about at early times. And that would be the case where the expected number of quanta transferred is less than 1. So at early times you haven't really seen a lot of infection take place. This will be very important for us because we're going to come to safety guidelines, and in my opinion the right way to think about a safety guideline is you don't want one person to cause one or more infections.
You'd like the expectation to be that less than one person will get infected. So this, what I'm calling early times here, in the epidemiological model is actually very relevant for, like, safety guidelines. You don't want to deal with the case where there's rampant infection. You just want to say if an infected person comes in the room, are they going to infect anybody else. And you want that to be a low probability. So this is a very relevant limit. Now, if I take that limit here, I can expand this exponential, and I find that e of t at early times behaves like s0 times q of t if s0 is the number of susceptibles. And if I plug in what we have here, that's n minus i0 times i0 times the integral in time of beta. And so this is the expected number of people that will be infected by i0 infected people. So the ratio of these two things is very important. So if I look at e at time tau-- I should say we do this integral typically up to tau, which is the time that the person spends in the room-- and we divide by the initial number, then that's basically telling us sort of what's the reproductive number of the room. i0 people come in, and the question is do more than i0 people come out that are infected. So have you infected others? So that's the ratio that you really care about. And so I would call this the indoor reproductive number. And in this case, if we pick i0 equal to 1, then this is n minus 1 times the integral from 0 to tau of beta dt. So that's going to be an important thing, which we'll come back to later, which is the definition of kind of what makes a room actually safe. Well, what we'd like to do is have this quantity be much less than 1 because I want to say if one infector comes in the room and everybody is currently susceptible and healthy, I want to make sure that less than one person actually gets infected. And there'll be some tolerance maybe on how low I'd like that to be, but that's a very important concept.
And here you see it comes out of the slow incubation limit. What I'd like to show you next is that the very same indoor reproductive number actually occurs for any model including the opposite limit of fast incubation.
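A minimal sketch of the indoor reproductive number in the limit where beta is roughly constant at its steady-state average, together with Wells' one-quantum probability (the numeric values below are illustrative, not calibrated):

```python
import math

def indoor_reproductive_number(n, beta_avg, tau, i0=1):
    """R_in = (n - i0) * i0 * beta_avg * tau, assuming a constant average
    transmission rate beta_avg over the occupancy time tau."""
    return (n - i0) * i0 * beta_avg * tau

# Wells' definition: one quantum of infection <=> 1 - e^-1, about a 63% chance
p_one_quantum = 1.0 - math.exp(-1.0)

# "Safe" in this sense: expect fewer than one secondary infection in the room
r_in = indoor_reproductive_number(n=10, beta_avg=0.01, tau=2.0)
safe = r_in < 1.0
```

With 10 occupants, beta_avg = 0.01 per hour, and a 2-hour stay, R_in is 0.18, comfortably below the threshold of one secondary infection.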
Transfer_of_respiratory_pathogens_Chapter_1_overview.txt
PROFESSOR: In Chapter 1, we'll study the transfer of respiratory pathogens. Specifically, we'll consider bacteria and viruses, which originate in the lungs or in the pharynx in the body and are transmitted by forming droplets during the act of exhaling-- by breathing, or speaking, or other respiratory activities-- and we'll study the fate of those droplets. Will they shrink by evaporation? Will they fall and sediment onto different surfaces? Or will they be inhaled by a susceptible person and thereby transmit the infection?
Beyond_the_wellmixed_room_Forced_convection.txt
PROFESSOR: So let's begin to examine our assumptions having to do with the transport of respiratory droplets in a well-mixed room by thinking about effects of fluid flow and transport going beyond a well-mixed room. So to begin thinking about this problem, it's instructive to think of examples of simpler flows, in particular the canonical problem of flow past an object-- for example, a cylinder. Here I've shown a cylinder placed in a uniform flow field with increasing speed and fixed object size. So as you can see, at low speeds, the streamlines are fairly reversible and simple looking. And at high speeds, you end up with very complicated turbulent flows. A very simple parameter controls the transition between these flow regimes, which is the Reynolds number. This is a dimensionless quantity, which includes a property of the fluid-- the kinematic viscosity, for example, of air. And I'll just write this: the kinematic viscosity. And it has a measure of the flow speed. So we'll call that U. So the magnitude of the background flow here is U. And it has some information about the geometry, in particular the size of the object. So for example, we could define this based on, well, let's just say, some length scale L, which could be, for example, the radius of the cylinder. And in the case of this cylinder that I've shown here, if the Reynolds number is less than around 10, you're in this regime here of so-called creeping flow where you can see the streamlines are very simple looking. And they are reversible. So they kind of smoothly conform to the object. And you can run the flow in the backwards direction. You'll get exactly the same streamlines. But once you get to a Reynolds number of around 10, then you start to see different effects.
And that's because the physical interpretation of this quantity is the ratio of inertia, which is the tendency for fluid to want to keep moving in the same direction due to its mass that has been set into motion, relative to viscosity or to viscous stresses. And that's the tendency for the fluid to have some friction with itself. If you start to move a piece of the fluid, other elements of fluid nearby are pulling back. There's some friction. And it impedes the motion. So whenever inertia is getting bigger than viscous stress, we start to have a tendency for the fluid to want to keep going in the same direction. And it can lead to very complicated flows. In particular here, the first thing that starts to happen is the fluid kind of whizzes past the cylinder and then starts to have a recirculation on the back side. So this happens around Reynolds number equals 10. So if you have Reynolds number greater than around 10, you have some vortices created. So it's no longer an irrotational flow. It has some obvious closed streamlines. If we increase the Reynolds number further, like in this situation here, we get to Reynolds number bigger than around 90. Then those vortices themselves are starting to spin fast. And they also have some inertia. And they start to separate. Also the fluid is pulling those vortices away. So we have vortex shedding. And that happens in an unsteady fashion. So what happens is there's-- one of these two vortices goes unstable first and peels away and starts to move downstream. The other one kind of takes its place. And it's almost like a flapping flag kind of motion, where a train of vortices is released. So this is an unsteady situation. And this is called a vortex street. But it's basically an array of vortices that are being released in a time-dependent fashion. And at first, that's a fairly regular process when the Reynolds number is on the order of 100.
But if you keep increasing the Reynolds number, that process becomes more and more chaotic until there's a transition to turbulence, which is a fully chaotic, heavily-mixed flow. And that happens through an instability around a Reynolds number of 2,000, where you have a turbulent wake. So behind the object or the obstacle, there is a steady stream of turbulence where, as I've tried to sketch here, you have vortices and eddies of all sizes. So there are eddies like I've shown here that are at the scale of the cylinder, but then much, much smaller ones, too. So it's a very complicated, time-dependent flow field. So we can see here, the Reynolds number has a big effect on the types of flows that are generated and obviously also on mixing as you go from low to high Reynolds number. So let's think about how that would change in the setting of indoor air. So let's look at flow. Let's look at airflow in a room. So let's think about the different scenarios we could have. So maybe our first scenario would be we have all the windows closed. There's no movement in the room. It's essentially a still room. But as we've discussed earlier, there's still some air change with the outside. So air is still leaking to the outside. There's still a little bit of instability and movement also from thermal effects, which we'll talk about shortly. So there is a little bit of flow in the room. And in particular, let's just ask ourselves what happens if we have a little bit of air exchange with the outside, which is this flow rate Q that we've discussed, but in the case of natural ventilation with closed windows. And in that situation, we've used as an estimate that the air change time, or air change rate, might be on the order of 0.3 air changes per hour, which corresponds to 3 hours' time to have the room air changed. So that's a pretty slow pace.
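The regime boundaries quoted above for flow past a cylinder can be collected into a small helper; the thresholds are the approximate values given in the lecture:

```python
def flow_regime(Re):
    """Classify flow past a cylinder using the approximate thresholds
    quoted in the lecture."""
    if Re < 10:
        return "creeping flow"      # reversible streamlines conforming to the object
    if Re < 90:
        return "attached vortices"  # closed recirculation behind the object
    if Re < 2000:
        return "vortex shedding"    # unsteady vortex street
    return "turbulent wake"         # fully chaotic, heavily mixed flow
```

For instance, flow_regime(110) returns "vortex shedding", which is the regime invoked below for the nominally still room.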
If you consider a room whose height is 2.7 meters, just to put a number on it, and you have that air change rate, then the average velocity in the room is on the order of 1 meter per hour. So that's a pretty slow pace. So to go 1 meter, it's going take a whole hour, so very slow. So you think, not much interesting is happening. But if you calculate the Reynolds number for this situation, the Reynolds number is actually 110 with those numbers. And that's already putting us in the regime of not only forming vortices, but also having some vortex shedding. So even when a room is rather still, and there's just some very gentle movement of air out the windows, or cracks around the windows and other places where the room may not be tight, or from some other minor movement going on in the room, we already expect to see some unsteady vortices and movement of air in that room. But the situation gets more-- gets much stronger if we now move to having ventilation. So let's imagine that we have an HVAC unit on top-- maybe, let's say, an air conditioner, which is blowing air into the room perhaps from somewhere above. So now we have a flow which is going into the room. It still has to leave somewhere, let's say, through an outlet vent. And in this case, the flow rate might be a lot higher. So let's imagine, now, we have lambda a would be, let's say, 8 air changes per hour. And this would be looking at a typical case of mechanical ventilation. And what we really care about here, by the way, is the total flow rate. So we're not interested in just the outdoor air. So what I really want, I should mention, is lambda bar a, where that is the total air change. So lambda a bar, we'll define as the air change due to just the fresh air-- so that was the Q over V-- plus the air change due to the filtration flows. So there is sort of this recirculating flow-- and we've written that as Q-- the outdoor airflow plus the filtration airflow divided by volume. 
So I just want to make sure we throw that in there, because you can get some flow going also by having a HEPA filtration unit in your room. And there's circulation going on from that. And that does contribute to mixing. And so if we now calculate what is the Reynolds number, we're now getting up to around 2,000. And that's interesting, because that is already getting to the regime of turbulent flow. So just the velocities and flows that are generated in the air from typical air conditioning or even gentle fans will lead to turbulent flows in the room. And turbulent flows are very effective at mixing. So that's one reason we might expect to see strong mixing. So what this actually looks like is-- imagine there's a person in the room. There could be a table and a chair, some kind of obstacles in the room. And even if nobody's moving, just the flow through the room is causing some turbulent wakes and vortex shedding. And there can be some circulations, such that you have some fairly significant inertial effects leading to mixing in that system. Now, we can also ask ourselves, what about other ways that momentum is imparted to the fluid in a room with people in it? Well, it could be, for example, human movement. So let's think about what the Reynolds numbers might be for that. So if we have human movement-- let's say, for example, I'm moving my arm, or I'm moving my head. I'm not even talking about running or really moving fast. But let's just think about this kind of a motion. I might be moving with a velocity anywhere from 10 centimeters, or 0.1 meters, to 1 meter per second, right? So I could easily go 10 centimeters in 1 second. But I could maybe go a little faster. And I'm moving a part of my body, which might have a length scale from, let's say, 10 centimeters to 1 meter. So I'm moving maybe my arm or my hand or my head.
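The total air change rate with recirculating filtration can be sketched like this (the flow rates and room size below are illustrative assumptions, not the lecture's exact numbers):

```python
# Total air change rate: lambda_bar_a = (Q_outdoor + Q_filter) / V,
# and the Reynolds number of the ventilation-driven flow it implies.
# All numerical values here are illustrative assumptions.
NU_AIR = 1.5e-5                      # kinematic viscosity of air [m^2/s]

def total_ach(q_outdoor, q_filter, volume):
    """Total air changes per hour from fresh air plus filtration flow."""
    return (q_outdoor + q_filter) / volume

volume = 5.0 * 5.0 * 2.7             # room volume [m^3]
q_outdoor = 270.0                    # outdoor airflow [m^3/h]
q_filter = 270.0                     # recirculating HEPA flow [m^3/h]

lam_bar = total_ach(q_outdoor, q_filter, volume)   # [1/h]
height = 2.7                         # room height [m]
u = lam_bar * height / 3600.0        # characteristic speed [m/s]
re_vent = u * height / NU_AIR
print(f"lambda_bar ~ {lam_bar:.1f}/h, Re ~ {re_vent:.0f}")
```

With 8 total air changes per hour this gives a Reynolds number on the order of a thousand or more, approaching the turbulent regime described above.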
So if we take this range of values here and ask, now what's the Reynolds number, then the Reynolds number is around 10 to the 3 to 10 to the 5. And you know this. If you have a room where there's some smoke or some dust particles that you can see in the sunlight, just take your hand and move it like this, even fairly gently. You're going to see very complicated turbulent flows in the wake of that movement. So basically, anything going on in the room in terms of movement is leading to substantial complexity in the flow fields, which contributes to mixing. Another one to think about, which we will come back to, is human respiration, so the fact that you're breathing. So we will come back to this term more carefully. But we can still think about it in terms of the Reynolds number right now. So if you imagine just the flow that is leaving your mouth-- in this case your length scale is on the order of 1 centimeter. So maybe the opening of your mouth is several centimeters across. And the velocity of your breathing depends on how heavily you're breathing and what your activities are. So if I write this as the velocity U, and that velocity scale is around 0.5 to 2 meters per second-- that's a typical respiratory velocity-- then now we find the Reynolds number is on the order of 10 to the 3 to 10 to the 4. So again, all these activities of breathing, motion, anything humans are doing in the room, even just sitting there and breathing, is leading to Reynolds numbers locally that are on the order of thousands or tens of thousands, which means that we are seeing turbulent flows in the vicinity of those motions. So that doesn't mean that those flows are enough to mix the entire room. But certainly in the vicinity of a person, there's a lot of mixing just from the natural movement and respiration of that person.
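The movement and breathing estimates can be bracketed in a few lines. This is a sketch; the velocity ranges are the rough values quoted above, and the mouth-opening scale of 1 to 5 centimeters is an assumption:

```python
# Order-of-magnitude Reynolds numbers for human movement and breathing.
NU_AIR = 1.5e-5   # kinematic viscosity of air [m^2/s]

def reynolds_range(u_lo, u_hi, l_lo, l_hi, nu=NU_AIR):
    """(min, max) Reynolds number over ranges of speed [m/s] and size [m]."""
    return (u_lo * l_lo / nu, u_hi * l_hi / nu)

movement = reynolds_range(0.1, 1.0, 0.1, 1.0)    # arm/head motion
breathing = reynolds_range(0.5, 2.0, 0.01, 0.05) # flow leaving the mouth
print(f"movement Re: {movement[0]:.0f} to {movement[1]:.0f}")
print(f"breathing Re: {breathing[0]:.0f} to {breathing[1]:.0f}")
```

Both land in the hundreds to tens of thousands, consistent with the locally turbulent flows described in the lecture.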
And when you add into that the mechanical ventilation, there can be very significant inertial mixing going on in the air of a room. So in this simulation, courtesy of Saint-Gobain Ceramics and Plastics, we see an office space containing a number of workers sitting in cubicles. And one of them is an infected person emitting infectious aerosols. And the simulation shows you how those aerosol particles are transmitted through the room by forced convection, where the orange and white squares represent ducts of inflow and outflow in the mechanical ventilation system. This video shows the effect of motion and also respiration leading to turbulent flows by forcing the air to move at a high Reynolds number. So in the case of motion, we see a turbulent wake behind the moving person. And for each breath, we see a turbulent plume emitted by the momentum transferred from the breath. These images are taken by a schlieren imaging method that looks at differences in the density of the air and visualizes the texture of the patterns that are formed in the density.
MIT RES.10-S95 Physics of COVID-19 Transmission, Fall 2020
Safety guideline for COVID-19: Reopening schools or businesses
PROFESSOR: So now that we've discussed prevalence and risk scenarios, we come to a very important topic, which is, when should you impose the guideline? Obviously, when the epidemic is raging at a very high rate, it feels out of control, it's spreading, we want to impose the guideline that we've discussed, which limits the indoor reproductive number for each space within some tolerance. And that would, for example, give us some number N1 from the safety guideline, R N less than epsilon, which is going to be smaller than the normal occupancy of that room for a given time and all the other factors that we've been discussing. But what happens as the prevalence of infection goes down? Generically, there must be some curve like this. And we'd like to understand what is the simplest reasonable model we can come up with that can tell us how to relax the restrictions. So for example, let's think of a school or a business, where we typically have a lot of the same people there every day, but some others are coming and going, or maybe going home and getting infected and coming in. So the rate of infection is low. So we expect typically the number of infected people is zero or occasionally one. And as the prevalence goes down, we start to see that the situation is getting safer and safer, so that there is a certain point P1 where we start to say, you know what, we can actually increase the occupancy of the space with everything else held fixed while still wearing masks. And then we hit the normal occupancy. And you might call that the new normal, where we're going about our business, the room is filled with the typical number of people. Let's say, the classroom is back to its normal size. We don't have any remote teaching going on. But we're wearing masks or taking other factors into account, such as higher ventilation rates, let's say, open windows. But then we continue lowering the prevalence. There's a certain point where we get rid of those other precautions.
So no more extra open windows. Or more importantly, the dominant effect is the removal of the mask, because we know that's a significant factor. And then you might call that back to the real normal, actually, not the new normal, where we are back to full occupancy and really not taking any precautions. That's going to happen at a rather low prevalence. But we hope that that time will eventually come and hopefully not so far in the future. And I mention here a very important point we've not talked about yet in this class but we should keep in the back of our minds. When I talk about occupancy, I'm really talking about the number of susceptible people. We just saw that in the last couple of boards. But the number of susceptible people are really only those that are not immune to the disease, for example, by vaccination. So as more and more people become vaccinated, this occupancy number might change-- for example, let's say a typical occupancy would be 25 people in a class. As more and more people are vaccinated, the number that we plug into this formula here might actually be lower when we make decisions, because there are fewer and fewer susceptible people that are left. So that's another very important factor to keep in mind. Of course, also, vaccination has the indirect effect of lowering the prevalence that is seen in the population as we start to stamp out the epidemic. So we also have that effect. So I won't talk about that any further now but come back to this calculation based on the risk scenario, the second one that I talked about last time, where we take into account prevalence. So let's look at the three different cases here. So the first one is where we have restricted occupancy. This is where we've decided there's an N1, which is epsilon over average beta tau. So all the physical parameters are buried in there. And this is going to be something less than the normal occupancy, N0. Now what is the tau we want to think about?
Well, if we have a school or a business, this would be the cumulative time that people spend together, to the point where the number of days they spend together is, let's say, on the order of a week-- that would be a reasonable number to think about. Because we can write tau as the typical hours per day times some kind of maximum number of days. This maximum number of days could be set by, for example, the testing frequency. For example, here at MIT, we are testing our entire population at least once a week in order for anyone, including myself, to be admitted to the campus. And so we are definitely testing within a week and catching new infections at that rate. It could also be motivated by the incubation time, which is the time to show symptoms. And most people will remove themselves. And we know that's around 5.5 days, a typically reported value. So again, on the order of a week. And there are also, of course, other ways that people are removed, or they recover. So there's removal and recovery, which is another way-- if you start to go more than, let's say, two weeks, we start to think an infected person that didn't get removed or end up in the hospital has probably recovered. So if we think of a certain number of days and hours per day, that gives a tau that is going to go into this formula. And actually, I should also mention that for simplicity here, technically this should be N1 minus 1. And I can either include that or not when I do this calculation. But I'm generally thinking of N1, which is going to be bigger than 1. So think of an occupancy of 10 people in a classroom, or something, might be a limit that we would be interested in considering. But certainly we can put the 1 in there if we want to. So now let's ask ourselves, how would we start to reopen the space once we've decided on a safe occupancy during the greatest level of restrictions? So that would then lead us into a phase of relaxing restrictions. And this would still be with masks.
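The restricted occupancy formula is simple enough to sketch directly. In the snippet below, the values of epsilon, the average beta, and tau are made-up illustrative numbers, not recommendations from the lecture:

```python
import math

def restricted_occupancy(epsilon, beta_avg, tau):
    """N1 = epsilon / (beta_avg * tau): occupancy limit under full restrictions.
    epsilon: risk tolerance on the indoor reproductive number;
    beta_avg: average transmission rate [1/h]; tau: cumulative time [h]."""
    return epsilon / (beta_avg * tau)

tau = 6.0 * 5.0          # e.g. 6 hours/day for 5 days, set by weekly testing
epsilon = 0.1            # risk tolerance, assumed
beta_avg = 3e-4          # average transmission rate [1/h], assumed

n1 = restricted_occupancy(epsilon, beta_avg, tau)
print(f"N1 ~ {math.floor(n1)} people")   # round down to stay on the safe side
```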
So keeping in mind that masks are an essential part of achieving a reasonable occupancy when the pandemic is high and there's a lot of prevalence, we would only start to relax occupancy first before we take away the suggestion to wear masks. And that would then be the last step. So for relaxing restrictions, we're then going to be interested in the indoor reproductive number being less than a rescaled value, which would be epsilon over P i Q i N. And the indoor reproductive number, remember, involves N minus 1, but it's approximately N times beta tau. So I've replaced again N minus 1 with N just to get a simpler formula. And so notice now, N is on both sides of the equation. If I want to solve for the value N2, which is this yellow curve here, I'm actually going to have to put the N's on one side and take a square root. So my N2 then would be the square root of epsilon over beta tau times P i times Q i. And remember also, another approximation here is that P i is definitely much less than 1. We're looking at a very low prevalence. And so also, therefore, Q i is basically tending to 1 because it's 1 minus P i. And so that factor is really not that important. And notice also, epsilon over beta tau, that's N1. So N2 is approximately related to N1 by the square root of N1 divided by P i. That's this number here. So as a function of prevalence, this yellow curve goes as 1 over the square root of prevalence. And one nice thing about writing it this way is that I can decide on a reopening protocol without actually redoing my calculation with all those complicated variables, including the risk tolerance epsilon, and all the factors that go into beta, because I've lumped them into N1. What I'm saying here is that we've already done a calculation and decided to impose a certain occupancy restriction on a certain space based on principles that we've been discussing in this course.
But now as prevalence goes down, according to this simple formula, we would use N2 whenever N2 is bigger than N1. So basically, when these two curves cross, as you get to a lower prevalence, you switch to N2 and start increasing the occupancy. And you do that until you get to N0. This is the relaxing of restrictions. So we start relaxing restrictions when the prevalence equals P1. That would be when N2 is equal to N1. And so that would be when P1 is 1 over N1. So essentially, that's when you expect to find one infected person. Up here, when you go below this, you're saying, well, it's actually unlikely that during the time tau we'll even get one infected person. And that's when we start to relax. So that's the first crossover point. And there's a second crossover point, P2, which is when we hit the saturation point and have reopened, in some sense, to the full normal situation. That would be when N2 is equal to N0. That's the time P2 here. So we set N2 equal to N0 and solve for P i, and you can see that we get N1 over N0 squared. So that is the place where I then switch, and now I'm going to cap the occupancy at N0. So maybe to summarize here, what I would say is that the occupancy should be less than or equal to N1 for P i greater than P1. It'll be N2, which depends on P i, for P i between P2 and P1. And then as the prevalence gets lower, we go to full occupancy, N0, when P i is less than P2. So this is basically the full curve of reopening.
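Putting the three regimes together gives a simple piecewise occupancy curve. This is a sketch of the formulas on the board, with illustrative values of N1 and N0:

```python
import math

def safe_occupancy(p_i, n1, n0):
    """Occupancy limit as a function of prevalence p_i.
    n1 = epsilon / (beta * tau) is the restricted occupancy,
    n0 is the normal occupancy, and N2(p) = sqrt(n1 / p) in between."""
    n2 = math.sqrt(n1 / p_i)
    # Clamp: N1 at high prevalence, N0 at low prevalence.
    return math.floor(min(n0, max(n1, n2)))

n1, n0 = 10, 25          # illustrative restricted and normal occupancies
p1 = 1.0 / n1            # crossover: start relaxing restrictions
p2 = n1 / n0**2          # crossover: reach full (masked) occupancy
print(f"P1 = {p1}, P2 = {p2}")
for p in (0.2, 0.04, 0.01):
    print(f"prevalence {p}: occupancy {safe_occupancy(p, n1, n0)}")
```

Note that the crossover prevalences follow directly from the clamping: the sqrt branch meets n1 at p = 1/n1 and meets n0 at p = n1/n0².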
And then the final decision to make is, when do we return completely to normal and take away certain restrictions we've imposed? So here I mentioned masks. We could also include in this calculation relaxing other restrictions, such as maybe not having the ventilation on quite so high. So that would be when we finally go back to no restrictions of any kind. We're back to normal. So this means no masks, no other precautions, full occupancy. So in this case, R N is going to be less than the same bound as before. We have this P i times N that we just looked at, or technically, times Q i. But now we have another factor, P M squared, because compared to the case with no masks, we know from the guideline that the effect on beta just gets rescaled by P M squared. So technically that's in R N here. But now we have this extra factor. You can think of it like a rescaling of epsilon. And so what that's going to do for us then is that there's another curve that goes like this, which is just like this one but shifted by a factor of P M squared. It's like the N2 curve with no masks, and the other curve is with masks. And there's a rescaling factor, which really has to do with the remediation that you've done. And in this case, P3 would then just be P M squared times P2. So if our P M is a factor of 10%-- let's say masks are letting 10% of infectious droplets get through-- then 1 over P M squared is a factor of 100. So then we would wait until the prevalence is 100 times smaller before we finally allow people to remove masks and be at full occupancy. And you could make a similar calculation for other types of restrictions. And in fact, you can calculate such a curve for a given room, given scenario of human behavior and interventions, such as filtration or ventilation. And what the theory allows you to do is, of course, recalculate N1.
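The final back-to-normal threshold just rescales P2 by the squared mask penetration. A small sketch, with illustrative values carried over from the hypothetical N1 = 10, N0 = 25 example:

```python
def back_to_normal_prevalence(p2, p_mask):
    """P3 = p_mask**2 * p2: prevalence below which masks can come off.
    p_mask is the fraction of infectious droplets a mask lets through."""
    return p_mask**2 * p2

p2 = 0.016        # full masked-occupancy threshold, assumed from N1=10, N0=25
p_mask = 0.1      # 10% mask penetration, assumed
p3 = back_to_normal_prevalence(p2, p_mask)
print(f"P3 = {p3}")   # a factor of 1/p_mask**2 = 100 smaller than P2
```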
And then you can recalculate N2 as well. And so you can say, well, I don't like this curve. I would like to try to reopen my school sooner. How would I do that? Well, I know that if I make various interventions, I can raise the pink curve. So I could end up somewhere here, let's just say. This might be with safety interventions. Actually, one such intervention, by the way, is masks themselves. Because if I follow this curve all the way down here, there's some curve down here, which is no masks-- the safety guideline with no masks. And if I turn on masks, I go up. But when I make that intervention, notice that P2 also scales like N1, and so that's actually moving in this direction. And so I essentially move this yellow curve. And so I'm now going to say, well, with a different set of interventions, I can make the room safer. And what that does, it gives me more people in the room. But it also means that I change when I make the decision to reopen. And in particular, I can get myself to full occupancy at a higher prevalence because the room is actually now made safer. But on the other hand, this switch here was at 1 over N1, so this part shrinks a little bit as this ultimately goes up to full occupancy. So basically, I think compared to the current situation, or the typical situation where policymakers are making decisions based on something like the six-foot rule and a somewhat arbitrary feeling about what is a high prevalence-- is it 1%, is it 0.1%-- and we decide, OK, now we can close our schools or reopen our schools or set the occupancy at half, the guideline tells you how to set occupancy for the worst-case scenario, when the pandemic is very prevalent in society. But now also, through these kinds of calculations, we can make rational decisions about how to reopen. I'm not advocating necessarily for the exact formulas we find on the board here.
But the principles I'm showing you could lead to quantitative and scientifically justifiable ways of taking a specific space and a specific usage of that space and deciding how to close, as prevalence goes up, and reopen, as prevalence goes down, including ultimately returning to normal and removing masks and all other forms of precaution as the epidemic disappears.
Safety guideline for COVID-19: Transient transmission rate
PROFESSOR: So in order to use our results for the analysis of the well-mixed room in the safety guideline, let me just summarize the results for the general case of transient build-up of aerosols in the room and the associated transmission. So here is the result that we derived earlier, which is that the transmission rate as a function of time is an integral over all the droplet sizes. And then you have here the mask filtration factor, which depends on size, P_m. You have the breathing rate Q_b that comes in squared, because there's one person breathing out, another person breathing in. You've got the volume of the room. You have here the relaxation rate for the concentration of aerosol in the room, which here is given by four factors, for ventilation, sedimentation, filtration, and deactivation. And then for the production of aerosols, there's this n_q(r), which is the number of exhaled infection quanta per volume of air leaving the breath, per radius, because it's still resolved by the different droplet radii. And this has several contributions. It has n_d(r), which is the distribution of droplet sizes, and V_d(r), which is the volume of each droplet. These depend on the respiratory activity. C_v is the viral load, which we are typically assuming is near the maximum when we're concerned about controlling spreading. And C_i(r) is the infectivity per virion, which, as we have discussed before, also may have a size dependence and is most likely higher in the aerosol droplets. So that's the general solution. And the safety guideline we just discussed has in it the time-averaged beta, so beta with the two brackets, which is the integral in time of beta divided by the time tau. So you break that into two parts. When we integrate this here, the first part gives you a steady state term, and the integral is shown right here.
And that's basically, ultimately, the average that remains. But initially, when the infected person first walks in the room, there's a time to build up the concentration, which only lowers the transmission rate. So the average transmission rate is always less than the steady state. You're approaching the steady state from below because you need the time to build up those droplets. And so the DELTA beta here, which is that correction, takes the following form. What you can do is take the integral over time and switch places with the integration over r, doing the time integral inside the integral. And so that allows you to get-- instead of lambda_c here, you get lambda_c squared, and you get the following expression for the DELTA beta. It may not be obvious looking at it, but if you take a look at tau going to 0, this expression leads to just beta bar. So DELTA beta of 0 is beta bar, and that's because if you take this exponential here and you go to small times, you can linearize that and find it's lambda_c t. So that cancels one factor of lambda_c and one factor of tau, and you end up with just a single lambda_c, as above. So what that means is that the average beta, which we're plotting here as a function of the time tau, starts out at 0. It ramps up and then eventually approaches a steady state, and here's the full solution. So all the information that we've talked about before in terms of filtration, sedimentation, and other phenomena in the well-mixed room is all included in this framework and can then be put into the safety guideline to derive a general safety guideline that has all of the physics that we want in there and allows you to define a safe occupancy for a room.
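In the simplest case, where the correction is dominated by a single relaxation rate lambda_c, the time average can be sketched in closed form. This is a monodisperse simplification of the full integral over r, with illustrative parameter values:

```python
import math

def beta_avg(tau, beta_bar, lam_c):
    """Time-averaged transmission rate for beta(t) = beta_bar*(1 - exp(-lam_c*t)).
    Starts at 0, grows like beta_bar*lam_c*tau/2 for small tau, and
    approaches the steady state beta_bar from below as tau >> 1/lam_c."""
    x = lam_c * tau
    return beta_bar * (1.0 - (1.0 - math.exp(-x)) / x)

beta_bar, lam_c = 1.0, 2.0    # illustrative units: per hour
for tau in (0.01, 0.5, 2.0, 20.0):
    print(f"tau = {tau:5.2f} h: <beta> = {beta_avg(tau, beta_bar, lam_c):.4f}")
```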
Beyond the well-mixed room: Aerosol transport
PROFESSOR: So we've just discussed how inertia and buoyancy forces can destabilize the fluid and lead to complex flows of air in a room. But now, let's think about how those flows will influence the transport of aerosol droplets which transmit disease. So maybe a simpler way to think about it is imagine we have some object which is releasing a substance, which could be particles containing virus. It could also be heat or other chemical release. And that's in a flow field similar to the types of flow fields that we've been discussing. So that's the problem of forced or natural convection-- convective transport of particles. So here I show the problem of flow past a cylinder. And let's say the cylinder is a hot cylinder, which is releasing heat into the fluid, which I've kind of sketched by this red region here. Now if the flow is fast compared to the diffusion of heat, you would expect a situation like this, where there's a thin boundary layer of heat transfer along the front end of the sphere. But on the trailing end, there might be a wake of hot fluid that's been carried away. And that is indeed what happens when we're in the regime of dominance of convection over diffusion. And there is a dimensionless number which describes that, which is the Peclet number. So the Peclet number is defined as the velocity times a length scale divided by D, where D is the mass diffusivity. Now normally, D is the molecular diffusivity, or it's the diffusivity of, let's say, the droplets or whatever the individual particles are. Now, if you want to think about diffusion of a gas-- so if we have gas molecules, for example, like CO2 or oxygen in the air-- then the ratio of kinematic viscosity to diffusivity of the gas molecules, which is called the Schmidt number, is also around 0.7 for air. So basically molecules and momentum are diffusing at about the same rate.
And so that tells us that for the molecules of the gas, the Peclet number is actually the same order as the Reynolds number and, in fact, is very large. If we have a larger object, such as maybe a droplet, we have, of course, much lower diffusivity than the air molecules-- and it's a different, collisional process. But we're still dealing with very large Peclet numbers, which will lead to wakes, as I've just described here. So it depends on the details and the size of the particles. But for the gas, I just want to write here that the Peclet number is on the same order as the Reynolds number and hence is very much larger than 1 in many cases. Now the thing is that the Peclet number is measuring the importance of convective transport relative to diffusion. And so because the Peclet number is typically large, we have a dominance of convection over diffusive processes. So think of our aerosol droplets. They do diffuse in the air. And we can calculate that with the same Stokes-Einstein formula that we've used earlier. But that diffusion rate is very slow compared to the sort of convective processes that are occurring in the room. And so we typically have a high Peclet number, and we see this kind of behavior. On the other hand, we're also in the regime of high Reynolds number. And that really changes things, because then the diffusion due to random fluctuations and collisions with other molecules is overwhelmed by the transport, and diffusion-like transport, that follows from vortices and turbulence. So if I take that same cylinder and look at it now at a higher Reynolds number, then we can see that the wake is not a thin little tail that extends downstream, but it's really a turbulent wake, where the warm fluid in this case is dispersed everywhere throughout that wake. In fact, it's fairly uniformly mixed.
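A quick numerical comparison of the Peclet numbers for gas molecules versus a micron-scale droplet, using the Stokes-Einstein diffusivity mentioned above (the flow scales and droplet size are assumed values for illustration):

```python
import math

K_B = 1.380649e-23       # Boltzmann constant [J/K]
MU_AIR = 1.8e-5          # dynamic viscosity of air [Pa*s]
NU_AIR = 1.5e-5          # kinematic viscosity of air [m^2/s]

def stokes_einstein_d(radius, temp=293.0):
    """Diffusivity of a small sphere in air via Stokes-Einstein [m^2/s]."""
    return K_B * temp / (6.0 * math.pi * MU_AIR * radius)

def peclet(u, length, diffusivity):
    """Pe = U * L / D: convection versus diffusion."""
    return u * length / diffusivity

u, length = 0.1, 1.0                     # modest indoor flow scales, assumed
d_gas = NU_AIR / 0.7                     # from Schmidt number Sc = nu/D ~ 0.7
d_drop = stokes_einstein_d(1e-6)         # 1-micron aerosol droplet

print(f"gas:     D ~ {d_gas:.1e} m^2/s, Pe ~ {peclet(u, length, d_gas):.1e}")
print(f"droplet: D ~ {d_drop:.1e} m^2/s, Pe ~ {peclet(u, length, d_drop):.1e}")
```

The droplet diffusivity is many orders of magnitude smaller than the gas diffusivity, so its Peclet number is correspondingly enormous: convection completely dominates.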
A similar situation occurs in the case of breathing, coughing, sneezing, and other forms of respiration, which we will come to shortly, where we have a relatively high Reynolds number flow, often turbulent, and we are injecting into it some respiratory aerosol particles. And they're quite well mixed across that jet. So we can see there's a very strong coupling between the fluid flow that we've just discussed, which is often turbulent and containing vortices and eddies, and the transport of suspended particles and droplets. So how can we think about this problem? The important thing is, when we get into this turbulent regime, how does an individual particle move? It kind of follows the flow. So it goes first through little vortices, occasionally around a big vortex, and then through some little ones again. And so it's also doing a random walk, but it's one that's driven by the turbulent flow itself. And the length scale for the steps of the random walk is actually the vortex size. And in particular, it's dominated by the largest vortex. So whatever the flow is, there's always a certain scale which sets the size of the largest vortex. And so that leads to the concept of so-called eddy diffusivity in turbulent flows, which is also important in air flows. So the eddy diffusivity is an effective diffusion-like parameter that describes the mixing and spreading of suspended particles or droplets in a flow field that is turbulent. And in that case, we can write it two ways. So either we have an imposed velocity u and we have a length scale l, and then our eddy diffusivity might be written as u times l. So u is distance per time, l is distance, so it's distance squared per time. So that has the units of a diffusivity. And the way to interpret this is basically swirling around eddies of a size l with a characteristic velocity u.
Another way to write this would be that it's l squared over 2 times a timescale, because that's how we often write diffusion: if there's a time step tau and a length scale l for a given step, then l squared over 2 tau is the diffusivity. That's how we think about molecular diffusivity as well. But the question is, what is this timescale? So we can see here that the timescale is l over u. So it's a convective timescale. It's the time to, essentially, go around one of those eddies. And I'm writing these really just as scaling arguments here. So if we look at these flows, what are the relevant length scales? So at the beginning of this flow here, the length scale is that of the object. In the case of the breathing, the length scale is initially that of the mouth opening. But then we form these turbulent structures that expand. And a concept we will return to shortly is that the relevant length scale as you continue here is actually something which depends on position. So as this thing grows, the eddies are getting bigger and bigger, and so the transport by diffusion is also getting faster and faster, which sort of maintains a fairly uniform concentration across that space. And that brings us then to what happens in the whole room. So we've been very interested in mixing in the whole room. And so a natural picture here is to say, well, if the room has a height H, then the eddy diffusivity, which aerosol particles in the room are feeling as long as it's a fairly well mixed, turbulent, more isotropic flow, could be described as the height squared over 2 times a timescale. And the timescale should be that of the effective air change, or the total air change time, basically. So this here, again-- I've written lambda bar a as the outdoor airflow plus the re-circulation airflow, which may be going through a filter, divided by the total volume of the room. So this is the total ACH.
And so what we see here then is that the formula is roughly that we should have 1/2 H squared lambda a bar. And so this is a very simple argument based on the largest eddy being, in a well mixed room, at the scale of the room. And sure enough, this relationship has actually been verified for houses and actual indoor rooms with all the furniture in them, where it's been shown that if you release a passive tracer such as carbon dioxide in the middle of the room and you have the ventilation on at a certain ACH, Air Changes per Hour, lambda a bar, then this formula, even with the 1/2 actually, turns out to be a pretty good approximation for the spreading in that room as you change the size of the room and look at different rooms and also look at different air change rates. So that's, I think, a good starting point for us as well. At least when the room is well mixed, this is a good way to think about transport in the room. Also, we see from this picture that the timescale for mixing is the inverse of the air change rate. So the mixing time is also comparable to the residence time of the air, including re-circulation. So basically, the time it takes for air to typically go through this system is also the time it takes to fully mix the system, roughly the same order of magnitude. And that is the characteristic of a well mixed turbulent room, which could occur by any of the mechanisms we've just been describing. Although the same principles also apply to jets or strong ventilation flows past an object, where you might still have some heterogeneities, as I've sketched here, that we will need to consider. The last point I'd like to come back to is the question of sedimentation. So you may have found it surprising that we describe the flux of sedimenting particles to the ground in a very simple way by just using the Stokes velocity, vs, and multiplying by the area.
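The eddy-diffusivity estimate and the mixing time it implies can be sketched in a few lines (the room height and total air change rate are illustrative assumptions):

```python
def eddy_diffusivity(height, ach_total):
    """D_eddy ~ 0.5 * H^2 * lambda_bar for a well-mixed room.
    height in m, ach_total in 1/h, so D comes out in m^2/h."""
    return 0.5 * height**2 * ach_total

height, ach = 2.7, 8.0       # assumed room height and total air change rate
d_eddy = eddy_diffusivity(height, ach)
t_mix = height**2 / (2.0 * d_eddy)   # diffusive mixing time over the room height
print(f"D_eddy ~ {d_eddy:.1f} m^2/h, mixing time ~ {t_mix*60:.0f} min")
```

Note how the H squared cancels: the mixing time is exactly 1 over lambda bar, which is the statement in the lecture that mixing time and air residence time are comparable.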
So we said the droplet flux out of the room was just the sedimentation velocity of the droplet, which was radius dependent, times the area, the floor surface area. And what's a bit confusing about that at first is that we know the flows are very complex in the room. In fact, if you look at dust particles in a room, as we discussed earlier, in some cases, for sure, you will see them actually rising and not settling. So they may be settling relative to the flow. But the flow is actually convecting them upwards. And here, for example, there's a dominant roll in the flow that I've sketched. The particles over here actually might have a net velocity going up. And over here, they're going down. But if you decompose that velocity field, then in some cases, they're going up due to flow or convection, while they're sedimenting at the rate vs. But then necessarily, because the fluid is approximately incompressible and is returning somewhere-- wherever it goes up, somewhere else it's coming down, we find that in other areas you have vs still pointing down with the same rate and now the flow is going down. And if you imagine a particle that is sampling all the different velocity vectors, the blue vectors, sometimes they're up, sometimes they're down, on average the blue velocity vectors of the flow have to average to 0, or at least near 0. There's not, let's say, a very strong vertical relative motion. Then it's reasonable to assume that the particles will sediment out of a well mixed turbulent flow at a rate given by vs times A. And that is something that has also been validated experimentally for well mixed chambers and rooms.
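This well-mixed flux argument implies a simple first-order loss rate: a flux vs times A leaving a volume A times H gives a rate vs over H. A small sketch, where the settling velocity and ceiling height are illustrative values:

```python
# Sedimentation loss rate from a well-mixed room: particles leave through
# the floor at flux v_s * A, out of a volume A * H, giving rate v_s / H.

def settling_loss_rate(v_s, height_m):
    """First-order sedimentation loss rate in 1/s for settling velocity
    `v_s` (m/s) in a well-mixed room of height `height_m` (meters)."""
    return v_s / height_m

# Illustrative: v_s = 2 mm/s (a few-micron droplet), 3 m ceiling.
lam_s = settling_loss_rate(2e-3, 3.0)   # ~6.7e-4 per second
ach_equiv = lam_s * 3600.0              # ~2.4 "air changes" per hour
```

Expressed as an equivalent ACH, sedimentation of micron-scale droplets can compete with typical ventilation rates, which is why it appears alongside ventilation in the room balance.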
MIT_RES10S95_Physics_of_COVID19_Transmission_Fall_2020
Safety_guideline_for_COVID19_Cumulative_exposure_time.txt
PROFESSOR: So now let's synthesize all of our models of aerosol generation and dynamics in a well-mixed indoor space and epidemiological models of spreading and transfer of disease in that space and formulate a safety guideline for COVID-19 assuming indoor airborne transmission in a well-mixed room. And so there's a number of ways we could go about this. And the approach that I would like to propose is that we require that the indoor reproductive number is less than some tolerance. So this epsilon here is the tolerance. So typically it might be a lot less than 1, or maybe you allow it to be closer to 1. And so that is essentially the probability of a transmission from one infector that enters the room at t equals 0 and stays for a time tau in the presence of a total number of n people in the room, which is the occupancy of the room. So obviously, this doesn't handle every possible situation. It is possible, if there is a very prevalent infection rate in the community, that the room may find several infectors entering at once, in which case we could certainly increase this number. And then the 1 here would be replaced by i0 as we've seen before. On the other hand, if you're interested in controlling the spread of the disease and you're not worried about a specific person, but you just want to say, I would like to make this room not contribute to spreading of the disease-- and imagine if everybody did that, then no rooms would contribute to spreading the disease-- then this criterion would say that if two infectors came, they might infect two people, or with three infectors, you're worried about three people. What we want to make sure is that the number of infectors is not growing. So in other words, if there's one infector, they don't infect another one. So the fact that there are more infectors gives a chance of at least some infection, but this seems like a more general and simpler criterion.
It's also a little bit like when the fire department looks at a room and decides on an occupancy limit for an indoor space. There's no fire going on. They're not estimating the probability of a fire. It's a conditional probability. They're saying, if there's a fire, we want to make sure these people can get out. And so they think about where the fire might start, where the smoke is, where the exits are, and how many people can realistically get out of that room before the fire and the smoke reach a dangerous level. So this indoor safety guideline has a similar flavor, where we're essentially defining here the number of people and also the amount of occupancy time that would be allowable, so that it'd be unlikely for one infector to create a new infection and hence spread the disease. So that is the basic thinking. So this is basically the conditional probability of spreading given that an infector enters at time 0 and stays for a time tau. So that's basically how we formulate the guideline. And now let's think about-- already at this level, without getting into the details of what goes into beta, which we've already talked about, there's a very important concept here, which is that if I define beta with brackets here to be the average transmission rate. So the transmission rate may be changing in time, for example, as droplets are building up in the room as we've discussed. But let's just think of kind of an average transmission rate. Then this integral in time is just beta brackets, beta average, times tau. So if I put the beta bracket on the other side, I arrive at a very fundamental result, which is that the number of susceptibles in the room, which is roughly the occupancy-- although, if you're at very low occupancy, it's everybody but the infected person, so it's n minus 1-- times the typical time spent in the room by the infected person is less than the tolerance divided by the average transmission rate.
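The bound at the end of this argument, (n minus 1) times tau less than epsilon over the average beta, can be sketched as a small calculation. The tolerance and average transmission rate below are hypothetical inputs chosen only for illustration:

```python
# Safety guideline in its simplest averaged form:
#   (n - 1) * tau < epsilon / <beta>
# where <beta> is the time-averaged transmission rate (per hour here)
# and epsilon is the risk tolerance.

def max_cumulative_exposure_time(epsilon, beta_avg):
    """Upper bound on (n - 1) * tau, the cumulative exposure time
    in person-hours, for tolerance `epsilon` and mean transmission
    rate `beta_avg` (1/hour)."""
    return epsilon / beta_avg

def max_time_in_room(epsilon, beta_avg, occupancy):
    """Allowed time tau (hours) in a room of `occupancy` people,
    one of whom is assumed to be the infector."""
    susceptibles = max(occupancy - 1, 1)
    return max_cumulative_exposure_time(epsilon, beta_avg) / susceptibles

# Illustrative numbers: 10% tolerance, beta_avg = 0.01 per hour.
bound = max_cumulative_exposure_time(0.10, 0.01)   # 10 person-hours
tau = max_time_in_room(0.10, 0.01, 11)             # 1 hour for 10 susceptibles
```

The key point, echoed below, is that only the product of occupancy and time is bounded, not either quantity alone.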
This is a very simple and very general relationship, which does not depend on the details of beta. But what it's telling you is something which, you may already recognize, is very different than any of the existing official safety guidelines. Official safety guidelines always limit one quantity. For example, they might limit the number of people in a room. They also can get a limit such as that from social distancing. So you could take the area of the room, divide by a little six foot radius around each person-- or a three foot radius or a six foot separation-- and get an occupancy. So you fix n. The problem with that is that time has to play a role. If a certain number of people are in a room for a very short time, it's really unlikely to have any transmission. But if the same number stays for a very long time, eventually some transmission must take place. So time has to be important. You will also find limits on time. The CDC defines a contact where transmission is possible as being within six feet of an infectious person for more than 15 minutes. So there is a time constraint there. So it says that after 15 minutes, you should expect to be potentially infected. But the problem is that that's not accounting for transmission rate factors, such as ventilation and all the things that we've discussed, and also the occupancy. So how many people could you transmit it to? If you're by yourself in a room or there's only two or three people in a very large space, are you really going to transmit in 15 minutes? Not necessarily. And anyway, so that is the problem with those kinds of criteria. And it can be easily seen by plotting this relationship here. So by the way, this quantity here, which is the product of n and tau, or n minus 1 and tau, is what I like to call the cumulative exposure time. So the time that you're exposed to the infected individual, tau, is multiplied by how many other people are in the room because that's how many people could get infected.
So the more people there are, the more chance that one of them could be infected in a well-mixed room. And so you really don't have a bound just on time. It's really this product that's important. And so let's plot what this looks like now. So if we have here the time and here the occupancy, then this guideline is a bound that looks like this. It saturates at one. So basically, if you're below this guideline here, you're considered safe. And up here is potentially unsafe, meaning that, given the tolerance that you've chosen, you would expect that there could be a transmission with greater than that probability. OK. It saturates at one because it will never go below one. Because with only one person, you're not going to transmit to anybody. So an occupancy of one is OK, except in a building where perhaps other people have come and gone from that room or ventilation is bringing particles from other rooms. And we'll come back to that. But just at the simple level of analyzing one room, obviously one is fine. And here you see a very fundamental problem that I was just alluding to, which is that if I put the standard guidelines on-- for example, a limit on number of people. So this could be fixed occupancy. And in fact, in Massachusetts, for example, right now there is a guideline which says n is less than 25. No more than 25 persons can congregate in a room. In fact, I'm teaching a class right now in which we have 51 people, and I was able to split into two rooms. And then this rule came along, and then we had to start doing three rooms, remotely broadcasting between those rooms, because we have a 25 person rule. The problem with that rule is it doesn't take into account time. What if my class is only five minutes long? Or let's say it's one hour long. In that time, if I don't have an expectation of transmission, I should be OK.
On the other hand, if those 20 people sit in the classroom for several weeks, it's pretty likely that if an infected person is among them, there will be a transmission. So time has to come in. And that you see very clearly here from the crossing of the fixed occupancy with the safety guideline. So for a short amount of time, the fixed occupancy, which is telling you you should be under this-- so basically you should only have a lower occupancy. So this is a fixed occupancy bound. For example, the one in Massachusetts. At first, this is too conservative. So you are telling people they cannot be in the space, but until this amount of time has passed, it's very unlikely that anybody would transmit. Imagine this time as one minute. It's pretty unlikely that you're going to have a transmission. On the other hand, if you keep waiting, you always cross the yellow line. And over here it's too risky and too dangerous, basically. Because you're allowing people to think that they're safe because there's only 25. But let's say this crossover happens after one hour, and the people are in the room for five hours, then there's a very high risk of transmission. So just putting out a number like 25 doesn't really protect you because these lines always cross. You will always cross the safety guideline at a certain time. That's the safe time. OK. Similarly, so basically-- so let me just kind of stress that here. What we're really concerned about is this. We're very concerned about situations where a guideline is giving people the sense of protection, and in fact they're at high risk. We're also concerned about this case here. Because, for example, as in the case of my class, we might be causing some damage to people's education or to their businesses or to the economy by shutting down a certain space or imposing a limit that doesn't really have a strong scientific basis, because we don't expect transmission to happen under those circumstances. So we're concerned about both.
But I'm particularly concerned about the overly risky case because that is contributing to transmission of the disease, and potentially loss of life. Now, if we look in the other direction, we have the same kind of issue. So if we have a time limit-- for example, let's say here. You know, this could be like this. So this would be a fixed time. For example, 15 minutes from the CDC is the time they recommend for contact. So that's a pretty short time. OK, and what a guideline like that says is that if you're in the presence of an infected person-- sufficiently close-- for 15 minutes, then here you're safe, but over here you're unsafe. But again, you have the same phenomenon. Now, 15 minutes is a pretty short time, so that's typically going to be safe. But again, in this region it's too conservative. So the blue here is still too conservative. But now, when you get up here, even the 15-minute rule eventually becomes unsafe. Because if there's a very large number of people in a room and they also can interact with each other through the well-mixed space-- so I'm not talking about a really, really big room. I'm talking about a small enough room that you could expect air to be transmitted between the people in that space-- then if I keep increasing occupancy, again I cross that line. There's always a crossing. So you cannot have a guideline only based on time or only based on occupancy. If you think of this limit-- for example, for the 15-minute rule, what if I put 15 people or maybe 20 people into a small tent-- let's say a little bit bigger than the size of this board-- and everybody's standing close together. We might pass the infection in much less time than 15 minutes because we have a chance to infect each other, and if I keep going up, I'm going to cross that line. And the tent also has a very small volume, poor ventilation, and this yellow curve is very low.
OK, so this concept of the cumulative exposure time is really important to understand because it's very general. It really isn't so tied to the details of the model. And it shows you the problem with any bound on one parameter such as occupancy or time, and I should mention also social distance, because that's the big one which is happening right now. Social distance guidelines-- so this would be that the distance, d, is greater than six feet. And that's a CDC guideline in the United States. It can be greater than one meter, which is about 3 feet, and that's from the World Health Organization. So about half the distance. That leads to a guideline where there's still a maximum occupancy, which is the area of the room divided by d squared, or by some other factor depending on how you think people are going to be arranged. But basically, you go into the room, you map out that spacing, and you arrive at a fixed occupancy. This is being done everywhere, including here at MIT right now. And that still leads to fixed occupancy. So regardless of how it was derived, you still have a fixed occupancy, which is too conservative at first, and eventually is too risky. And you must know where that crossover point is because occupancy and time are linked.
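The crossings described above all follow from the hyperbola (n minus 1) times tau equals a constant bound. A sketch, with the cumulative exposure time bound chosen arbitrarily for illustration:

```python
# Any single-parameter rule (fixed occupancy or fixed time) must cross
# the safety guideline (n - 1) * tau = cet_bound at some point.

def safe_time_for_occupancy(cet_bound, n):
    """Time (hours) at which a fixed occupancy n crosses the guideline.
    Before this time the fixed-occupancy rule is too conservative;
    after it, too risky."""
    if n <= 1:
        return float('inf')   # a lone occupant never crosses the bound
    return cet_bound / (n - 1)

def safe_occupancy_for_time(cet_bound, tau):
    """Occupancy at which a fixed time limit tau (hours) crosses
    the guideline."""
    return 1 + cet_bound / tau

# Illustrative: a 24 person-hour bound.
t_cross = safe_time_for_occupancy(24.0, 25)     # 1 hour for a 25-person rule
n_cross = safe_occupancy_for_time(24.0, 0.25)   # 97 people for a 15-minute rule
```

So a 25-person limit under this hypothetical bound is safe for an hour and risky afterwards, and a 15-minute limit is safe only below some occupancy, which is exactly the crossing behavior sketched on the board.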
Safety_guideline_for_COVID19_Vaccination_and_immunity.txt
PROFESSOR: So we've just discussed the strategies for reopening schools and businesses based on the indoor safety guideline for the given space and the various physical parameters in that space. And the new concept we added was a prevalence of infection. So we would know, on average, how many infected people we might expect in a room. And as that number goes to zero, as the pandemic subsides, we can switch from a more restricted situation with an occupancy N1, prescribed by the original guideline based on the indoor reproductive number, to kind of increasing occupancy up to N0, which is the normal original occupancy, initially with masks. And as the prevalence goes down, we switch to taking the masks off and really returning back to normal. So I'd like to take this a little further now and ask, how would we change our policies or this discussion here if we not only have information about the prevalence of infection but also have an understanding of immunity, which could be acquired through vaccination or through previous exposure? And especially right now, as I'm recording this lecture in early 2021 and several vaccines have rolled out for COVID-19, this is a topic of great interest. So let's think about, how would we take into account susceptibility? So, in some sense, this was a conservative estimate of the true risk of transmission because we've assumed that everyone who's uninfected is susceptible. But, of course, as immunity is increased in the population, we're going to have to modify that. So instead of having our populations of susceptible and infected persons being sampled from a two-state or two-category process, we can think of three categories. There can be-- PI is the probability that a person is infected, which means really, again, that they're infectious and they can infect other people. And this is coming from the local population that is entering that indoor space.
And now we're going to add PS, which is the probability that a person is susceptible. And then the third category is PM, which is the probability that the person is immune. And so we have a three-category process, so those three should add up to 1. So PM is 1 minus the quantity PI plus PS. And we can also further write this as PVAC, the probability that a person has been successfully vaccinated and actually has acquired immunity, plus the probability of previous exposure. We'll call that PX. So this would be vaccination, and this would be previous exposure, if that previous exposure has actually led to immunity. And that's a controversial topic, still under research, and may depend on the specific population at hand. But let's imagine that we have estimates of what these numbers are, and then we'd like to see how to adjust our thinking here. So we're still going to base our guideline on saying that the expected number of transmissions is the expected number of infected times susceptible, so the expected number of pairs, times the average transmission rate, average of beta times tau, the time, and that that expected number of transmissions should be less than our tolerance epsilon. So that's still our guideline. So what we're really trying to consider, now, are different assumptions about this expected number of infected-susceptible pairs that are in the room. And we've broken that down into three risk scenarios. And let's revisit that, now, with our three-category model. So the first risk scenario was describing a desire to limit spreading of the disease through this indoor space. This is our original goal. And by that, we mean, if an infected person enters the room, then we would like to make sure that it's unlikely that a new case would emerge from transmission from that person. So in that case, we have I is equal to 1. And then, now, I is known. So this is just the expected value of S.
And the expected value of S, though, in this new model, is the number of other people in the room, n minus 1, times PS. So you see, now, when I define my indoor reproductive number as N minus 1 times beta tau and I want to bound that to be less than epsilon-- that's my typical guideline-- there's this extra factor, PS, which could be moved to the other side. So one way to think about it is, since PS is less than 1, we are increasing that tolerance because there are fewer susceptible people. So we're allowed to stay in the room longer, have a higher occupancy, lower ventilation, et cetera. So this is one case. The next case is to limit transmission. So here we're not going to assume that an infected person actually is there, but we are going to consider the possibility that there is an infected person there. So that makes transmission potentially a lot less likely. And so what we'd like to do here is to look at the expected value of I times S. And I won't go through the details. But for the trinomial distribution with three independent possibilities, with these probabilities-- and you're making N samples from that distribution-- you can show that the expected value of this product is actually N times N minus 1 times PI times PS, by very similar arguments as we have done for the binomial case. One way to think about this is that N times N minus 1 is the number of permutations of two people that can be made in that room. So if you pick one person to be first the infected and the other one to be the susceptible, this is the number of such pairs. And PI PS is the probability of each of those instances. So this is the expected number of I to S pairs. And I put a directionality here because we are distinguishing each individual person. So if I take two people, I am counting differently if one is infected and the other one's susceptible, or the reverse situation, since everyone's a unique individual.
OK, so if we then substitute into this formula, then, notice, now, we've picked up some extra factors. So now the guideline would read that RN will be less than epsilon over N times PI, which is something we already had before. But now there's also a PS. So that's modified. And then, finally, our third risk scenario was to limit personal risk. So this is the case where S is equal to 1. I'm only worried about one susceptible person, and that's me. And then, if S is known, we just have the expected value of I, which is just N minus 1, all the other people, times PI. And if you plug that into the formula, then you find that RN is now bounded by epsilon divided by PI. So we take into account both prevalence of infection and susceptibility, at least through these somewhat modified bounds. And let's focus on where the changes took place. So first of all, in the original guideline for limiting spreading, we can be a little bit more lenient. So as there's more vaccination and more immunity, we don't need to keep holding that guideline at the same level, even in the most conservative stance of trying to limit spreading. What's more interesting for this plot here is the middle one. So now, we again pick up a factor of PS. But everything else is the same. So what it means is that, relative to the calculation that I showed here, I should actually make the very same plot where I don't just plot PI on this axis, but actually plot PI times PS, where PS is the probability of being susceptible, which is related to vaccination and previous exposure rates. So that actually does bring this down and, hence, make it easier to make the decision to relax restrictions and even ultimately take off the mask, because as a combination of these two factors, we're getting even more safe. Interestingly, down here for personal risk, we don't really care about the probability of susceptibles because the only person I care about in that situation is myself.
And so I don't have any effect of susceptibility, only the effect of infection.
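The three scenario bounds worked out on the board can be collected into one small function. The input probabilities below are hypothetical; the scenario factors follow the lecture's algebra (1 over PS for limiting spreading, 1 over N PI PS for limiting transmission, 1 over PI for personal risk):

```python
# Multipliers on the tolerance epsilon in the bound R_in < epsilon * factor,
# for the three risk scenarios of the three-category (trinomial) model.

def tolerance_factors(n, p_i, p_vac, p_x):
    """Return the scenario-dependent tolerance multipliers.
    n: occupancy; p_i: probability a person is infectious;
    p_vac, p_x: probabilities of immunity via vaccination and via
    previous exposure (assumed to confer immunity)."""
    p_m = p_vac + p_x              # immune fraction
    p_s = 1.0 - p_i - p_m          # susceptible fraction; three states sum to 1
    return {
        "limit_spreading": 1.0 / p_s,                 # I = 1 known, E[S] = (n-1) p_s
        "limit_transmission": 1.0 / (n * p_i * p_s),  # E[IS] = n (n-1) p_i p_s
        "personal_risk": 1.0 / p_i,                   # S = 1 (myself), E[I] = (n-1) p_i
    }

# Illustrative: 20 people, 0.5% infectious, 50% vaccinated, 10% previously exposed.
factors = tolerance_factors(20, 0.005, 0.50, 0.10)
```

Note that, as stated above, the personal-risk factor depends only on the infection probability, not on susceptibility of others.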
Safety_guideline_for_COVID19_Risk_scenarios.txt
PROFESSOR: So now let's use the results of our probabilistic model of transmission to see how we can modify our safety guideline to take into account different risk scenarios. So our general result is that the expected number of transmissions in the room in a given time is given by the expected number of infected times susceptible times the mean transmission rate, which is the average beta times the time. And we want this to be less than some tolerance. And also, given the various approximations we've made where, for example, we have not let the number of susceptibles change-- people can be infected more than once, and we've neglected various aspects of the model in that way-- this should typically be assumed less than 1. So we're always kind of thinking of transmission as being a rare event. If we have lots of infected people in the room, lots of transmission going on, that's a more complicated situation which requires more sophisticated models. But for the purpose of guidelines, this is the limit that we want to think about. But the different scenarios correspond to our different assumptions about I and S. So first, I'll just remind you of what we've been doing until now, which is to limit spreading of the epidemic as a whole. Thinking of what if every indoor space were to impose the guideline, we're really thinking of the case where I is 1, and S is N minus 1, and then this expected number of transmissions is just the indoor reproductive number, which is, of course, just N minus 1 times beta tau, and this is less than epsilon. So that's the guideline that we've already been talking about because the I and the S here are now actually no longer random. We're just saying let's just consider that situation. And if everybody does that, then we are limiting the spread of the disease overall and should be hopefully fighting it.
What we'd like to talk about here is how to start with this kind of a restriction, which gives us certain bounds on occupancy, ventilation, and other factors, and think about, well, how would we actually remove that restriction as the prevalence of infection goes down? So that's a bit of a different question, which is to limit transmission-- or maybe another way of saying that more precisely is new cases that are going to arise in this indoor space. So now we're not just saying if an infected person enters, we don't want any new cases. What if we just don't want new cases at all, including taking into account the low probability that somebody actually does enter this room who is infected? So now I'll just remind you of the results from the last board for that situation with all the assumptions of the previous model. So the expected number of I is now pIN. The expected number of susceptibles is N minus the expected number infected, which is qIN. And importantly then, the expected value of I times S is pIqI times N times N minus 1, which was our result from the end of the board. And so now when we write that we'd like the expected number of total indoor transmissions to be much less than epsilon, now notice instead of just an N minus 1 like we had before, we have these additional factors pIqIN. And so we effectively divide by that. So in this case, we can write our safety guideline taking into account the prevalence as epsilon divided by pIqIN. And so this allows us, as pI goes to 0 and qI goes to 1, so in other words, as the infection becomes less prevalent, to start modifying our guideline to increase this bound and, for example, allow a lot more people to enter the room, or to increase their time in the room, or to maybe turn down the ventilation a little bit. And we can make changes like that. It's typically considered to be a high prevalence infection when we're getting, let's say, in the range of maybe 100 to 1,000 infected per 100,000 people in the population.
So that would be 0.1 to 1.0 percent. This is usually considered quite high prevalence, actually. So there's quite a few infected people around. But in that case-- let's just say we had a situation with 10 people in the room, just to give an example. What would this tell us in terms of increasing our time in the room or our occupancy? Well, occupancy is here fixed. But let's say time in the room or ventilation or other factors. We can basically increase our N minus 1 tau, our cumulative exposure time, which is basically this indoor reproductive number that we're bounding. This bound increases by 10 to 100 times because there's basically this extra factor there. So that means that if the thing was telling us that we could be in the room for five hours, maybe now it'll be 50 hours or even 500 hours, actually, depending on how low the prevalence actually gets. And of course, as the prevalence goes down further, and the epidemic disappears, we start to completely relax our assumptions. And we'll talk about that shortly. There's a third risk scenario that is also of interest, which is to limit my personal risk for a given individual. So in this case, we have a situation where I only care about myself, one particular person in the room. So the number of susceptibles is now fixed at 1. And the number of infected people is potentially anyone else in the room. So I'm worried about attending an event or being in an office or some situation where there's a certain number of people. And the expected number of infected would be exactly equal to pI times N minus 1. So, basically, any other person than myself could be infected. And so that's the expected number. And of course, if S is fixed at 1, then this expected value of I is also the expected value of IS. So my transmission rate now has this factor. And notice in this case, we get the same N minus 1 as before.
But there's this new factor pI here. So that then tells me I could express the bound as the indoor reproductive number is less than epsilon divided by pI. So, basically, we have these factors that come in when we talk about prevalence that take our previous bound, which brings in all of the physical quantities related to the room, its ventilation, filtration, viral deactivation, time in the room, occupancy. And we take those bounds, and we can essentially rescale them with these values depending on how we are using the guideline. Now, this is a very simple model, but it at least gives us a sense of how to make those decisions. So, for example, let's consider a case like we did here where, if the prevalence is in the range of 0.1% to 1.0%, which is actually a fairly high prevalence, then we could increase the bound on N minus 1 tau, our cumulative exposure time, by 100 to 1,000 times as a factor. So if the guideline is telling us that you have five hours in this room, it might actually be more like 500 or even 5,000 for one particular susceptible person, given the prevalence in the population. So, basically, I just want you to keep in mind that when applying the guideline, the basic ideas don't change. We start with a bound on this reproductive number that brings in all of the physical quantities and disease quantities that we've been talking about. But we also may modify that bound a bit depending on our risk scenario.
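The numerical claims in this passage can be checked directly. A sketch of the "limit transmission" rescaling factor, epsilon over pI qI N, for the lecture's example of a 10-person room at 0.1 to 1.0 percent prevalence:

```python
# Rescaling of the cumulative-exposure-time bound when the guideline limits
# expected new cases rather than assuming an infector is present:
#   bound -> epsilon / (p_i * q_i * N) instead of epsilon.

def transmission_bound_factor(n, p_i):
    """Factor by which the bound on (N - 1) * tau grows in the
    'limit transmission' scenario, given occupancy n and infection
    prevalence p_i (q_i = 1 - p_i)."""
    q_i = 1.0 - p_i
    return 1.0 / (p_i * q_i * n)

# A 10-person room at 0.1% and 1% prevalence:
low_prev = transmission_bound_factor(10, 0.001)   # roughly 100x more exposure time
high_prev = transmission_bound_factor(10, 0.01)   # roughly 10x more exposure time
```

This reproduces the 10-to-100-times relaxation quoted for that scenario; the 100-to-1,000-times figure for personal risk comes from the simpler factor 1 over pI at the same prevalences.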
Transfer_of_respiratory_pathogens_Bacteria.txt
PROFESSOR: So now that we've talked about the different dynamics of droplets in the air, we can think about how different types of pathogen can leverage those droplets to transmit from one person to another through respiration through the air. So let's begin with bacteria. So there are many different kinds of bacteria. The typical size of a single bacterium is on the order of several microns. So let's just say, 1 to 10 microns. On the other hand, bacteria can also exist in colonies or larger structures. And that does determine, to some extent, what kinds of droplets can transmit those bacteria. So let's begin with an example of a bacterium that is typically going to be found in large drops. And that would be the bacteria that causes strep throat, the streptococcus. So the streptococcus bacteria, which is shown here, has a typical size that's around two microns. So it's on the smaller end. And it forms chains and even larger colonies that may have different structures. And the total size of the colonies can vary to be as big as even 500 microns. So it may not always be that big, but it might be, let's say, 50 to 500 micron sort of colonies. And they can often be these sort of stringy type structures. As a result, those require fairly large droplets. Now just simply knowing that we have large droplets tells us a lot about how this bacteria can actually transmit and what measures must be taken against it. Because as soon as we have large drops, then what it means is that we have fast settling. So we know drops that are in this size range are going to fall to the ground within a few seconds. And even if you cough them out with a big velocity -- which is something we'll talk about later in these lectures -- then still, in only a few seconds those droplets can sediment out, and they really don't go very far. OK? And so they're fast-settling.
And so what that means is that we have either fomite transmission, where those droplets settle on the ground, or on some other surface -- somebody touches that surface, touches their eyes -- and so that's one method of transmission. We could also have direct airborne transmission, where, let's say, I cough or breathe on you, and the droplets end up in your face and maybe directly on your eyes, or maybe you breathe them in -- so there could be also direct airborne transmission for some of the smaller sized droplets or for large droplets which are ejected from, let's say, a cough or something. And so what that means is that the way we protect against bacterial transmission of disease is, for example, for the fomites, we will disinfect surfaces. We can wash hands, of course. And we will avoid touching eyes or nose. So those are pretty basic measures. At the same time, if we're worried about the direct transmission of the droplets from especially coughing or sneezing larger droplets, then what else can we do? Well, we can have plastic shields, either worn over the face or maybe a barrier between yourself and some other person that you're interacting with, to avoid that sort of projectile transmission of large droplets. And also, we will get the six-foot rule, which is just an example of a social distancing measure, which is recommended to avoid this sort of direct airborne transmission. Again, coming back to the idea that droplets of this size typically settle in a few seconds. And if you look at the typical velocities of ejection, especially from coughs and sneezes, then they will settle within about six feet or so, or about two meters. Although that's not a hard rule. The six-foot rule happens to be from the United States Centers for Disease Control, the CDC. But there's also [the] one-meter rule, which is basically a three-foot rule, from the World Health Organization. So this is not extremely well defined, what the distance should be.
But clearly, if you are able to stay away from people, then even if they're coughing or breathing out these large droplets, with only a few seconds of settling time those droplets will hit the floor and will not be able to infect you directly. And so it's important to maintain that kind of distancing. OK. Now there are also other kinds of bacteria that don't form these larger colonies, and they remain small even when they're transmitting. A classic example that can be transmitted in small drops is tuberculosis. In fact, the original studies in the 1930s -- also involving Wells -- that essentially led to the distancing rules, and the six-foot rule in particular, had to do with coughing and sneezing and the distance over which droplets containing tuberculosis could be transmitted. Now tuberculosis, on the other hand, as you can see in the image, is a sort of rod-like bacterium that's quite small. The length is several microns, typically two to four, but the radius is half a micron, or even down to 0.2 microns. So in the radial direction we're really talking about hundreds of nanometers. So these are little rods, and they can be contained in a pretty small droplet -- the big colonies, of course, require something much larger. So these droplets could be small, and so there is a possibility here of larger-aerosol transmission. By larger, I mean in the range of 5 to 10 microns. Of course tuberculosis can also be contained in much larger droplets, which fall to the ground as we've just been describing, but tuberculosis is a bit different in that these individual bacteria could be transmitted airborne in larger aerosol droplets. And in fact, that is what is found. If you have this size of droplet -- for example, if the radius is, let's say, greater than 4 microns -- then the settling velocity, which you can find given the density of aqueous fluids, is greater than about 2 millimeters per second.
And then the time to drop from a typical person's height to the ground is around 15 minutes. So these are not the kind of aerosols that might stick around for hours, like the sub-micron aerosols that are also produced by breathing and which are too small to contain tuberculosis. But still, these times suggest that tuberculosis can linger in the air. And in fact, long-distance airborne transmission is not only possible, as we've just argued on physical grounds, but has been directly verified. That's been done both in human studies and in animal studies, where they have two different compartments with a sick animal, such as a ferret or some other animal model, and the disease spreads to another animal that has had no direct contact but is sharing the same air. So airborne transmission is definitely verified in this case. And now you might ask, well, what are the preventive measures? With genuinely airborne transmission, measures like plastic shields and the six-foot rule don't help much: if there is a shield, the air is going to go right around the shield and everywhere else in the room, because the air is flowing. In the same way that when you light a candle and watch where the smoke goes, it quickly spreads around the whole room -- it doesn't care if there's a shield there. As long as there is a way around the shield, the smoke is going to find it. And so instead, when you're trying to protect against airborne transmission, your protective measures are things like ventilation, bringing in fresh air from outside or opening your windows; air filtration, where you're passing the air through a filter which removes the particles by size or by charge or some other such mechanism; and it also becomes more and more important to wear face masks, because then at least you can block these droplets at the source and also at the target, as we shall discuss in much more detail.
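The numbers quoted here -- a settling velocity of around 2 millimeters per second for a radius above 4 microns, and roughly 15 minutes to reach the floor -- follow from Stokes' law for a small aqueous droplet in air. A minimal sketch; the fall height of 1.7 meters is an illustrative assumption for mouth height, not a value from the lecture:

```python
# Stokes terminal settling velocity v_s = 2*rho*g*r^2 / (9*mu) for a small
# aqueous droplet in air, and the time to settle from mouth height.
# Physical constants are standard; the 1.7 m height is an assumption.

RHO_WATER = 1000.0   # kg/m^3, density of an aqueous droplet
MU_AIR = 1.8e-5      # Pa*s, dynamic viscosity of air
G = 9.81             # m/s^2

def settling_velocity(radius_m: float) -> float:
    """Stokes settling velocity (m/s) for a droplet of the given radius."""
    return 2.0 * RHO_WATER * G * radius_m**2 / (9.0 * MU_AIR)

def settling_time(radius_m: float, height_m: float = 1.7) -> float:
    """Seconds for the droplet to settle from a typical mouth height."""
    return height_m / settling_velocity(radius_m)

# A 4-micron-radius droplet settles at roughly 2 mm/s and takes on the
# order of 15 minutes to reach the floor, matching the lecture's figures.
vs = settling_velocity(4e-6)
t_min = settling_time(4e-6) / 60.0
print(f"v_s = {vs * 1000:.2f} mm/s, settling time = {t_min:.1f} min")
```

The quadratic dependence on radius is what separates lingering aerosols from fast-settling large drops: halving the radius quadruples the settling time.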
So what we see here is that simply knowing the size of the pathogen, and understanding the physics of droplets in the air, helps us to understand the modes of transmission, which are written here in blue, and also the appropriate protective measures, which are written here in pink, to protect against these different types of transmission.
MIT_RES10S95_Physics_of_COVID19_Transmission_Fall_2020
Epidemiological_models_Chapter_3_overview.txt
PROFESSOR: In Chapter 3, we will study epidemiological models, which describe the spread of an infection, and then the recovery from that infection, in a population. The traditional epidemiological models that we will study for populations, and also adapt to indoor spaces, involve keeping track of compartments, or subgroups of the population, such as the number of susceptible, exposed, infected, and recovered people. So, for example, an infected person can expose a susceptible person, and the exposed person can then become infected themselves, or may eventually recover. And there are various rates for these different processes. This leads to a set of nonlinear differential equations that describe the evolution and growth, and then ultimately the decay, of an epidemic.
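The compartment dynamics described here can be sketched in a few lines. This is a generic SEIR model with forward-Euler integration, not the specific equations from the course, and the rate constants below are illustrative assumptions:

```python
# A minimal SEIR sketch: susceptibles (S) are exposed at rate beta*S*I,
# exposed (E) become infected at rate alpha, infected (I) recover at rate
# gamma. All quantities are fractions of the population; rates are per day.
# Parameter values are illustrative, not from the course.

def seir_step(s, e, i, r, beta, alpha, gamma, dt):
    """One forward-Euler step of the SEIR equations."""
    ds = -beta * s * i
    de = beta * s * i - alpha * e
    di = alpha * e - gamma * i
    dr = gamma * i
    return s + ds * dt, e + de * dt, i + di * dt, r + dr * dt

def simulate(days=200, dt=0.1, beta=0.3, alpha=0.2, gamma=0.1):
    """Run an epidemic starting from 1% infected; returns final fractions."""
    s, e, i, r = 0.99, 0.0, 0.01, 0.0
    for _ in range(int(days / dt)):
        s, e, i, r = seir_step(s, e, i, r, beta, alpha, gamma, dt)
    return s, e, i, r

s, e, i, r = simulate()
print(f"final susceptible fraction: {s:.3f}, recovered: {r:.3f}")
```

Note that the four derivatives sum to zero, so the total population is conserved exactly at every step -- a useful sanity check on any compartment model.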
Airborne_disease_transmission_in_a_wellmixed_room_Chapter_2_overview.txt
PROFESSOR: So, having established the importance of airborne transmission in the spreading of COVID-19 specifically, and more generally in the spreading of respiratory pathogens, let's proceed to analyze airborne transmission in a well-mixed room. We're thinking of a situation where a certain number of people are sharing an indoor space, and the air in that room is well mixed enough that, as pathogen-containing droplets are released from the breath of one person, they are quickly spread throughout the whole room. And by the time someone else breathes them in, they're essentially exposed to the average concentration of pathogen in the air, after accounting for such effects as settling, evaporation, ventilation, filtration, and others, but still at the level of a well-mixed space.
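The well-mixed picture implies a simple mass balance: pathogen is emitted into the room at some rate and removed at a total rate combining ventilation, settling, filtration, and deactivation. A minimal sketch, with illustrative numbers that are assumptions rather than course data:

```python
# Well-mixed-room mass balance: emission at rate P (virions/s) into room
# volume V (m^3), removal at total rate lam (1/s), giving
#   dC/dt = P/V - lam * C  =>  C(t) = C_ss * (1 - exp(-lam * t)),
# where C_ss = P / (lam * V) is the steady-state concentration.
# P, V, and lam below are illustrative assumptions.

import math

def concentration(t_s, P=10.0, V=100.0, lam=2.0 / 3600.0):
    """Well-mixed virion concentration (virions/m^3) at time t_s seconds."""
    c_ss = P / (lam * V)                       # steady-state concentration
    return c_ss * (1.0 - math.exp(-lam * t_s))

# The room relaxes to steady state on the timescale 1/lam (here 30 min);
# after a few hours the concentration is essentially P/(lam*V).
print(concentration(3 * 3600))
```

The exponential approach to steady state is the same relaxation that reappears later in the radius-resolved version of this balance.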
Airborne_disease_transmission_in_a_wellmixed_room_Dropsize_distributions.txt
So now we've included all the key physics in our model: sedimentation, deactivation of the virus, filtration of the air, ventilation flows. And now we have to include one last piece of information, which is that many of the quantities we've discussed have a dependence on the size of the droplets. And the size and number of the droplets is a strong function of the type of respiration being performed by the infected person in emitting these droplets, and also of the respiration of the susceptible person who's breathing those droplets in.

So here is a sketch of what droplet distributions look like, as measured for different kinds of expiratory activities. This is shown roughly on a log scale, at least in what I'm attempting to sketch. The dotted line here roughly separates the aerosol droplets, which are less than several microns in radius, from the large droplets, which are bigger than that. And what you can see is that as you change from resting breathing -- which is, let's say, through the nose, or even resting breathing through the mouth -- this distribution starts to go up. When you start speaking, there's a significant increase in the emissions, presumably because there is a fragmentation and breakup of the mucus surrounding the vocal cords and the pharynx, which leads to significant aerosol emissions that do not evaporate and do not follow the Wells curve, but actually survive. And that becomes much more extreme as we get towards singing and any prolonged vocalization. In fact, the emission also depends on the volume of speaking: this curve is loud speaking, and somewhere below it is whispering. So the amount that you release is a strong function of the volume of speech, and whether you're vocalizing or not makes a big difference -- vocalization leads to many more droplets being formed that are aerosols that remain in the air, and singing is probably almost as bad as you can get in terms of spewing out droplets.

What's interesting is that all these distributions have a peak -- a most probable value -- that is actually less than a micron. It's around maybe half a micron in diameter, which is a quarter micron in radius, but it can vary; so the peak of these distributions is somewhere in the range of, let's just say, 0.3 to 1 micron. And then they drop off -- fairly quickly in number, but since the larger droplets carry more volume, like r cubed, it turns out they're not decaying quite so quickly as a function of volume. Still, they are decaying, and the peak is in the aerosol range. So clearly there's a strong size dependence there which we must take into account.

But that's not all. We've already talked about other quantities which also have strong size dependence. We've talked about infectivity: if we think about a water droplet, I was calculating the time for the virion to get out of the droplet by diffusion, and we saw that even up to 10-micron-type sizes the virion can still escape. On the other hand, if it's in mucus -- as are the droplets that are emitted, especially from vocalization -- then you'll find, as we've already calculated, that the cutoff where there's no longer time for the virus to get out might be on the order of microns. And so we're certainly going to see that the aerosols are infectious -- and that has been shown experimentally -- but we may not find that they're as infectious in the larger-droplet form. So that's something we could also take into account in the model.

And then, of course, masks and air filters have a strong dependence on size, and that's included in their ratings. For example, when we go from an N95 mask to various forms of cloth face covering, there's a lot of experimental data showing the transmission of the different droplet sizes through different materials. And in addition to the material, there's also the fit factor. You can have a really great material -- if you measure it, it removes 99% of the aerosol droplets -- but then you put it on your face, and you've got a big gap over your nose, or you let it sink down below your nose, and suddenly you're letting a lot out as well. So the fit of the mask is also very important, and somehow ends up being included in this quantity. But clearly there's a strong size dependence -- although I would point out that both these quantities do not vary too much over the aerosol range, so if we're focusing on aerosols, the variation isn't as much as you might think. What happens with filters is that when you get bigger than 10 microns, towards the millimeter scale, there's a physical blocking of those droplets, and they get condensed and collected on the fibers of the material, so there's almost 100% filtration at the scale of millimeters. So when you have a large cough or sneeze, if you are wearing a mask -- even a fairly poor mask -- a lot of it gets filtered. Yes, some drops do get through, and that's been observed, but it's very significantly filtered at the large sizes. On the other hand, when you get down to the aerosols, the mask may not be filtering so strongly, and the same goes for air filters. A HEPA filter is intended to remove the aerosol droplets, even those below a micron, but the various MERV-rated filters are not really going after the aerosols as much -- though they still achieve significant removal, maybe on the order of tens of percent, or up to even 90%, depending on the rating.

So the point is, all these quantities are size dependent, and we have to go back and revisit our model including that size dependence. It just makes things a little more complicated, but not too much. So let's go back yet again to our mass balance, but now we're going to do it in a radius-resolved fashion. For this I'm going to need a C of r and t -- I'm going to use the same notation C, even though I've just changed its meaning by including the r -- and this is going to be a radius-resolved concentration of virions. That means it's a number of virions per air volume per radius. And now when we write the mass balance, instead of dC/dt with an ordinary derivative, it's actually a partial derivative, because C is now a function of r and t. So I have partial C partial t, and then, dividing through by volume, on the other side I have a P which now depends on r -- the production rate of droplets is certainly very strongly size dependent, and all the other factors may come in as well -- divided by volume, and then minus a lambda c, which now depends on r, times the concentration. So this is what happens to our mass balance.

And this lambda c of r -- what does it look like? Well, it's lambda a times 1 plus the sedimentation effect, which I'm going to write as r over rc, squared, because we know the sedimentation velocity scales like r squared; I'll come back to this in just a moment. The sedimentation is the tricky part -- it is strongly size dependent. Then plus lambda v, plus pf lambda f, which of course depends on r as well. Lambda v in principle could also depend on r -- we haven't really considered that; we've been thinking more of spontaneous deactivation. In the case of chemical disinfectants, it may be size dependent too, because whether the virion is attacked by a chemical depends on how big the droplet is. So in principle this could also be size dependent. Basically everything is size dependent except lambda a, the air change rate, which of course does not depend on size.

Now, what have I done here with this rc? I've written that if lambda s of r is the sedimentation rate, that's vs of r divided by the height -- basically, as we talked about before, the rate at which particles sediment onto the horizontal surfaces, where remember H is V over A. And notice what I've done: what was supposed to appear here is lambda s of r, but I factored out lambda a, so this term is lambda s of r over lambda a. That's the same as vs of r over va, where va is the air velocity -- that's Q, the outflow rate, divided by A. And we're writing that as r over rc, squared, because we know vs scales like r squared. So when I factor this out, I can define a quantity rc -- and what rc turns out to be, you can actually see by working through this formula; I'll just leave it here as a definition for now.

Now we can also ask what happens to these kinds of droplet distributions. This is the important thing: lambda c has some r dependence from filtration, as we're sketching here, but it has a very strong r dependence from sedimentation. We don't show it here, but the sedimentation velocity increases like r squared, so that's a big dependence. What that means is the following. Take one of these initial droplet distributions coming from breathing -- imagine this is the distribution soon after the aerosols leave the mouth, after the initial evaporation has taken place, but where the Wells curve has been disrupted by the fact that there's mucus with lots of solutes and charged molecules, so the droplets have reached an equilibrium distribution that looks something like what's shown here. Let's then ask what happens at a later time -- how does the concentration build up? Say there's one person in the room, and one of these curves describes the distribution of droplets that they're emitting from breathing.

So let's ask what happens to the concentration profile. There's an rc here on the r axis, and, as you can see, when r is less than rc the sedimentation term is small and lambda c is essentially just lambda a. That means the removal of the infectious particles is dominated by ventilation, which is not size dependent. So r less than rc -- which is another way of defining the aerosol range -- is where you don't have a strong size dependence. But in the large-drop category, of course, those drops are sedimenting, and this term can be very large: if we go out to 10 microns, or even up to a millimeter, the sedimentation rate is incredibly fast, and those droplets are very quickly removed and do not end up swirling around the room as we've been describing.

So let's imagine somebody speaking, and say, just for illustration, that the initial profile in the room at early times looks like this. At first, until we get to a time of order the inverse of lambda c -- that's the concentration relaxation time -- the concentration is building. But it builds fastest in the aerosol range, because those droplets are not being removed; the large ones are being removed, so instead of increasing, they don't increase much. So at a later time this part goes up, but not so much over there, and the distribution gets more and more peaked in the aerosol range -- fast removal of large drops here, slow buildup of aerosols there. If you wait for the relevant time scale -- lambda c inverse, which we call tau c, the buildup time, how long it takes to essentially reach steady state -- then the biggest increase is among the aerosols, because over here you're losing the large drops.

Now, there are filtration effects too, and there's also the infectivity, so there are other competing size dependencies that might actually emphasize the large sizes -- I want to make sure we're clear about that. But those size dependencies are bounded; the mask filtration, for example, is bounded by one. Whereas r squared just keeps going: larger and larger particles sediment faster and faster. So it really is true -- I didn't draw this very well -- that at large sizes, on the order of, say, millimeters, those droplets never build up in the air, because they sediment almost immediately. When you cough or sneeze, the largest droplets fall, and it happens very quickly -- in a few seconds or a minute -- while we're talking about time scales here on the order of the air change rate, which might be hours or tens of minutes. So this size-dependent relaxation tends to sharpen the distribution right into the aerosol range, which is again why indoor airborne transmission is dominated by these aerosols swirling around the well-mixed room. And that's very different from transmission through coughs or sneezes of large droplets, which are only very briefly present and then sediment out. If you're not standing in the way of that cough, or if the cough is blocked by a mask or a shield, then you really don't have to worry as much about that form of transmission. But as shown here, you do have to worry about the aerosols, which build up in the air and are strongly affected by all the different factors we've been talking about.

And in fact, to help understand this picture, let me draw lambda c as a function of r. This rc -- again, as I was saying in words -- divides the situation where r is less than rc, where these quantities aren't varying too much, from the situation above, where lambda c grows like r squared -- a much faster relaxation -- again separating aerosols from large drops. If we substitute the Stokes velocity we've been discussing into this formula, we can actually derive what rc is, and it comes out to be -- a square root, because it's defined through rc squared -- the square root of 9 times lambda a, the air change rate, times the effective height of the room, or ceiling height, times the viscosity of air, divided by 2 times the density of the liquid times gravity. And this number, it turns out, is also on the order of a few microns. It does depend on what lambda a is, but for lambda a on the order of tens of minutes to hours, that's the kind of range -- it could be smaller or bigger; 0.5 microns up to 5 microns is probably more accurate. Basically it sits, as I was trying to sketch here, right at the boundary between aerosols and non-aerosols. And below rc is actually where the peak of the distributions from typical expiratory activities lies, so the point is that those droplets are not affected very much by sedimentation.

The last thing I'll mention is this: as soon as we have these size-dependent properties, what happens to our calculation of transmission? I'll just put the mathematical formula on the board without really dwelling on it. The way you can compute what I've sketched here, for an actual distribution as shown, is to substitute into the formulas we had before. For example, what is the time-dependent transmission rate? Remember, that's Qb times the integral over all sizes of all the size-dependent quantities: pm of r, times C of r and t, times ci of r, dr, where ci is the infectivity. So as you solve this partial differential equation, the concentration field changes, as I've sketched, and you have to integrate these curves against the other factors -- the mask factor and the infectivity factor -- to figure out the transmission rate at that moment through the well-mixed air. And if we solve this equation, we can substitute back in and get our formula for beta: beta of t is Qb squared over V -- here I'm substituting P of r; remember, before, P was basically nd times Cv times the infectivity, so that was the production rate of virions, which we've already seen -- and when I substitute the solution we've already derived, just keeping track of the fact that these quantities are all radius dependent and appear under the integral, we are left with the integral from zero to infinity of pm of r squared, times Cv -- the concentration of virions per liquid volume, or mucus volume, in the droplet -- times ci of r, the infectivity, times nd of r, times Vd of r, where Vd of r is the droplet volume, all divided by lambda c of r, which is all this stuff, and the integrand is multiplied by the exponential relaxation, 1 minus e to the minus lambda c of r times t, dr. So this is your general formula for the time-dependent transmission rate in a room, where you select an exhaled-droplet distribution corresponding to the infected person's breathing activity -- the kinds of aerosol droplets that they are emitting -- and then you have all the other parameters to do with filtration, air flow and ventilation, viral deactivation, and sedimentation, and you finally do this integral and end up with the transmission rate.
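The radius-resolved quantities above can be put together numerically. This is a sketch under stated assumptions: the exhaled-droplet distribution `n_d`, the breathing rate `qb`, the viral load `cv`, and the constant infectivity and mask factors are all illustrative stand-ins, not measured data from the course; only the structure of lambda_c(r), r_c, and the beta(t) integral follows the derivation:

```python
# Radius-resolved removal rate lambda_c(r) = lambda_a*(1 + (r/r_c)^2)
# + lambda_v + p_f*lambda_f, the critical radius
# r_c = sqrt(9 * lambda_a * H * mu / (2 * rho * g)), and the transmission
# rate beta(t) as a quadrature over droplet radius. Illustrative parameters.

import math

LAM_A = 1.0 / 3600.0   # air change rate, 1/s (1 ACH)
H = 2.5                # effective ceiling height V/A, m
MU_AIR = 1.8e-5        # Pa*s
RHO = 1000.0           # kg/m^3, droplet liquid density
G = 9.81               # m/s^2

# Critical radius separating ventilation-dominated aerosols from
# fast-settling large drops; ~2.4 microns for these parameters.
R_C = math.sqrt(9.0 * LAM_A * H * MU_AIR / (2.0 * RHO * G))

def lam_c(r, lam_v=0.0, pf_lam_f=0.0):
    """Size-dependent relaxation rate lambda_c(r), 1/s."""
    return LAM_A * (1.0 + (r / R_C) ** 2) + lam_v + pf_lam_f

def n_d(r):
    """Toy exhaled-droplet number density per radius, peaked near 0.5 um."""
    r_um = r * 1e6
    return math.exp(-math.log(r_um / 0.5) ** 2) / r

def beta(t, qb=1.4e-4, V=100.0, cv=1e9, ci=1.0, pm=1.0,
         r_lo=1e-7, r_hi=2e-5, n=400):
    """Transmission rate beta(t) by trapezoidal quadrature over radius."""
    total = 0.0
    dr = (r_hi - r_lo) / n
    for k in range(n + 1):
        r = r_lo + k * dr
        vd = 4.0 / 3.0 * math.pi * r ** 3     # droplet volume Vd(r)
        lc = lam_c(r)
        f = pm ** 2 * cv * ci * n_d(r) * vd / lc * (1 - math.exp(-lc * t))
        total += f * dr * (0.5 if k in (0, n) else 1.0)
    return qb ** 2 / V * total
```

The two qualitative features of the derivation show up directly: `beta(t)` rises toward a steady state on the timescale 1/lambda_c, and the large-r part of the integrand is suppressed by the (r/r_c)^2 sedimentation term, so the aerosol range dominates.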
Transfer_of_respiratory_pathogens_Viral_deactivation_in_aerosols_ASIDE.txt
PROFESSOR: So, as an aside for more advanced students, let's try to fill in some mathematical details to provide a theory to support, or interpret, the Lin-Marr hypothesis of disinfection kinetics, having to do with the concentration of solutes during drying and their effect on deactivating viruses. To put it in mathematical terms: if we have a certain number of viruses Nv in a droplet, then we'll postulate that dNv/dt is minus lambda v0, the deactivation rate per solute-virion collision, times the volume fraction of disinfecting solutes, which we'll call phi d -- time dependent, having to do with the size of the droplet -- times Nv. The volume fraction of disinfecting solutes we'll write as alpha d, a constant, times phi s, the total volume fraction of solutes present. Alpha d might be, for example, the fraction of solutes that are sodium chloride or some other salt causing the damage to the virus, as opposed to the mucins or other macromolecules that may be present. Then, as the droplet shrinks with radius R of t, phi s simply gets rescaled relative to its initial value, phi s0, by R0, the initial radius, divided by R of t, cubed. So that's just the changing of the volume. Now let's recall some of our results from the earlier part of this chapter, having to do with Wells' theory of evaporation. If we consider diffusion-limited droplets, we've shown that the radius of the droplet versus time, relative to the initial radius R0, is the square root of 1 minus t over tau e, where tau e is the evaporation time: R0 squared divided by d bar, a constant with units of diffusivity, times 1 minus RH, the relative humidity.
Now, that predicts that pure liquid droplets shrink all the way to nothing and evaporate away, but, when there's solute present, there's a cutoff, which we've also discussed, that gives you an equilibrium stable size of the drop: R equilibrium relative to R0 is given by phi s0, the initial solute volume fraction, divided by 1 minus RH, raised to the 1/3 power. By writing that as the square root of 1 minus tau over tau e, we can also define the time tau at which you reach the equilibrium size by the diffusion-limited evaporation process. So that's the time to form a stable droplet nucleus. Now let's start combining all these equations, and write the volume fraction of disinfecting solutes, phi d of t. From the equation above, it'll be alpha d times phi s of t, which is phi s0 times this ratio, R0 over R, cubed. Using the expression for diffusion-limited kinetics, this gives me 1 over 1 minus t over tau e to the 3/2. And if we look at the ultimate limit, when t goes to tau -- when you've reached the droplet nucleus stage -- we're left with just alpha d times 1 minus RH. So that tells us the fraction of solutes present as a function of relative humidity, but also as a function of time, as drying goes on. So now let's go back to this dynamical equation and solve it. This is a first-order, separable differential equation, so we can write it as minus dNv over lambda v0 Nv is equal to phi d of t dt -- we've put all the N's on one side and the t's on the other side -- and we can integrate this equation. The integral of dN over N is the natural log of N, so we can write the left side as minus 1 over lambda v0 times the natural log of Nv over Nv0, where Nv0 is the initial value of Nv.
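The two relations just used -- the diffusion-limited shrinkage R(t)/R0 = sqrt(1 - t/tau_e) and the equilibrium nucleus size (phi_s0/(1-RH))^(1/3) -- combine into the nucleus-formation time in a couple of lines. A minimal sketch; the example values of phi_s0 and RH are illustrative assumptions:

```python
# Equilibrium droplet-nucleus size R_eq/R0 = (phi_s0 / (1 - RH))^(1/3),
# and the time tau to reach it, as a fraction of the evaporation time tau_e,
# from R_eq/R0 = sqrt(1 - tau/tau_e). Example values are assumptions.

def r_eq_ratio(phi_s0: float, rh: float) -> float:
    """Equilibrium droplet-nucleus radius relative to the initial radius."""
    return (phi_s0 / (1.0 - rh)) ** (1.0 / 3.0)

def tau_ratio(phi_s0: float, rh: float) -> float:
    """Time to reach the droplet nucleus, as a fraction of tau_e."""
    return 1.0 - r_eq_ratio(phi_s0, rh) ** 2

# E.g. 5% initial solute fraction at 50% relative humidity: the droplet
# stabilizes at about 0.46 of its initial radius, roughly 10% of its volume.
print(r_eq_ratio(0.05, 0.5), tau_ratio(0.05, 0.5))
```

Note that as RH approaches 1 - phi_s0 the nucleus size approaches R0 and tau approaches zero: at high humidity the droplet barely shrinks at all, which is the physical origin of the humidity dependence derived next.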
And, in time, we're integrating from the initial time 0 up to the droplet nucleus time tau of phi d of t dt. So, substituting our expression from above, we have alpha d phi s0 times the integral from 0 to tau of dt over 1 minus t over tau e to the 3/2. And we can do that integral and get alpha d phi s0 tau e -- to get a dimensionless integration variable, we need a tau e here, writing dt over tau e -- and, doing the integral, we get 2 times 1 over the square root of 1 minus tau over tau e, minus 1, evaluating at the two limits of integration, taking into account that the antiderivative of the integrand is 2 over the square root of 1 minus t over tau e. So, putting all this together, we can write the viability: the log of Nv over Nv0 is minus 2 alpha d phi s0 lambda v0 -- putting the lambda v0 back on the other side with the minus sign -- times two factors. First, there's the factor which we know has units of time: R0 squared over d bar. That's essentially a water-vapor diffusion time that enters the evaporation time, tau e, and it sets the timescale here. But then what we're really interested in is the relative humidity effect. So we have this factor -- and where does the 1 minus RH come in? The 1 over the square root of 1 minus tau over tau e is this term right here: that's R0 over R of tau, and R of tau is, by definition, R equilibrium. So from that factor we get 1 minus RH over phi s0, to the 1/3, minus 1. And then there's also a factor of 1 over 1 minus RH that comes from tau e, because tau e contains this basic timescale divided by 1 minus RH, which I've included. So the point of all this theory was to understand the dependence on relative humidity, which is what I've shown here in white.
And, if you plot this function of relative humidity -- the log of Nv over Nv0, so this is our relative viability of the virus, where Nv0 is the initial value -- then this white function looks like something which decays, reaches a minimum around 80%, or in the range from roughly 60 to 80%, depending on the value of the parameter phi s0, and then goes back up again. So, basically, we get a shape for the dependence on relative humidity that nicely matches the experimental data and is consistent with the hypothesis of disinfection kinetics that was postulated by Lin and Marr.
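The non-monotonic humidity dependence just described can be checked numerically from the formula derived above. The prefactor (which bundles alpha_d, lambda_v0, and R0^2/d-bar) and the value of phi_s0 are illustrative assumptions; only the functional form comes from the derivation:

```python
# ln(Nv/Nv0) = -2 * alpha_d * phi_s0 * lam_v0 * (R0^2/Dbar)
#              * (1/(1-RH)) * ( ((1-RH)/phi_s0)**(1/3) - 1 )
# Here 'prefactor' stands in for alpha_d * lam_v0 * R0^2/Dbar (assumed = 1).

def log_viability(rh, phi_s0=0.1, prefactor=1.0):
    """ln(Nv/Nv0) after drying, as a function of relative humidity rh."""
    x = 1.0 - rh
    return -2.0 * prefactor * phi_s0 / x * ((x / phi_s0) ** (1.0 / 3.0) - 1.0)

# Scanning RH from 1% to 95% shows the U-shape: viability falls, reaches a
# minimum (near RH ~ 0.66 for phi_s0 = 0.1, inside the quoted 60-80% range),
# then rises again at high humidity where the droplet barely shrinks.
grid = [i / 100 for i in range(1, 96)]
rh_min = min(grid, key=log_viability)
print(rh_min)
```

Setting the derivative of the bracketed factor to zero gives the minimum at 1 - RH = (3/2)^3 * phi_s0, so the location of the viability minimum shifts with the initial solute fraction, as noted in the lecture.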
Robotic_Manipulation_Fall_2022
Lecture_3_MIT_6421064212_Robotic_Manipulation_Fall_2022_Basic_pick_and_place_Part_1.txt
thank you okay welcome back everybody I think we've determined by the way I there is a shade up there so you can see the canvas being like crinkled in the corner but the buttons that allow you to move it are uh inoperable so uh they've they've hopefully adjusted the camera settings and it'll be less washed out but we'll keep fighting the fight all right last time we did a sort of quick tour of robot Hardware to sort of as background for the class and today we're going to start into some of the real core material where this time we really do want you to understand the equation you know every equation and and build the you know build up a solid foundation that we're going to use for the rest of the course okay so make sure you Slow Me Down Speed me up uh you know whenever if there's any questions today but I don't want to just drop in and you know start teaching you the um the approach until I motivated with an example so the um I told you the whole course is going to sort of unroll with giving making a robot do something very simple and then making the task harder and harder and and trying to build up our capabilities so for today we're going to do just a basic pick and place this is what we're going to achieve here we've got a red brick in a bin and your we are collectively going to figure out how to program the robot to go pick up that brick and move it over to the other bin okay we're going to do one major cheat which is the reason why I didn't bring the robot here to do it today in addition to the logistical hurdle that that was but we're going to assume that perception is given for today we're going to assume that someone has told you exactly the location of the red brick in the world and we're not going to rely on the process of getting that information from the cameras okay but given that Assumption of someone telling you exactly the ground truth initial position of the red brick and a Target position and orientation of the Red Brick we're going to be able to 
try to figure out all of the work we have to do to make the robot do that. Okay, and the sketch for how we're going to do that is like this. First, we're going to go through a bit of kinematics and spatial algebra. By the way, the polls, the survey in the first problem set: I love seeing the results of those. I read them carefully, and I adjust what I'm doing here because of it, so it was very helpful for me to see that 27.4 percent, I think, of you said you know kinematics; that helps me dial it in. But even those of you that know kinematics might still benefit from going through it the way we're going to go through it. I'm going to emphasize things a little differently than a standard robot kinematics treatment, really emphasizing the algebra and not the trigonometry. Okay. After we understand basic kinematics, we're going to start by pretending the robot isn't there and just thinking about where we want the hand to move over time. We're going to come up with a sketch for the end effector of the robot, the gripper, and we're going to just imagine that the block gets picked up when we get there. We're just going to hallucinate what we want the hand to do, and that's going to be a fairly easy thing to program once we have the language of kinematics, transforms, spatial algebra, and trajectories. And then we're going to do the work to convert the target end-effector trajectory into joint angles for the robot. That's the kinematics problem, and we do that by thinking about a coordinate frame attached to the gripper. I'm going to say all these things again carefully; this is just the sketch for the lecture. Okay, so we're going to figure out how to move that end effector by backing out what all the joint angles have to be on the robot, so that we can turn that into the iiwa position command going into our low-level
controller. Okay, so let me start by talking about kinematics, and spatial algebra is the way I want to talk about that. Kinematics is the language, the set of tools, for reasoning about the geometric properties of the system. It's about geometry. The concepts of geometry I'm sure are familiar to you; trigonometry is familiar to you. But even some of the smartest, most advanced roboticists I know often screw up their spatial algebra. I have done it for years; I will do it again. The underlying geometry is very familiar, but getting this right is very subtle. It looks so easy, but if you don't get it right, you will inevitably mess it up down the line. Okay, and this is how it goes. Typically you'll see a roboticist working on a problem, and it's not quite working out. They've typically got their right hand in the air, their head is not upright, most of the time they're doing something like this, and at some point they get frustrated and they phone their dynamics friend. In particular, there's a dynamics team that works on Drake, and they're just fantastically good, and you can phone your friend and ask: okay, I've got this series of operations I'm trying to get right, and it's wrong. What do I do? And they always have a really fast, correct answer. So what puts these people in their elite group, the people who can answer dynamics queries correctly almost all the time? They're very smart, that's for sure, but that's not the reason. They just took the time to make good notation. That's it. If you have the right notation, you can make the notation make your math work out right. It just takes a little bit of work up front to agree on a notation that makes it so you can't write things down incorrectly, on paper or in code. All right, so I'm
going to be a little fierce here today about telling you some of our notation, because it's going to save you bugs down the line if you listen. Okay, so we're going to put some emphasis on notation. It matters. All right, let's begin. The notion of geometry, of kinematics, always starts with just a point. So let's start thinking about a point in space; we'll call our point A. We want to talk about the position of this point in space, and we're going to call that the position p of A. P for position, not point, and A is the name of the point; we're going to name our points. Now we have to be a little careful: if you talk about a position in 3D space, positions are not absolute. They're always relative to something, even if it's the world origin. So we want to say it's relative to some other point, for instance, and we're going to write that as the position of A relative to B, and I'm going to use this weird superscript before the p. Everybody hates that; I hated it when I first saw it, but it's the way that you get things to work. But even that is still not quite enough. If I'm saying that I've got a point B and a point A, then I've got a position of A relative to B, and that's describing the vector between them. But if I want to write that vector down in three numbers, I have to embed it in some coordinate system. How do I decide which number is going to be my x coordinate, my y coordinate, my z coordinate in 3D space? I still need to define some coordinate system. So we're going to define coordinate systems with the notion of a frame; that's the right hand that people hold up in the air and look at from different angles. We're going to talk about a coordinate frame, or just a frame, F. Okay, now, this is
where I got my multi-colored chalk to help. A coordinate frame has its own origin, and then it has an X direction, a Y direction, and a Z direction. Now, that color matters; you'll remember it always. The mnemonic I was just running in my head, to make sure I got it right, is that XYZ maps to RGB. Every time you see these little triads, as they're called, these little coordinate frames, in pretty much any software — computer graphics software or whatever — you know which one is X, which one is Y, which one is Z just by the color code. Everybody should use the same color codes; every once in a while you see someone use something different, and that's just weird. So XYZ is RGB. Those are your three axes, and our full notation then (we've just done a point so far): we're going to say the position of point A relative to B, in coordinate frame F. That's the worst it's going to get, but it's going to go the distance for us. So this is our target, this is our relative-to, and this is our expressed-in frame; that's a subscript. Now, writing all of that all the time is nicely explicit, but there are some common choices for these, so we definitely have shorthand. If I were to write p A of F, for instance, that is just shorthand for when the relative-to and the expressed-in frame are the same. And there's a really important frame, which is the world frame, my world origin, which we'll call W. If I write just p of A up there, I'm going to assume that what you mean is A relative to the world origin, expressed in the world frame. That's just our shorthand. Okay. We're going to have a few other really important frames today: we're going to use G for our gripper frame and O for our object frame. This becomes a very nice vocabulary for starting to talk about these things. So, just so you
start thinking about the mechanics of these, let me do a little check-yourself. Let's say G is my gripper frame. By the way, if you look at the frame on the object here, the object frame O: red is the x-axis, green is the y-axis, blue is the z-axis. You can always use your right hand; it's the right-hand rule. And it's a little weird: you have to remember that Y is going into the board, not out of the board. That's called vehicle coordinates; it's used commonly in a lot of robotics, certainly in autonomous vehicles, and if you're piloting an airplane they do this too. For this class you can always assume we're in vehicle coordinates; there are pretty much two canonical choices out there. Okay, so now take a second and think about this: which is a possible value for the position of the object, my red brick, relative to the gripper, using the notation we talked about? Good, yeah. So if I use a frame as my relative-to, I'm talking about the origin of the frame; it's valid to put a whole frame in there, meaning its origin. Great, everybody's good? I see some hands. [Student: is there a benefit to decoupling the B and the F?] Yeah, you might have, even in the world frame, just the difference between two different points, the relative vector between two points; that's used commonly, in fact, and the mechanics of operating on these depend on having that ability to decouple them. Okay, option B is correct, because G is the implied expressed-in frame, and when you put your hand up like that and think about it, the object is along the y-axis here, so 0.3 has got to be the big one. And just to lock that in, if I were to put a W
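(A brief aside on the check-yourself above: the "expressed-in" bookkeeping can be sketched in code. This is a minimal sketch in plain Python; the gripper orientation and the value of p_GO are invented numbers for illustration, not from the lecture. Changing only the expressed-in frame is a rotation of the same vector.)

```python
import math

# Hypothetical numbers, just to exercise the notation: p_GO_G is the
# position of the object O relative to the gripper G, expressed in G.
p_GO_G = [0.0, 0.3, 0.0]   # the object sits 0.3 m down the gripper's y-axis

# Assume (hypothetically) the gripper is rotated 90 degrees about the
# world z-axis; R_WG re-expresses gripper-frame vectors in world coordinates.
c, s = math.cos(math.pi / 2), math.sin(math.pi / 2)
R_WG = [[c, -s, 0.0],
        [s,  c, 0.0],
        [0.0, 0.0, 1.0]]

def matvec(R, v):
    """Multiply a 3x3 matrix by a 3-vector."""
    return [sum(R[i][k] * v[k] for k in range(3)) for i in range(3)]

# Changing only the expressed-in frame: same vector, new coordinates.
p_GO_W = matvec(R_WG, p_GO_G)
```

The relative-to stays the same (still the object measured from the gripper); only the coordinates change, which is exactly why the answer to the check-yourself flips when the W appears in the notation.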
here, now I want the location of the red object relative to the gripper frame, but expressed in world coordinates. [Student: where is the world frame?] Oh, the world frame, I'm sorry, yes, that's right; just assume it's right here. I forgot to say that, thank you. It's at least aligned with the grid. [Student answers.] You got an A, yes. Okay. Now, the fact that the answer changed with just that one change in notation is going to be important, and you'll start to understand some of the properties. For instance, here's another example: when do these two things match? If I have different expressed-in frames, when are the positions the same? [Student: what is the object?] The object is this red brick; it's my little foam red brick, my little prop, and it's at or near the world frame. All you needed to know to answer this question is that the grid is the ground (I should have said that more carefully) and that the world frame is aligned with it; that's the x-axis in the world frame. [Student: so you'd picture the vector with your hand and then find it in world coordinates?] Exactly. So that vector, the one in my head that I drew from here to here, goes X two and Z negative two. Perfect. So when would I have a property like this: two different expressed-in frames but the same points; in what case would you have equality there? Okay, that's good, yeah: certainly if there's no rotational difference between C and D, if they're just translations apart, then that's going to hold. Okay. Now, that's a particular property that the notation is going to make it so you don't have to remember; you can think it through, but the notation is going to protect you from having to remember every detail. But that just exposes some of the subtlety that we want
to build the notation around. This is why we have rules, and why I like to think about the rules as an algebra. Okay, I'll leave that here and come over here. In a standard kinematics class for robots, you'll start by saying I have theta, and then you'll get some sort of a length, and you'll get a lot of x equals L sine theta, or this is cos theta, or something like this. All that stuff that you might have seen before in your kinematics classes, for the 27.4 percent of you that have seen it, is still valid, but I would like to lift us up: instead of talking about the trigonometry, we're going to talk about a frame here, a frame here, a frame here, and we're going to use algebra to talk about how they can be combined and how you go back and forth between them. When the computer goes to do that algebra for you, it will compute sines and cosines and everything like that, but we're going to stay at the algebra level for now. Okay, so here are the basics of the spatial algebra. We have an addition: I can add two relative positions together, but I need my superscripts to match for it to be a valid addition operation, and that is how I get from A to C; these need to match and these need to match. There's an additive inverse: p of A to B, if I take the negation of that, gives me B relative to A; it just flips the vector around, points it in the other direction. Okay, now you can see that with a few operations like this we can start building up a powerful set of tools that will let me think about how my frames move around in space. But so far we've only talked about positions and translations; the other thing I need to talk about is rotations. Again, there's a lot to dig into with rotations, especially 3D rotations, but I want to stay first at the spatial algebra level. Okay, so I'm going to have my
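(Another brief aside: the addition and additive-inverse rules just stated can be sketched in code. This is a minimal sketch; the Position class, its field names, and the example frames A, B, C are all hypothetical, invented for illustration — the rule it enforces, matching relative-to chains and matching expressed-in frames, is the one from the lecture.)

```python
class Position:
    """A relative position: the point `target` measured from `base`,
    expressed in coordinate frame `frame` (all names hypothetical)."""
    def __init__(self, base, target, frame, xyz):
        self.base, self.target, self.frame = base, target, frame
        self.xyz = list(xyz)

    def __add__(self, other):
        # Valid only when the chain lines up (A->B plus B->C) and both
        # vectors are expressed in the same frame.
        assert self.target == other.base, "relative-to chain must match"
        assert self.frame == other.frame, "expressed-in frames must match"
        xyz = [a + b for a, b in zip(self.xyz, other.xyz)]
        return Position(self.base, other.target, self.frame, xyz)

    def __neg__(self):
        # Additive inverse: flip the vector, swap base and target.
        return Position(self.target, self.base, self.frame,
                        [-a for a in self.xyz])

p_AB = Position("A", "B", "W", [1.0, 0.0, 0.0])
p_BC = Position("B", "C", "W", [0.0, 2.0, 0.0])
p_AC = p_AB + p_BC          # A -> C, still expressed in W
p_BA = -p_AB                # points the other way
```

The asserts are the point: with frame labels carried alongside the numbers, an invalid addition fails loudly instead of silently producing a wrong vector.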
rotation here that goes between two different frames. This is, again, my relative-to and my target frame, and think about it as the relative rotation that gets me from G to F. Now, I write this as a capital R. It looks like it could be a rotation matrix, and it's not wrong to think about it like that, but there are actually many ways to represent rotations on the computer or on your pad of paper, and I'm not assuming that it's a rotation matrix here; this is just the abstract concept of a rotation. If you choose to represent it with a rotation matrix, you're welcome to do that. If you want to represent it with Euler angles, three angles, roll pitch yaw, you can do that. If you want to represent it with quaternions, you can do that. We'll talk about all those again, but I just want to abstract that it's not necessarily a matrix, even though we're going to operate on it in a matrix-like way in our algebra. In particular, we will define the multiplication: I want to take a point that's expressed in one frame and express it in a different frame, and it's the rotation that tells me how to do that. So, back to your question: if I want to change my expressed-in frame and leave the other ones the same, then it's the rotation operators that act on those expressed-in frames. The superscripts I can shift with addition; the subscripts I shift with multiplication; put them together and you get the full transform chain. You can also multiply rotations: R A B times R B C, as long as those intermediate subscripts match, gives me a rotation, R A to C. And there's a multiplicative inverse: taking the inverse of a rotation just rotates the other direction, and composing the two gives me the identity. [Student question.] Yes, that's a key idea: you can change the expressed-in frame, and the origin doesn't matter. That's actually an exercise, but
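(Aside: here is the same rotation algebra sketched in one concrete representation, the rotation matrix. The specific angles are hypothetical. Composition chains the subscripts, and for this representation the multiplicative inverse is just the transpose.)

```python
import math

def Rz(theta):
    """Rotation about the z-axis, as a 3x3 matrix — one concrete
    representation of the abstract rotation in the lecture."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def transpose(A):
    return [[A[j][i] for j in range(3)] for i in range(3)]

# Composition: R_AB * R_BC = R_AC.  With rotations about a shared axis
# this reduces to adding the angles (hypothetical angles here).
R_AB = Rz(0.3)
R_BC = Rz(0.4)
R_AC = matmul(R_AB, R_BC)        # same rotation as Rz(0.7)

# Multiplicative inverse: for a rotation matrix, the inverse is the
# transpose, and R_AB * R_BA gives the identity.
R_BA = transpose(R_AB)
I3 = matmul(R_AB, R_BA)
```

As the lecture notes for a single axis, multiplying the rotations adds the angles; in full 3D the composition is still a multiplication, but it no longer reduces to adding three angle triples.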
it's a straightforward calculation to convince yourself of; it's exercise-worthy. [Student: can you multiply positions, or add rotations?] Multiplying positions, no; adding rotations, no. The standard operation, if you want to combine rotations, is to multiply them together. You'll see there are only a few operations that we allow, but they become very expressive. And notice, this is related to your point: we don't actually need an expressed-in frame for the rotations. If you're talking about the relative rotation between two frames, the origin doesn't matter, and you don't need an expressed-in frame. Okay. Now, the interesting stuff is when you have to put them together and do both, and that's where you get a full transformation of coordinates. Is that a question or a stretch? A stretch, that's good; you're welcome to stretch, and I'm sorry your arm is numb. [Student comment about composing rotations along one axis.] That's a great point: certainly in one degree of freedom, if you rotate along one axis and then rotate again, the multiplication of our rotations is like adding the angles. Completely agree. It gets more subtle in 3D, but you're completely right. Okay. So a transform, a coordinate transform as we call it, is going to be a rotation plus a translation put together, and we're going to use capital X for that: a rotation and a translation. Now, you might have heard this by a few different names. Pose is another word for it; you might call it a full spatial transform. These are not quite synonyms — technically pose is a noun and transform is a verb, so it's okay that we have both, I guess, and each could be used correctly in a sentence in that sense. The object in Drake, in the code that we use to talk about this, is the RigidTransform. Okay. There's
a multiplication for transforms; it looks a lot like the one for rotations. And there's a multiplicative inverse that flips the order of those two subscripts. [Student: why do we choose it to be a multiplication?] It's really, I think, to keep things familiar. We could have defined our algebra any way we wanted, but the way you're going to operate on these things is most similar to what you think about with linear algebra multiplication. Even just the notion of an inverse — when do you want to use a minus sign, when do you want to call it an inverse — I think this is the most consistent with our wiring. Okay, now it's interesting to think about this. Remember, I've tried to abstract away the mathematical concept of a rotation from its representation in numbers. If I represent a rotation as a three-by-three matrix, for instance, with the XYZ basis coordinates, then this inverse is actually just the matrix inverse; so that's another answer for you — it's going to operate in many cases like a matrix. It happens that if you do a similar kind of numeric representation of a transform, these are typically represented with three-by-four matrices, because that's what graphics processors, and even CPUs these days, can chew up and spit out very well; they can crank through lots of them. But then the inverse of this is not just a matrix inverse. Even though there's a matrix representation of these things, it's slightly more; it's not hard, and it's again computationally optimized, but it's not quite a matrix inverse. We've defined an abstraction here that is independent of the matrix representation, and the code will take care of
that if you operate on them like matrices. Okay. The most important thing here, then — I'll put it right here — is that this pose helps us move points around between different frames; it combines the operations of translation and rotation. In particular, we have to get the subscripts and superscripts right: I want to transform a position relative to F, expressed in F, into one relative to G, expressed in G. And if you think about it, that's the operation that matters the most. If I've got a point in the world — say my camera took a picture of something and I want to change it from camera coordinates to world coordinates — this is the operation we want, and all you would need is the pose of the camera, which I could write as X W C, the camera in the world frame. That would allow me to take points measured by the camera and project them into world coordinates. Okay, and when I do define this multiplication of a transform times a position, you can also break it down into two steps: it's the addition and the rotation that are held inside that transform, with the subscripts implied; that's my shorthand. So this transform collects the operations of position and rotation. Now, I promise, you'll start writing code and you'll twist yourself around asking: should I add the orientation or the position of the frame first and then multiply, or should I multiply first, or whatever? This notation can save you, if you choose to use it. Any more questions on that? I love the questions. [Student: so you're adding the position and multiplying the rotation times this other position?] Yes: that takes the total transform from frame F to frame G. If I want to take this thing that's expressed in and relative to F and put
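(Aside: the transform operations just described — multiply, invert, and apply to a point — can be sketched as follows. A minimal sketch in plain Python: a transform is stored as a (rotation matrix, translation) pair, and the camera pose and point are hypothetical numbers, not from the lecture.)

```python
import math

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def matvec(A, v):
    return [sum(A[i][k] * v[k] for k in range(3)) for i in range(3)]

def apply(X, p):
    """X_GF applied to p_FA gives p_GA: rotate, then add the offset —
    the two-step breakdown from the lecture."""
    R, t = X
    return [a + b for a, b in zip(t, matvec(R, p))]

def compose(X_AB, X_BC):
    """X_AB * X_BC = X_AC (intermediate subscripts must match)."""
    (R1, t1), (R2, t2) = X_AB, X_BC
    return (matmul(R1, R2), [a + b for a, b in zip(t1, matvec(R1, t2))])

def inverse(X):
    """Flips the subscripts: the inverse of X_AB is X_BA.
    Note this is NOT just transposing a matrix: t becomes -R^T t."""
    R, t = X
    Rt = [[R[j][i] for j in range(3)] for i in range(3)]
    return (Rt, [-v for v in matvec(Rt, t)])

# Hypothetical camera pose X_WC: 1 m forward, 2 m up, turned 180 deg about z.
c, s = math.cos(math.pi), math.sin(math.pi)
X_WC = ([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]], [1.0, 0.0, 2.0])

p_CA = [0.0, 0.0, 1.0]       # a point seen 1 m along the camera z-axis
p_WA = apply(X_WC, p_CA)     # the same point in world coordinates
```

The `inverse` function illustrates the remark above: even though a transform has a matrix representation, its inverse is not a plain matrix transpose — the translation has to be rotated and negated too.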
it as expressed in and relative to G, then I can do this, which in some ways is just shorthand for doing the two steps. [Student: a transform contains both the position and the rotation?] Correct. [Student: and those are frames?] They're frames, because points don't have orientation. These are all good things to check, and when I wrote the notes I tried to think about every one of them and be explicit; they're all written up carefully in the notes. There's one of them where you'd think you could use a point, but it just never occurs in practice, so we don't even include it in our notation; but everything we've said is consistent. Okay, good. So we now have a basic language for operating on transforms, for taking points and moving them around between different coordinate frames. This is our language. And this language is also going to extend to velocities, for instance, even accelerations and forces; the same tools go all the way up through the dynamics level. We've only talked about position so far. Okay, here's an interesting question for you — I'm going to reference the problems that you just finished. When I call plant dot GetPositions, let's say with the context of the iiwa, what do I get out? This isn't a rhetorical question, it's a real question, but I'm just using it to connect to yesterday. What does this thing return? The vector of joint positions. And how big was that vector? Seven, most of the time; it happens that in the pset problem I think we used the planar one, so it was three. In full 3D the iiwa has seven degrees of freedom; in the plane we locked a bunch of them out, so there are only three. Okay, so this returns a seven-by-
one vector. What do you think you would get if you called GetPositions with the plant context for the foam brick? [Student: seven joint angles?] Okay, seven joint angles — but which seven? The iiwa joint angles? Got it; that's not what I'm looking for, but it's a good idea. [Student: could it be a scalar?] It could be, but it needs to be a complete description of the position of that brick. The generalized position means position and orientation — I know that's confusing, but the generalized position is everything I need to represent the configuration of that brick. [Student: I don't know which rotation representation we're using internally, but maybe if it's a quaternion it would be seven by one.] Good. It does come out to be a seven-by-one vector, seven numbers. Why seven? The seven numbers are the X position, the Y position, the Z position, and then the quaternion x, y, z, w, where this is a unit quaternion: the magnitude of Q is one, that's why it's called unit, so there's an extra constraint on it. I know not everybody knows quaternions, and my point is not to teach you everything about them; my point is that in robotics there are many ways to represent 3D rotation, and even in a single simulation run you will go back and forth between them often, and we try to make that easy. So if you have a RigidTransform, you can extract its rotation; you can flip it to a quaternion representation, or to a three-angle roll-pitch-yaw representation; you can treat it as a matrix. Each of those is particularly well suited to particular operations: if you want to throw it on the GPU and compute a series of transforms, then you want the matrix
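(Aside: the unit-quaternion constraint and the back-and-forth between representations can be sketched as follows. A minimal sketch using the standard quaternion-to-rotation-matrix formula; the example quaternion, a 90-degree turn about z, is a hypothetical value chosen for illustration.)

```python
import math

def quat_to_matrix(w, x, y, z):
    """Convert a unit quaternion to a 3x3 rotation matrix (standard formula)."""
    n = math.sqrt(w * w + x * x + y * y + z * z)
    assert abs(n - 1.0) < 1e-9, "unit quaternion: the extra constraint"
    return [
        [1 - 2 * (y * y + z * z), 2 * (x * y - w * z),     2 * (x * z + w * y)],
        [2 * (x * y + w * z),     1 - 2 * (x * x + z * z), 2 * (y * z - w * x)],
        [2 * (x * z - w * y),     2 * (y * z + w * x),     1 - 2 * (x * x + y * y)],
    ]

# A quaternion for a 90-degree rotation about z (hypothetical example):
# w = cos(theta/2), and the rotation axis scaled by sin(theta/2).
h = math.sqrt(0.5)
R = quat_to_matrix(h, 0.0, 0.0, h)
# R should match the z-axis rotation [[0,-1,0],[1,0,0],[0,0,1]].
```

Four numbers plus one constraint: that is the trade the lecture describes, carrying one more number than the three of roll-pitch-yaw in exchange for a singularity-free representation.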
representation, and those become matrix multiplies that are heavily optimized. There are other operations where, if you took your rotation matrix and put its nine numbers in a vector, you'd be carrying around more numbers than you need; it's not an efficient representation. You would think you only need three numbers, the roll, pitch, and yaw, but roll-pitch-yaw — I won't go into great detail — has singularities, so you can get yourself into problems if you only use roll-pitch-yaw; it's not actually a complete representation of orientation in 3D space. There's something called gimbal lock, and robots don't like that. So we carry around four numbers in our minimal vector representation of an orientation, plus one extra constraint, and that's how we get around the singularities. [Student question about the context.] In the context, because this is a floating base, it exists only in the world frame — that's true, and that'll be super clear in a few pictures, I think; call me on it if it's not. Nice. Okay, so we're building up the tools, with many different representations of orientation. It's interesting, actually: everything is beautiful and clean in two dimensions. If you want to do rotations in 2D, you can represent them with a single number, an angle, and everything works beautifully. When you get to 3D, everything's worse; there are all these weird quirks about representing orientations in 3D. We have highly optimized tools to deal with it, but it's slightly unsatisfying that there's not one right answer for all of these. When we get to spatial velocities, angular velocity, things collapse and there actually is one right representation; but orientation is bad, it's ugly. Okay. So now that we have our algebra, we're going to program a robot, and I told you the way
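(Aside, before moving on: the gimbal lock just mentioned can be demonstrated numerically. A minimal sketch, assuming the extrinsic x-y-z roll-pitch-yaw convention and hypothetical angles: at a pitch of 90 degrees the roll and yaw axes align, so two different (roll, yaw) pairs with the same difference produce the identical rotation matrix — three angles can no longer distinguish all orientations.)

```python
import math

def Rx(t):
    c, s = math.cos(t), math.sin(t)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def Ry(t):
    c, s = math.cos(t), math.sin(t)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def Rz(t):
    c, s = math.cos(t), math.sin(t)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def rpy(roll, pitch, yaw):
    """Extrinsic x-y-z convention (an assumption): R = Rz(yaw) Ry(pitch) Rx(roll)."""
    return matmul(Rz(yaw), matmul(Ry(pitch), Rx(roll)))

# At pitch = 90 degrees, only the difference roll - yaw matters, so two
# different (roll, yaw) pairs give the same rotation matrix.
Ra = rpy(0.5, math.pi / 2, 0.3)
Rb = rpy(0.2, math.pi / 2, 0.0)   # same roll - yaw = 0.2
```

This is the singularity the lecture warns about: near that pitch, recovering roll and yaw from the matrix is ill-posed, which is the practical reason for carrying quaternions instead.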
we're going to program the robot to start is by making a series of keyframes. I'm going to begin by assuming the world was perceived by an oracle — someone, like the Oracle of Delphi, says: your object is at X W O. When I say oracular, or an oracle (the etymology of that is weird), we often mean: if someone magically gave you the right answer, we call that the oracle, or oracular perception. In this case the object — I'll use O for the object frame — has a pose, which I'm going to describe in numbers with a transform in world coordinates. Someone told me where the red brick is, and they told me by giving me X W O. Now my job is to come up with a series of hand poses: I want to figure out what X W G is at many different times. That's the next step of our operation. Actually, there are going to be many different frames to think about. I'm going to call this one the object's initial frame (it says "initial" there), and I want a target or final object frame, where I want to set it down; I called it "goal" in my notes, and if I don't make that match, I'll do the wrong thing in ten minutes. Then I need to start designing the gripper keyframes. There's an initial gripper frame: when the robot woke up, the gripper was in some particular initial location, and the object is in some initial pose I've been told about. Then I need to figure out where the gripper should be when I pick the object, and where it should be when I place the object. And actually there's a subtlety, because we don't want to knock the object over when we're coming in to pick it up, so we typically make a pre-pick location: we first just drive the robot simply to a pose just above the object.
We'll call that pre-pick. Then we'll go down, close our hand, go back up to pre-pick, and then go over to the pre-place. I sound silly saying it, but I think you know what I mean: pre-pick, pre-place, and then maybe G final — I'll go to some final position. So these are the frames, abstractly, and we're going to represent them by giving each of them a transform, first in world coordinates, and we'll design a set of keyframes that do that. By the way, all that algebra is right in the notes, and you can highlight on it and annotate my lectures; all the rules are there, carefully, the inverses and all that. So this is roughly what we're trying to do now: come up with this imagined hand moving through the air along a trajectory from object-initial to object-goal, ignoring the dynamics of actually picking it up; we're just going to hallucinate: if I were to do this, what would it do? It's pretty funny, actually: I made this list, typed in the code to make the example, and I had forgotten an important detail — there's one more frame that I think you need; it seems pretty important, and it'll come up in a second. It turns out you can't go in a straight line from the pre-pick to the pre-place. This is a simulation running, and I've sheared the hand off, because I forgot to clear the bins. So we're actually going to add one more frame in the middle here, G clearance. Mea culpa. Look at that — and that's why we test things in simulation before we go into the real world. Okay. So the language we have now makes this actually relatively easy. It's a little verbose, because there are a lot of frames, but every individual line is very simple. To make the keyframes, if someone just tells me the initial gripper pose and the initial object pose, I can go
through. I should say: when I write p A B F on the board, to write that in code we do p underscore A B underscore F — the board notation is hard to write in your software editor, so this is our translation of it, and similarly for rotations and transforms. So given that, you can read through here. The first thing I do is decide an initial grasp for the object: I define the position of the grasp by first thinking about where I want the object to be relative to the hand. The numbers here are zero, 0.11, zero. Why is that? I want it to be down the y-axis — green is y — and positive 0.11 puts it right about here: I'd like the center of the object to be at y equals 0.11, and it turns out that's the length of the fingers of the hand, in meters. Otherwise it's zero, zero, so that places my hand exactly above it. Then, for my rotations, I just need to make sure the frame is such that I'm coming down from above. I wrote the relative orientation of the gripper frame relative to the object with a couple of rotation matrices — that's my least favorite line in this, and I could have done it a couple of different ways, but I made an x rotation and a z rotation; I thought that would be the simplest out of the box. Then I can combine the rotation and the position that I desire of the object relative to the gripper into this object. But that's not enough: I still have to get from the grasp to the world. The object-relative-to-gripper is the inverse, and since I know all of the relative transforms, I can make all of that work: X W O initial times X O grasp tells me what the grasp should be at the time of picking. It's all just following the algebra. It was interesting: we asked separately, do you know Python, which most of you said yes, and we
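(Aside: the grasp-frame chain just described can be sketched in code. A minimal sketch: the 0.11 m finger-length offset is from the lecture, but the specific rotation angles are assumptions — the lecture only says "an x rotation and a z rotation" without giving values — and placing the object at the world origin is hypothetical.)

```python
import math

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def matvec(A, v):
    return [sum(A[i][k] * v[k] for k in range(3)) for i in range(3)]

def Rx(t):
    c, s = math.cos(t), math.sin(t)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def Rz(t):
    c, s = math.cos(t), math.sin(t)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def compose(X1, X2):
    (R1, t1), (R2, t2) = X1, X2
    return (matmul(R1, R2), [a + b for a, b in zip(t1, matvec(R1, t2))])

def inverse(X):
    R, t = X
    Rt = [[R[j][i] for j in range(3)] for i in range(3)]
    return (Rt, [-v for v in matvec(Rt, t)])

# Where I want the object relative to the gripper: 0.11 m down the
# gripper's y-axis (the finger length mentioned in the lecture).
p_GO = [0.0, 0.11, 0.0]
R_GO = matmul(Rx(math.pi / 2), Rz(math.pi / 2))   # assumed angles
X_GO = (R_GO, p_GO)

# Hypothetically place the object at the world origin, then chain:
# X_WGgrasp = X_WOinit * X_OG, where X_OG is the inverse of X_GO.
X_WOinit = ([[1, 0, 0], [0, 1, 0], [0, 0, 1]], [0.0, 0.0, 0.0])
X_WGgrasp = compose(X_WOinit, inverse(X_GO))
```

With these particular assumed angles the gripper origin lands 0.11 m directly above the object along world z, with the gripper's y-axis (the finger direction) pointing straight down at it — a sanity check that the "coming down from above" intent survived the inverse.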
asked if you know numpy, and fewer of you said yes. This is matrix multiplication in numpy, the 'at' (@) symbol, right? Don't shoot the messenger; that's not my favorite, but that's just a matrix multiplication. The numpy-style abstraction is provided here on transforms, and behind the scenes it's doing what it needs to do with whatever rotation representation you've chosen in order to make the math we just did work. Okay, and with a relatively simple cascade of transforms we can come up with a list of gripper poses in the world frame, and I just did it in a little dictionary here that goes through and names all of them. The clearance one is actually one of the more interesting ones: I was in some initial orientation and had some final orientation, and to be somewhere halfway in orientation between them I had to do a little bit of orientation math to decide what the half angle was in a general way. That was easiest by switching to yet another representation of orientations, the axis-angle representation: there you just keep the axis, divide the angle in two, and flip back, and you're good. So all of these representations are good at different pieces of the stack. That gives us our keyframes, and then we have to somehow decide the timing, right? We need to go from just keyframes to keyframes with timing. I want to go from X_G initial, X_G pre-pick, to some trajectory that I'm going to execute on the robot, which is the gripper in the world frame as a function of time. Somehow I need to embed those in time: maybe at time zero I'm at X_G initial, and at different times I'm going to be at X_G pre-pick, and so on. But in order to move the robot I need to define how I get between the two, right? The language we use for these is the language of trajectories, so I'm going to make a pose trajectory. And it's interesting: now, in order to do this, we
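The "halve the angle" trick just mentioned can be sketched in a restricted setting. Assuming (my simplification, not the lecture's) that both orientations are rotations about the z-axis, the axis-angle step reduces to extracting one angle with `atan2`, halving it, and rebuilding the rotation matrix; the general 3D case works the same way but extracts an axis as well.

```python
import math

def Rz(theta):
    """Rotation matrix about the z-axis."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def transpose(A):
    return [[A[j][i] for j in range(3)] for i in range(3)]

def halfway_rotation(R_WA, R_WB):
    """Orientation halfway between two z-axis rotations, via the axis-angle trick:
    form the relative rotation, extract its angle, halve it, rebuild."""
    R_AB = mat_mul(transpose(R_WA), R_WB)       # relative rotation from A to B
    angle = math.atan2(R_AB[1][0], R_AB[0][0])  # its angle about z
    return mat_mul(R_WA, Rz(angle / 2.0))       # rotate halfway onward from A

# Halfway between identity and a 90-degree rotation should be 45 degrees.
R_half = halfway_rotation(Rz(0.0), Rz(math.pi / 2))
```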
have to think about what's the right way to interpolate between poses, between transforms. For positions it's perfectly reasonable to just do a straight-line interpolation; if you know the term from filtering or something like that, you could call it a first-order hold. So for positions p_G at time zero and p_G at time one, linear interpolation is fine. You can get fancier: maybe you don't want any discontinuities (your robot might jerk if you do that), so you could smooth those out with something like a cubic spline; people do that. But for today we're going to just do a linear interpolation. So I define the times, time initial and time pre-pick, and then I take a linear combination of those two keyframes, indexed by time. What about the rotations, though? Once again, unfortunately, we have to think about the different ways to represent rotation. You can do axis-angle, but the axis is changing, not just the angle, as you move along. Interpolating linearly in rotation matrices is not a good idea; I could convince you of that if you want, but just consider the numbers that populate a rotation matrix. A rotation matrix is a three-by-three orthonormal matrix: you'd like the columns to all be of unit length. If you interpolate between two orthonormal matrices, you can get columns that are not orthonormal or of unit length; you wouldn't even have a valid rotation matrix. If you just took rotation one and rotation two and smashed them together, you wouldn't get a valid rotation. It turns out that the most natural way to do this is in the language of quaternions again. So of our many rotation representations, quaternions tend to be the winner here, and basically you take what is almost fair to call a straight line, and
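The first-order hold over position keyframes described above is only a few lines. The keyframe times and positions here are made-up numbers for illustration; the real pipeline stores poses, not bare positions.

```python
def interpolate_position(times, positions, t):
    """First-order hold: straight-line interpolation between position keyframes."""
    for i in range(len(times) - 1):
        t0, t1 = times[i], times[i + 1]
        if t0 <= t <= t1:
            a = (t - t0) / (t1 - t0)  # interpolation fraction within this segment
            return [(1 - a) * p0 + a * p1
                    for p0, p1 in zip(positions[i], positions[i + 1])]
    raise ValueError("t outside keyframe range")

# Hypothetical keyframes: lift straight up, then translate sideways.
times = [0.0, 1.0, 2.0]
positions = [[0, 0, 0], [0, 0, 0.2], [0.5, 0, 0.2]]
p = interpolate_position(times, positions, 0.5)  # halfway through the first segment
```

Swapping the segment formula for a cubic spline would remove the velocity discontinuities at the keyframes, as mentioned in the lecture.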
in quaternions that's the right thing. The quaternions have to be unit norm, just like rotation matrices do, so you have to do an interpolation on the unit sphere, basically. Okay, that's called spherical linear interpolation: 'slerp' if you're cool. Spherical linear interpolation, taking some liberties with the letters, yeah. And the code will do this for you; all you have to know is, don't call linear interpolation on quaternions, you'd be unhappy; call slerp on the quaternions. Now this matters a lot, actually. A lot of people have thought about pose estimation with deep networks, for instance. You're trying to train a neural network, let's say, to output poses. If you don't get these details right, you might get zero training error on this pose and zero training error on that pose, but when the network goes to interpolate, you're going to have the same problems. So the representation you use for the output layer in a neural network that's trying to estimate pose matters, and it's all the same details we're doing here; ours just happens to be interpolation across time. But for years people would say, 'my neural network's not learning poses,' and it's just, wow, you asked it to learn something it sort of couldn't learn, and then the field figured out the right ways to ask it. Okay, is that much clear? So it turns out we try to make all this stuff easy in code. There are various different trajectory classes you can use: PiecewiseQuaternionSlerp, for instance, and PiecewisePose, which takes the linear interpolation plus the quaternion slerp interpolation and handles the whole pose. So a PiecewisePose will take a pose at this time, a pose at this time, a pose at this time, and so on, and give you a nice smooth linear interpolation, with the rotations handled as appropriate. And that
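For the curious, here is a from-scratch sketch of slerp on unit quaternions stored as `(w, x, y, z)` lists. This is an illustration of the math, not Drake's implementation; in practice you would call `PiecewiseQuaternionSlerp` and let the library handle it.

```python
import math

def slerp(q0, q1, a):
    """Spherical linear interpolation between unit quaternions (w, x, y, z)."""
    dot = sum(u * v for u, v in zip(q0, q1))
    if dot < 0.0:                    # q and -q are the same rotation: take the short way
        q1, dot = [-c for c in q1], -dot
    if dot > 0.9995:                 # nearly parallel: fall back to normalized lerp
        q = [(1 - a) * u + a * v for u, v in zip(q0, q1)]
        n = math.sqrt(sum(c * c for c in q))
        return [c / n for c in q]
    theta = math.acos(dot)           # angle between the quaternions on the 4D sphere
    s0 = math.sin((1 - a) * theta) / math.sin(theta)
    s1 = math.sin(a * theta) / math.sin(theta)
    return [s0 * u + s1 * v for u, v in zip(q0, q1)]

# Identity to a 90-degree rotation about z; halfway should be 45 degrees about z.
q0 = [1.0, 0.0, 0.0, 0.0]
q1 = [math.cos(math.pi / 4), 0.0, 0.0, math.sin(math.pi / 4)]
q_half = slerp(q0, q1, 0.5)
```

Note that the result stays exactly on the unit sphere, which is precisely what naive linear interpolation of the four numbers would violate.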
gets us our imagined trajectory. A lot of tools, but hopefully it's okay. All right, so that was step two: step one was just understanding this notion of frames, step two was making the sketch, and step three is connecting the gripper poses back to joint angles. Luckily, forward kinematics is a pretty simple thing given all the spatial algebra. Forward kinematics is the operation of going from joint positions to poses. Most of the time, for most of our robots, I could just say joint angles, since it's mostly angles, but you could have a prismatic joint, a linear joint, on your robot, so more generally it's joint positions; you can even have helical joints and all kinds of weird things on robots. And just to get the language down: inverse kinematics is from poses to joint positions. We'll do differential kinematics soon too, which is basically the derivative of this map; it goes from joint velocities to 3D spatial velocities, which we'll define soon. This is the family of kinematics methods; the one we want first here is just forward kinematics. Forward kinematics would be: if I want to find the pose of the gripper in space, I need that to be a function of the joint positions. This is the forward kinematics function. The way we do this (and in most cases the software will do all the heavy lifting for you; you just need to know how to ask it) depends on how we represent poses, but mostly it's the picture you already suspect. I've got a kinematic frame at my gripper. Step one is to go to the kinematic frame in my second-to-last link using my spatial algebra, then to my third-from-last link, my fourth, until you get all the way back to the world, and you just recursively apply the spatial
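To make the forward kinematics map concrete, here is the standard toy example: a planar two-link arm with revolute joints, rather than the seven-joint arm from the lecture. The link lengths are assumed values for illustration.

```python
import math

def forward_kinematics(q, link_lengths=(1.0, 1.0)):
    """Pose (x, y, theta) of the end frame of a planar 2-link arm.

    q[0] is the shoulder angle from the x-axis; q[1] is the elbow angle
    relative to the first link. Composing the two link transforms gives
    the familiar closed form below.
    """
    l1, l2 = link_lengths
    x = l1 * math.cos(q[0]) + l2 * math.cos(q[0] + q[1])
    y = l1 * math.sin(q[0]) + l2 * math.sin(q[0] + q[1])
    return (x, y, q[0] + q[1])

# Arm bent 90 degrees at the elbow: end effector near (1, 1), facing +y.
pose = forward_kinematics([0.0, math.pi / 2])
```

Given the joint angles, the pose is unique; the interesting complications all live in the inverse direction.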
algebra, and it'll do the work for you. The only interesting thing is that the relative transform between any two bodies, when you have a joint in the middle, depends partly on q. So going from body one to body two relies on first saying, from the body frame, where is the joint; then figuring out the relative transform across the joint as a function of the joint angle; and then going from that joint to the parent frame. There are three transforms that will take you from your one link to the parent link. So typically, if I've got to go from body one to body two, let me actually call them the parent and the child, how about that: to go from the parent to the child, I first go from the parent frame to the parent-side joint frame, then across the joint (let's say it's q7, my seventh joint) from the parent joint frame to the child joint frame, and then from the child joint frame to the child frame. And every one of those relative transforms is composed through our spatial algebra. Yes, that was my seventh joint angle on the robot, but the point is really that it's just one number for a revolute joint that defines the rotation matrix, if you will, that sits inside here; if it's a revolute joint, it changes the angle. Now, this is not an artificial construction; it's everywhere in robotics, whether you realize it or not. Right in the robot description files, if you look at the way things are defined, you define first a link, say link one: you give its pose, its inertial pose, where its center of mass is, and its moment of inertia. But then you also say what's the joint, and you give the position of the joint as a relative pose. This is not my format; this is the standard format. You say what's the child, what's the parent, and the fact that it's revolute lets you know what this operation has to be as a function of that one angle. So it's just a
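The three-transform composition across a joint can be sketched in the plane, where a pose is just `(x, y, theta)`. The fixed offsets below (joint 0.5 m along the parent's x-axis, child frame 0.3 m past the joint) are made-up numbers standing in for what a URDF would specify.

```python
import math

def compose(T1, T2):
    """Compose planar poses T = (x, y, theta): rotate T2's offset by T1's angle."""
    x1, y1, th1 = T1
    x2, y2, th2 = T2
    return (x1 + math.cos(th1) * x2 - math.sin(th1) * y2,
            y1 + math.sin(th1) * x2 + math.cos(th1) * y2,
            th1 + th2)

def X_PC(q, X_PJp, X_JcC):
    """Parent-to-child transform across one revolute joint:
    fixed parent-to-joint offset, then the joint rotation, then joint-to-child."""
    X_JpJc = (0.0, 0.0, q)  # a revolute joint is a pure rotation by the joint angle
    return compose(compose(X_PJp, X_JpJc), X_JcC)

# With the joint at 90 degrees, the child frame swings to (0.5, 0.3).
T = X_PC(math.pi / 2, (0.5, 0.0, 0.0), (0.3, 0.0, 0.0))
```

For a prismatic joint, `X_JpJc` would instead be a q-dependent translation; the surrounding composition is identical.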
matter of composing these transforms together, and the order of those transforms and the values in those transforms are completely described in our robot description formats. The iiwa was kind of a boring one, because you're really just going from the tip back to the base. The Allegro hand is slightly more interesting. If you plot the kinematic tree, if you just plot the parent-child relationships straight from the URDF, the robot description file that describes the hand, you'll see that there's a root, like the palm of the hand, and then each of the fingers has a revolute joint that connects to the hand, and then another joint that connects to that, and another joint that connects to that; these are all revolute. So if I have a red ball, maybe an apple (is that an apple? yeah, an apple) relative to the tip of my finger, and I want to figure out where it is relative to the tip of the hand, I just apply the transforms up the tree. And similarly, if I need to relate these two fingertips, I can go up and back down. Typically there's a lot of very clever caching that happens, so you can ask questions of the kinematics many, many times and it just computes the tree once, then makes it fast to ask as many questions of the kinematics as you want. Sorry, I saw a few hands, yeah. Awesome, so that's a great question: are there multiple solutions? The map from joint positions to poses is unique. If you tell me what the joint angles are, the robot really is in that position and the end effector is in a known position; that is a one-way map, and it's unique. If you ask me to put it in a pose, though, there could be multiple joint configurations that achieve the same pose, so in general the inverse kinematics problem is harder than the forward kinematics. It turns out that the differential kinematics is also a little bit easier to think about, so
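The up-and-back-down tree traversal just described can be sketched with a toy hand. To keep the sketch short, bodies here carry translation-only offsets in their parent's frame (real kinematics composes full poses, and the offsets across joints depend on q); all names and numbers are hypothetical.

```python
# Parent of each body, and its fixed (translation-only) offset in the parent frame.
parent = {"palm": None, "finger1": "palm", "tip1": "finger1",
          "finger2": "palm", "tip2": "finger2"}
offset = {"palm": (0.0, 0.0), "finger1": (0.1, 0.0), "tip1": (0.05, 0.0),
          "finger2": (-0.1, 0.0), "tip2": (0.05, 0.0)}

def position_in_world(body):
    """Walk up the tree, accumulating offsets until we reach the root."""
    x, y = 0.0, 0.0
    while body is not None:
        dx, dy = offset[body]
        x, y = x + dx, y + dy
        body = parent[body]
    return (x, y)

def relative_position(a, b):
    """Position of b relative to a: go up from each body to the world, then compare."""
    ax, ay = position_in_world(a)
    bx, by = position_in_world(b)
    return (bx - ax, by - ay)

rel = relative_position("tip1", "tip2")  # fingertip-to-fingertip offset
```

A real engine would cache the world poses after one pass over the tree, which is the optimization mentioned in the lecture.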
we're going to do a lot of the work for that demo, actually, while avoiding solving the full inverse kinematics problem; we'll get through those first problems another way. Yeah, you can ask. Yeah, you're right on. So the follow-up question was: why is forward kinematics useful? Absolutely, we need to solve a version of the inverse problem, but it turns out that the differential kinematics version is going to be the best solution for this first demo. Basically, when the robot wakes up, I have a q, and I'm going to figure out where the end effector is now; then, if I want to move the hand along a particular trajectory, it's easier to figure out differentially how I should move my joints to follow that trajectory than to answer the question 'I'm here, I want you to be over there, from scratch decide how to get there.' It's actually easier to have a fully defined trajectory that takes you there, because you can use the Jacobians of the kinematics, and we're going to see that. So we actually are going to do differential inverse kinematics, because it's easier than the full inverse kinematics problem. Great question. Yes? Oh, good, yeah: I was using P for parent, C for child. Okay, so then think about this; there's a body, and this is where multi-color comes into play. I've got a parent object, a parent, and it has some frame here; it's 3D in general, I'm just going to draw the 2D. And then I've got a child, and it's got some natural coordinate frame here. To do all the kinematics math I need to define two different things: where is the joint that's connecting them relative to the parent, and where is the joint that's connecting them relative to the child. So I'll call this one P-joint (let me write it not right on top of everything else), P-joint, and I'll call this one C-joint. And so the location of the joint relative to any body is a
constant; the numbers are just hidden in the description format. Similarly, the joint relative to the parent, those are constants. But the transform that goes from the child joint frame to the parent joint frame is a function of the current joint angle. For rotary joints there's one function you would use to go from this to this as a function of the angle; for prismatic joints there's a different one; for helical joints there's a different one. That defines the last transform you need, and you just apply it recursively. No, thank you for asking, it's good. Okay, so when you want to call the kinematics engine in code, you can ask a MultibodyPlant: there are a bunch of methods on MultibodyPlant, you can get the kinematic velocities, you can do accelerations, you can do forces, all these other things. The one that is exactly what we're talking about today, that computes exactly that series of transforms for you, with proper optimizations behind the scenes so you can call it as much as you want, is EvalBodyPoseInWorld. Pretty clear: the code says I'm going to give you X_WB for a body B in world, and you'll see that notation, the monogram notation. In fact, it was working on Drake with the dynamics team that made me change the way I talked about kinematics; I adopted the notation when we tried to get rid of the bugs in my code, roughly. Okay, so if you just have a MultibodyPlant, plant.EvalBodyPoseInWorld will do that. The context is what holds q, so that call is a function of q; it's just this. If you want a different system to evaluate the poses of the multibody plant in the systems framework, you can just pull on the body poses output port; that has all the body poses available for you, pre-computed, so a downstream system can
just chew them up. I mean, there are a bunch of optimizations hidden in the code: if nobody ever asks for those body poses, it only computes the things it needs to compute. It's only when a downstream system asks for, let's say, the camera output that it would render the camera; if you never ask, it doesn't render. For body poses, it happens that almost everything you do with a plant is going to cause you to compute the kinematics, so you're probably not saving very much by not calling that one, but in general it only computes things when it needs to, and then it caches them, so you can call it many times. And again, the body poses output port just gives you a list of X_WB for all of the bodies in the plant. Okay, so that's just the mechanics of using it. All right, I'm going to start differential kinematics; we have time left on Tuesday to finish it, and maybe I'll set as a goal to make my answer to your question a little more complete, and we'll fill in the details then. So let's think about differential kinematics. What I want to do is somehow say: if I make a small change to my joint angles, what's the small change that results at the end effector? Let me use a variational sort of notation here: I want to say what's a delta in X_B as a function of a small change in q. And the math we need for that is just taking the gradient of whatever function we've already got: if I take the gradient of the forward kinematics, that gives me the differential relationship between a change in joint angles and a change in the end-effector pose. In every field, but certainly in robotics, that thing is called a Jacobian; in this case it's the kinematic Jacobian. So we would often write this as J_B, remembering that it's a function of q, times dq. That's the kinematic Jacobian. Now, your question about
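The kinematic Jacobian just defined can be checked numerically on the toy planar two-link arm (unit link lengths assumed): differentiate the forward kinematics by central finite differences, one joint at a time.

```python
import math

def fk(q):
    """Planar 2-link forward kinematics (unit links): end-effector (x, y)."""
    return [math.cos(q[0]) + math.cos(q[0] + q[1]),
            math.sin(q[0]) + math.sin(q[0] + q[1])]

def jacobian(q, eps=1e-6):
    """Kinematic Jacobian dp/dq by central finite differences."""
    J = [[0.0, 0.0], [0.0, 0.0]]
    for j in range(2):
        qp, qm = list(q), list(q)
        qp[j] += eps
        qm[j] -= eps
        fp, fm = fk(qp), fk(qm)
        for i in range(2):
            J[i][j] = (fp[i] - fm[i]) / (2 * eps)
    return J

# At q = (0, pi/2) the analytic Jacobian is [[-1, -1], [1, 0]].
J = jacobian([0.0, math.pi / 2])
```

In practice you would use the analytic Jacobian the library provides, but the finite-difference version is a handy sanity check, and it makes the "gradient of forward kinematics" definition concrete.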
invertibility was good. This is still going the good way: given a change in joint positions, I can tell you the change in pose with no problems. What you're going to see is that the inverse of this may not exist, because there could be multiple solutions; if there's a manifold of solutions that give me the same end effector, then the inverse operation could be ill-posed. Okay, the interesting question: we know how to represent q, and if q is a seven-by-one vector of joint angles, then for the joint velocities it's reasonable to just take the seven numbers and talk about the rotational velocity of each of my joints, in units of per second. We'll call that q-dot, or you'll see it written as v sometimes, the seven-by-one joint velocities. The interesting question is how we represent the left-hand side. We had a bunch of different ways to represent orientations; how are we going to represent their rates? I'll just give the answer, and then we'll have to continue next time. Remember we talked about three-by-four matrices as a way to represent X; we talked about a vector representation with x, y, z plus quaternions; there's axis-angle, there are all these different things. For the derivative, though, there's one true answer; we're back to everything being good again. There's a spatial velocity, and there really are no singularities with respect to it. It turns out that for the angular part just three numbers are enough: they represent everything, and they're useful for all of our computations. So we're going to call this a spatial velocity, six numbers: x-dot, y-dot, z-dot, and then three angular velocities. Always and forever we'll use spatial velocities. Spatial velocities have an algebra, right? We're going to
Define the same addition and multiplication operations on these; if you want to take a spatial velocity expressed in one frame and put it in another frame, we're going to grow our algebra to do that, and to accelerations and whatnot. Okay, so I'll pick up there next Tuesday. Good, yep. Good. So, I made the point last time that the simulation is more than the physics engine, right? When you have a whole diagram, one of the pieces of the diagram is the MultibodyPlant; there's also the scene graph. Yeah, it's the physics engine, yeah, that's fine. The only reason I want to make that distinction is because the controller also has a model of the robot in it; there are multiple places where you might want a model of the robot, and they all might have some state. So you have a context for the whole diagram, and the plant context is just the context for that piece: it's the time, state, and input of the MultibodyPlant. Yep, of the physical simulation of the robot; that will be contained in the context of the whole diagram, which holds the contexts of all the pieces of the diagram. It is, yep; it's actually just a pointer into the middle of a bigger context. Good, good. How's it going? I have a question in regards to, okay, yeah. Okay, so this class doesn't necessarily list any formal prerequisites, so I'm not bound to anything, and I saw that there's a grad version; yep; is that because the graduate version does not have the same components? That's exactly right, yeah. The recitations and the work that students do in the recitations add three more units. Now, the problem sets are mostly the same, with a few differences: like this week there's one more problem on the graduate version. It's not always going to be more, I mean, because they're both meant to be 12 units, but we are aiming for, you know, a little bit
more maturity, so you should be able to do the same work in 12 hours, and our survey questions sort of keep us honest about that; we can scale it back and forth. So yeah, it's designed, I mean, I think the biggest thing you lose is the super-project aspect of it and the Friday recitations: the communication staff talks you through the rhetorical argument of your project, how to set it up, how you're going to make it, what is the thing you're trying to prove or disprove. They'll help you with your project proposal, help you with the writing, and use it as an excuse to teach you how to communicate, and then help you make a really good project. Okay? The technical part of the project is the same in both. Okay, and then I guess, as a follow-up question to that, for the undergraduate one, is it still the same, like one or two students? Yep, yep, yep. And what kind of difference in expectations? It's not about quantity; we just expect a little more emphasis on the project. I try to say this carefully on the website, so read the words and make sure you're happy with those words, yeah. Okay, thank you, appreciate it. How's it going? I was wondering this morning, I'm not sure if you're going to explain this in future lectures, but you showed the link structure for the kinematic model, yeah, the kinematic structure, great. I'm just wondering if the dynamic structure is similar to that, and if that inhibits some models that you can use in Drake, for example if you had like two rotors at
[Robotic Manipulation, Fall 2022]

[Fall 2022 (642102), Lecture 19: Intuitive Physics, Part 1]
okay, so thank you guys for filling out the survey. I had a couple of different ways we could go with the next couple of lectures; one of them was to continue with more details about RL, but it seems like there was more interest in other areas, so I'm going to jump ahead a little bit and talk about the intuitive physics portion. We didn't even ask you about that one, because I think it's important enough that I wanted to include it regardless, and it does connect to RL: you could think of this as part of the model-based reinforcement learning pipeline. Let me transition from the RL and behavior cloning conversation into what we're going to talk about today. I've been saying over and over again that I like visuomotor policies, and so far the goal of the visuomotor policies is to control the state of the world and the robot. So far we've given you two major pipelines for finding those visuomotor policies: the first was behavior cloning, and the second was a very brief look at the big topic of reinforcement learning. Both of those can work; all of the approaches we'll talk about have strengths and weaknesses. Actually, there was a really interesting question at the end last time; I guess the person who asked isn't here yet today, but someone asked whether, if I had just run RL for longer on the box flip-up, it would have done something less ridiculous. I answered quickly about how RL can get stuck in local minima, and maybe, because it's stochastic enough, it could jump out and eventually find its way down. But I think there might have been a deeper point to that question, because it occurred to me that when people like David Silver talk about RL for Go, for instance, he gives the impression that there's a belief there that
the more you learn, the better you'll get at Go: you've got enough parameters in your network, you've got enough experience coming in, that it's just a question of the amount of compute you're willing to spend; it'll just get better and better the more you play. I don't think that's the regime we're in with the RL examples I've been showing you here. I can't say for sure; this is my intuition and my experience, not a proof. But I don't think we're in that regime with the robotics RL experiments we've been seeing in the market. I think we're more in the regime where you can bounce around but really get stuck in local minima, and one of the reasons might be that we're not in the dramatically over-parameterized regime with these policies, where we can just keep going down and down and down. Okay, but RL is doing a pretty good job, even without a model, of finding its way into at least a local minimum in the cost landscape, and if you run it long enough it might bounce around to a better local minimum, but it's not guaranteed to solve the global problem in any way. So why are we going to move on from this to something about learning models instead? The drawbacks of both of these approaches, I think, are largely about generalization. The experience we talked about in supervised learning was this amazing capability of transfer learning: you can learn on ImageNet and apply that to some different domain. A strong criticism of reinforcement learning and behavior cloning is that they tend to be very task-specific. If you have a particular task (you want to flip up the box in a particular way), you write your cost function and you find your policy. This is a strength and a weakness, actually: it learns exactly the policy it needs to accomplish that task, and it
doesn't learn other stuff. That gives it power, by ignoring all the things it doesn't need to learn, but it's also a limitation: if you then want to change the task a little bit, the naive recipe is to just start over. There are ways to try to make RL generalize more. You can talk about goal-conditioned policies, for instance, where, if you can find an efficient way to parameterize the objective function, the goal let's say, then you could basically throw that into your state space (it's not quite that, but imagine putting it into your state space) and learn a policy that is a function of the goal parameters as well as the state. There's also nice work on multi-task RL, which I think is a very powerful framework, and that field is still growing up. But this is going to be a constant struggle, because, like I say, it's a good thing about RL that it ignores everything not relevant to the current task at hand; that's one of its strengths. But compare that to, for instance, our kinematic trajectory optimization, where you give it a new task by changing the cost function on the fly, and you're immediately doing the new task. RL is not doing that; it's not immediately well suited to that. You do a lot of computation to get a good controller for one cost function, and then you have to start over. So the most sound way to address that is to learn something more than just the parameters of the policy; you want to learn something that generalizes from task to task. And the opposite extreme is that maybe what you should do is actually learn the dynamics. I've talked about how these things are hard because we don't know the dynamics of spreading peanut butter on toast, or the dynamics of buttoning my shirt, but neural networks are pretty good at learning things,
so why don't we try to learn the Dynamics of the of the world and then apply our our best control tools to those learned Dynamics okay so for the idea for today is learn try to learn the Dynamics of the world some versions of this will actually just try to learn the state representation in addition some of them will require you to give a state representation okay now some people call this model based reinforcement learning I'm not a huge fan of that particular word just because it's it's a I mean yeah yeah people do model free reinforcement learning even though they have a simulator which is a model in my book and they have they this one could just be called control but it's called Model based RL don't get hung up on the uh the naming but I guess if you want to if you want to call it model based RL you're welcome to Okay so you know let's let's talk today about learning the Dynamics of the world and what tools do we have to offer about that it's a super rich subject and actually even once you learn the Dynamics model based control of course is a huge subject and that's one that I spend basically all of the under actuated lectures talking about how to do model based control but let's think about you know today about learning the Dynamics and what kind of tools we can bring to bear on that and there's sort of a zoo of model parameterizations that we might try to learn right we talked about um a little bit about it but broadly speaking learning Dynamics is that is the topic of of system identification say AKA system identification or at least that's the old word for it and even though a lot I think there's a lot of things from system identification that we we want to remember I think there's also a lot of new tools for machine learning that have come and recently contributed you know new life to some of the old system ID questions and that's not just deep learning that's also finite sample results and and online optimization kind of results that are are very very 
powerful new approaches to system identification. In the world of system identification, or learning the dynamics, the first thing we have to do is pick a family of models that we're going to optimize over to try to explain the data. The standard setup is: I have some signal coming in, some system I'm trying to fit, and some signal coming out; these are my actions and my observations in this case. We've already talked about some of the forms this could take, because we used this setup even for learning policies. We said you could have static parameterizations, where y_n is just a function of u_n. We've talked about feedforward or ARX kinds of models, where the output is some function of the previous outputs and inputs, y_{n-1}, u_n, u_{n-1}, and so on. And we've talked about state-space models, where the model carries an internal state x. We've covered that before and I won't repeat it, but now the question is: given that we're going to choose one of these forms, whether static, feedforward, or state-space, I want to start digging into our choices for f, and for g if we have one. What parameterizations can we say strong things about? There are going to be pros and cons to each choice of f and g. There are lots of choices, and I think I can paint the picture and then we'll dig deep into one or two of them. You could choose f and g to be linear; that's a natural, powerful choice, really, because it affords so much analysis that we can understand a lot of things through the lens of linear dynamics. Even theoretical machine-learning folks have had a few years
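As a minimal illustration of fitting a linear dynamics model, here is a scalar example (the true parameters `a_true`, `b_true` and all the data are made up): simulate x[n+1] = a*x[n] + b*u[n], then recover (a, b) by solving the two-by-two least-squares normal equations by hand.

```python
import random

# Simulate a scalar linear system x[n+1] = a*x[n] + b*u[n] with assumed true parameters.
a_true, b_true = 0.9, 0.5
random.seed(0)
xs, us = [0.0], []
for _ in range(200):
    u = random.uniform(-1, 1)
    us.append(u)
    xs.append(a_true * xs[-1] + b_true * u)

# Least-squares fit of (a, b): accumulate the normal-equation sums...
Sxx = sum(x * x for x in xs[:-1])
Sxu = sum(x * u for x, u in zip(xs[:-1], us))
Suu = sum(u * u for u in us)
Sxy = sum(x * y for x, y in zip(xs[:-1], xs[1:]))
Suy = sum(u * y for u, y in zip(us, xs[1:]))

# ...and solve the 2x2 system by Cramer's rule.
det = Sxx * Suu - Sxu * Sxu
a_hat = (Sxy * Suu - Sxu * Suy) / det
b_hat = (Sxx * Suy - Sxu * Sxy) / det
```

With noiseless data the fit recovers the true parameters to machine precision; with noise or richer model classes (vector states, ARX lags, neural networks) the same identification recipe applies, just with a bigger regression.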
where they returned in force to linear dynamical systems, to understand finite-sample bounds and everything like that. The other case where we can really throw the math hammer is finite models, or tabular models; if you're thinking about stochastic versions of this, that would be learning the traditional MDP, the Markov decision process. Those two, I think, have limits on what they can represent, but they have huge value in what we can understand through those lenses. A lot of the lessons you learn in linear control are pretty specific to linear systems, but some of them generalize beautifully, and we should take those lessons and carry them with us, because it's much harder to see the same lessons in the more nonlinear setting.

Then of course we have neural nets, and in the world of neural nets there are all kinds of choices: the standard feedforward networks, convnets, and all the others; we've talked about recurrent networks like LSTMs; and what you'll see more and more in these model-learning cases is graph neural networks. Transformers are coming; they haven't been a huge focus yet, but they're coming. All of these are going to be relevant, and we'll give a few examples, maybe in the next lecture. But there are also different levels at which these try to represent the world: some operate directly on pixels; some try to operate on particles, assuming the world is made of particles or almost point clouds; some try to be more object-centric. Those are the broad strokes, and I'll go into a few examples of each.

Now, it's interesting to put these right next to each other, because linear and tabular models have limits on what they can represent, but they have huge, powerful back ends mathematically. Neural nets can represent anything, but we have less we can understand when they're not working well, fewer guarantees if you care about proving your robot's not going to do something bad, and they're also going to be harder to do control design for. There are many other nonlinear model parameterizations, Volterra series for instance, or you can learn polynomials; I put a few of them in the notes, but I won't dwell on them now. Neural nets have consumed people's attention on the nonlinear-model front, but the one I don't want to forget about, and want to spend some time on today, is the multibody equations: a particular type of nonlinear model that has the important structure of Lagrangian mechanics. It's far richer than the linear and tabular models in what it can represent.

And there's another important consideration. If you're doing system identification for the sake of system identification, then maybe the only metric is to predict y given u as well as possible; that's the standard system-identification metric. But if your goal is system identification in the service of building a controller and getting closed-loop performance, then that can change your requirements a little. You might not have to predict all of your observations perfectly; some of them are task-relevant and some are not. But also, if I learned a perfect predictive model of my system, in equations that are very hard to do control design with, then that's a problem too. So one of the interesting things here is: if I learn a model that describes the data and is linear or tabular, then control design is easy. If I learn a model that's a neural network, then depending on how big or complicated the network is, control might still be hard. The multibody equations are somewhere in the middle: there's a lot of structure in those equations, and if you can find multibody equations that describe the data well, then we have more structured, more powerful tools for control. And there are some things we know here that are so strong and so powerful that I want to make sure people remember them.

Okay, so let me start by digging into learning multibody equations, and I'll motivate it with throwing stuff. A bunch of people watched the TossingBot video; everybody in the class talked about the TossingBot paper in the first lectures. That's the video that went along with the paper: a great paper by Andy Zeng and company, picking up basically unknown objects and tossing them into a target location. They used a neural network to do that; well, they used a combination of a simple model and a learned residual neural network to capture the differences. A lot of people have seen that, so I won't play the whole video here; it's a nice long video, beautifully put together. But there was an original tossing robot that I don't think as many people have seen. This is actually work from MIT, from around '91. This is a WAM robot, a Barrett Whole-Arm Manipulator. Let me just show you what it does first. Notice the bin is about the size of the ball; it's a pretty small bin. You saw it do a little regrasp there, just to know where the ball is, and then: right in the bin, across the room. The mass of the ball was unknown, the size of the
ball was unknown as well; it was approximately what fits in the hand. But there was something even more remarkable: this wasn't learning in the modern sense, this was online system identification and adaptive control. Let's watch it one more time. That was too fast; it's throwing to a person, by the way, human-robot interaction. Okay, so watch what happens here: a quick regrasp, and then right here there's one backward motion and one forward motion. What you don't realize is that in that backward swing it's estimating the mass of the ball, and then it computes exactly what it needs to do to throw it into the target, wherever it is. Really good. They went on and did other versions of this too; they actually didn't publish the ball-throwing one. This is Jean-Jacques Slotine's work, and when I first came to the AI Lab (I only got here much later), the arm was still up on the ninth floor of NE43. At the time, the way they tracked this: they had cameras up here doing foveating vision. There's a bright red ball, or a bright white ball in the last one, and they get the cameras to lock onto the initial location of the ball and then track the white blob; this is before computer vision really worked. They tracked the white blob with the servoed cameras and could back out the 3D position of the center of the ball from that. (Good question; probably, yeah.) This one is catching paper airplanes, with a simple model of a paper airplane. The catching one doesn't involve the same parameter estimation; it did online tracking, and they have incredibly good control, which I'll mention at the end: they're using feedforward inverse-dynamics model cancellation, but it's also adaptive control, and they take into account the mass of the ball when executing their trajectory. I mean, this is pretty good for '91: catching airplanes out of the air, throwing balls across the room into targets exactly the size of the ball.

Okay, so let's think about how they learned the model of that, and how the rich early literature on learning the dynamics of objects in the world goes through the multibody equations. Let's think for a second about what you need to know about the ball in order to do a good throw. The dynamics of throwing are nice because they're simple, if we ignore aerodynamics. You can throw anything; you could throw a banana, which sounded like an arbitrary example, but it was the last thing I saw them throw in the TossingBot video when I just played it. You could throw a banana, a ball, whatever; no matter how complicated the object, it might flip around, but if you write the dynamics of the center of mass, it's trivial. In the plane, without drag, and with g taken positive, the center-of-mass dynamics are always: x_com'' = 0 and z_com'' = -g. And for rotation about the center of mass, conservation of angular momentum (with the moment of inertia taken about the center of mass) gives theta_com'' = 0: the object may be spinning, but it won't spin up or down as it flies through the air. So a couple of things we can notice immediately. Since the mass is nonzero, the flight-phase dynamics reduce to exactly those three equations, and in the air you don't have to do any work to estimate the parameters; the mass of the ball is irrelevant once you throw it. Similarly, if you were to throw a ball at me, there's no way I could estimate its mass just by watching its motion; this is classic physics, every ball takes the same arc. Remember that. So why then is it useful to estimate the mass of the ball if we're going to throw it? What matters very much is the location of the center of mass and the spatial velocity of the center of mass at the point of release. Everything follows from that: there are no accelerations you control after launch, but if you don't know where the center of mass is when you launch, you've got no hope of controlling where it goes. What matters is the moment of release ("moment" in the time sense; nothing to do with moment of inertia, a bad choice of words maybe). So I care about the position and spatial velocity at release. At the very least, when I start moving around, I need to know the center of mass of the ball, or banana, or whatever I'm going to throw, relative to my hand. That's the key parameter to estimate. And then, depending on how heavy the object is relative to your arm, you might also need to know the mass and other things just to execute the trajectory well enough. If the object's mass is insignificant relative to your arm, your robot controller can just ignore it, execute its trajectory, and expect to do well; but if you start to throw heavy objects, then suddenly the ability to track a trajectory with very high fidelity (and it takes high fidelity to throw into a bin all the way across the room) requires you to consider the mass in the hand, and that's where the extra terms from mass and inertia come in: more in the tracking than in the flight.
It's a good problem for lots of our applications, right? Amazon wants to move boxes fast, or your robot wants to play tiddlywinks, whatever; there are a lot of applications where you'd want to pick up an object and learn something about its inertial properties by interaction. So let's think about how we would learn those parameters by picking up an object and moving it around. It turns out that estimating the mass of an object in the hand, especially once it's in a good grasp (remember they opened and closed the gripper to make sure they had a really solid grasp, with the ball at the base of the gripper), is just a special case of the more general problem of estimating the mass and inertia of your robot as you move around.

So how do we do multibody parameter estimation? What does that look like, how do we set it up, what special structure can we exploit, and what guarantees do we think we can get? The multibody equations you've seen me write many times; they might have other terms like friction, damping, contact. These are nonlinear equations: you'll see sines and cosines inside them. So it sounds like we're going to have a tough nonlinear estimation problem, but it turns out the parameters we care to estimate enter these equations in very particular ways. There are parameters inside all of these terms, and they tend to be of two basic varieties: the kinematic parameters, lengths for instance, and the dynamic parameters, like the masses and the moments of inertia. You can actually estimate them jointly, and that's okay, but honestly, for lengths, just get a ruler; those you can probably measure fairly well. There are also more subtle problems in kinematic calibration, more about joint offsets, like an encoder-offset error. A lot of times people will separate out the kinematic parameter estimation: many of our robots actually wake up, drive against their joint limits, and come back, just to calibrate their joint encoders, and the links are normally pretty faithful to the CAD models. I'll talk about estimating everything jointly because we can, but know that I wouldn't actually recommend it; you should probably pull the kinematic parameters out and estimate them separately. You could, for instance, try to draw a straight line, and if it doesn't come out straight, you haven't got your kinematic parameters working very well. So what's the special structure? Hidden inside those terms are the masses and the inertias that make up the equations of motion, and the equations are nonlinear. (Question: what about friction? Yes, you can estimate some of the parameters of friction too. There are identifiability requirements: if I never see it slide, I'm not going to get its friction coefficient. But yes, the rules of the game apply to friction as well; it's just more subtle.)

So if you take away exactly one thing from this part, it's that there is a very important, particular way those parameters enter the equations. The same way we said almost every kinematic chain is representable with polynomials, in almost every multibody set of equations you'll come across, the parameters enter in a particular way: the worst nonlinearities, your sines and cosines, are separable from your parameters. The parameters group together, the sines and cosines and other nonlinearities group together, and you can pull the two apart and estimate just the parameters. It's a nicer problem than you would expect out of the box, and I'll show you in the simplest possible example, the one-link robot, which fits on the board; then I'll show a couple of more complicated examples. So take my pendulum: mass m, length l, gravity g pulling down, angle theta. The equations of motion, with a little damping b, are

m l^2 theta'' + b theta' + m g l sin(theta) = tau.

In this particular set of equations, which I know by heart, you can see that what I said is true: things are separable. (If you already know the length from separate kinematic parameter estimation, it gets even simpler, but even if you don't.) It's useful to write the equations in a slightly different form, separating out the data (the things we get from our measurements and sensors, with the sine applied to it) from the parameters, in vector form:

[theta''  theta'  sin(theta)] . [m l^2,  b,  m g l]^T = tau.

As I've written it here, tau is a scalar, but in general it can be a vector. (Yes, thank you: tau is also measured.) So we have data coming in, in general the trajectories of theta and tau, and we can take derivatives of theta; the data vector on the left can be computed a priori, with no parameters involved. And this parameter vector is famously known as the lumped parameters. Now you might say: that's not cool, the mass entered twice, once in m l^2 and once in m g l; I want to make sure I estimate the mass itself, and there's some coupling I haven't acknowledged. That's true, but it turns out these lumped parameters are exactly what you need to know to do prediction, and no more than that; we'll say it more carefully in a minute, but you can't know more than that without some other priors on those things.
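As a minimal numerical sketch of this separation (all parameter values hypothetical, numpy only): simulate the damped pendulum with known m, l, b, stack the data rows [theta'', theta', sin(theta)], and recover the lumped parameters [m l^2, b, m g l] by least squares.

```python
import numpy as np

# Hidden "true" physical parameters (hypothetical values).
m, l, b, g = 2.0, 0.7, 0.1, 9.81
alpha_true = np.array([m * l**2, b, m * g * l])   # the lumped parameters

# Simulate m*l^2*thetadd + b*thetad + m*g*l*sin(theta) = tau with
# semi-implicit Euler, driving it with a multi-frequency torque so the
# data is exciting (a single tiny sine would leave sin(theta) ~ theta,
# nearly collinear with thetadd).
dt, N = 1e-3, 20000
theta, thetad = 0.0, 0.0
rows, taus = [], []
for n in range(N):
    t = n * dt
    tau = 3.0 * np.sin(np.pi * t) + 2.0 * np.sin(3.1 * t)
    thetadd = (tau - b * thetad - m * g * l * np.sin(theta)) / (m * l**2)
    rows.append([thetadd, thetad, np.sin(theta)])   # one row of the data matrix
    taus.append(tau)
    thetad += dt * thetadd
    theta += dt * thetad

# The nonlinear dynamics are LINEAR in the lumped parameters: W @ alpha = tau.
W, tau_vec = np.array(rows), np.array(taus)
alpha_hat = np.linalg.lstsq(W, tau_vec, rcond=None)[0]
print(alpha_hat, "vs", alpha_true)
```

Note that alpha_hat recovers m l^2 and m g l, but m and l individually stay coupled inside them: that's exactly the "lumped" point, and it's all you need for prediction.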
And this is actually, I think, maybe the right way to think about the parameters: the way we happen to write them in the URDF doesn't line up perfectly with what you can estimate in the real world. Okay, so that's just a different way to write those equations. If I write them over and over across the course of an entire experiment, where I have lots of samples of the data and the torques over time, then I can write my parameter-estimation problem in terms of a data matrix W: each row of W is the data vector at one sample time, [theta''(t_n), theta'(t_n), sin(theta(t_n))], with the same parameter vector, and the stacked torques [tau_0, tau_1, ...]^T on the right-hand side. If you're willing to give me that this is an okay vector to try to estimate (and certainly, if I do estimate it, then given a new state of the robot I can predict the torques; given the correct value of that vector, you can evaluate the dynamics and do all the operations I want), then you can see from this that solving for the best lumped parameters is actually just a least-squares problem. So even though these are nonlinear equations, we can solve for the lumped parameters with least squares. That's a super important idea, and it's more general than pendulums: it works for the general multibody equations. You can take any equations of this form, when they're derived from multibody equations, and convert them this way. (I think screw joints might screw this one up too; like I said, just don't use screw joints in your robot, they break all of our math.) In general I can have this big data matrix times my parameter vector equal to the right-hand-side data, whatever the leftover terms are. And even when the equations are more complicated, take the double-pendulum equations here, where M, C, and tau are more involved and I use c_2 as shorthand for cos(theta_2), s for sine, and s_{1+2} for sin(theta_1 + theta_2); it still turns out that they separate. You get lots of terms that look like a parameter times sin(theta_1 + theta_2) or the like, but what's beautiful is that you never see, for instance, sin(m theta_1) or anything like that: none of my parameters ever sneaks inside my ugly nonlinearities. You never see a length pop up inside a sine. It's more structured than a generic grab bag of similarly nonlinear equations: all of the nonlinearities operate directly on the angles, and so it's separable.

But wait, there's more. The fact that I can solve for the lumped parameters with least squares means I can do even more. It turns out that not all parameters are identifiable, and this goes to Tom's question about friction and the like. There are some parameters you might have written in your URDF, which get turned into your multibody equations, that you won't be able to estimate. Let me give you an example. Take an iiwa bolted to a table. I wrote down in my URDF the inertia of link zero, but no matter how much I move the iiwa around, I've got no data that could estimate it: the inertia of the bottom link has zero effect on my dynamics, because it's been welded to the world. So I will not be able to estimate it; and on the flip side, I don't need to, because it has nothing to do with my dynamics. Go up one link: there's only one revolute joint before the first link, so you can move that link around, but you're only ever going to estimate its moment of inertia about the one axis that moves. There are two other axes whose moments of inertia will not be relevant to your dynamics; the inertia around those other axes is unidentifiable. And what's beautiful is that, because we're in the realm of linear algebra, we can see exactly how that manifests itself in these equations: the data matrix will drop rank. If some parameters are not identifiable, the data matrix will have a zero singular value corresponding to those parameters; and in fact, before you even start, you can do basic analysis to extract the identifiable lumped parameters, just with linear algebra. We're in the space of linear algebra here, and all the concepts of rank and kernel of the matrix apply, as direct operations on the data matrix we separated out. And again, having a low-rank data matrix does not mean you're a bad person; it just means there are some parameters that are irrelevant to the dynamics, so you will not be able to estimate them, but you also don't need to. There are some deep implications of that. Like I said, if you throw a ball at me, I'm not going to be able to know its mass before it gets to me; if I watch a ball fall and then go to pick it up, I can't know its mass from passive observation alone. In fact, as a general rule, one of the things you see from identifiability is that you're going to need force measurements, or to apply actions, in order to estimate any mass-like parameters.
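The rank-drop story can be made concrete in a few lines. In this sketch (hypothetical trajectory; the zero column is an illustrative stand-in for a parameter like the welded base link's inertia, whose regressor is identically zero), the SVD of the data matrix exposes both the drop in rank and the unidentifiable parameter direction.

```python
import numpy as np

# A rich multi-frequency motion for the one moving joint.
t = np.linspace(0.0, 10.0, 500)
theta = 1.2 * np.sin(2.0 * t) + 0.4 * np.sin(5.3 * t)
thetad = np.gradient(theta, t)
thetadd = np.gradient(thetad, t)

# Data matrix for the pendulum's lumped parameters, plus one extra column
# for a parameter that never affects the dynamics (e.g. the inertia of a
# link welded to the world): its regressor is identically zero.
W = np.column_stack([thetadd, thetad, np.sin(theta), np.zeros_like(t)])

sv = np.linalg.svd(W, compute_uv=False)
print("rank:", np.linalg.matrix_rank(W))   # 3, not 4
print("smallest singular value:", sv[-1])  # ~0

# The right singular vector paired with the zero singular value points
# along the unidentifiable parameter direction.
_, _, Vt = np.linalg.svd(W)
print("unidentifiable direction:", Vt[-1])  # ~[0, 0, 0, +/-1]
```

Nothing went wrong here: the low rank is the linear algebra telling you, before you ever fit, which parameter directions the data cannot and need not pin down.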
I could watch a video of a walking robot, and I'd have no concept, from just watching, of whether it's a 40-foot-tall, car-crushing, fire-breathing massive robot or a little walking toy, unless there's something else in the background that gives me context and I use common sense. From the motions of the robot, the joint angles, I won't be able to tell. So I guess robots probably can't learn everything by watching YouTube; some of us wanted that to be true, but you're not going to learn inertia from watching YouTube. It's not identifiable. At some point embodiment matters: you have to act on the world to learn everything about it.

Riffing on that just a little more: in all learning applications there's a question of data generation, of exploration, of experiment design. The robot we watched in the video just went like this, and in that particular case that was enough to estimate the one parameter it needed to throw the ball. But in general, a single wiggle won't be enough to estimate the parameters that matter for your iiwa, if you want all of them, and there's a nice problem of experiment design. Let's say your goal is to design a trajectory for the robot to follow that will excite the parameters so that the system-identification problem goes well. Because we're in the land of linear algebra, there's a natural objective: look at the condition number of the data matrix. You'd like the parameters to be as far from singularity as possible; you don't expect to get all of them, since some are unidentifiable, but for the ones that are identifiable, you'd like to move their singular values away from zero. So there's a natural trajectory-optimization formulation; it's a little ugly to implement, but people do it. You design up to your torque limits, with some heuristic costs on action so you don't drive your motors crazy, and then you put a cost on the condition number of the data matrix, to design informative trajectories for parameter estimation.

Let's pause for a second. I motivated this conversation by saying there will be cases where you want to train a neural network to do a lot of these tasks; but the things you're training the neural network on are multibody systems, in most cases in this class. Even if you want the representational power of the neural net, a lot of these lessons, about what's identifiable, how you excite the parameters, what you can and can't expect to estimate, still apply. In particular, the thing I said is linear in the lumped parameters is this equation: the inverse dynamics of the robot, not the forward dynamics. If I take the mass-matrix inverse to solve for the forward dynamics, things get worse: the parameters become rational functions, I don't get these beautiful polynomials, and I get mixtures of sines and cosines in the denominator. But basically every machine-learning project that's learning dynamics is learning the forward dynamics; I think there's room to explore that more. There are a lot more lessons, and I'll try to point them out as we hit them. But this idea of doing careful experiment design, with a metric for how well you can identify the parameters, is a luxury we don't have in the general case, and it's a beautiful thing to have here. Okay, so let me tell you how you'd actually do this.
In the particular case of the pendulum, or maybe the double pendulum, the lumped parameters come in in a way I can extract with a little scratching on paper, and I can write a program to estimate them by hand. You don't want to do that for the iiwa: the equations don't fit on a page, they're big and ugly. But they have a structured form, so there are specific numerical recipes that extract the lumped parameters directly; the same way you'd write a recursive algorithm to generate the equations of motion, you can write recursive algorithms to extract the lumped parameters. They're a little bespoke, and I think not many people have implemented them. You can get them in Drake using the symbolic toolbox. I'm actually very proud of this; a lot of people think I'm crazy for emphasizing symbolic as much as we did, but here's why it's powerful. I'll just show you in code. I'll do a very simple example: I take the double pendulum, load it into a MultibodyPlant, and make symbolic variables for q, v, and v-dot. The parameters I want to estimate are the mass and the length (I actually put a TODO there to add the damping). In order to put those into the manipulator equations, I have to compute the spatial inertia in terms of the mass, just to relate the mass and length to the operations; I do a little work with spatial algebra, and then I basically ask the symbolic engine to print the equations of motion, and it gives me nice LaTeX equations of motion through the dynamics engine. You probably understand how it's built behind the scenes: in Drake, almost all of our systems support doubles, they support autodiff, and they support symbolic; and any system that opts into supporting symbolic, you can just ask for its symbolic equations and it will generate them like this.

So let's do a simple example. The cart-pole (it happens I did it on the cart-pole instead of the double pendulum) is another two-link system, a common one in control; it's in the OpenAI Gym. Let's extract the lumped parameters for the cart-pole and do some basic estimation, and I'll show you how the mechanics of that works. I know what the equations look like, but I'm not going to use them directly; I'm going to use the multibody engine to get them. The first thing I did was generate a bunch of data: I loaded the cart-pole URDF into a MultibodyPlant and made an input trajectory. I didn't do the optimal experiment design here, though I could have; I did something much simpler, a sine wave, because on the cart-pole there's kind of only one thing you can do: you've got one motor and one passive joint. So I just shook the motor back and forth for a while, making sure it was sufficiently, parametrically exciting. If I'd chosen the amplitude too large, things might have gone crazy; too small, and it wouldn't excite things well and the conditioning would suffer. I rolled it out four times, resetting to zero each time, with a slightly different sine-wave period on each sweep, and I was happy to see that x moved more than a little but not too much (zero to four meters is a good amount of data), theta stayed in a reasonable regime, and I checked that the condition number was reasonable.

Now I estimate the lumped parameters: I do the symbolic extraction to get them, and then fit them with least squares. I pretend I don't know about that first multibody plant, load the model again, make the symbolic version of it, make my data variables; and the parameters I'm going to fit are the mass of the cart, the mass of the pole, and the length of the pole (just because I can, even though I should have just gotten a ruler). Then I fix the input to the data, evaluate the dynamics, and generate my big data matrix, which I called W here. Then I just call numpy.linalg.lstsq to extract the lumped parameters. Now I have my fitted parameter vector alpha, and my true parameters alpha, which came from the URDF, and the question is how well I did. (I don't know why it's random; I've noted that it's random and I don't remember why.) My symbolic acceleration residuals look like this, and for the lumped parameters: the URDF says the joint mass of the cart-pole should be 11, and I got 10.91; my mass-times-length should be 0.5, and I got 0.56. This is a case where more data does make it better; if I let it run longer I'd do better, and that was just me being a little lazy. So that's a really powerful tool chain, and I think people should know about it. If you want to estimate the parameters of a grasped object, you need some model of how it's connected to the hand, and some measurement of that connection.
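A plain-numpy re-creation of that cart-pole experiment (Drake's symbolic lumped-parameter extraction is replaced here by hand-derived regressors, and the parameter values are hypothetical, chosen so the lumped parameters match the 11 and 0.5 quoted above):

```python
import numpy as np

# Hypothetical "true" values, chosen so the lumped parameters match the
# numbers quoted in lecture: mc+mp = 11, mp*l = 0.5 (and mp*l^2 = 0.25).
mc, mp, l, g = 10.0, 1.0, 0.5, 9.81
alpha_true = np.array([mc + mp, mp * l, mp * l**2])

dt, N = 1e-3, 20000
x = th = xd = thd = 0.0
rows, rhs = [], []
for n in range(N):
    t = n * dt
    f = 20.0 * np.sin(1.5 * t) + 10.0 * np.sin(4.3 * t)  # exciting cart force
    s, c = np.sin(th), np.cos(th)
    # Manipulator equations M(q) vdot = b(q, v, f) for the cart-pole
    # (pole hanging, point mass mp at distance l).
    M = np.array([[mc + mp, mp * l * c],
                  [mp * l * c, mp * l**2]])
    bvec = np.array([f + mp * l * thd**2 * s, -mp * g * l * s])
    xdd, thdd = np.linalg.solve(M, bvec)
    # The same two equations, rewritten linear in the lumped parameters:
    #   (mc+mp)*xdd + (mp*l)*(thdd*c - thd^2*s)   = f
    #   (mp*l)*(xdd*c + g*s) + (mp*l^2)*thdd      = 0
    rows.append([xdd, thdd * c - thd**2 * s, 0.0]); rhs.append(f)
    rows.append([0.0, xdd * c + g * s, thdd]);      rhs.append(0.0)
    xd += dt * xdd; thd += dt * thdd
    x += dt * xd;   th += dt * thd

alpha_hat = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)[0]
print(alpha_hat)  # ~[11, 0.5, 0.25]
```

With noiseless simulated data the recovery is essentially exact; the lecture's small residuals (10.91 vs 11, 0.56 vs 0.5) come from the shorter, noisier experiment.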
For example, if you said the object was connected to the hand by a pin joint, then you would need the angle theta of that pin joint to feed into this algorithm. The standard thing is to make a grasp on the object and assume it's welded to the hand; then you estimate the inertia of the hand as well as the object, and you can do that with just the joint torque sensors — better if you have a force sensor in the wrist. The nice thing is that symbolic computations get expensive as you push lots of symbols through lots of nonlinear equations, but you only have to make symbolic the parameters you're trying to estimate, and the rest it treats as doubles. So you make the unknown inertial parameters of the system symbolic, you run your experiments, you back it out, and you have an estimate of the inertia. There's actually an even more specialized, dialed-in version if it's exactly one inertia you want at the end effector, but the rest of the recipe is just a special case of this. Any questions on that?
It's a great question — let me repeat it. I talked about cases where there are unidentifiable parameters; what happens in the case where — I think the case you're asking about is really where there's not enough data to excite the system. The opposite of losing rank would be not having enough data in your matrix to identify all the parameters, and in that case you would expect that getting more data would bring the rank up and let you identify them. The other extreme you're asking about is being over-constrained. You would expect that if your lumped parameters came from the equations of motion, then there is a solution that satisfies all of them. So having insufficient data — insufficient rank — is one thing, but you wouldn't expect to be in an over-constrained situation if those lumped
parameters came out of the equations of motion. I think that would be the natural other side — over-constrained instead of under-constrained equations. Was that your question? Yes. I see — and if I didn't tell my equations about that, what would it do? So the question is: if I didn't declare in my equations the fact that the object is sliding in my hand, what would happen? It's going to find the least-squares approximation given the data, for whatever model you fit. If you said it's rigid and you have accelerations that are inconsistent with that, it'll find the best explanation given the model class. That's one of the beautiful things about least squares: you actually expect to get the best model in the class. But of course, if your model class is wrong, it can't overcome that. So I think it does relatively well — it doesn't die catastrophically in that sense; it should find the best in class. Other questions like that? Yes. Good — so there's a question of estimating the number of degrees of freedom in the object you're trying to manipulate, and that's an extra interesting question. There are ways you could do it through this lens: you could fit the one-dimensional version, the two-dimensional version, the three-dimensional version — more degrees of freedom will only explain the data better — but, same as with proper orthogonal decomposition or PCA or anything like that, you'd probably expect the fit to level off once you've found the appropriate number of degrees of freedom. That's right, good point. And in that case — so far we've assumed that you have the q's. You're right, that would be the sticking point, because we wouldn't have q's in that setting. People do a lot of work on topology estimation and the like first; it's typically a pre-processing step, a separate optimization.
Okay, so there are a lot of other hidden lessons here. There is actually one thing: when I was talking about over-parameterization, I kind of caught myself. There is one sense in which we are over-parameterizing with our URDFs, which is that if we specify the mass, the center of mass, and the six numbers of the inertia matrix all separately, then we have over-specified the number of free parameters. And it is important, if you want to get a dynamically consistent inertia matrix out, that you actually put in a few additional constraints — they can be written as convex constraints; a pretty tight approximation of what it means to be a valid inertia matrix can be written as a convex constraint. So yes, our URDFs are not a minimal representation of the parameters. In fact, the one case where you might want to go back and separate mass from length from whatever else is if you want to write your results back into a URDF to share with your friends; then you would have to solve that last piece of the problem. It shouldn't matter which values you pick in terms of the simulation, and if you had an initial guess, you could solve a nonlinear problem — "give me the parameters closest to the initial guess, in a least-squares sense, that satisfy exactly the lumped-parameter values" — that would be a standard thing to do. I looked for a while at whether you could do that convexly,
and I don't think you can — only in very simple cases. Okay, so just to finish the story for the throwing: it turns out that because this is least squares, you can actually do it with recursive least squares, online, as you operate, and that's what's happening in the Slotine throwing example. He's estimating the parameters of the ball as the robot is operating; there's a desired trajectory, but the execution of that trajectory uses a computed-torque inverse-dynamics controller that has the adaptive parameters in it, so it's refining its execution of the trajectory as it throws. It really does estimate the mass of the ball and then throw better because of that estimate. Awesome, right? And remember, in the inverse-dynamics setting we talked about how the tracking error could go to zero if you put the feedforward term in. The classic, famous result by Jean-Jacques Slotine is that you can still get that in the parameter-estimation regime: your tracking error can still go to zero even if you have to estimate some of your parameters online, because of this least-squares result. That's a very strong result. It does require your system to be fully actuated, and it requires a controller that can accomplish the task stably without any parameter knowledge — and then basically it just gets better and better as the parameters dial in. But it's a famous classic result: the controller is a closed-form function of the parameters, and the parameters evolve along with the state of the robot. This is an adaptive-controller architecture: you have your dynamics for q, but you also have dynamics for your parameters, which are the recursive-least-squares estimator. There are a couple of cleverer variants where you estimate a subset of the parameters in order, but the simplest way to think about it is a recursive-least-squares update to the parameters, plus a controller that uses those parameters to execute the trajectory.
In that line of work — sorry, you guys have got me talking, which is great — they also did residual models. The residual models back in the day were radial basis functions; they also did a lot of wavelets. I'm sure there was a neural-network version, but not a deep-neural-network version. And they worked incredibly well — that's how they fit the airplane, I believe, with a radial-basis-function network. So residual models can be added on top of this, and in fact a radial basis network with fixed means is still linear in the parameters, so it fits beautifully into this framework. Compare that to TossingBot: TossingBot is solving a harder version of the problem, because it is worrying about unknown centers of mass and rotations more complicated than what Slotine handled, and potentially about aerodynamics — the paper talks about aerodynamics; there's at least one ping-pong ball that might have been subject to it. So it is solving a harder version of the problem. If I had one wish for that paper, it would be a comparison against the fully adaptive controller, because there's an intermediate result that would have been really nice to have: how much does the neural-network residual learn beyond the parameter-adaptive controller?
One last nugget — let me write it on the board since I said it in that conversation. An interesting lesson is that all the nice properties are for the inverse dynamics; the forward dynamics is less pretty. And one other thing: the algorithms that people tend to implement are not quite what I said.
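The recursive-least-squares update mentioned above can be written in a few lines. This is a generic textbook RLS sketch, not Slotine's controller: the `rls_update` helper, the random regressors, the noise level, and the demo parameters (reusing the cart-pole's lumped values 11 and 0.5 from earlier) are all hypothetical.

```python
import numpy as np

def rls_update(theta, P, x, y, lam=1.0):
    """One recursive-least-squares step for y ~ x @ theta.

    theta: current parameter estimate; P: covariance-like matrix;
    x: regressor row; y: scalar measurement; lam: forgetting factor.
    """
    x = np.asarray(x, dtype=float)
    Px = P @ x
    k = Px / (lam + x @ Px)                # gain vector
    theta = theta + k * (y - x @ theta)    # correct by the prediction error
    P = (P - np.outer(k, Px)) / lam        # shrink the covariance
    return theta, P

# Hypothetical demo: recover alpha = [11.0, 0.5] one sample at a time,
# the way an adaptive controller refines its parameters while operating.
rng = np.random.default_rng(1)
alpha_true = np.array([11.0, 0.5])
theta, P = np.zeros(2), 1e3 * np.eye(2)    # vague prior: large initial P
for _ in range(500):
    x = rng.uniform(-2, 2, 2)              # regressor (lumped-parameter features)
    y = x @ alpha_true + 0.01 * rng.standard_normal()
    theta, P = rls_update(theta, P, x, y)
print(theta)   # converges toward [11.0, 0.5]
```

With the forgetting factor `lam` below 1, old data is discounted, which is what lets an online estimator track slowly drifting parameters.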
There's another nice insight. We've said so far that q and v are relatively clean data — it's okay to use those in your controller — but be a little careful sending your raw acceleration measurements, v-dot, back into the controller; those are noisier. There's a version of the same parameter-estimation story where you reduce the sensitivity to the accelerations by writing the error in terms of power instead of in terms of torque, which lets you average out some of the acceleration noise over time. So there are power formulations, and even energy formulations, that can decrease sensitivity. I think that stuff is very powerful and maybe under-appreciated.
I won't do all of linear system identification in five minutes, but there's at least one lesson from linear system ID that is immediately applicable here, so I'll say that one idea and we'll wrap up for today. There are loads of insights from linear system identification, but one of them is this — people have different names for it — the distinction between equation error and simulation error. Let's pretend I have state observations, just to keep it simple. There are two common ways to write the objective of system identification. Let me be careful with notation: when I use the subscript n, that's data coming in a priori, whereas the hatted quantity is a simulation of my model; I'll write them both and I think it'll be clear. The equation error is the one-step version: minimize over the parameters alpha the one-step prediction error,
min over alpha of the sum over n of |x[n+1] - f_alpha(x[n])|^2,
which says: at every data point, reset the simulator to the current data point, take one step, and compare it to the data at the next step. That's a different objective from the simulation error: put the simulator in the same initial condition as the data, simulate forward using the parameters alpha, and compare the whole rollout —
x_hat[n+1] = f_alpha(x_hat[n]), x_hat[0] = x[0], minimizing the sum over n of |x[n] - x_hat[n]|^2
— so that even terms far in the future, the long-term rollouts, count. The question is: do I reset at every time step, or do I roll out the dynamics? The multibody parameter estimation we just did was the version where we reset to the data at every step and implemented the dynamics for one step, and that's for a fundamental reason: even in linear system identification, this tends to be the easier objective — it's convex in the multibody case, while the multi-step one is non-convex. But the rollout objective tends to be the better one, for an important reason: you can find models that minimize the equation error yet have unbounded simulation error. A simple way to think about it: suppose the true model is near the boundary of being stable. Your true data is stable — it decays toward zero — but because you didn't have perfect measurements, your least-squares fit comes up with an unstable model, and its rollout diverges. This happens even in the linear setting. It can't happen if you minimize the simulation error, but the equation-error model is susceptible to it. So even in the multibody setting, we tend to use least squares and do our best with that, but oftentimes we do a cleanup pass — it looks like a trajectory-optimization problem, finding the parameters alpha that fit over an entire rollout. We will often fine-tune alpha at the end, after the initial least-squares solution, just to minimize the long-term error. In the linear-system-identification world there are strong approaches to keeping this problem well-behaved, and there are lots of lessons there too, but I won't jam them in at the end here. I do want you to understand that there is a difference between the one-step and the long-term objectives: the long-term one is the one we really want; the one-step one is the one we often use, because it's more convenient. Okay — to be continued. Happy Thanksgiving!
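The equation-error versus simulation-error distinction above can be made concrete on a scalar linear system x[n+1] = a*x[n]. This is a minimal sketch — the particular system, noise level, and data length are assumptions for illustration. The equation error resets to the data at every step and is least squares in a (convex, with a closed-form minimizer), while the simulation error scores an entire rollout from the initial condition.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic data from a stable scalar system x[n+1] = a_true*x[n] + noise.
a_true, N = 0.9, 50
x = np.empty(N)
x[0] = 1.0
for n in range(N - 1):
    x[n + 1] = a_true * x[n] + 0.01 * rng.standard_normal()

def equation_error(a):
    # One-step objective: "reset the simulator" to each data point.
    return np.sum((x[1:] - a * x[:-1]) ** 2)

def simulation_error(a):
    # Rollout objective: simulate from x[0] with parameter a, score the trace.
    sim = x[0] * a ** np.arange(N)
    return np.sum((x - sim) ** 2)

# Equation error is least squares in a -- convex, closed form.
a_eq = np.dot(x[:-1], x[1:]) / np.dot(x[:-1], x[:-1])
print(a_eq, equation_error(a_eq), simulation_error(a_eq))
```

The danger the lecture describes shows up when `a_true` sits near 1: measurement noise can push the closed-form `a_eq` above 1, giving a divergent rollout even though the equation error is minimized; the simulation-error objective cannot be fooled that way, but it is non-convex in general.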
Robotic Manipulation — Fall 2022
Lecture 8: Simulation Basics
Testing, testing... my shortcuts don't work anymore — they were working, and now they've stopped; they changed them, or they're just different. It's funny — Tomás Lozano-Pérez talked about how, when he was writing his thesis, Richard Stallman was writing Emacs, and they had a time-shared computer: you'd log in for time, he'd get two hours to work on his thesis, and overnight Stallman would come in and change all the key bindings. Every day the key bindings were different — very hard to write a thesis, he said. ... Seconds into lecture and I become chalk-colored; it's terrible. Part of me, as I was prepping for this, thinks there are some simple simulation questions we could ask that are not in the problem set — maybe I'll write some tonight; if I have the energy to come up with something that gets vetted tonight, we could consider it for this week, otherwise I'll just do it in a future year. It does seem like we're going to have lots of questions about Thursday, and lots of cool stuff today. Actually, there's an example I'm going to use in the first part here that could have been a question instead, but we're just going to do it in lecture. People do really grab the slides — it's good, I see them on a handful of screens.
Let's get started. We're starting the next chunk of lectures. We just finished three lectures roughly about point clouds and geometric perception, so let me remind you of the big picture — where we've been and where we're going next. We started with a lecture about hardware basics, just to get you up to speed on the mechanics of simulating all the pieces, including the control stack. Then we got our robot moving with some basic kinematics and Jacobian-based control, and we took a single object and moved it from point A to point B — the simplest version of pick-and-place. Then we complemented that with a handful of lectures on the simplest form of geometric perception: if we need to find that object with the cameras, we now have some basic tools to do it. In the spirit of spiraling out and increasing complexity, what we're going to do next is bring that pipeline into more complex scenes. Where before we had one object, and it was a known object, in the next three lectures I'd like to think about a bunch of objects — a whole bin of cluttered objects — and I'm going to reduce the assumption that we already knew everything about the object when we started. Before, we had a perfect mesh model, or a point-cloud model, that we were going to find; now we want techniques that don't make that strong assumption. In this first pass at increasing complexity, we'll still mostly just pick things up and set them down; the rounds after that will bring deep perception, and then more contact-rich and forceful manipulation. So we'll keep increasing, but the jump for this section is more complex scenes with many, diverse objects.
This is roughly what we're going to build in the next three lectures. It was a project at TRI — other groups have certainly done similar things — motivated partly by wanting to do deep learning for perception and needing to generate a lot of training data for our deep-learning systems. We wanted to have relevant images of bins with
relevant objects in the real world. So we set up a system that would basically move objects back and forth all day long; we would occasionally go dump new objects in and take other objects out, and this thing would just do its thing. The pipeline behind it was interesting in and of itself, because building a robust system that can pick up all the objects — actually getting every one of them out, even strange objects that got put in, up to a level of performance where it would almost always move all of them — was a pretty interesting undertaking, and that's kind of what we'll pursue. We're not going to do the handle-every-corner-case version, but we'll get the nominal situation working well. It was interesting: if you walked by this robot operating all day, every once in a while there'd be something on the floor. At some point someone threw in a rubber duck that looked like Spock from Star Trek, and for some reason the robot always threw Spock — with high probability, if you walked past after a couple of days, Spock was on the floor. It didn't like him. Okay, but this is what we're going to build.
That raises lots of interesting questions, and we'll do it in three parts, but the first part is: how do you generate those richer simulations? How do you generate random kitchens, or random distributions over what gets thrown into the kitchen sink or into a bin? There's a sophisticated answer to that — trying to calibrate probability distributions over environments — which we're not going to do in the first pass; we'll do a simpler version first, but it's a super-rich question. It's a big question generally: if I wanted to make a safety case for autonomous cars, how would I write down a distribution over all the possible environments the car could operate in — the probability of a pedestrian approaching in a certain way? It's slightly easier with inanimate objects, but still very hard to get the distributions over possible kitchens.
So what we're going to do instead is basically drop things out of the sky. It turns out that a fairly reasonable way to make cluttered scenes is to just dump a bunch of objects in, and we have a really good simulator — which we're going to try to appreciate the details of today, because this is going to demand more from it. For reasons we'll understand by the end, it's hard to start the objects already in the cluttered configuration; it's relatively easier to start them up in the air, separated so they're not in collision, drop them, and rely on an optimized collision engine that can simulate forward with high reliability. Then you get these interesting cluttered scenes — interesting distributions. I did this with many copies of the red brick, but exactly the same code works for more interesting objects. I don't know if I've explicitly said this yet, but the mustard bottle, and even the red brick, are from a famous data set — the YCB data set. It was a project where research groups from Yale, CMU, and Berkeley (did I say Cal? Sorry — CMU; thank you, that was Sid) got together and said, let's make a common data set for robotics. They picked a number of SKUs from Amazon, told people how to buy the same objects — they would even ship you some — and gave out 3D mesh models with super-high-fidelity texture maps. Well, maybe not fidelity — high megabyte count; that's why they take so long to load in Meshcat, these are enormous meshes. It's been a very valuable data set, and you still see lots of manipulation research today that uses it. You can do exactly the same thing: load those objects into the simulator, drop them from the sky, and get interesting distributions of YCB objects. I love this because it looks like the central limit theorem, but with spam cans and mustard bottles and all kinds of bizarre objects — something about that amuses me.
Generating these scenes will let us drop a bunch of objects in the bin, take a picture from above, and make ground-truth labels for a deep-learning perception system, for next week whenever we get to it. But the demands on simulating this well are increasing — increasing the demands on our physics engine — so I think this is a good time to stop and look at what's happening inside the physics engine. It's going to dust off some of your 18.03 skills and feel a little like mechanics, so those of you who like mechanics will get a refresher and appreciate some of the details; hopefully there are levels of detail here for different people in the class.
I've said before that simulation for manipulation has proven harder than simulation for other types of robotics. Quadrotors flying around in open air certainly don't have such complex physical interactions — you can get ground effect and start doing serious aerodynamic modeling, but vehicles have tended to be simpler to simulate. Walking robots are actually not so bad either; for many years we've trusted those simulations. But it took a long time for us to believe we could simulate the complexity of manipulation. Why is that? It seems like a walking robot — even one doing a backflip — should be just as hard to simulate as manipulation. We're going to talk through that today, and there are a few major points I want to make. Simulating contact for manipulation is particularly challenging; "simulation" means many things, of course, but I'm talking about the physics-engine piece today. The first point is that we end up with stiff differential equations. How stiff is partly under your control, and even just as a user of these simulators, understanding where stiffness comes from and how to deal with it will help you use the tools better and know what to do when things go wrong. So let me take a few minutes to explain what I mean by stiff differential equations and how they manifest. I don't need a big complicated walking robot or manipulation system for this — I can think about stiffness in a simple spring-damper system. This is really your 18.03-type system: I've got a mass, no friction to speak of, a spring on it, and a damper — drawn the way we normally draw it — and this is its x position, with x = 0 here. So the equations of motion of this — actually, let me call it not x, let me call it q,
since we're going to distinguish between x and q, and we've been calling positions q. So the equation of motion is
m q-double-dot + b q-dot + k q = 0,
the classical damped oscillator, where m is the mass (the mass matrix, in general), b is the damping, and k is the spring stiffness. In my other robotics course, Underactuated Robotics, we talk a lot about stability — what it means and how to generalize it to nonlinear systems — but even without that full introduction, we can talk about the basic stability of this system. Stability in this very simple setting is just asking: is the limit of q(t), as t goes to infinity, equal to zero? If I start with some initial position and velocity, will it come to rest, or do something different? The physical intuition for the damper is that it's pulling out energy — it's friction. The spring wants the system to oscillate, but the damper slows it down, and we want to make sure we can simulate that accurately even for these simple equations. For the continuous-time system, the answer to the stability question is easy. You could ask how the stability changes with m, b, and k, and the answer is: if m > 0, b > 0, and k > 0 — as long as all the numbers are positive, the normal physically intuitive case — then yes, the system decays. I put strictly greater than zero for the damping so that it actually decays to zero rather than oscillating forever, and because I don't want the mass to be zero. If you remember your differential equations, I can obtain this with confidence by taking the eigenvalues of the system. Writing it in state-space form with x = [q, q-dot]:
x-dot = [q-dot, (1/m)(-b q-dot - k q)] = A x, where A = [[0, 1], [-k/m, -b/m]].
Is that familiar? Give me a thumbs up or thumbs down... it's been a while for some of you, maybe. This is the differential-equation way to think about the system — for this example you don't need to derive it from scratch — but for those of you who remember, the eigenvalues of this matrix determine the stability of the system.
Okay, so what's interesting now is what happens when I try to integrate this on a computer. You could take an entire course on numerical methods; I don't need all of that — I just want to think about numerical integration in its simplest form, to make a few points. This is a continuous-time equation, and I want to integrate it forward with an algorithm. The simplest version is a discrete-time approximation: advance the simulator by taking the current state and incrementing it by h times the dynamics,
x[n+1] = x[n] + h f(x[n]),
where f is the x-dot above and h is what I'll typically use to denote the time step, with h > 0. For the linear analysis, I can write this as x[n+1] = (I + hA) x[n]. This gives me a different, discrete-time linear dynamical system, and we can ask about its stability or instability — asking whether x[n] goes to zero as n goes to infinity.
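The continuous-time stability test just described — all eigenvalues of A in the left half plane — is one line of NumPy. A small sketch using the A matrix from the board, with the parameter values used in the lecture demo (m = 1, b = 1, k = 2):

```python
import numpy as np

# Mass-spring-damper in state-space form, x = [q, qdot]:
#   xdot = A x,  A = [[0, 1], [-k/m, -b/m]]
m, b, k = 1.0, 1.0, 2.0
A = np.array([[0.0, 1.0],
              [-k / m, -b / m]])

eigs = np.linalg.eigvals(A)
print(eigs)                      # -0.5 +/- 1.32j: a decaying oscillation
stable = np.all(eigs.real < 0)   # continuous-time stability test
print(stable)                    # True whenever m, b, k > 0
```

The nonzero imaginary parts are what produce the spirals in the phase portrait; the negative real parts are what make those spirals converge to the origin.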
from analysis are about you know if I choose different H's for instance how accurately can I simulate the true solution from this differential equation these are the fundamental questions of numerical analysis okay and the answer here is much more subtle than this right and I think I actually have the eigenvalues and here but they're they're messier than I wanted so um so I'll just do it numerically I'll do an example numerically okay one of the great things about linear dynamical systems is I can integrate them perfectly in closed form so I can I can give the solution to this with no artificial artifacts from numerical integration and then I can do the integration version of this and ask how similar they are by the way this is a particular type of integration this is called the Euler integration you've probably heard it it's even forward Euler if you want to get um more into the details Okay so I'm worried that this is not going to show up great on the monitor because webgl doesn't accept the line with command sort of annoying um but let's try oh I could try Safari I wonder Safari's whipped yellow supports okay nope still doesn't it's like right in the webgl docs it says most browsers don't support line width so yeah uh okay that's annoying but we'll deal this is the phase portrait let me let me draw it first on the board just to make sure I you know what we're seeing here I'm going to go in and add like axes and other matte plotlet like things into mesh cat but I haven't done it yet okay so the plot I want to make here to understand the integration of this Mass spring damper system is a phase plot phase portrait portrait which is when I plot Q versus Q Dot so if I plotted Q as a function of time I would expect to see an oscillation that would Decay if it's stable or blow up if it's unstable but the timing of this is is one feature that I'm less worried about the timing I want to see more it's be it's long-term Behavior it's going to be a little bit clearer to see 
it in this plot. Where am I going to start? If I start from some initial condition — let's say a positive q and zero velocity — then I would expect it to start getting pulled towards the origin, so that's negative velocity, and I think it's not too hard to see that we expect something like that to happen when it's stable. When velocity is positive I'll be moving in this direction in this picture, and when velocity is negative I'm moving in this direction, so it will tend to trace spirals, and in the good case, when there's damping, I expect those spirals to go like this. Okay, so that's what we're seeing here: an initial condition here, and this is the origin, and I can change the stiffness and damping and the like. This is chosen with a pretty small h — h is 0.1 seconds here. I have a mass of one, a damping of one (I probably should have picked 0.1 or something), and a stiffness of two, so these are all round numbers, and I get a reasonable solution. Interestingly, as I increase the stiffness — making k bigger — the plot gets elongated; I get larger velocities, like this, but it'll still eventually converge. Now, the blue line is the analytical solution, with no numerical-integration inaccuracies; the red line is the numerical approximation I get from doing Euler integration. The natural thing you can imagine is what happens if I start taking bigger steps. What Euler integration is basically doing is taking the derivative at each time and forecasting it out as if it were constant, then evaluating the derivative at the next time and forecasting that out, so it makes an approximation of that curve. The problem is, if I start increasing h, my approximation is going to get more and more degraded, and my accuracy is going to go down
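The accuracy-versus-step-size behavior being demoed here can be sketched in a few lines. This is my own minimal reproduction, assuming the round numbers from the demo (m = 1, b = 1, k = 2); the two step sizes are arbitrary illustration choices:

```python
import numpy as np

def euler_msd(m, b, k, h, x0, steps):
    """Forward-Euler integration of m*qdd = -b*qd - k*q, state x = [q, qdot]."""
    A = np.array([[0.0, 1.0],
                  [-k / m, -b / m]])
    x = np.array(x0, dtype=float)
    traj = [x.copy()]
    for _ in range(steps):
        x = x + h * (A @ x)   # x_{n+1} = x_n + h * xdot_n
        traj.append(x.copy())
    return np.array(traj)

# The continuous-time system is stable (eigenvalues have negative real part).
small_h = euler_msd(1.0, 1.0, 2.0, h=0.01, x0=[1.0, 0.0], steps=2000)
large_h = euler_msd(1.0, 1.0, 2.0, h=1.5,  x0=[1.0, 0.0], steps=50)

print(np.abs(small_h[-1]).max())  # spirals into the origin: tiny
print(np.abs(large_h[-1]).max())  # discrete approximation blows up: huge
```

Formally, forward Euler maps x to (I + hA)x each step, so the discretization is stable only while the eigenvalues of I + hA stay inside the unit circle; shrinking h pulls them back inside.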
right? If I increase h too much — okay, sorry, that was the stiffness — let me increase h: if I increase it too much, things can get bad. So I'm simulating a system whose continuous-time dynamics are stable, but the discrete-time approximation is unstable. This seems totally artificial and forced, but you might see a simulator do this to you one day, and the reasons for it are exactly what we're seeing here. So the very first lesson is just: if you see your simulator blow up, turn the time step down. But actually the interaction with the stiffness, the damping, and the mass is easy to understand too, and we're going to try to make sure those points land. In particular, what matters is the time step relative to — not k, but k over m. The thing that matters is h relative to k over m. These are the accelerations — this is like the nominal q-double-dot — the stiffness divided by the mass. If k over m is large, then you're going to need a smaller h to simulate accurately, because as I increase k my derivatives get larger, and taking a long step with fast-changing derivatives can quickly lead to big errors. That's the intuition: if your accelerations are big, you need to make your time steps small. So it turns out that when you're simulating a humanoid — say a 400-pound Atlas robot that's generating relatively big forces on the ground — you would think that might be a hard thing to simulate, but actually the accelerations are relatively low. Big forces on a big robot are kind of okay. When Atlas goes to pick up, like, a toothpick — that's the bad case: you get big forces on a little mass, and you send toothpicks flying in your simulator. It turns out the objects we pick up are not so different from that, and in the range of inertias that you
have to deal with in manipulation, that's the first challenge: you tend to have very stiff equations. And the accuracy of the contact simulation is such that you need to choose k larger than you would for walking — I'll make that point when we get into the contact equations. Does that basic idea make sense? What does it mean to be a stiff differential equation? It means that these accelerations — this k over m, in this simple system — are large, which means h has to be small. Good. So we're going to see now where some of that stiffness comes from in the contact problem. Let's do a little contact mechanics. I'm going to take almost the same example, but I'm going to put a contact in it — in fact, I'll do it in the vertical plane this time instead of the horizontal plane, just so I'm pulling down with mg. I've got a ground here, and I want to think about what happens if my point mass goes into the ground. Now, your physical intuition will probably tell you all kinds of things about that. First of all, you probably expect it to bounce, but that will depend on our contact parameters; in some of the models we're going to talk about, it's actually going to stop. It depends: if you assume that the ground and the point are infinitely rigid, then unless you do something else to model restitution, rigid contact is defined as the mass just losing all of its energy on collision. And most of our mathematical models are actually rigid geometries, so the default answer actually is stop — but we'll see ways to make that better. Okay, so these are almost the same equations, but I don't have damping now. If I call this coordinate z, then m z-double-dot is just negative mg when I'm in the air, but once I get into the ground I'm going to have a contact force — let me call it the normal force for now, but we're going to use our full
spatial notation after the basic example. So when it's on the ground I'll have this normal force and mg, both. In that case my equations look like this. Like I said, you might expect it to bounce, but we'll see the cases where it doesn't. So the question is: where does f-n come from? When it's at rest, you could imagine that f-n will cancel mg and you'll have an equilibrium, but it's more interesting to ask what happens when you collide. Option one is rigid contact, meaning we've assumed both objects are perfectly rigid — like two steel objects colliding, even harder than steel, really, because steel would bounce a little bit. The way f-n is defined in the perfectly rigid case: f-n is the smallest force that resists penetration. It's the principle of least action, if you want the variational version. This represents a constraint, and the forces of constraint in the principle of least action — a mechanics principle — are the minimal forces they need to be: they make the minimal change to the unconstrained motion. Now, what does that force have to be? The moment of impact is the interesting case. Anybody know the answer? I don't even have to tell you what m is — you can tell me the answer: at the moment of impact, f-n has to be infinite. Why? If I plot z-dot as a function of t — let me just simulate it on a version of my plot here — z-dot is going to get more and more negative until the moment of collision, and then in the perfectly rigid model z-dot is suddenly zero. So in an impulse I went from having a negative velocity to having zero velocity, and that requires an impulsive force — a delta function in force — which does finite work, but over an infinitely small
interval, so it's actually a delta function. [Student: what about discrete-time simulation?] Yes — good, that's exactly the question. This was the perfectly continuous-time case, and the question was what happens if I now do my Euler integration. If I only ask that non-penetration is satisfied at the finite time steps of my integration — which is all I tend to ask — then yes, you can have a finite force that resists and gets you perfectly back. I was going to say that later, but I'll say it now; that's good. Let me introduce one bit of notation to say that carefully. There's one other quantity we're going to use generally here, which is the signed distance function — the same signed distance function we used in perception. This is the distance between the one object and the other object that I'm trying to collide. So f-n is the smallest force that resists penetration, and resisting penetration means that phi of q stays greater than or equal to zero. So for Euler integration, if we only require phi of q-sub-n to be greater than or equal to zero — at discrete times — then f-n can be finite. It's a little unfortunate that there are two n's there. And in that case we still think of it as an impulsive force integrated over some finite time, so it has a finite magnitude. Is that clear? I guess most of you aren't giving me any thumbs, but that's okay — I know it's a little strange. So this notion of rigid contact — and most of our mechanics is rigid-body mechanics — leads to this seeming problem where I need instantaneous forces at collision, which makes it hard to simulate accurately. If you want to simulate as accurately as possible in continuous time, you actually have to do event detection: you have to handle the collision event explicitly and then continue integrating. If you make a discrete-time
approximation, then we're going to play different tricks. Before we move on to those time steppers, let me also say: option two here is to think about soft contact. So now, when the ball is on the ground — actually let me draw it even a little bit in penetration, and I'll make it bigger since you guys are far away — I've got a ball that's way under the ground, and let me put a spring there that defines my normal force. That would be an alternative, and I can do a spring-damper; in fact, that's more practical. A good way to define the normal force here would be to say it's zero if phi of q is greater than zero, and it's negative phi of q times k if phi of q is less than or equal to zero, and then we could add damping in, for instance. This is actually, I would say, a better model of what really happens in the world than rigid objects — no objects are truly rigid — but here's the catch: the stiffness of real collisions is high. To simulate accurately things that penetrate only imperceptibly — when my foot hits the ground there's a small deformation of my skin, but not a big deformation — I have to choose k to be very big, and this is the tension. So now I can add stiffness in my contact model, but it means I have to simulate slowly. Even worse, the stiffness of the simulation, in some sense, is changing as a function of the configuration, so it's hard to pick one time step that will work in all of the regions, because the stiffness values are different in different regions. If I have two objects, or I get an object jammed between multiple objects, that's a different configuration — it's much harder to pick a single h that works for all the possible derivatives you're going to get across the states. Okay, so
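As a concrete sketch of the two options so far — discrete-time rigid contact (a finite impulse that keeps phi >= 0 at the samples) versus the penalty spring-damper — here is a toy point mass dropped onto the ground phi(q) = z. All gains and step sizes are my own illustrative choices, not anything from Drake:

```python
def normal_force_soft(phi, phidot, k=1e4, d=1e2):
    """Option 2: penalty (spring-damper) normal force. Zero out of contact;
    in penetration a spring pushes out and a damper resists, and the
    contact is only allowed to push (f_n >= 0), never pull."""
    if phi >= 0.0:
        return 0.0
    return max(-k * phi - d * phidot, 0.0)

def drop_ball(z0, mode, h=1e-3, g=9.81, m=1.0, steps=1000):
    """Point mass falling onto the ground phi = z (semi-implicit Euler:
    the position update uses the freshly updated velocity)."""
    z, zdot = z0, 0.0
    for _ in range(steps):
        if mode == "rigid":
            zdot -= h * g                 # unconstrained velocity update
            if z + h * zdot < 0.0:        # would penetrate at this sample:
                zdot = -z / h             # finite impulse lands right on phi = 0
        else:                             # "soft" penalty contact
            zdot += h * (normal_force_soft(z, zdot) / m - g)
        z += h * zdot
    return z

print(drop_ball(0.1, "rigid"))  # rests on the surface (~0 up to rounding)
print(drop_ball(0.1, "soft"))   # rests slightly in penetration, z ~ -m*g/k
```

The soft ball settles where the spring balances gravity, around z = -mg/k; pushing k up to shrink that penetration is exactly the stiffness-versus-time-step tension described above.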
we, in practice, will pick k just big enough to be visually satisfying — you won't see penetration. I mean, when we were first debugging Atlas, we would have Atlas fall, like, a meter into the ground, just to make sure we got the time steps right and things like that, and then we would slowly tune that up. And I should have said: when I make h small, it takes longer to simulate — if I want to simulate five seconds and my time step is small, it's going to take a long time. So this is the tension: how accurate do you want your physics to be — how large do you want k to be — versus how fast do you want to simulate? Okay, good. Is the basic story clear? So let's think now about friction. We've only talked about vertical force so far, where vertical is always defined, in this frame, as the direction of the gradient of the signed distance function — I'll say that more carefully. For the ball falling from the sky the normal force is always vertical, but for any two objects as they're coming together, the normal direction is the direction of closest distance, minimal distance. That's the generalization. Think about what friction looks like in the simple case: let me now throw a ball sideways at the ground. When I get to the ground I'm going to have — not my blue marker again, you guys are not going to be able to see blue — okay, that's my normal force, and I'm going to call this my tangential force. We came up with two ways to possibly define a normal force — loads of details behind them, but the general idea is either whatever force is required to completely resist penetration, or a spring force. But we haven't said anything about slowing me down if I'm sliding along the ground. The rigid approximation of friction is, most famously, Coulomb friction, which
says — again, it's a dissipation law — that my tangential force is less than or equal to some coefficient of friction times my normal force, and inside that limit it's the smallest force that resists sliding. That's sticking. Then once you're sliding — when you have a velocity at the interface that's greater than zero — it's maximal dissipation: it's the force with magnitude mu times f-n that dissipates the most energy. That just tells me what direction the force should be applied: the magnitude is based on my normal force, and the direction is whatever direction would slow me down the most. This is kind of a strange thing, but the Coulomb friction law just says that if I have a larger normal force — if I push down harder on my object — then it's going to provide more friction, proportionally more, and I can summarize that friction interaction with this single coefficient. In 2D, what does this look like? This is saying the absolute value of the tangential force is bounded in proportion to my normal force: as my normal force grows, I can get more tangential force. And the way we draw this: the admissible friction forces live in a cone — as the normal force increases, I can have larger tangential forces, anything inside this region here — and this is called the friction cone. This definition still works in 3D; the friction cone becomes an ice-cream cone in 3D. It still describes a limit on the total amount of frictional force based on the normal force. Okay, so what happens here, then? When I said these things quickly — it's the smallest force that resists sliding, and then it's maximal dissipation — think about a simple
case, for instance, of my ball — or let me make it a box, so we don't think about the rolling version of it — on an inclined plane. Then I'll have normal forces that live inside some friction cone. I'll summarize the contact at these two points for now — I can say there are little points of contact here and the rest is suspended a little bit. So I'm allowed any force inside the friction cone. What happens now if I have mg pulling me down like this, and a frictional force that has to live inside this cone, trying to push up, picking the maximal direction it can? I think I'd actually forgotten to make this artwork carefully, but I did make both of those red lines consistently slanted. So, given this beautiful rendition: there is no force that can be applied at these points that would perfectly resist gravity, so this object is going to slide. If my mu were larger, my friction cone would open up — you see the geometric picture I'm trying to draw. In order to get force balance, I would need a force of friction that perfectly resists gravity; since that is outside my friction cone in this picture, the thing is going to slide. If I were to widen the friction cone by increasing mu, I could get inside it, or if I were to decrease the ramp angle, I could get inside it. So there's actually a beautiful geometric version of friction in that picture. Ask some questions — is this landing? Is this a useful level of detail? Okay. Yes, exactly: if I were to make a free-body diagram, which I'm doing here, and I wanted the forces to balance, then the contact forces would have to be equal and opposite to the force of gravity, and because that is not an admissible friction force, it would slide — and in fact the
friction force you would get would be on this cone boundary, resisting as much as possible — that's the maximal-dissipation idea: resisting as much as possible, but ultimately still not stopping it from sliding. What else can I say to help? The friction cone idea is going to come up over and over again — you can't talk about manipulation without saying "friction cone" — so you don't have to understand all the details, but the concept of a friction cone is essential. Yes, the forces will always be inside the cone; that's the rule, and if I want this brick to stay still, I would want the friction cone to contain a force that can resist gravity. [Student: is it arbitrary, the way you draw the cone, or is it based off of something?] Good question. The normal is defined — it turns out, in the general case — by the direction in which the distance is smallest. In this picture it's totally reasonable (I'll use the same colors as before) to say the normal f-n was here; the tangent is a coordinate system that lives on the surface. When it's a two-dimensional surface, we have to pick a coordinate system for those tangents, and so the tangent, in red, is in this direction. That's what defines the geometry of the cone; the angle is given by the coefficient of friction. I have a picture of the coordinate systems somewhere here. In general, I have two objects; I define the signed distance between those objects; the normal is always the direction of that signed distance — the gradient of the signed distance, it turns out — and the tangent is some choice like this. I tried to do a 3D picture; I hope it's sort of clear: you could pick any coordinate system on that surface, and then you'll get a cone that looks like an ice-cream cone, with the normal in the middle. Sliding friction is cool too, but I'm not sure if you
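To make the stick/slide logic concrete, here is a hypothetical helper of my own (not from the lecture): given the tangential force needed for force balance, it checks the cone condition |f_t| <= mu * f_n, and outside the cone it returns the maximal-dissipation force opposing the slip velocity. The driver below is the box-on-ramp case: with f_n = m g cos(theta) and required f_t = m g sin(theta), the box sticks only if tan(theta) <= mu.

```python
import math
import numpy as np

def coulomb_friction(f_t_needed, slip_velocity, mu, f_n):
    """Inside the cone: stick, supplying exactly the needed tangential force.
    Outside: slide, with |f_t| = mu * f_n opposing the slip direction
    (maximal dissipation)."""
    f_t_needed = np.asarray(f_t_needed, dtype=float)
    if np.linalg.norm(f_t_needed) <= mu * f_n:
        return f_t_needed, "stick"
    v = np.asarray(slip_velocity, dtype=float)
    nv = np.linalg.norm(v)
    # At slip onset (v = 0), fall back to resisting along the needed direction.
    v_hat = v / nv if nv > 0 else -f_t_needed / np.linalg.norm(f_t_needed)
    return -mu * f_n * v_hat, "slide"

m, g, mu = 1.0, 9.81, 0.5
for theta_deg in (20.0, 30.0):
    th = math.radians(theta_deg)
    _, mode = coulomb_friction([m * g * math.sin(th), 0.0],
                               [1.0, 0.0], mu, m * g * math.cos(th))
    print(theta_deg, mode)  # tan(20 deg) < 0.5: stick; tan(30 deg) > 0.5: slide
```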
have the appetite. Yeah — so there's actually no top: this law of friction, which is honestly not a great law — it's the best we've got, but it has problems — says that if you give me more normal force, I will give you more friction; there's no limit. Okay, here's a reason why it's bad — and this is why rigid contact is actually bad. Here's a thought experiment: imagine I have a table with four legs. All the equations of motion tell me is that if I walk up to a table that's standing still, the normal forces must be sufficient to balance the force of gravity. So I know there's a sum of normal forces that cancels gravity; even more, I know those forces have to resist any torques due to gravity. But I've got four feet, and I could produce the necessary torque with many different balances — even with only three feet. That means the rules of mechanics don't actually tell me how much normal force is on each foot. It's just a failing of our rules of rigid mechanics: I could take some normal force from over here, put it over here, and still get a valid solution to the equations. Everything I've said here does not completely define what f-n is. Here's the problem now: if you go to push the table, some of those feet might be sticking and some might be sliding, depending on the normal force — you don't know what the friction force is. So the rules of rigid-body mechanics will not tell me whether the table goes straight, or turns left or right; all of them are possible solutions given the governing equations. They do not give me a unique solution to the differential equations I generate. That seems totally embarrassing, right? We have so much smarts in mechanics, but we can't tell you which way the table is going to go. We would need, like, an atom-level simulation of the differences in the actual
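The four-legged-table indeterminacy is easy to verify numerically: vertical force balance plus torque balance about x and y give only three equations in four unknown leg forces. A small check with made-up numbers (legs at the corners of a 2-by-2 square, weight 40):

```python
import numpy as np

legs = np.array([[1, 1], [1, -1], [-1, 1], [-1, -1]], dtype=float)  # (x, y)
W = 40.0  # table weight

def balanced(f):
    """Vertical force balance plus zero net torque about the x and y axes."""
    return (abs(f.sum() - W) < 1e-9
            and abs((f * legs[:, 1]).sum()) < 1e-9    # torque about x
            and abs((f * legs[:, 0]).sum()) < 1e-9)   # torque about y

# Two different leg-force distributions, both perfectly valid statics:
print(balanced(np.array([10.0, 10.0, 10.0, 10.0])))  # True
print(balanced(np.array([15.0, 5.0, 5.0, 15.0])))    # True

# The equilibrium equations form a rank-3 system in 4 unknowns:
A = np.vstack([np.ones(4), legs[:, 1], legs[:, 0]])
print(np.linalg.matrix_rank(A))  # 3
```

Rank 3 with 4 unknowns means a one-dimensional family of valid normal-force distributions, which is exactly the non-uniqueness in the thought experiment.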
contacts, or whatever — we need more than a point model of contact at each foot — to resolve that ambiguity. I can write down a perfectly reasonable set of equations and it has non-unique solutions. [Student question.] Yep, that's a different thing — well, first of all, that's probably not good for your felt table; if my kids were doing that, I would yell at them. You're saying you push down and then it squirts out? I think if you pushed down with a perfectly vertical normal force it wouldn't squirt out; you're putting a large force down and accidentally putting a little bit on the side, and out it goes. [Student: in that box diagram, what if we put oil on it?] Yes, exactly: if I were to change the slipperiness — that is, the coefficient of friction — the cones would come in, and it would more likely slide. That's exactly the right intuition. Now, the interesting thing is that the coefficient of friction is really a property of the interaction between the two types of materials. You can't say this mustard bottle has a coefficient of friction of three; it's when the mustard pairs with the steel table that it has a certain coefficient — it's contact pairs. So it's a little annoying to specify, and people typically specify individual numbers for individual objects and come up with a hacky way to combine them in the simulator. [Student question.] Not yet — but this is exactly right: that's why you heard me hem and haw when I just drew the forces there. Really there should be forces distributed, potentially, across that entire surface, but most simulators don't do that — we'll talk about that in a moment. So, sliding friction is awesome, but I'm going to skip it. Basically, the rules governing what force you get once you do start sliding are similar — it's almost exactly the same, with maximal dissipation — but most of the time people will
say that your mu decreases once you start sliding: you'll have a different coefficient of sliding friction than static friction. So if you see mu-static or mu-dynamic, that just means once you start moving, we use a different coefficient of friction. And, you know, it shouldn't be that way — our equations aren't quite right, and we cover it up by saying there are two numbers there that interact discontinuously. It's just weird, but this is our best rigid approximation. Okay, so now I think you appreciate a little bit more this notion of contact forces and how they potentially need to be big. For instance, this was back in the day when we were competing to win an Atlas, and this was in Gazebo — we didn't get to pick the parameters of the simulator — and it was really annoying to walk over the rough terrain, because it would go like this. One of two things should have happened at this point: if we'd had control of the simulator, we would have turned the stiffness of the ground down and admitted that it would sink in a little bit, or we would have turned the time step down. When you start seeing artifacts like that, it means you're getting into the regime where your integrator is not accurately tracking the true dynamics — there are a few other ways you can get that artifact, and I'll tell you about them, but that's the regime — and your first instinct should be to turn the time step down; your second instinct should be to start looking for where the stiffness in your differential equation is coming from. Ironically, in the first demo — the iiwa moving around and picking up the block — I had to pick a certain stiffness and time step, and I wanted it to run well in Deepnote and all these things. The place where the stiffness comes from, I think you'll never guess — this is just an arm moving around with a little red block. Any guess
where — what's the stiff part of the differential equation in that system? I've set you up to say it's the red block. It's not the red block; it's the fingers. It turns out, the way we simulate the WSG — I guess I did say it a little bit already, but excellent — the WSG has a mechanical drive so that the two finger forces are always equal and opposite in magnitude, and we simulate that with a stiff spring. It has to be stiff, but it's the thing that drives the bottleneck of my simulator, which is annoying. But at least I know it — I found it — and I know that if I want to make my simulator faster, I can change that number or handle that stiffness properly. Okay, let's think about the question of surfaces. We had a question: why did I draw two arrows, when I really should be thinking about integrating over the surface? It's a pretty subtle thing. Most simulators think about force vectors applied at points — that's the natural thing when you're writing a dynamics engine — but it gets subtle fast. Let's just think about box-on-box contact; that's what we saw when I dropped a bunch of red boxes out of the sky. Say I've got one box and another box here, box one and box two: where do I draw the contact forces, my normal forces? I drew them here and here to make my free-body diagram work, but that was a little arbitrary. If you're writing the simulator, you might think, oh, I'll just pick the corners — but that's not robust enough to handle all the cases. Where are you going to pick them if you find yourself in a situation where your box is hanging off the end? Maybe I need to somehow pick here and here, for instance — that would be a reasonable choice — but there are other cases too. Because the integration time step isn't perfect and I might not simulate the equations perfectly, I
might find myself with my block like this: my integration wasn't perfect, so my geometries are overlapping a little bit. Now where do you draw the contact forces? It's not super clear — maybe you pick here, maybe you pick here. Writing an algorithm that picks those forces consistently is hard — maybe impossible, I don't know; I think it has never been done — and this is another big reason why simulators of contact become unstable: you'll be simulating along, taking steps of the integration, and the solver will quickly change its mind about where the forces should be applied, and all of a sudden something will explode. That could be what was happening here too: it could be slightly changing its mind about where the forces should be applied. So here's a reasonable candidate. Say I've got two boxes that are in penetration. A natural algorithm I might try to write would say: I'm going to pick the point of maximal penetration, take the closest exit from that maximal penetration, and apply a force like that. That's a pretty good algorithm — but if I look at the way that force gets applied as I move the objects through, there's a problem, which is that the location of the force changes discontinuously as I move across a corner. In particular, this notion of penetration distance and closest exit has the property that if I'm going in and I cross this corner, a force that was previously pushing me in this direction — my normal force — all of a sudden decides to push me up in this direction. Since I'm in penetration, it could be a large force that suddenly changed direction. This is another big source of stiffness — or instability — in the contact equations. Okay, take a quick stretch and then I'll tell you how to solve that problem. Okay, I'm going to solve it quickly, because we only
have a little bit of time left. I think this really is a big source of errors in a lot of simulations, and the answer, in some sense, was in your question: why not integrate over the surface? The answer has just been that it's computationally expensive and it's hard to do that computational geometry, but one of the things that has made the simulation engine in Drake much more robust now is doing that extra work to integrate over the surface. But okay, a first trick, which we actually use a lot — you could put this in any simulator, and we certainly put it in Drake — is to put extra contact geometries down. For that red brick to fall and land in the piles and be accurate in all the different cases: if you visualize the collision geometry instead of the visual geometry, you'll actually see that the red box has a little box just partly inset from the original box, and then there are zero-radius spheres at all of the corners. Why? I'm telling the simulator: do your best in all these hard places to pick one point to summarize the force, but go ahead and always put an extra point at the corners. We get one contact force per collision-geometry pair, so by having extra geometries declared off to the side, I'm just saying: always pick the corners, and then do your best on the interior. When we're simulating with point contact, that actually works surprisingly well, even in these sorts of cases — like the bricks falling down. If you watch carefully, this is actually simulating with point contact only, and I had to convince myself that every one of those cases is actually resisting penetration because some corner is hitting that other, slightly inset interior box, or the points are on points. So it's a very good heuristic that makes these things work pretty well, even for, like, grippers picking
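The inset-box-plus-corner-spheres heuristic is easy to picture in code. This sketches just the geometric idea — the names are my own, not Drake's API:

```python
import itertools

def box_collision_geometry(half_extents, inset=0.005):
    """Point-contact-friendly collision geometry for a box: a slightly
    inset box covers the interior faces, and zero-radius spheres sit at
    the 8 true corners so the solver always has a contact candidate
    exactly at each corner."""
    hx, hy, hz = half_extents
    inset_box = (hx - inset, hy - inset, hz - inset)
    corner_spheres = [(sx * hx, sy * hy, sz * hz)
                      for sx, sy, sz in itertools.product((-1, 1), repeat=3)]
    return inset_box, corner_spheres

box, corners = box_collision_geometry((0.05, 0.05, 0.025))
print(len(corners))                    # 8 corner spheres
print((0.05, 0.05, 0.025) in corners)  # True: corners keep the full extents
```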
up the box in the middle — that has all worked very well. So if you just want to make a point-contact-based simulator work better, you can help by making the collision geometry a little bit friendlier. But you can also do the harder thing of computing the surface integrals. How does that work? In Drake this is called hydroelastic contact. It turns out the rules of physics across a surface in 3D of arbitrary geometry are not in a textbook — we had to invent some. That sounds bad, but wait: we took our best extension of the existing laws of physics across a surface and applied them. It required, first, a bunch of computational geometry: you take meshes intersecting with other meshes, you compute volumetric meshes of the intersection on the fly, then you take a surface inside that mesh which summarizes the collision boundary, you take an integral across that surface mesh, and you apply forces in proportion across that whole surface integral. That has made a huge difference in the way we do simulation. You get these beautiful simulations: if you turn on the hydroelastic visualization, you'll see the rich contact geometry moving around, and if you take the box-on-box collision, where the force was changing discontinuously before, now we get beautifully smooth, changing contact forces — just to show the before and after. It's an expensive solution to the problem, but it is a robust solution. There's a lot of computational-geometry work behind it, and I want to be clear about the model we have going on here: in most of the cases — when I say it's soft-on-soft, for instance — that's the spring-like model, so we have a stiffness in these hydroelastic simulations, but the object is not deforming. You can do fully deformable simulation too, but this is a nearly rigid simulation: we're allowing
penetration so the contact geometry doesn't change but on every instant we're allowing the penetration and we're computing an integral over the penetration to represent the force so this is good for soft simulation in the low deformation mode if you can say my geometry is roughly not changing but I want it to be a little bit squishy this is a good model for that right or for nearly rigid if I turn K up on my stiffness I can simulate pretty nearly rigid things with this okay it's not going to simulate a plush toy if you want to pick up a plush toy and shake it around or a rope or a cloth this is not the right model for that that's what it means when I said the state space for simulation and planning is the original rigid body state it's not adding state to track the shape of the surface friction in hydroelastic again is applied in a similar way sliding friction is even more subtle but it required taking our best approximation of Coulomb friction on that surface right so it's in the paper there's a lot of details but it's almost Coulomb-friction-y so it's more expensive than point contact but less expensive than finite element for instance so you get these great simulations then if you watch the hand pick up the mug and you turn off the hand visualization and just show the hydroelastic regions you know this is what turned it into a really robust simulation this is one of the things yeah the other thing is a really good contact solver so we're not doing Euler integration we're actually doing a much more advanced integration scheme inside Drake the demo reels from the teams that are working on this are hilarious they always just have random objects falling and they always look beautiful and are doing more advanced things but this week it was green peppers falling out of the sky and this is now a green pepper in a bowl that almost falls off a table but not quite right and the contact forces
are being visualized what's really amazing is if you watch the surface contact surface changing even between the bowl and the table it's the little lip of the bowl right and that's so subtle to get that right and if you want to measure the happiness of a Dynamics expert the the measure is how constant how smooth are those contact forces moving right if you there's some really fun situations I think I've got a Lego example here somewhere yeah we put Legos on right we're mating Legos and the contact forces came out like this and were just like a rock solid and I think that that was you know ecstasy for the Dynamics Team that was like did you see what our contact force engine just did you know because normally you'd be like like this right um in a lot of so a lot of the a lot of simulators will use this blocks falling from the sky as kind of a demo reel right of a very advanced simulations but it's a kind of a trick right because you can't look at that and judge if it's physically accurate or not in practice if you were to zoom in you'd see objects intersecting all the time because what they do when they're simulating that is they they say I've got like 10 000 contacts happening I'm going to pick 10. I'll make those work this time step and then the next time step I'll pick another 10. 
and that 10 is a magic number right and in net it kind of looks like things fall and separate and that's all good but it's not very accurate and the contact forces you see all over the place and they'd be missing in some places or whatever you know so the real test is if you can visualize the contact forces and they are just sitting rock steady then you've got a strong numerical recipe behind it let me ask yeah so okay there's a multi-level response so the question was why is it important that we allow penetration so fundamentally if we think about a spring model then there's no contact force until penetration so the first choice is to do that the alternative to a spring model is to handle the impulsive forces and try to obey those constraints exactly there is a mode where you can try to put one soft thing in with another hard object in hydroelastic that does some of this work but right now in the simplest version of the model forces are only defined when you're in penetration that's when you have a surface to work with yes for the mug with the hand yeah yeah I mean we always aim for real-time simulation what's that still it's uh it depends on which simulation and it depends on the mesh tessellation so you have to take your surface mesh and make it a tet mesh and the number of finite elements in your tet mesh would be the computational cost so the reason it is not on by default in Drake is because it's slower but it's actionable we use it we choose to turn it on often in our manipulation workflows yeah yes right so the question is what about finite element models FEM so let's just mention deformable simulation for a second here so if I wanted to simulate a plush toy or something like this then I might use a finite element model which is FEM okay which roughly says I'm gonna model my plush Stanford Bunny with a bunch of springs and
spring masses across the surface of the object and allow them to compress and shrink and possibly change their relative location because the thing is deformable FEM is just a computationally optimized version of that to some extent so the question was can we make finite element real time I think people definitely have and can the GPU is a big accelerator for FEM type models whereas contact is actually not optimized well by GPU because it's very branchy in the sense that you take a different path through the code whether you're making contact between object one and object three or not and that branches so much that the GPU isn't a huge accelerator but for finite element where you have a whole mesh and you know there's no branching on that particular mesh then you put it on the GPU and people have very performant simulations so there's an Nvidia simulation of a fingertip with a soft finger that's running at real-time rates yes so I don't guess it's not not a lot of penetration simulations yeah so okay then there's a question of how does the finite element get connected with contact forces and I think you can use all the same machinery if you treat each of your elements as either rigid or soft allowing penetration or not I think all of those things can be handled with the same connections now Drake has FEM also it's not on by default either but we do have a finite element method that's not GPU accelerated yet because it's very hard to release open source code that will work on everybody's GPU but that's coming and there's one other one I'll mention but there's a question yes so this is a great question so the recipe of trying to summarize a whole body when you're in penetration with a single force is difficult with a spring force but if I'm willing to take forces everywhere then the question is resolved again so it's actually not just a single normal force
that's being applied you could summarize the total forces by putting a point somewhere you know they're mathematically equivalent but the way it's being computed is by taking an integral over the surface of contact and so that ambiguity disappears the other thing so finite elements are good for many things I've never actually seen anybody shake a Stanford Bunny but I'm sure there's a paper out there that does that but let's say I wanted a plush toy or something large and soft when you get to like cloth simulation that's an extra level of complexity because you have to worry a lot about thin objects you can have piercing phenomena so if you take slightly too big of a time step you're on the other side or part of your object is on the other side of the object and so you just have to deal with these breakthrough events but you can do that again with a finite element method for instance with extra work but I would say I don't claim that Drake can support cloth yet that'll be future work the other one that a lot of people are interested in and have been working with is MPM material point methods particle type simulations so if you've seen Nvidia Flex for instance or if you've seen these particle simulations where you have something that could be a fluid it could turn into a rigid it can be something in between this is doing not quite the atom level but like a big molecule kind of level simulation of these objects and on the GPU it can be made surprisingly performant you tend to give up a lot in terms of if you tried to simulate a rigid thing with an MPM model you would find big gaps in the modeling but for fluids and other things you know again if you were to compare it to Navier-Stokes you'd probably find big gaps but in terms of if I just want to see a robot crack an egg and see what that looks like then I think it's the way that people
have been trending yes that's why I gave these optimization principles they give trivial answers when there are only two objects so let me repeat the question sorry which was I've only said anything about two objects colliding but I can have cases where you know I've got a bunch of objects colliding like in the blocks falling for instance so then how do these forces get resolved in that case so in the dissipation sorry in the penetration with springs it's clear the interesting part is when I'm doing rigid contact let's say and I'm trying to avoid penetration in a multi-object setting then you actually resolve these forces the smallest force that resists sliding and maximum dissipation in the multibody equations with all of them adding constraints and you solve an optimization problem on every time step to resolve the forces these are the time stepping simulators if you've heard of linear complementarity problems LCP that is a way to solve this problem it's falling out of favor because it's a bad numerical problem but yes we solve an optimization problem on every time step in order to figure out the forces in the general case great question okay so your wizardry level of using even just being a user of contact simulation I hope went up a little bit you saw a little bit of what's going on behind the scenes but if you take away one thing if you see feet shaking or objects jittering or whatever the first thing you do is you turn down the time step and make sure that phenomenon goes away you should always be able to take that phenomenon away by making the time steps small enough it just might be painfully slow to simulate but then what you do is you find the source of stiffness in your equations figure out why the gripper was tuned with too high of a gain it might be your controller oftentimes a controller will add stiffness or it might be some contact parameter resolve that stiffness and then crank
your simulation back up to get fast simulation okay and this the you know what makes a fast simulator is really how big can you make h there's a bit a little bit about how much you've optimized your multibody code a little bit but the dominant factor is how big of a Time step can you take and that's a property of of your integrator and your in your equations of motion good okay I'll see you Thursday
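That closing advice, turn the time step down until the artifact vanishes and then hunt the stiffness, can be seen in a toy example. A minimal sketch, assuming a damped spring as the stand-in source of stiffness (the constants are illustrative and none of this is Drake's actual integrator):

```python
import math

# Forward (explicit) Euler on a stiff damped spring: m*xdd = -k*x - c*xd.
# With k = 1e4 and m = 1, omega = sqrt(k/m) = 100 rad/s, so a time step that
# looks harmless (h = 0.01 s) is already outside the stability region, while
# a smaller step (h = 0.0005 s) integrates the same system just fine.
def total_energy_after(h, steps, k=1e4, m=1.0, c=20.0):
    x, v = 1.0, 0.0
    for _ in range(steps):
        a = (-k * x - c * v) / m
        x, v = x + h * v, v + h * a   # explicit Euler update
    return 0.5 * m * v**2 + 0.5 * k * x**2

# The true system dissipates energy, so a healthy simulation should show the
# energy decreasing from its initial value of 0.5 * k = 5000 J.
blow_up = total_energy_after(h=0.01, steps=200)     # grows without bound
settles = total_energy_after(h=0.0005, steps=2000)  # decays as the physics says
```

Shrinking h always removes this kind of blow-up eventually, which is exactly why it is the first diagnostic: if the phenomenon survives arbitrarily small time steps, it is in your model, not your integrator.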
Robotic_Manipulation_Fall_2022
Fall_2022_642102_Lecture_14_Motion_planning_part_2.txt
part two of our motion planning week I want to just you know quickly say a few of the key ideas from last time to kind of launch us into this time so last time we started talking about motion planning and there were a couple important ideas that came up one of them was almost hidden I didn't emphasize it particularly but one important idea was the idea of configuration space and we're going to lean on that more today so remember that I drew some plots one of them for instance I used this example of a simple pendulum with two links and a ball on the end of the hand that had q1 q2 right and so this picture when I draw it in sort of the 2D world I guess in this case this is what we'd call our task space or our world space if you will or workspace and then we drew similar pictures of the same geometry if I drew it here this is sort of in our canonical X-Z world axes right but we also tried to draw the same geometry in our q1 q2 space right and this is our configuration space joint space if you will but more generally you know the configuration of the robot and you know if you call this for instance a task space obstacle then we could also draw, this would be an excellent time to go multi-color right, then we can also draw these regions from task space in the configuration space you remember I had curves that looked a little bit like this right for this region right and so what does this obstacle in configuration space look like right so if I take a particular set of joint angles q1 and q2 and my robot is in collision with this obstacle then I'm going to mark that similarly you know for that configuration if it's in collision I'll mark that as an obstacle in the configuration space right and there's such a strong correlation there that we often refer to these as configuration space obstacles okay so this notion of configuration space I drew it a
handful of times but I didn't maybe call it out as important as it really is and we're going to use it more again today and one of the important things to observe that this picture was meant to show was that even geometries that are sort of simple in the task space this is just a half plane or a wall right it's certainly a convex geometry if you will in the task space it got a little curvy a little bit more complicated in configuration space because if you think about it the conditions that make this hand run into that wall depend on the sines and cosines of the joints of the kinematics right so those warp my task space obstacles into configuration space okay so that was one really important idea and then we actually started motion planning by talking about inverse kinematics and inverse kinematics is just trying to find one point you know especially the type we wrote the richer form was just trying to find some point in my configuration space that I like right by some objective and probably that it's not in collision right so we tried to write inverse kinematics as an optimization say find me some Q maybe that's close to my comfortable Q that's the way I wrote it a bunch maybe I wanted to have the end effector in a desired position so I put my kinematics as constraints right and maybe I'd say that the minimum distance to any collisions is greater than something like this right this is the kind of thing we wrote down in our optimizations last time and then we said that kinematic trajectory optimization trajopt if you're one of the cool kids you call it trajopt okay was just finding now instead of finding one point it was trying to find a series of points in configuration space maybe a trajectory of points in the configuration space that satisfied some optimization problem right so in the optimization view the way I described it last time I said let's try to find some trajectory and I'll parameterize it with parameters alpha you could describe your
trajectory as let's say a neural network that took t as an input and output Q with weights alpha if you wanted to I wouldn't recommend it probably for this case but you know if you think about things in that space this is just a parameterized class of curves and the ones that we use the most are actually just polynomials these are polynomial bases okay so we just parameterized some class of curves typically they're defined well over some finite interval maybe you know it's a nice curve it doesn't blow up or something in some finite interval and then we just did only a little bit more than this we said let me minimize over the parameters of my curve you can have various interesting objectives maybe shortest path maybe minimum time kind of objectives and you put some additional constraints like maybe my starting hand position is where I am at the beginning of my trajectory so let's say Q of alpha at zero and maybe my goal satisfies the kinematics at time T for instance and you could put collision avoidance constraints and joint limit constraints and all the other things off the end of this I think the one thing I got a couple good questions after lecture like one thing I wish I had said better last time was right that um you know this looks a lot like this if I'm going to hand this to my optimization package and it's solving it as a nonlinear optimization it's doing things like taking the gradient you know partial L partial Q for instance and that's how it's doing gradient descent or sequential quadratic programming to get to the bottom this is not much worse for the optimizer right this is just saying now it's f kin of Q of alpha at time zero I guess right I can still take the gradient of this constraint with respect to just alpha instead of Q you just use the chain rule or call it back propagation if you want to to take the gradients of
this thing with respect to alpha instead of with respect to Q and this is a similar type of constraint you can hand to the optimizer and it can do sequential quadratic programming or gradient descent or whatever it's going to do to try to satisfy these constraints and find the best alpha the one we did last time was a little bit fancier it also allowed you to search for the time horizon you didn't have to specify the time horizon so it made that a decision variable too okay and then you could put things like minimize time here okay but that's just a detail on top of that so everything we did in that space was with you know nonlinear non-convex optimization right in general this is a non-convex problem right which means it's subject to local minima and local minima, local minima sounds like, you know I almost want to call it something more dramatic than local minima, local minima is not just you always get a solution and it might not be the best solution, in this case local minima could mean the solver was not able to satisfy all the constraints was not able to find a satisfying path even if a path exists right it could just get stuck and I think there's two particular ways I want to sort of distinguish between the two particular ways that non-convexity comes in because we're going to try to address that today right so I think, I mean they're related but there's two big sources I think one of them is from like the kinematic constraints if you tell me that I'd like my Q to satisfy some, you know, my hand to be in some position, we know that for instance there might be multiple solutions that achieve the same end, my shoulder isn't mobile enough to show it, but there might be multiple solutions and unfortunately if you take a straight line between those two solutions it's unlikely that they're both going to be good solutions, it's unlikely that the set of optimal solutions is convex when you
have complicated nonlinear kinematics so just by the fact that this is a nonlinear function is the first source okay but there's a second source which is, I think obstacles are sort of a fundamentally different type of non-convexity where it's like there's multiple things you might have to try to do I might have to be above the table I might have to be below the table and you somehow have a discrete choice to make about what you are going to do around that obstacle right and let me just make that super clear with my little example here so here's my nonlinear trajectory optimization you know actually in this example I left it going through, I said all the points are out of collision but I was trying to make a point that you have to check the segments too to keep them out of collision, but let me focus on the non-convexity here right so this is now a picture in configuration space okay so I just have a point call this my start another point that's my goal that I'm trying to find in configuration space and there's a configuration space obstacle this is way simpler than what a real configuration space obstacle would look like but let me just make the point okay and the solver is doing a good job of finding a path you know the goal here the objective is to find the shortest path from the start to the goal and it's doing a pretty good job apart from cutting the corner right and you can solve these in real time that's no big deal right but because it's a local trajectory optimization approach it gets stuck in local minima right even if I were to pull the start and the goal over here once it starts thinking about going right around the obstacle it's not going to change its mind because why is that right so if I were to make an incremental change to those points that went in this direction then it's going to get worse before it gets better right because it's going to get a big penalty for going into
the obstacle there's a better place over here but it has to get worse before it gets better and so the gradient-based methods aren't going to find it okay so I can't do that so our goal today is to try to get around, first I'm going to emphasize that type of non-convexity and we'll think about how we get around this kind of non-convexity too and the two big approaches we'll talk about are sampling based methods and ways to try to do global optimization that try to do more than just look at the gradients okay that's my setup is it clear okay so people have incredibly good solutions to this let me stop that and you might have seen some of them right you've seen robots doing amazing things one of my favorites still is, my start to the goal okay written as a simple optimization and that's what I was solving okay forgot to use my slides but okay this is a humanoid this is a video from James Kuffner back in the early 2000s of a humanoid reaching under, you know that's a lot of degrees of freedom, reaching under to get a flashlight he's solving a big complicated motion planning problem right and there's another example here from the same time same guy James Kuffner where he's actually solving a geometry puzzle like one of those things that you find at parties and they're a challenge to try to figure out how to separate the two rings right that's iconic, well it's one of the canonical geometry puzzles for motion planning another one is the piano movers problem if you have an apartment in Back Bay and you live on the third floor and you have a piano how do you get it up there I think there's no solution, or it involves taking out windows or something, but if there was a solution these would be the methods you would try to use to find it I'm going to put it back to there just so the animation stops okay so how are we actually going to do that how do we solve these geometry puzzles
when this picture of local minima I think is real and it's sort of mysterious maybe on the face of it how do you actually solve highly non-convex geometry problems like that if I'm talking all about local minima right and the two most famous algorithms, there are many many algorithms and variations in this family but they're maybe derived from these sources here, one would be called the probabilistic roadmaps PRMs and the other is the rapidly exploring random trees how many people know PRM and RRT okay that's helpful thank you okay so the big idea here, I'm going to actually start with this one, I know chronologically this came first but I think it's a little simpler to talk about the RRT that's just my preference different people have different preferences okay so the idea is simple we're going to get around this non-convexity by sampling okay so I have my configuration space obstacle my start and my goal the basic idea of an RRT is, don't worry about getting to the goal, maybe with some probability I'll try to get straight to that goal but I'd likely fail, I'm going to just define some region safely around my workspace okay and I'm going to start sampling I'll pick a point at random and I'll see if I can grow towards it from the start okay so in practice I draw a straight line to that sub-goal and since I don't want to sort of teleport all the way to some distant goal I'm going to just take a motion in the direction of that goal now when I sample in configuration space, again right this is q1 q2 configuration space, if when I sample I end up in the obstacle, which I can check quickly by putting my manipulator in those joint angles calling my collision detection engine and deciding if I'm in collision, that's a relatively easy check to do given the geometry engines, then I'll just
discard that right off the bat I'll say I sampled there it was a bad choice ignore it let's keep going and then at some point I'll sample over here I'll grow a little bit over here and as the name suggests I'm going to grow a tree in the exact version of the algorithm every time I pick a sample I'm going to look for the closest point in my existing tree and try to extend from that closest point of the existing tree towards my sample okay the details I'll put on the slide in a second here but the intuition is that I'm solving a non-convex problem by choosing points at random yeah in this case you're right I might grow here at some point I might pick a point that would cause me to grow into collision there's various choices you could make you could try to just grow up to the collision or you could just discard that sample and keep going okay and at some point the hope is that I will have picked enough of these random sub-goals that my tree will come out and I'll have found my way all the way to the goal in low dimensions it seems completely reasonable that that would solve this problem what's surprising is that you can do it with a humanoid right that it works surprisingly well in high dimensions for problems that have a reasonably large configuration space region and relatively large tunnels that will get you from one place to the other right if you have to you know thread a needle to try to find it then sampling is not a great strategy for that okay so this is the picture from the original RRT paper where they started with their initial point they've grown a tree you pick a random sample you extend towards that sample and you add a point okay for those of you that are familiar I'm actually not going to spend that much time on this I'm just saying the basics because I think the intuition is important the details are easy to get okay so I grow towards this tree now what's important is that that very simple
strategy can solve pretty hard problems this is actually a narrow passage kind of problem it will solve it eventually it just will take potentially a lot of samples so I had an initial guess that was in here and a goal that was out here and eventually if I draw enough samples and grow a dense enough tree then this algorithm will grow these sort of characteristic RRT trees and find paths if they exist right this is a good example of a path where if I started here and I was trying to go here and my initial guess was trying to take a straight line there it'd be very very hard for a trajectory optimization approach to get this and this by brute force will eventually find its way around it that's pretty cool okay so it sounds like an extremely simple algorithm and it is an extremely simple algorithm but it is not I would say a naive algorithm there are some very clever things that make it work even though it can be written in three lines and anybody can type it in and make it work for instance if I take this problem again and I do a more naive thing right where I just say I'm going to take my current tree I'll pick a point on the current tree and I'll grow it in a random direction right so just pick a point on the tree grow it in a random direction add a little edge that sounds like a very reasonable sort of search type strategy pick a random direction and grow okay that doesn't work at all right that gives you these sort of hairball kind of pictures where with high probability that would be taking a random walk right and random walks have this characteristic sort of Brownian motion kind of behavior right a random walk would stay near the origin so there's something very clever about this sub-goal idea where I'm going to pick a point randomly in the space and try to grow towards this distant goal that causes it to have this exploration behavior which avoids this
naive sampling okay in particular it has something called the Voronoi bias which is like the best property of this thing do you know what the Voronoi regions of a graph or a set of points are if I have a set of points in the plane then the Voronoi regions are the regions associated with a point, if I have all these points, let me make them a little bigger so people in the back can see, okay so I've got some points in my plane if I were to draw the region of all the places in the space that are closest to this point okay then there's going to be a line that separates it from that point there's a line that separates it from this point and this point and I don't know it should have worked out better than it did but there you go something like this would be the Voronoi region associated with this point which is all of the points in the plane that are closest to this point and there's a Voronoi region for this point something like that and a Voronoi region for this one right okay so that would be a Voronoi partition of the space it's this notion of what are the sets associated with the closest distance to each point computing the Voronoi regions explicitly is a little bit computationally intensive the RRT doesn't do that right the RRT is just the algorithm I said just pick a random point grow towards it turns out implicitly because when you pick a point you try to find the closest point on the tree it's acting as if it's trying to you know take a local evaluation of the Voronoi region and that has this remarkable property of causing it to explore, not just exploring, like well let me show you so if you start growing the tree and I draw the Voronoi regions which is relatively expensive, the animation is harder than the algorithm okay, then it starts at the initial condition and because the Voronoi regions of the cells on the outside are large with very high probability as it starts let me sort
of reset a little bit here whoop okay that was a fail let me, all right so as it starts the Voronoi regions on the outside are big so the specific thing that says is that the probability of expanding some node is proportional to the volume or the size of the Voronoi region at the beginning of the search the regions that are unexplored have the largest Voronoi regions so with high probability at the beginning it's going to grow out into the free space as it continues the free space gets filled up and there's still some large regions on the interior and it will start filling those in with high probability and that sounds incredibly clever but it's just because it does this nearest neighbor query right it's like when you're reading a book in literature and they tell you all the things the author meant and I'm not sure if the author really meant it you know, it's like endowing it with those incredible properties post hoc right, but it's beautiful so in practice these things will expand out into the unexplored regions and then it'll fill in all the nooks and crannies and if there is a region like this little tunnel in my bug trap example then it will eventually sample there and will eventually grow right so this idea of the Voronoi bias is one important idea and the other super important idea is that it's probabilistically complete which means that if there's a path from the start to the goal then it will be found with probability one as the number of samples goes to infinity that last part's a bit of a bummer but so it's actually not that hard to make an algorithm that has the probabilistic completeness property in fact even the naive one is technically probabilistically complete it's just going to take a long time to find a path to the goal but that's an interesting class of algorithms to say I want to guarantee that eventually I will find my path if it
exists. Okay. There are tons of variations and extensions that I'm not going to list, but you can imagine, for instance, that you might do better by growing a tree from the start and backwards from the goal at the same time and letting them connect. That works, it's extremely effective and used often in practice, and there's a whole pile of heuristics that make these things work really well. Any questions about the RRT at the high level? Okay. So the RRT — like I said, people plan for humanoids with it, they plan on the fly; a mobile manipulator can go into a new environment and start planning with RRTs. But they do spend some amount of computation, especially for hard search problems. If you're going to be moving in the same environment over and over again — like in our clutter-clearing example — it makes sense to reuse some of the search you've already done, a multi-query version of this. That's often called multi-query planning, and it's where the PRM comes in: the probabilistic roadmap. The probabilistic roadmap basically says, I'm going to build a graph up front. I separate the computation into, first, roadmap construction, where I go ahead and build some complicated graph, and then when it's time to plan, I just try to find a shortest path on that graph. So imagine building one big roadmap once and then planning on it separately. This was actually published a few years before the RRT; I just like talking about them in this order. Okay, so now I've got my simple configuration-space obstacles, and I'll pick a point at random, look around, and try to connect it to all of its nearest neighbors, drawing an edge to each, and I'll build up this graph. The offline phase is just building up this roadmap by picking points at random — there's no start, there's no goal — connecting each to its neighbors, making a nice dense network. Then at query time the robot is in some initial condition with some goal, and all I do is connect those up to the closest points on the graph and do graph search to find the path. It's very similar to the RRT, but it makes sense if you're doing repeated plans — you might as well add that node and that edge to the graph. And this thing can have, call it lifelong learning if you want: it continually fills out the regions of space you're actually experiencing, and it can be very efficient. For instance, there's a company now called Realtime Robotics that does these PRMs on the fly from perception by putting all of this into specialized hardware — PRMs and RRTs, and the collision queries that go along with them, have been highly optimized, with GPU implementations and the like; theirs, I believe, is an FPGA or even an ASIC. In their demo they say planning doesn't start until they hit the red button, and it plans with no pause, no delay. This can be made very performant, to the point where someone comes in and shuffles the environment, people walk through it, and it's planning on the fly with a highly optimized PRM/RRT. Yes — good question. Effectively, yes; I have a slide to show that. There's something I tend to call the RRT dance, which is that if you're trying to find a path from the start to the goal, the path you get out might do a little wiggle dance on the way — and that really happens all the time; I put a video in the slides somewhere that shows it. Okay, so sample-based planning gives random-looking paths. There are post-processing algorithms, like you just alluded to: you could take any plan you get out and pass it to a trajectory optimization — kinematic trajectory optimization is a great way to do it. The sample-based community tends to do things like shortcutting. If you found a big complicated plan that visits more places than it should on the way to the goal — maybe there was just one obstacle over here, but the plan decided to do a little dance — then a shortcutting algorithm, which tends to be very probabilistic (kind of the theme here) and very approximate, will just say: take my path, pick a couple of points on it, and if I can draw a straight line between them without collision, remove all the waypoints in between and splice in the straight line; take a few more random points and repeat. These heuristics are very fast to implement and give pretty good results. If an edge might pass through an obstacle — this is one of the things we'll talk a little bit about today — people typically try to verify it just by running the collision engine on many samples along the segment, but there are more advanced approaches that can actually guarantee you have no collisions on the line segment. I think at some point you'd switch to an optimization to smooth the path out, but you'd be surprised how often people send piecewise-constant curves to the robot — almost certainly the controller is smoothing them out a little bit. But okay, so
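Before moving on: the RRT growth loop and the sample-based shortcutting heuristic just described can be sketched in a few dozen lines. This is a toy illustration, not Drake or OMPL code — the 2D world, the wall obstacle with a gap, the step size, and the check-samples-along-the-segment collision test are all stand-ins I made up for the sketch.

```python
import math
import random

random.seed(0)

# A wall spanning x in [0.4, 0.6], y in [0.2, 1.0]; the only way through is the gap below y = 0.2.
OBST = (0.4, 0.6, 0.2, 1.0)

def collision_free(p):
    x, y = p
    xmin, xmax, ymin, ymax = OBST
    return not (xmin <= x <= xmax and ymin <= y <= ymax)

def segment_free(a, b, n=20):
    # The common (approximate) practice: collision-check many samples along the segment.
    return all(collision_free((a[0] + t * (b[0] - a[0]), a[1] + t * (b[1] - a[1])))
               for t in (i / n for i in range(n + 1)))

def rrt(start, goal, step=0.05, iters=5000):
    nodes, parent = [start], {0: None}
    for _ in range(iters):
        # Small goal bias; otherwise sample uniformly in the unit square.
        q = goal if random.random() < 0.05 else (random.random(), random.random())
        # Nearest-neighbor query: this is what gives the implicit Voronoi bias.
        i = min(range(len(nodes)), key=lambda k: math.dist(nodes[k], q))
        d = math.dist(nodes[i], q)
        new = q if d <= step else tuple(a + step * (b - a) / d for a, b in zip(nodes[i], q))
        if segment_free(nodes[i], new):
            nodes.append(new)
            parent[len(nodes) - 1] = i
            if math.dist(new, goal) < step and segment_free(new, goal):
                chain, k = [], len(nodes) - 1        # walk back up the tree
                while k is not None:
                    chain.append(nodes[k])
                    k = parent[k]
                chain.reverse()
                if chain[-1] != goal:
                    chain.append(goal)
                return chain
    return None

def shortcut(path, tries=200):
    # Pick two random waypoints; if the straight line between them is free, splice it in.
    path = list(path)
    for _ in range(tries):
        if len(path) <= 2:
            break
        i, j = sorted(random.sample(range(len(path)), 2))
        if j - i > 1 and segment_free(path[i], path[j]):
            path = path[:i + 1] + path[j:]
    return path

path = rrt((0.1, 0.5), (0.9, 0.5))
short = shortcut(path)
print(len(path), "->", len(short))
```

The path has to "dance" down through the gap and back up; shortcutting typically removes most of the intermediate wiggles while keeping every remaining edge collision-checked.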
that's a cool set of ideas — a super powerful set of ideas — to use sampling to get around these hard non-convexities. They also get around the kinematic non-convexities, because when I sample points I just have to call a function: if I sample a point in joint space, I can call my kinematics on it, put the robot into its pose, and check constraints in task space — I can check whether the actual hardware would run into the obstacle in the original task space. So by virtue of working with points instead of trajectories, and doing sampling, they're solving both of those problems. If you do want to play with more of these tools, OMPL — the Open Motion Planning Library — is a fantastic resource with a long library of all the different RRT and PRM variants. It's in ROS, you can connect it to Drake and its collision queries, and it's a great resource even just for getting your head around the different types of planners people are using: the PRM, the different single-query planners, the RRT, expansive space trees — a lot of the algorithms out there are available in it. Highly recommended; even just the website is super valuable. We have a bunch of sample-based planners written at TRI that we're pushing to Drake now, hopefully, though not quite in time for this lecture. When it's there — and if anybody needs it sooner, I have versions I can share that are just not as polished — it has the RRT, the BiRRT (the bidirectional RRT that grows in both directions), PRMs; we have all the basics. Here's the RRT dance — the point was made nicely in a paper by Sertac and others — this is the RRT compared against the PRM. You really will see robots do that sometimes, and it's kind of embarrassing. The other really embarrassing one is when you have a beautiful manipulation system and it acts like it's picked something up, but it has nothing in its hand, and it continues through the rest of the motion. We should never see these things again — but that's real. [Applause] Okay. So that's a very quick version of it, but this idea of sampling is easy to communicate and very powerful, it opens up a whole class of algorithms you can play with, and people use them in industry all the time — it's real. Now let me tell you a bit about the optimization view of being more global. I want to spend the second half of the lecture on globally optimal, optimization-based planning. This is one I'm super excited about right now — we've been working hard on it in my group, so I'll give you a slightly biased version, but I do think we've made some nice improvements to what you can do with motion planning, and I want to tell you that story. Maybe I'll tell it first with code. If I go back to this simple example, my red box: the work I'm going to tell you about is motion planning around obstacles with convex optimization. I just talked about how the problem is clearly non-convex, and I'm going to try to do it with convex optimization — that is supposed to be surprising. Here's the same old example, and I'm just solving a convex optimization problem, but now it solves, you know, beautifully, to global optimality. I'll tell you the basics of how that works. It's not magic — it's just putting together some good ideas — but I think it opens up what we can do in our motion-planning approaches. In particular, the advantage of these optimization-based planners is that you avoid RRT-dance kinds of things, and if you care about time — like I told you with the Dexai example the other day, time is money — then having the benefits of kinematic trajectory optimization combined with the globalness of planning is the dream. Okay. So how did I do that simple example? (And where did I put my chalk?) I said this was a non-convex problem and that there's non-convexity everywhere in motion planning, so how can we possibly plan around obstacles with convex optimization? Saying it's non-convex is a little disingenuous: any problem can be made convex if you lift it to a high enough dimension. Typically it's not practical to do so, but it's theoretically interesting to know that you could. So let me dial that in: what I think is important is that we found a convex formulation that is compact and efficient — we didn't have to raise it to some ridiculous dimensionality that would be impossible to solve. The way we did it in that particular example is: I have my configuration-space obstacle, and I'm going to first decompose the space — we'll talk about how in a bit — into a couple of convex regions. I said the RRT did this implicitly, without any explicit decomposition; I'm going to do the explicit decomposition. It's not the fastest part of the algorithm, but it's approachable. Then, for each region of the configuration space, I'm going to put a small kinematic trajectory optimization problem inside it. Okay, so I'm
actually going to solve lots of different kinematic trajectory optimization problems all at the same time. That sounds bad, but it can be made very efficient — in particular because this is the best kind of kinematic trajectory optimization problem: all of its constraints are convex. Once I've made a convex decomposition of the space, saying that a curve stays inside a convex region is an easy thing to do; staying inside a non-convex region is the hard thing. Then I build a little graph out of these regions. Call this region one, region two, region three, region four — if two regions touch in the configuration space, I draw an edge between them (it's a bidirectional graph in this case): two and three touch, three and four touch, one and four touch. Then I do the same sort of thing as the PRM: I take my start and my goal, add them to the graph, and connect each to whatever regions it lies in. And I solve a graph-search problem — but unlike the way the PRM does graph search, I solve a particular kind of graph search that solves the kinematic trajectory optimization at the same time as it finds the shortest path on the graph. Let's understand it at that level first; I'll say a little about how that works in a second. This is like the sampling-based approaches — I'll show you the direct connections — but it's more explicitly saying: there is a combinatorial problem in motion planning. You have to decide, am I going left or right? And once I write that combinatorial problem down, I should use graph-search tools to accommodate it. What's nice is that — when you think about graph search you probably think of A* and Dijkstra and those kinds of methods, but you can also solve graph search with a linear program, with optimization-based approaches — so there are ways to jointly solve the graph search and the kinematic trajectory optimization at the same time, and make that very efficient. It's not magic; it's explicitly writing the combinatorial problem and the continuous problem down in one formulation, and then doing a lot of work to make that formulation efficient. I won't go into the math of the optimization, but if you want to read more, the basic intuition is that we take the optimization view of the shortest path on a graph: finding a path from start to goal on an ordinary graph can be written down as an optimization problem, in addition to the standard A*-style algorithms. The mathematical framework behind this we call the graph of convex sets — there's PRM, there's RRT, we needed a three-letter acronym, so ours is GCS; that's the brand. The way to think about doing a continuous optimization at the same time as a discrete one is this abstraction: every time you visit a vertex on the graph, you pick one element out of a convex set. Stop thinking about obstacles for a minute — this is an abstract mathematical framework where you do graph search, but you get to pick one element out of a convex set at every vertex you visit, and you're allowed to put edge costs on. The standard shortest-path problem has a cost on each edge; now I'm allowing the edge cost to be a function of the continuous variables, and we can put constraints and other things like that in, too. We've made a lot of progress on having strong
optimization formulations for that abstract problem. And that abstract problem is exactly the thing I drew on the board — we just transcribe it. This is now the motion-planning problem, where the vertices' convex sets come from the blue regions here — we make a graph based on which regions touch — and embedded in each of them is a B-spline kinematic trajectory optimization problem. Remember I mentioned that the B-spline has the convex-hull property; we leverage that. So it's the parameters of the B-spline that form the convex set. The intuition I want to make sure you definitely get is that we can combine this continuous search with the graph search, and when an edge is picked on the shortest path through the graph, it implies that some constraints must hold, which make the curves connect — and you can make them continuous up to arbitrary degree, and scale time similarly, all within the framework. That's an awesome question: what is the convex set? In this picture I have this abstract notion of a set living in some space called big X — what is big X here? It is not the region itself. It is the Cartesian product of that region, taken once per control point: one element of the set is a choice of all of the control points, and they all must live in that region. So it's only a little different from the original picture, but it's an important difference — thank you for asking. Is that clear? Okay. What's cool is that this solves really hard problems with convex optimization. This one is like playing Twister: you have to put a mug on the shelf while reaching under the other IIWA, and that's being solved by jointly solving the convex optimization on the graph and the kinematic trajectory optimization. This runs on the real robots upstairs, and it's actually solving an even richer version of the problem than I've suggested: it is choosing what order to pick up the mugs — the combinatorial decisions of the task are also embedded in this big graph — and it's getting smooth, beautiful motions out. We have a handful of examples we put together for the paper we're writing. We want smooth, beautiful, time-optimal-if-you-want motions coming out of the robot; that's the opposite of the RRT dance. (My decomposition probably wasn't very good on that one.) We choose the number of control points, but we allow the curve to stretch in time — there's a representational-power question about the curves in the set. So the question was: how do we choose how many control points? The more you give it, the more curvy the trajectory can be inside the set, but the more expensive the optimization becomes — it's a trade-off. Let me talk about the gaps. What I would like to say is that if a path exists, we find the optimal path. What we can actually say is: if a path exists, we find the optimal path as parameterized by the Bézier curves in the decomposition. So there are a few gaps, and that is one of them — in theory I would need an infinite number of control points to say that any possible curve could live inside the set. The other interesting part of the problem: I talked all about how convex obstacles in task space turn into non-convex obstacles in configuration space, so how did we get those convex regions? There's an algorithm that we're going to have you explore — it was released moments ago for your pset — which is this approximate convex decomposition algorithm, and
it's related closely to the sampling-based ideas that we used in the PRM and RRT. The idea is that when I pick a point at random in my configuration space, I do a little extra work: I try to find a big convex region around that point that is collision-free. You'll understand by the end of the pset that it does this by an alternation: first finding half-spaces that separate the sample point from the obstacles, and then, once you have the half-space region, computing a maximum inscribed ellipsoid, and then alternating. You'll see the details on the pset — we'll work through half of it; we won't make you do the inscribed ellipsoid, but we'll make you do the other part. It's an efficient optimization: a large-scale quadratic program and a really small convex optimization. When we did this initially a few years ago, Robin Deits was the inventor of this algorithm — IRIS. He was trying to do it for Atlas walking around, and he cared very much about finding big places for Atlas to step, coming from raw perception. He wanted a convex decomposition algorithm that could scale to raw pixels, and he accomplished it — enough to work on raw sensor data. As the robot walked around, it was decomposing the space into regions it could step on or touch. So that was a tool we had built for walking. It turns out, then, that if you have convex obstacles in your configuration space, the picture looks a bit like this. These are the gaps — the gaps that prevent us from saying it's globally optimal — because we're doing an approximate convex decomposition. Think of the roadmap-generation phase of the PRM: we do something like that here, but every time we pick a point, we grow an IRIS region. We end up with an approximate decomposition of the space, but then we get to solve over a continuous set of curves, whereas the PRM was only walking along the discrete edges of its graph and was therefore restricted to the motions of that graph. Here we have enough room for the kinematic trajectory optimization to do its work, and we get beautiful curves. Importantly, when you get to problems with dynamics — which we haven't talked about here; kinematic trajectory optimization typically doesn't include the dynamics of the robot — optimization-based frameworks handle dynamic constraints and other constraints naturally, where sampling-based frameworks can struggle. So we can't quite say everything I'd like to say, but we can say the curves are guaranteed to be collision-free because of the IRIS algorithm: once the curve is inside a region, I can guarantee there is no collision — I don't have to check the line segment at a bunch of samples. And within the convex decomposition, and within the class of curves, we are complete and globally optimal. Back when we were doing this with a big mixed-integer program, this was Robin's version of it: you have your original obstacle environment; the first step is to compute these approximate convex decompositions — this is the IRIS algorithm that you'll implement — and once you have those decompositions, you can plan optimal motions through that environment. It used to run as one of our unit tests on an early version of Drake on the build servers in my office, and there was a bug for a while where it would publish its visualization to the LCM channel across the subnet.
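The half-space-finding step of the alternation has roughly this flavor. Here is a toy sketch — my own simplification, not the pset's actual IRIS implementation — for point obstacles, skipping the inscribed-ellipsoid step and the ellipsoid metric that the real algorithm alternates with: for each obstacle point, place a hyperplane through that point with its normal pointing from the seed toward it, and intersect the resulting half-spaces into a convex region that contains the seed and excludes every obstacle point.

```python
import numpy as np

def separating_halfspaces(seed, obstacle_points):
    """For each obstacle point p, build the hyperplane through p whose normal
    points from the seed toward p. Keeping the seed's side of every hyperplane
    gives a convex polytope {x : A x <= b} that contains the seed and has all
    obstacle points on or outside its boundary. (Real IRIS alternates this
    step with a maximum-volume inscribed ellipsoid and handles polytopic
    obstacles; this is only the one-step, point-obstacle cartoon.)"""
    A, b = [], []
    for p in obstacle_points:
        a = p - seed
        a = a / np.linalg.norm(a)      # unit normal pointing away from the seed
        A.append(a)
        b.append(a @ p)                # hyperplane passes through the obstacle point
    return np.array(A), np.array(b)

def in_region(x, A, b, tol=1e-9):
    return bool(np.all(A @ x <= b + tol))

seed = np.array([0.0, 0.0])
obstacle_points = [np.array(v, float) for v in [(1, 0), (0, 1), (-1, -1), (2, 2)]]
A, b = separating_halfspaces(seed, obstacle_points)
print(A.shape)  # -> (4, 2): one half-space per obstacle point
```

In the real algorithm the hyperplanes are chosen greedily in the metric of the current ellipsoid, redundant half-spaces are pruned, and the region then grows over several alternations; the cartoon above only shows why a set of separating half-spaces yields a collision-free convex region around the seed.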
So everybody on the third floor used to see trees appear randomly and quadrotors flying around, and they'd ask, where is this coming from? For a long time I would just randomly see that pop up on my screen, so this brings back fond memories. Okay — so that was the case where the obstacles are convex, in task space, and that's the case we'll have you do. But there's a more sophisticated version for configuration-space obstacles: the original algorithm assumed convex obstacles, and now there are new extensions for C-space. One of them just uses nonlinear optimization, and it's fast — I'll run it right here. Another gives actual certification using sums-of-squares optimization; that one is guaranteed to be collision-free, but it's slower, so I won't run it here. I was thinking about how to visualize collision-free regions in a way you can actually understand. This is a pretty good visualization: you've got some weird q1, q2 — a weird collision region, which is these two IIWAs smacking into each other, making a very non-convex shape — and as you move around through the C-space you can see the configurations that are not in collision. Here's the version I put up last night — or this morning. I made an IRIS region seeded with a point inside the shelf, and I get a convex polytope in joint space. Then I wondered, how do I make you understand what that region looks like? I was going to plot it in 3D, but it's completely uninterpretable in joint space — my brain does not understand what's happening there. So instead I wrote a little program that visits random boundaries of the joint-space region, and I just plotted that. So basically this robot is just walking around inside one of the IRIS regions, and you can kind of see that it carves out a nice large part of the configuration space and is never in collision. It'd be nice if it walked along a different dimension — I guess that's randomness — oh, there it goes, see, up and down. So it's got this nice joint-space region. You might wonder about a convex decomposition of this; it's a pretty narrow part of the configuration space because of the collision geometry, but it fills out a nice big region. And in practice, in all those motion-planning examples we did, we only needed a handful of regions — a surprisingly small number. It's also working with the original collision geometry of the robot: for the kinematic trajectory optimization earlier, I had to turn the geometry into a sphere to make sure it got out of local minima — none of that here. It's still the simplified geometry, but it's good. [Question about the joint-space regions and the kinematic non-convexity.] That's exactly the right question. The non-convexity is coming from the kinematics, and the way we're doing it here, we build the graph in joint space, not in end-effector space, but we certify that every point in the region is valid in task space — so we are addressing your concern. We're using IRIS to go across that nonlinear boundary of kinematics: the graph of convex sets — the convex optimization — cannot go through the nonlinear transformation, but IRIS can, so we have to pre-compute the nonlinearities away. That's what I tried to do, actually — I even tried to walk along the vertices, but the way I've implemented IRIS today, it kicks out a stupid number of vertices, so the video would have gone on forever. So I thought, you know what, I'm just going to make it go at random
directions instead. Yeah, you caught me — that was the first idea. That's an awesome question. The question was about removing the gripper, but even more relevant, maybe, is picking up an object: you suddenly want the object in your hand not to collide with the sink. I think that does change the configuration space, and we haven't addressed it yet — this is a hot-off-the-press algorithm. For the case of clutter clearing, or the Dexai workflow, or any of the million robots out there moving in a relatively similar environment all day long — the multi-query case, any case where you can afford some pre-computation and then optimize very quickly — we've dialed that in well. The next round, which I'm happy to have you think about, is how to dynamically change those regions, for instance straight from perception; maybe you can crop regions — there are all kinds of clever things to do. You can see the regions in either space, but the seeds have to be in q, in the joint space; we typically seed them by solving an inverse kinematics problem, just because that's easier for the human. [Question about whether hand-seeded regions can cover the space.] So you're worried that if you seed them yourself, the decomposition may not cover the space. I think there are sampling-based approaches that could try to fill out the space and carry a probabilistic-completeness kind of guarantee, but we've found there are a lot of regions you probably don't want to visit anyway, so it's been more useful to pick the seeds and use IK to filter out some of those crazy regions. And yes, you can sample directly in q to grow regions if you need to. That's a great question: do humans do graph-of-convex-sets, or RRT, or PRM? You'll get a different answer from everybody you ask. I would guess that we are not doing this. I would have said that the sample-based things, PRMs and RRTs, would be a very weird thing to imagine a human doing, but I've had conversations with people like Josh Tenenbaum who say the cognitive scientists are actually pretty excited about the RRT view of the world as a cognitive model. It still feels weird to me. We are certainly a parallel-processing machine, and maybe we can do a lot of things like that, but if you ask me — and this is speculation — we're probably not solving the geometric puzzles we're asking our motion planners to solve. I'll make that point at the very end: this is a harder problem than humans are probably solving. When we're asked to thread tight geometric puzzles like these, we do it rarely and not particularly well. I think we are much more approximate, we're not afraid of bumping into things, and much simpler strategies can solve a lot of problems. But apparently some cognitive psychologists like the RRT — it's all good; I would love to know more about that than I do. Okay. So, just a summary, since I advertised it: let me tell you what this can and can't do. Not every kinematic trajectory optimization fits in the framework — we have to restrict ourselves to convex costs and constraints — but it's a pretty large library. Using Bézier polynomials and Bézier splines and all the right tricks and tools, you can minimize time, you can minimize path length, you can minimize some sense of energy, and you can make trajectories smooth up to
arbitrary derivative degree you can avoid collisions by being inside the iris regions of this picture and those are guaranteed for all time there's no sampling based concerns right of clipping a corner or something like that which is a big deal I think lots of people suffer the those clipped Corners in practice you can put velocity constraints I really want the next thing to say acceleration constraints but we don't know how to put acceleration constraints on like that's non-convex okay so far maybe there's a problem we can crack okay but you can put bounds on the time you can give an initial conditions final those kind of things that's that's a little bit bigger than that but that's the main library of kind of costs and constraints that we can put on here but in that regime we're solving you know to Global optimality with mostly with convex optimization I I even that I want to be super clear that we're it's actually a mixed integer program where the convex relaxation is almost always tight so it's um it's not guaranteed to solve the optimality of context optimization but in practice we find it almost always does and you know we've done a lot of work to try to compare it to PRM and this is the shortcutting that billion asked about which is you know take my crazy PRM and try to find the the shortcutting and how does it compare in terms of time and the message this is a unoptimized PRM I think people have more optimized prms out there but um you know in it's a reasonable it's a good PRM you know um just not super GPU enabled and stuff we tend to find better paths that's what you would expect we're using optimization instead of just sampling even than the than the shortcutting area our PRM we're doing better than that and we can often find them in less time than the PRM in particular the one I really like is that there's some things you can do with on the graph there's some observations you can make by just looking at the graph without even thinking about the 
continuous variables that allow you to rule out lots of possibilities very quickly and kind of pre-process yourself into a very simple problem and some problems if there's not big branching can just solve almost instantly okay so we get all these little examples of complicated bimanual I like the bimanual case because most PRM RRT algorithms don't scale well sampling scales surprisingly well like I wouldn't have expected it to work in 10 12 dimensions but when you start getting to you know 14 dimensions or if you put it on a mobile base and you've got 17 dimensions maybe you've got a torso you're up to 20 most people don't you'll see actually most robots if you see a bimanual robot almost always you look for this one arm will be fixed it'll move the other arm and then it'll stop and it'll move this arm right almost always almost always or you know they'll be not moving close to each other right okay so yeah just to wrap that up and say it super clear what I'm advocating here is a change from the PRM where you sample and make the roadmap to every time you make one of those samples you grow a region but to some extent everything you can do with the PRM you could do this way it's just trying to make those samples into big regions so you have room for the continuous optimization to do its work okay you pay a price that's an expensive step at the offline stage although it's even comparable to building a denser roadmap because we can make like you know 10 iris regions in the time you'd make ten thousand samples over here and that's uh you know what people tend to do is make super dense PRMs and we get better motions all right sorry for the advertisement yep so um having said all that uh I actually don't like collision-free motion planning as a problem formulation this is the point right so I want to just make sure that I land at the end that this is kind of a weird problem it's
probably not the problem humans are solving right anybody know this one this is like from my childhood you know it was a little after people played with sticks and stones you know they've read a little bit before iPads right so um a little game called Operation yeah thank you yeah and it's an annoying thing to have in the house right but basically you have to pick out the bones of this poor patient and if your leads touch the side of the cavity you've killed the patient or something it goes bzzz you know and mom and dad go right okay that's a weird game right um and it's probably not how we move through the world right so I think a better formulation and we'll talk you know so as we get past the core material of the class we're going to enter these boutique lectures and I'll ask you guys for feedback and we can choose which ones we're going to cover of the more advanced topics but task and motion planning I've mentioned belief-based planning was a possibility um planning through contact is another possibility as a more advanced topic uh and it's something that I care a lot about but it's I think a richer formulation for these kind of things you shouldn't be afraid of bumping into the world all the time there's a bunch of projects this is when at TRI we're trying to build something kind of like Baymax we call it Punyo which is soft and cuddly in Japanese I guess and uh we have all kinds of prototypes if you were to come over to TRI you'd see all kinds of prototypes of different soft sensors and soft structures that are instrumented that try to make it so the robots don't break when you do rich planning through contact and they're safe to interact with humans good so that is the second half of motion planning which allows us to think about how to address the non-convexity of the motion planning problem PRMs oh yeah go ahead you can solve the um but I feel like if you were to think about humans that
should but maybe it's more an implementation problem you could plan in the workspace first and then once you have the gradient you just use a Jacobian to get back to the configuration space do you feel like that is a viable protocol good so the question is um if the configuration space is so messy and the task space is so much cleaner in general we've got big boxy objects right engineers until AI designs everything you know we've got nice rectilinear structures why not plan in the task space and then map it back to the joint space I think that is viable um the reason it's often preferred to plan in the joint space is because the mapping from joint space to workspace is the good one that's the forward kinematics and going backwards is inverse kinematics but you're right we can do differential inverse kinematics and the like I do think if I think about how a human plans and I have no reason to have a strong feeling about this but it feels very natural to think I plan a fairly simple motion of my end effector and then execute it so I do think that can be very natural so the statement was that maybe the gradients in the task space could be reasonable but then the question is you know what if I have to worry about where my elbow is um while I'm reaching through right so the whole path matters I think if you were to not think about the arm and only plan in end effector space that's kind of some of the problems we had uh you know with my simple example where eventually the elbow would fold in on itself and the like so I think you need to think about both jointly in order to solve the full problem you know um and I think maybe humans don't do that very often I think most of the time we probably we don't
do that but to solve the full problem as specified you would have to solve both of them jointly and you can choose to do it with the forward kinematics uh or you could choose to do it with the inverse kinematics and I think most people choose the forward kinematics but I agree for more approximate methods that maybe don't worry about making some collisions then it might be very natural to live in the task space good okay see you next time
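The question above — plan a simple end-effector motion in task space, then use the Jacobian to map it back to joint space — can be sketched with differential inverse kinematics. This is my own toy illustration on a planar 2-link arm, not code from the course; the link lengths, gains, and time step are made up, and a real implementation (as discussed earlier in the course) would use a pseudo-inverse or a QP to handle singularities and joint limits:

```python
import math

def fk(q, l1=1.0, l2=1.0):
    """Forward kinematics of a planar 2-link arm: joint angles -> end-effector (x, y)."""
    x = l1 * math.cos(q[0]) + l2 * math.cos(q[0] + q[1])
    y = l1 * math.sin(q[0]) + l2 * math.sin(q[0] + q[1])
    return (x, y)

def jacobian(q, l1=1.0, l2=1.0):
    """2x2 Jacobian d(x,y)/dq of the forward kinematics."""
    s1, c1 = math.sin(q[0]), math.cos(q[0])
    s12, c12 = math.sin(q[0] + q[1]), math.cos(q[0] + q[1])
    return [[-l1 * s1 - l2 * s12, -l2 * s12],
            [ l1 * c1 + l2 * c12,  l2 * c12]]

def diff_ik_step(q, v_desired, dt=0.01):
    """Map a desired workspace velocity to joint velocities via J^{-1} and integrate one step."""
    J = jacobian(q)
    det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
    # Solve J qdot = v_desired with the analytic 2x2 inverse; a robust version
    # would use a pseudo-inverse / QP near det = 0 (kinematic singularities).
    qdot = [( J[1][1] * v_desired[0] - J[0][1] * v_desired[1]) / det,
            (-J[1][0] * v_desired[0] + J[0][0] * v_desired[1]) / det]
    return (q[0] + dt * qdot[0], q[1] + dt * qdot[1])

q = (0.3, 1.2)                                    # arbitrary nonsingular configuration
x0 = fk(q)
q_next = diff_ik_step(q, v_desired=(0.5, 0.0))    # command the end effector in +x
x1 = fk(q_next)
print(x1[0] - x0[0])                              # a small positive step in x
```

This is exactly the "plan in task space, map back with the Jacobian" idea from the question — and it also shows the limitation raised in the answer: nothing here reasons about where the elbow is, so constraints on the whole arm still require thinking in the joint space.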
Robotic Manipulation, Fall 2022
Lecture 18: Reinforcement Learning, Part 1
today we're going to start talking about reinforcement learning it's funny how the pendulum swings but I was just saying that last year I felt bad about waiting until lecture 18 to talk about reinforcement learning and this year I'm not sure if anybody cares you know so uh and who knows what next year is going to be right but I think some people are incredibly excited I think it's an incredibly exciting topic let me try to do what I do on good days I think which is to bring back the ideas from the last lecture and kind of fade into the ideas from today so last time we talked about the visual motor policies right for me I think it is now clear that I want policies that I will no longer be happy if my policies don't read from the cameras and make decisions at high rate I think we've seen something good happen there and we should continue to try to do that so um you know the way that looks in our world here is maybe I have the manipulation station here it has a bunch of output ports it has the iiwa state the WSG state but it also has some cameras and then we put that broadly into a visual motor policy here and that's what's going to send ultimately back into my iiwa command input port my WSG command input port all right the idea is we want to build controllers that can go from pixels all the way to the iiwa's positions maybe not torques that'd be crazy um there's reasons why you might want to do torques but in a lot of our cases positions are going to be good and we talked about you know the various ways you could think about what's inside this box including maybe everything we've done in class so far sort of fits inside this box but the huge promise of trying to get these with tools that are more based on machine learning was um you know the huge promise let me just write it as I say it of ML in here is that maybe I don't have to do anything in the middle of that that assumes a state representation for
me that's the biggest deal is um using learned state representations right that I don't know how to write the state of the peanut butter on the toast so with a lot of the classical tools I would have to impose some representation of what's happening in the world in the middle of this and maybe with some of the tools we talked about in the last lecture and in these upcoming lectures I don't want to do that I don't have to do that so you know we could say that like I said everything we've done in class so far uh where I maybe have a state estimator and a motion planner and a low-level controller that could all be inside and I would call that a visual motor policy but I'd only be happy calling it a visual motor policy if it was really making decisions based on that camera at high rate last time we said a simple way to start thinking about what visual motor policies can do was behavior cloning where we really just took in for instance the iiwa state and the cameras you know WSG and the cameras and we put it into a big neural net and we output the iiwa command from that we talked about the sort of characteristic architecture of that as you maybe have an enormous network to deal with the camera input because it's a very complicated input and maybe pre-training on ImageNet makes sense and for all the reasons we might have an architecture that has a big network to deal with the images but then you bring in your iiwa state your WSG state where you have sensors on them directly you shouldn't have to only get that from cameras you know those are easy sensors to add in and then maybe a relatively smaller policy in BC we would um you know collect input and output of this directly from human demonstrations let's see right and so if we really just have a bunch of samples of the data here and the data here then I have a standard supervised learning problem where I just want to find a function or potentially a recurrent network if I want to think
of it as a sequence I want to find a function that predicts the iiwa command given the inputs that I'm seeing okay that was last time and um I think there's a lot of promise in that we're seeing like some of the best demos in the manipulation world today are using that pipeline but we also have you know these are some of the ones from our group but we also know that there are limitations to that so it's a big open question about how far that takes us in the grand scheme of things how much is imitation learning going to be part of the final answer that's maybe one question uh and maybe a slightly different question is do we only need behavior cloning to get to the broad manipulation systems that we are looking for right and maybe if we get cool enough haptic interfaces then uh you know then all this stuff starts to work we don't know okay so that was the backdrop now enter reinforcement learning how does reinforcement learning compare to that right so this was a large requirement right to say you know the burden of these demos of doing a great behavior cloning demo starts with collecting lots of human data collected on the day of your demonstration you know collected with experts right this is a high cost just like in supervised learning there was a high cost initially of having somebody label every pixel in your uh in your data set right the cost of labeling or the cost of demonstrating becomes very high and if we can't generalize broadly beyond our demonstrations then there's limits to how valuable that can be okay so a harder version but a super interesting version is to try to describe what we want out of the system not with specific input output pairs but by saying just what we want the robot to do over the long term right and that's what reinforcement learning is doing in fact that picture up there is the classic picture you always see in RL right you see they don't always call it the plant but
I'm going to keep calling it the plant right maybe it's the world or the environment so my visual motor policy is my policy my manipulation station is my plant I have my observations coming out here I like to call observations y right you could call them o if you're more CS-inclined maybe but these are my observations right over here is u for me which is actions and uh we also have one other thing involved which is a reward r what you'd like r to be is some function let me even say clearly this is the one-step reward function okay just a scalar function of the current observations and the current actions okay so a richer way to describe what we want the robot to do would be to write it as an optimization problem let's say I have some policy I'll call it pi and maybe it's got parameters theta the weights and biases of my neural network for instance then the optimization problem I want to think about is how do I maximize over my policy parameters theta the sum of the long-term rewards given that those rewards were generated by interacting with the plant so this is not maybe the way you'll always see it written in RL but I'm going to write it this way because I think it's consistent so we're going to define y n to be the output function of my dynamical system you know that I've defined carefully and x n plus one is the forward dynamics of that system and I have to somehow specify the initial conditions to start that up right so if I was actually going to implement this in Drake for instance then I have potentially a whole diagram it has a context which has its state described right it has its forward dynamics that the simulator knows how to evolve it has its output functions the diagram has outputs that are pulled from system outputs right that evaluate things like the image or the state of the iiwa right and subject to this dynamical system generating y's and u's that are consistent I want to maximize the long-term
reward I think I did it I was worried that I would get here and I'd write min and I was going to write you know cost over here it's my habit but I didn't change u and y but I did change to r so that's a little compromise okay so this as I've written it here this is not just an RL problem this is an optimal control problem right broadly and um you know we haven't spent the many lectures it takes to sort of build up the total toolbox of optimal control here I can still I think give you some useful insights about RL today but like so the other class I teach Underactuated we spend a lot of time thinking about the optimal control problem more broadly and all the ways to think about um you know using this dynamical system to solve these problems but long before RL existed there was optimal control and I would say reinforcement learning is a subset of optimal control which has a particular characteristic a particular emphasis can I say RL is a subset of optimal control I mean yeah some people might want to write it that way I'm going to stick with that okay um and it puts particular emphasis on a few things in the library of optimal control tools RL methods emphasize first of all black box optimization so unlike most tools in optimal control the RL algorithms I think people know are not going to assume necessarily that we know the function f or the function g or even the function r only that we can get samples from it right so we don't have to know the structure or the governing equations I just need a simulator that will tell me the output or a real robot that'll tell me the output and the other thing that it emphasizes strongly partly because it wants to connect to the real world and the data is uh the stochastic aspects stochasticity so stochastic optimal control would be if I wanted to maximize over theta for instance the expected value of my rewards okay and I think
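The objective just described, reconstructed in symbols (my transcription of the board notation, using the lecture's y, u, x, r, pi-theta convention; a finite horizon N is assumed for concreteness):

```latex
% Deterministic optimal control problem (board reconstruction):
\max_{\theta} \; \sum_{n=0}^{N} r(y_n, u_n)
\quad \text{s.t.} \quad
x_{n+1} = f(x_n, u_n), \qquad
y_n = g(x_n), \qquad
u_n = \pi_\theta(y_n), \qquad
x_0 \text{ given.}

% Stochastic version: randomness enters through the initial condition
% (and possibly through f and g), so we maximize the expected return:
\max_{\theta} \; \mathbb{E}\!\left[\, \sum_{n=0}^{N} r(y_n, u_n) \,\right],
\qquad x_0 \sim p(x_0).
```

The constraints are exactly the simulator: the diagram's forward dynamics f, its output function g, and the policy pi-theta closing the loop from observations back to actions.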
um you know there's lots of ways I could discount this or you know do average rewards and stuff I'm just going to keep it at the high level for now but um and it's interesting to think okay so I haven't written anything that has probabilities yet so how does that enter my equations so my output functions can take random variables and my dynamics functions can take random variables and we've already done this a little bit actually this is also a random variable and I can start my initial conditions in general from some probability distribution okay and any one of these would be enough once I make the initial conditions come out of a random variable then um then suddenly the future state is a random variable and the future observations are a random variable and it makes sense then that the reward is now a random variable and if I want to maximize a scalar function of a random variable I need to take some measure of that random variable and the most common one for lots of important reasons is to take an expected value so this is I think the way to connect to the modeling oh yeah please yes oh no it's it's not it's just uh too many symbols coming out of the typewriter yep thank you yeah so this is I think a very nice match to the modeling uh you know the modeling power we've been talking about in Drake's systems framework for instance and the way we've been writing simulations and the way you would generate this there's one thing that is missing actually there's another way to add randomness besides just the initial conditions and that sort of noise that's a really rich specification even if I were to restrict myself to Gaussian noise coming in once I have a nonlinear f I can do anything they can't hear you on the lecture thank you very much we've stopped using that we just record ourselves okay cool thank you yeah so even if this is just Gaussian random variables since I'm multiplying it by
an arbitrary F I can do really rich things with that but there's one thing that I think is missing in that standard map anybody know the thing I think is missing we'll see it in the code in just a minute but yeah okay there's definitely a reward that's in that's in here oh okay I you might say the thing that's hap that I'm missing constraints um but you can make reward up you know in in reinforcement learning or any of the unconstrained optimization you would take the constraints you might have in a normal optimization and you would shove them up into the objective with a as a penalty function so maybe that's yeah but I would tuck that inside here but as rich as I let R be as rich as I let G be f and this probability distribution there's still something that this this model of the world doesn't quite capture yeah unknown model I mean depending on how rich your unknown model is that could fit this is a pretty rich specification right the policy can be stochastic that's true I didn't actually even write the PO I should have written the policy I missed my U of n is a policy of Y of n and this could be um that was an Omission not a philosophical Miss you're totally right you called me on it and this could be stochastic or deterministic you're totally right but that one I just forgot to write there's a more fundamental thing that this doesn't capture okay it's discrete but I would say if I make my discrete to continuous math you know I could make discrete steps small enough that it can capture most that's not the fundamental one I'm missing people do it in RL all the time but they we didn't do it enough I think in classical control which a particular type of randomization I think has happened in RL that didn't happen in optimal control if I made like a random number of objects up here in my world then the whole even the definition of X is different the number of State variables is different and so I don't quite know how to write that here there's like a distribution 
over the size of x and that kind of breaks my otherwise beautiful state space view of the world you can still accommodate that in the tools we'll build today but um you know this is like almost perfect but there is something that it's missing which is that I really could have in random draws of my simulator a completely different state you know a completely different number of objects in the world that's the only thing I find unsatisfying now about writing it that way I think apart from like you know clever network parameterizations and stuff like this I do want y to be roughly fixed I think it's x that gets you know bigger or smaller if y is a camera right if y is my camera input and my robot data then that doesn't change even if I have different numbers of objects in the scene but x would be a different size so the generating equations yes okay so maybe you say that if I was thinking of this as like a general data structure then everything's fine yeah if I'm thinking about this as a vector space and a difference equation on a vector space then it's impoverished but if this is like a dictionary in Python then I'm good again and maybe that's a good way to think about it is I need to go from like my comfortable vector space world to like a dictionary in Python world and that kind of you know messes with me but it's an important change okay so let's just you know take a second to appreciate that this is harder than BC right so behavior cloning is here I collected directly input output data but this is a harder problem than BC for a subtle reason because BC can do sequence learning you know if I'm going to train an LSTM policy for my cloning then that's actually got some temporal components to it too but RL is harder than BC you know BC is supervised learning right even possibly sequence learning but BC
doesn't have to think at all by having the data before and after my policy curated for me it doesn't have to think at all about all the stuff that happens inside my plant it just completely avoids the plant and a lot of messy stuff can happen in the plant right inside our manipulation station we have the physics engine we have the geometry engine we have the contact we have the renderer you know that's a lot of messy stuff that our algorithm is going to have to go through now and BC just skips all that the classic way that people would talk about RL being harder than supervised learning is the delayed reward problem that you have long-term consequences of your actions the sequence learning has some of that um so I de-emphasize that here but the fact is my reward my instantaneous reward at time 23 depends on all of the actions I've taken from time 0 to 23 whereas in behavior cloning you know certainly in the simple versions of behavior cloning a feed-forward network it just depends on the instantaneous input okay but this is such a powerful general framework this optimal control framework the RL framework um that people are you know and it's had some incredible success so AlphaGo for instance you know there's just so many StarCraft you name it there's like incredible successes of RL and we've seen some in manipulation too right so the one that everybody talks about was the initial one by OpenAI but now it's been recreated in a lot less time and it's super compelling to say the recipe to make an advanced robot manipulation system is you just make the simulator and people have simulators you just write the cost function and then you do deep RL right and people say things like ah it's so nice we don't have to do control anymore control is hard and I just don't have to do that anymore right uh but it's not completely all done you
know and like I said the pendulum swings around and even though RL is incredibly powerful I would say this year people are a little less excited I'm sure next year they're going to be super excited again right but there are shifting winds and people are um you know there's various levels of excitement in RL I actually did my thesis in RL I don't know if you guys know that I had a little walking robot I couldn't afford a full piece of rubber you know so I had to cut out I had to use another piece of rubber one day and I never had a new one so I had like very humble beginnings I guess this was a CD rack um that was holding up my ramp right um and then so that was a mechanical toy that would walk down a ramp and then my thesis dating myself now was a robot that learned how to walk starting from something that was already close to walking by falling down a ramp yeah and I had actually I have a secret the only reason I put this in here is because Boyan's about to throw away my treadmill that's been sitting in the lab corner and I was like I just want to show it one last time you know for nostalgic reasons no no it's good it's good this is the thing like so if you're doing an RL thesis right so for me I was like writing my thesis here right and there was a treadmill here and all day long you just kind of hear this you know it's like oh my God and every once in a while you hear a crash and then you go put it back on the treadmill shoved up against the wall right but uh that was my beginnings I guess and I um you know I don't use RL a lot to make robots move today but that's not because I don't think it's powerful there's a pretty subtle reason and maybe we can come back to it later but um I just don't enjoy the state of it right now in that way like I don't want to tune cost functions and stuff I'll say that more
carefully as we go we are doing RL theory because I think there's some beautiful things that have happened in RL that need to be understood more rigorously so the group is actually doing a lot of RL but just maybe when we want the robot to do something that's not the tool I turn to today yeah yeah okay good so I think people have the impression like the OpenAI one was massive amounts of compute it simulated you know millions of years of finger twiddling in order to do that and people have gotten that down Nvidia has a version now that's doing a very similar task in dramatically less at least wall time um I think a lot of compute still uh optimal control isn't immune to that it depends which versions of optimal control so linear optimal control is immune to that nonlinear optimal control there's many different approaches which I think RL is one of them and um they enjoy different levels of generality like how many different problems you can embrace with that and how strong is the algorithm right so RL is all the way on the side of you can optimize any plant and any cost function but it might take a long time and if you say I'll carve out a smaller set of problems that I want to focus on then you can write narrower more efficient algorithms and the big money question is where is manipulation in that space is manipulation so complicated that you have to treat it with the you know anything-goes algorithm or is there enough structure in the manipulation problem that more targeted algorithms should be used okay so my plan for today is to talk basically um you know we don't have all of the optimal control background but I can still talk about um some RL examples and uh including the sort of software you know how do you write these things and then I want to talk a little bit about RL from the optimization
perspective because we've been going back to optimization um you know using a variety of optimization tools and I think we can connect to that and I've got you know as much as we have time to say about that um let me start by thinking a little bit about uh the software right so how many people know OpenAI Gym it's a great thing right so um everybody agreed that there's a relatively simple requirement to define an RL problem you just have to you know define your observations define your actions define your reward and um the community you know finally got behind a particular interface someone agreed that these are the function names in Python that we should all write our code behind that way anybody who's writing simulators or problem instances on one side can present this interface and everybody writing RL algorithms on the other side can act on that interface and Gym the OpenAI Gym is the interface that won it's maybe transitioning to Gymnasium we'll see OpenAI kind of um you know did a mic drop on it uh said we're not going to do that anymore and someone else said that's still important so we'll see if the world moves to Gymnasium but I think it's got so much momentum that we're not going to lose it so the uh you know the software interface to this optimal control problem is just you know you have an initialization of your environment your gym environment you step that's my uh you know f of x you can reset if you need to you can render the step actually takes the action and it outputs both the observations and the um reward maybe a close all of those have a direct mapping to what we've been doing all term right so you know your init is just building your simulator the step is just an AdvanceTo in your simulator and then evaluate your observation evaluate your reward for instance and uh you know reset might just be resetting the context in our setting
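As a rough sketch of that interface — my own toy example, not the course's Drake gym wrapper — here is a minimal Gym-style environment with `reset`/`step`, plus a rollout that sums the rewards, which is exactly the quantity the RL objective is maximizing. The dynamics, reward, and PD policy are made up for illustration:

```python
import random

class ToyPointMassEnv:
    """Minimal Gym-style environment: a 1-D point mass that should reach x = 0.
    Observation: (position, velocity); action: force; reward: -(x^2 + 0.1 u^2)."""

    def __init__(self, seed=0):
        self.rng = random.Random(seed)
        self.dt = 0.1

    def reset(self):
        # Random initial condition -- this is what makes the return a random variable.
        self.x = self.rng.uniform(-1.0, 1.0)
        self.v = 0.0
        return (self.x, self.v)

    def step(self, u):
        # x_{n+1} = f(x_n, u_n): simple double-integrator dynamics.
        self.v += self.dt * u
        self.x += self.dt * self.v
        reward = -(self.x ** 2 + 0.1 * u ** 2)   # one-step reward r(y_n, u_n)
        return (self.x, self.v), reward

def rollout(env, policy, horizon=50):
    """Run the policy from a fresh reset and return the total (undiscounted) reward."""
    y = env.reset()
    total = 0.0
    for _ in range(horizon):
        y, r = env.step(policy(y))
        total += r
    return total

# A hand-written PD policy; an RL algorithm would instead search over
# policy parameters theta to maximize the expected return below.
pd_policy = lambda y: -2.0 * y[0] - 1.0 * y[1]
env = ToyPointMassEnv(seed=0)
# Monte-Carlo estimate of the expected return over random initial conditions:
returns = [rollout(env, pd_policy) for _ in range(100)]
print(sum(returns) / len(returns))
```

Any simulator can present this `reset`/`step` shape, which is the point made above — an Atari game, Go, or a manipulation station all look the same to the algorithm on the other side of the interface.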
right so everything this is just you know anything any simulator can easily present this interface that's the point of it it could be an Atari game it could be Go it could be a manipulation station render you know might be like the meshcat publish kind of idea uh to make it easier to go from that diagram and get it all correct the actions and observations and rewards we have a thing currently in the manipulation repo it'll slowly move to the main Drake which is just the Drake gym env where you can hand it a simulator you can tell it which ports are the action ports you care about which ones are the observation ports which one's the reward port or you can write a function for the reward if you don't want to use ports and it'll wire that up and make that for you okay so it just makes it easy to make that interface um and then so that's the OpenAI Gym and a lot of people knew that but uh maybe I could have written it more carefully so this is my step reset whatever and then on one side of this are the simulators for instance or a real robot if you so choose and on the other side of this are the algorithms only one simulator will be discussed today fortunately but there are other choices that are perfectly good on the algorithm side similarly I'm just going to pick one because I don't want to talk about all of them these days we mostly use Stable Baselines 3 which is a nice implementation from DLR right I think and that has a lot of the kind of like OMPL had a whole list of motion planning algorithms Stable Baselines has a nice list of reinforcement learning algorithms and this one is written in PyTorch a surprising number of libraries never made the leap to PyTorch and it's a perfectly good implementation it's the one we use it's interesting that um the algorithm that most people use that we'll talk about the most today is PPO and maybe I would say SAC right so let's distinguish so um in terms of
In terms of the algorithms in manipulation, there tends to be a breakdown: if you're using simulators, then PPO is often the go-to (SAC maybe), but people with real robots tend to do Q-learning kinds of things. I think the real distinction is that with a simulator you worry less about data starvation; you just run your simulator fast enough, you don't curate your data carefully, and you do online RL. With real robots, data is precious and you don't want to run a hundred million trials, so you do a lot more offline RL, which is more compatible with Q-learning approaches: you store every piece of data and reuse it to update your policies even after it's no longer the same policy that's running. I'll get into that distinction a little later. But PPO seems to be a fairly dominant entry now. PPO has become the default; it's what backed a lot of the reinforcement learning at OpenAI, and a lot of people just use it. In fact, it's not just PPO: there's a particular commit in a particular repository with a particular set of parameters for PPO, and you should only use that one. People have done studies where they took PPO, ran huge parameter sweeps, and got all kinds of different answers; then they ran git bisect on the original PPO repository and found that this is the commit everybody should use, and another paper found roughly the same thing. So the world just tries to implement exactly the optimization parameters from that one commit. Don't mess with it, or you're going to spend a lot more time optimizing. In a weird way that's actually nice, because then there's just one version of PPO you should care about: if I want to swap in PPO, as long as I use the parameters from that one commit I should be good, with the same
performance as other people. Stable Baselines uses that commit, for instance. OK, so let me do a little example; I think it helps to put this stuff in context. Remember the box flip-up? I thought that was such a compelling example for force control before. Let me remind you of our existing example. This is the old notebook where I run the stiffness-control version, the one that was scripted. We had a virtual finger, we programmed the remote center of compliance, if you will, and got this beautifully simple high-level control that just says: move toward the wall, start lifting up, the center of compliance goes here, the box flips up and pulls itself back down. That was a really nice way to flip up a box, at least if your finger was a point. OK, so I did that again in RL this morning. I thought, I'll code the exact same environment, put it in my Drake gym, and run PPO on it, and I did. So let's see what happens. I'll restart so I can use the same MeshCat instance, and I'll use the pre-trained model. It doesn't take long to train this one, a few minutes, but I'll use the pre-trained one, mostly so I don't run out of battery. OK: it flips up the box very well, and then it smashes its head against the side for the rest of time. It gets random resets; it flips up the box, and then it smashes its head against the wall. I got to the point where I thought, I could tune that away, but it kind of makes a point. It's pretty good, but it's annoying too, right? I mean, I didn't tell it to smash. Well, I'll show you the cost function. OK, that keeps going. Yeah, I recorded it just in case. All right, so let me just step
through what that looked like, and you can ask as many questions as you like. What was the network? Actually, maybe I should have started with the action space. I wanted it to have the advantage of stiffness control; I didn't want to deprive it of that. If you remember, when your finger is a point, the difference between stiffness control and inverse dynamics is non-existent; it's just a factor of one over mass. So I basically made the inverse-dynamics desired position the action space. That's the virtual finger from the stiffness controller. The observations were the direct state output; I'm not even doing visuomotor policies yet, I'm just giving it access to the true state and asking it to learn from that. And then I just used Stable Baselines' default MLP policy to go directly from state to actions. We should expect that to be enough, because the optimal control for this problem should be just a function of state. I didn't have to use a recurrent network here; a feed-forward multi-layer perceptron policy is enough. If you look at the default MLP policy (actually, all the examples use the default policy, which is kind of a weird thing in RL for control): there's basically this one, and maybe one with 256 units in the middle, and almost everyone uses these same network parameters. People really don't change them very much. My instinct would be to start mucking with the size of the network a lot, but people really don't. So the default network architecture from Stable Baselines was just that: the policy network takes the observations, which in this case were the state, maps them into 64 hidden units, then 64 hidden units, then to the output, the action.
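To give a sense of how small that default architecture is, here is the same shape written out in plain Python: observations through two 64-unit tanh layers to a linear action output. This is a toy reconstruction with random, untrained weights; the real MlpPolicy is built in PyTorch, and the observation and action sizes below are made up.

```python
import math
import random

random.seed(0)

def make_layer(n_in, n_out):
    # Random weights stand in for trained parameters.
    W = [[random.gauss(0.0, 0.1) for _ in range(n_in)] for _ in range(n_out)]
    b = [0.0] * n_out
    return W, b

def forward(x, W, b, act=math.tanh):
    return [act(sum(wij * xj for wij, xj in zip(row, x)) + bi)
            for row, bi in zip(W, b)]

n_obs, n_act = 4, 2              # hypothetical state / action dimensions
W1, b1 = make_layer(n_obs, 64)   # observations -> 64 hidden units
W2, b2 = make_layer(64, 64)      # 64 -> 64 hidden units
W3, b3 = make_layer(64, n_act)   # 64 -> action

def policy(obs):
    h = forward(obs, W1, b1)
    h = forward(h, W2, b2)
    return forward(h, W3, b3, act=lambda z: z)   # linear output layer

action = policy([0.1, -0.2, 0.0, 0.3])
```

The value network in the actor-critic pair has essentially the same shape, just with a single scalar output.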
The value function was roughly the same, basically the same size. PPO, as I'll explain as I get into it a little bit, is actually an actor-critic algorithm, so it has both a policy parameterization and a value-function parameterization. And I told you how you set this up in the Drake gym env: I just have my plant (it's a little smaller than I would have liked here, sorry) and an inverse dynamics controller. I made a super small function for my reward and piped it to the reward output; I took my state and piped it to the observations output; and the actions went almost directly to the inverse dynamics controller. My inverse dynamics controller wanted a desired state, and I only wanted to give it a desired position, so those little boxes just insert zero for the desired velocity. But it really is just: make your little diagram, pick the input you want as your actions, pick the output you want as your observations, and pick an output (or a function) to be the reward. The cost function is kind of fun and interesting. I really didn't do any cost-function tuning; well, there is one thing, which is the last line, but I'm allergic to cost-function tuning. I've done it before and I don't want everyone to do it again. It's basically the box angle. I wrap it around, because sometimes it played tiddlywinks with the box and would do multiple spins and land upright, and I'm OK with that; I'll take it as a success. So the box angle is modded into pi. Most of the cost just says I want my angle from vertical to be small. You can see I think about the world in terms of cost, not reward, and I put a minus sign on it when I hand it to my RL algorithms; that's just how I roll. So I penalized the angle from vertical, I penalized the box velocity, I penalized the effort, which is the difference between the virtual finger and the actual
finger I penalize the finger velocity because I don't want it to be going like that but it still did and then the 10 is the one cute the one thing I was like I didn't type it in the first time and I I had to put that in and you see you can see the comment why so If I Only Had negative rewards only and the Drake gym end will terminate if the simulation crashes right it'll terminate early if every time step can at best be a negative reward then it wanted to crash the simulator it was learning to break multi-body plant okay so I had to add 10 so that every every time step was a positive and then it would be rewarded for running a long simulation and not rewarded for crashing my simulator right how do you crash the simulator you like cause the ridiculous penetrations with you know at high velocities and stuff like that and it was learning to do that that's cool but it's annoying so uh anyways but that was the one thing I was like okay I have to go back in and add a 10. everything else was like I don't know why I picked two and point one but it wasn't tuned that was just me thinking that was two-time 20 times as important as that yeah the effort in the in the stiffness control I chose effort to be the distance between the desired finger and the actual finger so that's like the stretch of the spring yeah uh I just looked at the rollouts and saw what the the largest cost I could get with those other things you know my my one step costs tended to between tended to be between zero and negative five or five or something like that I just added something that was safely bigger than the smallest reward I had gotten just to sort of shift that up to to be in the positive space like the if you you still need to change that point for this too it doesn't environment like corporate so the the comment there just to say back for people watching remotely like is that stable bass lines certainly recommends strongly that you normalize your action space you normalize your observation uh 
Maybe it's less about the observations; it also recommends you normalize your reward, and it does some amount of this automatically depending on whether you turn it on or off. So the question was: does it even matter if I had picked a hundred instead? And one interpretation of what was just mentioned: although we are making the reward positive, this "positive" is using zero as a baseline, so normalization wouldn't fix it wanting to crash the simulator. Right. Good. Any other questions about this? I definitely could have made that better; it was a teaching moment, so I thought I'd leave it. It's kind of real, too: you will see robots smacking their heads against the wall and things like that until you tell them not to. And I want to distinguish between two different types of that kind of behavior. Some of it is power, in the sense that RL can find very strange solutions, and some of it is bad behavior by the optimizer, I would say. So when it comes to tuning: if I wanted to make that better, the basic framework is probably OK. I could make the twos and the point-ones different, dial that in, and get a behavior that was less like that; if I'd really penalized action, maybe it wouldn't do it. But let me distinguish between two types of tuning. (I won't write the word "distinguish". I was only going to write "reward tuning", but I wrote "cost", and I'll just go with it.) The first one is whether you fully specified the task, and I'll contrast that with reward shaping; these are a slippery slope. So let me say what I mean. It's actually hard to write a cost function whose optimal solution has the behavior you really want. I gave an example of this before, when we were talking about loading the dishwasher: I didn't tell
the robot not to throw dishes across the room, and if it could throw the dishes into the dishwasher, it probably should; that would optimize the objective. I would say that's my fault: I didn't specify an optimization function whose optimal value had the solution I want. And it's not because I'm a bad person; it's because it's really hard to write good optimization landscapes. In general, I think a lot of the behaviors of humans doing manipulation come from a lot of common-sense understanding about the world. Dishes shouldn't be smashed into the ground; you don't have to be told that. Encoding all of these (Leslie Kaelbling likes to call them background utility functions) into a robot is very, very hard, and I think it's really interesting and hard to think about how to give the programmer the vocabulary to specify such rich semantics. I'm all for that; I like it a lot and think it's an interesting problem. Natural language can help with it, and large language models, foundation models, these are things that are going to help us make progress on it. The second thing that happens is just helping the optimizer: the optimizer got stuck in a local minimum, and if I write the cost function a little differently, maybe it will do better and get down to the right solution. This one I choose to do less of; I'd rather work on better optimizers instead. That's just my preference, so I have less excitement about this kind. The first one I think is fundamental and good. So, for instance: if I were to start tuning my box-flip example, which one do you think is more the problem there? Right, it's probably both. Yeah, so I didn't actually say "don't smash
yourself into the side of the bin repetitively". That's my bad; I could have said "don't smash yourself, my bin is important" or something. But I also did tell it, all other things equal, use less energy, use less effort, and keep your finger velocity small, and it still chose to go like this. I'm pretty sure there's a solution that would have flipped the box up just as well without doing that. So in that case, I think the optimizer did a good-enough job to get the job done, but it hasn't found the optimal solution. And just so you see the world through my eyes a little more, let me take the opposite extreme of that. When we talked about graph-of-convex-sets motion planning, the cost functions there were: there's a start, there's a goal, I want the shortest path subject to velocity limits and acceleration limits, and the optimizer solves to global optimality or it says there's no solution. It's hard; there's only a limited class of problems we know how to write with a strong solution like that. But the joy of working with that system is more, for me, than when you have to do this kind of tuning. Yes? That's a great question. I suspect it would continue to get better; the question is how long. You're right, I think I could probably just let it go. I did run it fairly long once, and it didn't change a lot in that particular example, but I do agree. Fundamental to RL is a little bit of exploration, and as long as it's still exploring, it could eventually hop itself out and keep getting better. Great question. To some extent, what we're doing is lifting up the programmer. I can make a robot do anything using GCC or Clang, right? I just have
to write C++ code, and if I write good C++ code I can make the robot do anything. And that's true of RL too: if I can write a good enough cost function, I can make the robot do anything. But with C++, the compiler is deterministic, it's fast, and the error messages are clear. I think RL will get there, but right now RL is such that you try your cost function, and then you wait a while, and it's more frustrating to work with. That's just an intermediate state, and actually, for those of you working on RL, I think you should work on that, because better things will happen as the field continues to mature. As for the fact that it's solving problems nobody could program: well, that's hard to say, but people have not written C++ programs that do some of the amazing things RL is doing, so it has lifted our abilities. I would put kinematic trajectory optimization somewhere in the middle between those two. You're going to have to do some amount of tuning with kinematic trajectory optimization: a lot of times you can do it with an initial guess, or by adding a constraint to say "oh, I didn't want you to do this crazy thing you chose to do, put a constraint on there", and you can shape it that way. I similarly find that very annoying, and I wish we could just have stronger optimizers. So there's a whole spectrum, I think, of these algorithms. Quick stretch; I wrote to myself, don't forget to tell people to stretch. OK, so let's talk a little more about the optimization view of the PPO class of algorithms, or maybe something even simpler than PPO. We've talked a bunch about optimization; we've talked about nonlinear optimization and some convex optimization. In general, let me
say I've got some parameters, call them theta, and some complicated landscape f(theta), and I want to find the minimum. I'm going to stick with minimization because I'll screw it up all day long if I don't. So I'm trying to find the minimum of some non-convex objective function, and in today's discussion this will be unconstrained optimization; we're not going to add additional constraints. If we did have additional constraints, we would push them up into the cost function with a penalty method, like an augmented Lagrangian or something like that. We've talked about various ways you can optimize this: maybe you find an initial guess and start moving downhill based on the gradient. But gradient descent as we've talked about it so far is off limits; it's not allowed today, because it required us to evaluate the gradient of the function, and that means I needed to know something about f. That's not allowed in the black-box view of the world: if I'm going to play Atari or Go, or let's say StarCraft, I don't have gradients coming out of that game engine. We also talked about sequential quadratic programming; that's what SNOPT is doing when we solve IK problems or even kinematic trajectory optimization problems, and it similarly uses partial f partial theta. You would think, since it's doing quadratic programming, that it would also use the second derivative, but it actually makes an approximation of the second derivative and only asks for the first, sometimes to its detriment. But it also requires the gradient. So the question is: how do you do optimization if you're not allowed to get the gradient? Black-box optimization in general has a bunch of interesting solutions to that, and RL has particular versions of
black-box optimization that are very well suited to the RL domain. So what do I mean by black box? It means that if I give it a theta, I'm allowed to evaluate f and get the value f(theta), but I don't get to know anything more about f. I don't know if there are sines and cosines in there; I don't know anything. As far as I'm concerned, theta goes in, f(theta) comes out, I can't see behind the curtain, and I have to write an algorithm around that. You would contrast that with what people call white-box optimization, or glass-box (maybe that makes more sense), where you can see everything. The reason I bring this up: I think it's obvious that you can make Drake look like an OpenAI Gym, but it makes me a little sad to do it, because the whole point of Drake is to look inside f. Drake is a differentiable simulator; it can give you partial f partial theta even if you run a whole simulation. It's not just the dynamics of MultibodyPlant; you can take gradients through Simulator.AdvanceTo. You can put a gradient in and get a gradient out. All of those things are available, and we're going to throw them away in what we do today. And it turns out I haven't yet been able to do much better with the gradients; maybe I'll have to get to that next time. So that's the game: black-box optimization. How do you do it? It sounds like, how could I do anything without a gradient? But then if I tell you how people do it, it's "of course". So let me give you the simplest black-box optimization. I'll take theta at time zero; this is my optimization step, so every time I run a simulation or something, I'll increment my theta estimate by one. Here's a simple algorithm: I'll evaluate f of theta-i plus some random noise w. And then maybe, for good measure, I'll
evaluate f of theta-i as well. It turns out you don't have to, but I will. And if the random vector I chose was better, I'll keep it, and if it was worse, I'll keep what I had. (It turns out there are even funnier things you can do.) Not a huge brainstorm of an algorithm: I've got an initial guess, I'm somewhere on the landscape, I evaluate over here; if that was better, I keep it and move there; if it was worse, I stick with this one. More generally, if I were to write theta-i-plus-one (let me write it with the discrete logic, though we can do it more directly in a second): if f(theta-i + w), my experiment (I'm doing minimization, because that's how I roll), is less than f(theta-i), if I got better, then why don't I update this to theta-i plus w. Except, for reasons we'll understand when we get to the stochastic case, we tend to put in a learning rate here, some small positive number, so I don't go all the way to the new guess. Why is that? If I were to go all the way: well, it looked better on this one particular simulation, but then someone puts a different mug in the scene, and maybe that was a little too aggressive. So if I did one experiment and it looked better, I'm going to take that as a suggestion to move in that direction, but I'm not going full tilt on that one observation. And there's a stochastic interpretation of what happens with this version of the algorithm which plays out beautifully. And then otherwise I'll go in the opposite direction, but again with a learning rate. (That symbol is supposed to be an eta; it's looking a little more curly than I meant.) And there are simple versions of this where, instead of writing a plus or minus here, I just multiply by the difference of the two evaluations. I can get rid of my if/else and write this as
a single line by putting that difference in as a factor, and it basically sets the sign, and actually the amplitude, of my update. But the basic idea is: take a random guess; if it looked better, go that way. That's actually a pretty powerful algorithm, and PPO is doing a very smart version of it, but it's not that different from this. So what would you expect this to do compared to a gradient-descent-type algorithm? Gradient descent would take a fairly direct path, and in this case it would get stuck in the local minimum here. This one takes a more meandering, slower path to the minimum; it takes more trials. It could also get stuck here, but once we add the randomness it can potentially jump out, so it becomes a stochastic gradient descent instead of a deterministic gradient descent. That's what Tom was asking: if I ran it longer, it might have hopped out of here and found a better solution, given enough time. So I think that is a pretty decent first-order model of what you should think the RL algorithms are doing. They're doing much more behind the scenes, but as a very first glimpse of how this compares with the model-based methods, which take exact gradients: this is taking random gradients. And you're going to look at a version of this called REINFORCE in some detail on the problem set (some of you have already started); REINFORCE is sort of the predecessor to PPO. So the picture you should have in your head is that you're still doing a gradient-descent-based algorithm; you're just taking a random walk downhill that is biased in the direction of the true gradient. And that's something that can actually be made precise: for most of these updates, you can say that the expected value of the update is in the direction of the true gradient.
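The whole update rule above fits in a few lines. This sketch uses the if/else form: step a learning-rate-scaled amount toward the perturbation if it helped, away from it if it didn't. The quadratic test function, the learning rate, and the noise scale are all made up for illustration; the optimizer itself only ever sees function values, never gradients.

```python
import random

random.seed(1)

def f(theta):
    # A black box to the optimizer: values in, values out, no gradients.
    return (theta - 3.0) ** 2

theta = 0.0
eta = 0.5     # learning rate: don't go full tilt on a single sample
sigma = 0.3   # scale of the random perturbation w

for _ in range(2000):
    w = random.gauss(0.0, sigma)
    if f(theta + w) < f(theta):
        theta += eta * w      # the sample looked better: move toward it
    else:
        theta -= eta * w      # it looked worse: move the other way

# theta should now have random-walked close to the minimum at 3.0.
```

The single-line variant replaces the if/else with an update proportional to f(theta) - f(theta + w), which sets both the sign and the amplitude of the step.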
The variance you'd like to be small, but the variance tends to be big, and we do a lot of work in RL to try to make the variance of the update smaller. OK, so that's kind of the black-box equivalent of gradient descent, but you could also think of black-box equivalents of SQP: you could take multiple samples, as CMA-ES does (covariance matrix adaptation evolution strategy). Imagine taking lots of random points on your landscape, fitting a quadratic approximation to them, and taking a second-order update. These things can be made to work. The picture is clean if the function is deterministic; it's more beautiful, but a little more complicated, when you get noisy evaluations of your function. Someone could put a different number of objects in front of the robot every time; each run I get a slightly different answer, and I'm trying to optimize the expected reward even though the background is the same. Some of these algorithms enjoy more robustness to that. For those of you that do a lot of supervised learning, think about how your SGD behaves coming out of a mini-batch: some of the optimizers are a little better at handling mini-batch noise (Adam seems to be really good at it), and these are the same kinds of things. We get comparable algorithms here, some of which enjoy more robustness to it. It really is very similar to mini-batch: if every update I do ten rollouts, I get ten slightly different evaluations from a potentially infinite number of evaluations, which is very much like a mini-batch kind of update. OK. So I said before that we're leaning into this in the group in various forms: thinking about what RL is doing well and what it's not doing well.
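The claim that the expected update points along the true (negative) gradient is easy to check numerically on a toy function: average the one-sample update direction w * (f(theta) - f(theta + w)) over many random perturbations, and for small sigma it comes out close to -f'(theta) * sigma^2. The function and all the scales below are illustrative.

```python
import random

random.seed(0)

def f(theta):
    return (theta - 3.0) ** 2     # true derivative: 2 * (theta - 3)

theta, sigma, n = 1.0, 0.1, 20000

# One-sample update direction: the perturbation w, weighted by how
# much it improved the objective.
avg = sum(w * (f(theta) - f(theta + w))
          for w in (random.gauss(0.0, sigma) for _ in range(n))) / n

# Dividing out sigma^2 recovers the negative true derivative, here
# -f'(1.0) = 4: the update is biased downhill, with no gradient taken.
estimate = avg / sigma ** 2
```

The per-sample scatter around that mean is exactly the variance that baselines and advantage estimates in RL are built to shrink.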
And how can I combine it with some of the model-based control that we do? Maybe I'll spend a few minutes at the end telling you one of our recent research stories about that, because I think it fits this picture fairly well. So let's just think: is this a good idea? I told you the whole point of Drake was to make it so you can look inside f if you want to. Is there any reason why you shouldn't take gradients if you have them? It's a super interesting question, because here's the mystery: RL started showing robots doing things with their hands that we hadn't seen from people who were using gradients. But from that picture, you'd say gradients should only be better; if I have a gradient, why wouldn't I use it? Well, there are a couple of reasons why it gets subtle. The first one is easy to see: if I'm doing visuomotor policies, it's hard to take gradients through a renderer. If I go from the equations of motion of my plant, and my output function renders a camera into pixels with a game-engine-style renderer, and then a neural network goes back to my actions: the neural network is super differentiable, that's great, it's built for that, but the renderer is not. There's been a lot of interesting work on differentiable renderers, but renderers aren't naturally differentiable. You could have an object whose boundary is such that pixel (14, 28) hits the object and pixel (13, 28) misses it. True renderers actually do a little bit of smoothing around the pixels and so on, but if you give me the gradient of how pixel (14, 28) changes with respect to my parameters, that doesn't do as much as you'd want. So differentiable renderers have been great but not the panacea. Volumetric rendering like NeRF seems to be a much better way to talk about that, and that is then paying
off. But if you just wanted to take gradients, if I just had the ability to take gradients through Drake's renderer and back, it's not clear that that would solve my problems. And if you look at the OpenAI example, they didn't even have a renderer in there; they were just doing state feedback, and they still did something that surprised those of us who have been working on control through contact for a long time. So what is up with that? We spent some time thinking about it; actually, Terry and Max and Kaiqing spent some time thinking about it, and I can tell you the story in about five minutes. There's a great thirty-minute version, but I'll tell you the five-minute version. Take the OpenAI example: if I give myself state observations, then the major complexity that remains is actually the contact mechanics, and somehow they seem to have done an optimization that was beyond what we and other people had been doing through the contact. So was anything special happening there? The story that has been emerging is this: we knew for a long time that contact dynamics leads to discontinuous landscapes (I'll give the short version of that here), and it seems that the random sampling, the way people are doing this messy gradient descent, is actually smoothing out some of the discontinuities that come from contact, and that it was a good idea and we should have been doing it the whole time. So actually taking random samples could be better than taking analytical gradients. It's more subtle than that, but still. We've talked about some of the discontinuities of contact: they don't happen everywhere, but they do happen, for instance, if a small change in my initial conditions causes me to hit a different face of the environment. And that leads to simulations where, if I change the initial
conditions and then look at the final conditions, I can get a very different final condition; I end up on one side or the other, and I can get a very different reward depending on which side. So it's not the smooth picture; it's that picture with immediate jumps, and that can happen if I have contact. You also have these weird artifacts from simulators. Remember I talked about how you can be inside an object and the simulator pushes you back toward one face, and then all of a sudden it's pushing you back toward the other face. That one I think we should just resolve, but it causes some problems too. And if you look at how people in optimization theory (forget our optimization theory) think about non-smooth optimization, they have known for a long time that, first of all, non-smooth optimization landscapes can screw up gradient descent; they can screw up even otherwise-convex optimization. And a natural idea would then be to smooth the objective: if you could somehow take your cost function and average over it, getting a nice smooth version of the cost function, wouldn't that be nice? Wouldn't that make things better? It turns out there's an interpretation that the way people are sampling in RL is doing that to our cost functions a bit. It can move the minimum; there's a lot to think about in this world. It changes the optimization landscape in ways that might be better for finding a solution, but might not find the optimal solution. And you can take the landscapes we get out of actual contact problems and see that they get smoothed out with RL-type sampling; those are actual mechanical examples. I'll skip a little bit here.
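You can watch that smoothing happen in a few lines: take a step-discontinuous cost, a crude stand-in for landing on one face versus the other, and compare it with a Monte Carlo average of f(theta + w) over Gaussian w, which is the objective the randomized sampling effectively sees. This is a toy example with made-up numbers, not any actual contact model.

```python
import random

random.seed(0)

def f(theta):
    # Discontinuous landscape: a cliff at theta = 0, like the reward
    # jump from ending up on one side of a contact face or the other.
    return 0.0 if theta > 0.0 else 1.0

def smoothed_f(theta, sigma=0.5, n=4000):
    # Monte Carlo estimate of E[f(theta + w)] with w ~ N(0, sigma^2).
    return sum(f(theta + random.gauss(0.0, sigma)) for _ in range(n)) / n

# f is flat on either side of the cliff, so it offers no local downhill
# direction; the smoothed version ramps through the discontinuity.
ramp = [smoothed_f(t) for t in (-1.0, -0.5, 0.0, 0.5, 1.0)]
```

Note the trade mentioned above: the smoothed objective is easier to descend, but the smoothing can also move the minimum away from the true optimum.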
have the analytical gradients should you use them and it's actually very subtle because there's times in your landscape where you're in smooth sailing and you should definitely use a gradient if you have it there's times where you're very close to a discontinuity and you'd prefer the Black Box methods so we've been thinking about ways to take the Best of Both Worlds and if you compute both the empirical gradient from the RL side and the analytical gradient you can actually do a little numerical test to see if they agree and know if you should trust your analytical gradient or not okay there's games like that that are actually super interesting one last point I'll make because I know I'm out of time okay the coolest thing though is that once we realized that was helping the optimization landscape you could stop and think okay well did you actually need to be random to get that effect and no you can actually just change your contact model it means a little bit of force at a distance it means a little bit more penetration but you can soften your contact model in a way that has a similar effect like almost an equivalence for any noise sampling model in RL we can find a contact model but they might be weirdly shaped okay is stochasticity essential and it turns out you can do deterministic smoothing and you can put that into like an RRT kind of planner for contact which never worked for contact before RRTs I mean motion planning codes are good and uh so there's a couple examples but RRTs have not had the success you might think in planning through contact before but they work a lot better if you just smooth out some contacts it makes the distance functions work better I can tell you more or less of the details but this lights me up this is like something amazing happened in RL empirical RL and now I think the theorists are coming in and understanding it a little bit better
and sometimes the answer will be not just RL but a mixture of RL and and true gradients and models and I would guess I would put my money on that being the future because there's a so I mean go has probably maybe has some structure but it doesn't have an obviously interpretable structure to me like the if I take a picture of a go board and I move one piece the rollouts could be completely different in ways that I I don't have enough go skills to understand okay but if you take my robot and my environment and you move me by a centimeter it's going to be pretty similar it better be otherwise I'm I'm in trouble so I just think I think the structure and those problems has to matter all right that concludes RL day one we'll talk a little bit more about um maybe model based RL and we're actually surveying you on this pset about what you want to see in the next few lectures so tell us and and we'll we'll dial it in
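To make the smoothing idea above concrete, here is a minimal numpy sketch (not course code; the function names are invented for illustration): a discontinuous step cost gives gradient descent no signal, but averaging it over Gaussian perturbations of the decision variable, the way RL-style sampling effectively does, turns the step into a smooth ramp.

```python
import numpy as np

# A toy discontinuous cost: a cliff at x = 0, like a reward that depends on
# which side of a contact event the trajectory ends up on.
def cost(x):
    return np.where(x < 0.0, 1.0, 0.0)

# Randomized smoothing: the expected cost under Gaussian perturbations,
# estimated by sampling, i.e. an estimate of E[cost(x + sigma * w)].
def smoothed_cost(x, sigma=0.5, n=10_000, seed=0):
    rng = np.random.default_rng(seed)
    return float(cost(x + sigma * rng.standard_normal(n)).mean())

# The step becomes a sigmoid-like ramp, so a finite-difference or
# score-function gradient estimate now points downhill everywhere.
for x in (-1.0, -0.5, 0.0, 0.5, 1.0):
    print(f"x={x:+.1f}  cost={float(cost(x)):.0f}  smoothed={smoothed_cost(x):.3f}")
```

Deterministic smoothing, as mentioned in the lecture, replaces the sampled average with an analytic convolution (for this toy cost it is just a Gaussian CDF), which is the sense in which a noise model can be traded for a softened contact model.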
Robotic_Manipulation_Fall_2022
Fall_2022_642102_Lecture_4_Basic_pick_and_place_Differential_kinematics_via_optimization.txt
actually don't know why it's not projecting it was up a second ago quick questions so um you can bring the whole laptop oh um can you still see okay i think yes i don't know when i see you smiling like that i'm worried because he uh he deep faked me in his final project last term so i'm a little wary okay welcome back everybody let's get started we have a double recording technology going here so i'm miked like six times and i'm plugged in over here and hopefully one of the two of them is going to look great thank you to the tas who are working hard to make this good okay so i want to pick up where we left off we did a lot of stuff last week uh yeah and um but we didn't complete the story right we had this basic idea that i'm gonna put a red brick in front of you we're gonna design a complete stack to go pick up the brick move it you know from one bin to the next and today i want to complete that story for you right so this was our task basically red brick iiwa with a wsg gripper we're just going to pick it up move it to the other side and if you remember the sketch for how we were going to do that had a few steps the first step we had to learn a bit about kinematic frames how to work with them the spatial algebra of reasoning about frames rotations and translations then we made a sketch in the end effector coordinates so we decided that okay if i know the initial pose in the world frame for instance of the object then i can figure out what i want my gripper frame to be relative to the object frame i can project that into world coordinates and i can go through and make a bunch of key frames for where i'd like that gripper to eventually go and then i can connect those keyframes with a trajectory we talked a little bit about how to interpolate carefully on the trajectory but the last big step is to turn that end effector trajectory now into joint
trajectories because that's what we have to send to the robot and we started that too we started talking about the forward kinematics right if you have the joint angles of the robot how would you figure out what's the pose of the end effector that's the forward kinematics problem and at the very end i mentioned uh that we're going to try to use differential kinematics to decide our joint angles and so today we're going to try to finish that story and i love the questions i got last time please keep them coming i'm prepared to speed up or slow down depending on what you guys need and want so let's just make sure when i write this down that we're super clear i remember one time in answering one of those questions i said positions and i'm like well no the other positions right i just want to be super clear that we do use the word position to mean a three element vector in space we also use it to mean the notion of generalized positions the q that we talk about which you know in the iiwa case is just a series of joint angles but more generally it's whatever coordinate system we need as a sufficient description of the complete configuration of the robot and the object it's everything in the multibody plant okay so generalized positions we call q so when you say plant.getpositions or set positions it's talking about these generalized positions and then this is the pose right we talked about representing that as a transform or a pose and this is of the body or frame b b typically meaning the body right and this without the extra superscripts means it's in the world frame expressed in the world frame and relative to the world frame good so we sort of figured out how to do that we could go through a series of our spatial algebra relations and go from the end effector to the second to last link all the way up to the base and figure out the transform from the gripper in the world the thing we
um you know that you guys asked a bunch about that i had tried to sweep under the rug was this notion of different representations for 3d rotation i still want to mostly sweep it under the rug i posted on piazza earlier too but i did try to write some more notes about that just so you have references and i'll just say a bit about it now because it actually will play out in the differential kinematics story too and it's important to think about just to understand that there are different rotation representations and all the complexity of what we're going to talk about today sort of comes down to this i would say so the fundamental problem is so in 2d space having an angle is enough to tell me what a rotation is right if i'm in the plane and i want to just rotate a vector i can do that with just an angle in 3d you would think that you'd use three angles to do that and you can but there's a problem if you only use three numbers like the roll pitch and yaw would be a standard thing then you can run into singularities basically because roll pitch and yaw each live on uh you know on a circle and when each of them are pi in the wrong place you can end up with a singularity and it's well understood that you cannot completely without singularities represent rotations in 3d with just three numbers you need one more number and because of that there's a handful of different choices of which numbers you might do so the ones that i called out here you can use three by three rotation matrices right which have the property that you can think of this as the x axis the y-axis and the z-axis unit vectors stacked up and that's a total of nine numbers way more than three right but great on a gpu or great on a processor and you can often do a lot of computations nicely with the rotation matrix the more minimal representation you would think would be the
euler angles in particular the one we use in drake is roll pitch yaw okay so roll is a rotation around the x-axis pitch is a rotation around the y-axis yaw is a rotation around the z-axis okay this is three numbers it's convenient to think about i can sort of intuit roll pitch and yaw but it has singularities okay so we use roll pitch and yaw a lot when the human's involved like if you're in a description file and you want to just position something it's often easier to type in roll pitch and yaw like the universal robot description format the scene description format all the standard formats will take in a roll pitch yaw description of the orientation and that's fine if you are specifying it in one direction but it has singularities there are a few more that you might know or might have heard of the axis angle representation where you can specify any rotation in 3d by a vector and a scalar rotation around that vector that vector may not be axis aligned almost certainly isn't for interesting rotations but you can always pick a vector and then think about a scalar rotation around that vector and that's four numbers again but a complete description that's useful for some things i used it for interpolating between two rotations last time and then there's the famous unit quaternions again four numbers and you can actually think of unit quaternions a lot like the axis angles if you want geometric interpretations of it a scalar you know cleverly scaled to be on the unit sphere in four dimensions um they do have an interpretation like that okay and there's a lot of things to know about quaternions so i want you to be familiar i want you to sort of recognize these but the most important thing like i said last time is knowing that um they all exist you can go back and forth between them you know except in a few cases of singularities you can go perfectly back and forth between them okay and they're good for
different computations right so having a unit quaternion just four numbers is for instance the choice we make when we're populating our configuration in a vector q so for the generalized positions the choice we make when we want to represent an orientation is we use the unit quaternion but when we're doing kinematics queries we often use the three by three rotation matrices for instance does that make sense questions about that yeah can you give a quick example yeah so it's famously known as gimbal lock okay so uh basically if you rotate pi this way and pi this way then you can't come out there's a singularity in trying to understand uh what's going to happen i mean there's even directions where it's like you can't rotate there's a singularity in this map um it always happens you can try to place it you can choose your coordinate system so that the singularity is in a reasonable place but um it always happens at this sort of pi pi case yes yeah it's just a limitation of using only three numbers to represent this topological space there's um you actually you know this space wants to live in four dimensions so trying to you know we're going to give a really good example of the singularities in a few minutes um but uh yeah it's frustrating but well known that you can't do it so just to make that super clear right so if you think of a single free body so i took in pseudocode here i took a plant i just added the brick only the brick that's it it's not welded it's just floating around right so it's a free body right and i've got a context for it i didn't mean for that to be there already but what is q right if i say plant.getpositions what is q in this case positions and orientations how big is it it's in seven which is a coincidence that the iiwa has seven right but this is three positions and then four numbers in a quaternion stacked in a vector
to make up the vector q what is the pose if i were to call plant evaluate body pose in world right what is that thing so the output of this is a rigid transform the representation it uses in memory is actually the translation three numbers plus the three by three rotation matrix right so it's a three by four matrix okay so in this case the q vector perfectly represents the position of the object that's its only job in this setting right i've got a single free body the only job of q is to tell me where the body is in the world and i'm asking the question of the kinematics engine where is the body in the world it's kind of funny but this kinematics function which in the robot case does lots of work what it's doing here is really just changing coordinates from quaternions to rotation matrices right it's still doing some work but it's just doing the change of representation is that clear because we're going to take derivatives of this in a second so you want to make sure it's clear yeah let me say it carefully you might think it's the identity transform the same information is present here and here but it is not just an identity transform because the way that the orientations are represented in the q vector are different than the three by three matrix it has to convert from quaternions into rotation matrices in this transformation this function as i've written it is not the identity if you were to put xb on the object tell me what you mean yeah this is just both of these contain the information which is where is that object in the world yeah yeah no it's good i appreciate the questions okay so we're going to take different uh you know gradients of this thing now right and there was good questions about when is the inverse kinematics well-defined when are there many solutions so
we're going to get into that in some detail here but we're going to see it through the lens first of differential kinematics so if i have this function which in the case of the iiwa q is a bunch of joint angles not quaternions right but if i had the iiwa and a red brick i might have the seven joints from the iiwa and the seven numbers for the quaternion plus position of the brick okay now if i ask a question given that configuration q what is the position of some body all right that's my function there what i want to think about is what is the gradient of that function right so i want to say if i make a small change in my q what does the small change in the pose look like okay and that's just a partial derivative of that function okay so the kinematics function partial derivative and i mean i think partial derivatives are basically always called jacobians but in robotics you know we don't even say kinematic jacobian we just say jacobian and everybody knows we're talking about this particular jacobian if there's no other context right okay so we're going to try to study this object today understand when it's full rank when it loses rank think about how to work with it to make a controller okay so um i just did this as a sort of variation here on q but if i were to take a q dot here d dt of q and get d dt on this side right the derivative of this pose d dt is the spatial velocity right so the change in pose over time and it's interesting to ask this we decided was a three by four matrix that's how we choose to represent it for computations what's the right way to represent a spatial velocity the derivative it turns out we're going to think of it as a three element angular velocity and a three element translational velocity so not the full 12 numbers we're back down to six numbers and the first point i want to make sure i land
for you is why that is at least to some extent i want to land that but there's a lot of v's flying around here okay so let me just you know note the typesetting okay so there was the latex you know times roman v which is my generalized velocities this v is translational velocities there's a lot of velocities and they're all v right and this is the spatial velocity the capital i try to be super careful about that notation it's almost always clear from the context sorry not the context but you know from when you're reading right it should almost always be clear it's very rare that we have them all in one equation but nevertheless i try to be really clear with that notation okay so now the big question is 3d rotations were this weird thing that we needed a bunch of different possible options to represent how do you represent angular velocity it's the derivative of rotational orientation right it turns out everything's good again three numbers are sufficient right why the fundamental reason why that is so all the problems with the coordinates are because when you wrap around two pi you want to get the same number again the topology of that space wraps around at two pi in each of the different coordinates angular velocities don't have to wrap right you can have an angular velocity greater than two pi there's nothing stopping you you can have an angular velocity of a million in some direction right there's no you know getting bigger and then coming back around the space is easier when you're in angular velocities and so it turns out that three numbers are sufficient you could pick various versions of three numbers you could pick the derivatives of roll pitch yaw if you wanted to but the canonical one that has really nice properties for our spatial algebra is this angular velocity vector which means something in particular it's basically a three element vector right
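As an editor's aside, the no-wrap-around point is easy to check numerically. Here is a hedged numpy sketch (not course code; the helper names are invented here) of the angular velocity vector and its relation to the rotation matrix via the first-order relation R(t + dt) ≈ (I + hat(w) dt) R(t):

```python
import numpy as np

def hat(w):
    """Skew-symmetric matrix so that hat(w) @ p equals np.cross(w, p)."""
    wx, wy, wz = w
    return np.array([[0.0, -wz,  wy],
                     [ wz, 0.0, -wx],
                     [-wy,  wx, 0.0]])

def axis_angle_to_rotation(axis, theta):
    """Rodrigues' formula for the rotation by theta about a unit axis."""
    K = hat(axis)
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

# An angular velocity far past two pi is a perfectly ordinary vector;
# nothing wraps, unlike roll-pitch-yaw coordinates.
w = np.array([0.0, 0.0, 1.0e6])  # rad/s about the z-axis

# Integrate it for a tiny dt: the resulting rotation agrees with the
# first-order relation R(t + dt) ~ (I + hat(w) * dt) R(t).
dt = 1e-7
theta = np.linalg.norm(w) * dt
R_step = axis_angle_to_rotation(w / np.linalg.norm(w), theta)
first_order = np.eye(3) + hat(w) * dt
print(np.max(np.abs(R_step - first_order)))  # second-order small
```

The direction of w is the instantaneous axis and its norm is the rate, exactly as described in the lecture.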
it's three numbers so we call them w x w y w z okay and if you think about it the direction of those three numbers is kind of the instantaneous axis of rotation and the magnitude of those three numbers is the rate of rotation you may never need to know that but what's important to know is that three numbers are all you need they are sufficient and efficient in all of our computations so we don't have a bunch of them flying around we just always use this one okay yes it is thank you that would be a lot more reasonable here x y z yes thank you okay the same sort of rules of algebra apply to spatial velocities and i won't write them up slowly on the board but basically they add and you can use rotation matrices to change coordinates all the same rules apply okay it's less common that you will have to manipulate the velocities the dynamics engine is going to do a lot of manipulating of those velocities for you it's less common for you to have to know these rules but i find myself going back and just saying okay you know if i need them then i can look here okay and that's kind of the level i want you to have too okay so let's just think this one through again the simple case of a free body right what is q what is v well let's just do it carefully here so q in the case of the single free body we agreed was a seven element vector three positions and four quaternion numbers if i say plant dot get velocities this is the generalized velocities for the iiwa it would be joint velocities the rotations of each of those joints but for the free body right it's this v okay what is v how big is it this time it's six elements okay so this is a six element vector which is a little funny because that means the derivative of q is not necessarily v right in some cases it is but in general there's some transformation that you have to use to go back and forth between v and q dot okay ask questions is
that clear yes yep this is the generalized velocities that's translational velocity and then the capital is spatial velocity yeah that's great okay this n is useful to know there's map q dot to velocity and map velocity to q dot you can go back and forth between them the transformation is a function of q so you pass in the context to get it this is like saying n of q times q dot and vice versa n is invertible sorry okay so now the question is so we sort of understand a little bit more maybe some of the subtleties of the representation but when i write now you know the derivative of the forward kinematics the output i get is always going to be represented as a spatial velocity of a body six numbers right i could take the derivative with respect to q dot or with respect to v you know both of these are valid and both of those are available in the code okay but the jacobian is going to always output spatial velocity all right and so now let's step back and think about why you know how am i going to use that in the code why is that the thing i want in order to move my robot okay so we said this on the board last time that's why you know the stuff i'm putting on the slides is partly because people could see the slides better but also some of this is fleshing out what we did last time okay so there's the different kinematics problems we talked about where the forward kinematics goes from joint positions generalized positions right to pose we talked about inverse kinematics which goes from pose back to joint positions i put an asterisk there because when we really cover inverse kinematics i'm going to try to give you a much richer picture of inverse kinematics than just pose you might want to say find me the closest pose but you know try to minimize something else and try to stay inside joint limits and whatever there's a much richer way to specify inverse kinematics but
the vanilla inverse kinematics says you've got an end effector pose tell me what the joint positions are and this is where when you were asking last time about you know are there multiple solutions this problem absolutely can have multiple solutions right you could take the same end effector and there might be many joint angles that would get the same end effector i'm trying to keep that end effector still right so that makes it a hard problem it's also a very non-linear problem in general so it might be that some of my solvers okay if you have exactly six degrees of freedom in your robot a serial chain robot there's closed form solutions for this and we know exactly where the solutions are as soon as you have seven degrees of freedom you have to do something more and when you have a humanoid you definitely have to do something more i mean this is still a hard problem in some ways okay how does differential kinematics fit in right differential kinematics goes from joint positions and velocities to spatial velocity right its jacobian was a function of q and it multiplied the joint velocities to get spatial velocity differential inverse kinematics is going the other way it's going to use the jacobian again something like the inverse of the jacobian to try to go the other way so it's actually a function of spatial velocity and joint positions i'll make this super clear don't worry but roughly it's going from spatial velocity to joint velocity you know where you currently are so the map from spatial velocity to joint velocity is a function of joint angles in our notation it looks like this basically i'm going from q you know forward kinematics goes from q to pose inverse kinematics roughly from pose back to q differential kinematics is a configuration dependent map from generalized velocities to spatial velocities and differential inverse kinematics is trying to go the other way it's again a configuration
dependent map from spatial velocities back to velocity okay my claim is inverse kinematics is hard differential inverse kinematics it can still have multiple solutions in the like but it's all easy because it's a linearization of the hard problem and we're going to be have good solutions for it and be able to understand it completely and it's people use it on the robots all the time yes spatial velocity that's the velocity anybody or so the most common one we'll use is the spatial velocity of the gripper frame so like what we're saying i'm glad you asked it's not that it's the little v this is going from um the generalized velocities the generalized velocities may not be the derivative time derivative of the generalized positions but they're related so this is a non not necessarily square matrix that transforms little v not spatial v to the time derivatives of of the joint angles i don't know how to say that better uh but so this is a map i mean this is really uh in the case of joint angles this is the identity map it does no work the only thing time it does work is when you have a different representation for the velocities then you do as the derivative of the positions and that happens when you're doing these orientation things so if you had a quaternion in q then you don't use the time derivative of the quaternion you use the angular velocity vector so there's a there's a change of variables that has to happen yeah yeah it's good it's good i know what i've failed okay all right so we're gonna now um let me before i put that up okay so here's the straw man proposal for how we're going to start moving the end effector right if i have and in the case let's think about this as the a body let's see let's use the gripper frame i'll go ahead and like you said that's the most common frame we're going to use is the gripper frame so i'll make this gripper although i wrote everything in if i read b later that's my that was a bad this was a bad choice but okay so in this 
case let's forget the brick exists for a minute let's just think about moving the iiwa around okay so in that case if it's just the iiwa then this is seven joint velocities because there happens to be seven degrees of freedom on that robot okay this is my six element spatial velocity now what we had from last time was we had a bunch of grippers we had gripper at time equals zero we had gripper at time equals you know pre-pick remember how we had the whole trajectory right we actually turned that into a function that was defined for all t in my interval zero to t final okay i'll try to write bigger but i'm hoping the video is better today okay and there's software that helps you represent that right with piecewise polynomials piecewise linear interpolation of the positions you remember then we had to do that slerp for the quaternion okay but we had a nice representation of this that defined it for all t you can take a derivative of that representation and it will give you another trajectory that's the time derivative the spatial velocity as a function of time so my proposal is if i had my plan that basically tells me what my gripper velocity should be at all times spatial velocity then can i use this to decide what my joint angles should be okay and the proposal is something like i want v of t to be the inverse of this right this relationship is a non-linear function of q but it's a linear relationship between the values i mean gradients are always a linear relationship right but it's um a linear relationship between the joint velocities and the spatial velocities since i know q this is just a matrix and i can try to take its inverse to try to go the other way so that tells me you know given i want to go in some direction what should the change in my joint angles be now if i write this the natural question is can you take that inverse does that work okay can i
take that inverse does it work ever in this case what's the size of the matrix j six by seven which is not square so i shouldn't write that i kind of don't want that on the board i shouldn't take an inverse i can't take an inverse of a non-square matrix right there are generalizations of the inverse that can work for non-square matrices and we'll definitely use them now right so j g of q is a six by seven matrix doesn't have an inverse but the generalization is the pseudo inverse how many people know the pseudo inverse okay so everybody has their own favorite um symbol for it right i wrote this before as a you know minus one as the inverse people use like music symbols and whatever i just use plus okay plus is my pseudo inverse okay and the question is now not does the inverse exist the pseudo inverse will always return something the question is is it any good okay and we'll dig into exactly how you compute the pseudo inverse in a minute but uh first just know that you could like call pinv in matlab right it's a linear algebra operation or in numpy you know and you can ask for the pseudo inverse of a matrix like this and the question is um when does it work right so in particular what i want is if i put a desired vb in and i use the pseudo inverse here to get a joint velocity if i were to put that back through right and think about what was the resulting vb actual when does this equal this does that make sense what i did is i went from end effector velocities into joint velocities with a questionable pseudo inverse and then i went from joint velocities back to end effector this one is always well defined okay and the question is when does that become the identity matrix right when does this work it can work even when it's this non-square matrix in fact this is the good case in some
sense right because we have being six by seven is the good case we have six things we're trying to do and seven joints with which to try to do them right so you'd like to think you'd like to be optimistic about this that that transformation should work people know when does that do you know the property for that yeah well you so full rank which in this case would be at most the row rank right the the rank of the non-square matrix will be determined by the number the smaller of the rows or columns right so works when jq is full row rank here rank j equals six now that's a math answer which is the right answer i mean that's the question i asked but uh when you go to put it on the robot there's a you know rank is like true or false is the rank six it's a true or false question okay but if you what really matters is somehow the condition of the matrix if you get if you look at the singular values of j and the smallest singular value gets very close to zero then that means the matrix is getting numerically close to being non-invertible in this one in this sense and you start having problems even if it's not if it strictly has rank but the condition is very bad when the smallest singular value is small close to zero that means that i might if i wanted to make small velocity small movements in velocity here it might take ridiculously large uh joint velocities to accomplish something small here if those eigenvalues so the singular values get very close to zero okay so what we really want to look at is the smallest singular value it should be you know not when it gets close to zero is when you have you have issues okay so luckily for our ewa you know most of the time that's good that this is most of the time this is full row rank and i did some little animations which i are in the notebook so you can run them uh start with this jacobian one here okay so what i have here is an unfortunate choice of screen layout i have here a little notebook that just prints the jacobian 
when i move this thing around okay so i can move this around and it's going to print out the jacobian jg the gripper jacobian in a font that's probably not useful let me make it a little bigger here okay and it also is just printing out the smallest singular value of that jacobian okay and the game is you know move this around convince yourself that in most configurations of this robot it's it's fine it's really pretty good how can i make it not good yeah right if i put it at the end of its if i put it like straight out right like if i straight it out now i've got a smallest singular value of negative e to the negative 16. okay why is that right the map is saying commanding an instantaneous velocity in the end effector would require ridiculously large joint velocities i mean that's numerical nonsense right that's zero it's saying that i would need infinite velocity uh at the joints to achieve some desired velocity at the end effector right if you tried to go straight down it's not going to work if you wanted to move down at a certain velocity it would require you know infinite joint velocities that's a funny thing right maybe you should have a problem with that like because um that seems broken it seems like maybe we've just written the problem down wrong right because clearly the robot can move back down how do i how do i justify that like are singularities real or is it just our math my math is bad the second derivative is non-zero okay the second derivative is not zero so um here's a super simple example to to make that work out okay so this is just a two-link robot each link is the same length so i can write the kinematics very simply and i'm going to just make it move through the straight position like this okay so that's going through the singularity and back i can loop it i think i need to reflect it that'll be cool oh yeah um okay so do you understand what's happening here
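That two-link example is easy to check numerically. Here is a small sketch of it in NumPy (my own code with unit link lengths and the first joint held at zero, not the lecture's notebook): the smallest singular value collapses and the pseudo-inverse joint-velocity command blows up as the elbow angle q2 approaches the straight configuration, when the requested motion is along the arm.

```python
import numpy as np

def two_link_jacobian(q1, q2):
    # Planar two-link arm with unit link lengths:
    #   x = cos(q1) + cos(q1+q2),  y = sin(q1) + sin(q1+q2)
    s1, c1 = np.sin(q1), np.cos(q1)
    s12, c12 = np.sin(q1 + q2), np.cos(q1 + q2)
    return np.array([[-s1 - s12, -s12],
                     [ c1 + c12,  c12]])

# With q1 = 0 the arm lies along x, so pulling the hand straight in
# (negative x) is the direction that dies at the singularity.
v_desired = np.array([-0.1, 0.0])

for q2 in [1.0, 0.1, 0.01, 0.0]:   # elbow angle approaching straight-out
    J = two_link_jacobian(0.0, q2)
    sigma_min = np.linalg.svd(J, compute_uv=False)[-1]
    qdot = np.linalg.pinv(J) @ v_desired
    print(f"q2={q2:5.2f}  sigma_min={sigma_min:.2e}  |qdot|={np.linalg.norm(qdot):.2e}")
```

Note that at exactly q2 = 0 the pseudo-inverse quietly returns the least-squares answer instead of infinity, which is the "it always returns something, the question is whether it's any good" point from the lecture.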
two-link pendulum they just happen to have exactly the same length that makes the kinematics trivial it means i can write down the jacobian it's a two by two matrix it's super simple and that jacobian loses rank when q2 is zero like this when it's straight out okay and i'm just playing i'm just telling the robot to go through q is a sine wave basically q one and q two are sine waves of scaled magnitude so that they they stay perfectly in that line okay it's clearly going out and coming back it's not like it can't come back so how does what happens well you know we already got the we got an answer right so at that instant of being completely out straight it is true that the jacobian is singular yeah if i wanted to instantaneously command a velocity back here i would fail but i can accelerate back in that direction right the derivative is okay i can accelerate in that direction and get myself out of the singularity and uh and eventually get back and everything's good again so it is absolutely true that the map that goes from joint velocities to end effector velocities has a problem you cannot invert that map at this configuration it does not mean your robot is stuck there for the rest of time right i mean with some controllers it is okay so we're not going to handle that case beautifully with the pseudo inverse controller we're not going to try to we'll handle it in a different way okay but in the case where we're close to full rank we'd expect this sort of pseudo inverse to work well the scary thing is when you get close to singularity and then you start commanding very large velocities those are the kind of things that we we definitely do want to address questions you see one okay there yes oh i'm sorry down here okay i'll get you next i'm sorry so this is i see so so um i think the question is you know what math tells me that i can't go out out there right beyond the reach of the robot uh i mean this differentially this is still telling me uh also if i
command it in this this direction it will fail right and similarly if i'm at the edge of some workspace and i'm trying to go you know it will this is what happens actually is you command yourself to go farther than you should your arm goes straight and the robot goes crazy so the math does tell you that in both directions and it's a it's a very much a differential quantity so it's only telling you as a function of this which directions can i move it's not an absolute workspace analysis it's just different it's just instantaneously can i move in that direction sorry um great question yes so if i was writing a really good controller and i found myself in this position i would start commanding an acceleration in order or i could forget about trying to command an end effector i could just command like this controller is just it makes a q trajectory it says forget about the end effector for a minute i'm just going to move the joint angles through something some simple function right but somehow in that situation you have to give up on commanding via the velocity of the end effector great able to yes my intuition between this motion which is the sine wave and let's say we had the same end effector motion but just going half the distance so you you'd go out there and somehow that would be within the range of motion it would be controllable but yeah i mean the motion so just i'm not able to picture why at any point here you can't have immediately a velocity you can only accelerate great so so i mean i can't flip in that case but let's say i was just going like this and back right so this is your example right but not going to full extension right that at any at any one of those configurations if i wanted to command a particular xy position velocity sorry of the end effector i could do so with a a reasonable velocity in the joint angles right so that's that's the big difference is right in all of those configurations here i still have the ability to command a velocity in 
the end effector it's only when my jacobian becomes close to singular that i and when it's close to singular it just requires ridiculously large velocities and then when it's singular there is no velocity that's that's the critical difference is that um it's really because these things line up and so think about the the effect that moving this angle has on the end effector velocity right it moves in both x and y here but when i'm here it only moves in y if i had it multi-elbow yeah so the ability to command an x with respect to this is gone and similarly the ability to command in x that that direction in this in this joint angle is gone and the rank of that matrix is what tells you that's true right it's really just the trigonometry of you know what a small delta in that angle is going to produce at the end effector so yeah sure yeah no you could have it similarly if i were to you guys i need to do some yoga or something but if i were to go like this right you know and if i folded back in on myself for instance that could be on the inside it's still maybe you could call it the edge of a configuration space workspace that's right that's right yeah okay so is there ever an example i think with with more complicated mechanisms you could say um if you had like a four bar linkage or something you could probably get yourself in trouble even in the comfort of your the middle of your workspace but it's certainly common that that you would be it's happened at the end of the workspace yes i guess like say you were like sitting here like what would happen and what would make this happen like in terms of like you get close to a singular like in practice could you break robots have broken because some people used simple jacobian controllers and got too close to singularities yeah and in the you know 80s in particular there was a series of papers about what's the right way to do these sort of control that is and they worry very much about not blowing up
during the singularities absolutely yeah what physically happens yeah so so typically nowadays the controllers that big box underneath the robot says you've asked for a big velocity and it turns off if you've made your own robot and you didn't put that safety protection in then i did throw a robot across the room once yeah that can happen right you know and there's big red buttons next to the big robots in case that starts to happen yeah but it can really it can really that math is bad you shouldn't fly you shouldn't apply that joint velocity command okay um so i want to spend the rest of the lecture thinking about sort of maybe a generalized version of that pseudo-inverse that's the different view on that pseudo-inverse and it's going to at least help us think about putting some of the guard rails on so that it doesn't throw the robot across the room or fault the controller that's trying to keep you safe and i'm going to do that by first just making us think about sort of the optimization view of what the pseudo inverse is doing okay so i like optimization that's that's the thing um and there's a lot of the of the tools from class that will use the language of optimization and really you know the code the equations that are uh are giving us the pseudo inverse i think are best understood as the solution to an optimization problem and once we think about it that way then it becomes natural to put on a few extra protections and write a slightly different optimization problem that can say you know try to do that but don't blow up for instance okay so let's think about pseudo inverse as an optimization what i want to say is it's it's something what i'm writing here is really something that looks kind of like this find me a joint velocities such that the end effector velocity is approximately equal to the desired spatial velocity right i wrote it by taking that in the pseudoinverse is sort of the solution to try to to do that but think about it in its sort of primal 
form i'm trying to solve for a v such that this map comes close to my desired given since q is given in this case we know where our robot is at any moment in time so really this just looks like i could i could i could write this if i abstract away from the robot a little bit this is just like saying find me an x such that a x is approximately equal to b right this is just a six vector i'll call it b this time and this is this jacobian that's just you know in the language of linear algebra this is really just a x equals b and you can call backslash in matlab just to solve that that's one of the ways to call a pseudo inverse for instance right now a way to write this as an optimization is instead to say let's try to minimize some error term right so i'm going to minimize the penalty the difference between ax and b okay so i've got some distance function some cost function that says basically i'm going to penalize and in this case i've chosen in what directions you know i've chosen the the cost function so it says what my values are in terms of what kind of deviations i like and what i don't like okay but this is sort of a standard way to say try to find me an x such that ax is approximately equal to b if the if the error goes to zero then i've solved the problem and you know i would expect that to be true when j has these properties okay but this problem makes sense even when j can't get you there when a is such that i can't drive this error directly to zero so you see how that's kind of a more robust specification of the problem so this is um where i give us you know just i'll i'm going to start using some of the language of optimization but it'll be i think a gentle introduction to that let's even do it in the scalar case right so and think about what does the geometry of that problem look like right so if i said just like that right no vector norms nothing this is just a squared of a scalar okay a is a scalar b is a scalar
of the data i'm trying to find the smallest x i think the geometry of that problem is easy to think about right this looks like a quadratic form right this is my ax minus b squared this is x and somewhere there's a happy place where i'm at the minimum of that right and i'll call this solution x-star and for this particular problem we can find x-star very easily by just taking the gradient of that function asking when the gradient is equal to zero and that's going to tell us since i know that in the case where you know this curve is pointed up it's a positive definite function right it's a convex function then the minimum is going to give me the solution the place where the gradient equals zero okay so i take the gradient with respect to x of ax minus b squared set it equal to zero and this tells me that the solution to that just worked out is just b over a so does that always have a solution let me just start asking sort of an almost trivial question i guess but right a better not be zero right so what happens when a is very small that's kind of what's happening as we get close to our singularity right when a is very small this thing starts getting more and more elongated this cost function as i get small it goes like this and then maybe it goes like this right and it's it's going to move out and the optimal solution is going to move out this way that's the geometry of what's happening here is that my cost function as i change a and make a very small it's going to move the cost function the solution more and more towards infinity right just that's just saying a when a gets close to zero x star is going to go to infinity and the objective function follows suit as it should okay so um that's bad all right you don't want x star to go to infinity and that's exactly what happens that's what's in danger of happening when the jacobian goes to loses rank okay so the matrix form of that are there questions about that the matrix form requires more linear algebra of course 
but is really exactly the same math okay if i want to say minimize over x now ax minus b squared i can multiply this out if i wanted to this is going to give me an x transpose a transpose a x minus 2 b transpose a x plus b transpose b it's just another quadratic equation right and in two dimensions if i had x1 and x2 it's still going to just look like a quadratic function right and it's going to have some optimum at the bottom right everything holds in the matrix case i can do the same thing i can take the gradient of that function with respect to x now i do a little bit of gradient math and i set that equal to zero and i find out that the slight generalization of what i did there is just x star equals a transpose a inverse times a transpose b and guess what this a transpose a inverse a transpose is one of the pseudo inverses is what you get when you call the pseudo inverse right there's a left and a right pseudo inverse but this is the one we're using today okay i could write this as the pseudo inverse i've got a transpose here somewhere but okay so actually the pseudo inverse which i said was just a generalization of the inverse that's how i introduced it before maybe how you've seen it before it actually is doing something very clever right it's taking this slightly richer specification of the problem and it's not necessarily guaranteeing that it's going to get a cost of zero but it's going to give you the best cost it can that's why the pseudo inverse will always give you something back okay and that something is exactly you know this so that's the picture i want you to have in your head the shape of that bowl by the way is just you know it's governed by a transpose a right the eigenvectors and eigenvalues of that matrix will change the shape of that bowl i know that's you know a lot of equations or whatever but i want you i want you to have the intuition right so what happens when j starts to lose rank think about what happened here the same thing happens in the
vector case this bowl starts getting flatter maybe in one axis maybe in multiple axes but if it's you know as if one eigenvalue goes to zero it will get very elongated to the point where it can be a trough if the eigenvalue is exactly zero and the worst thing is that the minimum of that trough is going to move off to infinity okay so that's what happens that's what goes wrong when you call pseudo-inverse it's not that it's broken it's solving a beautiful problem for you it's just that you're asking it to do the wrong thing you're not telling it to be reasonable you're just telling it to get as close as possible questions about that now here's the here's the win okay the um the language of optimization is way richer than just calling pseudo inverses right this is an objective but i could also add constraints so what i'm going to do for instance is say you know get as close as possible but don't pick a velocity greater than like 10. you know i don't want my robot moving that fast because the controllers are going to set a velocity limit right so a perfectly good question which looks simple in this case is what if i did you know minimum of x a x minus b squared but i'm going to do subject to okay let's say i want the absolute value of x to be less than or equal to 2.
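The gradient argument above is easy to verify numerically. This sketch (with made-up numbers) checks the scalar blow-up of x* = b/a as a shrinks, and then, for a tall full-column-rank A where A transpose A is invertible, that the normal-equation solution matches NumPy's pseudo-inverse:

```python
import numpy as np

# Scalar case: the minimizer of (a x - b)^2 is x* = b / a,
# which runs off toward infinity as a -> 0.
b = 1.0
for a in [1.0, 0.1, 0.001]:
    print(a, b / a)

# Matrix case: for a tall, full-column-rank A the same gradient condition
# gives x* = (A^T A)^{-1} A^T b, which is exactly what pinv computes here.
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 3))   # random stand-in, not a robot Jacobian
y = rng.standard_normal(8)
x_normal = np.linalg.inv(A.T @ A) @ A.T @ y
x_pinv = np.linalg.pinv(A) @ y
assert np.allclose(x_normal, x_pinv)
```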
something like that okay then the picture is still like what i've got here but maybe i've got a two here so it's going to say go down as far as possible but don't cross this line if you get there then i want the best solution to be right on the rail okay that's another way to write a mathematical program and since we're going to be doing it a lot let me just stop and say in this language these are the decision variables this is the cost or objective these are constraints i can do exactly the same thing in this problem i could say i'd like the norm of the vector to be less than 2 or something like this or i could say the i-th element of the vector maybe every joint velocity has a limit so i could put let's say x zero less than two maybe x one less than three i've got a different motor on that second joint so i could use a different joint velocity limit the language of optimization is super general but we're playing in a very nice version of the optimization landscape where you know this objective we wrote down this is a quadratic objective and it's a positive quadratic objective it can never be negative right really the generalization is that it's positive definite or at least positive semi-definite but let's just say positive definite meaning the matrix which gets inside here this a transpose a and i should say semi-definite because we're talking about when it can drop rank so i'll say semi-definite here because the matrix a transpose a the eigenvalues of this are all greater than or equal to zero so my function is always going up and it's always quadratic so it's a convex function and it has a unique minimum and if i restrict myself to any constraints that are of the form of linear equations the absolute value can be written as just two linear equations i could have written that as x less than 2 and x greater than negative 2.
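One way to try this "stay on the rails" version in code is SciPy's bounded least-squares solver, assuming SciPy is available. In this sketch the nearly singular matrix is invented for illustration: the unconstrained solution blows up, while the box-constrained one stays inside the limits.

```python
import numpy as np
from scipy.optimize import lsq_linear

# A nearly singular stand-in "Jacobian" and a target that requires
# motion in its almost-dead direction.
A = np.array([[1.0, 1.0],
              [1.0, 1.0 + 1e-6]])
b = np.array([1.0, 0.0])

# Unconstrained least squares: the answer is enormous.
x_free = np.linalg.lstsq(A, b, rcond=None)[0]
print(np.abs(x_free).max())

# Box-constrained version: min ||Ax - b||^2  s.t.  -2 <= x_i <= 2
res = lsq_linear(A, b, bounds=(-2.0, 2.0))
print(res.x)   # lands on the rails instead of blowing up
```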
so these are linear constraints this is the domain of quadratic programming you'll hear people talk about qps so now when i run my controller instead of calling pseudo inverse every time you know every time step i want to decide what positions to send to the controller i'll solve a small quadratic program the geometry of it i made it look deceptively simple in one d it is simpler in 1d but in higher dimensions you know you have a quadratic it still can only do that roughly but the geometry of these constraints can be interesting and you want to solve it efficiently so there are strong solvers you know strong numerical codes that will take the specification of the problem in this kind of language they're called qp solvers for instance and they'll solve this problem for even very large matrices very fast and it's it's entirely practical to run them in a control loop okay now this i tried to visualize the geometry of this okay so i made a nice little animation here that writes a small mathematical program that just i'll tell you about that in a minute maybe more next time but okay and here's what it looks like this is my two link kuka okay so i basically i took the kuka and i just froze all but two of the joints because i can only plot 2d stuff roughly if i have two decisions for two velocities to to move and i'm just trying to move in the plane whatever i can plot that if you get higher dimensional i can't plot it now this green is the quadratic form in those two planes that's just the objective and the red is the constraints don't go outside those constraints as you move through the singularity see if i can make that visible enough what happens that quadratic form flattens out and the solution is trying to move off to infinity that's the bad case but the qp says don't go past the limits so now i can just play with it a little bit okay so as i go close to the singularity you can see that that becomes a trough instead of a bowl it's actually you
know until it's exactly zero it's still got a minimum at some point it's just off at infinity and when it's exactly zero it's infinity you know but the qp can move right through there uh pretty well it'll always come back with a solution for you okay so the quadratic program is a nice generalization of the pseudo-inverse controller okay um i did have another notebook that just showed it actually moving the end effector but just for the sake of time trust me it moves the end effector yeah if i if i just command a velocity like this it goes right it works and you can run it so there's a language um yeah so so drake sort of has three big components um you've seen the plant multi-body plant you've seen a bit of the diagrams right and context and all the stuff you love um and then the third sort of big piece of drake is the mathematical program interface because i believe that the language that you want to talk to your your multi-body plant is the language of optimization and so you can find these pieces in different toolboxes but having them in the one toolbox i can easily say make me a cost or constraint based on that robot and i can do things that i wouldn't be able to do if they were separate okay so the code looks pretty pretty simple you say like make a new mathematical program i will have two decision variables i'm going to add a constraint like this is x zero plus x one equals one that's a linear constraint x zero less than x one that's also a linear constraint i can write them both in the form of that i can add a cost like x zero squared plus x one squared solve okay and behind the scenes what it does is it examines the costs and constraints that you've given it and tries to call the best solver it has a bunch of commercial solvers that are back behind it if you're at mit most of those commercial solvers are free with an academic license if you're not in education they're really expensive it's kind of like you know you learn how to
use them and then you go off in industry and it's like oh my gosh that costs a lot of money okay but that's the language that we use and i guess i have just one minute the code is pretty unintimidating i think a simple pseudo-inverse controller i forgot to close the other one you can write a little pseudo-inverse controller and then you can write a qp controller the one that just uses the pseudo inverse can still move through i just set a desired velocity it just moves through and then the differential ik solved as a quadratic program can do all that but it can be robust to singularities and the like okay there's a bunch of other things that you can do once you have this language of jacobian control as a mathematical program i'll list them and the the details are in the notes but it's sort of nice to think about it so the linear constraints we talked about here were just velocity constraints right the decision variables in my pseudo-inverse-like controller my jacobian controller were the velocities and the objective was based on the jacobian but you can actually add some amount of position constraints if you take a linear interpolation of your jacobian and try to say what's my next positions going to be and you want them to not go past some linearization of a collision constraint you can do that actually and similarly you can take a first derivative and put acceleration constraints so this becomes a super useful sort of language to start adding richer and richer specifications of what you want the controller to do always locally always locally but saying given i want to follow this maybe i don't want to run into a wall and i don't want to exceed some accelerations that can all fit in the language and there's sort of right ways to write it so that it always has a solution you know you want to make sure you don't write constraints that can
potentially not have a solution and that's an important thing but but mostly that's packaged up and you can just call the differential ik system and and use that controller you actually used it if you played with that first chapter notebook that half of you tried and the rest of you made me cry and um but if you did you might have gone to the limit and then the ik solver said you know you've got no some you got no solution that's because it was a simple form of the ik but the full form actually is robust to that okay good see you thursday yeah just um when you're showing us the green thing yeah right so the green thing is you're saying it's the plus function and the class slash objective is the velocity
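The little program described in the lecture (two decision variables, cost x0 squared plus x1 squared, constraints x0 + x1 = 1 and x0 <= x1) is easy to reproduce. Since pydrake may not be installed here, this sketch poses the same toy problem through SciPy's generic constrained solver rather than Drake's MathematicalProgram:

```python
from scipy.optimize import minimize

# Decision variables x0, x1; quadratic cost; one linear equality and
# one linear inequality, written in SciPy's constraint-dict form.
result = minimize(
    lambda x: x[0]**2 + x[1]**2,
    x0=[0.0, 0.0],
    constraints=[
        {"type": "eq",   "fun": lambda x: x[0] + x[1] - 1.0},  # x0 + x1 = 1
        {"type": "ineq", "fun": lambda x: x[1] - x[0]},        # x1 - x0 >= 0
    ],
)
print(result.x)   # the symmetric optimum, approximately [0.5, 0.5]
```

This is a convex QP, so SciPy's default SLSQP solver finds the unique minimum; a dedicated QP solver of the kind the lecture mentions would do the same thing much faster at control rates.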
Robotic_Manipulation_Fall_2022
Fall_2022_642102_Lecture_17_Visuomotor_policies_behavior_cloning.txt
this is a you know quick trick that I I just mentioned here but um do you know this is not a drake specific thing this is just a GitHub thing and uh I think Microsoft bought GitHub if I'm not wrong about that but um if you have any GitHub URL you have just GitHub open in your browser and you want to search the code browse the code just replace the github with github1s and it'll pull up a visual studio client in your browser and you can just like quickly search and whatever so if you if you want to like quickly poke around in the source code for Drake even if you're not super comfortable in C plus plus there's often like use cases of all the all the you know all the different methods are all searchable inside there and even the C plus plus and Python syntax is not very different so possibly to a fault Okay so today I want to sort of um launch into the bigger conversation about control for manipulation we started it by talking about manipulator control last time but I kind of I want to go from manipulator control last time to the more sort of full discussion of feedback control for manipulation and let me make sure that distinction is clear because it looks kind of the same words let me just make sure my intent is clear even if you might recommend some better words for it but when we were talking about manipulator control this was only controlling the robot and that was important right that that made a lot of things a lot easier remember we we were using the reason I call it manipulator control is it leans so heavily on the manipulator equations which was just my mass times acceleration equals the sum of the forces which looked like M of q v dot it has a particular form in the manipulator equations but something like this okay where this was the mass times acceleration terms this is Gravity Force this was maybe damping or friction or other things importantly this was my torque input these are maybe my contact forces or other other applied forces that were
described in a Cartesian frame okay and we worked with those equations a bunch last time you could also I mean so I hope you see that as mass times acceleration is the sum of the forces you could also see this as just a differential equation which looks like f of x u where my X is q and V and so this is just half of the equation for x dot so you could say I would write this as that equation is similarly V Dot is f of q v U the other one that I need is that Q dot is just V and that together is my x dot yeah okay but there's a bunch of things that um this distinction of only control the robot was baked into what we did there so um in particular the way I've written it here I had that the dimension of Q equals the dimension of V at least for my revolute joint robots equals the dimension of U and even more than that just the way I wrote in U is it entered linearly like this there's no modifiers there so it just came in and you can use any U to directly control V Dot so that um I did go teach a whole class on the difference between fully actuated and under actuated but that makes these equations a fully actuated system okay and it's a particularly easy one to control in that form for instance we saw that PD control worked we saw inverse Dynamics control stiffness control we saw a bunch of things that worked pretty easily in part because I had Direct Control on V Dot also this was one important point a second important point is that q and all these other terms were known or at least estimated pretty well let's say estimated well okay in practice we have good system identification tools that if I have an iiwa or a Franka or some other robot I can move it around a little bit I can get very accurate estimates of M and in fact Kuka has done that for us and they are doing the the low level control that's canceling that out
for us okay there are things that can make that control a little easier like torque limits and sorry a little more interesting like torque limits or or other bits but out of the box that was a class of control problems that we know a lot about and we have very good solutions for we can command fast trajectories and track them very accurately we can control contact forces pretty accurately thank you but that is not representative of the bigger problem of feedback control for manipulation right right so the bigger problem is to not just control the state of the robot but to control the state of the robot and the uh more importantly the state of the world and a lot of those assumptions that were fundamental in the way we wrote Our controllers become no longer true and I want to spend a little time talking about the ways that those are not true and the ways that those complicate control Okay so whereas before we had X was let me let me just say Q robot and V robot was our state before now we're going to at least something that has in that example the state of the brick in the state space but unfortunately our U is still just U robot I picked up more degrees of freedom that I'm trying to control but I didn't actually pick up more actuators by which to control it okay so that is now another differential equation which I could write as my my same sort of V Dot is f of q v u it's still a differential equation in all of those but the dimension of Q is now strictly greater than the dimension of U and that makes control a lot more interesting a lot harder so that that's where you come into under actuated systems okay but that's not the only way that the problem got a lot more interesting if we start controlling trying to control the world there's a bunch more and I just kind of want to I want to talk through them and make you make sure we appreciate those okay so even if we assume perfect perception so let's ignore the perception problem for a second so let's
say that I have a perception system it's reading my cameras or whatever my perception system that's outputting some estimate of my state and perfect perception for me means that X hat is just outputting X directly in simulation we could just be using the cheat port and asking multibody what is the true state okay so still the problem is Rich because it's under actuated but also because if you look at the equations U on the robot is what you get to control your goal is to control let's say Q brick if you look at the way those equations come together the only way that U gets over to control Q is through the contact forces the equations of motion are in some sense decoupled they're only coupled through the contact forces so you have to do the control through the contact forces okay so if I were to you know when I do write the equations of motion for that robot Plus Brick system it actually still looks like the manipulator equations those equations are still valid but there's an extra set of equations let me put a little B underneath all of these there's a new set of equations for the brick variables you can write Force equals mass times acceleration for this too you have the gravity of the brick B is for Brick okay I also have the robot ones okay and the whole thing together is one big set of equations that I simulate with my my integration methods they are they are beautifully decoupled like as a structure you'd want in your equations to make your math better and stuff like this it's beautiful that these equations actually don't depend on the robot and these equations don't depend on the uh on the brick except for the one extra thing which is that the forces here on the robot at C have to be equal and opposite of the forces on the brick there's one extra constraint that couples them together which is that those two are equal and opposite but if you think now as the the job like before what we were doing right
when we were working with those equations is we would choose a u that would cancel out some Dynamics put in some spring-like Dynamics to make it feel like an impedance or a stiffness or you know there was a bunch of Tricks we did those are not readily available now because the only place that my U came in was in the robot equations by control input and the only way that those use get through to the other side is through the forces is that clear and those forces are fickle they're subject to a friction cone constraint they turn off if you're not actually touching things right that's kind of annoying you can push but you can't pull unless you've got a suction gripper okay so this is not just this is like not as direct of a way to control those variables as just having a torque directly on the motor and that complicates that means you have to if I want to start having any effect on the the Brick I have to touch the brick I mean that's obvious in the you know in the physical interpretation and it's it's visible here in the equations of motion by seeing that the only way that that happens is these forces become non-zero and that's how it goes across the fact that those are not only um you know intermittent they mix thing you know it makes things non-smooth because they go from sticking to Sliding or you you weren't touching and then you are touching there's all kinds of of interesting stuff that happens at that interface so those are two big ones we're suddenly going if we want to control the world and think about it as a feedback control problem then we think I have to think about it being under actuated and we have to think about the control through contact but wait there's more it's still harder so they go I'm going to spend a little time making you see how hard it is and then we're going to try to hopefully make it look easy again okay but but that way hopefully we appreciate at the end why the thing that makes it look easy is surprising I would say okay um 
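Writing those two sets of equations down explicitly makes the structure visible. This is a sketch in standard manipulator-equation notation; the subscripts r and b (for robot and brick) and the contact-force symbols are my labeling of what's described above:

```latex
% Robot equations (actuated; u enters only here):
M_r(q_r)\,\dot v_r + C_r(q_r,v_r)\,v_r
  = \tau_{g,r}(q_r) + B\,u + J_r^{T}(q_r)\,f^{c}_{r} \\
% Brick equations (no Bu term -- the brick is unactuated):
M_b(q_b)\,\dot v_b + C_b(q_b,v_b)\,v_b
  = \tau_{g,b}(q_b) + J_b^{T}(q_b)\,f^{c}_{b} \\
% The single coupling constraint (equal and opposite at contact point C):
f^{c}_{b} = -\,f^{c}_{r}
```

Everything the control input u does to the brick has to flow through that last line: if the contact forces are zero, the brick equations don't see u at all.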
unfortunately, perception is not perfect. So if I think about the role of my perception system, that's taking, let's say, cameras in, the way that perception systems make errors is pretty non-trivial, and you've seen that already. You've seen, for instance, that ICP works spectacularly well, and then it doesn't, and it's not that it's a little bit off; it could just give you complete garbage on one frame, if it loses track, for instance. And that's true of many camera-based perception systems: they can often work spectacularly well but then fail catastrophically. So in a lot of our traditional controls you might try to say that this variable has Gaussian noise on it or something like that, and you could think about this as part of a Kalman filter framework. But perception systems make very non-Gaussian types of errors. In particular, think about a linear Gaussian system: those equations are not linear; they're nonlinear equations, with sines and cosines and the like hidden inside. If the equations were linear and the noise were Gaussian, then if x-hat were just the expected value of x, we'd already be in pretty good shape; maybe I'd also want the variance, though for the linear Gaussian case you don't actually even need the variance, but the variance of x can be a useful thing to estimate too. And even though those equations are not linear Gaussian, if you're just controlling the robot but not the object, then we saw cases where we could use feedback to make the equations look very linear, and in fact these linear Gaussian tools that have grown up in control theory work pretty well for a robot. It's pretty reasonable to think about estimating just the mean of the joint positions and velocities, and possibly having a covariance of the joints and joint velocities. But once you add in the brick, all bets are off, because suddenly your only sensors measuring the brick, unless you're in a strangely instrumented environment, are your cameras, maybe your contact sensors and the like, and those are not sensors that give you nice Gaussian noise properties. And you remember that: we talked about, when we were talking about perception, the types of uncertainty you might have. I was going to bring a coffee mug down; I forgot. If you can see the handle of the mug, then maybe you have pretty small uncertainty, but even if your sensors were perfect and your perception system was doing the best it possibly could, if the handle of the mug is not visible to you, then you cannot estimate the orientation of the mug with absolute certainty. You have to somehow communicate that you only know the orientation of the mug up to some uncertainty ellipsoid. Partial observability also causes non-Gaussian uncertainty. How many people know what a POMDP is, a partially observable Markov decision process? If you like POMDPs, you could just say this is a POMDP. I'm not advertising this as a path of happiness, so if you don't know what a POMDP is, you're all good. I do think the language of POMDPs is very powerful and helps us understand the problem, but I'm not going to give a whole lecture on it right now. More generally, if I think of my system as a POMDP, I'll give you the language: a partially observable Markov decision process is a fancy way to say that I have a dynamical system that has some stochasticity, and I don't get to see all of the states directly. But for those of you that do know what a POMDP is, I want to connect to that real quick, and I think in the general POMDP case
what we should say is that the output of our perception system should be not just x but a whole probability distribution over x, and that's the kind of thing a lot of perception systems these days will try to output: a probability distribution, or a belief distribution, over x. From the mathematics of partially observable Markov decision processes, we know that that is a reasonable answer; it's a complete answer. If I can put out an entire probability distribution, and if I can write a controller that can consume that entire distribution and make good decisions, then we know that is sufficient for making optimal decisions even in a partially observable setting. But this requirement of outputting an entire probability distribution is a large requirement, and in particular, designing a controller that can reason about it is very hard. It's also sort of interesting: having a perception system that gives me a complete distribution over all the possible states of the world is sufficient to make optimal decisions, but over and over again we've seen people write great controllers, like Spot opening the door (I call them the robot whisperers), incredible controllers that do not use this as an input, but use a much smaller amount of information from the observations as input. So even though this is sufficient to be optimal, it's probably a lot more than you need, and that's an important big theme. This is a mathematically powerful toolchain, but it's probably more than we really need or want. I would say, though, that even the POMDP language as I've written it there can be insufficient for the manipulation problem, because there's one place where it has a weakness. Everything is a POMDP, I give you that, but it's a POMDP in some state space, and if I've chosen some representation of my state x and written a probability distribution over x, I may or may not have gotten the right state. So even p(x) might not capture the true state. And I mean this in a deep and important way; I'm not trying to be flippant about it. I think most of the control theory that we've done has assumed that we know at least what the state representation is: that, for instance, the joint angles of my robot, the joint velocities of my robot, the quaternion plus translation that specifies the pose of the brick, plus its spatial velocities, that that is the right state, and that if I can write a distribution over state, and I can solve that, then I've somehow solved the problem. But that broke down when we started thinking about category-level manipulation. As soon as you say you have uncertainty about the geometry of the object, then suddenly, where does the uncertainty about the geometry fit in x, if x is just the positions and velocities? So x somehow needs to encode uncertainty about geometry, maybe even uncertainty about the mass. For that particular type of task you didn't need to know the mass, you just needed to know the geometry, really, but certainly there are tasks, like the force control tasks we talked about, where the mass estimates are important and the like. And this is maybe one of the biggest ways we've railroaded ourselves a little bit in classic control for robots: we say, okay, we're going to estimate a distribution over the possible joint positions and velocities, world positions and velocities, and we haven't naturally encoded geometry and mass uncertainty and the like. So people are really excited these days about, let's
say, well, maybe I've got an implicit representation of geometry: maybe if I have a NeRF or a signed distance function or something like this, I can use that in my state representation, and I can somehow write controllers that work for many geometries. That's a huge, hugely important topic. But even that can fail. Even if I had a beautiful way to write x with positions and velocities of the robot and the brick, and the geometry included, and the mass included, there are still problems. I have only a handful of examples that I always use, but now what if I'm just doing this: okay, I could write positions and velocities of all the pieces, but the number of pieces is changing, and the contact mechanics, that's hard stuff. If I start writing that down and asking about the dynamics, how they evolve and how the state space evolves, it just kind of breaks down; the way we would write that control problem breaks down. That makes my multibody head explode. This one does too: I consider this a pinnacle of manipulation; we all do it a couple times a day, and it's incredible, but what is the state of the shoelace? Yes? Do you have the answer? That would be amazing. I'm sorry, bro. Okay, great, awesome. So let me come back to that; I'll just do my three things, and I'll come back and try to answer it. So when you get to deformable objects, suddenly the state representation question gets really rich again too. We can use methods from finite elements and the like; there are ways to represent the state in order to simulate it, of course, but is that the right state to build a controller? I talk about spreading peanut butter; there are a few examples that I feel capture most of the issues. What is the state of the peanut butter? That's ridiculous, right? But that task should be very easy; the way we've typically written down feedback control makes it seem extremely hard. And people clearly have these strategies, where you can kind of feel when the peanut butter clumps up, and there's a lot of tactile feedback and visual feedback happening; there's probably not a finite element or a granular media model of the peanut butter in my head. And the last one I always talk about is buttoning my shirt. Okay, so this is a how-to-button-your-shirt video. But if you think about what is required for control: we button our shirts all the time, but probably the first step of buttoning my shirt should not be to completely estimate the entire state of my shirt. I probably have a very nice local policy that works with feedback on my fingers or whatever; it's not very visual, I don't think. So there's something that the mature view of control isn't doing, and that's what I want to talk about today. So, Robin, to your question: why did I say it's a POMDP instead of an MDP? I think there are versions of the manipulation problem, maybe the red brick with cameras all around, that an MDP would probably describe very well. But if I were to take my coffee mug and occlude it, put it behind here, then suddenly partial observability has a very physical manifestation, and it happens all the time in manipulation. If I have a hand and I go to pick something up, the hand almost always occludes my head-mounted sensors. It's really annoying: at the time you want the information the most, you've occluded yourself. And people say, oh, I'll put a camera on my hand, but then cameras go blind when you put them too close to an object. So partial observability, even
if the world didn't start with occlusions and the like, is a real problem in manipulation, so I think the POMDP is the more general framework for that. Thank you for asking; that's good. Okay, so maybe you get my point: it's a hard control problem. Even if you knew the state space, you'd probably have to think about uncertainty over that state space, and guess what, we don't know the state space in a lot of interesting problems, and even trying to know the state space seems like a tall ask. So what do we do about it? The hope is that some of these tools for machine learning are going to help out, and they've done incredible things over the last few years, and they've kind of come in from the other end. So this is maybe coming from the multibody equations up, adding all the complexity, and there's another set of tools that started with just images and have been working back towards control. I would say the field is working from both ends, and we haven't quite met in the middle yet, but I'm optimistic; we're pointing right at each other, coming more and more together. Okay, so if I'm going to do machine learning for this, what's the machine learning problem that I need to solve? If I think about the multibody case again, let's distinguish a couple of the different control approaches. So I have my plant here (I'll do it in block diagram language), and I have y here; these are my cameras, for instance; my sensors here are generally cameras. The way I've always talked about this is that I have some internal state inside my system, some inputs, and some outputs. That's exactly what Drake makes you say: I've got some state dynamics, I've got an output, I've got an input. The output of, let's say, the manipulation station is the cameras, and you typically don't have direct access to x. That's what I'm doing with my perception system: maybe I'm outputting something like x-hat; then this is my planning and control (I'll just write control for now), which goes to u, and around. So I would call this a state-estimation-plus-full-state-feedback architecture, where this is my state estimate and this is my full-state feedback, which takes x-hat as an input. POMDP belief-space planning, in the language of these diagrams, is a superset of that: if my perception outputs an entire probability distribution over x, then I have to write some controller that knows how to consume that, to do something more than just assume x is given to me; I've got an entire distribution over x. This is what I get in my POMDP belief-space architecture. Belief space is just another name for my estimate over x, that probability distribution. And in both of these settings, the perception system could be a Bayes filter, if you know about filters; a Kalman filter is a simple version of a Bayes filter, just to connect those words, if those are something you've thought about. But like I said, this is more than you need for control, and I remind you that the controller that was written for this is extremely robust: it sort of has a sense, implicitly, that someone could push me with a hockey stick, and it's an incredibly good controller. I don't know that it's optimal in any way, but it's highly functional, and it is not trying to estimate a full belief over the possible environments. It's not trying to estimate with high accuracy the angle of the door handle or the door; it's using a much smaller summary. Andy and colleagues (I know it's very distracting) wrote a beautiful controller that used the sensors more directly; they didn't try to estimate the full state of the world, and it works really well. So there's some sense, I think, in which that picture makes the problem look too hard. So the way we want to think about it today, to launch into our further discussion about control, is to go to the other extreme and appreciate that there's another extreme here: what if I just take my plant, with my observations y coming out, and I go directly from y to u, even if these are cameras? I'll call this output feedback (it's not my name; it's output feedback control, as opposed to state feedback control). When y are pixels, or camera images, and u are torques, people like to call it pixels-to-torques. It's a really funny term: a lot of people say pixels-to-torques; some people say it like it's awesome, "pixels to torques!", and some people are like, pixels-to-torques is the thing no one should ever do. So it's a highly volatile term; pixels-to-torques for self-driving sounds kind of scary, you know. But it's a very powerful possible framework, particularly if it's high rate, if this controller updates frequently. Then I would also call it a visuomotor policy; in the manipulation space we'll call these visuomotor policies. That's kind of a weird caveat that I'm putting in there, but I want to distinguish it from a diagram that looks a little bit like that but is more of a classical sense-plan-act architecture; the type of policies I want to think about today are ones that are looking at the camera all the time and constantly making decisions. This, of course, should just encompass those: I could take both of those, draw a diagram around them, and call it output feedback, and that's true, but this version of the picture opens up the door to different architectures that don't impose that structure. So that's the big question: what is the right architecture for that?
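As a concrete reference point for the Bayes/Kalman filter mentioned above, the linear Gaussian special case that classical state estimation is built on, here's a minimal sketch. It's a toy 1D constant-velocity tracker; the dynamics, noise levels, and matrices are all made up for illustration:

```python
import numpy as np

def kalman_step(x, P, y, A, C, Q, R):
    """One predict/update cycle of a linear Kalman filter.
    x, P: prior state estimate and its covariance
    y: new measurement; A, C: dynamics/observation matrices
    Q, R: process/measurement noise covariances."""
    # Predict forward through the (linear) dynamics.
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    # Update with the new measurement.
    S = C @ P_pred @ C.T + R
    K = P_pred @ C.T @ np.linalg.inv(S)          # Kalman gain
    x_new = x_pred + K @ (y - C @ x_pred)
    P_new = (np.eye(len(x)) - K @ C) @ P_pred
    return x_new, P_new

# Toy example: estimate position+velocity of a 1D particle from noisy position.
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])            # constant-velocity model
C = np.array([[1.0, 0.0]])                       # we only measure position
Q = 1e-4 * np.eye(2)
R = np.array([[0.01]])
x_hat, P = np.zeros(2), np.eye(2)
rng = np.random.default_rng(0)
x_true = np.array([0.0, 1.0])                    # truly moving at 1.0 m/s
for _ in range(100):
    x_true = A @ x_true
    y = C @ x_true + rng.normal(0, 0.1, size=1)  # noisy position reading
    x_hat, P = kalman_step(x_hat, P, y, A, C, Q, R)
print(x_hat)  # velocity estimate should end up near 1.0
```

The point of the lecture, of course, is that manipulation errors are exactly not this kind of well-behaved Gaussian noise, which is why this machinery alone isn't enough.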
How should we write that down? How should we describe that output feedback policy? And then, once we choose a class of possible output feedback policies, how do we find its parameters? Is that setup clear? The nicest thing, of course, would be if we could go from y to u without ever having to pick an x: as soon as the human picks an x to represent our intermediate computation, we've assumed something about the state, and it's not going to spread peanut butter on toast; and, you know, I'd actually like that, or at least there's something similar to that that I would like a robot to do. Okay, here's my opening bid: what if we just say I want my output to be a direct mapping (whoops, you kind of told me, how about that) from camera images, let's say, directly to torques? The architectures that know how to take camera images as input tend to be neural, so I'll go ahead and say that's a neural network, probably a deep net, convolutional or whatever architecture you like. This architecture, if we were to just write a feed-forward network that goes from an image in to a control torque out, the control theorists would call static output feedback. For a linear system, instead of u equals negative K x, it would look like a direct linear mapping from observations to inputs, u equals negative K y. And it turns out that although control theory knows a lot about the static output feedback problem, most of what they know is that it's hard: even for a linear optimal control problem, finding that K is actually NP-hard. So that's not a good sign, but there is a lot known about it. Okay, but let's ignore for a minute the fact that finding the optimal parameters may be hard; we'll come back to that. Let's just ask: what is this class of policies capable of doing? Let's say I had an oracle that would just tell me the weights of the best controller in this class: you've chosen a number of layers, an activation unit, and so on, and I'm just going to tell you the best possible weights and biases. What can that do, and what can't it do? It can't deal with occlusion. If you said reach for the red brick, and I put the red brick behind my laptop screen and said go ahead and pick it up, then that kind of controller doesn't have any mechanism to reason about what it should do to pick that up. If you were watching me take the red brick and put it down there, then you would know there's a red brick there, but if your controller is only able to look at the most recent image to make its decisions, it doesn't have that power that you have: it has no memory, no history. So despite the incredible power of deep networks, there's a bunch of things this isn't going to be able to do. Occlusions is a good one. What's another one? Even if there are no occlusions, there are some things this can't do when y is a single image. Almost all of you are getting it: throwing things, catching things, smashing things. Right, you can't get mass. I mean, you could argue that you could get something about mass from a statics point of view, from a static-equilibrium-type analysis, but you can't know velocity, for instance. If I show you a single frame of a ball, and you're only thinking about that one frame, then whether it's coming towards me or away from me, I'm completely oblivious. Yes? Perfect, yes. That's a great question: Jenny says, what if I have a history? So this would be my second bid: what if I said that u at time n is a function of the image at time n, but maybe also the image at time n-1, maybe down to some n-M, or something like that? With that you could even handle some occlusions, although if I take the red brick, show it to you, and put it behind here, you'd better pick it up in fewer than M steps, otherwise you're going to forget. But certainly two frames should be enough, depending on how noisy things are, to estimate a velocity, and more than two might allow you to estimate a velocity more robustly, because you'd have enough to estimate covariances and the like. This is also something we know a lot about from more general systems theory: this would be a finite impulse response model. There are different names for it, but if it were a linear system it would be a finite impulse response model, FIR; people remember FIRs from signals and systems, yeah? You could call it a moving average filter or something like that. And we know what it can do and what it can't do. In particular, it has a limitation: it won't remember things for more than capital M steps, and you have to pick an M, and when y is an image, that could get expensive if M gets too big. So for long-horizon tasks this might not be a good choice, but for short-horizon tasks it's a perfectly good way to give some limited form of memory to a feed-forward neural network architecture. I also really like this framing because, even though you might not think of it this way if I called it a neural network with an image in, when I write it this way and think of it as an impulse response filter, you realize that this is actually just an opening. What I want you to start thinking about is that the control policy should be a dynamical system: a dynamical system that takes a time series of y's in and outputs a time series of u's. u equals pi of y is one form; this is another form; but more generally you could make an IIR filter, an infinite impulse response filter, or a state-space model. And that's exactly what's happening in the neural network world too. I want to make sure you see it that way, because we know a lot about what each of those models can and can't do, and sometimes that gets a little bit hidden in the weights of the neural network. Okay, so you could do an IIR filter with a simple change; more generally this would be called an autoregressive moving average with exogenous inputs. Oh my God, I'm not going to write that out, but ARMAX, people say, is a cool thing to say at parties. An ARMAX system would say: if you want something to have an infinite impulse response, arbitrary history, it turns out you don't have to do much more than Jenny's proposal, which was good. I can change it just a little bit, and maybe she even meant this: what if I also put in u[n-1]?
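To make the preceding distinction concrete, static output feedback versus a policy with even a little history, here's a toy sketch on a double integrator where the sensor reports only position. All the gains and dimensions are made up; the point is only that the memoryless policy cannot recover velocity, while an M=2 FIR policy can:

```python
import numpy as np

dt = 0.01
A = np.array([[1.0, dt], [0.0, 1.0]])   # double integrator: [position, velocity]
B = np.array([0.0, dt])

def observe(x):
    return x[0]                          # the sensor only reports position

def rollout(policy, steps=3000):
    """Simulate the closed loop and return the position trajectory."""
    x = np.array([1.0, 0.0])
    traj = []
    for _ in range(steps):
        u = policy(observe(x))
        x = A @ x + B * u
        traj.append(x[0])
    return np.array(traj)

# Static output feedback u = -K*y: no memory, so no velocity information
# and no damping -- the closed loop just oscillates.
static = lambda y: -4.0 * y

# The same law plus one step of history (an M=2 FIR policy): the finite
# difference recovers a velocity estimate, which buys us damping.
class FIRPolicy:
    def __init__(self):
        self.y_prev = None
    def __call__(self, y):
        v_hat = 0.0 if self.y_prev is None else (y - self.y_prev) / dt
        self.y_prev = y
        return -4.0 * y - 2.0 * v_hat

traj_static = rollout(static)
traj_fir = rollout(FIRPolicy())
print(np.abs(traj_static[-500:]).max())  # still oscillating, amplitude ~1
print(abs(traj_fir[-1]))                 # converged close to 0
```

With an image observation the same logic holds; the history just gets much more expensive to carry around, which is the cost the lecture flags for large M.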
And then y[n-1], u[n-2]: if I just cycle back my outputs, then that system, depending on the weights of course, can have an infinite impulse response. That's a neural architecture you could pick that would have more representational power than the previous one: from finite to infinite. And the linear-systems equivalent of that is super well understood. More generally, you can write a state-space model, where you say that u[n] is a function of x_pi[n], and x_pi[n+1] is some other function of x_pi[n], with y coming in. I could have a neural network with an internal state, allow that state to evolve with another neural network, and have my output be a function of that state. That would be a state-space model; linear state-space models, the Ax-plus-Bu stuff, are standard fare, and that's just what a recurrent neural network is doing. If you know what LSTMs are, long short-term memory, that's just another representation of a dynamical system, and it similarly has an infinite impulse response and can have a long history. What's most notable here, comparing this against the FIR model, is that this has the representational power to remember things for arbitrary durations. If I take that red brick and put it behind my laptop, you can just say: I'm going to devote one of the x's to remembering that the guy put a red brick behind the laptop, and a year later you come back and say, I remember where that red brick is, because you've given yourself that power of memory. This is a scratch pad for memory, and it can evolve with beautiful continuous equations or with a less beautiful neural network, but either way it's exceptionally powerful. And these are standard things: the Kalman filter, the state estimation that I talked about here, fits into this model, if x-hat is what I'm calling x_pi. That is certainly similar, but it's way cooler if there's no presupposition that x_pi should be an estimate of the original state: if no human said what x-hat is, but the learning system decided to use x however it needed to, to reproduce the input-output behavior of the system. So that's the second question: how do you find the parameters of this input-output system, whether it's static, FIR, IIR, or state-space? And a good answer would be one where you don't have to say what x-hat or x_pi is. The simplest form of this would be an imitation learning form. Broadly speaking (a slight oversimplification), there are two big classes of imitation learning. One is inverse optimal control, and that's not what I'm going to talk about now, but I don't want to pretend that I'm covering all of it (oops, that was pretty funny; revealed my biases, I guess). The other one I'll call behavior cloning. Imitation learning is also called learning from demonstrations; there are a bunch of buzzwords, but they're all the same thing; sometimes you'll just hear LfD. But let's think about the behavior cloning version of the problem. Behavior cloning is actually an old idea; I went back and looked at where it started and found a 1995 paper, just "behavior cloning". It's cool now, but it's an old idea. And really, this is a simple approach where you say: I'm going to treat control design as basically a system identification problem, a simple supervised learning problem. So if I have a human, let's say, that's trying to control my system, and I have a policy here that I'm trying to fit, a dynamic output feedback policy, then if I watch the human operate the task and I record
the y's and the u's, the actions that the human took and the observations that they had available, then I can try to do supervised learning to make the policy do the same thing. When it's a recurrent network policy, it's a slightly more interesting supervised learning problem, but we know a lot about how to train recurrent networks for sequence learning like this. I don't think many people would say this is a final answer, but it's a really interesting way to start looking at the problem, and in particular, I think it lets us sort out a lot of the architectural questions: how should we represent our policy, how valuable is it to use cameras at 10 hertz versus one-tenth of a hertz, one frame every 10 seconds, and so on. But a natural question, before I get into the details of how we do that, is: how do you even get that input data? For robots, mobile manipulation, and manipulation, as deep learning started getting good, people started coming up with more and more clever ways to capture humans doing the task. If I watched a human doing this on YouTube, say a human rolling dough, that's still a hard problem: mapping from a human's control torques and a human's real observations into robot actions. It's much easier if you can get a human to operate the robot, and then you watch directly the torques that are being applied from the human's controller, and if you give the human exactly the inputs that the robot is getting, then you have exactly the input-output data that went through the human, and you can try to turn that into a supervised learning problem. And people have been making cooler and cooler versions of this. How many people know that the Avatar XPRIZE happened this weekend? Yeah, how cool is that? Just this weekend. So the XPRIZE, they make these grand challenges about improving the world, and they picked a robotics example this year, and it just happened on Friday and Saturday. I won't play the whole thing, but the way it worked was that they had roboticists as the judges. This is my friend Jerry Pratt who's driving; this is the winning run; he was the best driver, apparently, and the system that won was by Sven Behnke and company. So there's this whole mock setup where he had to drive up; he was only given the context of the task by talking to somebody who told him what the rules of the game were. He says, you're going to have to go over here, you're going to have to pick up the cylinder, and only one of the cylinders is heavy and should be placed on this other spot. So you have to actually use your force feedback through the avatar in order to know how to complete the task. There was another one: we're going to pick up some moon rocks or something, and one of them has a particular texture, and you have to put the one with the particular texture over here. And there was a series of pretty complicated things. I'll forward a bit; he ducked for a long time, I guess, if you haven't noticed; it's pretty quick on the commute; that was the HOV lane. Should I turn that guy off? Yeah. So this is the one where he had to pick up something of a particular weight; the first one was too light, so he threw it away, and now he picks up the next one. He's driving what are actually just two Panda robots that are in a kind of gravity-comp mode, so they can be used like this, in teach mode, and he's got an exoskeleton hand, and he's driving this thing around doing super complicated tasks with force feedback, haptic feedback. And he completed the entire course; I don't think they were sure that anybody was going to
complete the entire course but but he did and they did and they did extremely well so you should go back and watch like the long version of that one of them that I really liked there was actually an entry by Northeastern that came in third that you know Northeastern just across the river they had this awesome setup with a Dexter's hand here and this is this is in their lab but they did it did very well in the competition too similar things pick up the heavy thermos he's able to do all kinds of stuff he like shakes hands okay yeah shakes hands and off to the side this is what it looks like right he's seeing through the robot eyes and wearing this cool exoskeleton it's awesome right I want one of those in my lab they're super cool if we have time at the end I'll show you my version not as cool but you can use it on deep note um crowdsourced Tilly operation which is a thing by the way people like people have started companies saying we're just going to crowdsource teleop and learn how to control your factories for instance uh I think that's a that's a super interesting business model okay so let me tell you a little bit about um about maybe how you would architect that visual motor policy and how you do some of this Behavior cloning all right so there's a kind of a canonical architecture for these things you tend to have two components you have your image coming in and you have some big deep Network like a resnet whatever something that's trained on imagenet pre-trained on imagenet for instance and is good at going from pixels through you know millions of units into uh into some other representation and there's a lot of different choices for how you pick the output representation we've talked a bit about the rise of self-supervised learning as a way to maybe train a feature representation Z you could just look at objects from two different angles and label them to be the same and somehow that can cause your your that can self-supervise a network to to represent 
something important about the scene. Your choice of z is where all the interesting work is; people are asking, what is the right z for manipulation? But it's interesting that, because of the computational burden of training a perception system, and because of the wealth of purely perceptual training data, people typically separate that massive perception system from a relatively much smaller policy representation. So the neural network that goes from z, and maybe the joint encoders of my robot, into my torques tends to be something like a three-layer network with 255 units. I kind of joke when people talk about deep reinforcement learning and then they only use three layers; it drives me crazy. But that's a very standard architecture. In fact, I've talked to people and asked, so how many hidden layers do you use, how big is your architecture, and they're like, oh geez, we started using three layers with 255 units like six years ago and we never changed it; it's just kind of baked in, all the papers use exactly the same architecture. Okay, that is the architecture that did the kind of stuff I showed you early on. This was an imitation learning pipeline that made super rich visuomotor policies for doing things like putting hats on a rack, picking up plates, or pushing sugar boxes. Other people had been doing visuomotor stuff, but this is the one that really convinced me I had to do it ourselves: the robustness that you get out of a demo like that, compared to things that estimate the location of the sugar box (which tends to work only when you don't have occlusions) and then maybe make a long-term plan and execute it. This is using high-rate feedback from the cameras, trained with imitation learning, which everybody, I would say, thinks of as an initial startup thing, though maybe some people would say it's a long-term answer. But when you're in the lab and you're pushing the shoe and it's pushing back, it's so compelling, so much more compelling than the things we've done before, where if someone came up and moved the shoe, the robot would still pick at where the shoe used to be, and then it would go through the entire task and drop off the shoe, and everybody's standing there like, the shoe didn't move, it's still on the table. The air-ball robot videos are rarer with this. Yes? They seem to be enough; so the question was why people are using such small networks. I think in reinforcement learning that was a choice made early on, and the computational cost of getting enough samples to train a bigger policy network might be significant. So if you can get away with a small network, and people have now seen that those small networks are very capable, there hasn't been a huge push to make them bigger. I think a lot of the heavy work is coming from the perceptual side. Even in this imitation learning setting, we often have a large corpus of visual data and want to minimize the number of demonstrations required, so having less expressive power in the policy can be seen as an advantage. Yes? I'm imagining, if the human's too good, the policy doesn't get to see the potential failures. That's a great question. So the question is, how do your demonstrations get coverage? There are two aspects of that, and I'll cover it a little bit at the end. Having a good enough demonstrator is important, because if someone is a little bit random, and from the same image they take two different actions, then that is annoying: it doesn't fit, it's not described as a function, you have to learn multimodal representations and things like that, and that's a real problem. The other question you're talking about is if the human is only demonstrating the sunny-day behavior. There's a very famous paper about this called DAgger, by Ross, Drew Bagnell, and a few other people, which made the point on a Mario Kart driving game. If you drive along and you only see examples of the kart staying in the middle of the road, then as soon as your fit is just a little bit imperfect, or you find a new situation and you're a little bit off the side of the road, you have no data for driving off the side of the road, and it just goes right off the edge. So the DAgger strategy, related to the older strategy of teacher forcing, would be that you don't only collect demonstrations and stop: you collect demonstrations, fit an initial policy, then allow the demonstrator to try to correct the policy, with blended control for a little bit, and eventually let go. You allow the policy to make some mistakes, and the human provides corrective actions. That's one of several answers to getting coverage of the relevant distribution; it's a response to not trying to cover every state, because that would be a hard problem. Great question. So in that particular work we used dense correspondences, as I told you about before, as our z. We would actually train dense descriptors, like you did in your problem set, and then we would identify a few points, not labeled semantic keypoints, just a few random points picked in the descriptor space, find them, and turn them into XYZ with our correspondence function. We used that for our lower level, and that turned out to be, when we tried to analyze it against the other alternatives, something that generated robust controllers and worked at a category level. This is what it looked like. Pete had a mouse teleop at the time; he was really good at it, actually. We did a couple of different versions of it, but on the order of 50 to 100 demonstrations; Pete would just stand in lab and flip a lot of shoes. It was actually super useful, if anybody's thinking about imitation learning for their project, to first set up the entire imitation learning pipeline in simulation, where it wasn't demanding human input at all: he wrote a simple controller in simulation that used full state feedback, and then he just tried to clone that and make sure that all worked. Then you have unlimited demonstrations for free, basically, and you just try to make sure that you can capture the task. It was using a recurrent network; it needed a recurrent network for some of these tasks, although even for tasks where we felt like technically a static feedback controller would have worked, we still saw better performance from a recurrent network. Back to your question about the generalization power of this: machine learning folks will talk about generalization versus extrapolation, and there are absolutely no claims here of extrapolation. Generalization would be interpolating within the data set that you've seen, and extrapolation would be doing something beyond what you've seen in the data set. Very much, in every way we could think of to make a two-dimensional plot, you could make the convex hull of the training data, and the robot worked really well when it was inside the convex hull and not so well when it was outside the convex hull. And these demonstrations, I think they were so compelling in lab. This is how we do dough rolling, for instance, at TRI, and
these are places where, a few years ago, I wouldn't have even had an answer to the question of what's the state representation for dough, how would I write a feedback controller for dough, or for noodles. We haven't done peanut butter yet, but we did do sauce spreading, which is kind of the peanut butter, right? This is apparently how real chefs spread sauce; it looked a little weird to me, but we actually studied some chefs and then did it. This is work by C1 Fang at TRI. That's a representational question I wouldn't know how to answer, but we're still getting good controllers out. It's interesting, this just came out the other day so I put it in: there's an article by one of the leads at Google Brain for robotics, talking about the push and pull between reinforcement learning and behavior cloning, where BC is behavior cloning. "BC methods started to get really good," and these are links to papers, "so good that our best manipulation system today uses mostly BC with a sprinkle of reinforcement learning on top to perform high-level action selection. Today less than 20 percent of our research investment is on reinforcement learning; actually, the research runway for BC-based methods feels more robust." That's not what people would have predicted, I think, a year ago. Andy Zeng gave a talk here at MIT, you hosted him, right? The whole talk was good, but the second slide just captured this so well. On imitation learning, he says: how to make a rock-star behavior cloning demo on a real robot. These are the steps. First of all, collect your own expert data; don't trust anybody else to make it perfect (this is about multimodal demonstrations and some of the subtleties you just asked about). Avoid no-action data; that's a weird thing to say, but try not to have anything in your data set where you're not moving, because your robot will get stuck. That's kind of funny; it seems like something's wrong with our formulation if everything breaks when I accidentally pause in my demonstration data. Still not working? Collect more data, until extrapolation becomes interpolation. And the last one's real: it says train and test on the same day, because your setup might change tomorrow. By the way, I linked to the seminar, and he has a slides.com too, so you can even see the slides. But that's real: if you follow steps one through four, you can make some of the best robot demos that anybody's seen, but it is important to understand the limitations. He also talks about the hunger for data. Some of the simpler demonstrations are on the order of 50 demonstrations. He's got some really nice implicit behavior cloning work (someone's doing a project on implicit behavior cloning), some of my favorite examples of these feedback policies for contact, but that was order 500 expert demonstrations. And some of the rich ones, where you're picking up and moving berries and stuff, were up to 5,000 expert demonstrations. All right, so this was an opening discussion about control, and I think it's exceptionally cool that you can get a taste of what good control would look like. We've learned about high-rate feedback from cameras; we didn't know how much we were missing until we saw what it could do. I think behavior cloning is a short-term path to study it, and it works incredibly well on real robots, given those caveats. For me it's a tool to study the problem and come from the other direction, as I move up from understanding why underactuation and control through contact and so on are hard. The misnomer that I don't want you to walk away from this lecture with, the possible impression I could give but don't want to give, is that because the problem is hard you need to use learning. That is one way to do it, but if I give you the equations of motion and they're just complicated, there's nothing about that that says you have to use learning. I think the representational power of deep networks is awesome, but it's not clear you have to take samples in order to optimize them; there are many different ways to optimize them, and we'll explore that over the next few weeks. Okay, I'm going to call it, but for anybody who wants to stick around for a minute, in case some of you find it useful for your projects, I made it so you could play with gamepads through Meshcat, and I'm going to play with a gamepad through Meshcat for just a few seconds as we wrap up. I'll actually do the non-manipulation one first, because I think it's fun. To be honest, I did it first because, I mean, I love you guys, but my kid wanted to use it for FIRST Robotics, so I thought hard about the problem she was having and said, you should use Drake, which is crazy, but I was like, oh, I can help you, but I'll do it in Drake. Okay, so this is a mecanum wheel base, and it's simulating with the full physics. Those are just ellipses; it's applying torques to motors doing velocity feedback on the wheels, and the only way it moves through the world is through the contact forces between the mecanum wheels and the ground. It's just fun to do that, and you don't have to install Drake; you just go on Deepnote. I mean, anybody can write a game-stick controller; the only thing that was a little clever about it is that we did it in JavaScript and plumbed it back through the WebSockets, because JavaScript is the only thing that you're running on your machine when you're running on Deepnote, so the only thing that could touch the gamepad is the Meshcat browser. Meshcat is listening for the gamepad, if you want it to, and allowing you to do that. Then of course after I did that, I thought, oh, I should do the manipulation station. So if you want to use this for your projects or anything, I made the manipulation station teleop have one more version: if you have a gamepad, pretty much any gamepad should work, I think it's a pretty standard interface. Oops, I forgot to press a button: you have to opt in with JavaScript, it won't just read your gamepad without you doing anything, so you press the button once. Okay, here's the standard manipulation station, and now I can drive it around with my gamepad. There are a bunch of tricks for mapping a gamepad robustly to real commands; I've implemented maybe half of them or something. I'm really bad at this; I don't know if it's my brain to the gamepad that's bad or the gamepad to the robot, but it's really hard to do. But I can definitely pick up a red brick, it's not that bad, so you guys could use this and make it better, hopefully. Oh, I missed. All right, it's okay, fun. Not quite as cool as the Avatar, but useful maybe. Okay, I'm happy to take a couple of questions; I have to run today, but otherwise I'll see you guys next week.
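One concrete way to see behavior cloning as plain supervised learning, in the spirit of the simulation trick mentioned in the lecture (clone a scripted full-state-feedback controller so demonstrations are unlimited), is the toy sketch below. The double-integrator dynamics, the expert gain `K_expert`, and all the numbers are illustrative assumptions, not the lecture's actual pipeline:

```python
import numpy as np

# Behavior cloning in its simplest form: treat control design as
# supervised learning on recorded (observation, action) pairs.
# Here the "expert" is a known linear state-feedback law u = -K x
# on a toy discrete-time double integrator; all values are made up
# for illustration.

rng = np.random.default_rng(0)

A = np.array([[1.0, 0.1], [0.0, 1.0]])   # double integrator, dt = 0.1
B = np.array([[0.0], [0.1]])
K_expert = np.array([[2.0, 3.0]])        # expert policy: u = -K_expert @ x

# 1) Collect demonstrations: roll out the expert from random initial states.
X, U = [], []
for _ in range(50):
    x = rng.normal(size=(2, 1))
    for _ in range(20):
        u = -K_expert @ x
        X.append(x.ravel())
        U.append(u.ravel())
        x = A @ x + B @ u
X, U = np.array(X), np.array(U)

# 2) "Clone" the behavior: least-squares fit of a linear policy u = theta^T x,
#    i.e. minimize the action prediction error over the demonstration data.
theta, *_ = np.linalg.lstsq(X, U, rcond=None)
K_cloned = -theta.T

print(K_cloned)  # recovers approximately [[2. 3.]]
```

Because the demonstrations are noise-free and the policy class contains the expert, the fit recovers the expert gain exactly; with noisy or multimodal demonstrations this least-squares fit breaks down in precisely the ways discussed in the lecture, which is where recurrent policies and DAgger-style corrections come in.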
[Robotic_Manipulation_Fall_2022 / Fall_2022_642102_Lecture_20_Intuitive_physics_part_2.txt]
Okay, let's do it. I want to spend one more lecture today on what I think is such a big topic, learning state representations; there are entire conferences on it and the like, but I hope to say a few more interesting things about it and make you think about it today. The problem, in my mind, of intuitive physics is largely a problem of learning models, which is a problem of system identification; that's what the control people, and lots of other people, called it before. Fundamentally it's about taking input-output data from a system. You're given data in the form of (I'll use this as my shorthand for the trajectory of data) the u's and the y's, plus a parametric model, typically like we have been writing in state-space form, for instance x-dot = f(x, u; theta), y = g(x, u; theta), where theta are my parameters (g might depend on the parameters too). And the goal is to find the parameters theta to minimize my system ID objective, some form of prediction error. I talked very briefly last time about the difference between a one-step error versus a long-term error, and there are many different notions of error; if anybody has thought much about online optimization, the new way to talk about error in this setting would be as regret, using the language of regret. But basically I want to predict my y's given my u's, and I'd like to minimize the prediction error between the model and the data. Okay, so again, I'll always try to start a little bit with some bigger picture and then dive into some examples and details. I want to appreciate this in the bigger picture of the things we've been doing in class: this is one of the problems you can ask given those equations, but actually there are many questions we've been asking about those equations. So the slightly more exhaustive form of this is that we could in general have dynamics that are time varying, that depend on state, depend on u, and have potentially random inputs w; this is kind of all the things we've been talking about, and now we're emphasizing the parameters theta. Same thing for y: it can depend on all of those things. And by the way, in code, you maybe finally appreciate (or maybe you still don't appreciate, or might not like) that this set of things is exactly what we call the Context in Drake. That's why it's a structure: it has inputs, states, parameters, etc. It exactly maps to that picture of the world, and because sometimes you need all of them and sometimes you need some of them, it's just a structure, and that's why we write the code that way. What's nice is that in this dynamical-systems view of the world, f could be a multibody plant, f could be a neural network, or, even more interesting, f can be a system, a diagram that has a multibody plant and a neural network and maybe an inverse dynamics controller all put together, and what I'm saying here still holds: I can still describe it with some parameter vector theta that I search over in order to solve some input-output identification problem. In fact, there's a lot of work going on now where people are saying, what if I do identification over both the trajectory optimization and the model, for instance; there are lots of combinations where thinking about f as an entire diagram that has some autonomy inside it can be an interesting version of the problem. So think about this as my general palette of models that we're thinking about. And it's just kind of cute, maybe kind of nice, to realize that all the things we've been talking about in class are just slightly different takes on what we're doing with that set of equations written in that form. Simulation, for instance, is just: given x(0) and a bunch of u's, solve for x. You're given the parameters and all those other things, but that's basically just solving for x by integrating forward. Planning is just: given x(0) plus an objective, solve for some trajectory x. Perception, certainly in its state estimation form, is: given my parameters, my observations, and the actions that I know I put in, solve for x. And system ID is: given the data we talked about, solve for theta. But these are all very related problems; they must be very related problems, making slightly different attacks at the same basic governing equations. The particular way you would address this problem when y is a point cloud can leverage particular structure of those equations that you don't typically think about when you think about planning, but they really have to be the same thing; it's just a matter of specialized algorithms exploiting special structure in the equations. And to some extent there have been trends towards just thinking, let's do gradient descent at this level of abstraction for any one of these chains. But I think it's very powerful to think about it in the bigger context. In fact, the difference between perception and system ID is sort of subtle; you could almost argue they're solving, at this level of detail, the same problem, because some people would say perception also has to estimate the parameters, and to estimate the parameters you probably have to estimate the state. Really, the only difference is that perception's trying to do it online, and system ID is typically done in batch, where you have a collection of data beforehand from which you're trying to estimate the parameters. Is that useful to think of it at that level? I don't know, but it's very important to me. This is partly why I think the dynamical-systems view of the world is so powerful: I can apply the same reasoning about all of these different algorithms to systems that might just be a multibody system, might have a neural network, or might have a whole combination of them. So what changes as I change the model class, if I change f? Last lecture we said that when f is a multibody plant, there's special structure that makes system ID, or parameter estimation in this case, a least-squares problem, at least for some descriptions of it. That's what I spent most of the lecture on last time, telling you some of the details of how that plays out. And I really want you to think about this as a spectrum of models. Let's say this is my model spectrum here: on one side there are very structured models with specialized algorithms, and on the other there are very general models with more general and potentially weaker, maybe necessarily weaker, algorithms. On the extreme structured end of this spectrum, if f and g are, let's say, linear models or tabular models, those tend to be the two cases where we can do the most, where we can bring the biggest, most powerful algorithms to bear; that's the extreme structure where we can understand everything. I'd say the multibody models are somewhere over here. Neural networks, broadly speaking the deep learning stuff, are here; they're not actually all the way to the extreme. I think big black-box game-engine kinds of simulators are harder still: if, for instance, g is a game-quality rendering engine and it's sitting on the GPU, and I can put doubles in and I
get doubles out but I can't really ask for any structure there that's maybe the hardest case the most General types of models and all we can hope to do there are the kind of Black Box optimizations we've talked about you know if you can assume that your models if f is at least known to be differentiable then that's a little bit more structured and I can do gradient descent type algorithms right and if I go farther and farther then there's more structure and more powerful algorithms but you're making more assumptions right so so the linear models cannot capture the complexity of the world right the multi models can capture a little bit more but not all they're not going to capture deformable things and fluid things very efficiently right so what I kind of want to do today is just walk a little bit more on this space and give you a couple more nice examples on that line some places maybe even here we can learn a little bit more so there's some things that about the system ID problem that are very clear if you go all the way here there's some things that we're learning more and more on that end of the of the spectrum is the real world yeah the real I guess the real world is um is is on the right if you're I mean the real world is not uh a mathematical object that I'm searching over so I guess I don't I wouldn't put it directly on the Spectrum these are the class of models right but it's a good question in the state estimation case why is you observed yeah at the standard State estimation I mean if I think in the simplest case maybe be the common filter for instance right where it really explicitly takes in the actions that you've set and the observations you get and estimates X right and I think more generally that's true it is true that a lot of times when we've talked about you know Point Cloud perception or whatever we haven't used this we've been just learning static models from y to some pose but I think as soon as you think about that as a dynamical system and 
do a filtering kind of approach then you would naturally bring in you great question there's a bunch of different you know entries on here and I have to I wanted to avoid listing all the things there's a lot of shiny things up and down the axis here and I I don't want to just list them off I'll just call it a few examples that I think I can say something interesting about but people I was looking to see if if they're here but there's a few people in the class that are working on like lagrangian neural networks for instance they they're they're an interesting point of the spectrum and uh there are many many entries okay so um multi-body can't do everything right we've we certainly I'm the first to admit that I don't I I so so the multibody parameterization you know saying that f equals m a kind of is the governing law that makes some assumptions and I don't feel bad about that I think that is um that limits the things it can describe but it also gives power to my ability to in terms of writing algorithms but it even in terms of generalization and other things right so it says it it it makes this so neural networks are general purpose function approximators and they're good for that that's an important thing to have but you know a multi physics-based model makes a few specific things saying I will not be able to describe things where energy is not conserved because I'm putting in as a conservation law conservation of energy you know mass is conserved these are these are priors that you're putting in strong priors which limit the class of models you can describe but also allow you to generalize more broadly so you have to just decide I think and that's the the name of the intuitive physics game is how can we walk up and down this use the structure when it makes sense but not give up the structure you know uh too quickly foreign and there's things like spreading peanut butter on toast where I know mass is conserved but I don't know how to use that efficiently with with 
multi-body equations so there's plenty of work to do for you guys in the middle oh let's just um limitations of multi-body parameterizations just to name a few specific ones um so to be clear the when I'm talking about the multibody we've been talking about our state representation is positions and velocities and our parameters we saw in the in the last lecture that the parameters are only of a few types right they're the inertial parameters Mass moment of inertia Center of mass location and then the the kinematics of the multi-body tree so the location of the joints relative to each other and stuff like this these are the parameters of the multibody family as typically written which doesn't immediately address I think we can grow them towards it but it doesn't immediately really address perception perception why because The Joint positions and velocities of the um of the multibody state may or may not be observable for instance that we don't have any notion of that in this set of equations um that's an easy one to say I think an important one that I almost feel bad about like I feel it's almost a historical accident that um we don't really we have ways to talk about uncertainty over velocities or uncertainty over positions but we don't really have uncertainty over geometry like somehow almost historically we haven't really explicitly parametrized the geometry parameters in those equations and written distributions over them and ml has been pushing us on that front but I would say that's a obviously important to multibody equations but somehow tucked in As a detail it's typically assumed that we know the geometry and for like I said for a robot you kind of just get a ruler and measure the link length and and don't worry about it but when it's contact mechanics and the geometry influences your friction cone and whether you're in contact or not those become vital to the time evolution of these equations but we it's surprising maybe that we haven't spent enough time 
thinking about distributions over geometry and the like. Building on that: I wrote multibody, but the stuff we've been talking about is really rigid-body. Multibody could be deformable, but we haven't talked yet about deformables or fluids; I'll just say "plus plus" all the non-rigid types of physics. And maybe there's one more really big one I'll put on my list: this paradigm doesn't immediately give ideas about state abstraction or model reduction. I'll try to fill in some of those with examples, but this is just motivating why I'll take a couple of deep dives today. So what do I mean by that? When I gave the example of chopping onions as a hard problem: I know that I could definitely simulate chopping onions, I could write the poses and velocities of all the onion pieces, but that's probably not the model I want to identify and use for planning and things like this. There should be some way, when I'm chopping onions, to write a simpler version of that set of equations that still builds on physics, I would think, but doesn't have to carry the entire complexity. Okay, so these are just big categories; there are things I chose not to put in here, but this is maybe a big list of things we could do better. Questions at the high level? You guys can fire away; we're getting into the deep dives of more boutique lectures, I guess, so feel free to ask off-the-cuff questions. Let me take a slightly deeper dive on this side and tell you a few things about what happens if you're willing to put even more structure in. I could have picked tabular or linear, but I'll pick linear first, and I think, because we have such good algorithms for this, there are a few more lessons you can take from the linear case that will apply all
the way, but they're just so hard to think about over there and so clear over here. Okay, so let's think about what it would look like to do linear system ID. I'll do it at a level that gives you some details, but I'm also aiming at the big points. In this case the model class I'm searching over, in state-space form, is always written basically like this, the very classic form of the linear state-space equations: x[n+1] = A x[n] + B u[n] + w[n], y[n] = C x[n] + D u[n] + v[n]. We typically distinguish between the process noise w and the observation noise v. So the system ID question here is: given, again, data u and y, find A, B, C, and D to explain the data. A, B, C, and D are matrices; they are the parameters. Those are my thetas: if I were to vectorize them, they'd be my thetas. And the solution, as I promised, is very powerful, very clear, and it's going to go back to least squares. It's going to look a lot like the multibody case in the simple setting, and it's going to do more than the multibody case in the output setting. Remember, in multibody I assumed the data was u and x: I chose to say that I had positions and velocities and I had torques, and I was trying to find the masses and inertias and lengths. So the closest analogy to what we did in multibody would be a subproblem: given u and x, find A and B. And it's going to line up just like multibody: remember, we took our data matrix, we took our parameter vector, and we did least squares to minimize the one-step error. Guess what, we can do exactly the same thing here, and it's even simpler because we've already written it in linear form. So we're going to make our data matrix times our parameters, and the way I'll write that is, let's call it X. I'm going to minimize over A and B; I'll write it in the rolled-out form first, just to make clear what I want to do. Just to make my notation clear when I write these things down, I'm using the model's output as the prediction and x[n+1] as the data. Over all of my data, I'd like to find the A and B that make the least-squares residual as small as possible: minimize over A, B the sum over n of ||A x[n] + B u[n] - x[n+1]||^2. That's a one-step prediction error, and I just want to find the minimizing A and B. Is that clear? And I can do that in data-matrix form by constructing a big data matrix; it's actually cleaner to write it as a couple of data matrices. Let's make the data matrix for x, which is just all my x vectors horizontally concatenated; I'll do my u data matrix; and then, even though it's redundant, I'll make my X' data matrix, which is just the whole thing shifted by one. Then I can write this as one big least-squares problem where I'm trying to minimize [A B] times the big matrix [X; U], minus X'. It's just another way to write the same thing in matrix form, and I only chose to write it because it forms the data matrix like we did in multibody; I want you to see that connection. Before, this was my masses, my lumped parameters; now it's my [A B] matrix. If multibody was least squares, it's not surprising that linear is least squares in this case. And you can just solve that: it's the backslash operator in MATLAB, or np.linalg.lstsq or whatever. That's just a simple least squares, and it has important properties that people understand very well: as long as the noises w and v in my governing equations are uncorrelated with x and u, this is an unbiased estimator. People know a lot about this; if you want to write a system ID paper and get it accepted at the SysID conference, you have to say all these things about
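As a quick aside, here's a minimal NumPy sketch of that state-data subproblem (given u and x, find A and B) via one-step least squares. The system matrices, rollout length, and noise level are all made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up ground-truth linear system: x[n+1] = A x[n] + B u[n] + w[n].
A_true = np.array([[0.9, 0.1],
                   [0.0, 0.8]])
B_true = np.array([[0.0],
                   [1.0]])

# Roll out one trajectory with random inputs to collect data.
N = 500
X = np.zeros((2, N + 1))
U = rng.standard_normal((1, N))
for n in range(N):
    w = 0.01 * rng.standard_normal(2)      # small process noise
    X[:, n + 1] = A_true @ X[:, n] + B_true @ U[:, n] + w

# Stack the data matrices so that [A B] [X; U] should approximate X'.
XU = np.vstack([X[:, :N], U])              # data matrix, shape (2+1, N)
Xp = X[:, 1:N + 1]                         # the same data shifted by one
AB, *_ = np.linalg.lstsq(XU.T, Xp.T, rcond=None)
A_hat, B_hat = AB.T[:, :2], AB.T[:, 2:]

print(np.round(A_hat, 2))
print(np.round(B_hat, 2))
```

Because the noise is zero-mean and uncorrelated with the data, the estimates land very close to the true matrices, matching the unbiasedness claim above.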
asymptotic bias and everything like this; it's a very mature discipline. The thing we hadn't done in multibody that we can do in linear systems is the input-output case: if I don't even know what x is, I want it to discover x for me. That's beautiful; that's what we hope our neural nets will do. But we can do it perfectly here, and doing it reveals a few details that really are general. For the input-output case, back to the given u and y: the algorithm I'll describe here goes by a few different names, but the most famous piece of it is called the Ho-Kalman algorithm. There are tools in MATLAB to do it; the one closest to what we're doing today is N4SID, if you ever go looking for it; that's the particular choice of realization we're going to use. This is more complicated, because it's only a least-squares problem the way I've written it: you'd like to just form a new data matrix again, but I can't fit A and B in the least-squares sense until I have x. X is unknown; I have to somehow solve for x and A and B jointly. What's amazing is that you sort of can do that, and the reason is that, complex as that is, you can actually write y as a function of the history of u's. So the first thing is to look at a model of the form y[n] = sum over k of G[k] u[n-k]; this one's potentially biased, but it turns out I can write y at some long step as a rolled-out version that's still linear in the entire history of u's. If you connect back to when you learned signals and systems, if you took 6.003 or one of a couple of other places you might have seen something like this: these G[k] are the impulse response of the dynamical system, often called the Markov parameters of the model. The point is we can actually fit a model without solving for x that describes the input-output data, and this one we can just use least squares for; then, as a second step, we can try to recover A, B, C, and D that explain G. That's the magic of the linear system ID case. But doing that second step, find A, B, C, and D to describe G, reveals some fundamental truths about system identification that I want you to see. One of the most important is that the optimal solution, the reconstructed A, B, C, and D, is fundamentally not unique; it's only unique up to a similarity transform. If you know your linear algebra, that's a precise statement, but I can make it intuitive with a couple of examples. If my goal in life is to model the input-output data and I've chosen some state realization x on the inside, that x is only a construct inside my system; it has no bearing on the actual data. So if someone were to come along and say, I'm going to flip x1 and x2, just switch them, that would change the rows of A and B (and the columns of C), and it has to describe the same model. Is that clear? That's the example of permuting x, swapping x1 and x2. Does that make sense? So if you were to train a recurrent neural network, an LSTM or something like this, with 50 hidden units, and you were to just swap the 49th unit with the 30th unit, just make a switch operation, it's going to describe the same model: you have to rewire the inputs and outputs, but it's got to be the same model. Why is that important? Because the optimization is not unique, and that can actually affect things: you can't write an optimization that asks for a unique solution to some complicated cost function. In linear system ID
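That second step can be sketched in a few lines of NumPy, in the spirit of the Ho-Kalman realization just described (the system, horizon, and rank threshold here are invented for illustration): given the Markov parameters G, an SVD of their Hankel matrix reveals the state dimension and yields an (A, B, C) that reproduces G, up to a similarity transform.

```python
import numpy as np

# A made-up true SISO system (unknown to the algorithm).
A = np.array([[0.9, 0.2], [0.0, 0.7]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

# Markov parameters G[k] = C A^k B: the impulse response data.
K = 20
G = [(C @ np.linalg.matrix_power(A, k) @ B).item() for k in range(2 * K)]

# Hankel matrices of the impulse response (plain and shifted by one).
H0 = np.array([[G[i + j] for j in range(K)] for i in range(K)])
H1 = np.array([[G[i + j + 1] for j in range(K)] for i in range(K)])

# The SVD reveals the state dimension: only two significant singular values.
U_, s, Vt = np.linalg.svd(H0)
r = int(np.sum(s > 1e-8))                     # numerical rank

# Factor H0 = Obs @ Ctr and read off a realization.
sqrt_s = np.sqrt(s[:r])
Obs = U_[:, :r] * sqrt_s                      # observability factor
Ctr = (Vt[:r, :].T * sqrt_s).T                # controllability factor
A_hat = np.linalg.pinv(Obs) @ H1 @ np.linalg.pinv(Ctr)
B_hat = Ctr[:, :1]                            # first (block) column
C_hat = Obs[:1, :]                            # first (block) row

# The realization matches the data, up to a similarity transform.
G_hat = [(C_hat @ np.linalg.matrix_power(A_hat, k) @ B_hat).item()
         for k in range(2 * K)]
```

The recovered A_hat generally has different entries from A, but the same eigenvalues and the same impulse response, which is exactly the similarity-transform non-uniqueness discussed next.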
we're very clever about knowing how to parameterize a family of models that are the same up to a similarity transform, and I don't know how to do that for LSTMs, but the problem has to be there. (Sorry, that's right; I realized my notation was overloaded and I switched, thank you.) Right: if you just took the activations of those units and flipped them, it's the same model. That means that as gradient descent tries to go downhill to fit your recurrent network, it had to make a choice: I'm going to use neuron 32 for the red brick and neuron 33 for the blue brick. And the fact that there is a choice it has to make is bad for optimization, I would say, in general. It means you have to do some symmetry breaking; there's a place where gradient descent had to make a choice and go down one of those paths. In more mature optimizations we structure the parameter landscape so that it doesn't have to make that choice, and I would imagine there's a future, we're not there yet, where we're parameterizing neural networks better so that they have this kind of topology. That's just one example. Similarly, the scale of the state variables x is unspecified: I could take x and say I want it to be between negative one and one, or I could say it should be between negative 10 and 10.
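To make the non-uniqueness concrete, here's a tiny NumPy check on a made-up two-state model: any invertible T applied as x -> T x, so A -> T A T^-1, B -> T B, C -> C T^-1, leaves every Markov parameter (and hence the input-output behavior) unchanged. A state permutation and a state rescaling are both just particular choices of T.

```python
import numpy as np

# A made-up model: x[n+1] = A x[n] + B u[n], y[n] = C x[n].
A = np.array([[0.9, 0.1], [0.0, 0.8]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 1.0]])

def markov(A, B, C, K):
    """First K Markov parameters C A^k B of the realization."""
    return [(C @ np.linalg.matrix_power(A, k) @ B).item() for k in range(K)]

# Two similarity transforms: permute the states, or rescale them wildly.
for T in (np.array([[0.0, 1.0], [1.0, 0.0]]),   # swap x1 and x2
          np.diag([1e6, 1e-6])):                # rescale the states
    Tinv = np.linalg.inv(T)
    A2, B2, C2 = T @ A @ Tinv, T @ B, C @ Tinv
    # Different parameter values, identical input-output behavior.
    assert not np.allclose(A, A2)
    assert np.allclose(markov(A, B, C, 30), markov(A2, B2, C2, 30))
```

Both transformed realizations pass the check: gradient descent on an input-output fit has no way to prefer one over the other, which is the symmetry-breaking problem being described.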
negative a million and a million, right? If you give me an A, B, and C with one of those scalings and you want to change it to the other, I'll just scale up the input, scale down the output, and have exactly the same model; there's nothing in the problem that tells me how to scale that. So in linear systems we have a beautiful understanding of this: Ho-Kalman makes a particular choice. It says, of all the similar models, I'm going to pick the one that is well balanced numerically, so that (I know I'm saying things I haven't given you the background for) the controllability Gramian and the observability Gramian are balanced. Those are called balanced realizations, and we understand a lot about them. Maybe something like that is happening through implicit regularization in the neural models somehow, but it seems absent from the discussion. So I really think there are many lessons here. I know how to do it in linear systems, but the lesson has to matter for more complicated systems too; it's not unique to linear systems, but the clarity of the math in linear systems lets us make progress on it, and then we have to push that down the road. What is that progress? Do you think it's a simpler optimization problem, or what do you hope to gain from that? One way it might manifest would be different architectures, right? CNNs were a beautiful architecture for translation invariance in images. I can't tell you that, for state-space models in control, this is the right architecture to use; we haven't given you the architectures that are obviously right. There are lessons that might say you should have an architecture that is, well, we're seeing equivariant networks and things like this; these kinds of things I think will potentially carry some of the lessons from linear optimization up into the more complicated settings. I was also curious: do you think the benefit of these things, of the parameterization, is about overall just lower predictive error and better generalization? That's fair. How would I measure the success of the neural-network versions of system ID right now? I think they tend to be very successful in getting the training error low already; I think there are questions about how well they generalize or extrapolate, and there are questions about how reliable the convergence is. I think we can almost always make them work, but if I knew it was going to work I'd be in a different space, right? So I think it's just levels of maturity, and potentially extrapolation kinds of benefits. Great question. So you can ask: of all the A, B, C, and D that reconstruct G, choose the one that is somehow balanced. There's an extra objective saying, all things equal, I'd like the amount of control effort coming in to roughly match the amount of observation requirement. It's precisely the controllability Gramian and the observability Gramian, and I'm trying to say it clearly: there's a very natural objective where you say I'd like the eigenvalues for one half of the problem to be the same as the eigenvalues for the other half, and that's a natural choice in the linear-system setting. Okay, so let me show you: linear systems can't do everything, they're too weak a model class, but they can do maybe more than you think. As an example of connecting to perception with linear systems, imagine I have a cart-pole system, and I've got key points on it that I'm tracking. The cart-pole
is an interesting nonlinear system: if it's traversing its entire state space, then you need nonlinear equations to capture it, but if it stays near a fixed point at the bottom or the top, then you'd expect a linear system to be okay. So let me paint that picture. I have my multibody plant, my scene graph and everything, and I've added a simple system which takes the body poses and renders some key points, my little key-point system. Those key points are my output now: I'm going to let the system ID look at the key points and the u's. It does not know that this is a cart-pole with positions and velocities; it doesn't know that the data is coming from F = ma. It's just told: describe the input-output data. And if we think it's done a good job, then I would hope it figures out that there should be about four states: two positions and two velocities. So this is the data I fed it. I generated a few rollouts; I made a simple balancing controller just to keep it in the linear regime. This is my little cart-pole system, a little cart with wheels and a pole that could fall down, my inverted-pendulum thing, but the pole is not falling much; I kept it in the linear regime, which is the limitation of this model, but that's what it has to work with. Given that view, find me a model: I don't know what the state is; come up with a state realization that describes that input-output data. I run the Ho-Kalman algorithm on it, and this is what the impulse response looks like, the data's impulse response and the model's impulse response; it did a pretty good job. And what's important here is that I can solve for any choice of the number of state variables; I can ask what's the best A, B, C, and D, and it'll tell me what's the reconstruction error given that choice. I tried zero states, one state, two states, three states, and so on, and what you see is: with zero states there's a lot of error, since it's just a static input-output map; with one state it gets better; with two states it gets a lot better; and after four states (we know the real nonlinear equations have four states) you get diminishing returns. So in the linear regime we have tools that find the state representation, even a nice state representation, by some understanding that figures out that the data came from a system with two degrees of freedom and two velocities. Why isn't the error exactly zero? Because the system's not linear, and because it has noise and other things; you could try to start explaining the noise. It can't do a perfect reconstruction of the input-output data I've given it with any number of states, actually. This is what I'd like LSTMs to do too, and we do see that sometimes you can go in, look at the behavior of a recurrent network, and try to identify what a particular state is doing; it's just more complicated, whereas here we can understand it. (Yes, the dimension of D is fixed.) The reason I made that plot is that you can solve the problem for any particular choice of state dimension: I say try one state, then I have a clear parameterization and I can solve it. Because it's only one variable to search over, this is what people do in practice even for really complicated systems: you just say how many states am I going to give it, you expect diminishing returns, and at some point you say, I've got a good model at four states. Okay, do people like this version, or do you want to see robots pushing soft things around or something? I've got one more lesson from the linear case that I
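The order-selection curve he describes can be mimicked in a few lines. For a made-up stable 4-state system standing in for the linearized cart-pole (the dynamics and noise level are invented for illustration), the singular values of the Hankel matrix of the measured impulse response drop off a cliff after four, which is the "diminishing returns after four states" signal:

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up stable 4-state system, a stand-in for the linearized cart-pole.
A = np.diag([0.9, 0.7, 0.5, 0.3])
B = np.ones((4, 1))
C = np.ones((1, 4))

# Slightly noisy Markov parameters (impulse response), as if measured.
K = 30
G = np.array([(C @ np.linalg.matrix_power(A, k) @ B).item()
              for k in range(2 * K)])
G += 1e-6 * rng.standard_normal(G.shape)     # measurement noise

# Hankel matrix of the measured impulse response.
H = np.array([[G[i + j] for j in range(K)] for i in range(K)])
s = np.linalg.svd(H, compute_uv=False)

# Count singular values above the noise floor: the recovered state dimension.
n_states = int(np.sum(s > 1e-4))
print(n_states, np.round(s[:6], 6))
```

The first four singular values carry the signal; everything after sits at the noise floor, so adding a fifth state only starts explaining noise, as in the lecture's reconstruction-error plot.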
could do, or I could skip to the graph networks and stuff. (On the number of key points: it's relatively insensitive, of course up to a point, but it's sensitive in a particular way, through the noise floor basically. The key points at the end of the pole, for instance, are probably doing a lot of the work, and if I took away the ones at the bottom, the way to think about it is the signal-to-noise ratio between the noise I'm injecting, in both measurement and process, and the magnitude of the signal, how much it tells me about the dynamics. I suspect I could do it with half the key points and it would be fine, especially if I kept the ones at the top.) So what do people want? Choose your own adventure: task-relevant states, approximate information states, how do you learn a state that doesn't have to reconstruct all the observations? He votes for that, and with one vote I guess the majority has it; silence doesn't pay. Okay, let me tell you quickly about task-relevant models. It's not a closed discussion in linear systems by any means. I would say that in input-output reconstruction we're pretty mature; this notion of learning task-relevant states is still pretty new, and I think we're going back and understanding it in the linear-system setting in order to go forward with the more complicated ones. So this is my standard model, but think about what u and y are in our robot context: u might be torques, y might be pixels. And there are a bunch of people in the state-representation-learning world who have nicely told the story that reconstructing the observations is more than you need to solve control problems. This is one of the examples that I think makes the point nicely: Amy Zhang has a nice line of work where she says: if you're an autonomous car, there are features of the pixel space that are just not relevant to driving, and there are others that matter very much. If you're trying to learn a model for decision making, not just for prediction, then predicting that there's a barn or a tree is kind of wasting state and making your problem harder; it might be very hard to reconstruct that. Maybe an extreme example: you're trying to solve one of the RL gym examples and someone's playing a movie behind the cheetah. If your task is to do model-based RL for something on the cheetah, you shouldn't try to reproduce Gone with the Wind or whatever's playing in the background; that's just a lot of work to reconstruct, and it would require a lot of state. So how do you not do that? The standard reconstruction cost from system ID asks you to reconstruct all the observations; that was always the classic framing. So there's a lot of interest, and some nice work I think, on learning task-relevant models, and I have to pick a slice to tell you about; I think the one that complements the linear story is a particular version. (I know some people are working on things like student-teacher kinds of models of this; that would be one way; I'll make the connection when it makes sense.) So: what makes a model good for decision making, what makes a good x? Ultimately, what makes a good x is whether I can write my optimal policy as a function of x. The real objective would be: if someone told me the optimal policy, I would like to find an x that captures enough information about the task that I can make optimal decisions as a function of x. I think that's a very natural way to say what a task-relevant x would be. And the claim from that picture is that knowing where the barn is should not affect my
policy, and therefore it doesn't need to be in x, whereas knowing where the road is very much decides the policy, and so it must be in x. So this metric, x should be sufficient to make optimal decisions, is a nice metric for task relevance. The problem is there's a chicken-and-egg issue: our goal in system identification is to build a controller, so if someone has to tell you the optimal controller first, then pursuing this objective directly is tough. People try; this is where the student-teacher kind of idea comes in, for instance; there are ways people try to find surrogates for the optimal policy and find an x that is sufficient for predicting it. But there's a nice idea, an interesting theorem, and I like thinking it through in the linear-systems case. In the reinforcement learning / dynamic programming / optimal control world (I guess you guys know all those, so I can say the RL/DP/optimal-control world), it turns out that if x is sufficient to predict the one-step reward, then it's also sufficient for making optimal decisions. That's pretty cool: if x is sufficient to predict the one-step reward or cost, then it's sufficient for this problem. An interesting way to think about that: I have a system going on here, but it kind of has two outputs. I have the full observations, and at every n I also have a reward, a scalar reward function; the observations might be high-dimensional images, but this is always a scalar reward. And the theorem says: if you have a state inside here that can perfectly predict reward, then a controller built on that state can perfectly reproduce the optimal policy. Question: the reward is a function of x and u in general? That's a different thing, so I'll just repeat it. Leroy says: well, since reward is a function of x and u, then it would not seem surprising; but x is high-dimensional and I'm only giving you a scalar. I'm giving you a scalar observation, and it's not clear that you can go from a scalar observation back to a huge internal state. And it's only the rewards you observe during the rollouts in system identification: I've observed, say, 10 rollouts of my system, and I got data for u, data for y, and data for r; I don't have some magical ability to see what the reward would have been at different states. But the model has to be able to predict; so maybe your point is that this is a big requirement, in the sense that it has to be able to predict the reward for all x's and all u's. That's true, but the good news is that it doesn't require solving the optimal control problem; once you have a state that's sufficient, you can solve that afterwards: you can first find your state representation. So the proposal this suggests is that you should think about task-relevant models as doing system identification, but instead of predicting y as a function of u, you should try to build a system model that just predicts the reward as a function of u. Now, where this gets more complicated is that the theorem says: if I'm able to perfectly predict reward, then the state is sufficient. The interesting case is when this becomes an approximation, and there's a nice paper on approximate information states (we always call it AIS) which talks about putting a bound on how well you can perform given a bound on how well you can predict your outputs; so it goes into the approximation case. This is also related to bisimulation, if people have heard of it; bisimulation tries to do this kind of thing with state aggregation, and there's a nice line of work thinking
about that too, saying: I'm going to combine two states in my MDP, for instance, if they are identical from the point of view of the reward. (That's only useful if you've seen it before.) So that's a super powerful idea, and there's nice work now about asking what we can understand. Once I started thinking about this kind of idea, the immediate questions I asked were: can we understand how it works in the tabular case (we've got a paper on that recently), and can we understand how it works in the LQG case, the linear case (we submitted a paper about that at midnight last night). This is ongoing; I'm super excited about these ideas. But that's a very different and, I think, exciting view of what the model should do, and I do think the idea of a task-relevant model is naturally one that should be sufficient to predict the reward. This question of state representation is really, in my mind, the first fundamental question of what I mean by intuitive physics. It's funny, because Josh Tenenbaum sort of coined the term intuitive physics, or at least he popularized it; it came from cognitive psychology, and he popularized it in our world. He and I will be having a conversation about intuitive physics (we do this every once in a while), and about halfway in we almost always realize that we don't mean the same thing when we say intuitive physics. For me, it's the search over models: this question of how you find states, and then the second question, control with these approximate models; you can't decouple them completely. In my mind, the quest is finding representations that are rich enough, task-relevant enough, and tractable enough, both for system ID and for control design. Okay, so let's walk a little bit more back and forth on that line of model complexity.
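Here's a tiny NumPy illustration of why a reward-predictive state can be much smaller than an observation-predictive one (the system is made up: two decoupled modes, with the reward channel reading out only the first). Reconstructing the full observation requires both states, but the Hankel matrix of the reward channel's impulse response has rank one, so a single state already suffices to predict reward:

```python
import numpy as np

# Made-up system: two decoupled modes; the "reward" channel sees only mode 1.
A = np.diag([0.9, 0.5])
B = np.array([[1.0], [1.0]])
C_full = np.eye(2)              # full observation: both states
C_rew = np.array([[1.0, 0.0]])  # reward readout: ignores the distractor mode

def hankel_rank(C, K=20, tol=1e-10):
    """Numerical rank of the Hankel matrix of the Markov parameters C A^k B."""
    G = [C @ np.linalg.matrix_power(A, k) @ B for k in range(2 * K)]
    H = np.block([[G[i + j] for j in range(K)] for i in range(K)])
    s = np.linalg.svd(H, compute_uv=False)
    return int(np.sum(s > tol))

# Predicting all observations needs 2 states; predicting reward needs only 1.
print(hankel_rank(C_full), hankel_rank(C_rew))
```

This is only the prediction half of the story; the theorem above adds that (under its assumptions) the smaller reward-predictive state is also enough for making optimal decisions, which is what makes it a task-relevant representation rather than just a compressed one.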
Okay, I want to tell you quickly about some work that Danny Driess did very recently; I just think it's awesome, and it's another example on the spectrum of complexity. Let me set that up: again, we talked about the various limitations of multibody, and one of them was not being able to talk about deformable objects, or not being able to talk about geometry. Danny's work was trying to address that shortcoming, and he did it with NeRF, compositional NeRF. A very popular approach in visual dynamics learning is to say: I'm going to take an image in, define my state x with an autoencoder kind of framework, and predict an image out. (Do people know what autoencoders are, roughly? You try to learn a neural-network function where this is my encoder, this is my decoder over here, and my state is in the middle: I try to compress my image into some small latent vector x so that I can reconstruct the image.) This is very much not the task-relevant case; this is the reconstruct-the-observations case, and it's also missing the dynamics that I love. But it's a very common approach, and the reason it's nice is that you can train it directly on images first and then think about the dynamics later. Once you have that representation (by the way, in this world everybody calls it z, not x; that's the latent state, so I'll call it z), people will try to learn a dynamics model on it as a second pass, for instance, and you can imagine trying to learn those parameters to reconstruct the inputs and outputs with z. But that has no notion of physics in it, no notion that there are multiple objects in the scene, no structure coming from geometry. So there's a line of work (for us it started with Yunzhu Li) that started using some of
These neural radiance fields are one choice — one way you could imagine going from some vector representation up to a complete image via volumetric reconstruction. So Yunzhu and his collaborators started coming up with latent states that had this extra requirement: they needed not only to reproduce the image but also to do novel view synthesis, and they used a neural radiance field as the decoder. But Danny thought: what's crazy about that is that if you have multiple objects in the scene, you're trying to compress all of them into a single vector, a single NeRF. Obviously there are multiple objects moving, and they should each have their own geometry representation. So let me tell you what it does, and then we'll spend a little more time on how it works. This is a simple manipulation pipeline. Perception wasn't the major focus — Danny made all the objects bright colors so the masking and segmentation were easy. He could have trained a Mask R-CNN on this, but he just said, I'll make the shoe red and the other objects very colorful. Then he wanted to generate a dataset of pushing. He took his robot, put a blue cylinder on the end, and started moving it in random vectors across the table, shoving the objects around. And he did this in two steps. The first step: I'm going to look at the scene and break it down into individual models based on just the masks — a Mask R-CNN could have done it, but here we can do it with color-based segmentation. I'm going to group the pixels and say these purple pixels represent one view of that object, and I'm going to train a NeRF effectively on just that object. I'll do the same for the others — I'll treat this one image as an observed view of the original shoe from this angle, and so forth. Then he trains a model that uses the NeRF as its underlying geometry representation in order to do this reconstruction, and he learns a model on top of the NeRF that can predict forward dynamics. So this is rendered from the current observation, and this is the output of the model when you started from the initial observation and just simulated forward. The prediction error — the objective of the system ID — is roughly the difference between those two rendered images. And in my mind, it works incredibly well just as an ability to predict complex, unknown objects forward in time under physical interaction. I was like, what? Yes — in the data generation he's just doing open-loop, straight-line trajectories; well, closed-loop position-controlled trajectories like this. So it wasn't completely random: he would take the center of mass of the colors and just move through them. Yes, this is a long-term prediction. And it has the property you'd expect out of the compositional version of the architecture: you can put down new objects that have never interacted with each other, and it's surprisingly able to generalize fairly well. It's not perfect — you can easily find artifacts in the rendered images — but it's actually able to predict how novel objects that have never touched each other before would interact. This is also not a super dynamic regime, but it is a contact-rich, manipulation-relevant regime. And it even works for deformable objects — here we go, this is, I don't know, a party snake or something. Poor Danny — Danny was a visiting student here who arrived two weeks before covid, from Germany, so we mostly collaborated remotely. He went back to Germany and did all his experiments there, so I only know these through the video. So that's a forward neural network model predicting long-term deformations of a deformable object it had never seen before. That's cool. And I only mean this as one example of a wider class of interesting models on the spectrum from a deep neural network doing everything, to: I'm going to put in a little bit of structure, I'm going to admit that there's geometry and some sort of contact mechanics, but I'm going to try to leverage the power of learning a network. The way it works here: for each of the images — an image with multiple objects in it — there's a mask for each of those objects, and a trajectory of those masks. Each mask gets shoved through an encoder and de-rendered with a NeRF into a new image, and you learn your latent vectors that way. Then you learn a dynamics model using a graph neural network as the representation — because we don't know how many objects there will be at runtime, and we want to be able to assemble new models, we use this graph neural network structure where the learned weights are on the edges of the graph — to do long-term predictions. You can then test, for instance, by de-rendering a future observation; that's what we showed you: the reconstruction from a long-term simulation versus the reconstruction from the original. I could put my initial observation in, render to z, and get an image out — that would be the instantaneous reconstruction, if you will — or I can do the long-term simulation. So you take your scene observation and decompose it — you actually have to give not just the actual pixels of the object but a little bit of a buffer around them, in order for the NeRF to learn about the edges of the objects. And then there's the prediction. So one way there's a little bit of physics bias, if you will, is just that it's object-centric: there's a Mask R-CNN-type network doing segmentation, saying there's some number of objects, and I want to learn different models for different objects. The second way is that when he builds the graph neural network, whether you put an edge in or not depends on whether the geometry representations overlap. Those are the only physics-based biases in this model, but they're enough to give stronger long-horizon predictions. He did ablation studies, for instance, where you say: I'll just put all the objects in the scene and assume they can all interact with all the other objects. And he compared that to saying: I can take my NeRF representation and just query — are they overlapping, should there be any forces between those two? That's a little bit of physics bias, and it gives you these sparser adjacency matrices, and naturally that's an easier function to learn and to roll out more stably over longer horizons. So — surprisingly long, in my mind, long-term predictions. You can find errors — bricks will slowly fade into different colors, and other things — but it's surprisingly good. Oh, I'm sorry — every time the video resets, it's the next one. And then Danny, and everybody who's building these types of models, has ways to do basic planning and control on top of them. We can talk a little at the very end — it is almost the very end — about the state of the art of those planning and control algorithms, but they're still weak, I would say. He did an RRT in the latent space in order to do this, and he considered that the weakness of the work so far — future work.
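A toy version of that overlap-based sparsification: here axis-aligned bounding boxes stand in for the NeRF occupancy query, and the objects and numbers are made up. The point is just that querying geometry for overlap gives a much sparser adjacency matrix than assuming everything interacts with everything.

```python
import numpy as np

# Hypothetical per-object geometry summaries: axis-aligned bounding boxes
# (min corner, max corner) in the plane, standing in for a NeRF occupancy query.
boxes = {
    0: (np.array([0.0, 0.0]), np.array([1.0, 1.0])),
    1: (np.array([0.9, 0.0]), np.array([1.9, 1.0])),   # overlaps object 0
    2: (np.array([5.0, 5.0]), np.array([6.0, 6.0])),   # far away from both
}

def overlaps(a, b):
    (alo, ahi), (blo, bhi) = a, b
    return bool(np.all(alo <= bhi) and np.all(blo <= ahi))

n = len(boxes)
# Dense baseline: assume every object interacts with every other object.
dense = np.ones((n, n), dtype=int) - np.eye(n, dtype=int)
# Physics-biased version: add an edge only where the geometry overlaps.
sparse = np.zeros((n, n), dtype=int)
for i in range(n):
    for j in range(n):
        if i != j and overlaps(boxes[i], boxes[j]):
            sparse[i, j] = 1

print(dense.sum(), sparse.sum())  # 6 2
```

With three objects the dense graph has six directed edges; the overlap query keeps only the two between the touching pair, which is the sparser, easier-to-learn structure the ablation is about.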
But that task, just to make it clear: he tried to push the blue squares into the blue region and the yellow squares into the yellow region, and before training the model had no idea what a cube was or what the dynamics of a cube were. There's no sense of mass or anything directly; it's all embedded in the neural network. One other thing I have to at least mention is that there's been a lot of nice work on using particle representations. We talked about rigid-body representations, but if you want to handle deformable objects or fluids or other things, another choice — which Yunzhu has done a lot of nice work on, and other people have too — is to just represent your relatively complicated thing with a bunch of particles. Some of the physics-based simulators out there actually do use particles, not rigid bodies; NVIDIA FleX is probably the most famous one, and they have great animations of semi-rigid things, rigid things, and fluids all interacting beautifully by simulating bazillions of particles on a GPU. So Yunzhu asked: can we use that as our underlying representation in a neural network? He's done a beautiful line of work thinking about these kinds of things, where he would again use a graph neural network and add the edges on the graph based not only on the locations of the particles relative to each other, but also — if they were particles associated with a rigid object — with a different graph topology and a different set of edges that would somehow impose the rigidity. If they were deformable, if they were fluid, they each had slightly different underlying elements. And then he did these long-term rollouts of really complicated things: objects that could blend into one big object, rigid grippers, a bunch of things that were sticky — rice that he was trying to squish into shapes up in the lab — pretty complicated fluid simulations that could be represented, again, in a neural network and then used as a surrogate model for planning and control. So those are just two instances on this landscape of using a bit of physics, of putting in more structure that can potentially generalize better. I was telling Anthony as we walked in that I feel like I could talk a bunch more about these kinds of things, especially because I didn't say anything about how you do control with them — that's another whole topic. So let me just call out a few ideas about that very quickly. Once you have a big model — a compositional NeRF, a particle-based graph neural network, or a feedforward neural network — there's a big question about how you do planning and control with it. The answers are sort of surprising to me: a lot of the time, even though these models are differentiable, people do black-box rollouts, black-box optimization on them. There are, I think, some subtle reasons people have studied for why that is — partly just because GPUs are good. But that seems to be the state, and they are strong algorithms, but they tend to work with relatively short planning horizons compared to what we're used to with physics-based models. The other thing, which I think is almost more central: neural networks are able to represent almost anything, and the training error does go to zero, but they tend to predict very well near the training data and do arbitrary things away from the data. If you write an optimization-based planner against that, it's very easy for your optimization to exploit the things your model is not good at. So the same way we had keys to making a rock-star behavior cloning demo, I think the keys to making a rock-star learned-model control demo are to add in a few extra heuristics and costs that force you to stay close to your training data. In general, with physics-based models — I remember I moved the ball once and Jean-Jacques was able to throw it across the room — that's extrapolation; it's beautiful. The neural network is going to be able to model almost anything around the data, but it has a harder time extrapolating, partly because we haven't put any structure into it. Why should it? Okay — that's a quick blast through some intuitive physics. I think it's a huge, open, exciting research topic, and it's bringing in people from pure ML, people from robotics, people from perception, and all that — so maybe it'll bring in you guys too. I will see you Tuesday.
Robotic_Manipulation_Fall_2022
Fall_2022_642102_Lecture_5_Audio_fixed_Geometric_perception_part_1.txt
Okay — that definitely makes the screen better. Thank you. All right, welcome back; we'll get started. Last time we actually got a lot accomplished. The pipeline we had as of last week was this basic pick-and-place pipeline, and although we just finished covering differential inverse kinematics at the end there, we came up with a pretty complete pipeline for moving a red brick around. Just to make sure I've set it up properly: the way we chose to command joint velocities and joint positions to the robot was to think a lot about differential inverse kinematics — affectionately, diff IK. In order to make that complete demo, which is in the notebook and in the chapter — you can and should play with it — I put it into the systems framework. From a systems perspective, we had this diff IK system which took in a desired spatial velocity of the gripper; it also had to take in the current q, the iiwa joint positions; and it output v, the iiwa joint velocities (I have to write bigger, right). We used the pseudo-inverse in the simple case, or a quadratic program as the generalization, to implement that differential inverse kinematics system. To wire that all up, though, remember we have the manipulation station here, which takes in desired positions. So to make this work there's actually one more block in the middle which is just doing an integration: you give it some initial conditions and it takes these desired velocities and integrates them back into desired positions. The station had an output port for the iiwa position, which has to get wired back up into this — that's all good; we have direct access to the iiwa position. And then in the full demonstration this came from a trajectory: we had our initial pose trajectory, and we took a derivative of that pose trajectory to get a velocity trajectory, a spatial velocity trajectory. This trajectory system just plays out the spatial velocities as a function of time, they get pulled at every time step into the diff IK pseudo-inverse, and the whole thing goes around — we have a complete demo. It's actually kind of interesting that if you were to simulate a little too long, this thing would run out of interesting things to say: we've only defined the trajectory up to the final segment time of our plan, so your mileage may vary if you try to keep simulating. You could define this more carefully, but I didn't — so you could simulate further and get a little off. One other thing to say about that, because we're going to build on it this week: there are two multibody plants flying around here — at least two, and you can imagine having more. Inside here is one plant which is simulating the physics of the world; our physics engine is buried inside this station diagram. But the diff IK system also needs to call the kinematics methods of a plant, so there's another plant being used inside here, and they're not the same — and that's okay. The station's plant has the robot and the red brick and the bins and all the details of the world; it is one mathematical model of the world with everything inside it. Diff IK uses a narrower view of the world: a mathematical model that only thinks about the arm and ignores everything else. And this is not unusual — it's not an artifact of diff IK. In any simulation, your robot might have some internal model of what's going on, and then there's what the real world is giving you.
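The diff IK loop described above — desired spatial velocity in, pseudo-inverse of the Jacobian to get joint velocities, and an integrator to turn those back into commanded positions — can be sketched on a toy planar two-link arm. This is my stand-in, not the iiwa model; in Drake the same roles are played by the diff IK system and the integrator block in the diagram.

```python
import numpy as np

# Planar 2-link arm with unit link lengths (a toy stand-in for the iiwa).
def forward_kinematics(q):
    x = np.cos(q[0]) + np.cos(q[0] + q[1])
    y = np.sin(q[0]) + np.sin(q[0] + q[1])
    return np.array([x, y])

def jacobian(q):
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-s1 - s12, -s12],
                     [ c1 + c12,  c12]])

# Diff IK as in the pipeline: desired end-effector velocity in,
# pseudo-inverse maps it to joint velocities, and an integrator turns
# those back into commanded joint positions.
q = np.array([0.3, 0.8])
dt = 1e-3
V_desired = np.array([0.1, 0.0])   # move the gripper in +x at 0.1 m/s

p0 = forward_kinematics(q)
for _ in range(1000):              # one second of simulation
    v = np.linalg.pinv(jacobian(q)) @ V_desired
    q = q + v * dt                 # the integrator block
p1 = forward_kinematics(q)
print(p1 - p0)                     # close to [0.1, 0.0]
```

Away from singularities the pseudo-inverse is just the inverse here, and the gripper tracks the commanded spatial velocity up to small integration error; the quadratic-program version generalizes this to handle joint limits and singularities.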
So you could think of this as the internal model the robot has of what's going on — in this case very simple, it just pretends there's nothing except the robot arm — and then there's another one out there, which is our simulation of the real world. So there are cases where we'll see multiple different plants flying around, and that's okay; they're just different models used for different pieces. Does that make sense? Okay. The big assumption last time was that somebody gave me the initial position of the red brick. In fact we did that before this diagram was even created, so that before I even make this whole system I can design the trajectory, and it's just a fixed thing over time. Yes? So this is the desired iiwa position — this is the commanded one; I forget exactly what I called it in the diagram, I'll see it in just a second — this is the commanded and this is the measured, and those are joint positions of the iiwa. Thank you for asking. Okay, so there was this very artificial assumption that the robot woke up knowing exactly where the red brick was — the pose of the red brick in the world — and the goal of the next few lectures is to remove that assumption. In particular, in the manipulation station — so I can answer your question now: it was called "position" on the way in and "iiwa position measured" on the way out. We were also using the cheat ports on the manipulation station: we were saying, go ahead and tell me the exact position of the red brick in the world. I put those in the manipulation station simulation so that they're available and we can write algorithms against them, but you don't have those in the real world. On the hardware version of the manipulation station, those ports are not there. On the real robot, nobody is going to tell you exactly where the red brick is. What you have instead are cameras as sensors, and we now have to start using those cameras to infer the position of the red brick — and of course much, much more down the line. So the demonstration, by the time we're done — oh, there we go, okay, good — is going to look fairly similar. I've upgraded from a red brick to a mustard bottle, so that's progress. You'll also see a bunch of cameras around the scene — that's new; I had been ignoring them. And you'll see there's some perception happening here: this is a point cloud, which we're going to talk about, obtained from a perception system that was looking at the cameras at time zero and running some algorithms to estimate the pose of the mustard bottle in the frame. The goal of today is to give you that basic algorithm, and we'll do better and better as we go forward. The pipeline will look fairly similar, but there will be some initial work where I look at the camera outputs, extract that pose, and then create the trajectory — and when we get even better, we'll use that in real-time feedback. Today, though, we're just going to look at the world once and then design our trajectory, because that's the pipeline we already have. So, here's the thing: it turns out computer vision is hard. Using your cameras is a hard problem. It's gotten a lot easier now that we've had our machine learning revolution, but it's still hard, and you should understand why. I think it's fairly intuitive, but to say it in a word or two: if you're given an image — the red, green, blue values at a bunch of pixels — then a very small change in the RGB values can have a very different effect on what you're trying to infer from the scene. And the flip side is that a very small change in the world can lead to a very large change in the RGB values. So the mapping from an RGB image to, for instance, the pose of the red brick, is a very complicated mapping: a small change here can lead to a dramatic change there, and vice versa — discontinuous changes. If two objects occlude each other, for instance, that might cause a very discontinuous change in my ability to estimate this. And the color values that come out can change dramatically if I change the lighting conditions, even though the scene changes not at all. So this map has traditionally been very, very hard to reason about, and now we're doing a lot better by trying to learn the map with data-driven methods and deep learning. But for that, I would say we've had another revolution, not just the machine learning / deep learning revolution — oh, sorry, go ahead. That's a great question — and someone in the survey questions, which I totally read, asked me to repeat the questions better. So the question is: is there work beyond camera sensors? It happens, but not as much as I would like. So absolutely — we talk about our joint sensors a lot, and inertial sensors, maybe an IMU in the robot, especially if it's a mobile robot. The other obvious one for manipulation is tactile sensing, and we are going to talk about that, but it has been much slower to evolve than camera-based sensing, partly because the computer vision world is enormous and there are massive datasets online for computer vision research, and there isn't the same availability for tactile. So that field is growing more slowly, but it's an obviously important signal. Now, you could go beyond that: I think we use smell, I think we use sound — you can hear things collide and so on. I was talking to — I visited the Culinary Institute of America — okay, mic two; this one does seem dead. I just powered it up and it powered right back off, so I'll take that off. So now that I've given you the answers to all of the midterm questions — I'm kidding, just messing with the people watching remotely. Okay, so there's a handful of different technologies for these indoor depth sensors. Structured light is one, where you actually project specific patterns and look for those specific patterns to come back. Projected texture stereo is another common one: stereo by itself has problems on surfaces that are completely textureless, so it can do much better if you project even a random dot pattern to provide some structure. And that's actually the sensor we're going to use for class — I happen to have one in my pocket, not always, just today. It looks about like this, and it's what you saw in the simulation, suddenly around the bins in the manipulation station demo. It's the Intel RealSense D415. This series has been discontinued — there was a momentary scare where we thought Intel was canceling the whole line, but the line has continued. The nice thing about projected texture stereo, as opposed to some of the active time-of-flight sensors, is that you can put multiple cameras in the same scene and they don't interfere. It used to be, with some of those time-of-flight sensors, that we had to really carefully synchronize the frames of each camera or they would interfere with each other. These just shine some texture — any texture is good, and they don't assume anything about the particular texture they're putting out — so they can work with multiple cameras at the same time. Yes? Well, this is the one that's most relevant, so I'll tell you this one. If I do block-matching stereo — this cartoon I had here: there's a tree in the left image, there's a tree in the right image, and I'm trying to find the same patch in both. If most of the image is white because I'm looking at a white wall — even this chalkboard; this chalkboard probably has enough texture because of my erasing and stuff — that's the problem: you won't know the depth if two parts of the image look very similar. The trick is just, in the infrared spectrum, to throw some random dots — project texture. Then, even though they're random, when I do my matching this block will look different from that block, and I'm once again able to extract the depth. And you see, with two different cameras — because I haven't assumed anything about the specific texture that was projected — the two cameras won't interfere; they will both just have enough texture. Okay, so of course we're going to be able to simulate these cameras. Let's think about what's happening here. This is a very simple diagram — I'm going to let it explode, because I know it's too small in the current frame — but the new thing we've added is an RGB-D sensor system. It listens to the scene graph and publishes color images and depth images — RGB-D: red, green, blue, depth. Its simplicity in this diagram hides the massive complexity behind it: there's potentially a full game-engine renderer happening behind the scenes. The one we tend to use in class is a relatively simple OpenGL renderer that doesn't do photorealism, but it's very fast, has very low computational overhead, and runs fine on Deepnote and all these things. But when we're trying to do things like train a perception system with deep learning, we'll have a version of that which renders photorealistically.
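The block-matching idea from the stereo discussion above — and why projected random dots rescue textureless regions — can be sketched on a single rectified scanline. This is a toy model of my own: the disparity, image size, and intrinsics are all invented, and the right image is just a shifted copy of the left.

```python
import numpy as np

rng = np.random.default_rng(0)

# One scanline from a rectified stereo pair. The "projected texture" idea:
# a constant (textureless) signal would be ambiguous, so we use random dots.
true_disparity = 5
texture = rng.integers(0, 255, size=80).astype(float)
left = texture
right = np.roll(texture, -true_disparity)   # toy model of the shifted view

def match_block(left, right, x, half=4, max_d=10):
    """Find the disparity minimizing sum-of-absolute-differences (SAD)."""
    block = left[x - half:x + half + 1]
    errors = [np.abs(block - right[x - d - half:x - d + half + 1]).sum()
              for d in range(max_d + 1)]
    return int(np.argmin(errors))

d = match_block(left, right, x=40)
f, baseline = 600.0, 0.055    # hypothetical focal length (px) and baseline (m)
depth = f * baseline / d      # depth from disparity
print(d)  # 5
```

With the random-dot texture the SAD has a unique zero at the true disparity; replace `texture` with a constant array and every disparity matches equally well, which is exactly the textureless-wall failure the projected pattern fixes.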
Okay, so what you get out of that is a color image — the standard thing you'd like to see. I only put the mustard bottle into the world; the background is just whatever the zero-one coordinate system maps to in RGB space. And you also get this depth image: for every pixel, in addition to a red, green, blue value, you have an estimated depth. That's the natural interface you have to all these depth-based sensors. The system is very simple: you just go multibody plant, scene graph — I'm going to show you this example that renders the ground-truth mustard bottle with the standard MeshCat visualizer, and then I add the RGB-D sensor, which gives me a depth image. To render that, I turn it into a point cloud — which we're going to talk about — and hand it to another visualizer. And this is what it looks like. There it is. So I have a camera right here; I put my frame on my camera so you can understand everything in frame coordinates. What you see mostly here is the ground-truth mustard bottle, but if you go in and turn off the ground truth, you can see what the camera sees, which is the rendered point cloud. What's very important to understand, just from this simple example, is that the camera, being where it is, can only see part of the mustard bottle. This is going to have a dramatic effect on the way we think about these algorithms: occlusions and partial views are part of the problem with perception. You're only going to see parts of objects in general; you're rarely going to see them perfectly, and you'll basically never see the bottom if the object is on a table or something like that.
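Converting the depth image to that point cloud is just back-projection of each pixel through the pinhole camera model, X = (u - cx)·Z/fx, Y = (v - cy)·Z/fy. Here is a minimal sketch with made-up intrinsics; Drake provides a system that does this for you, and this is only the underlying math.

```python
import numpy as np

# Back-project a depth image into a point cloud in the camera frame.
def depth_to_point_cloud(depth, fx, fy, cx, cy):
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    valid = depth > 0                    # treat zeros as dropouts (no return)
    z = depth[valid]
    x = (u[valid] - cx) * z / fx
    y = (v[valid] - cy) * z / fy
    return np.stack([x, y, z], axis=1)   # N x 3 points in the camera frame

depth = np.zeros((4, 4))
depth[1:3, 1:3] = 2.0                    # a tiny 2x2 patch of pixels at 2 m
cloud = depth_to_point_cloud(depth, fx=500, fy=500, cx=2.0, cy=2.0)
print(cloud.shape)  # (4, 3)
```

Note the dropout handling: the black no-return pixels in a real depth image simply produce no points, which is one reason the cloud only covers the visible part of the object.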
You'll probably put multiple cameras around to get multiple views, when you have the luxury to do that. For instance, when we did the dish-loading example I told you about, with these little D415s our philosophy was just: put them everywhere — the problem is hard enough. We still have partial views and occlusions to deal with — I'll define those — but there were D415s all over the environment: this one trying to look into the dishwasher, and two on the wrist of the robot. So there are interesting operations we'll think about for how you fuse all those point clouds together. Yes — this one was not a mobile robot, but if you have a mobile robot then you quickly get the requirement that the cameras are actually integrated into the robot. Absolutely. The most common place to put them, of course, is in the robot's head, for no other reason than that we like humans, I guess. The other natural thing is to put one in the hand — some of the hands I brought down before had cameras inside them, and those are useful; I'll show you an example later of them being useful. But they aren't quite the tactile sensors we were talking about a minute ago, because one of the problems with these types of sensors is that they have a minimum range in addition to a maximum range, and if you get too close to an object they go blind again. So yes, absolutely, where to put them on the robot is an interesting question, and the dream would be a robot that is instrumented only by itself and can go into any environment. Yes? Okay — so we're not going to talk about saccades; the question was about saccadic eye movements. The example I showed you, of the super-fast hand that was going like this and catching a phone — it was actually doing a simple computer vision algorithm, but tracking really fast. It didn't have the saccades that a human does; it was doing continuous tracking, and that can be good in many applications. The other thing that's really popular now is event cameras, which, at super high rates, just tell you the pixels that changed. Maybe we'll get to those later, but I think slow, stationary cameras for today, just to get us started. Oh — saccade? It turns out that if you were to have me track my finger moving across here, my eye doesn't move as smoothly as you'd like; it actually jumps. You tend to have a fixed gaze and a quick transition to another gaze, and another — human eyes tend to jump in discrete, quick events. It's even better if you spin yourself around in a chair — I'm glad you don't have rotary chairs, otherwise I would have lost a few minutes. The idea is that humans are mostly blind when their eyes are moving, so roughly we do as much as we possibly can to keep our eyes fixed and then move very quickly to the next thing. This is also why birds walk like this — that's a saccade. They don't have the ability to move the eye in the head, so they move the whole head, because they're roughly blind when their eyes are moving. Watch out what you ask, right? Okay, good — moving on. The other thing to know about these sensors: although I do think the revolution has given us beautiful sensors we didn't have before, enabling new algorithms, they're still not perfect, and we have to think a lot about noise. We're going to talk about clean sensors today and noisy sensors in the next lecture. But if you point one of these, in the lab, at a pile of interesting objects: if we simulated those objects, this is the depth image I would love to get out of my sensor; the one you really get looks more like this. It's very accurate in some places and very inaccurate in others, and the properties of that error depend on which technology you're using. It's very common, when surfaces are at sharp angles to the camera, that your projected texture doesn't do so well, for instance, and the sides of objects tend to have dropouts. These black regions are the places where the sensor didn't get a depth return — it was unable to estimate, so it just says, I don't know, there's nothing there. That happens a lot on the sides of objects. Shiny objects also tend to be defeating, and transparent objects aren't good for stereo. So a lot of those things are still challenges with these sensors. Now, the NeRF-type technology, when we get to it — some of the RGB-only technology — can do better in many of those situations, and that's one of the things that's exciting about it. But these sensors, still very common in robotics, have artifacts we have to think about. For the particular sensor we have here, the D415: everybody who talks about it says it's an awesome sensor, but it's a little lumpy. This is just somebody with some interesting objects on their desk, and if you zoom in, it has consistent artifacts in its depth estimation — it looks like a wavier field than you'd expect. It's interesting that there's a science to simulating the noise you get from a depth sensor; I think the autonomous driving companies are doing that exceptionally well — even trying to simulate things like the plume of exhaust coming off a car and how it messes with the sensors. We haven't done that level of modeling in Drake yet, but the field is advancing. Okay, let me stop that. So now we have a particular depth image coming out of the camera, and we need to think about how to work with it. A bit analogous to how there were many different representations for orientation, there are many different representations for geometry. The RGB image plus depth image is one — it's what comes out of this camera — and we'll talk about converting that to point clouds in just a minute; that's what I drew in the visualizer. But these are just two examples, and there are many more. If you've ever used a CAD program, you've probably used triangular meshes, which represent the surfaces of objects with triangles. Or, if you've done more complicated things, you might have used a tet mesh — a tetrahedral mesh — for volume: imagine triangles for the surface, but if you want the whole volume represented, you use four points for each element, a tetrahedron. There are other representations like signed distance functions — an implicit representation we'll cover more soon; SDF for short, the stuff of DeepSDF, and NeRF is in this category. There are voxelized grids, or occupancy maps: I might represent geometry by putting a bunch of cubes in space, with a one everywhere I think space is filled and a zero everywhere else. That would be a voxelized grid, and it's actually a pretty common representation. And there are more.
useful for different types of computations and for the most part you should I think be relatively optimistic of being able to convert back and forth between them okay there's some cases where that conversion is lossy and you might not be happy you know going here and then coming back for instance you might not get the perfect reconstruction but we get our data in in one particular format a depth image and it makes sense to convert it into many different formats to make different algorithms efficient okay let's dive in and actually do a little bit of perception work um it's not a coincidence that we talked about kinematics last uh last lecture and differential kinematics and the like because the first step of perception really I think is thinking about perception as an inverse kinematics problem okay and the way this is going to go I'm going to make some silly simple objects in 2D on the board that are supposed to be visibly asymmetric no symmetries to worry about first okay so let's say I have an object o okay this is my object this is a known object the things we're going to talk about first are going to work best if you know them have a model of the object to start with a known object o so maybe I've got a CAD file that somebody gave me and I'm going to try to find the object in the scene like a mustard bottle for instance maybe that CAD file has a triangular mesh okay but I'm going to convert it from that format into a point cloud format to make the first algorithm work so imagine I've chosen as my representation for this geometry to be a series of points on the surface of the object they don't necessarily need to be beautifully sampled evenly sampled it's rare that you have that luxury but we're going to represent our object with just a bunch of points that are some position in space we'll call them the model points in an object frame okay so this is the model points and
this is the ith model point and I'm going to write it I'm going to denote its position in the object frame with my standard you know Mi is its position in that frame my depth camera gives me something different right it gives me roughly what I'll call scene points this is the ith scene point in the camera frame to go from the depth image to the 3D location of the points even in the camera frame you do have to go through the geometry of the camera there's some perspective geometry that you know the lens for instance like this that will help you convert from the depth image into a bunch of 3D points and this you know we won't use points if the depth camera said the depth was infinite or the depth was zero all those black regions we saw there we won't include those in the scene points we'll just discard them but there's a relatively simple operation that goes from a depth image into this which is what we're calling our point cloud representation there's a list of these points a point cloud can live in whatever frame okay so these scene points might hopefully look sort of as if they were roughly generated from that shape okay maybe I put it like that and there's going to be some points here right maybe if I'm looking at it from this angle I only have points on one side but for now I'll just say I've got points roughly everywhere around the object okay and then next lecture we'll talk a lot more about partial views and occlusions I know I know where the camera is so I'll assume I know where the camera is in the world if it's the camera mounted to the hand that's just done through forward kinematics right if it's a camera bolted to the world it's just a fixed constant value and my task is to estimate the object's pose in the world yes yes so if um it could be a function of the joint angles if you like yeah and then absolutely that would be the result of a forward kinematics computation but even in my simple
example they were fixed and that's uh that's that's fine I will in general throughout the class when we're talking about estimation use this hat notation a hat over it okay to denote the estimate okay so X hat is the estimate of x okay so how do we do it it turns out like I mean you can tell I'm already using the language of kinematics and frames and the like so finding the missing transform is just a kinematics problem right if I line up my um if I line up my different uh transforms then it becomes a relatively simple task with one big assumption I'm gonna assume that model Point MI corresponds to scene point I we're going to remove that assumption in just a few minutes but to start let's just say that you could imagine if I look through my camera if for instance like every point that I found had a completely unique and reliable color for instance so I had no doubt that when I looked at this that this point here absolutely goes with this point over here that's too strong of an assumption for reality but it's going to be the first step of our algorithm right is to assume that for every one of these points here I know which of the points in my model it corresponds to and that correspondence is actually um in general one way I'd like to be able to correspond every point in my scene to some point in my model but it could be that many points go to one point and it could be that not all model points have a corresponding scene point but I want that all scene points correspond to some model point and to begin with we're just going to assume that there's a one-to-one mapping just to make it very simple to just avoid um extra notation if I do that then the problem is really just a kinematics problem I know that the position of the model points in the world should just be the world to the object transform applied to the model points I have this this is um something I've been given right and it also had better equal the position from the camera for all I this is two
different ways to put the the same point into the world coordinates yeah now this is known this is unknown we need to estimate this right and we said both of these are known right this is measured and this is known so I've got an equation here where I just need to back out what does this transform given that now just think a little bit about the the properties of this um for instance let's just say that there's no noise whatsoever you know this is going to have a unique solution assuming I have enough points right if I had just exactly one point then there's going to be multiple solutions to this right for a 2d estimation there's some number of points required for 3D there's some number of points required as long as they're unique points whatever there's very simple conditions I mean basically two points and then at least three non uh co-linear points right that will mean that this Matrix has a unique solution given I know these correspondences but because the system has noise and other things I would prefer to write this not as a solve some linear equations perfectly but try to solve this in a least squares sense just like we did with the uh with the differential ik last time so it's going to be you know solve this in a least squares sense this I would say is an inverse kinematics problem not differential inverse kinematics but inverse kinematics right there's no jacobians involved here this is actually trying to estimate the the the transform so as a natural question we talked a lot about differential inverse kinematics last time why are we doing inverse kinematics this time what's different does anybody have an immediate sort of idea for why we should jump right to inverse kinematics this time why aren't we using jacobians yes perfect perfect so he says that you don't have an initial ground truth right so in the in the case of driving the robot around I knew what the initial joint angles were so it made sense to ask
the question if I made a small change what happens to the end effector so the Jacobian the differential kinematics in the differential inverse kinematics became the right object or a natural object this is a different problem this is the robot woke up and it has to find with no good initial guess where the object is so we have to solve the harder problem of solving the complete inverse kinematics problem we have to find that entire transform this is still of course easier than when we have joints and a big robot this is the one object case but this is our first example of solving the the real inverse kinematics problem kinematics was going from um positions the generalized positions to poses and inverse kinematics is going from poses back to positions so it's a little I understand it's a little weird the way I've written here but I'm trying to back out the positions of the representative here as opposed I admit but I'm trying to find the the positions description the generalized positions of that object from a series of these of these chains yeah but I would probably address it before or after lecture maybe yeah thank you that was probably my one chance right I probably should have just said yes I'll stop everything let's fix this um I've tried so hard to get their attention okay um so let's write it as an optimization so instead of just saying I want those to match with equality what if I said I want them to match in a least squares sense so if I said that X w o oops X w o times p o mi I can't even see my own writing up there minus X w c p c si I am going to do the sum of this over all I and I'd like to minimize this over my decision variable which was the object pose in the world okay and just to remember to be explicit that this thing is uh is a particular mathematical object that represents a pose let me write this down as saying I'm going to look for this inside se3 okay which is the special euclidean group it's a fancy way to say a pose
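To make that objective concrete, here is a tiny numerical sketch of the cost being minimized, the sum over i of the squared distance between the posed model point and the measured scene point. All names and numbers are made up for illustration, not from the lecture:

```python
import numpy as np

def pose_cost(R, p, model_pts, scene_pts):
    """Least-squares objective  sum_i || p + R m_i - s_i ||^2  with known
    correspondences: row i of model_pts is matched to row i of scene_pts."""
    residual = (model_pts @ R.T + p) - scene_pts
    return float(np.sum(residual ** 2))

# Hypothetical asymmetric 3-point object, expressed in its own frame.
model = np.array([[0.0, 0.0, 0.0],
                  [1.0, 0.0, 0.0],
                  [0.0, 2.0, 0.0]])

# Ground-truth pose: a rotation about z plus a translation.
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
p_true = np.array([0.5, -0.2, 1.0])

# Noise-free "measurements": the model points pushed through the true pose.
scene = model @ R_true.T + p_true
```

Evaluated at the true pose the cost is zero; at any other pose it is positive, and the optimization over SE(3) is exactly the search for the minimizer.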
basically in three dimensions okay AKA a valid rigid transform okay so this is a sort of a robust way to try to estimate that pose this thing since both of them are known I'll start just writing pwsi or even just PSI since the W is implied okay save me a little bit of writing and this thing we know from our spatial algebra I could write this as P of o in the world plus the rotation of o in the world times the model point okay so I can write this alternatively as I'm going to minimize over p o and r o where this is some potentially abstract representation of orientation but it had better live in the special orthogonal group Okay so which is just another way to say it's a valid rotation Matrix a valid rotation okay so this is the problem we want to solve and we want to think about the sort of the geometry of that problem and how we solve it robustly that makes sense questions about that yes okay yes so in the case where we have enough points and there's no noise and there's perfect correspondences you could try to solve that as a bunch of equalities and find the exact solution but even if you think about if you count how many numbers here for instance right um if I'm going to search for let's say this is a rotation Matrix so I have nine numbers here a three by three rotation Matrix and I have three more numbers here so I have 12 numbers I'm trying to solve for in general and let's say I have um you know 50 points in my model then I've got an over constrained system you know if there's no noise there should be a solution but I still sort of don't like the idea of solving a system of equations that is defined based on some transformation where I have like 50 equations to solve for 12 unknowns that seems fraught with like numerical problems and stuff like that so in the no noise case you could just pick your favorite 12 and do that but we're going to just move ahead to the
case that's going to go the distance for us which is as soon as you have noise or whatever we're gonna we're gonna say find me the best possible match that describes this data good question okay so let's work with let's chew on this a little bit um I need to pick now some representation for R to turn this into a mathematical program the state of the art methods in this world will tend to pick either rotation Matrix representation for this or quaternion representation for this okay I'll do the rotation Matrix representation another question yeah yes we're going to get to that RANSAC would be a good way to handle outliers for instance if you have random points that don't correspond with your object then RANSAC is a very natural way to handle that yeah yeah for sure we're going to get to that okay so let's choose r as a rotation Matrix and I'll write exactly the same thing but instead of this relatively abstract saying that my decision variables live in some special group you know valid rotations now I can write exactly the properties of a rotation Matrix which are constraints on the variables in my decision right so now I can say I'm going to minimize over p and R now a rotation Matrix okay R p o mi plus p minus PSI same thing squared okay but if I represent this by three by three numbers then I need to add constraints to make it a valid rotation Matrix so the most important property of the rotation Matrix is that it's an orthonormal Matrix right so the columns are unit length and the transpose is the inverse this is an orthonormal Matrix and the simplest way to write that would be to say that r r transpose is the identity Matrix so and that's most of what we need it turns out there's one more that you need to be a valid rotation is you need the determinant of R to be a positive one okay if you allow so given this condition the determinant will only be positive one or negative one but
it but if you don't add this constraint then it's possible you could get something that's a rotation plus a reflection there would still be an orthonormal Matrix Okay so I just meant to make I was trying to emphasize that and I realized it looked like plus or minus Okay so if you have a determinant negative one then um then that would be called an improper rotation and we want to be I guess proper here so okay so this is interesting now um this is almost like the optimization we wrote last time where we have we still I see this and I see a quadratic objective do you see a quadratic objective yet I mean the decision variables are here right in this inside of the equation all of the decision variables enter linearly okay because this is known this is known so the decision variables enter linearly and when I Square it I will get it at most a quadratic term in the decision variables and it's going to be because it's this nice you know some quantity squared that means my objective is still some nice quadratic form in my PNR that's good yeah we're in the land of quadratic optimization this one however is also quadratic right if I take the two terms here and multiply the coefficients of the Matrix together then I'm going to get terms that are quadratic in the elements of R and I'll do the two by two case in just a second but this is now also a quadratic constraint so last time we talked about quadratic objectives with linear constraints and we said there's beautiful Solutions that's the quadratic program quadratic objectives with quadratic constraints are not quite as nice these are called qcqps quadratically constrained quadratic programs and they don't admit in some cases they admit Solutions beautifully but in general don't admit the same you know natural solution techniques okay and then this one is actually a cubic oh no so it depends on the the thing at worst it could be a cubic uh constraint so we're going to tend to I'm going to just full disclosure we're going to 
tend to ignore this and I'll justify that in a little bit okay so let me just do this in uh since that was a little bit abstract I can make it very clear I think in the two by two case in a 2d optimization so rotation Matrix in 2D you probably think of for instance writing cosine of theta negative sine of theta sine Theta cosine Theta right that's the sort of when you think 2D rotation Matrix maybe you think about this okay and that is the map that goes from Theta to a rotation Matrix here I want to avoid this non-linearity of cosine and sine so I'm just going to over parameterize it here I'm going to call that a b negative B A like this okay so after I solve for A and B I can back out what Theta is if I want but I'm going to parametrize it like this and I'm going to take advantage of the fact that I know that I could use four numbers to do it but I don't have to because I know that proper um rotation matrices have this structure in 2D okay so if I then write what is the um r r transpose equals I constraint if I just multiply this times the transpose of this then I get two constraints basically the four elements all give me you know a one or a zero this constraint converts into just a squared plus b squared equals one and a b minus B A equals zero okay so when I say this is quadratically constrained that's exactly you can see it multiplied out here if I did choose to parameterize with Theta I can do that people do do that we will find examples where we do that it's still potentially a reasonable problem but it becomes a non-linear optimization problem and the language of sort of you know unique Minima describing the solution is no longer valid for us okay so that's why we go from this representation to this representation to stay quadratic in our decision variables okay so let me consider for a moment since we've written now this thing with only two parameters A and B let's pretend
I want to solve the optimization and I just won't worry about the positions and I'll just solve for rotations what does that optimization landscape look like right I have a quadratic objective and I have this constraint right I made a plot here we go this is what it looks like okay I'm gonna as I move the true rotation around this is on the left is uh the math that takes a handful of points and computes that objective okay the green is the objective function and the red is the constraint okay a squared the this is a this is B for instance or yeah okay a and positive B over here as I move the desired Theta around then the optimum you know the objective moves around okay and beautifully in this setting actually the minimum always lands on the unit circle okay that's because I have no noise in that case it turns out if I ignored the constraint completely I would get a valid rotation that makes sense right because if I if I wanted to minimize the this cost and the true cost is a valid rotation right then the thing I get back out should be a valid rotation also as soon as you have noise though this objective of trying to reconstruct the points if you will can move away from the unit circle and having this constraint that it better find the best true rotation Matrix that satisfies the constraints can be visualized in the optimization landscape as pulling you back in 2D just to the unit circle does that picture make sense okay now that's just for the rotation only case but it turns out the rotation only case is all we actually need it's pretty clear it's a very there's a very clever trick actually so Tom when you're do you remember the like a couple lectures ago you asked me why I can go from when we talked about spatial transforms right in our spatial transforms we said ba expressed in let me actually use the same letters I used before if I want to change from p a b expressed in F to PAB expressed in G I claimed I only needed the rotations and you asked why don't I have 
a position and a rotation and I say it's an exercise for the reader but you only need rotations it's not that hard to see this is the relative distance the relative position between two points right so if I move a frame in Translation the relative position between two points doesn't change it's only when I rotate the frame that the relative position changes okay it turns out we can exploit this trick to simplify that optimization problem If instead of trying to write everything relative to the world frame we do it relative to another point in the same frame then the positions drop and I can only optimize for rotation okay the relative position only depends on rotation so the way that that manifests itself in this equation is you just basically you pick some canonical point in the point cloud typically people pick the centroid of the point cloud and you write all of the model points relative to the centroid and if you do the algebra this term disappears and then you can always back it out later after you've estimated the rotation so once I get to this relatively simpler optimization the same one I was working on here this one actually has a nice solution although qcqps in general don't have a great solution this is a special case and it has an excellent solution and it's obtained by just calling SVD okay this is one of the magic cases where SVD just solves the problem so the problem of minimizing um over R the sum over I of R P0mi minus P0si squared subject to r r transpose equals I has a closed form solution with the singular value decomposition that's amazing right and this becomes a staple of our perception algorithms so what does that mean in practice is I can do things like this I have a model and a scene let me make sure I get the right ones correct I'm going to move my model into my scene okay given this transform so the scene is my salmon colored I think when I picked it and the light
blue is the model let me just say this one thing yeah and then I'm gonna I assumed I made a huge assumption that we knew which point in the scene corresponded to which point in the model that's what these lines indicate for every one of those I knew which point it should correspond to and I did it in this example by just perfectly causing a rotation on that okay and it turns out in that setting where I have the known correspondences and I only have to estimate translation and rotation there's effectively a closed form solution that will snap and find the rotations yes the oh over here um so so what I did here was I I um I moved the two point clouds the scene salmon colored scene and the blue colored model to be in the same world frame or the same frame so the fact that it's this brown thing that's salmon plus blue I could I could have moved them both into a world frame I plotted it um what did I guess I think I moved the um yeah I moved the model into the scene in this case the scene was measured so I'm consistent so the scene I met estimated in the world frame that's what the salmon is I solved for the rotation and translation that would move the model into the scene and then I applied that to the blue color and I and the result was that points landed right on top of each other and I got a lovely mud Brown for the geometry the so the um I have to remember to repeat the questions the what is the scene and what is the model yes so the this is too perfect to get from a real depth sensor but the scene is what you get from your camera and right now I assumed it's perfect and everything we're going to remove those assumptions the model is the for the mustard bottle I'm going to make a scan a perfect model uh in CAD or something of a mustard bottle and I'm going to go through the world through my cameras trying to find that model in the scene right and I chose to represent that instead of using a a mesh I'm going to represent it as a point cloud and just do this is 
called Point set registration or Point Cloud registration what we're doing here take this point Cloud to this point Cloud find the relative transform so that they become the same the result of the brown thing is complete success we've dominated this problem and the result was mud Brown it's the same mud Brown that's here yeah it's good so um let me instead of well let me foreshadow what's going to happen next time um which is that if you no longer know the correspondences and you have to estimate the correspondences then there's a relatively simple algorithm that I'll do at the beginning of next time that will do this for you and the results look something like this it becomes an iterative algorithm where you try to guess the correspondences and then you apply this magic SVD solution right you apply the correspondences you apply the magic SVD solution and in the good cases this works beautifully and can solve the harder problem I will also show you outtakes next time when it doesn't solve that perfectly right and that's on a loop which is why it looks like it gets bad again okay so let me just ask a few questions um about the the version we already have here so um what happens if the objects are perfectly symmetric right remember my little my shapes here were chosen to be intentionally asymmetric so if I just go back to the simpler case here what happens if the object is perfectly symmetric if I just did a square or something like that what changes in this optimization right that's the right question he says do we still have those correct correspondences if the shape of the object was symmetric in this part of the problem that is irrelevant because I've already given you perfect correspondences so there's no problem with symmetry the next part of the problem where we have to find the correspondences will be susceptible to symmetries great okay what happens if I don't have enough data points that's kind of an interesting
question what if I had if I'm trying to estimate this and I only have one data point for instance what's going to happen there'll be an infinite number of solutions and in practice since I'm solving this in this sort of quadratic form the solver will probably find the one that's closest to zero in these parameters okay SVD would even do that I think okay so it's not that it fails it will pick something but it will be one of the infinite number of solutions good let me stop there and we'll um we'll not try to jam the next thing into the next five minutes I will see you Tuesday how's it going
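The centroid trick plus SVD solution from this lecture can be sketched in a few lines of numpy. This is the standard orthogonal Procrustes / Kabsch construction; the function and variable names are mine, and the data is a made-up noise-free test case, not anything from the lecture:

```python
import numpy as np

def register_known_correspondences(model_pts, scene_pts):
    """Closed-form least-squares fit of a rigid transform (R, p) such that
    p + R @ m_i ~= s_i, with correspondences given by row order.
    Subtracting centroids removes the translation from the problem; SVD of
    the resulting cross-covariance matrix gives the rotation (the classic
    orthogonal-Procrustes / Kabsch construction)."""
    m_bar = model_pts.mean(axis=0)
    s_bar = scene_pts.mean(axis=0)
    W = (scene_pts - s_bar).T @ (model_pts - m_bar)
    U, _, Vt = np.linalg.svd(W)
    # Force det(R) = +1 so we return a proper rotation, not a reflection.
    D = np.diag([1.0, 1.0, np.linalg.det(U @ Vt)])
    R = U @ D @ Vt
    p = s_bar - R @ m_bar
    return R, p

# Hypothetical asymmetric model: random points pushed through a known pose.
rng = np.random.default_rng(1)
model = rng.normal(size=(30, 3))
theta = 0.8
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
p_true = np.array([0.1, -0.4, 0.9])
scene = model @ R_true.T + p_true

R_est, p_est = register_known_correspondences(model, scene)
```

With known correspondences and no noise the recovered pose matches the true one to numerical precision, the "snap" behavior in the demo; the determinant correction line is exactly the det-equals-plus-one constraint discussed earlier, guarding against improper rotations.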
Robotic_Manipulation_Fall_2022
Fall_2022_642102_Lecture_15_Force_control.txt
but having read your pre-proposals which I thoroughly enjoyed there was a there was a a surprising number of projects not surprise I mean it's great but a surprising number of projects that wanted to uh throw things or hit things or smash things or somehow do very Dynamic things with the arm and having just talked about motion planning and now seeing a lot of people that probably want to have fairly sophisticated motion planning in their project and incred you know and possibly something with Dynamics I just want to take a minute to sort of make sure people realize what we were talking about how it might relate to breaking stuff uh or throwing stuff or skipping stuff or it was there was a definite theme there I I actually want to know why I mean but um I know a bunch of you read the tossing bot paper you know that's a good maybe that was a motivation but uh but it just it was surprising I for me see I see I teach the two classes I teach under actuated I teach manipulation and under actuated for me is about the Dynamics right and manipulation for me is about perception and things are relatively static or quasi-static you know but you do perception you do high level planning you do all the other stuff uh but I think the world's collide inevitably maybe it's even my own bias creeping in somehow but let me just um distinguish between kinematic versus dynamic trajectory optimization because I think probably a lot of you said maybe I'll do some motion planning with trajectory optimization to catch something or throw something so um you know this is what we talked about last week I would just say last week here the dynamic version of that is actually a big Topic in my under actuated class but we didn't actually talk about it um here and maybe you guys I think a lot of those projects don't need it but let me just make sure you you understand when maybe when you'll need it or when you don't need it okay so what we talked about last week was parameterizing some curve um you 
know just it's this is just a curve in space or a trajectory in space and in fact so the you know I told you I was gonna finish pushing that kinematic trajectory optimization into Drake it's there now um if you look at the Constructor just as a you know if you look at the Constructor of the kinematic trajectory optimization class it doesn't even take a plant there's it has no notion of Dynamics by default the default is just this is just some curve in space you can associate it with a plant by adding kinematic constraints to it or add other constraints to it but by default this is just some you know it's just some curve parameterized by a handful of numbers that's it and then you can if you want to say I want this to be a certain place I wanted this to be associated with the kinematics of the arm you can add costs and constraints that make that true okay that is in contrast to some of the if you look at some of the other trajectory optimization methods like direct co-location for instance takes the system directly in the Constructor right instead of so does direct transcription and so does some of the other multiple shooting you know these are these are more fundamentally about Dynamics this is fundamentally starts with this the construct of of a dynamical system continuous or discrete depending on which transcription it is and the decision variables are all set up to basically um to basically solve the numerical integration of this these differential equations right so the different the decision variables encode the numerical integration and actually the different transcriptions that you see like the direct transcription versus the direct collocation those correspond to sort of different numerical integrators they're they're kind of um but that's the way you think about it okay so you can of course put derivative constraints on this you can say that Q double dot at some point has to equal something this would be a constraint you could add in kinematic trajectory 
optimization. So you can start putting dynamic constraints onto your curve, and in full generality you could implement a numerical integration scheme as constraints on this curve, but these other methods were built for it; here you'd have to build it yourself. I think most of you are probably going to be okay with this, but I want you to be aware of that sensitivity. So when would you go from this to this? It's when the dynamics matter. I can say, for instance, that I want my robot to execute some joint trajectory, and I could say it's got velocity limits, it's got acceleration limits. But if you suddenly have the robot and a ball, for instance, and you want the evolution of this curve to represent both the robot's joints and the ball's state, and those are only coupled through the equations of motion, then that's where you probably want to live in this space. I think a lot of the cases of catching, throwing, smashing, or whatever are probably okay thinking about just the robot: just making sure that once you're at the end, and you're actually connecting the robot with the ball or whatever it is, you do so with a velocity that sends the ball off into some ballistic trajectory. You can compute some very simple dynamics of the ball and just make sure the robot gets there, and that coupling might be simple enough that you don't need this machinery to catch or smash or whatever. I think most of you are in this regime, but I want to make sure that's clear. There's another set of tools; basically, I would say start here, and if you find yourself trying to write more and more dynamics constraints, where the curve of the robot depends on the motion of the ball or the objects, then you might look at that API, or even the notes in Underactuated. Okay, good. All right, so today is mostly about force control.
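To make the kinematic picture concrete, here is a minimal numpy sketch; deliberately not Drake's API, just the underlying idea: the curve is a cubic Bezier parameterized by a handful of numbers (the control points), and positions and derivative bounds become constraints on those numbers. The two-joint control-point values and the velocity limit are invented for illustration.

```python
import numpy as np

def bezier(P, s):
    """Evaluate a cubic Bezier curve q(s), s in [0, 1], control points P (4 x n)."""
    b = np.array([(1 - s)**3, 3 * s * (1 - s)**2, 3 * s**2 * (1 - s), s**3])
    return b @ P

def bezier_vel(P, s):
    """dq/ds: itself a quadratic Bezier over the differenced control points."""
    D = 3 * np.diff(P, axis=0)                      # (3 x n) velocity control points
    b = np.array([(1 - s)**2, 2 * s * (1 - s), s**2])
    return b @ D

# Two joints, four control points: "just some curve parameterized by numbers".
P = np.array([[0.0, 0.0], [0.3, 1.0], [0.7, 1.0], [1.0, 0.0]])

# "Kinematic" constraints are conditions on the control points / curve samples:
assert np.allclose(bezier(P, 0.0), P[0])            # start-pose constraint
assert np.allclose(bezier(P, 1.0), P[-1])           # goal-pose constraint

# A derivative bound: the convex-hull property of Bezier curves gives a cheap
# sufficient condition by bounding the velocity control points themselves.
vmax = 3.5
assert np.all(np.abs(3 * np.diff(P, axis=0)) <= vmax)
```

In the dynamic transcriptions the decision variables would instead encode the numerical integration of the equations of motion; here nothing about the curve knows any physics until you add such constraints yourself.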
I have a bad joke ready, so I'm going to do an example of force control here. I know I have many bad jokes, but thank you, you got it right. If you missed it, here's another example of force control; that's a pretty good example too, right? When I was writing those letters, or erasing the board, I was in a force control mode in my arm. Exactly what my brain is doing, I'm not trying to make any statements about. But this is a kind of task where, if you wanted to program a robot to write on the board, you might consider doing force control. Why? Because let's say I was executing a position trajectory, going through the letters example in joint space, and I was just a little bit off in my estimation. Say I was in the perfect position to write my example, but someone came and put the robot down two degrees off to the side, and I start writing: I'm going to airball a little bit, and then I'm going to break the chalk as I go into the board. If there's any uncertainty about where the board is, then rigidly following a joint trajectory could be a bad choice. Now in practice, since we're typically controlling our iiwa through a joint impedance control mode, which we'll come to understand this week, there's going to be some flexibility in that: even if you command just an end-effector trajectory, it will comply to some extent, and that's because underneath it's actually doing some force control. But if you were to take a rigid, factory, position-control-only arm and try to do writing, you'd be sad, unless you're super calibrated and dialed into the location of the robot relative to the thing it's trying to put force on. In fact, that's one of the classical ways people originally
motivated force control: things like following a wall, or painting, or welding, or something like this, where the robot had to follow some continuous curve in space, not based on open-loop trajectories, but based on actually sensing the force on the robot and executing a path due to that force. Okay, so that's what we're going to talk about today: how to work with that, how to think about it. I'm going to break it up into two pieces. I find there's a lot of notation, I would even say a lot of philosophy, that comes in when people talk about force control. People will say it's, you know, morally better to do force control, or the world can only do this, or... I think I can keep it pretty simple, but it's helpful to first distinguish the case of just thinking about forces. I'm going to assume the robot is a point. There's a joke about assuming a spherical cow; I'm doing worse, I'm going to assume it's a point, not even a sphere with any radius. I'm going to just take a point robot. Then we'll figure out how to make that force happen when we put the arm back in, as a second pass, and we'll talk about manipulator control, impedance control, and the like there, and we'll get basically the same things to happen, but we'll throw in some Jacobian-transpose kind of logic. So let's start by thinking about what force control is, with a little bit more motivation besides the writing. If you look at the images that I generated by making lots of training data for the segmentation pipeline, this is one: I actually had to take the Cheez-It boxes out of the end-to-end clutter demonstration, because I can't get an antipodal grasp from the top that would have picked that up. The Cheez-It box is
just too big for the hand. I actually asked my daughter: I gave her a two-fingered gripper and asked her to try to pick up a big box, and she did it really well, not surprisingly; she's a human. But we haven't developed that capability yet, and so if you want to do more dexterous things in order to get those big boxes out of the bin, then we're going to be in this regime. Now, she's also got a dexterous hand, which is just not even fair, and she can even stretch her hand out to do that, so this is too advanced, but you get the point. Okay, so we're going to work towards that type of demonstration in our clutter-clearing kind of example. And here's another good example that we'll talk about, and you'll even do a problem set about: now we're going to push the book to the side of the table. You have to think about this for a second. Why does that work? We've thought a bit about friction cones and the like, but how is it that the robot is able to push the book to the side of the table? This is a clever way to pick up a big book. But even in this first part, why does that work? A couple of things could have happened. It could be that the robot went to do its thing and there was so much friction that the robot doesn't move; that didn't happen. It could have been that the fingers slid on the book; that didn't happen either. This was a nice regime where the fingers stayed attached to the book but the book slid on the table. So think about how that has to happen: there has to be some difference in friction in order for that to happen. Our normal forces are similar, so somehow there's a difference in the friction cones. And if you look really carefully, you'll see that there are some tape modifications on the end of that finger, just to make sure it's nice and sticky. I mean, okay, it's not
adhesive, but it was a textured tape that we put onto the finger. Okay, so we'll understand that. But thinking about that, you have to put yourself in the regime where the forces you exert on the table and on the book are such that the friction cone allows you to slide the book. If you push too hard, nothing's moving; if you push too soft, you're going to slide on the book. But there's an intermediate regime, which only exists if there's a difference in friction cones, or friction coefficients, and if you have that regime, and you can control a force that puts you into that regime, then you can slide the book. Okay. So this is what I want to do: I want to flip up the Cheez-It box like my daughter, but I'm going to take even just a point-finger example. We're going to try to regulate the forces on the box by thinking through a point finger, and the reason to do this is just that everything is super simple and I can write the equations of motion in a heartbeat. In this case I'm going to say q of the finger is just the (x, z) position; I'll even stay in the plane. The dynamics, f = ma, are going to be: m times the acceleration of q equals u plus m g plus the contact forces f^c, where in my vector notation g is (0, -9.8). I'm going to assume that I have the ability to just command generalized forces u directly on the finger, like it's got a little jetpack on there; again, we're going to add the robot back in at the end. And then I also have any contact forces f^c that come from the finger interacting with the box or the wall or whatever. And now the spatial vector notation that I introduced before: we're going to lean on it heavily today, because it's all about getting the forces in the right frames. This notation, remember, means that this is a force on
the body, on the finger, at point, or frame, C, which is my contact point. And you can imagine, for instance, since my dynamics are trivial here, that this is something I could even have a force sensor for. There are various ways in the space of force control to try to regulate your force: you could try to measure your accelerations, or you could assume accelerations are small. Let's assume for a minute that, as I'm executing in the world, I can actually measure that force; or, you know, assume that the accelerations in x and z are small. Actually, let me do that one first. If the accelerations are small, meaning I'm not accelerating my finger rapidly, then if I want to control that force, I could just choose a u. Let's take the condition where I'm already in contact with the wall: I've got a point finger, I'm pushing on the wall. What should I choose u to be to make that force be whatever I want? The accelerations are zero because I'm stuck on the wall, so I can just choose u to be exactly what I need, with the acceleration side of the equation being zero, to make the force the desired force. So I can just say u = -m g - f^c_desired; I'll use little f since I'm talking about Cartesian forces so far. If I have a desired force and I apply this thing, I first just take out gravity, but otherwise I'm just applying the force I want, and by the assertion that these accelerations are small, that equation gives me that the measured force equals the desired force. That's just algebra, but I want to make sure it's there. What happens if you're not in contact with the wall and I apply this controller, which assumes that you are in contact with the wall? This is an extremely important point, and it's one of the best things about force control. What's going to happen if,
in fact, the real contact force is zero and I apply this controller? I'll get mg minus mg, so those terms go to zero, and then I get minus my f^c_desired on the acceleration side, which I was claiming would be zero. So I'm going to accelerate; in particular, I'm going to accelerate into the wall. This is a beautiful thing. The signs are a little bit hard to think through, but intuitively it makes a lot of sense: if I'm close to contact, I think I'm in contact, and I ask for a positive contact force off the board, then the control I execute to try to ramp up the contact force that I'm not getting has the effect of driving me into the board. This is hugely important. This is why, for instance, in walking robots people like to use force control in their legs. Well, it's more subtle than that: typically, when you're swinging your leg through, you would do position control and try to make sure you know where your foot's going to land; but when you actually go to land, and maybe when the legs are in stance, you'll switch to a force control mode. In particular, for that moment where you're about to put your foot down, where you may not know exactly where the terrain is, rather than having to perfectly estimate the shape of the terrain, you just push down. You say: I want some amount of force coming off my foot; maybe I've got some force sensors on my feet, and I would like to push down until I feel the forces on my feet be the desired forces, typically roughly the opposite of the weight of the robot. And if I ask for that force to be large and my foot's in the air, then it tends to go down. This adds a lot of robustness, and this is why, when I'm writing on the board and I'm off by a few degrees (we're going to get to that), I can get some extra robustness if I'm thinking in the space of forces and not in the space of positions.
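The algebra above is easy to sanity-check in a few lines. This is a sketch under the lecture's point-finger assumptions (planar, gravity (0, -9.8)); the unit mass and the particular desired contact force are my own made-up numbers.

```python
import numpy as np

m = 1.0                              # point-finger mass (assumed)
g = np.array([0.0, -9.8])

def force_control(f_c_desired):
    """Direct force control for a point finger: u = -m g - f_c_desired."""
    return -m * g - f_c_desired

# Desired: a wall to the finger's right pushes back on the finger with 5 N,
# i.e. f_c_desired = [-5, 0] (made-up value).
f_des = np.array([-5.0, 0.0])
u = force_control(f_des)

# Case 1: in contact, accelerations ~ 0, so the contact force must balance:
#   0 = u + m g + f_c   =>   f_c = -u - m g = f_des.
f_c = -u - m * g
assert np.allclose(f_c, f_des)

# Case 2: not actually in contact (f_c = 0): the finger accelerates.
qdd = (u + m * g) / m                # = -f_des / m = [+5, 0]
assert qdd[0] > 0                    # ...toward the wall (+x). The good case!
```

The second case is the important one: commanding a contact force you are not feeling produces an acceleration toward the surface you expected to be touching.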
Okay. So, unsurprisingly, I have a couple of notebook examples that I want to play. Let's think about the simplest version of this. I'm going to draw the Cheez-It box a bunch in these examples, just like it was on the screen here: this is my bin, with the wall that would have blocked my view cut away. And just to keep it simple, I made the Cheez-It box so it can only rotate in the plane; I took away the extra degrees of freedom, so it won't spin around or whatever, the point finger is aligned, and I'm just living in the plane. But let's say I put my finger in some known position and ask for a force to be exerted on the finger. My f^c_desired is that I want to be feeling a force that's pushing me that way; if I'm not feeling it, I'm going to push harder this way to try to get it, and it'll put me into contact. So the first notebook I'll run here is: what happens if I just apply a constant desired force, starting the finger here? Think about it for yourself for a second. If I just command a constant desired force, with the Cheez-It box here and the finger there, and run a fixed-duration simulation with a handful of different commanded forces, what's going to happen? Tell me what's going to happen. It's going to push the box into the wall, if there's enough force. But there are other cases, right? At the other extreme, it could just go up and do nothing; the box will stop it if you're not pushing hard enough, because of the friction on the floor. And then there's actually a super interesting regime where it slides different amounts depending on how hard you're pushing. So these are the different rollouts, if you will. At one extreme, with the box right around 0.1, the finger
comes into the box (it comes in more slowly if you accelerate more slowly; the blue line is the extreme with the smallest force), and then it just hits the box, and the box just stops it. It actually goes a little bit into penetration, because the contact model we're using allows it to penetrate just a little bit. At the other extreme, it hits the box, barely slows down whatsoever, and starts pushing the box until the box jams into the far wall. And in between you get all these other different possible behaviors, including this one that pushed it for a while and then stopped: the collision event started it moving, but then the continuous dynamics could actually put it back under the friction cone. So there are lots of interesting different things that can happen. My claim is that, if you want to regulate the box, some things are more natural. We already get some robustness by not knowing the geometry of the box, and we can shove it around; the first thing my daughter did was shove it to the side, right? She didn't need to know the geometry for that; she could just say "I expect to feel forces" and boom, she pushed it to the side. But we're going to do something fancier here, which is what that other picture was; this is something that would be very hard to do. My daughter didn't do it in the videos, though I'm sure she could have. This is a little fancier: if you're really thinking about forces and regulating the forces, you can do pretty cool stuff. So I'm going to see if we can take the finger and put it in a regime where it's actually rotating the box up, right in the middle of the bin. How? We're going to be in a regime, pretty quickly, as soon as it takes off, where there are two primary points of contact in the plane (there are more out of the plane): you've got a friction force resisting the sliding in that corner.
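As an aside, the constant-force rollouts above can be caricatured in one dimension. This is a crude sketch, not the notebook's simulation: made-up mass, friction coefficient, and wall position, a quasi-static Coulomb friction model, and no finger dynamics or collision event.

```python
import numpy as np

mu_ground = 0.5                       # floor friction coefficient (assumed)
m_box = 0.4                           # box mass in kg (assumed)
g = 9.8
friction_limit = mu_ground * m_box * g   # max static friction force on the box

def regime(push_force):
    """Classify the outcome of pushing the box with a constant force."""
    if push_force <= friction_limit:
        return "box stops the finger"    # floor friction resists the push
    return "box slides (until it jams into the wall)"

def rollout(push_force, T=2.0, dt=1e-3, wall=0.5):
    """Euler rollout of the box alone, pushed with a constant force, with
    Coulomb friction on the floor and a rigid far wall that stops it."""
    x, v = 0.1, 0.0                      # box starts at 0.1, like the plot
    for _ in np.arange(0.0, T, dt):
        if v == 0.0:
            f = max(push_force - friction_limit, 0.0)   # stiction
        else:
            f = push_force - np.sign(v) * friction_limit
        v += f / m_box * dt
        x += v * dt
        if x >= wall:                    # jammed into the far wall
            return wall
    return x
```

With these numbers, `rollout(1.0)` leaves the box at 0.1 (friction wins) while `rollout(5.0)` drives it into the wall at 0.5; intermediate forces land in between, which is the interesting regime from the plot.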
You've got the pushing you're doing there, and depending on the friction cones, if they're large enough, there's actually a place where you can start lifting it up: you get enough friction on the finger that you can provide a torque, but you're not producing so much force that you're sliding. There's this regime where you can lift up the box. I hope it's clear why I like this example: I think that would be extremely hard to do in position control mode, but it's really a force kind of action, and even if you don't have a perfect model, and certainly if you do, you can just do it. So here's my meshcat Cheez-It box; the finger is off in contact there. It's come into contact because I commanded some nonzero force. The controller we're going to write now, I want to step through on the board and be a little bit careful. Actually, that was the one piece of feedback I got from what you guys said about the deep perception lectures, which I really liked, that level of feedback: people said maybe step through a few more examples a little bit more slowly. Happy to do it, so I'll try to do that today, and you tell me. Okay. So I'm going to write a controller that controls basically the orientation of the box by controlling the force on the finger; that's our goal. And to convince you that it works: I'm only controlling through the finger, but I can basically regulate the orientation of the box. This is the full physics engine running, and it's not sliding at the bottom corner; it is providing the forces only through the finger, in force control mode. It's actually worth going through the exercise of doing that, I think. Now let me do something a little crazy: watch what happens to the finger if I go like this. Oh, that was actually pretty good, darn it. I switched
to the SAP solver, which is great, and I recommend it to everybody, but now, where the finger was normally flying off at the end, I guess it doesn't go flying off. Okay, my controller is better than I wanted it to be. Imagine that the finger happened to slide to the edge of the box, but it's still commanding a force. I'm making a strong assumption in this controller, which is that the finger is pushing on the side of the box. If you're regulating force and your finger is suddenly in free space, then what's it going to do? It's going to rocket itself down until it collides with the ground, or, depending on the angle, it might rocket itself off into free space, off the box, and there you get into your throwing and smashing regime. I'm a little bummed; I think this is probably the right answer, but the other one was allowing a little bit of numerical error. Okay, good. So let's actually work that out a bit: how can you write a controller that does that? The hardest part, I think, is getting the notation right, and maybe there are some tricks for how to write it in a way that isn't too susceptible to model errors and the like. So let me draw my free body diagram. I'm in the middle of the bin, for example; I'll exaggerate and put the box at some relatively large angle. I'm going to have a bunch of frames that matter. This is my body frame, B. I'm going to have a contact frame at the bottom corner; I went ahead and used my imagination and called this one A, so that's frame A, with its x and z. And I'll have a contact frame over where my finger is touching the wall of the box. Remember, the normal forces always go in the z-axis in our contact frames: so this is frame C, and this is z, and this is x, to keep with my right-hand rule. Okay. From the point of view of the robot, we were trying to command the force that the robot felt, you know, the
force at C applied to the finger. But for the free body analysis, I actually want to think about the equal and opposite force: the force that the finger applies at C on the body B. So, using all my colors here (and you can see it in my simulation, which I should probably stop so I don't run out of battery or something silly), I'm going to have a force that's in this direction here: I'll call it the force on body B, applied at A, from the ground, and I can express it in various frames, but let's express it in the A frame. And I have another force, from the finger, which is probably pushing me; it's going to stay inside the friction cone, but if I want the box to tip up, it's going to have a component like this. By default this is the force on body B, applied at C, from the finger; that's my name for it, and it's most natural to express it in frame C. We know our transforms to go back and forth between frames. And then of course I have gravity: to keep our notation consistent, this is the force applied to body B from gravity (normally you could just write B for that, but I'll be explicit), and it's most natural to express that in the world frame, because then it's just (0, -m times 9.8), whatever. Okay, what do we know about these different forces? I'll stick with the color codes to keep it clear. Let's assume for a minute we know the friction cones, we'll assume we know the mass, we'll assume we know the geometry, but I promise the controller I give you is going to be pretty good about not having to know them very perfectly. So if I know the friction coefficient at the contact point, then, in my notation, what I know about the force of the finger is: the z component, well, first of all,
it can't pull on the box. And then the x component magnitude is less than the friction coefficient times the z component. In the frame of C, this is a simple thing to write. The implications of that in the world frame, for instance, depend on transforming the frames into the world frame, but it's really just that simple; you'd call it the simple ice-cream-cone case, where the cone is just along the z-axis. Same thing for the ground: I won't write it all out here, but can I just say that the force on B at A from the ground, applied at A, is in the friction cone at A, as a notational shorthand? Yeah. And that would depend on mu_A, of course, which I think I set both to just be one in this case. It doesn't have to be; in the book case they have to be different, and here it's not clear to me that it has to be different; it depends on the geometries and other things. Okay. The force of the finger: if you think about it as a three-element force, we're going to be commanding that, effectively. We're going to be regulating it; our controller gets to, in some sense, set the force of the finger as an input. What about the force from the ground? The friction cone gives me a whole range of possible forces, but how do we know which force is going to happen, which one is Newton going to give us, or God, or something? Cool, good, right: it's whichever one is going to keep the point from moving. It's whatever force is necessary to keep the velocity at zero, if we're in the stiction regime, with no sliding. If we combine all these forces in the frame of A (and this is the important way to say what you just said), they're in equilibrium at A; there's no acceleration at A. Okay, so the force of the finger, I can map that
to point A, and express it in A: plus the force of the ground, expressed in A, applied at A, plus the force of gravity, expressed in A, applied at A, and that sum equals zero. Because that's what friction does: given this one and this one, it will set this one to make that zero. The causality is not something I'm trying to make a statement about, but in practice, since we're controlling this, we're going to be able to understand what the ground force is going to be. So this is the important equation: the sticking, no-sliding condition at the ground, which helps me solve for the ground forces. But inside that, I can choose different forces to apply at the finger; as long as they satisfy this equation, with the ground force staying in its friction cone, then I have room to possibly push up and try to torque this thing up. So the last important thing we have to think about is the torque applied to the body, say the total torque in the frame of A. It's the same sum: finger plus ground plus gravity, in the torque components. The torque from the ground force, about the ground point, is zero, because its moment arm is zero. But I can choose the torque from the finger to make a torque around that bottom point. So by reasoning about the forces and the friction cones, I can lift my objective up to say: I basically want a pure torque around there, subject to the constraints that the finger doesn't slide and the ground doesn't slide. We just need a control strategy for choosing that torque; we've abstracted away to where we can think about what torque we want, and there are lots of possible answers. I could use my perfect model of the cracker box and my perfect sense of the geometry, but I don't want to do that: I want to show that force control is a little more general and doesn't require perfect knowledge.
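The free-body bookkeeping above can be written out directly. This is a sketch with invented numbers (box size, mass, friction coefficients) and my own sign conventions; for simplicity the box is upright rather than tipped, the finger pushes on the right face, and frame A is the bottom corner.

```python
import numpy as np

mu_c, mu_a = 1.0, 1.0            # friction coefficients at finger and ground (assumed)
m, g = 0.4, 9.8                  # box mass (assumed)
w, h_c = 0.16, 0.10              # box width; finger contact height on the right face
p_AC = np.array([w, h_c])        # contact point C expressed in frame A (the corner)

# Columns of R_AC are C's x- and z-axes expressed in A: the contact normal (z)
# points in -x_A (the finger pushes the box to the left), tangential x points up.
R_AC = np.array([[0.0, -1.0],
                 [1.0,  0.0]])

def in_friction_cone(f, mu):
    """f = (tangential, normal) in its contact frame."""
    return f[1] >= 0 and abs(f[0]) <= mu * f[1]

def analyze(f_C):
    """Map a finger force (contact frame) into frame A, back the ground force
    out of the no-sliding force balance, and compute the torque about A."""
    f_finger_A = R_AC @ f_C
    f_gravity_A = np.array([0.0, -m * g])
    f_ground_A = -f_finger_A - f_gravity_A          # equilibrium: forces sum to zero
    # Planar scalar "cross product" p x f; positive tips the box up about A
    # (with this sign convention).
    tau_A = p_AC[0] * f_finger_A[1] - p_AC[1] * f_finger_A[0]
    ok = in_friction_cone(f_C, mu_c) and in_friction_cone(f_ground_A, mu_a)
    return tau_A, f_ground_A, ok

tau, fg, ok = analyze(np.array([1.0, 1.5]))
assert ok and tau > 0            # inside both cones, with a net tipping torque
```

Given a candidate finger force, this checks the two conditions from the board: the commanded force stays in the finger's cone, and the implied ground force (solved from the force balance at A) stays in the ground's cone.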
So let's just say I want the finger, the one I have direct control of, to come out of a PID controller, how about that? I'd like to think of the torque as coming from a proportional-integral-derivative controller; you guys know PID, I've said it a few times but never perfectly defined it. Why would I want to choose that? Well, say I don't know the torque due to gravity, because I don't know the mass of the Cheez-It box perfectly; or I don't know the position of the finger perfectly relative to the ground contact, because I don't know how big the box is. Then I might have some slop in any model-based control here. But if I do just a simple linear feedback: if the angle isn't as high as I expect it to be, I'll pull more, I'll put more torque in; if it's too high, I'll pull less; and I'll even have an integral term that can compensate for that unknown mass. Since it's a one-degree-of-freedom problem, I can do a very simple controller that uses feedback: if the angle is not where I want it, pull a little harder; if it's too far, pull back a little. And that will just change the torque of the finger. Is that clear? Unfortunately, there are competing objectives: it's not clear that I can choose any torque of the finger, because there are other constraints coming from the friction cones. So I'm going to pull up my optimization playbook, and say this is my goal: why don't I write a minimization over, ultimately, the force of the finger, which I'll write directly in the frame of the contact C, though I could really pick any frame. These are my decision variables. Okay, and I want it to be that my
torque of my finger is approximately my PID controller; this is kind of my objective, with this being my estimated angle of the box. I'll make that a quadratic objective, and I'll solve it subject to my friction cone constraints and the force balance constraint. And everything we wrote here turns out to be a linear constraint: even though we're changing coordinate systems with our spatial algebra between frame A and frame C, it's a cross product with a known position, and the decision variables only enter linearly. So this is still a quadratic objective with linear constraints; this is the good case, it's a quadratic program, and you can solve your little quadratic program and apparently never let the finger slip off the box. That's all I'm running right here. Is that level of detail useful? A little slow? Okay, tell me, on the survey or something; I'll continue to try to dial it in. So the particular instantiation as a quadratic program is cool, but I guess the essential element is that I can choose commands for my point finger where, if I didn't think about forces, there's just no hope I could rotate that thing up. The exact force I'm applying, no matter what the angle is, is hugely dependent on my friction cones and things like that, and it's only with that that I can get this rock-solid demo where I can move the box up and down. Actually, I thought about bringing a Cheez-It box in, but I didn't want to embarrass myself, because that's really hard to do; I was also going to have to bring a rubber mat or something, and then it gets less cool. Try it at home, homework: flip up a Cheez-It box without using the wall. Okay, questions? That's a narrow example of something we'll call direct force control, even though I have two controllers, in some sense, happening here.
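The quadratic program above can be mimicked with a brute-force search over the finger force, standing in for a real QP solver (in the notebook this would go through a proper optimization library). Same invented geometry and coefficients as before, and the grid resolution is chosen arbitrarily.

```python
import numpy as np

mu, m, g = 1.0, 0.4, 9.8                      # assumed coefficients and box mass
p_AC = np.array([0.16, 0.10])                 # contact point in frame A (assumed)
R_AC = np.array([[0.0, -1.0], [1.0, 0.0]])    # contact-frame axes expressed in A

def choose_finger_force(tau_pid, n=201, f_max=10.0):
    """Pick f_C = (tangential, normal) minimizing (tau_A(f) - tau_pid)^2 subject
    to both friction cones: a brute-force stand-in for the quadratic program."""
    best, best_cost = None, np.inf
    for ft in np.linspace(-f_max, f_max, n):
        for fn in np.linspace(0.0, f_max, n):
            if abs(ft) > mu * fn:
                continue                                   # finger friction cone
            f_A = R_AC @ np.array([ft, fn])
            f_ground = -f_A - np.array([0.0, -m * g])      # force balance at A
            if f_ground[1] < 0 or abs(f_ground[0]) > mu * f_ground[1]:
                continue                                   # ground friction cone
            tau = p_AC[0] * f_A[1] - p_AC[1] * f_A[0]      # torque about A
            cost = (tau - tau_pid) ** 2
            if cost < best_cost:
                best, best_cost = np.array([ft, fn]), cost
    return best, best_cost
```

Note the key structural point from the lecture: the objective is quadratic in the decision variables and both cones plus the force balance are linear in them, so a real solver handles this exactly; when the PID asks for an unachievable torque, the constrained optimum simply returns the closest feasible one.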
I have this higher-level controller, and the command I'm sending to my robot is trying to directly command a force, and it'll make that happen; in the point-finger case it's trivial, and in the robot case maybe you have a force sensor or something to make it happen. And guess what: we're going to do indirect force control next. But everybody still says they like the stretch break, so let me stretch for a second, and then we'll do indirect force control. Yes? Good, so: if I'm writing this in the code, I have to somehow relate the finger torque in A with the decision variables here. This is really a function which does my spatial algebra, the coordinate change, but it's a linear function of the decision variables: given I know the position of C in body B relative to A, for instance, then I have that cross product, which gives me this, but it's just a constant matrix times my decision variables. Also, because I change expressed-in frames, I'd have a rotation matrix on top of that, but those are just linear functions of the decision variables too. That's a great question, thank you for asking. And you can play with it if you want: if I didn't know this exactly, but just estimated it, it's still going to work pretty well. In fact, in the first version I did, I just assumed I knew the width of the box, and I didn't even put in the term for the height of the box, and it was fine. I think I updated it; I can't remember which version I pushed to git, but it's pretty robust to that. Okay, so I think this is a beautiful solution. Oh, please, go ahead. Okay, so he says: what if we have a springy material, how would that change things? Let me think about where you're coming from on that. In general, I would still be applying a force, and so far I've said, let's say you can measure the force on
the finger and regulate it so it might be that in order to regulate the force it pushes into whatever surface it's a little bit springy and it could change the equations that I'm trying to balance in order to you know reason about the far corner so it might change the free body diagram but I think the formulation will still work I think that the basic concept of regulating it just reasoning about the forces should be intact and I think linear springs which is a pretty good model of those kind of contacts should go through without even changing the complexity of the task if you had a very non-linear you know response curve or something in the spring then it gets harder I see okay I see so he says what if it's a human arm right so humans are annoying I'm sorry uh that's a bad way to go through life uh humans are hard to model both physically and uh their intelligence is also hard to model they're squishy but we're kind of like full of water and uh so we tend to not build high fidelity models of their skin and oftentimes fairly simple models of uh you know of softness are sufficient you're also not supposed to push on people I think uh not with big robots so be careful with that disclaimer you know if you push a person with this controller I'm not to blame oh it's all good that's a great question that's a great question okay there's only one thing that I really think this controller is very sensitive to I claimed it as a good thing that um you know when you're close to contact and you're not in contact you command a force and at least locally it does the right thing okay but if you're just a little bit off and you command a force it could do an exceptionally wrong thing okay um so it is making a big assumption that you're in contact when you're nearly in contact and applying that force and it does require you know maybe you know it does require some
free body diagram kind of modeling it turns out there's a version of force control an indirect version of force control that can be a lot more natural can mix objectives about position and force and give you you know another programming language think of these as programming languages to control the end effector of your robot okay position was our first programming language pure force was the next one and we're going to go into kind of a mix of position and force if you will indirect force control in particular let's start with um stiffness control so here's the paradigm right so let's say I am doing this dangerous thing of walking up to my robot and uh you know pushing on the end effector but I'm going to push it um and what if I wanted even though it's a big robot and it's complicated what if I want it to be that when I push on it it acts just like a linear spring that when I'm not pushing on it at all it should just sit still when I push on it a little bit I want it to push back proportional to how far I've pushed it right it turns out we can make our whole big complicated robot act like a spring at that point okay and that's sort of the second big idea and again if we write down our simple dynamics in the point finger world it's simple to accomplish that and to say more carefully what I mean what I would like is to pick a u so that my effective dynamics look like this that would be the equations of a spring with a resting point at the desired location okay so now this is in some sense programming indirectly the forces but it's programming the interaction that the robot is going to have with the world if the world applies a force it's going to respond with some motion similarly if the robot is in motion you know it's pushing back with a certain force so it's defining that relationship um that turns out to be a really nice paradigm
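The spring-like closed loop described here can be sketched numerically for a one-dimensional point finger. This is an illustrative toy, not the course's Drake notebooks: the mass, gains, time step, and the 10 N external push below are all made-up numbers, and the critical-damping gain choice is a standard heuristic rather than anything specific to this lecture.

```python
import math

# 1-D point finger: m*xddot = u - m*g + f_ext.  Choosing
#   u = m*g + kp*(x_des - x) - kd*xdot
# gives the closed loop  m*xddot = kp*(x_des - x) - kd*xdot + f_ext:
# the finger behaves like a spring-damper anchored at x_des.
m, g = 1.0, 9.81
kp = 100.0
kd = 2.0 * math.sqrt(kp * m)     # critically damped heuristic: kd = 2*sqrt(kp*m)
x_des = 0.5

x, xdot, dt = 0.0, 0.0, 1e-3
for step in range(10000):
    f_ext = 10.0 if step >= 5000 else 0.0   # someone pushes on it after 5 s
    u = m * g + kp * (x_des - x) - kd * xdot
    xddot = (u - m * g + f_ext) / m
    xdot += dt * xddot                       # semi-implicit Euler
    x += dt * xdot

# At rest under the push: kp*(x_des - x) + f_ext = 0,
# so x = x_des + f_ext/kp = 0.5 + 0.1 -- it deflects like a spring.
```

The steady-state deflection f_ext/kp is the whole point: a stiffer virtual spring (bigger kp) deflects less under the same push, which is the knob the later part of the lecture discusses.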
okay uh let me play with it okay to do this though I needed to make the box see-through so let's see if I can make this clear okay so it's the same box I've just removed the branding um okay and I've got two fingers because one is the virtual finger which is my x desired and z desired and the other finger is getting pulled there with a spring okay so as I move around in the free space my finger the actual finger tracks the desired finger okay but if I push into here now I'm applying more force depending on how far that spring is being pulled and at some point I can move the box by moving my virtual finger so there's this natural interaction of kind of I can still think about forces but I'm thinking about it through the set points of a spring what's good about that is if I were to suddenly move up for instance it doesn't go totally crazy right I've defined a more robust law but you also saw I kind of gave a hint there right it's actually not a bad way to flip up a box check this out I'm leaning on the corner there okay let me reset and I can do it better I bet okay so now what if I just put the set point of the spring somewhere over here we're going to think about what the physics of that is but it's actually a beautiful beautiful idea I want this suddenly to have forces applied to the box such that there's like a spring rubber-banding me to the wall right around that pivot point okay what happens whoop that's pretty good right just in case I couldn't do it myself I made an open loop script that does the same thing right I would have had to be pretty bad to not get it to work but this is now open loop if you will it's just executing an open loop script no feedback required except at the level of the regulator making this law happen it's a beautiful way to program the interaction okay you can imagine that um so so stiffness control would be if you made it act like it had a particular
stiffness okay damping control would be if you program the damping impedance control would be if you programmed all three of them you could even change the effective mass okay those of you in Neville's lab can weigh in but um right so so impedance control is uh the most general name of it programming M B and K for instance it's kind of weird that you could apply forces so that I could push on my robot and have it act like it's a different mass robot than it is right it turns out it's hard to do that and I think a lot of people will do this and this and I think it's less common to do mass in the iiwa specifically it's called an impedance controller but they're not actually regulating the mass of the robot they're regulating the mass of the rotor so at the rotor level they're doing some amount of impedance regulation of mass matrix inertial shaping is what they might call it okay um but maybe not at the full arm level making a heavy robot act like it's very light typically requires high bandwidth maybe either good force or acceleration sensing those kind of things that are strong requirements for a control system okay so that's pretty good right so and you saw what happened right was that we took a different strategy to flip up the box we made a virtual pivot point here and we made effectively a spring that was here that was applying forces just based on that spring and then as I move this up the spring force caused it to not only flip over but also push down right once it got here and it started pushing up and over the forces changed direction from here to being here and it actually did exactly what I wanted the whole sequence through by thinking of it not as programming the force directly but by programming the interaction yes um I mean normally it's uh let me think about so I think normally the task sort of provides that right so um in most cases I think there is a natural answer but it comes from the task
definition like for you know for pushing books or something then maybe books will have some natural stiffness that they'll want to be interacting with or whatever I worry you're asking a deeper question than I'm answering uh can you give me an example of a situation where it would be hard to pick those I see excellent okay so great so this is a great so so even in this controller specifically how did I pick KP and KD perfect that I can answer uh I think I set them to one uh so so the point I think in this example it was pretty I mean maybe 10 or something you know actually almost always I'll pick this to be uh you know scaled with the square root of that so that it's critically damped or something like that okay but so there are basic heuristics like that but if I had picked a smaller or bigger gain then I would have just moved the finger more into penetration or less into penetration and I think the same phenomenon would have worked over a large range of gains it's just a matter of where that critical point for the virtual finger would have been I think this particular demo would have been very robust to that the box is experiencing different forces right um but what's essential here is that there's a pivot point about which those forces are rotating which is somehow inside the box and I think that's true for basically all cases for any given task right so um so in some sense this is what's happening when you're commanding the iiwa now right it's happening in joint space we're going to talk about the rest of the robot on Thursday but um you know so far you've been thinking about commanding positions in fact what it's doing is setting a virtual position and it's putting a small spring between your command and the actual and that's why when you take the dishwasher door and you might not know exactly where the dishwasher door is but you command some trajectory that's close it
will actually deform the true you know finger will track but with some error the commanded q's in that position so oftentimes the programming paradigm is command what you want maybe you go a little bit more into penetration than you would have otherwise or something like that but you command what you want and the robot will get it done in a soft way I did that for emphasis you know so if I made the stiffness higher than it would have been it wouldn't be as far there are interesting cases um maybe I should even jump to that I'll come back to the hybrid version in a second here but let me actually make this point of the best case the most interesting case of doing this maybe you guys know the do you know the remote centered compliance story this is like so clever it's really kind of what we just did there's something called RCC remote centered compliance it's one of the most clever um things I think in force control because it's done in hardware there's no software okay it was done in 1977 at MIT by a guy named Drake I thought that was so cool I didn't know that that's great um okay so so this was originally motivated by peg-in-hole tasks it turns out it's more generally useful for assembly tasks okay um so if you think about sticking a peg in a hole then a lot of interesting things happen okay certainly you can get a little bit out of line right and things could go bad if you're coming down with a peg and you're just sort of out of alignment that could be bad but um you can play some tricks like chamfering the edges to help a little bit with that and people have actually changed their strategies where you often come down a little bit at an angle so you have a little bit more robustness to the exact arrival but the really nasty thing that can happen with the peg-in-hole task is you can be partially inserted but out of alignment let me see if I draw it
with a different color here and you can get yourself jammed like actually the forces here and the forces here could be large enough that you can't really even pull yourself out okay so this is nasty business and um this is such an important operation for assembly you know for factory robots that it got a lot of attention uh you know in the 70s actually it motivated some of the early work in motion planning too so if you know Tomas Lozano Perez's early work on configuration space which is a kind of core idea a lot of those initial papers were actually done in the context of peg-in-hole insertion okay so it motivated AI stuff too but the one I'm telling you about here today is let's think about how would you program a response that you'd want to do here what's the analogy of that box flip-up for this okay it turns out that what you want to do is have a center of compliance that's somewhere down here instead of somewhere up here okay so I don't know how to make that super clear except maybe I'll use my eraser okay I'll probably get all chalky in the process so if I've got some sort of stiffness in my hand and I'm holding it at the top which is where the robot's going to be holding it for peg insertion and I come down at a little angle but I'm a little bit out of alignment what happens right this is me just trying to move my hand straight down right things go pretty bad okay let's say I had the center of compliance at the bottom a little awkward that my hand would have to be in the hole okay but let's say I could do it for a second I'm coming in at the same angle going straight down it lines up beautifully it lined up even better with the lighter eraser I had in my office when I was playing before let me try with the lighter eraser right so if I go straight down from this not so good if I come down straight down like this it just lines perfectly up okay so the stiffness you want is like a torsional stiffness but not
where the hand is you want to have an effective stiffness down here but your hand's up here you really can't put your hand in the hole that would just not be good it turns out there's this super clever mechanism it looks like that oh my God there's a few of them right that was one of the original ones which has cantilevered springs up here see it has these springs up here the point of the tool comes in here and it basically is this elaborate spring mechanism that makes the effective center of compliance this is the instant center from the beam deflections and the center of compliance ends up down here it gives a remote center of compliance so this is far in the regime of you know I said most of the time you're just doing virtual things that are small this is far in the regime of being very clever with where you put that virtual force and so there are cases where you can do that and this means that I go in and I jam my pin down at a slightly wrong location and the mechanism with effectively infinite bandwidth you could try to do this in software but the software is going to be running on a control system which reads the sensors at some rate the actuators can only move at some rate if you do it with physical springs it's effectively you know arbitrarily fast physics is doing the work for you and it can adjust itself and snap itself into place and people really do this like uh when you're jamming parts together this is a physical one this is mating so there's some pins on the bottom and there's some holes on this top and they have to align it and the robot is not doing super detailed visual servoing it's just kind of jamming it down and the remote center compliance is doing the work such a clever idea so good put it in hardware okay so that's on the extreme of being very very clever with it all right so but overall
what the message of the lecture I hope is that you realize that sometimes it's more natural to talk to your robot through the language of forces than through the language of positions that's true by the way if you care about reinforcement learning or supervised learning you know a behavior cloning kind of learning there are some tasks where you would want your neural network to output forces instead of outputting positions right this is a general concept and you know there's papers like oh I switched to impedance control mode on the output and it learns four times faster um depends on the task but that can really happen okay so the stiffness control the impedance control is one way to just say I'm not going to command forces directly I'm going to switch to commanding virtual stiffness but you don't have to um you can mix and match these different ideas so the book example was actually hybrid force control where I cared about regulating the forces directly I actually commanded force in the vertical direction because I wanted to be in that friction cone sweet spot okay but then for sliding I wanted to have position control mode because I wanted to control where the hand was going to go and you'll work through that example on the homework okay but instead just to give you a couple different ways that you could do hybrid force or stiffness control let's call it it's often called force position control but um let's stick with the theme here so what if I did in my this is my x y components here what if on my x-axis I wanted to do position control so I'll program the stiffness on this side it's got to be like this KP times x desired minus x plus KD times x dot desired minus x dot for instance and I'll go ahead and command the force desired in sorry in the z axis and if I want to command it like this then I put a minus inside okay so that's roughly what happens in some of these
hybrids where you want to in one axis act like a position source and in the other axis act like a force controller okay now you don't have to choose world x and world y you could do this in any frame you want multiply by a rotation matrix in front do your spatial algebra and achieve forces that are for instance if you are welding or you know following the wall for instance maybe in the normal direction your current normal direction you want to act like a force and in the horizontal you know tangential direction you could act like a position for instance those kind of things okay so that's a general recipe it's also not the only recipe um you can also mix them oftentimes you'll actually just mix them all together right so you could also do u equals negative mg plus maybe if I'm not using scalars anymore I'll make it a capital um okay so you can have a little bit of this and a little bit of that and if you turn KP down to zero if these two coefficients these gains are zero you can be acting like a force source and if they're turned up then you can act more like a position source and you keep that command small that's another way to mix them in fact this is a common interface for the Panda or for the iiwa and this would typically be called a feed-forward torque or force an extra command to send down but you often primarily interact through the stiffness controller okay so this is actually great in some applications but this is just maybe the most general way to write it if you will certainly I could produce this with this with the proper choices of KP and KD and F yeah okay so next time we're going to stop assuming a point robot but that's a pretty good stopping point okay see you Thursday thank you hi I'm gonna ask you really quickly about my
project sure sure um I was the group that wanted to do the um like picking up with other
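To close out this lecture's last idea in runnable form: the hybrid force/position recipe (position source in one axis, force source in the other) can be sketched for a 2-D point finger over a table. This is an illustrative toy, not the course's Drake-based setup: the penalty-spring table model, the viscous damping term, and every gain below are invented numbers for the sketch.

```python
import numpy as np

m, dt = 1.0, 1e-4
kp, kd = 100.0, 20.0            # position gains for the x axis
b = 5.0                         # a little viscous damping so contact settles
k_surface = 1e4                 # penalty-spring model of a rigid table at z = 0
x_des, f_des = 0.3, 5.0         # track x_des while pressing down with f_des newtons

q = np.zeros(2)                 # [x, z]
v = np.zeros(2)
for _ in range(100_000):        # 10 simulated seconds
    f_contact = np.array([0.0, max(0.0, -k_surface * q[1])])  # table pushes up
    u = np.array([kp * (x_des - q[0]) - kd * v[0],  # position source in x
                  -f_des])                           # force source in z
    a = (u + f_contact - b * v) / m                  # gravity omitted for brevity
    v += dt * a                                      # semi-implicit Euler
    q += dt * v

# At rest: x == x_des exactly, and the table's normal force k_surface*(-q[1])
# balances the commanded -f_des, so the finger presses with f_des newtons.
```

The point of the sketch is the split in `u`: the x component never mentions force, the z component never mentions position, and rotating `u` by a task-frame rotation matrix would give the welding/wall-following variant mentioned in the lecture.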
Robotic_Manipulation_Fall_2022
Fall_2022_642102_Lecture_12_Deep_perception_for_manipulation_part_2.txt
welcome back we're going to do our second round of the deep version of perception today so last time I'm sorry to have given a whirlwind uh overview of deep learning I really want your feedback on that because my thinking was that I would try to give a version that people who didn't know much would understand and a version that called out to some other more advanced topics I hope I didn't land in the middle and everybody hated it but please give me your feedback we're going to put a specific question about that on the survey and help me figure out how to dial in the first lecture on deep learning it's a big ask for me good okay but today we're going to slow down and we're going to talk through a couple more specific um algorithms ideas that connect that pipeline with the things we need for manipulation that maybe we don't need in a standard computer vision world so let me put it in context by thinking about a system that we sort of built last time if I have my block diagram with my manipulation station here with all my output ports and everything like this right some of those output ports that are most relevant for today are the RGBD sensors and we've got some you know some perception system which is now let's say a deep neural network um and we're eventually going to get over to our planner like we wrote for the clutter clearing example and then our controller and we ultimately want to send commands back into the low level of the station there's actually even additional layers of control inside here right that are doing joint impedance control but we had differential IK for instance in this block here and the connection between the planner and control I guess the example we've talked about the most so far is sort of well understood we said we're going to send um gripper trajectories we're going to spool them out over time and ask our controller if this is the diff-IK version to turn those into joint
commands that the manipulation station knows how to execute the question really is if we have this immense pipeline from deep learning and it can work more natively with RGBD inputs then how should we go from here over to the planner right what is the information that we most need for manipulation now we can be ambitious with our ideas here because as we talked quickly about last time one of the most amazing things about these deep learning architectures is that even if I have a task that's pretty narrow and specific to my robotics application there's a chance I could pre-train on image classification on ImageNet and then with a small amount of data train a more relevant downstream task okay so the big question is what's the right task and let me distinguish this later in the class uh you know there is a version of this that goes from RGBD through a neural network straight to control or straight to the manipulation station let me just make it the extreme version which would be pixels to torques okay we'll do this later that's a good idea I think there's lots of good things to learn about that idea but that's not what I want to do today what I want to do today is embrace the fact that we have the beginning of a pretty powerful tool chain for these layers you know and we're going to have more to develop in that space and there's just this huge power of tools that we already have in the community and we just need to figure out the best way to talk to it to get from our rich camera input our rich environment into something that is sufficient to describe the task and consumable by our planning and control algorithms okay so the big question is what are those useful representations and there's not one answer there's not a right answer this field is changing every day but there's some good ideas that have emerged and I think even just picking one or two of them and going
through them a little bit carefully hopefully that'll you know encourage you to read more and think more about even more of them so the answer I gave on Tuesday was we're going to take our RGB in come out with our instance segmentation and then go through one more step for instance maybe ICP and turn that into the estimated object poses which I can then send to my planner that was sort of version one we talked about leaning heavily on instance segmentation you know we also said you could take the same thing and enable um a different planner maybe in the middle here if we just do our antipodal grasps right so maybe if this is still the instance segmentation we could still do this and come up with some desired grasps and send that to my planner so those are two things that if we just use the out of the box computer vision tools Mask R-CNN for this then we already have some potential pipelines and of course for both of these there are versions where people would recommend that you just go you know all the way here as a neural network just try to hop over that and or maybe go all the way here as a neural network those are certainly possible and I'm actually going to try to write up uh a fairly succinct you know kind of a summary of what people are doing in deep pose estimation this would be if I did this that would be the world of deep pose estimation and there's a lot of good ideas in there and this would be maybe um let's say deep grasp selection that's my name there's not really there's a bunch of known algorithms for that I'm not sure there's one really good overall name but both of those are possible and are good ideas but I want to stop and think is this even the right interface is this notion of the pose being the summary of everything that my perception system sees the right interface to the planner okay so remember we
get to be aggressive here we can train our network on almost anything right is sending sort of estimated object poses the best signal you know is that the best connection and there's a couple reasons why it seems maybe we can do better right the first one is I would say this assumes a lot about having known models right if I have a model and the world is not moving you know then telling me the pose of the object or maybe all the objects in the scene is everything I should need to know right in some sense it is well it's a complete description of the current state of the world that's a slightly different thing maybe in the full glory I would also want you know assuming the world is nicely second order maybe I'd also want the spatial velocity of that object okay but if I have known models then this is actually a reasonable thing to do there are still limitations even in that case but we're going to try to overcome that assumption today a bit for the first time we're going to try to start doing manipulation of at least categories of objects okay the second thing that I don't like about using pose in this case is it's actually asking for more than we can potentially get right it might be asking too much okay so for instance if I have partial views of an object maybe it's actually very hard to estimate the perfect pose right especially if there's symmetries those would be a classic case where maybe the pose isn't the natural right description and it might be more than I need for manipulation okay so I'm going to try to make this case in a story in just a second but just to set it up at the high level here I would say the third thing that seems like a major limitation of just using pose as our language from the perception system is that by itself just saying that I have a nominal estimate of what the object's pose is isn't maybe uh sufficiently rich in
the sense that it doesn't tell me anything about the uncertainty from the perception system okay and we should try to describe uncertainty right um you know I'm going to try to use this as a theme as we go through but this pipeline actually the Mask R-CNN pipeline did talk about uncertainty it was a little hidden right but the fact that every classification every potential bounding box had a score function that was between zero and one right and the ones it was confident in were close to one and there was one that was ridiculous and it was close to zero right so it told me something more it told me I have some um some segmentations but I've got limited confidence about at least one of them right so maybe we need a language to talk about that in the language of pose or maybe we need to just think about other representations completely okay so let me tell you about this in the case of an example right so this is um this is the category level manipulation version of the story right so we want to do some sort of manipulation but we want to do it not on known objects but objects that are all from the same category okay is that clear right so it's an interesting question immediately to say um you know I know something about mugs I could totally imagine writing a program that would work for all mugs but if the perception system only told me the pose then I don't even know what the pose means for those right and it's also a good example that I don't need to know the pose perfectly to accomplish a lot of interesting tasks with mugs okay okay this is the category level manipulation problem right imagine mugs it turns out that the field that's working on category level manipulation has all roughly converged on mugs or shoes sometimes you get plates or bowls right I mean there's like a really small set of categories that everybody really likes to talk about so you'll see mugs and shoes
throughout from different papers okay and it's an interesting problem because it is a rich class I think it's like the Goldilocks place between I don't want to assume I have known objects but I don't want to have to deal with arbitrary objects right so like tell me it's a mug mugs have a handful of right we can generate arbitrary mugs that are interesting you can also find mugs at the Disney Store that have you know ears or cow udders or something like this so you can go as far as you want away from the nominal mug but there's a really nice problem to just say how would I write a manipulation system that could deal with that level of variation not the whole shebang anything okay I think it's a really nice sort of intermediate it's actually pretty cool that you can have these simulation pipelines that will just take a few you know a few parameters kick out a CAD file procedural CAD is a thing now right you can take a texture map slap it on your mug right and generate like all kinds of mugs right in simulation harder to do with shoes but uh mugs are pretty good we have done vegetables procedural potatoes really okay so how do you think about this sort of pose estimation problem at the category level the first thing that people would think about in computer vision about how to do this maybe there's a famous work called NOCS right it's this normalized object coordinate space NOCS all right where you take a whole category of objects and you try to come up with some canonicalization something that would make it so that if you tell me the pose of this canonical object then I can relate that to the thing I'm seeing right now so that pose is still a meaningful quantity because if you don't have some you know some notion of how to go from an arbitrary thing like what is the center what point am I talking about the pose
even though that point might be in different places on different objects, then you don't have a full representation. So this is an extremely good way to do it that has been successful in computer vision, and there are ways to think about category-level pose estimation. But I want to also talk about uncertainty, okay? There are more challenges that come up if you really want to use pose and carry it through your entire pipeline. In particular: what is the right way to represent uncertainty in pose? And I'm sorry that it's become a theme, but once again the thing that screws everything up is the representation of rotations. How do you write uncertainty around rotations? For positions you can imagine just using Gaussians, and that's sort of fine; you can use higher moments if you want to. I could say I have a Gaussian uncertainty around the position of my object. But for the rotations you need something more clever, and there is something more clever. There's a language for this: it's called the Bingham distribution. I want you to know that it is possible; there's a natural Gaussian in the space of rotations, and it's done by taking a Gaussian in the higher-dimensional space and then intersecting it with the unit sphere. This is the 2D version, where it's easy to visualize: I've got just a Gaussian there, and I have the intersection with S1 here. But the place where we want to use it for 3D orientations is to think about how I put a distribution over possible unit quaternions on the four-dimensional sphere. Okay, and it looks like this. I just want to open your mind to the fact that these kinds of things can exist, that a Gaussian on a quaternion sort of makes sense; it's called the Bingham distribution. You can write the intersection of a
Gaussian with the four-dimensional sphere, and you get beautiful representations. This is kind of the right way to do Gaussian-like things on the quaternion space. Small distributions: you just have the antipodal pairs show up. And then as you widen it, remember the quaternions are always antipodal, so it's going to be symmetric along that antipodal axis, and you can get full rings of uncertainty. And I think you need to do this if you want to use pose everywhere in your system, and you want your perception system to be able to say it's uncertain about things. Mugs are actually a great motivator for that: if I see a mug in the kitchen sink and I can see its handle, I might be able to give you the pose with a very small uncertainty estimate. But what if the handle is on the back side? It would be just wrong to choose any one of the possible orientations without somehow saying it could be any of these. So having distributions over pose is a thing you can do, but it gets very hard fast to carry that all the way through, and the types of distributions you get with partial views and occlusions are probably non-Bingham pretty quickly. There's a pipeline you can exercise, but I don't think it's the best one; it's not the one that I would fully advertise. Okay, so here's a different answer to this. What if, instead of using pose, we just talk about a handful of points attached to the body, or in a coordinate frame relative to the body? This is our first proposal for an alternative representation at that level: think about keypoints. And the keypoints I'm going to talk about now, if you're familiar with the keypoint literature, are the semantic version of keypoints first, okay?
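As an aside on the Bingham distribution just described, here is a minimal numpy sketch (notation and names are mine, not from the lecture) of its unnormalized log-density on unit quaternions. It is just a quadratic form, q^T M diag(z) M^T q, with M orthogonal and non-positive concentrations z, which makes the antipodal symmetry easy to check:

```python
import numpy as np

def bingham_logpdf_unnorm(q, M, z):
    """Unnormalized Bingham log-density on the unit sphere:
    log p(q) = q^T M diag(z) M^T q (+ const). M is orthogonal; z holds
    non-positive concentrations, with one entry 0 at the mode (convention)."""
    q = q / np.linalg.norm(q)          # the density lives on the unit sphere
    return float(q @ M @ np.diag(z) @ M.T @ q)

M = np.eye(4)                          # mode along the quaternion [0, 0, 0, 1]
z = np.array([-50.0, -50.0, -50.0, 0.0])

q_mode = np.array([0.0, 0.0, 0.0, 1.0])
q_off = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)

# q and -q are the same rotation, and the quadratic form gives them the
# same density: exactly the antipodal symmetry visible in the plots.
assert np.isclose(bingham_logpdf_unnorm(q_off, M, z),
                  bingham_logpdf_unnorm(-q_off, M, z))
# The mode has strictly higher log-density than an off-mode quaternion.
assert bingham_logpdf_unnorm(q_mode, M, z) > bingham_logpdf_unnorm(q_off, M, z)
```

Widening the distribution corresponds to shrinking the magnitudes in z; the full rings of uncertainty correspond to two of the concentrations going to zero.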
So, roughly speaking: what if my RGB or RGB-D input goes through some sort of perception module and outputs a list of XYZ positions, one for each keypoint? My claim, and I'm going to try to argue this over the next few examples, is that this is actually a pretty natural way to talk about a lot of the category-level problems we have. It assumes less about known models, it does surprisingly well with partial views and symmetries (we'll talk about that), and it has a nice connection to uncertainty. Even more, it turns out to be really useful when you hand it to the planner; it's a pretty natural representation to write a planner around. Okay, so this is just the first example of a different representation we could use as the output of our perception system. Keypoints are a thing in computer vision: they started off with these OpenPose kinds of systems, people dancing, you want to track the dancing people. The way they do it is they put keypoints on the hands and the elbows and the shoulders, make a skeleton out of that, and track it. It's incredibly impressive; it works really well on novel videos and things like this now. So the proposal here is: let's take our mugs, and instead of trying to represent a canonical pose for all possible mugs, let's pick a few canonical points, just a few of them. Maybe the bottom of the mug, where it sits on the table; maybe something that says what the top of the mug is, just so I have a sense that the bottom should be below the top, that's useful; and then, depending on what you want to do, maybe you put a keypoint on the handle if you want to pick up the mug and, in this case, hang it on the rack. That's another favorite: hang the mug on the rack. In order to do that, we'll work through it, but you can actually write that task pretty nicely as a planning problem where you just know where the
location of the yellow dot is. You don't actually have to know the absolute pose of the object, and we'll argue, like I said, that it fits more naturally into this framework. Okay, so here's the basic idea of how it's going to fit into the planner framework. Imagine that the output of this, and I should use my multibody notation, is the position of keypoint one in the world frame, and keypoint i for all i. There's a pretty simple assumption we could try to make: I'm going to reach over and grab the object, and once I've grabbed it, I'm going to assume that those keypoints, because I've sufficiently grabbed the object, are going to move rigidly with my hand; and then when I release, they'll stay in the world. We should pick manipulation actions for which that's true; if I tried to grab over here, that's not going to be true. But with a few pretty reasonable assumptions, you can imagine that the dynamics of these keypoints could be such that when my hand is open, those keypoints aren't moving, and when my gripper is closed, I'll just assume that the position of the keypoint relative to the gripper frame is constant. That's the only difference. And if I do that, then I can imagine coming up with a sequence of these, plus the open and close, that could schedule me to move my keypoints around in the world: I'll plan to go over and pick it up, close, then the keypoints move along with my hand, and I'll drop it off. Off we go. So it turns out it's actually a pretty rich specification language, even more so because the versions we've talked about so far have had a desired pose of my hand, or of my keypoints, exactly at the end; you won't be surprised that when we get to motion planning we're going to loosen that up and be able to write objectives and constraints on the potential keypoint locations.
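A minimal sketch of those open/close keypoint dynamics (the class and names are mine, not the lecture's, and I use plain 4x4 homogeneous transforms): keypoints are frozen in the world while the gripper is open, and move rigidly with the gripper frame while it is closed.

```python
import numpy as np

def make_pose(p):
    """4x4 homogeneous transform with identity rotation (enough for a demo)."""
    X = np.eye(4)
    X[:3, 3] = p
    return X

class KeypointTracker:
    """Keypoints stay fixed in the world while the gripper is open; once
    the gripper closes on the object, they move rigidly with the hand."""
    def __init__(self, p_W):
        self.p_W = np.asarray(p_W, float)   # (N, 3) keypoints in world frame
        self.p_G = None                     # keypoints in gripper frame

    def close_gripper(self, X_WG):
        h = np.c_[self.p_W, np.ones(len(self.p_W))]
        self.p_G = (np.linalg.inv(X_WG) @ h.T).T[:, :3]

    def open_gripper(self):
        self.p_G = None                     # released: frozen where they are

    def update(self, X_WG):
        if self.p_G is not None:            # grasped: rigid with the hand
            h = np.c_[self.p_G, np.ones(len(self.p_G))]
            self.p_W = (X_WG @ h.T).T[:, :3]
        return self.p_W

kt = KeypointTracker([[0.5, 0.0, 0.0]])
kt.update(make_pose([0.0, 0.0, 0.0]))      # gripper open: hand motion is ignored
kt.close_gripper(make_pose([0.5, 0.0, 0.1]))
p = kt.update(make_pose([0.5, 0.0, 0.2]))  # lift the hand by 0.1 in z
assert np.allclose(p, [[0.5, 0.0, 0.1]])   # the keypoint came up with the hand
```

A plan is then just a sequence of gripper poses interleaved with `close_gripper` and `open_gripper` events, scheduling where the keypoints go.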
Okay, so to execute that, the keypoints are almost all of what you need, but they're not quite enough. The keypoints will let me write my planner for the most part, but I do need some notion of where to grasp, and different people address this in different ways. Anthony's got a version where there are keypoints for the grasping too, dense enough keypoints that he can find a grasp roughly in the language of the keypoints, in his Neural Descriptor Fields. But in a simpler case you could also just use the raw point cloud and do your antipodal grasping in the vicinity of the keypoints. So if I have the top center, and maybe I have the ability to segment because I've got my Mask R-CNN, then you can imagine choosing your grasp on that object, but then planning the motion, once I'm in there with that grasp, in order to move the keypoints around. And that works incredibly well. There are a few things that I think are worth saying about how you make these tools work. How many people know keypoint-type algorithms? Okay, good; some do, some don't, and some are in the middle. There are a few important things to know about them, and knowing them will help us understand, for instance, how to think about uncertainty in the language of keypoints. There are a few famous neural architectures; I won't dwell on them. There's convolutional pose machines, one of the first; I'll link to them in the text. The one we've tended to use is the integral version of that, integral pose regression, which is a small change on the original architecture. But there's a key feature that both of these have that you should understand, which is that I take my RGB in and I
don't actually directly regress the keypoints. What I put out first from the neural network is a heat map. In the simple case, let's just take RGB in and have a 2D heat map come out; I've got a figure for this in a second. And then afterwards I'll find the peaks, most often just the max, the highest value in my heat map, in order to put out my keypoints. It's an interesting idea: it's one of these things that makes things more differentiable, and it tends to be a more robust metric for neural networks. The keypoints are probably impossibly small to see, but there are little red dots here; I didn't think about the screen resolution when I took this particular image. There are little red dots on the faces that are picking nominal keypoints on a face. People do this for face tracking, by the way; they'd actually have pretty dense keypoints, a keypoint for every part of your face, for your lips, for your eyes, and it's kind of spooky to see the points being plotted. This one's just plotting five of them; one for the left ear, I think, is this one in each of those pictures. And the ground-truth heat map that people would use says: if I know the keypoint is in a particular location, here for the left ear, then I'm going to draw just a Gaussian bump centered at the known keypoint location, that's the standard thing, and have it effectively zero for most of the image but with this narrow Gaussian bump. Your goal, for every point you're trying to estimate, is to predict an entire image which has the peak at that value. So this is the heat map representation. Now you can see quickly how that could encode uncertainty in a nice way: if I was confused about which was the left or the right ear, maybe I'd have a second small hill over there in the output of my network, and that allows you, if you wanted to do more robust things down the line reasoning about uncertainty, to leverage that richer representation.
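The heat-map trick just described can be sketched in a few lines of numpy (a toy sketch, not the course code): a Gaussian bump as the ground-truth target, peak extraction, and the differentiable "integral" variant that takes an expectation instead of an argmax.

```python
import numpy as np

def gaussian_heatmap(h, w, center, sigma=2.0):
    """Ground-truth target: a narrow Gaussian bump at the labeled keypoint,
    effectively zero over the rest of the image."""
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = center
    return np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * sigma ** 2))

def extract_keypoint(heatmap):
    """Recover the keypoint as the (row, col) of the heat map's peak."""
    return np.unravel_index(np.argmax(heatmap), heatmap.shape)

def soft_argmax(heatmap, temperature=0.05):
    """Differentiable variant (integral pose regression flavor): the
    expected location under a softmax of the heat map."""
    p = np.exp(heatmap / temperature)
    p /= p.sum()
    ys, xs = np.mgrid[0:heatmap.shape[0], 0:heatmap.shape[1]]
    return (ys * p).sum(), (xs * p).sum()

hm = gaussian_heatmap(64, 64, center=(20, 40))
assert extract_keypoint(hm) == (20, 40)
assert np.allclose(soft_argmax(hm), (20, 40), atol=0.01)
```

A second, spurious hill in the predicted heat map (left-ear versus right-ear confusion) would survive this representation, which is exactly the uncertainty information a downstream planner could consume.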
Now, the fact that it's a Gaussian with a known kernel bugs me a little; it's a total hack. People just do kernel hacking on that, and it doesn't seem particularly principled to me, but it works well in practice, so just admit that. But this is sort of a nice representation. How do you train it? Well, you can of course click on keypoints, have humans label keypoints in a lot of different images, and then for every click you make a little Gaussian desired image and you train your network. There are of course better ways. In the manipulation workflow we can play the same kind of trick we played to label our segmentations: you could take an object that you don't even have a model of, spin it, or spin your camera around it, make your NeRF or somehow your dense reconstruction of it, and then click once for each keypoint on the reconstructed model and back-project to have labels for all of the possible images that came in. That's a really fast way to generate a lot of labeled data for keypoints, and it works pretty well. So this is the shoes example: you want to manipulate any possible shoe. This was a great demo. I remember we had this running one day during visit day, when the new grad students came to the lab, and they were coming into our lab space to eat lunch. We basically asked everybody to take their shoes off, which was maybe not the best hosting I could have done, but we got a huge variety of shoes to test that day, and it picked up almost every one. It was incredibly good: just boom, put-the-shoe-on-the-rack mode. And then Daniela, our lab director Daniela Rus,
came in, and she had these ridiculously shiny black Italian shoes, and we couldn't do it. I was terrified of hurting her shoes, by the way. So that was the one we failed on. Gretchen actually also had some high heels that we had never seen before, but it was all good; we added them to the training set, and now we can do those high heels. So it's a surprisingly powerful and simple pipeline. You can of course also generate keypoints synthetically: if you have a distribution of objects, if you have your parametric mugs and you want to generate a bunch of different labeled keypoints, then you can generate synthetic images, and that's a super powerful pipeline. We did a quick example of it on boxes. This was during the pandemic, and Greg was walking past the lobby of one of the dorms and seeing piles of boxes; he's like, that's a pretty good data set. So he just started collecting images of the front lobby of the dorm and generated a whole category level of boxes. But he also did this amazing job of using Blender rendering to set up the procedural models. It's not too hard to generate boxes of different sizes, and it's a little hard to see here, but it's incredibly close to photorealistic: he took a handful of texture maps of different boxes that he saw in the front lobby and generated these huge data sets of perfectly labeled boxes that looked pretty realistic. You get the ground-truth instance-level pixel-wise segmentations, but you also get the ground-truth keypoints. It's super relevant and interesting to know that you don't have to pick only visible keypoints; you could choose to train your keypoint detector to predict keypoints that are occluded. If I look at that image on a brighter screen, I could hallucinate for
myself and give you an estimate of what the back corner looks like, even though I can't see it. And if I can generate training data that puts a mark in the back corner, which both of the two pipelines I suggested couldn't do, then you can still ask the perception system to predict even occluded keypoints. Okay, so that's pretty powerful. This was a little too simple: almost always, people will run it through a segmentation pipeline first, so that the keypoint network only has to work on the segmented point clouds, maybe the bounding box that comes out of Mask R-CNN. I assume you could do it on the big image and it would be okay, but it tends to work better if you give it the scaled and cropped, zoomed-in version. And then these are the detections on his raw data in the lobby, his predicted keypoints, where you can see the heat maps and various levels of uncertainty. The fact that those are spread out and not as peaky is important information. It's sort of frustrating that, although it's obviously important and good, the back half of our tools, a lot of the algorithms we're talking about in class, don't actually know how to reason about that uncertainty well. It's an advanced topic to reason about uncertainty in your planner and your controller and the like; we'll mention it, and I think we'll have at least one lecture on it towards the end, but there's no question we should be asking our perception system for it. I think there's work to do on how to consume it. Okay, so that's pretty good, and those are the ground-truth keypoints that were labeled. Did I make the point of how it works for a whole category sufficiently well? It's easy to label the toe of any shoe, the heel of any shoe, the top of any shoe, but it's harder to talk about a canonical pose of every shoe. There are people that really don't like keypoints as a representation.
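Back to that labeling pipeline for a moment: the trick behind the occluded keypoints, click once on the 3D reconstruction and then project into every camera view, is just pinhole projection. Here is a hedged sketch with assumed names and made-up intrinsics (nothing here is the course's actual code):

```python
import numpy as np

def project_keypoint(p_W, X_CW, K):
    """Project a world-frame 3D point into pixel coordinates (u, v), given a
    world-to-camera transform X_CW (4x4) and pinhole intrinsics K (3x3).
    The projection works whether or not the point is visible in the view,
    which is what lets you label occluded keypoints."""
    p_C = (X_CW @ np.append(p_W, 1.0))[:3]   # point in the camera frame
    assert p_C[2] > 0, "point must be in front of the camera"
    uvw = K @ p_C
    return uvw[:2] / uvw[2]

K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
X_CW = np.eye(4)                             # camera at the world origin
p_W = np.array([0.1, -0.05, 2.0])            # the clicked 3D keypoint

u, v = project_keypoint(p_W, X_CW, K)
assert np.allclose((u, v), (345.0, 227.5))
```

One click on the reconstructed model plus the known camera poses from the scan yields a labeled (u, v) and its Gaussian-bump heat map in every image, visible or not.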
I'm not trying to sell this as the end-all deal, but it's surprisingly simple to think about, and good. We did demonstrations back then on a range of objects, finding every mug we could buy on Amazon, and it's pretty darn robust. There are also nice additions to it. Think about what that pipeline couldn't do right out of the box: if I just use the initial point cloud to decide where I'm going to grasp, and then I just think about where those keypoints are going to move in space, the fact that the keypoints are not a complete representation of the geometry means we had to be fairly conservative so that we didn't crack mugs on the table as we went around; we had to pick fairly conservative trajectories for our keypoints. But you can put this together with other deep learning tools: for instance, if you imagine the missing part of the point cloud and have a completed shape of your object, then you could put the entire geometry moving through space as a constraint in your planning system, and that added some richness to what we could do; we could do more realistic collision avoidance constraints. Now this one's pretty cool: people also talk about learning oriented keypoints, where you have not just the XYZ location but maybe also the axes, the three-dimensional coordinate system. If you do that, and you know the keypoint and its axis on the object I'm manipulating, that turns out to be enough to do interesting control with. So if you wanted to regulate the force at the end effector, at a screw or an eraser or whatever, then you can write a controller making the same assumption, that the object becomes rigidly attached to my hand when my hand is closed on it: I can start regulating the forces of my hand at the point defined by the keypoint and with the
orientation defined by the keypoint. And that's enough, with that pretty simple pipeline, to do some pretty cool stuff. Wei was able to pick up various Lego blocks, insert various USB keys, do mating tasks; all of those are sort of force-sensitive tasks, putting Lego blocks together or putting a USB key in, and you could do that with very little knowledge of the object, just understanding its geometry at the level of a keypoint and applying these tools. There we go: you take any possible eraser, you want to apply a wiping motion on the screen, and the controllers that we'll talk about more soon are enough to regulate the forces pretty well. Yes, please? Awesome, so what are we assuming here? We are assuming that keypoint one versus keypoint two is fixed given the initial observation, but it's not fixed to some canonical model; my model does not assume that I know this a priori, but we do assume that the keypoints are rigidly attached and move rigidly through space. That's right, exactly right. There are versions of this that people have done, David Held's lab for instance, that use keypoints or other particle-level representations to do this for deformable objects; there are definitely extensions like that, but the simplest version is to just assume they're going to move rigidly. Good. Other questions? Yeah, it's a surprisingly powerful pipeline, I'd say. One thing that people don't like about the keypoints: I remember when we did those demos, the thing that every single person asked was, okay, but would you hand-label the keypoints? You're going to learn the keypoints next, right? And I took offense, because I actually think at some point the human has to say something about the task.
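One common way to consume keypoints in a planner, sketched here with names of my choosing (this is the generic Kabsch / orthogonal-Procrustes solution, not code from the lecture): if the task says "move these keypoints to those desired locations," you can solve for the rigid transform that best carries the currently observed keypoints onto their targets, and apply that transform to the grasped object.

```python
import numpy as np

def best_rigid_transform(P, Q):
    """Least-squares rigid transform (R, t) with Q ≈ P @ R.T + t, via the
    Kabsch / orthogonal Procrustes solution. P, Q are (N, 3) keypoint sets."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t

# Currently observed keypoints on the object, and where the task wants them:
# here the target is the source rotated 30 degrees about z and translated.
P = np.array([[0.0, 0, 0], [1.0, 0, 0], [0, 1.0, 0], [0, 0, 1.0]])
c, s = np.cos(np.pi / 6), np.sin(np.pi / 6)
R_true = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1.0]])
t_true = np.array([0.1, 0.2, 0.3])
Q = P @ R_true.T + t_true

R, t = best_rigid_transform(P, Q)
assert np.allclose(R, R_true) and np.allclose(t, t_true)
```

In a real planner this exact-pose objective would be loosened into costs and constraints on the keypoint locations, as discussed above, but the least-squares solve is the simplest instance of the idea.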
And I think, in my mind, the keypoint is like a minimal amount of information to ask of the human that defines the task, and I think that's true. There's a role where you have to have semantic keypoints, where the human applied some amount of semantic information: this is the handle, I want you to pick it up here. But there are also ways you could use keypoints where the semantics aren't important and they're really just a summary of the geometry, and in that case I think people have done beautiful work on learning keypoints. You can, for instance, self-supervise and try to find ways to label keypoints; one of the best examples of that, I think, is this KETO work on learning the keypoints that were relevant for some forceful manipulation tasks. So learning keypoints is absolutely a thing, but I think if you do learn them you don't get to call them a label; you don't have the human-informed knowledge attached to them, you don't have the semantics attached. But it's absolutely a thing. Okay, so you can actually take that idea even farther. This has been the sparse keypoint story so far, but there's really no reason to make them sparse: you can go ahead and try to learn dense keypoints that cover the entire geometry, and if they're consistent, then they take on a different sort of notion. So there's another representation called Dense Object Nets, and I'll tell you some of the details of this one too; this was very enabling for us. When I show you these pictures, this is what I hope you see: on the left we have a canonical image of the object and the task, the object in this case being an MIT hat. We have someone holding their mouse over the object at a particular point, and maybe moving it around to make the demonstration
interesting. And now we're seeing a different playback, and the goal here is to find the associated keypoint, if you will, on the hat in the other frames. This is also called dense correspondences; it's exactly the same as what we meant by correspondences in the ICP pipeline, and it makes total sense to try to learn correspondences. Remember, in the ICP loop, once we knew the correspondences, extracting the pose was easy, if that's our choice, or maybe we don't want to do it. So it makes total sense to try to solve the hard combinatorial part of the problem by learning, and then allow additional tools to work from there. So asking for a different representation now, not just sparse keypoints but dense keypoints, is I think really powerful. Is that image clear? I'm going to show a bunch of them, so I hope it's clear. You can see the uncertainty there, but the big thing that changed is that this is not n sparse keypoints: we could put the mouse anywhere on the hat, and for any possible place on the hat we'll show you a distribution of possible correspondences. Okay, so let me tell you a little bit about how a standard correspondence network would work. When we're going from every pixel in the original image to every pixel in the final image, we're not going to use its own heat map for every possible keypoint; that would be the logical extension of this, but it gets pretty expensive. So we're going to do a slightly different representation here, based on some of the ideas from self-supervised learning. We're going to take RGB in, put it through our neural network, and the thing that we want out is a dense descriptor image. Whereas RGB has some width by height by three channels, so it's width by height by three, it's a tensor, but each color channel is an
image of width and height, and there are three of them, R, G, and B. Here I'm going to put out a different image that is colorized, roughly, in this arbitrary extra dimension of descriptors. So I'm going to map every pixel in my original image to some descriptor space. I don't know what that space is going to look like exactly, but I'm going to ask it to have certain properties, in particular that it gives correspondences: if I have two images where I know I have the same point on the object, then they should arrive at the same place in dense descriptor space. When I draw these pictures, we chose D to be three so that we could render it as an RGB image, but you can choose D to be higher; it doesn't have to be just three. And this is trained Siamese-style with self-supervised learning. We take two images that we know have the same points in them; I'll show you the pipeline in a second, but we're going to do that same dense reconstruction trick, and know from two different images that there's a point on the geometry that, if I back-project it, should be the same point in both of those images, and then there are a bunch of points on the object that should be different points. So I'm going to put two images through my dense descriptor net, get my descriptor images, and I basically give positive reward for matches in descriptor space; I also have some negative examples, and I do some hard negative mining to say this is a non-match. So: start off by making the robot move around, come up with a dense reconstruction, and then label each point on the caterpillar, in this case. This is a plush toy that we got; it has lots of interesting buckles, so we thought it'd be good for learning manipulation, but we ended up just using it for perception. Everybody wonders why we have this strange caterpillar in the lab.
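The property being trained for, same point on the object, same descriptor, also tells you how to query the network at test time. A hedged numpy sketch (names mine; random arrays standing in for real descriptor images): the correspondence for a pixel in image a is the pixel in image b whose descriptor is nearest in descriptor space.

```python
import numpy as np

def find_correspondence(desc_a, uv_a, desc_b):
    """Given two dense descriptor images of shape (H, W, D), return the
    (row, col) in image b whose descriptor is closest to the descriptor
    at pixel uv_a in image a."""
    d = desc_a[uv_a]                               # (D,) query descriptor
    dist = np.linalg.norm(desc_b - d, axis=-1)     # (H, W) distance map
    return np.unravel_index(np.argmin(dist), dist.shape)

rng = np.random.default_rng(0)
desc_a = rng.normal(size=(32, 32, 3))              # stand-in descriptor image
# Simulate "the camera moved": the same descriptors, shifted by (3, 5) pixels.
desc_b = np.roll(desc_a, shift=(3, 5), axis=(0, 1))

# The point at (10, 10) in image a should correspond to (13, 15) in image b.
assert find_correspondence(desc_a, (10, 10), desc_b) == (13, 15)
```

The (H, W) distance map here plays the role of the heat maps in the demo: its spread is exactly the distribution of possible correspondences shown under the mouse.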
Okay, but we take all those images, and this time it's deformable; there's no rigidity assumption here. And then we say: this point here, which I know to be the same point in a different frame, should arrive at the same place in descriptor space. So we can write loss functions like this. For matches, we take the neural network output from image a at the known location, minus the network output from image b at the known correspondence, where the pixel in a is known to match the pixel in b; I minimize that squared distance, normalized over the total number of matches. And then I take a bunch of non-matches too and play a little trick that looks like roughly the opposite of this, which says I want points that should not be the same to have a large distance in this space, up to some threshold. You sum those two together and you get your pixelwise contrastive loss. There are a bunch of tricks that people do to make this work better: for instance, normalizing so your descriptors are on the unit sphere seems to be a good idea; data augmentation is absolutely a good idea; people do background domain randomization as a particular form of data augmentation. All those tricks that people do in similar pipelines are applied here also. And then what you get out: you take your caterpillar, and even though it was stationary when we scanned it, there's nothing in the network that requires it to be rigid. What you want to see is that the colors, when we choose D to be 3 and draw the descriptors as an image, are such that as you move the caterpillar around, the same points on the caterpillar roughly come up with the same colors in all the frames. And it's surprisingly good. We have a Baymax doll in the lab, and it's surprisingly good.
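The match/non-match loss described a moment ago can be written in a few lines (a toy numpy sketch with hypothetical names, omitting the normalization and hard-negative-mining tricks mentioned above): matches are pulled together, and non-matches are pushed apart up to a margin.

```python
import numpy as np

def pixelwise_contrastive_loss(desc_a, desc_b, matches, non_matches, margin=0.5):
    """desc_a, desc_b: (H, W, D) descriptor images. matches / non_matches:
    lists of pixel pairs ((row_a, col_a), (row_b, col_b)). Matches are
    penalized by squared descriptor distance; non-matches by a hinge that
    is zero once they are farther apart than the margin."""
    l_match = np.mean([np.sum((desc_a[ua] - desc_b[ub]) ** 2)
                       for ua, ub in matches])
    l_non = np.mean([max(0.0, margin - np.linalg.norm(desc_a[ua] - desc_b[ub])) ** 2
                     for ua, ub in non_matches])
    return l_match + l_non

da = np.zeros((4, 4, 3))
db = np.zeros((4, 4, 3))
db[0, 0] = [1.0, 0.0, 0.0]   # a descriptor that is far away at pixel (0, 0)

# A perfect match contributes 0, and a non-match already farther apart
# than the margin also contributes 0, so the total loss is 0 here.
assert pixelwise_contrastive_loss(da, db,
                                  matches=[((1, 1), (1, 1))],
                                  non_matches=[((1, 1), (0, 0))]) == 0.0
# A non-match with identical descriptors pays the full margin penalty.
assert np.isclose(pixelwise_contrastive_loss(da, db,
                                             matches=[((1, 1), (1, 1))],
                                             non_matches=[((1, 1), (2, 2))]),
                  0.25)
```

In the real pipelines the match and non-match pairs come for free from the dense reconstruction and back-projection, which is what makes the training self-supervised.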
Okay. This is now the same demo again, where the mouse is over this point; this is the old version, and this is the new version, where we got much tighter predictions by playing some of those extra games about normalizing the descriptors and things like that. And it's surprisingly good: you can go down the left leg, right leg, and it gives you a distribution of possible keypoints. If you wanted to extract a particular keypoint out, you can of course say, there's a descriptor here, what is the peak value of my uncertainty map in the other image? That's an operation that's natural to do and can be done differentiably, for instance. But it turns out, and this is something that we didn't actually have any reason to expect, that if you trained it on a bunch of different hats, then somehow the dense descriptors, and I would say this is actually a limitation, it worked and we exploited it, but we didn't understand it well enough at the time for it to be a reliable thing, somehow learned a category-level descriptor. We trained on a bunch of different hats independently, and then we could put the mouse on one hat and it tells us the correspondences on all the hats. Something about the fact that it can only fit things in D-dimensional space in certain ways somehow made this happen. If you want to learn different hats independently you can; that's what this other side was doing. You just have to train with all the hats in the image at the same time, and if you're specifically saying don't match the point of this hat with that hat, then it will learn not to; but without that pressure it somehow seems to pick points that are geometrically related across different objects in a category. So think of that as a dense self-supervised keypoint: there were no human labels anywhere in that pipeline. We scanned the object, and it
did its thing from there. And that alone, depending on what your pipeline needs to be after the fact, is actually enough to do some interesting things. So if you just say, I want to pick up the object, I want to pick up the caterpillar by its ear, by its tail in this case, and we put it down in all kinds of different places, it'll pick up the caterpillar by its tail. You can deform it, you can change it, it'll pick it up by its tail, just by having a correspondence function. Pick up by its ear: okay, pretty good. Let's take a quick stretch. Seventh-inning stretch, that's what it should be called, right, the seventh inning. All right. So, hey, you should ask high-level questions or low-level questions, but is that landing? These are different representations; they are fundamentally not just summarizing the state of the world as its pose, and they're sufficient for control, but they required us to think about control a little differently down the pipeline. Yes? [A question about whether a volumetric version would work, for something like clay.] That's a neat question, so I'll repeat it for the people watching at home. The question is: these points we're registering are always on the surface. In fact, when we make the 3D image, we're actually using the depth image to project our colorized descriptors; the output of the network is an image in this case, and we are actually projecting it onto the point cloud and spinning our camera around to make that image. And you say, could we do a volumetric version of this where you'd actually correspond into the body? As long as you trust your reconstruction enough to talk about a penetrated point being the same in both cases, I don't see why you couldn't do that. I haven't seen it done. Have you guys seen it done? Did you do it? Is that what the... do you consider that to be in the neural descriptor
okay, all right — good, good. yes — what are the pros and cons versus the key points you were doing for the categorical objects? yeah, awesome question. so our dense descriptors — I've actually seen people use them over and over again in lots of different applications. the correspondences they do pretty well, and I have seen that be successful. but they don't have any semantic information. I was going to talk at the very end about some of the things that are not here: the notion of object is sort of still missing, the notion of dynamics is missing. you could potentially train one per object, for instance, and have correspondences per type of object — we did call it an object representation — but certainly the dynamics of the object are missing. it doesn't tell you anything about the mass; it's not going to help you crack an egg; it doesn't tell you how things are going to evolve. absolutely, yeah — for both of these examples that's absolutely missing. but for moving nearly rigid, or possibly slightly deformable, things around, it's a pretty powerful pipeline. I don't know that we've finished thinking about how to plan and control with it either, so there's work to do even just thinking about the right way to consume that information, and the uncertainty in that information. but let's say I've seen enough people use them — both of these, in various forms people have implemented them in various capacities — that I would trust them to work. I wouldn't be afraid of saying you could grab the repository, grab some of your own data, and expect it to work. that's not the case for every tool we've played with, but this one seems pretty robust. okay, yeah, so
this is a class — well, just two examples of a big class — that I think is super powerful. I guess I forgot to show: these are the dense descriptors on that box pipeline I talked about before. it's interesting that, if there are symmetries, it could learn correspondence functions that are only good up to the symmetries — you wouldn't expect it to be able to do better than that — but that's still a super valuable representation of the object for going ahead and manipulating things. and that's our messy lab, yeah. this is Anthony's extension of it, which I hadn't thought about as a volumetric thing, but there you go — so, neural descriptor fields, you should check it out. the emphasis in the title was on the SE(3) equivariance, to be able to do relative coordinates, for instance; the video actually describes the pipeline very nicely. still mugs on racks — it's the thing, it's pervasive in the field. okay, so here's the — let me pause that before I run it. but here's the thing, so let's compare. remember I said at the beginning that, for now, we're doing representations for the rest of our existing pipeline, and that is putting a constraint on our representation space. so what I'm basically saying is that we're taking RGB, or RGB-D, or some combination into our neural network, and we've said we're going to do it like a human-designed pipeline. humans are pretty creative, but somehow I think that's putting pressure on this to be interpretable in some ways. and the big thing that is of course happening is that people are asking bigger questions now about: what if I remove that assumption and use a learning back end? you could just say I'm going to train end to end, from my neural network right through my learned control. and the thing that's exciting about that is that it really
does remove that requirement, right — if it needs to represent uncertainty, then it will represent uncertainty in order to get the job done; if it needs to represent something specific only to a task, maybe it'll do that, and only capture the relevant parts of the scene in order to accomplish the task. but the design strategies we have for these components are much, much weaker. so you end up using reinforcement learning or imitation learning — we'll talk about both of them — and they're not as generalizable, and they consume massive amounts of compute, compared to these kinds of planners. so I think this is a slightly artificial distinction to some extent, but what the field has done in the last few years is that more and more people are just saying: I don't like this constraint, I don't know what the representation should be, it should be whatever the learning produces, and therefore I must use learned control. and I think there's a lot more to do where you can have rich, possibly uninterpretable representations here and still do really good control over here. so that's a personal agenda for me: to embrace learned control when it makes sense, but also to remind people that maybe it's not the only way. okay, so we'll talk a lot more about some of the general approaches to control up here that could consume richer, possibly not just kinematic, models of the state. and there's a big topic of learning state representations. so, in answer to your question, David: I think these examples are some impoverished state representation that is sufficient for some kinematic tasks, but they're an impoverished notion of what the state of the world really is. like I said, cracking an egg is just an extreme example — or boiling, I don't know — you can imagine a
bunch of things which would have a lot more state than just current positions as specified through the correspondences. so thinking about how to find those richer representations, and writing learned or hand-written controllers around them, is a big agenda. okay. so when we do imitation learning, we found in our lab that using these dense descriptors as a pre-processing step, in order to then train a neural network policy, does work incredibly well. this is just taking those same examples where we're using the dense correspondences — for instance on the hat; I'll tell you more about this when we talk about imitation learning — but it turns out that if you just give your learned policy a handful of key points on the hat, trained through dense descriptors, and ask it to put hats on the rack, or to learn controllers that can move plates around and do more dynamic tasks, it's been surprisingly good. okay, so we'll talk about that in the imitation learning section. and I really think these are just two examples of a big class of approaches that are thinking about novel representations for the geometry. one that I like, from Andy Zeng and company, is Transporter Nets, where roughly what they're saying — there's a lot of interesting things happening in the paper, but maybe at the high level — is: my assumption about the dynamics is that when I grab these pixels, those pixels are all going to move together. and that allows them, with a rich pipeline, to do incredibly general and useful tasks — picking random unknown objects and putting them into bins and things like that, just from looking at the raw perception. having a model that says if I pick here, those things are going to transform, through a standard rigid transform, into the new place, allows me to do a lot of rich tasks. so the deep perception world is alive and well and it's
moving super fast, and I think you'll find many things that will change the way we should program our robots. good — I'll end a few minutes early, I guess, and I'm happy to stick around and answer questions for projects
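The Transporter-style assumption mentioned above — that the pixels you grab all move together through one rigid transform — can be sketched as a simple planar rigid motion applied to a set of picked pixel coordinates. This is a toy illustration under my own assumptions, not the actual Transporter Nets code; the function name and values are hypothetical.

```python
import math

def rigid_transform_2d(points, theta, tx, ty):
    """Move a set of picked pixel coordinates by a single planar rigid
    transform: rotate by theta, then translate by (tx, ty). This encodes
    the 'pixels grabbed together move together' assumption."""
    c, s = math.cos(theta), math.sin(theta)
    return [(c * x - s * y + tx, s * x + c * y + ty) for x, y in points]

# Three pixels of a grasped patch, rotated 90 degrees and shifted in x:
# (1, 0) rotates to roughly (0, 1), then translates to roughly (5, 1).
patch = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
print(rigid_transform_2d(patch, math.pi / 2, 5.0, 0.0))
```

Because every grasped pixel obeys the same transform, a planner only needs to search over the pick location and the placement transform, which is a big part of what makes the pipeline general.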
Robotic_Manipulation_Fall_2022
Lecture_1_MIT_6421064212_Robotic_Manipulation_Fall_2022_Anatomy_of_a_manipulation_system.txt
i'll start talking consistently now, and it sounds like it's going through the room at least, so that's a good sign. is that coming through on the live stream — are you able to hear me? oh, it's a little bit delayed, i guess — i guess i've already spoken. there's some seats here; i think sitting on the stairs isn't terrible. yeah, there's some seats here. you think it's making noise? oh, nice. okay, hi everybody, thank you for coming — thank you for approximately the right number of you coming. i didn't actually give you any information that would have been helpful to decide whether you should come or not, but exactly the right number of people came, plus or minus a few, so thank you for that. i apologize for the room scheduling. i'm glad that the class is popular, but i also don't envy the role of the scheduling office right now. it's not just that there are classes that have big numbers; it's that if you look at the registration numbers across the classes — a histogram of these, a time plot of these numbers — they go like this, you know. so the scheduling office has a very hard prediction problem, and they're going to do their very best, i hope, to get us a bigger room, but we'll roll with it as it happens. so, welcome to robotic manipulation. it was important, i felt, to have that qualifier in the class name — i would have loved to call the class just manipulation, but i thought if somebody who doesn't know that we're talking about robots saw that — maybe from political science or something like that — they would think of something very different, so i thought let's qualify that a little bit. there was an early version where we called it intelligent robotic manipulation, but i didn't want some other manipulation class to come around and then it looked like a put-down or something like that. so robotic manipulation it is, and i think it's going to
be a fun class. there's a lot happening — the field is alive with progress, there's more robots out there doing more cool things now than ever before, and it's just an incredibly exciting place to be. so i hope to capture some of that enthusiasm for you, even today, but certainly throughout the course of the term. let me start by introducing us. i'm russ, i've been here for a while teaching robotics. we've got an excellent teaching staff — i have them all sitting together right here: boeyen's here, and anthony is here, and ria is here; they're the tas for the class. if there are a lot of people in the class, we might get that last seat filled — we'll see. the one most important bit of information, which is not a hard thing to remember, is that the website is at manipulation.mit.edu. i'm not giving out any handouts; all of the course information — grading rubrics, collaboration policies, all of the things that i am officially giving you — i'm officially giving you with that link right there, and if you have any questions or thoughts about that, i'm happy to take those. there's an extra piece of the course — it started last year and it's continuing this year. if you're in the undergraduate version of the course, it counts as a ci-m now. as the department has grown, and now we have ai and decision making as a core part of the department, we wanted more of the ai and decision making courses to be able to count for the ci-m requirement, so natural language processing and this course have now taken on the ability to be a communication-intensive-in-the-major class. we have excellent teaching staff from cms that are helping us with that — david's here; i think maybe nora and liz decided to save seats for the rest of you today. so the way you should think about that: the exact distinction between the two different tracks is in the four-dot numbers. it also makes me sad, by the way — last year we had the
number 6.800, which — i felt like i won the lottery getting 6.800 — and now we've got 6.4210, which is not as cool. but if you're in 6.4210, that's the undergraduate version of the class: it counts as 15 credits, you get one extra recitation on fridays, and i'll tell you a little bit about what comes with that — all the great things that come with that. and if you're in 6.4212, you're in the graduate version of the class. the detailed differences are on the website; we can answer questions. at a high level, both groups will be doing a final project. the undergraduate version will be doing a really good final project, coached by some of the best communication people we have, and i would say at the end of last year the people that were in that track had some of the best project presentations, videos, and reports that i've ever seen. i think a big part of this class actually is this product that you'll have at the end. we've seen some pretty amazing videos — you should go watch some of the videos from the previous years. we had a robot playing a concerto on a simulated piano; we've had some amazing things. and it'll look great on your portfolio, your cv — i had people from industry saying, oh my gosh, who did that project? so i think it really is an amazing opportunity. the recitations don't start tomorrow; they start a week from tomorrow, before we've taught significant topics in manipulation. in those recitations we're going to be reading some research papers and understanding the rhetorical analysis — how to write a good paper in manipulation — and then that will graduate into the work towards the project. the graduate version will also do a project; the emphasis will be more on the technical and less on the communication, and the graduate
version also gets a few extra problems and other things that expect a higher level of maturity on the problem sets and the like. okay. so, like i said, the information about the grading policy, collaboration policies — everything is on the website, including the differences; if you scroll around, it talks about the exact differences, what it counts for, what it doesn't count for. it all changed last year, so i expect questions, and there might be details that we didn't get. and then the last thing i'll say about logistics, before we dive into some robots: right now — and i'm going to say this again at the end — just go to the website and click on the link to join the piazza group. apart from the one email i sent through the registrar yesterday, we're going to do all of our communication through piazza. please review the course guidelines now so you're not surprised by anything later. the lecture notes are all online at that same website, and the schedule's online. we're going to have weekly problem sets throughout the class, typically due on a wednesday cadence. we'll have office hours to support that, probably friday and monday, and we'll see — maybe we'll give one right before it's due. the first one will be released late tonight or tomorrow; it'll be a light one, just a warm-up, and it'll be due next wednesday. okay, and then there's a lot of emphasis on the final project, and they've been really, really good — in fact, i should queue up a few of the good ones to make sure you've all seen some of them from last year; they can be really exceptional. okay, the course notes — when i say read, please also comment on the course notes. this is what you get when you go to the manipulation website: it's html notes. some people hate that, they want pdf, and you can get the pdf if you want, but with the html i'm trying to do more than you can do in a pdf — i'm
trying to have really interactive content. there's animations that you'll be able to play, there's interactive simulations you'll be able to play, and it's interactive in the sense that you can go in and comment, kind of like a google doc. we have discussions on there — like, "i have no idea what you meant by this sentence, russ" — and i'll try to say, well, i tried to say this, and people really do help the notes along and, i think, get to the bottom of some issues. and the link to this course being taught at mit is right there on the website. so i hope you take a look at that, i hope you use it, and i'm happy to take feedback on it. there's also all these links to the online notebooks that go along with the course. one of the cool things about it — i'm testing now my suspiciously bad connection to the mit network, which is interfering with the stata network — one of the great things about all the class infrastructure is that now we can allow simulations to load over the internet with no installation. it used to be that we would try to help people limp through making all of our robot code run on your machine, and people would come with a win32 machine or, like, ubuntu 12 or something like this. that's all gone now, because there are online cloud resources that come provisioned. you just log on to deepnote — if you've used google colaboratory, we used google colaboratory for a while, but we've switched to deepnote because it's easier to provision a stable environment: colab can change the requirements out from under me, and they always do it on the first day of class, but with deepnote i get to provision it with a docker image. and so you should basically be able to go to the website and instantly run the code — even the visualization, which is loading, is not loading very well for me right here, so i'll run a local version — it should all just run with no
installation on any machine you've got. i'll just run a local one here, so it looks like this — this is the intro notebook just running locally. you'll get a little visualization in the web, and then if you run the very first example you'll get a little robot up here, and you can go into the controls and, right through the web, you can drag your robot around. i'll see if i can pick up the little red brick here — it'd be sad if i can't — it's close, a little back up. it's easier with a joystick, but it's a full physics engine: if i grab the brick, pull it up, i could probably even throw it if i was really good. okay. and that's just going to hopefully work seamlessly for you. that's a long road to get to the point where it really mostly just works — every once in a while the cloud services will have an issue or something, but we've been pretty lucky with that, and it works pretty darn well. ah, see, it just loaded slowly over the mit network, but it was there. okay, so my goals for today are to give you a tour through what you're going to learn in the course, and also to give you a little bit of the initial thinking about not only the components of a manipulation system that we're going to talk about — perception and planning and control — but also the way that we think about it in this class, which is a little bit different, i think, than your average manipulation class. i try to take a little bit of a systems theory perspective when it fits, and i want to make sure i make some of those connections for you today. but let me start by just making sure we know what i mean by manipulation, because actually i find that a lot of people, when they hear robot manipulation, think of fairly narrow examples of robots that are just doing pick and place, for instance — and actually manipulation is much more than pick and place. if you take away one thing from today's lecture, i hope you'll say: manipulation is more than pick and
place. okay. matt mason, who's one of the leaders and big names in the field, from carnegie mellon — he wrote an excellent review on robotic manipulation a few years ago, and one of the interesting things he did is he really tried to think deeply about what it means to be manipulation. he actually gave five definitions, because he couldn't narrow it down, i guess. the first one was just "manipulation means activities performed by the hands". i won't take you through all of them, but he eventually got to "manipulation refers to an agent's control of its environment through contact" — i like that very much; i think through selective contact. i think it captures what robots are supposed to do — what makes robotics special, compared to, let's say, computer vision or natural language or something like this, is you get to move stuff, you get to change the world. and you could argue — maybe i will argue — that if we're really going to solve intelligence, it seems hard to imagine solving intelligence as a passive observer of the world through cameras. i think being able to pick stuff up and move it around and interact with the world seems pretty essential to our natural intelligence, and that's what this class is about: filling in that part of the artificial intelligence spectrum. okay, so if i take that little example, like i just showed you, of a robot moving a brick around, we can think through what that's going to look like — that's a pick and place example. but i really want to say robotic manipulation is not just pick and place. this is clearly robot manipulation by matt mason's definition, and it's way harder than picking up a red brick and moving it to the side — if you look at the rich contact that's happening between his fingers and the shoelaces
and even just the dynamics of the shoelaces — robots aren't doing this yet. so we have grand challenges just in the mechanics of manipulation, but i'll show you some examples, too, where even if we're doing pick and place, if we're trying to do it out in the wild, things get pretty rich and pretty complicated in other axes. so i would say that's where i feel like matt's definition doesn't completely capture the goals for the course. matt says manipulation is about contact, and that's of course true, but if you think about doing manipulation not just in a closed environment or in a factory, if you are out in the broad world and you want to send a robot out to do manipulation, then broader requirements come into play. being "in control of the environment" — that's like an arbitrary loophole in the definition where we can inject having to understand everything about the world: having a very rich perceptual understanding of the environment. i don't mean putting a bounding box around a person — that's good, but i need to know how much mass things have, what's an object, what's not an object, what happens when i push it. these are demanding things that a computer vision system doesn't typically give you out of the box. this common sense — understanding what's going to happen if i push something, am i going to topple a pile if i push it at the bottom — these kinds of things are grand challenges in ai, and i feel like they're under the umbrella of manipulation. okay, then there's the ability to make very long-term plans at the task level: what am i gonna do to get the milk out of the fridge? i've gotta first open the fridge, then i gotta move the pickles out of the way — i'm not sure if you keep your pickles close to your milk, but you know what i mean — and then you
reach back to get it — there's a lot of steps involved in doing a manipulation task, which require a pretty high-level understanding of the world and reasoning long into the future. so that's in the course material, too. and then, once you've decided to do that, you've got to figure out how to move your motors and your joints to make that happen, right? so combining those different levels of abstraction is a grand challenge that we try to face. so let me show you a system that exemplifies some of that. it was a project a few years ago at the toyota research institute — tri, which is just down the street; i've been working with them for a number of years now — to try to make some of the larger-scale examples of manipulation and take them to higher levels of maturity. and this is an example that i learned a lot from. the question was: could you take a big robot — this is not something that we are advertising you put in the home, but it is a robot that we have today that works pretty well — and if someone put it in front of their sink and asked it to load the dishwasher, could it do it? what are all the problems involved in doing that? so the problem, in the open-world manipulation sense, is: someone comes and dumps whatever random things into the sink. amongst them, some are dishes — some mugs, some plates, some spoons. there's a dishwasher right next to it, and the task is to open up the dishwasher and start putting the mugs in the top rack, the plates in the bottom rack, the trash off to the side, and the silverware in the little silverware rack. okay, and this is a complete manipulation stack that did all of those components — perception, planning, control, high-level reasoning. it took a lot of work to put it all together, and then it took a lot more work to try to get it to operate at a very high level of repeatability. it's the same challenges that autonomous driving companies are facing
these days: it's one thing to make a car drive down the road and make a video, but to make it never crash, you have to deal with all these long tails of the distribution — all the random things that could happen with the lighting conditions, with the stuff that's in the sink. and taking this system to maturity was an excellent exercise that really changed my view of what the hard problems are. if you look down at the details, the individual skills that it had to do were actually fairly complicated, from a control perspective and from a motion planning perspective. in some cases it had to open the dishwasher door; it would nudge things out of the corner — this is partly because it had an enormous hand and a small knife stuck in the corner, so you can't do what a human would do, which is just pick it up; it had to take a different approach. the plate is sort of my favorite one: it had to pick up a plate from a stack of plates, and this big hand had to kind of go in — and this is a feedback law, constantly monitoring its sensors — to slide under until it knew that it had pushed far enough, grab the plate, pick it up, and move. each of the individual pieces of this was actually pretty sophisticated. okay, but then you had to assemble that all into this higher-level machinery. and, by the way, a big part of that was using simulation — that video right there, if you can tell, is actually a simulation of the robot picking up a plate — and the way we were able to get that to a pretty high level of maturity was by having a very good match between simulation and reality, which we worked very hard on: getting to the point where we could stress-test in simulation, find the corner cases in simulation, and expect those corner cases to start disappearing in reality. okay, so, a lot of work on simulation, and that's a relatively new thing. i said this in the notes, right — a few
years ago, i remember when we were doing humanoid robots, and i was talking to my students in the lab: we should be doing manipulation in simulation too, because it's working so well for our walking robots. and i remember they looked at me like, russ, you can't do manipulation research in simulation — it depends on perception, and you can't simulate perception well enough; the subtle dynamics of contact — you can't do that in simulation, simulators aren't good enough. okay, and then it changed. a few years ago, computer graphics renderers got good enough — you use blender, for instance, as your renderer; it's an open-source rendering engine. and everybody was suspicious that if the rendering wasn't perfect, then a machine-learning computer vision system would cheat — it would know how to use the artifacts of the renderer to solve the problem, and it wouldn't actually work in reality. but guess what: the renderers are good enough, and people train in simulation and get it to work in reality. that's changed — people now think you can train perception systems. the other big aspect of that is the physics. the physics engines — the real-time-compatible physics engines — were not good enough. they were working for legs, which are actually relatively easier to simulate for a walking robot, but for the delicate interactions required in manipulation they weren't good enough a few years ago, and now they're getting to be good enough. there are nuances in the different simulators, but we've seen dramatic success in transferring results from simulation into reality, if you do the work to make sure your models match the simulator. and all of those components had to go together into this high-level planner, so that it could handle someone messing with it adversarially — at boston dynamics they kick the robot, and that's cool; we just close the dishwasher drawer
that's not quite as cool, but it tries to make the same point: if someone came and messed with your robot, it had to be smart enough to recover. in that case it was putting a mug in the top rack, and it had to realize, oh, someone closed the dishwasher drawer — i'm going to set the mug back down, open the drawer, and pick the mug back up, because i've only got one hand. that's pretty annoying, right? and you could do it all day long, and you feel bad for the robot. but yeah, so that was a complete system, end to end, and that's kind of the goal for the class: to help you build out a complete system and understand the nuances, some of the interesting parts of the algorithms, at each level of that hierarchy. okay, so there really is a kind of ladder of complexity. there's high-level reasoning, if you will, which involves scene understanding — being able to make sense of what objects are in the world, where is the milk in the fridge, deciding to move the pickles before you pick the milk — and then there's low level, like how do i feel forces on my hand and decide i should do something a little bit different. so it's very interesting to try to span that whole space. i come from the controls perspective towards this — some of you have taken underactuated with me — so let me just connect that: from my view of the world, why did i come from controls towards manipulation? okay, so before this we were doing humanoid robots — this was the darpa robotics challenge; that was an early version of the boston dynamics robot. it's doing backflips now; it was a lot heavier and not as backflip-ready back then. but we spent a lot of time on this robot, worked very hard on the control system, and we're very proud of what we did. we worked very hard on understanding the dynamics of that system, simulating that system, understanding its robustness properties, and the like. we got to the point where, you know, even when it
was getting out of the car that was the hardest part of the challenge by the way we had to drive this little car with this enormously big robot and then getting out of that car was like solving a jenga puzzle or twister but we worked hard on the feedback controller and we got to the point where even if andres is jumping on the back of the car the robot really didn't fall down so in some sense i think we know a lot about feedback control robotics has gotten pretty good at controlling complicated robots like that right even perception is a big part of walking around the world right so we had to use our onboard sensors to sense the world understand it well enough that we knew what surfaces we could step on and where we should not step right so we were solving problems like that this was 2015 era technology which was actually right before the machine learning boom right so this was all much more geometric perception at the time and like a year later we would have done it with deep learning probably but we had a good pipeline for perception but none of that gets me through the sink right so it's interesting to see where that sort of dies right the amount of understanding you have to do to know where to walk in the world is so much less in terms of understanding the world perceptually that it only scratched the surface on the really hard problems so this is an image of a sink with mugs in it and those points there are an estimate this is now a deep learning system that's estimating the poses of the objects and those different colors and their sizes are representing the uncertainty of the object poses and someone threw a napkin in there right and there's a mug underneath that napkin and it's pretty confused about whether that's actually a mug right now
and even just knowing that those things are separate mugs versus one i mean this is just a harder manipulation problem or a perception problem and it requires a lot more work not only in perception for the sake of perception but the connections between perception and control especially this is just an example of you know now i have to manipulate any mug okay and the number of mugs you can find if someone throws them in the you know in the sink they're they're pretty diverse so knowing how to manipulate a particular mug is not enough to understand how to manipulate all the mugs how do you program a robot in a way that basically you know solves the mug problem when you know someone could have gone to the disney store and came back with like a mug that looks like uh you know one of the seven dwarves right then that that just totally breaks a lot of our perception systems and uh and we've been trying to generalize their tools to do uh much more general robustness in a much more general sense i care a lot about feedback right we made atlas not fall down when andres was jumping on the car how important is feedback in manipulation okay um it's an open question i would say i mean i strongly believe in the answer but there's there's people out there that are absolutely not thinking about feedback in in the actual manipulation of the hand sort of problem what they're doing instead is building really clever graspers grippers right so this is a soft robotic hand that you can i mean it's just being told the squeeze but the dynamics of the hand are such that pretty much anything you put down in front of it it's going to make a nice conformant grasp around this and pull it over and for some class of problems these hands knock it out of the park okay but like i said manipulation is more than pick and place and if you look at how humans if you just watch yourself you know go home and just when you're making dinner tonight or watch or loading the dishwasher or something watch 
yourself the the things we do with our hands are so they're so hard for us to reason about they're so they're subconscious but there's always these like right here look at that she missed a little bit right and then does this corrective action and i uh you know i believe that actually we're missing out a lot by not having rich feedback loops connecting perception and tactile sensors to our our the commands we send to our our robot and the world is now seeing more and more success of feedback control and manipulation and so that's you know that connects to my my background as well this is just another you can watch if you watch high-speed video of yourself doing anything with your hands it it's amazing what we do you know and you don't even think about it right the way humans load a dishwasher is so different than the way we were having our robots load the dishwasher they you know the robot would try to line up the plate and stick it down in the slot humans just go you know and it's you know just rely on the fact that it kind of will fall into place we're so clever and we're so dexterous um and and robots are not getting it done that way so when i think about control for manipulation when i think about control for the humanoid we have a problem which is that the robot has some joints we want those joints and maybe the center of mass to go through some trajectory okay and and we know how to think about that we know how to build models for that and we can build models that work even on various terrain but we're thinking about manipulation it's not just about controlling the robot anymore that's part of the problem that's a sub piece of the problem but it's not just controlling the arm the state you're trying to control is the state of the robot but also the state of the world in this case the state of the red brick okay and that's what makes it interesting that's what makes it under actuated for instance okay um so for me it lights me up i think it to the to the point 
where i really think the next big thing that controls has to do that's a biased opinion but but the thing that will grow controls into the next set of great problems i think manipulation has a lot of those that richness and here's why um you know controlling the state of this red brick is sort of i kind of know how to formulate that problem at least maybe it's a hard control problem because of the contact mechanics but but i kind of know how to write that problem down but if i want to like chop an onion okay like what's the state of the onion if i want to simulate that you know is it changing every time the knife comes down like what represents what trajectory am i trying to stabilize i don't know how to think about that controls really doesn't have a lot to give yet in in terms of that state representation question learning's starting to contribute a lot and actually learning plus controls are coming together i think to address this this grand challenge of state representation for control right that and once those those you know states are coming in through a camera it opens up all kinds of interesting problems too so we're now seeing more and more you know feedback you know based control in manipulation i think it can make a huge difference it certainly makes a huge difference in the reliability of the demos you can see you know in practice the systems just feel much more real now and the big go ahead technology which we'll talk a lot about when it's when it's time is the ability to make feedback control decisions directly from the camera it used to be that we would kind of look at the camera decide what to do make a you know make a plan and execute sense plan act is sort of the old way to think about it and these are now visual motor policies where you're actually closing the loop on the feedback on the camera input at high rates and that really makes the difference between a a robot demo where the robot there's like robot air balls right where your about you 
know does something the world changed and it continues to do something as if the world hadn't changed and it's really embarrassing and uh you know we've had robots that fall down because they just thought the valve was there and it wasn't there and that's starting to change now we're starting to be able to close the feedback loop at high rates through a perception system this one's just i love this particular example this is just that like we did picking up the plate from the sink but now it's trying to do it from rich camera-based real-time feedback that's the nominal behavior but now we're trying to make it you know robust all kinds of perceptual changes right and the feedback there is only from the cameras but we're getting to the point where we're seeing more and more demos that that do what they should do in these kind of situations okay but underneath that i believe i really believe to my core that the way you get to that is by breaking that super hard problem down into simple models the same way we talk about in under actuated if um for those of you that taken it you know and and breaking it down into the the sort of the place where you can think rigorously about what's happening in the system at all the different levels so underneath that that technology are these you know relatively simplified models of physics that we can reason about we can practice on we can understand here's another fun example from from tri we're not doing dishes anymore we're making pizzas and the like i'll show you more videos of those throughout the term but this is just rolling dough another example of manipulation being a lot more than pick and place okay and again it's using visual feedback so if someone comes and throws down some more dough or whatever these systems are now getting more and more robust to real-time visual feedback changing the task and this is a case where you know i don't really know what the state of the dough is but i've still got to come up with a good 
controller that'll that'll do the task and these are the problems of the day and these are the problems that i think control has to grow to address on the um oops i put it in oh where did they okay well i have to find the other video but um there's a success video of the before the interesting failure cases um there's a there's a robot uh also at toyota where i've got the best videos from toyota they built this this incredible robot it's called the ttt robot and the task here is to go into a real grocery store not just some we have a mock grocery store at toyota but we also have a real grocery store down the street that we're collaborating with and the task is not easy obviously to to pick up all these objects and be robust and actually i'm very proud that they even let me show this video and you know because they think very seriously about the failure cases and they just say this is a hard problem we're going to measure how often we fail and we're going to make it better and better and better the same way but this robot in the success video the task is wake up you're in the this grocery store that you've seen before but you're going to be told some number of of items that you've never you know just from a list of hundreds of items you're gonna here pick these items and put it in my grocery basket and it drives through the store and with uh increasingly high success rate is able to sort of go through and understand and find the objects and and load a grocery basket right this kind of stuff's coming now that doesn't ex that doesn't apart from the complicated failure cases i just showed it doesn't stress as much the dynamics of a dexterous hand but the perceptual understanding of the world is really hard in this case right and some of the failure cases where you you thought you could pull the object out but it was actually in a box and the whole box tips out right these are really hard cases for a perception system to understand okay so that's kind of the motivation 
at a high level those are the kind of things we want to cover and now i want to tell you how we're going to cover them and what is you know tell you a little bit about the sort of the breakdown of the system and the the style the way we're going to try to connect the pieces of those manipulation systems to like to dynamics and control kind of uh dynamical systems and let me start by just saying that the the anatomy of most manipulation systems these days has ross as a big part of it how many people have used ross or know what ross is even i'm happy to yeah okay ross is the robot operating system it's probably one of the best things that happened to robotics you know a decade ago at this point and um it's a it's an ecosystem it's not an operating system in the windows or linux sense but it's an operating system in the it's an ecosystem where people are contributing different modules perception systems planning systems simulators for instance and ross makes it easy to connect them together right so those of you that raised your hand know this but let me just say a few things about it as a launching point to what the way we're going to think about things so we said that this is okay you guys can i'll see that even if i'm okay great so in ross if i have a perception system but i can make i can i can build components in a sort of a modular approach okay so maybe i have i start off i have a camera driver okay and i someone needs to write a camera driver and that takes a bunch of work you know especially as cameras change or whatever i've got some camera driver that has to talk to firmware and publish out an image let's say it could be a red green blue image for instance coming out okay there's another big chunk of work which is to come up with a perception system and maybe if that takes rgb inputs in it outputs in the simple case let's say the position of my red brick right in the onion it's a much harder question but in the red brick i could just tell you where the red 
brick is and that's pretty good okay someone else needs to write a planning system let's say maybe there's a high level planner and a low level planner but let me just say there's some sort of planning system that takes let's say the positions of the brick and the positions of the robot for instance and starts putting out joint trajectories okay and then we've got some low-level controller that thinks about maybe the dynamics of the arm and tries to realize these joint trajectories and maybe it has to send low level motor commands okay and at the other end of this i've got a motor driver okay and every one of those is a research area you know maybe this one and this one are each a research project but certainly all of these are massive research challenges and you know traditionally it was very hard for one research group to do all of them well okay and the big thing that happened with ross was that it became a standard for sharing components okay where maybe i could use a perception system from carnegie mellon and maybe a controller from dlr in germany and maybe i'll focus my research on the planning system okay and the way it works is it's based on message passing network interfaces and it's multi-process okay so basically someone can write a program here that does camera drivers and they will just publish on a network over ethernet for instance in a particular type of message that contains the data for an rgb image okay and all that we have to agree on if i'm going to use your camera driver is the format of that network message okay and then i'll write a perception system and maybe i'll agree to use your rgb image network packet format okay and i'm going to try to produce a position format that everybody agrees on okay and if we just agree on a few of the common message types that's it and everybody can write their own individual executables right this was the go-ahead idea that
made people really start being able to share their code and it's subtle actually i don't know how many people do a lot of software engineering but it's for somewhat subtle reasons i mean even compiling someone else's code on your machine and having the right versions of the dependency libraries all work together can be a real roadblock to trying to get some code from cmu or something to run on my robot okay by separating out the concerns of compilation and making this executable level decomposition of the task and only agreeing on the message type where it's easy everybody can participate even using different programming languages someone could write in python someone could write in c++ someone could write whatever all we have to agree on is the packet protocol on the network and with things like docker people don't even have to agree on the operating system people will run like a perception system in a docker container on ubuntu 14 or whatever and i can still use it on my mac because you know these kinds of things enable a level of modularity and abstraction that got roboticists to finally start sharing their code and really using each other's code and a new lab could start up a serious robotics project by picking the best components here they'd get a system that would actually run and do some interesting things and then drill down and start to work on the different components that was a major good thing that happened in robotics and it even started the culture of open sourcing your code right that wasn't a big culture beforehand but we're not going to use ross in the class this is the starting place but it doesn't serve my pedagogical goals okay so you can use ross if you want to but i think the connections here of only talking about what the message types are is a little too weak i think even a lot of companies
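the message-contract idea above can be sketched in a few lines of python. this is a toy in-process stand-in for ROS topics, not the real rospy/rclpy api; the topic names and message formats here are made up for illustration:

```python
# Toy publish/subscribe bus illustrating the ROS idea: components agree
# only on message formats ("topics"), never on each other's internals.
# Hypothetical sketch -- not the actual rospy/rclpy API.

class Bus:
    def __init__(self):
        self.subscribers = {}  # topic name -> list of callbacks

    def subscribe(self, topic, callback):
        self.subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, message):
        for callback in self.subscribers.get(topic, []):
            callback(message)

bus = Bus()

# "Camera driver": publishes an rgb image message (here just a nested list).
def camera_driver():
    bus.publish("rgb_image", [[0, 0], [255, 0]])

# "Perception": agrees only on the rgb_image format, publishes a brick position.
def perception(image):
    brightest = max((v, (r, c)) for r, row in enumerate(image)
                    for c, v in enumerate(row))
    bus.publish("brick_position", brightest[1])

# "Planner": agrees only on the brick_position format.
positions = []
bus.subscribe("rgb_image", perception)
bus.subscribe("brick_position", positions.append)

camera_driver()
print(positions)  # the planner saw the brick without knowing about the camera
```

the point of the sketch is that swapping in a different camera driver or perception system only requires honoring the same message format, which is the modularity the lecture describes.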
struggle with it once you you get it's very good for getting an initial system off the ground but then once you start trying to take a system to really high levels of reliability or whatever the semantics of how these systems talk together gets much more subtle and just promising that i'm going to publish at some you know whenever i want to publish for instance some position of the brick that might not be good enough and so i think the field is is you know on the path to higher levels of maturity ross is growing in this direction too okay of trying to to do a little bit more reasoning about not only the the way to write these systems individually but the way to connect them together okay so let me tell you about it from the perspective of dynamical systems and control the way a control theorist might have started this they would have drawn a very similar diagram a block diagram right so maybe it starts i'll start with a robot because that would be more standard in controls maybe this is the simulation for instance okay and i'm taking motor commands in and having some sensors come out some sensor signals come out now this is something that control theorists have been doing forever right maybe not with um onions right or laundry or something but but certainly for aircraft and for chemical plants and for all kinds of rich systems control theory has been incredibly successful and they have a modeling abstraction a hierarchy a modularity approach that's very similar in the pictures i've drawn but it's different in the details okay so i'll still think of this as a block diagram but i'm going to be specific about the details inside here so i typically will represent this as a dynamical system so i'll write it in a generic way today and it'll even be okay for a while so this is a difference equation where x is is used in a control sense to represent the state of the system which maybe in the case of my my robot would be the positions and velocities of the robot plus the 
brick u is my command inputs these are my motor commands coming in and x n plus 1 is my next state okay and so in this setting f has to be somehow my physics model right my physics engine here okay it's somehow connecting equations of motion that look like force equals mass times acceleration with these notions of state and next state acceleration is a continuous time idea derivatives right and somehow i've talked about a discrete jump from one state to the next but we'll talk about how to make those jumps okay now these equations might be familiar to you in simpler forms right if you took 18.03 here or a different differential equations course then you would have seen them first as let's say a linear set of difference equations you might have seen it looking like this or the matrix form of that where x is a vector right this would be a linear difference equation if you took an intro controls course you would have seen something that looked like this the state-space form of a linear difference equation from controls which is now a control difference equation this would be a linear control difference equation okay now i didn't require 18.03 or diff eq or linear intro controls as a prerequisite of the course but if you were to take those courses and many of you have taken 18.03 at least right you would have been able to start from those equations and thought a lot about the time evolution of these you could solve the differential equation given initial conditions you could talk about its stability properties there's lots of things you could potentially do and we don't need all that right away for this class okay so we're not going to use let's say the deeper content from 18.03 but we are going to definitely use the modeling language okay and you should see this f of x u as just a non-linear generalization
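the board equations being referred to can be written out and rolled forward in a few lines. this is a generic sketch: the A and B matrices are toy values for a discretized point mass, not anything from the course, and they implement the linear control difference equation x[n+1] = A x[n] + B u[n]:

```python
import numpy as np

# Linear control difference equation: x[n+1] = A x[n] + B u[n].
# Toy 2-state system (position and velocity of a point mass); the
# numerical values are assumed purely for illustration.
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])   # position integrates velocity each step
B = np.array([[0.0],
              [0.1]])        # the input is an acceleration command

def simulate(x0, u_sequence):
    """Roll the difference equation forward from initial state x0."""
    x = x0
    trajectory = [x]
    for u in u_sequence:
        x = A @ x + B @ u    # the discrete "jump" from x[n] to x[n+1]
        trajectory.append(x)
    return trajectory

traj = simulate(np.array([0.0, 0.0]), [np.array([1.0])] * 3)
print(traj[-1])  # state after three constant acceleration commands
```

replacing the A x + B u line with an arbitrary function f(x, u) gives exactly the nonlinear generalization the lecture uses, where f is a full physics engine.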
of these equations that you would have seen in those classes okay because f is complicated it's now a physics engine it becomes harder to do the closed form analysis that you did in the intro classes okay but we're still going to benefit a lot by writing it down in this dynamical systems language okay so we we're going to talk a lot about having our block diagram of the system and using equations of this form this is not quite enough okay we also need to model the sensors so the sensors we'll typically use in the language of dynamical systems as an output of that function of that of that state it could be a function of x and u in general and y is now the outputs at time n okay and in the case of my sensor being a rgb camera right f might have been my physics engine but g is going to be my game engine quality renderer right if i have to go from the positions of the robot and the brick over into an rgb image then g is i can write it as a function but down in the details that's rendering right so these get to be very complicated functions but my what i hope to convince you over the course of the term really is that by thinking about it through these equations it's going to ask you a little bit more than ross does i want you to in particular to tell me what the state is i want you to tell me what the timing semantics are how do i go from n to n plus one if my camera is running at 30 hertz and my robot simulation is running at 100 hertz and i've got events based on uh you know some other sensor that's that's doing some strange things i need a modeling language for talking about how those parts interact okay and that's going to ask more in fact it's going to feel annoying when you start writing these systems and and i'm not just going to say you know give me a function i'm going to say what's the state variables you know what's the randomness you have to declare the randomness there's a couple things you have to declare but the advantage over the ross very light touch 
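the timing-semantics question raised here (say a 100 hz physics update feeding a 30 hz camera output) can be made concrete with a toy fixed-step loop. this is only a sketch of the semantics, not drake's actual event scheduler, and the scalar dynamics are made up:

```python
from fractions import Fraction

# Two update rates in one simulation loop: a 100 hz dynamics update and a
# 30 hz camera output. Exact rational time avoids floating point drift.
DYNAMICS_PERIOD = Fraction(1, 100)
CAMERA_PERIOD = Fraction(1, 30)

x = 0.0                  # toy scalar state
camera_frames = []
next_camera_time = Fraction(0)

t = Fraction(0)
while t < 1:             # simulate one second
    if t >= next_camera_time:
        camera_frames.append((float(t), x))   # y = g(x) sampled at 30 hz
        next_camera_time += CAMERA_PERIOD
    x = 0.9 * x + 1.0    # x[n+1] = f(x[n], u[n]) at 100 hz
    t += DYNAMICS_PERIOD

print(len(camera_frames))  # 30 camera frames per simulated second
```

the useful part of declaring the rates explicitly is that questions like "which state does frame k see" have one unambiguous answer, which is exactly what loose message passing does not give you.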
here for the purposes of the class is that you get to do more sophisticated things with the models if i only know that there's arbitrary executables behind the box and they send messages out then there's limits to what i can say about what they do when they're connected if i know that these systems are deterministic functions once you tell me the state okay then for instance i can run exactly the same simulation twice anytime i just put the state in i run the same controls through and i'll get the same outputs back out deterministic simulation it sounds crazy to me even just even for me to say that right there it sounds crazy but most people in robotics can't run the same experiment twice even in simulation right it's just a weird thing but everybody's got different processes running on different clocks and sending messages when they want to send and nobody wrote down a specific contract saying you must send at a certain rate messages must arrive in a certain order so if you see a bug in your simulator right you see the robot fell down in some weird way or threw a brick across the room right and you say i'm going to reproduce that and you run it again then your perception system might have sent a message just a little bit before or different you know there's there's it's very hard to get a deterministic repeatable simulation out of a generically uh generic ecosystem like this okay but if you ask every one of the individual systems to declare its state if you ask all of them to be deterministic or declare its randomness okay then you get an extra sort of power when you start combining them and knowing how things are going to work so very at the very least when you have a bug in your in your final project we'll be able to help okay um and in general we're going to try to keep things running in a single process instead of multi-process just keeping we're going to i'm going to try to emphasize the the interesting parts of the components and hopefully you know if you do 
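the deterministic-simulation point can be illustrated with a toy stochastic system where the state and the randomness are both declared up front; the dynamics and noise values below are arbitrary, chosen only to show that one declared seed makes two runs bit-identical:

```python
import random

# Deterministic simulation: declare the state and the randomness up front.
# Given the same seed and the same initial state, two runs are identical.
def simulate(x0, seed, steps=100):
    rng = random.Random(seed)        # all randomness flows from one declared seed
    x = x0
    trajectory = []
    for _ in range(steps):
        w = rng.gauss(0.0, 0.1)      # declared process noise
        x = 0.95 * x + w             # x[n+1] = f(x[n], w[n])
        trajectory.append(x)
    return trajectory

run1 = simulate(1.0, seed=42)
run2 = simulate(1.0, seed=42)
print(run1 == run2)  # True: the "bug" you saw is exactly reproducible
```

contrast this with a multi-process system whose randomness includes undeclared message timing: there is no seed you can fix to replay the run.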
a little bit of work when you declare them then the details of multiple message passing and all that stuff will just disappear and you don't have to worry about it okay so certainly for robot simulations if i think of f as a physics engine and g as the sensors you can sort of imagine x is the state the positions and velocities u is the motor commands y might be my camera image or my joint sensors but actually i would argue that all of the systems in our hierarchy can be described nicely with those same sets of equations okay so let's think about a perception system for a second okay so a modern perception system maybe it takes in an rgb image these days let's say it goes through a deep network and it outputs the position of the brick now a lot of deep learning based perception systems in this case if the position of the brick is y and the rgb image is u can be modeled just as a static function y of n is g of u of n so it certainly fits into the dynamical systems framework but maybe doesn't exercise the dynamical systems framework because the state is empty there's no state okay but that's not how we used to build perception systems right if you've taken a class on state estimation or if you've heard terms like a kalman filter for instance right a kalman filter will take observations in and it keeps an internal estimate of what's the state of the world okay so it's got a state space form and will output the estimated state this fits squarely into the modeling paradigm even if it's an extended kalman filter and if you've worked with kalman filters i mean if you've done a summer at a us autonomous driving startup or an autonomous driving company for instance you probably came across kalman filters okay if you think about perception in this way where the goal of a perception system is to summarize all of the things
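a minimal one-dimensional kalman filter shows the contrast with a stateless perception function: it carries a belief (mean and variance) between observations instead of re-estimating from a single image. the noise values below are assumed purely for illustration:

```python
# Minimal 1-d kalman filter: a perception system *with state*. It keeps a
# running estimate (mean, variance) and folds each new observation in.
# Toy noise values assumed; a real filter would use calibrated covariances.

def kalman_update(mean, var, measurement, meas_var=1.0, process_var=0.01):
    # predict: the brick is assumed stationary, uncertainty grows a little
    var += process_var
    # correct: blend prediction and measurement by their confidences
    gain = var / (var + meas_var)
    mean = mean + gain * (measurement - mean)
    var = (1.0 - gain) * var
    return mean, var

mean, var = 0.0, 10.0                  # very uncertain initial belief
for z in [2.1, 1.9, 2.0, 2.2, 1.8]:   # noisy observations of a brick near 2.0
    mean, var = kalman_update(mean, var, z)

print(mean, var)  # estimate near 2.0, with uncertainty shrinking each step
```

note how this fits the lecture's template exactly: the (mean, var) pair is the declared state x, the measurement is the input u, and the estimate is the output y.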
that's seen maybe in the recent history of the world into some coherent understanding of what's happening in the world that's very different than what we see when you go from a single rgb image out to an estimate of the world okay and it was actually pretty weird for a bunch of us right when deep learning started to work really well everybody was talking about few shot or one shot learning right or zero shot right and you know people were like that's crazy why would you not use multiple images or why would you not remember what you've seen in the past when you're making your prediction and indeed the deep learning perception systems worked incredibly well even from a single image but the modern deep learning perception systems actually look a little bit more like this they'll have a recurrent neural network in the middle or a visual transformer in the middle okay and again the state of the recurrent network can be declared as a state in my dynamical systems framework and transformers are a little harder to think about that way but they totally fit in these frameworks okay because really the goal of perception should be to accumulate information over time right certainly the way i perceive the world is not take an image and then understand everything about it right i'm accumulating information as i move through the world and summarizing it in some belief okay and so these kinds of perception systems fit beautifully into the dynamical systems framework control absolutely fits right into that too if you want to write an impedance controller or some inverse dynamics controller that absolutely snaps right into this framework it came from that world okay planning systems are more interesting and we'll talk about them when the time comes but you know a lot of times if i'm writing a planning
system in a ros ecosystem i'll you know listen to the perception system and then i'll like go think for a little while and make no commitments whatsoever about when i'm going to return an answer you know or if i will return an answer actually and eventually say oh you should do this right and um you know the timing semantics around the planning system when these are long running you know planners potentially can be very stochastic it could be a big source of of either conservatism if you have to wait for your planner to come up with an answer that's why the robots will you know do something interesting and then wait for a little while and then do something interesting wait for a little while or if you're trying to keep that system completely moving then the semantics of when this planning system reports its answers gets pretty subtle okay but when we get into the details that still fits into these dynamical systems you can still model that in the language of dynamical systems okay so what we're going to try to do in the class is very much keep the good things about the modular architectures but also try to declare our state variables our randomness our inputs and outputs okay and then you'll be able to compose these modular components into a big system you know get repeatable simulations and if you want you can do advanced control analysis verification you can do you know monte carlo analysis but you can also try to prove that a system is going to converge you know exponentially to some some equilibrium even if it's got some really complicated components in the way okay so that's been a bit of my you know i would say not everybody believes this this is my personal uh you know belief my taste i guess if you will i think some people see the complexity of manipulation the complexity of all these components and say it's so complex that you you can't be rigorous right and i'm saying instead it's so complex we must be rigorous or we will fail and and people still look at 
me and say yeah good luck okay but i'm gonna take you through my my version in the class and we're gonna you know i think it's it won't be a burden i think if you if you don't believe me but you'll at least see my view of the world through the course of the class any questions about that at a high level i know this is pretty high level stuff but so this belief that i have has taken um has taken life in this thing called drake okay which has been a something i've been working on for a long time it grew up in the days of controlling atlas the humanoid robot that's when we started getting much more serious here at mit about software engineering when toyota started the research institute drake moved over to also to being supported by professional software engineers and grew into a serious project and then you know now it's um it's being used by big companies and um and small companies lots of startups are using it amazon robotics is using it for their manipulation stack so it's grown into something something big and real but it's at its heart it is a modeling language that tries to capture the complexity of manipulation in these sort of dynamical systems framework and it has these three components right it the that's the systems framework for modeling the dynamical systems for declaring the state variables the the parameters and the like okay it also has a really there's some really advanced physics simulation inside it we have some really really talented uh physics based physics engine engineers if you will researchers they've done world class in terms of the sim to real gap i think i would put drake against any simulator out there in terms of the capabilities and then it has a lot of tools for for motion planning and control that are based on optimization okay um we're going to use we've made this all capable to be used in the in the class and run on the cloud and all these things like that so it's going to be the glue that puts all these pieces together it doesn't 
try to be a machine learning toolbox like pytorch is extremely good at being pytorch this is filling in a different part of the of the stack the the dynamics the planning the control and they can work together they can work with ros okay um there's a bunch of tutorials out there i've seen it so in the past people have said i wish you had told us a bit more about how drake works especially when the projects came along that you know we did problem sets we were successful in our problem sets but now i wanted to do something completely different and i didn't have you know everything i needed to do that so we're going to try to balance that i don't want to teach a class on drake but i want to make sure you have the resources but we've also been pushing a lot more tutorials in this evolution of the of the open source project you know tri made it very capable and was using it for research in toyota and then as more companies and more people were starting to use it they just relatively recently have decided to emphasize tutorials and adoption basically so even if you took underactuated this past spring you might see how much there's more documentation there's more user friendly stuff even now than there was a few months ago and even just some of the syntax that was a little gross is getting better like super fast even in the last two weeks we've we've done a lot so and there's if you do want to use it with ros or in some other project you're welcome to but we'll give you a complete self-contained deepnote workflow for the class this is just an example of the of the tutorials that talk about a lot of what i just said here there's a there's a particular modeling dynamical systems tutorial that talks exactly about how you would declare your state your your input u okay let's just say one more thing about it here because um we're going to need it for the first pset which is that it turns out that if you want you know all the complexity of of modeling all the
different things we want to do in manipulation you don't get to stay quite as simple as that you need to have systems that have multiple rates mixed you know they can have randomness they can have parameters that you might want to tune with a system identification engine or a machine learning algorithm okay so so it gets a little bit richer but in the way in one particular way it gets richer which is that we tend to write our all of our functions whether it's the dynamics function or the output function to be a function of state of input but also of any randomness that comes in from an input port okay so this would be any random inputs and the reason you declare your randomness is so that if i just give you one random seed the whole thing the whole system is completely repeatable for instance okay um parameters p okay which would be let's say masses or inertias or lengths of a robot or it could be the weights of a neural network in a deep learning system these are the parameters okay and so the functions in most functions in drake want to be able to be a function of all of those things so instead of passing those around as four or you know arbitrary numbers of inputs all the time we just say let's put them all into a structure okay and we're going to call it the context okay and so instead of writing this you'll see in drake you'll see f of context okay say x of n plus one basically equals f of context or sensors is g of context when you see that you should just realize that just means that's just the structure that contains the state the inputs the random any random inputs and parameters okay time also time varying okay so that's just that's throws people sometimes when they first see it but it's very natural once you think about it as a dynam as a way to write code for lots of dynamical systems okay so the strategy for the lectures is to take a deep dive into not maybe not writing drivers but into perception and we're going to talk about both geometric perception 
thinking about point clouds thinking about point cloud registration how do you do object pose estimation for instance in a point cloud how do you do filtering like in a point cloud in messy point clouds but we're also going to do of course some deep learning based perception we're going to talk about both motion level planning and some task level planning okay we're going to talk about some control what is how do we do force control how would you do impedance control if you've heard these terms right or what does position control even mean but rather than like spend the first third of the class on perception and the next third of the class on planning and then the next third on control what we're going to try to do is that by chapter 3 you'll have a limited but a fully functional robot that can pick up red bricks and move them around okay if someone tells you where the red brick is right and then we'll say okay now someone didn't tell you where the red brick is you got to make an initial perception system okay and then all right now the scene gets really cluttered how do you how does how does the system have to advance and that requires you know more work from the planning system and work from the perception system okay and we're going to spiral out in this way trying to make a more and more capable robot and only introduce the cool tools from these pipelines when they make the robot do something new and different okay and one other high level point i'd like to make is that i i've already dropped a few terms right i just said visual transformers i said kalman filters you know some of you know a lot about some of those things some of you haven't heard those things yet okay so when i think about lecturing to such a diverse audience here and really lecturing about robotics one of the great things about robotics is that it's a kind of a mixing melting pot of so many different fields right and it's very hard to know everything about all of them okay so
how do i try to do that in a lecture right so i i think the best way i can do that in a lecture i'll take feedback of course is i try to i try to make sure that if you know those things if you've seen those things i want to be able to make connections right if if connecting that to a kalman filter and you've thought about kalman filters is useful and i can say kalman filter then then the people who've seen that will benefit i think and i don't want to you know avoid the word kalman filter because we haven't talked about it yet but i also try very hard and you can tell me if i ever say things and you you say i didn't know kalman filter but you know i hope you still get the point right the point of that statement about kalman filters and recurrent networks and transformers was that a perception system a modern perception system can still be described as a dynamical system should be described as a dynamical system and i want to make sure you capture that level of it at least okay and if you ever don't call me on it right but expect i i try to make layers you know of the class right so i want to be able to talk to experts i want to be able to be you know if there's things you haven't heard of yet you'd write it down maybe that's the most important thing to go read about tonight maybe it's not okay but you don't have to you know i hope you'll permit me to say some things that you haven't heard before right because robotics is just so broad that you and and really that maybe the field isn't mature enough to just assume that everybody has all the prerequisites so that i can say you know i can only build on what you've taken okay let me um finish up here so you can you know when you draw these block diagrams in drake you can render them as diagrams and we'll do that and you'll see that there's big libraries in drake of different ways to you know different systems that implement these different components okay yeah so the schedule is completely up i just told you the
the basic storyline is we're going to do basic pick and place learn basic kinematics learn basic jacobian based control and the like that's going to get us off the ground with a basic robot and then we'll start doing perception but basic perception and then we'll go back and get more cluttered scenes and we're going to do this it's all outlined here there's a few lectures that i've left as to be determined and i want you guys to tell us what you're most excited about maybe i can fill out some things i've got certainly got plenty of things to that i could could talk about there but i'd like to hear what you guys are excited about the projects all the project-based deadlines are are up there they're aligned for the two versions of the class but there's a few more milestones for the cim component okay so make sure you take through take a look through there and understand your goal is to hop on piazza make sure you're there so if we're in a different room on tuesday i'll tell you that about that on piazza okay and your next your problem set will be released very soon awesome i'm looking forward to a good semester i'll see you on tuesday i'm so sad i didn't show the successful video hey how's it going to the class i was wondering if i could ask you for a quick input on a problem i've been looking at okay more to do with under actuated let me let me just make sure you
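A minimal plain-Python sketch of the "context" idea described earlier in the lecture, where the discrete update x[n+1] = f(context) and the output y = g(context) are functions of one structure holding time, state, inputs, declared randomness, and parameters. This is an illustration of the concept only, not the actual Drake API; the scalar dynamics and names are made up.

```python
import random
from dataclasses import dataclass

# Toy "context": one structure bundling everything the system's functions
# may depend on (time, state x, inputs u, random inputs w, parameters p).
@dataclass
class Context:
    time: float
    state: list          # x
    inputs: dict         # u, values on input ports
    random_inputs: dict  # w, declared randomness
    params: dict         # p, e.g. masses, lengths, network weights

def f(ctx):
    """Discrete update x[n+1] = f(context)."""
    return [ctx.params["a"] * ctx.state[0] + ctx.inputs["u"]
            + ctx.random_inputs["w"]]

def g(ctx):
    """Output y = g(context)."""
    return ctx.state[0]

def simulate(seed, steps=3):
    # because the randomness is declared, one seed makes the whole
    # simulation repeatable
    rng = random.Random(seed)
    ctx = Context(time=0.0, state=[1.0], inputs={"u": 0.5},
                  random_inputs={"w": 0.0}, params={"a": 0.9})
    for _ in range(steps):
        ctx.random_inputs["w"] = 0.01 * rng.random()
        ctx.state = f(ctx)
        ctx.time += 0.1
    return g(ctx)

print(simulate(0))
```

Running it twice with the same seed gives identical trajectories, which is exactly the repeatable-simulation property the lecture emphasizes.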
Robotic Manipulation, Fall 2022. Lecture 21: Task and Motion Planning.
so so uh task and motion planning is actually well let's see it's a place that I think I would like to grow the class I'd like to maybe even move this up in the syllabus into some of the more core material because I think if you have this in your toolbox then you can program more and more complicated things and uh you know the state machines that some of you have been using will get you so far but not to loading a dishwasher for instance uh it's also a topic that there's some really good research being done on campus so Leslie and Tomás in CSAIL are leaders in this field and uh Brian Williams also who's in AeroAstro and CSAIL does some really nice work on on a version of this problem so there's a lot of expertise on campus if you get excited about these kind of topics okay so let me try to set it up relative to what we've talked about before and then remember the plan is for me to talk for the first half and then Boyan's going to talk for the second half so I'll try to not talk too long uh so remember I used this as the example to motivate task level planning before when we were trying to load the dishwasher now that you've seen it and now that you've thought about it think about writing a state machine that would think about all the possible cases that this system might have to potentially be in right maybe there's plates on top maybe there's mugs on top maybe the dishwasher's open maybe it's not it would blow the stack you'd have an enormous state machine and the way that the size of the state machine tends to grow as the number of tasks accumulates and the number of possible transitions accumulates it can grow very very badly okay so in that project we did not write a big state machine we didn't write a behavior tree we used planning and just to remind you the the way it looked back then was we defined task level actions right and they were finite there was there was a list of things that we programmed the different skills or
different actions that would do things like open the dishwasher door close the dishwasher door start the dishwasher we even loaded a soap packet if we needed to okay um and each of those was implemented in this sort of abstract class of an action-primitive interface okay which had just a few methods like is_candidate and get_outcomes we'll talk about those again in just a minute okay what's interesting is so let's just think about is_candidate for a second so asking could I run this skill right now or more carefully if if I was in this state could I run this skill that's potentially a very advanced query trying to understand when it's suitable to be you know to try to open the dishwasher door could involve you know solving intelligence or something but in this context we have simplified the problem down so that the state is actually a discrete finite state even though the problem is very complicated we in that example coded the state of the dishwasher as things like the number of clean items we've already put away there's a few things that are more continuous valued but they were still sampled in a sort of slightly subtle way but mostly I want you to think about this at the high level that those choices were enumerated into a discrete set and we could do search on this task level objective primarily with graph search a more advanced form of graph search an incremental type of graph search but we roughly turned this into a big graph search problem in order to decide what we were going to do next okay um you know there were really explicit enumerations enums of the different states that the system could be in okay and that is an instantiation of this bigger idea from AI planning like you know STRIPS is the language we mentioned very quickly before it really is it's a you know long-standing tradition of how to write planning problems planning descriptions where you can list initial state goal state set of
actions for each action you list the preconditions you list the effects of the action and this defines a planning problem and you see the action primitive interface is_candidate is exactly the preconditions that you would see from STRIPS and get_outcomes exactly represents the effect set of that action and just defining that where if you can write the preconditions and the effects on a discrete state space then you're in the land of AI planning and things we have very strong tools for that I mentioned quickly PDDL before the planning domain definition language which you should think of as an extension of the STRIPS vocabulary where there's concepts of like there's an object-oriented sort of uh concept in there there's so there's now the notion of object instances it's a more expressive way to write big discrete planning problems the winning planners that use PDDL actually do not typically do standard graph search anymore but you could convert this into a very big graph search and do graph search on it the winning planners do much more heuristic search exploiting the factorization in the problem okay and then if you have this high level planning power then you can accommodate some of the so you know we've made it weak in a sense by having to discretize the world into a handful of of finite buckets right and that weakens our ability to describe all the things that could happen in the world but you can overcome some of that with feedback and online replanning so in the dish loading example we would re-evaluate our discrete symbol grounding of the world every time we took an action and if something changed we could handle unexpected outcomes so the example I gave of that is that someone came and closed the dishwasher door and it would it would realize that its preconditions were no longer met choose a different path through the discrete search space to continue okay so
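As a concrete sketch of the preconditions-and-effects formulation just described, here is a tiny STRIPS-style planner over a made-up dishwasher domain. The action names and state fluents are invented for illustration, and the plain breadth-first search stands in for the much stronger heuristic search real PDDL planners use.

```python
from collections import deque

# Each action lists preconditions (the is_candidate check) and add/delete
# effects (the get_outcomes step). Names are hypothetical.
ACTIONS = {
    "open_door":  {"pre": frozenset({"door_closed"}),
                   "add": frozenset({"door_open"}),
                   "del": frozenset({"door_closed"})},
    "load_mug":   {"pre": frozenset({"door_open", "mug_on_counter"}),
                   "add": frozenset({"mug_loaded"}),
                   "del": frozenset({"mug_on_counter"})},
    "close_door": {"pre": frozenset({"door_open"}),
                   "add": frozenset({"door_closed"}),
                   "del": frozenset({"door_open"})},
    "start":      {"pre": frozenset({"door_closed", "mug_loaded"}),
                   "add": frozenset({"running"}),
                   "del": frozenset()},
}

def plan(initial, goal):
    """Breadth-first search over the discrete state graph the actions induce."""
    frontier = deque([(frozenset(initial), [])])
    visited = {frozenset(initial)}
    while frontier:
        state, path = frontier.popleft()
        if goal <= state:
            return path
        for name, a in ACTIONS.items():
            if a["pre"] <= state:                    # preconditions hold
                nxt = (state - a["del"]) | a["add"]  # apply effects
                if nxt not in visited:
                    visited.add(nxt)
                    frontier.append((nxt, path + [name]))
    return None  # goal unreachable from this initial state

print(plan({"door_closed", "mug_on_counter"}, {"running"}))
# → ['open_door', 'load_mug', 'close_door', 'start']
```

Note how replanning falls out for free: if someone closes the door mid-plan, you just re-ground the state and call `plan` again from wherever you actually are.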
let me transition to that in a second so there are cases so so this was a case where despite its complexity we were able to get very far by by doing things with discrete graph search first and then filling in the details with motion planning second and any gaps in the coupling between the discrete planning and the continuous motion planning were overcome with feedback but there are similar problems where that's not good enough especially if you have longer term consequences of your actions that you really cannot decouple the motion planning from the task planning hence the name of the lecture so um Caelan Garrett who was a recent graduate from Leslie and Tomás's group had a number of nice examples that told that story I'll use one of his here I think he's talking but here we go so imagine we just have a little suction gripper and the problem was to move the red um object the A onto the red region okay the B let me do that again yeah movable blocks placeable regions okay if you think about the continuous values of that problem there's a continuous state that represents the location of the A block there's continuous state that represents the location of the B block you know where they are in the world where the gripper is in the world the only reason that you have to move the B block before the A block is because of the continuous location there was a block that was impeding my ability to solve the simple version of the problem right I wanted to just pick up this block and put it in this region there was a different block that was in the way because of its continuous value I had to order my discrete actions differently does that make sense the coupling between the discrete and the continuous exposed itself in that it really affected what your first action should be okay so the planners these stronger task and motion planners will solve that harder version of the
problem where they jointly solve for the discrete path through the graph and the continuous actions of the manipulator task and motion planning here's another example that um let me talk through it again so this is the PR2 the much loved no longer with us uh PR2 that Leslie and Tomás used to use constantly the PR2 slowly went out of existence they bought every spare part on eBay possible and it just it's not around anymore but uh so this task is to pour the contents of the blue mug into I forget if it's the white or the red bowl but basically pour that out so you would think the simple strategy would just be okay first I have to pick up the mug then I take it over and pour it but because of the um the location of the green block at the initial time and the kinematics of the arm it was impossible to pick up the blue cup from an orientation that would later allow you to pour at an angle that would get it in the kinematics of the robot the joint limits the size of its hand were affecting the order in which you had to execute things you wouldn't have even needed to pick the green block if it wasn't for the kinematic limits and the and the continuous variables okay so that's just another example here see the green block's in the way okay but the stronger task and motion planning algorithms will solve that big version of the problem with a sample-based planner yeah okay so I want to tell you just quickly you know a couple of the ideas from task and motion planning and make sure I leave plenty of time for Boyan so there's a nice survey that Caelan and company wrote about integrated task and motion planning it's not that old it's still quite very relevant so I I strongly recommend it if you want to get a more encyclopedic coverage of this kind of material but actually one of the things that they do in that survey is they make a taxonomy of the different approaches that people have taken to task and
motion planning so TAMP is what all the cool kids call task and motion planning right and uh I think the the choices they made about the um you know the x and y axes of the little grid are useful to understand okay so I'll give a couple examples that I hope tell this story uh clearly but let's think about sequence first versus satisfaction first roughly speaking if I were to make choices about all of the continuous variables in my search problem then I can reduce the problem back to a discrete graph search and that that would be sort of solve the continuous problems first and then go to to the discrete okay similarly you could try to first solve a discrete path problem okay and then try to fill in the details of your motion plan now of course because of the task and motion planning coupling you can't just fix the high level sequence and then solve for the continuous but some of these strategies really dominate by thinking about first let's pick a discrete set of actions try to fill in the continuous thing if I got a violation if I was not able to find a solution on the continuous problem then I'll go back and revise my discrete plan but in some of these these problems the discrete rules people love their discrete planners they're very strong let's try to find a way to jam continuous reasoning into the discrete planners similarly there's some people and I would probably put myself in the second class maybe which is we love our continuous trajectory optimization right and we can find ways to jam discrete stuff into our continuous trajectory optimization okay and so those are sort of the top and the bottom uh axes and the interleaved are the people that are maybe trying to to do a little bit more of let's do a little bit of planning incrementally build my long-term plan and incrementally call my motion planner and try to do a little bit more um explicit coupling okay so I thought um I would
I would pick two instances here that are two of my favorites so um logic geometric programming is people like me that think about trajectory optimization first and try to put some discrete planning into the trajectory optimization framework and then PDDLStream which was Caelan's work um is a little bit it's interleaved but I would say it's still coming a little bit more from the sampling first and putting motion planning into the sampler okay let me tell you just a few things about logic geometric programming first of all it's awesome there's a there's just very compelling examples uh this one's from Danny this was actually you know follow-on work that connected its perception and other things but the basic idea is that a trajectory optimization is solving for these very you know multi-step processes that are making long-term decisions between multiple arms that need to coordinate in order to accomplish a task like put the yellow block on the red thing pretty similar to what we talked about before right but um where you had to move block A to get block B there um but these are solving you know definitely multi-step handover kind of problems from a description which is not imposing that the the system must make a handover the handover is discovered as part of the sequence is discovered along with the continuous motions okay so I think I could tell you the gist of how that works um because I've already told you about kinematic trajectory optimization right so um my toy version of kinematic trajectory optimization didn't even have a robot it just had a point that was going around the red obstacle and we wrote optimization problems which had a cost in this case it was just a shortest path kind of cost even a weird one with the square but okay I started at the x start my final thing was x goal and that last one is just the constraint that said for all n I want to be outside the obstacle right and more generally I might write that as minimize zero to xn and
to realize I had to make a choice on the number of steps I'm I'm going to optimize over okay the sum over n maybe I've got some I'll use L for my loss function and um yeah I'll stick with x's since that's on the board but it could probably just be my joint angles for instance maybe it's x n plus one and x n if I wanted to have that n equals zero to n minus one okay subject to um generally I might have constraints of the form this for instance and then I had my initial condition my final condition right what logic geometric programming is doing is solving a more complicated version of this which I would call hybrid kinematic trajectory optimization there's many different names for it but if you've taken underactuated you'll recognize hybrid trajectory optimization this is a long-standing thing from uh more dynamic systems hybrid as in hybrid systems but this is a kinematic version of that okay so in hybrid kinematic trajectory optimization I'll do it in a step let's call this I'll even call this action number one okay and then I'm going to have a second problem which is action number two and for action number two I'll similarly write out my new n decision variables maybe it's even m decision variables no need for them to be the same and I'll have a loss function that's you know L subject to G and subject to x0 equals something x m equals something right think about it making just a complete copy of this algorithm okay but perhaps when I'm taking action number two I'll have a slightly different set of constraints here maybe action number one right represents move the arm without an object and maybe action number two might be move the arm with block A in the gripper okay so maybe if I know that block A is in the gripper then I'll have a constraint here saying for instance like the q of my robot or my gripper you know equals the q of block A for all for all of the the steps m okay and then maybe you've got an action number three you get the point here but I've
got an action number three which is move with Block B and I've got all the same things but my G here would say that the Q of the robot has got to equal the location of Block B that Block B somehow moves with the robot so I could solve those if I knew the sequence a priori I could solve those one at a time I could say I'm going to move block a I'll run that first problem and then I'll stop and I'll take whatever the situation is right now I'll try to move Block B I'll solve that second problem right from the current initial condition but if you want to solve them jointly if someone has told you already given the sequence if I said I want to do a sequence which would be I'm going to move let's say action zero and then I'll do action one I'll move block a and then I'll do action zero again because I need to move my robot over to where the Block B is and then I'll do action two if someone tells me what the sequence is going to be then I could solve that whole problem jointly as a single trajectory optimization problem right where I could just accumulate all the costs into one big cost for all the problems and add extra constraints saying that x0 from um let's let's say x n from action zero has to equal x zero from action one okay I'll put I'll use the constraints of the initial condition of this problem to match the final condition of that problem I'll take the initial condition of this problem to match the final condition of that problem right and I'll just make constraints that link these two together is that clear X and of action one has got to equal x zero of action zero second time and the decision variables that I'm handing to my Optimizer now is a sequence of x's for this action a sequence of x's at this action a sequence of x's for this actually right each of those is a sequence of decision variables that I'm handing the solver and then I've got a bunch of constraints which allow those optimization problems to couple each other right so that the final condition 
of this matches the initial condition of that so on and so forth that's a much bigger optimization problem it's potentially harder for the solver to cope with but it fits still directly into the nonlinear optimization framework if someone gives me the action sequence and that's what you'd call a hybrid trajectory optimization a hybrid kinematic trajectory optimization problem okay and it turns out solvers can do pretty well with that you can add I just use the initial you know making the initial and final State match as the only requirement but you could do more requirements like for instance if I'm going to pick up the object if I'm going to transition from moving the robot to picking up the object then probably the final state of this had better put the robot where the object is for instance you can put the necessary constraints that couple those problems that were previously independent into one now why is that better than solving them independently because those continuous variables at the interface if I had to solve this a priori before I even started moving I would make a in order to solve this problem I would have to make an arbitrary choice at the initial location of that object or an initial you know the the contain I would have to lock in the continuous variables at each of these interfaces but here it's free to solve for a trajectory only under the constraint not that it's at a particular goal but just at a goal that's good enough to start up this optimization that's consistent with that second optimization okay so this is the optimization beginning of task and motion planning and those problems are well understood and and good again if that action sequence is given not just it's you need to know the number of them because you need to know how many decision variables and you need to know the order of them okay the second thing you can do is then formulate now this is the continuous optimization people sticking some discreetness in okay let's do a search 
a higher level search that tries to permute the different possible discrete sequences okay and for each of those we'll solve the continuous optimization problem underneath and if we're smart about it we don't have to solve all of the possibilities we can use bounds on one solution to rule out some of the permutations of this okay so we saw that a little bit um when I talked about branch and bound I talked about it only quickly in the trajectory optimization lecture but for those of you that that know it I just want to connect to that there's a a standard approach to mixing some of these discrete and continuous optimizations it's very well understood and gives strong guarantees when the sub problems are convex that is not necessarily the case in logic geometric programming nevertheless you can um you can you can set up a strategy where you solve a relaxed version of the problem at each time step and then you try to refine your solution in order to search for this action sequence that's the discrete decision and at each level you're solving continuous optimization problems so in my mind what logic geometric programming does very well is it sets up this branch and bound over action sequences it solves very efficiently the hybrid kinematic trajectory optimization problem it does use non-convex solvers that are going to have local minima and everything like that so there are no guarantees but in practice there's a lot of impressive uh results and I think my favorite part of the logic geometric programming approach so when we have done in my group mixed integer branch and bound type algorithms for let's say footstep planning of a humanoid I've been I think a little stubborn about saying I want to take the problem instance and I'm going to hand it to Gurobi or some some well understood solver and I want to come up with exactly the right problem instance that I can hand to that you know there's like a clear
problem instance of mixed integer convex I'm going to formulate that instance I'm going to hand it over Marc Toussaint and company they didn't use Gurobi they wrote their own solver in the end and they took every advantage of that like anything you could do that would avoid solving the big problem downstream they would take those shortcuts so for instance it might be that if action two required solving a kinematic trajectory optimization problem that would say move my my bag you know over to here I could do an inverse kinematics query and understand very quickly that I can't even reach the bag after action zero and I don't even have to solve the kinematic trajectory optimization problem right so there's a lot of very clever heuristics in there that leverage the kinematic problem in order to prune more and more branches of those trees and I think that's what made it scale particularly well questions about logic geometric programming this is I think one of Marc's favorites uh the you know Marc Toussaint's favorite version of it is where it picks up a stick and moves the box it's pretty slick yeah there's also one I couldn't find quickly this morning where it grabs a hockey stick and like pulls something from far away that's a pretty good one too is that like I mean what was described earlier of just like um okay so the question is if so I think there are general heuristics just based on geometry reachability of kinematics and stuff which feel a little less bespoke than saying does the dishwasher open at a certain time or something like that but they still he still has to define the different actions and that is the analogy to like deciding that there's a move the mug now the way that they decided though is by in the sub problem writing a constraint saying that the pick up the mug action means that the mug is welded to the hand for this part of that you know during that action so the encoding of of you know semantics into the optimization happens by the
For the stick example: probably 'go to the stick' is one, 'move the stick' is two, then the third is when it's in contact, I'd guess three if I had to, and maybe a 'move away' even makes four. A small number, yeah. I think these solve impressive but maybe not super-long-horizon planning problems; the sequences tend to be ten-ish steps or fewer, not hundreds or thousands. The thing I worry about with these kinds of methods is local minima. You can make stronger or less strong branch-and-bound type approaches, but I would worry more about the local minima here.

Okay, let me quickly talk about PDDLStream and leave time for Boyuan. PDDLStream is a different example: it comes more from the sampling-based, symbolic-planning side of the world, pushing down and bringing a few of the motion-planning ideas up into sample-based planning, integrating symbolic planners with black-box samplers. Don't tell Caelan, but I'm going to talk about it in a pretty different way than I think he would. This is just an example of it doing cool things, like 'put every object in the bowl that is most similar to its color'; these are advanced long-horizon tasks that it's able to solve. But I want to put it in my language, which is the graph-of-convex-sets language. If you remember, the graph of convex sets was the idea that you could take the standard shortest-path-on-a-graph problem and expand its vocabulary: every time you visit a discrete node, you are allowed to pick one element from a continuous-valued set, and we tried to make the set of those elements convex. In the shortest path problem, with a source here and a target there, the shortest path might choose this continuous value, then this one, then this one. In the case of a graph search with convex regions, we have been working on ways to solve that with optimization. PDDLStream makes no convexity assumptions about those sets; it is solving a harder problem, and it is doing it with sampling instead of optimization. The reason I draw this picture is that it is how I think about the way PDDLStream works.

One of the key observations in this kind of mixed continuous and discrete planning is this: if you told me in advance which path of sets I am going to take, the optimization would be easy, because it becomes a convex optimization problem where only the continuous values have to be decided. Similarly, if you chose the continuous values, the problem becomes an easy discrete graph search. Either one alone is easy; it is only when you put them together that it is hard. What PDDLStream does, in my mind, is sample. The 'streams' in PDDLStream are black-box samplers: for any set, every time I evaluate the stream I pull one more sample out of that potential set. The samplers used in PDDLStream are things like inverse kinematics queries, or even a collision-free motion planner; as in GCS, one point in a set might correspond to an entire trajectory of the sub-problem. The straw-man version of what PDDLStream does, which Caelan himself uses as a straw man, is that you could just pick a bunch of random samples, evaluating your stream a hundred times for each set, then make edges from all of these points to all of those points, and you have a really big discrete graph search problem; similarly for the quadratic set of edges in each layer. That would be a way to take your mixed discrete and continuous problem, sample it, and turn it into a big graph search, and if you love the power of symbolic graph search, that can get you far.

Now, PDDLStream is much smarter than that. It does not do that, because you would add not only a lot of samples but a lot of irrelevant samples: if the optimal path is up here, you are still making a bazillion edges down there. So what PDDLStream does (there is a handful of different strategies in the PDDLStream family) is interleave the symbolic planning with the continuous sampling. Think of an A*-type graph search that expands only a frontier of possible sets: the sets with a high likelihood of getting you to the goal are worth sampling more, so you add samples there, connect each new sample up with its parents, and grow the graph selectively. That selective sampling is what lets it scale to much harder problems, and each sample might itself be a call to an entire collision-free motion planner, so it is finding solutions to very, very hard problems. That's it.

Oh, I have to say one more thing: we will soon have GCS trying to solve TAMP problems. That's a goal, and Sava is here working on it.
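The straw-man construction above, sample every set up front, wire all edges between adjacent layers, and hand the result to a discrete graph search, can be illustrated with a toy one-dimensional example. The 'sets' here are just intervals and the edge cost is a distance; none of this is the actual PDDLStream machinery, it only shows how sampling turns the mixed discrete-continuous problem into a big graph search.

```python
import random

random.seed(0)

# Toy "sets": each discrete node on the path is an interval from which we may
# pick one continuous value; the samplers ("streams") draw points from them.
LAYERS = [(0.0, 1.0), (2.0, 3.0), (5.0, 6.0)]  # hypothetical regions

def sample_layer(lo, hi, n=100):
    # One "stream": every evaluation pulls one more sample from the set.
    return [random.uniform(lo, hi) for _ in range(n)]

def shortest_path_over_samples(layers, n=100):
    # Straw man: sample every set up front, wire all edges between adjacent
    # layers, then run a plain dynamic-programming shortest path.
    samples = [sample_layer(lo, hi, n) for lo, hi in layers]
    cost = [0.0] * n                      # zero cost to start in layer 0
    for prev, cur in zip(samples, samples[1:]):
        cost = [min(c + abs(q - p) for p, c in zip(prev, cost)) for q in cur]
    return min(cost)
```

With these intervals the best continuous choice gives a path length just above 4.0, and the sampled graph search gets close to that; the price is the quadratic number of edges per layer, most of them irrelevant, which is exactly what the real interleaved strategy avoids.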
Think about the picture of the suction gripper picking up block A and moving it to block B; this is just a top-down view. This is the suction gripper and the arm, and these are the boxes that have to move from here over to here. That's a combinatorial-plus-continuous planning problem, and what's interesting is its scale: there are lots of boxes, lots of possible permutations, lots of possible paths. There is some initial success suggesting that, keeping it in the graph-of-convex-sets framework, we can maybe solve it to global optimality within a few seconds. That's work I hope we'll have a lot to say about soon. Okay, brilliant, take it over. By the way, I appreciate everybody coming; tell your friends that next semester Boyuan will be presenting for a random five minutes during every lecture, so you must always come to all the lectures to see him. There you go.

Hello everyone, welcome to the second half of the lecture. First, I really appreciate that a lot of people showed up today, though what you actually should do is show up to every single lecture. In this part of the lecture we're going to talk about some recent progress in robotics research on planning with large language models, which is closely related to the advances in natural language processing. We've just talked about task and motion planning.
In traditional task and motion planning, we are concerned with problems like the ones Russ just presented: puzzle-like problems that require a lot of logical and motion reasoning. In this part of the lecture we ask: what if we plan like humans? What if we have priors about each discrete action from the description of the task? How do humans plan? Here's an example: 'I spilled my drink, can you help?' Think about how you would do it if you spilled a drink. You would definitely find something to clean it up: maybe go to the kitchen to find some napkins, wipe the spill, and throw the napkins in the recycling. We all want future robots to help us in daily life, so we should be able to communicate with robots, and the robots should be able to complete the task as we desire. For example, whenever we ask a robot to clean up spilled Coke, the robot, if it is like a human, should generate a sequence of actions and try to accomplish them one by one: put the Coke can in an upright position, find some napkins, wipe the table, and throw everything away. It turns out we humans love to use language as an abstraction to specify tasks and to specify plans; when you work on a complicated project, you also communicate with your collaborators about plans in language. And human activity on the internet produces a massive amount of knowledge in the form of text, which could be really useful with the power of deep learning. For example, on the right of the slide: if I don't know how to make a certain dish, such as egg fried rice, I can Google it and get step-by-step instructions.

Before we dive into how we solve these problems, let's talk about large language models. I'm sure a lot of people have heard of language models these days, and many of you may have played with them; for those who don't know, here is a short introduction. Language modeling is the task of predicting what word comes next given a context. For example, take the sentence 'the students opened their ___' and predict the next word. You immediately have a list of candidates in mind, and if I give you a word, you can say whether it is likely or unlikely. If I say 'apple', so 'opened their apple', you see this is clearly not very reasonable, and you wouldn't put 'airport' there either; instead we might put 'books' there, because you have an internal language model in your mind. More formally: given a bunch of words provided as context, you predict the next word, x at time t plus one. This is called language modeling. You can also think of it as a system that assigns a probability to a piece of text: say x1 is the first word and x2 is the second, and if you chain all the conditional probabilities together, you can predict the joint probability of a whole piece of text that actually makes sense, the probability of x1 through xt.

So how can we use this? The highlight is that with an internal language model, given a piece of text, you can predict what is likely to come next. First, if you have a fixed list of options, like what Russ showed, where we assign a piece of text describing actions one, two, and three (for example, 'action one: move the arm without the object'), you can use a language model to evaluate each option's likelihood. For example, if I spilled my Coke and the available actions are 'eat an apple' and 'find some napkins', you can obviously see that finding napkins is more likely to come up next given the context. A second way to use this is that, with the whole English vocabulary, we can sample from the likelihood model: a trained language model assigns a probability to each word in the dictionary, and you can sample from that distribution and generate text. That is how chatbots work. You are actually already using language models every single day: when you type, Google suggests completions, because it has a language model that, based on people's history, predicts what is likely to come next, and when you search, it also suggests a bunch of possible options.

Now that everybody knows what a language model is, let's dive into large language models. We've seen a lot of neural networks in this lecture series, and a million or five million parameters is already considered large; but these really, really large language models have something like 540 billion parameters, and you can't even run inference on them with your tiny GPU. Why do we have these huge language models? Because if we train them on the entire internet, we can incorporate a lot of human knowledge into them and use them to do interesting things. For example, you can use large language models to write essays for you. I once wrote a blog post about computer science schools ranked by their boba shops, and I actually used GPT-3 to help write it because my English is bad. This is one example of how to use it.
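The chain rule and option-scoring ideas above can be made concrete with a toy model. The bigram table below is a stand-in for a real language model (a real one conditions on the whole context, not just the previous word), and the probabilities are invented purely for illustration.

```python
import math

# Toy conditional model p(next word | previous word); a stand-in for a real
# language model. All probabilities here are made up for illustration.
BIGRAMS = {
    ("the", "students"): 0.2, ("students", "opened"): 0.3,
    ("opened", "their"): 0.5, ("their", "books"): 0.4,
    ("their", "apple"): 0.01, ("their", "airport"): 0.001,
}

def log_prob(words):
    # Chain rule: log p(x1..xt) = sum of log p(x_k | x_{k-1}).
    return sum(math.log(BIGRAMS.get((a, b), 1e-8))
               for a, b in zip(words, words[1:]))

def score_options(context, options):
    # Rank a fixed list of options by the likelihood of context + option.
    return sorted(options, key=lambda w: log_prob(context + [w]), reverse=True)
```

Scoring a fixed option list is exactly the classification-style use described above: the context log-probability terms are shared, so the ranking is decided by the conditional probability of each candidate continuation.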
You just give it the heading, 'best computer science schools ranked by boba shops', write the first sentence, and it generates the rest for you; it even knows that Berkeley has 20 boba shops nearby. You can also use it to complete your homework: I took this essay prompt from an MIT class, Minds and Machines, and given the prompt it somehow writes something reasonable. I highly suggest everyone try it. Language models can also answer questions about the real world. This is really recent work from OpenAI using ChatGPT: you can ask it all kinds of questions and it answers really well. Before the age of deep learning, question-answering models were not that good; as we get into the age of really, really big language models, they are getting better and better. They can answer all kinds of questions, relate to previous context, and give you a realistic experience, just as if you were speaking with a human.

It turns out large language models are really powerful at planning. The most naive way to use this is to just ask for a plan: 'give me a list of items I will need to make a cup of coffee', or 'give me detailed robotic instructions to make a cup of coffee in a kitchen', and it will give you a bunch of instructions. But whether these are really detailed robotic instructions matters: it still talks like a human, and I will come back to this problem later. So these large language models are really powerful, and we really hope to use them for robotics. However, when we try, it is actually very hard: when you ask 'I spilled my drink, can you help?', GPT will tell you 'you could try using a vacuum cleaner'. Well, okay. One funny fact: when I was doing research as an undergrad, one of my friends was playing with large language models in his project. He asked a language model for instructions to get a cup of coffee, and the language model told him to go to a cafe. So we need more control over what we get from large language models to do actual robotic tasks. One core challenge is that our robots can only execute a fixed set of commands, and the problem needs to be broken down into actionable steps; 'actionable' is critical here, and it is not what large language models naturally output. For example, in this task we have actions one, two, and three. They are different skills, but it is just three actions; we don't want large language models to output arbitrary things, we want them to choose. We need large language models to speak robot language.

Solution one: bind each executable skill to a text option. As we have here, we have all three actions and we wrote a description for each, and if these skills are actual real-life skills, you can expect the language model to give a reasonable guess about how to generate a sequence of actions, as we did here. This is doing classification, so it is also easier and we have more control. As we saw, language models can predict the probability of upcoming text, so you give the model an instruction and evaluate the likelihood, the log probability, of each option coming up next in the form of text. This is exactly what a lot of people have tried. Say we have a bunch of available options on the right that the robot can actually execute, with the skills already coded up; we use large language models to complete the text, just like in our essay example. For example: 'How would you put an apple on the table?', and then prompt it with 'I would:', and it will predict a likelihood score for each option. There is a second solution: we can prompt the large language model to output in a more structured way, not just arbitrary instructions in long paragraphs, and then parse the structured output; the more structured it is, the easier it is to parse.

This brings us to an important skill called few-shot prompting of large language models. What is few-shot prompting? As we just said, large language models can finish an essay by predicting the upcoming text in the most likely way. What if I engineer the context in a structured way? For example, I type 'United States', then an arrow mapping it to Washington D.C., the capital, then a food the country is famous for, and the highest mountain in the country. I give it three examples, United States, China, and Japan, and prompt it to complete the pattern for France, and it actually outputs the capital, the food, and the highest mountain of that country. The highlight is that large language models, given a context in a structured form, can copy the logic and extrapolate to what we query next. This is called few-shot prompting. So how can we prompt large language models to do structured planning? One immediate way is to give it a few examples of structured plans and then a new instruction. I asked GPT-3 here to generate a plan for bringing a banana from the banana lounge; okay, it doesn't have knowledge about the banana lounge, but somehow it gives a reasonable plan. We will be able to play with GPT-3 for planning at the end of the lecture, if we have time, and you can try all kinds of tasks.
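The structured few-shot prompting just described can be sketched as plain string manipulation. The example task, the numbered-plan format, and the parser below are illustrative choices of my own; the actual LLM call is left out entirely, since the point is only how the prompt is assembled and how a structured completion is parsed back into actions.

```python
# Hypothetical few-shot examples: (instruction, numbered plan steps).
EXAMPLES = [
    ("bring me an apple",
     ["1. go to the kitchen", "2. find an apple", "3. pick up the apple",
      "4. bring it to the user"]),
]

def build_prompt(instruction):
    # Stack the worked examples, then open a plan for the new instruction
    # so the model continues in the same numbered format.
    parts = []
    for task, steps in EXAMPLES:
        parts.append(f"Task: {task}\nPlan:\n" + "\n".join(steps))
    parts.append(f"Task: {instruction}\nPlan:\n1.")
    return "\n\n".join(parts)

def parse_plan(completion):
    # Parse numbered lines "N. action" back into a list of actions.
    steps = []
    for line in completion.strip().splitlines():
        num, _, action = line.partition(". ")
        if num.strip().isdigit() and action:
            steps.append(action.strip())
    return steps
```

Because the prompt ends with "Plan:\n1.", a model that has picked up the pattern tends to continue with a numbered list, which is exactly what makes the output easy to parse.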
Okay. So now we know what a large language model is and how it can be used for task planning; it remains to combine this new capability with skills that are actually executable in real life. The first paper I'm going to mention is the SayCan paper, 'Do As I Can, Not As I Say', from Robotics at Google and Everyday Robots. We know large language models can do planning for robotics; the problem is that large language models are not grounded in the real world. They don't know what is actually possible from a given state in a given environment. Say I already have a bunch of skills, trained with learning or built from motion planning algorithms that can produce a plan given the current observation, each tied to a language description that we can classify over with the method mentioned before. Now we have this problem that the language model is not grounded: maybe it says, for 'I spilled my Coke', that there are obviously a lot of options. I can find some napkins, or maybe I first need to go somewhere else to find napkins, or something else entirely. (Sorry, we're having some trouble with the mic; this happens every time we haven't used it for a while. I apologize.) Maybe multiple options would lead to successful completion of the task, but what is immediately in front of you may not be valid for every single option. For example, you could pick up napkins directly, but if there is no napkin in front of you, then you shouldn't execute that action at all; instead you should go find napkins, maybe in the kitchen. So your plan should also be grounded in the current state.

When we talk about grounding in the current state, it is helpful to mention a concept called affordance. Affordance asks: with respect to a task we desire, how likely is it, or in terms of cost how costly is it, to accomplish it from my current state? We've all learned a little bit about reinforcement learning, and we know what a value function is: when a state is likely to lead to a higher expected return, that is, more likely to lead to successful completion of the task, we say the state has higher value. In this paper they use reinforcement learning in combination with large language models. First they train reinforcement learning from pixels; then, with the value function from the reinforcement learning algorithm, they can estimate whether a skill is actually executable, likely to lead to success, from the current observation, directly from pixels. It provides a kind of task-based affordance, encoded in the value function. So here is what we can do. We have a list of options, and the language model gives us a prior probability based on the previous context, meaning the description of the task plus what has already been planned; and we can also compute another probability from the current affordance, from the value function of the reinforcement learning policy. For example, take 'How would you put an apple on the table?'. For the language model it seems that 'find an apple' and 'pick up the apple' are both valid options, both likely to come up given the context of the task; but if I don't have an apple immediately in my field of view, the value function will realize I cannot directly pick up an apple at a certain position. If we ground decision-making with both language models and value functions, we get a more reasonable choice: with no apple in front of me, I will prioritize 'find an apple' instead of picking up an apple directly.

Someone asked: is it hard to learn a value function here, I mean, learning a value function that works for apples and forks and all these things seems really hard. Yeah. I think one limitation of the vanilla approach in this paper is that they actually trained many of the skills separately with imitation learning, one imitation learning policy for every single object; that means humans collecting data for multiple days for your water bottle, and another person collecting for another water bottle for another day. One thing people hope to solve in the next few years is to instead have one huge value-function model that takes in the current image observation and a piece of text embedding and outputs scores. The problem is that this kind of skill learning lives in domains where data is extremely expensive: you either hire humans to collect it or you train in simulation for many, many days. So yes, it is currently a big challenge, though people have proposed solutions; the data is just not there yet.
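The SayCan-style combination just described, multiplying the language model's prior for each skill by the value function's affordance and taking the best-scoring skill, is only a few lines. The skill names and probabilities below are made up for illustration; they are not from the paper.

```python
# SayCan-style skill selection: combine the language-model prior with the
# value-function affordance by multiplying the two scores per skill.
def select_skill(llm_scores, affordance_scores):
    combined = {s: llm_scores[s] * affordance_scores[s] for s in llm_scores}
    return max(combined, key=combined.get), combined

# Hypothetical scores for "how would you put an apple on the table?"
# when no apple is visible: the language model likes "pick up the apple",
# but the value function says it is not executable from here.
llm = {"pick up the apple": 0.5, "find an apple": 0.4, "done": 0.1}
affordance = {"pick up the apple": 0.05, "find an apple": 0.9, "done": 0.5}
```

With these numbers the language model alone would pick "pick up the apple", but the product of the two scores makes "find an apple" win, which is exactly the grounding behavior described above.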
Someone asked: is the value function conditioned on my skills and on the request? So, usually a value function is just conditioned on the observation, or, one could say, the state. Here they actually have one value function per skill option: for each skill you look at its value function and evaluate it at the current observation. So yes, in this setup, with 50 skills you need 50 value functions; as I said, in the future you may have just one that is also conditioned on text. Any other questions? Good, everyone is with me.

So let's see how this works. In this slide the authors ask the robot to accomplish a really long-horizon task that involves many steps: 'I spilled my Coke on the table, how would you throw it away and bring me something to help clean?' The phrasing looks weird, but it is deliberately worded to confuse the robot, and the robot responds accordingly. You can see that at the very beginning it finds 'find the Coke can' very likely as the first step, and it is also executable because it is navigation, so it chooses that. By the way, if you didn't notice, the blue bar indicates the likelihood score from language and the red bar the score from affordance. Let's go to the fourth picture; you'll see that the affordance... no, sorry, I picked the wrong one, let's look at number five. The language model might suggest finishing immediately, but the affordance suggests that many other options are still available to accomplish, and these two things together ground the entire plan for this long-horizon task.

Someone asked: do you have to predict future observations, then? How can it plan the fifth step if it doesn't know the image at that time? I think one assumption this type of work makes is that the language model is good enough that the plan it generates is already reasonable. What the language model sees at time step five is the human instruction plus the history, 'find the Coke can', 'pick up the Coke can', and so on up through step four; so it sees the history of the task plan, but not the observations. This requires the language model to be good; if it is not good enough, you cannot trust it. A follow-up: so you do not commit to a plan beyond the current step, because later steps would require future images? Yes, currently that is the setup in this paper, although there is future work that improves upon it.

Nobody asked about this, but another thing people might ask is how to incorporate feedback in a stronger way. If I find a certain action to be infeasible, how do I adjust my plan accordingly? This requires feedback from the environment that contains more information than just affordance. For example, say I try to open my door with my key, believing the key is in my pocket, but it turns out it's not: in my mind I notice this, adjust my plan accordingly, and replan. Some follow-up works have proposed prompting the large language model to do an interesting inner monologue: they have a success detector that detects that a certain expected event did not actually happen, for example the key is not in my pocket, and they simply insert one line into the prompt saying 'I found that such-and-such is not actually there'; the large language model then incorporates that feedback and adjusts its future plan accordingly. That is the magic of prompting large language models.

Another interesting thing people do: language models are very tricky, really naughty, and to make them actually plan cool things you sometimes need to give them good incentives. Say we are trying to accomplish a really long-horizon task. If you directly give it the instruction, it often will not give you a good sequence of actions. Instead, you insert something like chain of thought: after the instruction 'I spilled my Coke', you write the sentence 'let's think step by step' and then generate the plan, and you will find the quality of the plan significantly improves. You can also do few-shot chain of thought: instead of saying 'let's think step by step', you give a few demos, for example 'someone spilled their Coke; I need to find something to wipe the table, and finally throw everything away'. If you give it a few examples like this and then a new instruction, it learns to follow the structure and generate a chain of thought for the new task, which also helps it generate the plan better. It really is naughty, and there are a thousand tricks for prompting it to generate nice things. In the latest version of the SayCan paper they added chain of thought, with an added line of reasoning to help the language model do better over the possible skills.
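The inner-monologue loop described above can be sketched as follows. Every component here, the planner, the success detector, and the world state, is a toy stand-in of my own; the point is only that failure feedback gets appended to the prompt, so the next planning call conditions on it and the plan changes.

```python
# Inner-monologue-style replanning sketch with stubbed components.
def planner(prompt):
    # Stub "LLM": if the napkin was reported missing, go find one first.
    if "no napkin visible" in prompt:
        return "go to the kitchen"
    return "pick up napkin"

def success_detector(action, world):
    # Stub detector: picking up a napkin fails when none is visible.
    return not (action == "pick up napkin" and not world["napkin_visible"])

def run(instruction, world, max_steps=3):
    prompt, executed = f"Human: {instruction}\n", []
    for _ in range(max_steps):
        action = planner(prompt)
        if success_detector(action, world):
            executed.append(action)
            prompt += f"Robot: {action} (success)\n"
        else:
            # The one inserted line of feedback that changes future planning.
            prompt += f"Robot: {action} failed, no napkin visible\n"
    return executed
```

On the first step the stub planner tries to pick up a napkin and fails; the failure line lands in the prompt, and from then on the planner routes to the kitchen instead, which is the replanning behavior the inner-monologue idea is after.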
where like I like I added line of reasoning it to help language models do better the possible skills yeah oh no it is oh so sorry sorry I misinterpreted the question so so the point I'm making here is that if we want to add a Chain of Thought prompting into this this is orthogonal to whether we are doing generative planning or classification based planning so if you look at this so so when it came out they actually really impressed me because um this make make a future of like future with home robots more likely because they sorry um can you help with that large language models May hold the key to unlocking such tasks they can when I was going to watch the entire city by picking up okay or just look at that builds my Coke on the table demo how would you throw it away and bring me something to clean it up the robot considers different skills that are available to it and selects the best one according to the second process described above it uses the affordance model as well as the language model to score the available options the algorithm starts by finding a Coke can which is Then followed by picking up the Coke can once the robot accomplishes that part of the instruction skill is appended to the prompt and the method continues with the next set of skills on the right you can see different skills being considered and their scores by the language model the affordance model and the combination of the two each skill once it's Chosen and executed gets appended to the prompt which then allows the model to generate the next part of the solution in this case the robot Ends by finding a sponge picking it up and then in the seventh step of this extended plan it brings it to the table and puts it down since the robot doesn't have the white table skill in its repertoire finishes the task at this point for the termination next we show two other unnarrated examples of tasks that seikan is able to accomplish yeah so because now the large language models are able to pass really 
really complex human instructions, you can see examples here, like you give an instruction such as "I just worked out, can you bring me a drink and a snack to recover", so you can input actual human language. We are not going to watch the entire video, let's just assume this finished successfully. So as you just saw, because we wanted to emphasize the capability of large language models for planning in this paper, we have to have a skill associated with each executable option, and that is one of the hardest parts, which I hope we will solve in the next decade. Then I think we can dive into my paper. It's also with Robotics at Google and Everyday Robots, from when I was interning. We just saw SayCan, and it is amazing, but SayCan didn't tell you that they hard-coded all the object locations: if you move an object a little bit it doesn't work anymore, and it assumes that all the objects are available in the scene, so if you remove something it would not be able to find an alternative plan. Also it has no perception: you have to tell it where objects are and what objects are available, and by the way, it can only deal with around 30 objects. So this is quite limiting, because it has no perception and only a finite set of executable options. So in this project I'm trying to significantly expand the capability of SayCan. As I mentioned, SayCan has no perception system, so it is not grounded in what's in the scene and where things are. In this project, what we do is let the robot navigate the scene, look around, and take a lot of pictures, and then, with an open-vocabulary detector, whenever it sees a new image, let's say it sees this bag and this table, it will do class-agnostic region proposals, crop the bag and the table out, and store them together with their 3D locations, and we do multi-view fusion such that we build a single representation of the entire scene. Then whenever the human asks it about a certain object, for example "I want the bag", it will query the scene representation using visual-language models, find its correct location, and also tell me whether it is actually in the scene or not. So for example here it will just propose a bunch of objects in the office location at Google: there is this Coke can, this chip bag, some trash cans, and the yellow sign there. You can actually query it with all kinds of natural-language object names; you can query the plant with "plant", "potted plant", "green plant", all fine, and it will be able to find that object. So this is open-vocabulary detection; this is the ViLD paper. Basically, if you know visual-language models, the CLIP model can give you a likelihood score between text and an image, and the score describes how closely the text describes the image. We build on that: that paper proposes open-vocabulary detection, where it combines object proposals with CLIP so you can query everything with text. For example, for the crocodile there, we can query it with "toy", "green toy", or "toy crocodile", and it will be able to estimate the likelihood score for that. So how can we ground planning in the scene? Now the robot has navigated the scene, and I give an instruction like "recycle the Coke can". What it actually does is, just like humans, immediately come up with a list of items that it should plan with, which is like establishing a planning domain: it proposes these objects, my skill might need a Coke can and a recycle bin. Then, because I already built this open-vocabulary scene representation, I'm able to find the locations of the Coke can and the recycle bin in the scene and how to approach them. And then, because I have these objects, we can generate executable options, for example
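The query step just described, matching a free-form text query against stored region crops, can be sketched with cosine similarity over embeddings. The vectors below are mock stand-ins; in the real system the region embeddings would come from a CLIP-style image encoder applied to class-agnostic crops and stored with their 3D locations:

```python
import numpy as np

# Sketch of the open-vocabulary query: each cropped region proposal is
# stored with an image embedding; a text query is matched to the region
# whose embedding has the highest cosine similarity, as a CLIP-style
# model would score it. All embeddings here are mock 3-d vectors.

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def query_scene(text_emb, region_embs):
    """Return the name of the stored region best matching the query."""
    scores = {name: cosine(text_emb, emb) for name, emb in region_embs.items()}
    return max(scores, key=scores.get)

# Mock database built during exploration of the scene.
regions = {
    "coke can":  np.array([0.9, 0.1, 0.0]),
    "chip bag":  np.array([0.1, 0.9, 0.1]),
    "trash can": np.array([0.0, 0.2, 0.9]),
}
best = query_scene(np.array([0.8, 0.2, 0.1]), regions)  # mock "soda can" query
```

Because the match is by embedding similarity rather than a fixed label set, synonyms like "soda can" can still retrieve the stored Coke can crop.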
the most naive way to do this is with templates: for every object, I generate an option like "go to X", "pick up X", "put down X". But you can actually also generate them with large language models, if you have the skills to execute them. This is really powerful. For example, I tried this before with large language models: if you just give it a few demos, like we did with the countries and the food and mountains before, you can have large language models generate possible options. For a knife, it can generate options like "peel" and "cut" and other kinds of options, so it can get more powerful. Then, given all these options, we can do scene-aware contextual planning, that is, planning with what's available in the scene and what's not. Question: the available objects? Yes, that is actually also done with large language models. I give it a few examples of an instruction followed by a list of objects involved, instruction, list of objects involved, and it can propose them really reliably. We will be able to play with this at the end of the class, I think. For example, I give it something like "throw the Coke can in the bin", and it proposes the Coke can and a bin. It's actually really powerful, because when I tested it with really weird, wild tasks for a robot, for example "fillet a fish", it actually proposed a cutting board, a knife, and a fish. So it's all about prompting large language models here: every single step can be done with large language models except actually executing the skills, and that is why this part is still really hard, because it is unsolved. Then, compared to SayCan: above is SayCan, where language models and value functions find the most likely action among a fixed set of candidates. So what we can do
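The template-based option generation just described is the simplest possible version: for every proposed object, stamp out go-to / pick-up / put-down options for the planner to score. A sketch, with illustrative object names:

```python
# Template-based generation of executable options: for every object in
# the proposed planning domain, emit one option per skill template.

TEMPLATES = ["go to the {obj}", "pick up the {obj}", "put down the {obj}"]

def generate_options(objects):
    return [t.format(obj=o) for o in objects for t in TEMPLATES]

# Objects proposed (in the full system, by a few-shot-prompted language
# model) for the instruction "recycle the coke can":
options = generate_options(["coke can", "recycling bin"])
```

The downstream planner then scores exactly this finite list of strings, which is what lets the classification-style scoring work at all.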
here is that we can actually propose executable options in this framework, and we can also use affordances as before and try to find the most plausible action among the candidates we generated. So although the skills we have are still limited, in this case we are able to expand the capability from a fixed set of skills toward something much larger, because now we can navigate to arbitrary objects. Previously everything had to be hard-coded; now, in the Google kitchen, I'm able to ask it to find Band-Aids for me, "I hurt myself, find some Band-Aids", or "find some medicine for me", and it will propose "I should go to the first-aid station" and then navigate there. These are options that were not available in the original SayCan paper, and we are doing that here. And you will be able to see it create a representation that will later be queried with natural-language input. From the narration: first we run a scripted exploration with class-agnostic detection and capture all the CLIP features; we can also run a frontier exploration for any novel environment. So here is one task that's not achievable by SayCan before, because it just doesn't have the concept of a brown mug or a woven basket in mind; it's not in its hard-coded list. Here I'm showing that with my new framework it is able to achieve these tasks. The relevant objects are grounded in the scene representation, as visualized in the map at the bottom right; both objects are found and localized. The robot then plans and does the task by combining the large language model and affordances, as visualized in the top right corner. To wash an apple, the robot proposes three objects: apple, tap, and sink. Training a policy to wash items is beyond the scope of the project, so a simpler pick-and-place version of the task is demonstrated here: the robot carefully picks up the
apple and puts it in the sink. If we are unconstrained by available manipulation policies, we can lift the constraint on the large language model, and then it will output steps like "turn on the tap" as the next action. Yeah, and the last one is watering the potted plant. So all these tasks were previously unachievable by SayCan, because it has to have a fixed list of objects, while we don't here. So although this is pretty powerful, note that we always need to bind available executable actions to our language options, and that is one of the hardest challenges now. I think it's an exciting area of research; once we can solve that problem, combined with this, we can actually have real everyday robots. And then there is one last paper, which I'm going to go through very briefly. Basically, with really powerful language models you are able to synthesize programs and execute them as programs, but I think we are running out of time, and I hope to show everyone some interactive demos. For example, this is my ChatGPT demo; this is a conversation model. Right here I ask: give me a step-by-step instruction to make Peking duck, deliberately confusing it by mixing two languages, and it gives me the instructions. You can ask it crazy things; someone even built a virtual machine inside it, because you can trick it into simulating one, and it can generate what's inside the home directory, and you have a conversation with it, like mkdir, and it will output the directory with your newly created folder in it. It's really crazy, you should play with it, it's free. And then this one is not free, I paid a little bit for it: this is GPT-3. Here is what we actually used in the SayCan paper and in my follow-up. Basically, I give it a bunch of demonstrations, these are the few-shot examples, and before the class I just tried "bring me a banana from the banana lounge", and it kind of generated the
plan. Although in the actual paper we used a much bigger language model compared to GPT-3 here, and in the context I give about ten demonstrations with a great variety, here it's just pick and place, but I think it's still really powerful. So let's try something; would a student suggest a task for it to plan? "Instructions to fold laundry"? For the laundry... is this the correct way to spell laundry? Okay, well, at least it says "fold", but yeah, on ChatGPT... okay. Yeah, you kind of need to, you know, it's naughty and you need to prompt it. This is just the most naive demo, but if you give it more diverse demos it will be able to generate more diverse things as well; for example, here it already knows that it needs to use the action "fold". Okay, let's try something else. You can also type things like, I don't know, let's try this: "I'm thirsty, help me out". How about "solve a Rubik's Cube"? Yeah, it will probably ask you to find someone who knows how to solve it. Yeah, if you had coded this kind of skill with trajectory planning, maybe it would be able to solve it. Also feel free to play with ChatGPT, you can find all kinds of crazy stuff. What do you want to ask it, a philosophical question? On this one: have you encountered responses that look like they have a certain common sense in them? So, for instance, a lot of language models are just doing pattern recognition; Stack Overflow, for instance, is banning its answers because it was giving misinformation that sounded correct. Yeah, exactly. So one of my friends, who I actually asked for help making these slides, tried ChatGPT, and he asked it to make a dish that involves
duck, and ChatGPT gave him a list of instructions. But then he tried to scam the chatbot by saying: explain to me why, instead of using duck, I can use my Coke. And ChatGPT actually gave him a bunch of answers saying that, because the Coke has such-and-such properties, it's a valid alternative to duck for making this dish. So you can actually scam it, yeah. Feel free... fantastic. [Applause] Yeah, feel free to suggest things
Robotic_Manipulation_Fall_2022
Fall_2022_642102_Lecture_6_Geometric_perception_part_2.txt
hopefully that's on now. [Music] All right, okay, let's get started. So last time we started our work on perception, and I had intended to get through iterative closest point; we didn't quite get through it, so I'll finish that off today. And the goal for today: last time we assumed perfect point clouds. We assumed that our sensors were giving us the best possible information they could, given the projection through the lens, but there was no noise, there were no outliers, nothing like that. So today we're going to think about what happens when those point clouds get messy, because real point clouds are very messy, and think about ways to modify our basic computational framework to make it more robust. There are a couple of particular algorithms I'll try to show you, and I think they represent a nice way to understand the problem and what you can do about it. There are many different variants out there, but there are a few that give you a taste for what's possible. So, just to set that up again, remember that our goal for this week is to do the same thing we did before, but we've added these D415 cameras, and now we're reasoning about where the mustard bottle is in the scene (I hid my point cloud there), but we're actually using those cameras to find it: first by taking pictures, then by turning those RGB-D images into a point cloud. This is actually the result after filtering the point cloud, getting it down to just the mustard bottle and removing the bins and all the distractors and so on, and then what we'll see is that we run ICP on this mustard bottle in order to then accomplish the task. Okay, so we started last time with a really important component of
the algorithm, one that started us thinking about connections between the kinematics problems we've talked about and the perception problems we're starting to talk about. We were given a few things. We were given a model in the form of a point cloud, our model points m_i, written in the canonical model frame O, the object frame. And we were given scene points; this is what we got from the camera, with a little bit of processing to put it into point cloud format. We originally obtained the scene points in camera coordinates, and then we assumed we had the camera pose. And the biggest assumption we made last time was that model point m_i corresponded with scene point s_i: we somehow could look at the picture from our camera and know that a particular point in the point cloud should be associated with a particular point in our model. Okay, if we have this setup, it's incredibly useful to know that we can write an objective like this: minimize over the unknown pose X^O, the pose of the object in the world. And when I write it as an optimization, I want to be super clear that, as an argument to a mathematical program, this is a pose, so it lives in the special Euclidean group SE(3). The objective is that for all i, if I take both the model points and the scene points into the world frame, their distance is small: min over X^O in SE(3) of the sum over i of || X^O * m_i - s_i ||^2. I tried to show you the landscape of this. It's a quadratic objective: when I plotted it with just two decision variables, which I could do in 2D with just rotations, it looks like a quadratic bowl. And then this constraint, the fact
that these are not arbitrary rotation matrices inside here, meant that there was an extra constraint, which was that unit disk; we'll see that picture again. But this was a nice optimization problem, and it has a solution that's almost as good as closed form, a numerical algorithm via SVD, singular value decomposition. That's where we were last time. I want to point out quickly, because we're going to play with different formulations of this today, different ways that give certain robustness properties: there's a point-to-plane version of this, for instance, where you correspond points to whole faces of a mesh; there are a bunch of versions, and we're going to be doing manipulations on these basic equations. The first thing to observe right off the bat is that if I had instead wanted to transform the scene points into the object frame, using the transform from the world into the object frame, which is the inverse of this transform, and written the optimization as min over that inverse transform of the sum over i of || (X^O)^-1 * s_i - m_i ||^2, that is solving for a different transform, one that puts everything together in the object frame. But from the point of view of an optimization problem this looks identical, and it can also be solved with SVD: it's still linear in the decision variables, plus some constraints that are handled by the SVD. So you can go back and forth, and we're going to move these things around and understand where it breaks and where it works. Okay, but I want to address this big assumption. Because really, how could you possibly know which point in the model goes where? If I get a point cloud of a mustard bottle, how am I possibly going to
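Before tackling correspondences, here is a sketch of that SVD-based closed-form step (often called the Kabsch or Umeyama algorithm) for the corresponded objective above, written in NumPy and checked against a known transform. This is a generic textbook construction, not the course's own code:

```python
import numpy as np

# Closed-form (Kabsch-style) solution of the corresponded registration
# problem: find R, t minimizing sum_i || R m_i + t - s_i ||^2.

def solve_pose(model_pts, scene_pts):
    """model_pts, scene_pts: (N, 3) arrays, row i of each corresponded."""
    m_bar = model_pts.mean(axis=0)
    s_bar = scene_pts.mean(axis=0)
    H = (model_pts - m_bar).T @ (scene_pts - s_bar)  # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # The det correction keeps R a proper rotation (no reflection).
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = s_bar - R @ m_bar
    return R, t

# Check against a known rigid transform:
rng = np.random.default_rng(0)
model = rng.standard_normal((20, 3))
a = 0.7  # rotation angle about z
R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0,        0.0,       1.0]])
t_true = np.array([0.5, -1.0, 2.0])
scene = model @ R_true.T + t_true
R_est, t_est = solve_pose(model, scene)
```

Subtracting the centroids decouples the translation, leaving a rotation-only problem that the SVD solves; the sign correction is exactly the "not arbitrary matrices" constraint keeping R in SO(3).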
know which point goes with the top and which one goes with the bottom? That's maybe the hardest part of perception, making that leap. So I think the picture on the screen is a pretty useful one. Remember, blue is my model, salmon-colored is my scene, and the brown is what I get when they overlap; it wasn't artistically chosen, clearly. And the green lines are the correspondences I drew in: if this point corresponds to that point, I just draw a line. So let's say I didn't have those correspondences, but I had this initial frame of the model points and these initial scene points. You can imagine, and the algorithm's name is a pretty good indication of what we're going to try first, that a fairly good heuristic would be to say: let's just guess that the correspondences are the points that are closest in space. I'll have an initial guess of the object pose and the actual scene points, and I'll start by saying the ones that are closest in Euclidean distance are the ones that correspond. It's interesting: here you'd see that on the first guess it might get this one wrong, that this point would probably correspond to that point, but if enough of the points correspond to the right points (I'll make this precise in a second), then we can have an algorithm that I think can be very effective. So this is the iterative closest point algorithm, which does exactly this, and I'll write the equations down to make that precise in just a second. What you can see is that even if the first guess is wrong, you then solve for a new pose via the singular value decomposition, which puts you in a new configuration where you can make your correspondences again, and as long as that alternation converges, you can get good solutions. Now, this is also
susceptible to local minima, and you'll see the bad cases in a minute too, but this is the basic algorithm I want to understand next. So the iterative closest point algorithm is going to solve for the correspondences, and we need a notation for them. We'll have an initial guess for the object pose; I'll stick with the original formulation, moving the model into the scene points. When I make an estimate, I'll use the hat notation, X-hat, to say this is my guess at the true X. Given this, I want to find correspondences. So, step one: I'm going to write the correspondences in a correspondence vector, call it c, where c-hat_i, the i-th element of the vector c, I choose with the minimum-distance heuristic given my initial estimate X-hat: c-hat_i = argmin over j of || X-hat * m_j - s_i ||^2, searching over all of the j's. And since this is just a guess at the correspondences, I put a hat there too. So what is this? It says that for every scene point i, I want to find a corresponding model point j: I loop through all the possible model points j, compute the distance between model point j and the scene point i that I'm considering, and take whichever one minimizes it. Min would be the value of the optimization; argmin is the element j which causes it to be smallest. That's the difference between min and argmin. So this returns the element j, and I tuck it into the vector c. Is that notation clear? It's a vector of integers. So how would you compute this? We talked about quadratic optimization over there; this is now an integer optimization, but it's just a list of numbers, so you can just compute all the
distances, worst case, and just take the smallest one; there's a finite number of things to compute. In practice we don't do that. In practice there are efficient data structures for nearest-neighbor queries, so we're going to use those: k-d trees. There are good public libraries that make these very fast, approximate nearest-neighbor queries and the like. So in practice you can make this computation with your best nearest-neighbor data structure. And then step two: the reason I chose this notation is that I can just say my new X-hat is the result of that optimization with the correspondences written in. I've got model point m indexed by c-hat_i minus scene point s_i, and I sum over all the scene points. Where before I just had m_i, assuming a known one-to-one correspondence, now I take whichever model point corresponds to each scene point and sum over all the scene points. From the optimization perspective it's the same as that optimization, so I can use my SVD. Is the algorithm clear? It's the simplest, the bread-and-butter point cloud algorithm; you'll see it used in lots of places. Yes? Good. Yeah, so the question was: what if the dimensions don't match? I made a choice here with my notation to say that for every scene point, I'll try to find a model point that corresponds. We're going to talk about that, because that choice has implications, for instance in the case of partial views. I'll say this very carefully in a minute, but so far all I'm requiring is that every scene point corresponds with one model point. You could have one model point that's used by many scene points; you could have some model points that
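The two steps just described alternate until convergence. A sketch of the whole loop, with brute-force nearest neighbors for clarity (in practice you would use a k-d tree such as scipy.spatial.cKDTree for step one), checked on a small perturbation where the identity is a decent initial guess:

```python
import numpy as np

# Minimal ICP: alternate (1) nearest-neighbor correspondences with
# (2) the SVD/Kabsch pose solve, holding the other fixed each time.

def kabsch(m, s):
    """Best-fit R, t mapping corresponded points m onto s (least squares)."""
    m_bar, s_bar = m.mean(axis=0), s.mean(axis=0)
    U, _, Vt = np.linalg.svd((m - m_bar).T @ (s - s_bar))
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, s_bar - R @ m_bar

def icp(model, scene, iters=20):
    R, t = np.eye(3), np.zeros(3)
    for _ in range(iters):
        moved = model @ R.T + t
        # Step 1: each scene point picks the closest transformed model point.
        dists = np.linalg.norm(scene[:, None, :] - moved[None, :, :], axis=2)
        c = dists.argmin(axis=1)
        # Step 2: re-solve the pose with those correspondences held fixed.
        R, t = kabsch(model[c], scene)
    return R, t

rng = np.random.default_rng(0)
model = rng.standard_normal((50, 3))
a = 0.05  # small rotation about z, so identity is a good initial guess
R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0,        0.0,       1.0]])
t_true = np.array([0.02, 0.01, -0.03])
scene = model @ R_true.T + t_true
R_est, t_est = icp(model, scene)
```

With a large initial misalignment the same loop can converge to a local minimum, which is exactly the failure mode discussed next.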
are used by no scene points. But this notation says that for all the scene points, I'll do this. Good question. Yes? That's a great question: what happens if you don't take all of the correspondences, but only a few of them? I think we're going to have to do that at some point to address some messiness, so we're going to think about the right way to do it. And let me give you an example right away that I think makes the point. Imagine I have my object of interest here, and this is the ghost of it; I'll use my reddish color for my scene points. You get the idea. Now what happens if, I don't know, there was a reflection or something, and I got a couple of scene points way over here, and I have my model that started off with some initial guess? The fit is nicely getting pulled in this direction, but I said so far that every scene point corresponds to at least one model point, so those stray points are still going to pick some point on the model, and they're going to pull the solution in their direction; they're trying to minimize the distance between the corresponding points. If I've got some green correspondences here (I guess the shortest one would be a straight line), then even if I have a lot of good correspondences pulling me this way, even a very small number of correspondences that are far off, because that distance is large, can have an overwhelming effect on the quality of the algorithm. So that's a reason it maybe feels unnatural to say that every scene point has to match with at least one model point: it makes you susceptible to outliers. But the reason I chose that was because there's another direction too, which is: what if I only have partial
views? So I've got a camera, I'll use white for my camera here, looking down from over here, and it can see this part of the object, but it's not seeing any points over here. So if I had said instead that every model point has to correspond with one scene point, the opposite choice, then again I've got some model points here which don't have a correspondence, and they will cause big artifacts. So you can choose either to correspond scene to model or model to scene: the scene-to-model choice makes you susceptible to outliers, and the model-to-scene choice makes you susceptible to partial views. And that's kind of the point of the lecture: we're going to have to do a little bit better than both of those. I would say these are the dominant ways that real point clouds are messy. "Messy" is a silly term, but "noise" is taken; I want to reserve noise for something specific, which I'll say in a minute. So what would noise be in the context of a perception system like this? Maybe I measure the true scene point plus some Gaussian noise, for instance: every one of those points is just perturbed by a Gaussian. I shouldn't say this, but that's what all the theorists pick when they try to prove something about perception; they always assume Gaussian noise. But that's not what real cameras do. It's the easiest thing to analyze, but real cameras are actually very low in noise, and this kind of noise is relatively easy to be robust to. In fact, we already have fairly good robustness to noise just by writing the problem in this least-squares objective: to some extent, if you've got
Gaussian noise added in here, then you'd expect this least-squares objective to be the right metric for rejecting that noise. Remember the question last time about whether we should make this an equality? By asking for it to be softer, this gives us nice robustness to Gaussian noise. But I think partial views are a very big one: you're just never going to see the bottom of your object sitting on the table until you pick it up. No amount of looking around will let you see the bottom, so you're going to have partial views. Another way you get partial views is from occlusions, partly just by having a camera looking from one angle, but also, as scenes get more cluttered, something will block your view of an object. And I think the last big one is outliers. They're a type of noise, maybe, but let's define them as spurious random points added to the image in arbitrary locations. A reasonable model would be points chosen uniformly at random over the viewing window; if you can be robust to that kind of outlier, that's a first-order sort of robustness. More interesting outliers are where you have other objects: maybe you have a mustard bottle but a ketchup bottle nearby, and it has some of the same shapes. Clearly your perception system should be able to disambiguate them, but these kinds of algorithms might get pulled toward the ketchup bottle and never break free if they're similar enough. So those outliers can be pretty complicated. And I'd say there's at least one more type
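One simple defense in this direction, a form of the correspondence rejection we will come back to, is to drop any corresponded pair whose current distance exceeds a threshold before re-solving the pose, so a handful of far-away outliers cannot dominate the least-squares objective. A sketch of just that filtering step; the 0.5 cutoff is an arbitrary illustrative choice, not a recommended value:

```python
import numpy as np

# Correspondence rejection by distance threshold: keep only pairs whose
# current residual is below a cutoff, so gross outliers are excluded
# from the pose solve. The cutoff here is arbitrary, for illustration.

def reject_outliers(model_pts, scene_pts, max_dist=0.5):
    """Return the subset of corresponded pairs closer than max_dist."""
    d = np.linalg.norm(model_pts - scene_pts, axis=1)
    keep = d < max_dist
    return model_pts[keep], scene_pts[keep]

# Five corresponded pairs; the last scene point is a gross outlier.
model = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0], [2, 0, 0]])
scene = model + 0.01                      # small residuals everywhere...
scene[-1] = np.array([10.0, 10.0, 10.0])  # ...except one far-away point
m_in, s_in = reject_outliers(model, scene)
```

The catch, of course, is picking the threshold: too tight and you reject the partial-view points you need; too loose and the ketchup-bottle kind of outlier slips through.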
the side or shiny parts of the images okay so we want to be robust to that okay but given this basic algorithm I guess I got ahead of myself a little bit um you know this is a very powerful basic algorithm right this this is just the ICP and this is the Stanford bunny which you basically if you were ever to write an ICP paper you must it seems you must use like for a condition for acceptance is that you ran your your system on the Stanford bunny data set and you will do this on your problem set so you can join the ranks uh okay that Bunny shows up everywhere uh but this is ICP in actions if you remember the um the the dish loading robot from Toyota Research Institute um did you see what happened there so there's a there's a there are two perceptions that work in this uh in that video let me show it again okay there's an initial one that's actually using mostly deep learning to try to find where the mug is in the sink roughly but then when it gets close you see this little realignment that little that was an ICP based algorithm that was using the the local camera on the hand comparing it to the expected mug and dialing it in and gluing it for the grab and that's a pretty common pipeline okay that was it again right there right there a little refinement and then go can the camera see it that close yeah right here you can see it once it gets too close it gets it becomes blind right here right there you can still see it yeah you're right they do have minimum throw minimum depths and uh it gets blind pretty quick and let me give you one more sort of high level motivation for the the ICP class of algorithms this is actually um this is when deep learning started uh you know approaching some of these perception problems and everybody was trying to train their first deep networks to try to estimate the pose of objects which is extremely powerful pipeline now but the first versions of these algorithms all required you to label the ground truth pose in real data sets Okay so 
this was a tool that was very useful for it still is useful we would take real data in the lab that's the messy lab upstairs okay and we had a model of the drill and we would just have a user interface which after collecting a video stream of data would just click you'd click two or three times in the in the interface let me stop that and uh and you click two or three times just to give ICP an initial guess it would fit the point Cloud into this really noisy Point Cloud okay this big complicated one and then all of the images that you had from all the different interactions were suddenly perfectly labeled or labeled by ICP and then you use that to train a higher level perception system yes um in the dishwasher example yes do you in that case yeah so in the dish in the dishwasher example we had a model of the cup the plates the spoons in fact that's why we chose dish loading was because you can sort of imagine you know going into a restaurant and having a finite number of things it's like it's a pretty good case for the known model assumption and we tried to take that as far as possible and then anything we didn't have a model for that we couldn't register to one of the known models we would throw in the in the trash yeah which every once in a while we throw something important in the trash but okay so partial views I think partial views it makes sense to do scene to model correspondences for outliers I'm sorry uh did I say that right you'd like everything in the scene to correspond to at least one model but for outliers you'd like the opposite and at some point we have to do better than these hard correspondences of trying to correspond all the points to all the points and we need some sort of mechanism to to do something better than that right to we're going to talk about soft correspondences and we're going to talk about outlier robust yeah correspondence rejection basically okay is that the sun changing the lighting's just just changed a lot okay let's talk 
about soft correspondences first well let me just I had a couple animations here too this is ICP running in the partial view case it can do pretty well but it can really mess things up okay and this is what happens with a few outliers chosen how I did where I just picked some random points in the world and those points even as it tries to converge are going to have potentially an overwhelming effect on the convergence of the algorithm so that's what we're trying to fight okay the first way we're going to try to fight it is by taking these hard correspondences and softening them okay so let me write the same thing we're doing here I used this notation right here I'm going to sum over all scene points and you look into the index of my correspondence I'm going to write the same equation but I'm going to write it a little differently and it's going to lead to another algorithm here so take my min over X O in se3 I want to keep my X O p M J minus p s i but now I'm going to hit this up front with a correspondence Matrix c i j and I'm going to sum over J and sum over I okay this is now a correspondence Matrix and cij is going to be 1 if I corresponds to J and 0 otherwise okay so I'm taking my original single sum with an index and I'm going to write it as a double sum and basically if I wanted to get exactly the same correspondences I just set a lot of the terms of the sum to zero okay but out of the box I'm going to say there's a chance that any of the model points can correspond to any of the scene points or any combination thereof that's why it's a generalization yeah now if I wanted to impose something like every scene Point must correspond to some model point or vice versa I could put a constraint on the rows or Columns of this if I wanted to but let's not let's leave it as a
slightly more General case right this could have a row that's all zeros it could have a row that has multiple ones if there's multiple correspondences is that clear yeah so minimizing this is actually since this is just a constant if someone gives me this coefficient Matrix this is still something I can solve with SVD okay it's got more terms but it can still be solved in the same way the trick is that this has to be fixed okay I would love to optimize c and x simultaneously to be able to leave this as a decision variable but in this problem the correspondences are given in a slightly more General way and I'm still just finding X and the interesting case then the soft case this is so far the same as what I've written before I can make these soft now if I change and allow c i j to be just between 0 and 1 for instance instead of saying it must be zero or one then I'm allowed to correspond a little bit with some of the points right I mean does that make sense sort of in the equations it turns out that's exactly what's happening in one of the famous soft correspondence approaches called CPD coherent point drift it's one of the famous Alternatives if you will to ICP but you can really just think of it as a soft version of ICP the CPD paper actually is all written in the language of probabilities trying to say I've got an estimator and the like but the math is the same there's a probabilistic interpretation of what I'm writing but it's just a gaussian and the math is the same so in the CPD paper basically they just say on each iteration of the algorithm I'm going to take my initial guess X hat and then I'm going to set c i j to be basically a soft version of the distance like a gaussian kernel around the points right I'm going to take one of my points here and score
the the I just have one dimension here right if this is my scene Point here and my model point is here then I'm just going to score in some gaussian kernel the distance from the points so there's a normalization term and there's some parameters of that gaussian but roughly it's the distance we know that's our estimated distance right over some variance here and I'm just going to use this as my distance function my correspondence function okay and there's a beautiful Bayesian sort of interpretation of that of course but you can think of it just as a distance function that's giving you these correspondences and then step two all right you solve SVD and then you repeat the word on the street is that this is much more robust it tends to be more expensive because you're summing over a lot more you're summing over a quadratic number of points instead of a linear number of points and so some people actually choose not to use the algorithm because of that quadratic cost when Point clouds get big that can be expensive but it tends to be the word on the street is it's more robust I think I have the snapshot of the Stanford bunny from the oh this is the CPD one but this is roughly what you see in every paper I could have put CPD on the bottom and it would have been a similar picture where you see the Stanford Bunny and you see it with some noise and stuff in it and then you see my algorithm's better than their algorithm right I thought I had this one in here too yeah here's CPD right you see they corrupted it with uh gaussian noise it's all good okay but in practice people do like CPD apart from its speed yes yes that's a good question so the ICP in the sink why did we use ICP and not CPD in the original one there are many variants of ICP also and sometimes more mature implementations I think probably if that had been a pain point in our pipeline we would have explored CPD
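A toy version of the CPD-style iteration can be written as two pieces, a Gaussian-kernel correspondence matrix and a weighted SVD pose fit. To be clear, this is only the flavor of coherent point drift: the kernel width `sigma` is a hand-tuned knob here, and the real CPD paper adds a proper probabilistic normalization and a uniform outlier term that this sketch omits:

```python
import numpy as np

def soft_correspondences(model_pts, scene_pts, R, t, sigma):
    """C[i, j] proportional to exp(-||s_i - (R m_j + t)||^2 / (2 sigma^2)),
    row-normalized so each scene point's weights sum to one."""
    transformed = model_pts @ R.T + t
    d2 = ((scene_pts[:, None, :] - transformed[None, :, :]) ** 2).sum(-1)
    C = np.exp(-d2 / (2.0 * sigma ** 2))
    return C / C.sum(axis=1, keepdims=True)

def weighted_pose_fit(model_pts, scene_pts, C):
    """SVD pose fit where pair (i, j) contributes with weight C[i, j]
    instead of a hard 0/1 correspondence."""
    w = C.sum()
    m_bar = (C.sum(axis=0) @ model_pts) / w           # weighted centroids
    s_bar = (C.sum(axis=1) @ scene_pts) / w
    W = (scene_pts - s_bar).T @ C @ (model_pts - m_bar)
    U, _, Vt = np.linalg.svd(W)
    D = np.diag([1.0, 1.0, np.linalg.det(U @ Vt)])    # enforce det(R) = +1
    R = U @ D @ Vt
    t = s_bar - R @ m_bar
    return R, t
```

Alternating these two steps plays the role of the hard-correspondence ICP loop; the N-by-M matrix is also where the quadratic cost mentioned above comes from.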
but I think the off-the-shelf ICP implementations were good enough for that job and we worried more about the computational cost in fact because ICP is a local algorithm it can run into local Minima we would actually take a handful of initial guesses for the pose of the mug given our original perception system and we would in parallel run multiple versions of ICP and take the one that fit the best and we optimized that pipeline maybe before we fully thought about whether we should use CPD or not yeah we did explore actually a CPD-like version too and I think that was also viable yes in this algorithm yep so we still need the initial guess to come up with some initial correspondences the notion of distance which sets my initial correspondences you know before we were just using it to down select which thing to correspond with at all now we're setting the soft correspondence with it I should use x hat in this yes thank you I'd like one of those to be J I wrote that whole term too quickly clearly thank you and this is also a function of I and J I think that's right good catch yes for CPD I would guess that there's somebody who's put RGB into CPD but yeah I don't see any reason why you wouldn't I just would think that any euclidean distance in RGB space is going to be only good very locally right people do it for ICP also there's an RGB version of ICP where you put your distance across the RGB values in addition to the XYZ values I've always thought it was a little weird there's other things you can do that use more General descriptors to do the point matching and stuff like this and that makes more sense to me than RGB but great okay so how does this handle outliers for instance right compared to the ICP algorithm what is this doing
with outliers if I had the picture I drew initially where I had a perfect Point Cloud here with two points way over here what happens exactly right so this one unlike the original one which even at convergence is getting pulled by those outliers depending on which direction you put your correspondences this one is effectively ignoring points that are a long distance away right so it handles outliers in a soft way it also means that if you're too far away in your initial guess you have no hope of converging but this is intentionally introducing some notion of locality in our distance function which is good yes these days you use deep learning to get an initial guess and then maybe you use ICP or there's even deep versions of the refinement that can work very well but I think that's a very natural pipeline you know use a data-driven thing to take an initial shot and then refine it with ICP there are versions of this that we will cover I'll mention at least at the end in our last variants that try to solve the global point correspondence problem okay and they don't work great you see there are some algorithms that will solve a globally optimal problem but it will take potentially the age of the universe to solve and there are other ones that are close to Global but in practice people don't consider global point correspondence in noisy Point clouds to be a solved problem okay there's sort of two important versions to think about when you're thinking about this initial guess if you have a point Cloud that looks like this like it's a bunny but it's a little furry then actually it's not too hard to get an initial guess it's not too hard to solve ICP if you're in this setting for instance right where there is a drill in there for sure but there's also like every other tool and a bunch of probably student lunch or something
like this you know there's all kinds of other stuff in the point cloud and you have to find the needle in the haystack that's a much harder problem for initializing ICP and we don't have strong point registration algorithms that will solve that Global problem in fact when we did this project to try to just label we figured who cares if it's slow let's take a global point correspondence algorithm to just generate the labels it's offline we'll just generate a big data set no big deal but we couldn't get a global point correspondence method to be robust enough to do the job so it required having a human click but those are two very different cases the needle in the haystack versus a fuzzy bunny okay all right so this is a soft version of rejecting outliers but let's think about how we could work a little bit more towards the rigorous sort of outlier rejection case right the problem with this is that I still had to come up with that kernel function I chose some parameters of a gaussian which is sort of arbitrary nothing about my data told me really what those coefficients should be I picked some kernel I tried it I maybe tweaked the knobs until I got something I was happy with but really if I could solve this jointly saying that both c and x are decision variables find me the best fit among any correspondences that would be the dream that would solve the global ICP problem we can't do that but we're going to do something a little bit closer okay so let's talk about rejecting outliers basically removing spurious correspondences and I think there really are two cases okay there's the easy case I guess I called that the fuzzy bunny let's just stick with that that's not what I called it here but that's what I'll call it for today the fuzzy bunny case where we have almost our model in the data but it's just been corrupted by a handful of outliers like so really I guess maybe to make
that more formal you could talk about the rate the percentage of outliers in your data set right so if you have a thousand points in my point cloud and 990 of them are bunny-like and 10 of them are just spurious points then that's sort of an easier setting and then there's the I've got a drill in a scene where 100 of my points are associated with the drill but there's another 900 associated with other things that would be the hard case in the easy case there's a bunch of heuristics that I should acknowledge but I don't want to dwell on you can imagine heuristics and you could sort of call this CPD approach heuristic where you could just say I'll truncate distances if I have my ICP Loop and any distance that's greater than five I'll just put a threshold on distance for instance I'll just remove those from the correspondence list and that's fine right other thresholds that sometimes people will put in they'll say I'm going to look for the hundred best correspondences I'll just put an upper limit on the number of correspondences to consider right and I can just put in sort of a threshold on that and there's a bunch of algorithms like that okay which are useful the hard case has more interesting algorithms in my opinion and I'll list a few okay so one of them actually Dave you said RANSAC last time right RANSAC is random sample consensus we're going to ask you to play with that one on the homework and it's a pretty simple algorithm to understand in a few words it is I'm going to take my thousand points in my point cloud and I'll try to pick 100 of them at random and start using ICP from those hundred and maybe I can combine it with a few of these thresholds on distance or whatever to bring in other point clouds that are fitting but then I'll stop and I'll take a different 100 initial Point clouds and I'll do that a bunch of times and when I've
happened to pick if I do that enough times then I'll hopefully pick some subset of the point Cloud where that initial subset gives me a good initial guess and I can go from there okay so it seeds it optimizes on random subsets roughly because they initialize with random subsets okay RANSAC is useful more generally in ml kind of problems and the like the problem set will step you through that and you'll get a basic understanding of it I want to spend my time here talking about one that I think is much more clever it leverages some of the geometry in the problem which is using pairwise distances okay what are pairwise distances and why is that a useful idea so here's an observation you remember how I said that the relative positions between points depend on the rotation but not the translation we used this trick right I have a bunch of points on my point cloud right then the relative positions this Vector here if I take any two points right if I do p of let's say M2 versus M1 in some frame this depends on the rotation but not translation that's what we used to justify how we designed our SVD algorithm we said we actually only have to worry about rotations because that Vector if I slide this thing around on the board that relative position is invariant to translation okay the pairwise distance on the other hand is invariant to rotations and translations okay so if I put this object in some completely different configuration here right the distances between these points here are the same even though it's been under any rigid transform right if I just look at the length of the pairs does that make sense so if I were to go through my original model and compute all possible pairwise distances okay and now I go through my scene and I compute all possible pairwise distances if my scene
has a point whose pairwise distance isn't in the model up to some noise then one of those two points had better be an outlier did I say that well enough right if the scene I'll write it like this S1 S2 distance is not a pairwise distance in the model then one or two is an outlier now that's a little bit weird okay because I think that analogy is perfect if I take an initial Point Cloud if I have my model and I think the real data is just a perfect translation of that and then some extra things thrown in the mustard bottle right in practice I made some point Cloud representation of my mustard bottle once and I've got a different set of points that are all on the surface of the mustard bottle later so you have to put some margins on this you can't say that with exact equality these distances have to match but in practice there's a distribution of expected pairwise distances that represent your object which you can look for in the data without having solved any pose estimation problem if you can find a clump of points in your data then you can actually reject a lot of outliers so there's a nice algorithm called teaser which is from an MIT group Luca Carlone and Heng Yang was the lead author teaser actually had a bunch of different components but one of the pieces that I'm highlighting here is this outlier rejection step Heng wrote a bunch of papers and I think he had a different name for every piece of the algorithm so there's a better name for just this piece but it's all under the teaser umbrella yes yeah okay so you're worried about reflections so that's true if you had a perfectly mirrored object you wouldn't be able to distinguish it with the pairwise distance computation I completely agree in practice if I accidentally found the mirrored mustard I guess maybe that's not the biggest
problem um but you're right it's exacerbated I think in 2D on the board it looks like the mirror operation is very natural but if you think about objects of Interest going through any rigid transformation the case you're worried about I think would be a reflection where you're really going through some axis and you're right this would not distinguish that but at very least it could reject a lot of outliers right maybe not all good question so this teaser algorithm by Heng and Luca had a really clever idea okay they said we've got some distribution of possible pairwise distances and they said make a graph I'm going to put the picture up that will help you make a graph connecting all the matching pairwise distances and if you can find the maximal Clique in the graph that is likely your object of interest in the data so let me show you the picture that I use to think about this algorithm okay so this is maximum Clique in correspondence graphs okay and they've showed it in very complicated settings big Point clouds and the like I was like Hey just explain this to me I have a triangle it has the sides three four and five let's just work out that case first okay and I like this answer so much that it's not in the notes okay so hopefully this will help you guys too okay so this is the setting I have my same model blue and salmon scene and the setting I was worried about was what if I have the object of Interest my model my blue if it appears in the scene perfectly a bit like your Reflections question here if it appears in the scene perfectly but there's also a lot of similar distances that appeared in the scene so I said imagine you have the exact triangle the 345 triangle but you also had a prism or a pyramid there that had three four four four
three right you see what I was trying to do was make a lot of similar distances the distance four shows up in my data the distance four shows up more on the object on the right than on the left even though the left is the correct answer similarly the number three shows up a lot on the right so by counting there's more pairwise distance matches on the right but the correct answer is on the left okay the way their algorithm works is like this okay so we're gonna make a node here if in the model point a here corresponds to a over there if a corresponds to B over here each of these circles each of these nodes in the graph is one possible correspondence from model to scene okay an edge in the graph happens if the pairwise distances match yes so if a to A and B to B here is the same distance then I'll put an edge in the graph otherwise I won't put an edge in the graph okay so that gives you this graph structure of possible pairwise distance correspondences and the claim in their paper was that the maximum clique is likely the object of Interest and indeed this object here on the left has a bigger Clique of three than any of the pairwise comparisons on the right so they defeated my counter example and won me over yeah but that's a very clever idea isn't it that you can use this invariance without even knowing the pose of your object at all neither the rotation nor the translation you can compute this quantity and you can look for the statistics of your object roughly in the data I think it gets a lot harder when you have noise in the data and you expect those to be almost pairwise distances then you'll have probably many more edges in your graph and it's not as clear what the maximal clique looks like but it's a very clever idea okay questions about that yes yes that's a great question so first of all never do that
with a robot I'm good so your question so um I I actually this is a very very good question so Tom's asking he says um what if my object is like long and flat and I'm looking at it like this right and I've got first of all you're going to get no especially on your iPad you're going to get no returns on the side so that's just dead in the water probably but um but even I would say long flat objects on tables I think are a good example of how the ICP objective that we've been writing all day is actually probably not the right objective because um if I had I could I could get a lot of good matches in terms of distance even if this is shifted by a lot if I'm looking at this object on the table I'm probably waiting the edges a lot right those are where all the information is but the density of points I'm going to get is is mostly on the top so my ICP objective I think is deficient in that case and you shouldn't think of this as definitely the right objective that's a point I always try to make at the end is that it's not clear that ICP is the right objective and I think long flat objects on tables or books on tables is a great or iPads on tables is a great example of that okay people understand the pairwise distance I mean roughly understand I think you should think through that example I think I it's in the notes obviously and uh it's worth thinking through that one uh was very impressive to me I okay last version that I'll try to do today here for being robust is trying to solve jointly for the correspondences and the poses okay so let's try to take the slightly more interesting version of the algorithm and I'm going to do that first with just a small modification of the original algorithm I'll do the point to plane ICP it's going to be a window into I think into a bigger idea here so the idea here is I want my model is a triangle mesh a triangular mesh instead of a bunch of points and I'm going to do my example here just in 2D Okay so let's just say I have my model 
looks like this whatever okay rather than represent the model only as a series of points in 2D I'm going to model it as line segments you know these line segments are the model and I want to correspond points in the scene not necessarily just to the vertices but to the closest point on the face right in 3D this is point to plane is that clear I'd like to measure not the distance from point to point but the distance from point to plane and allow it to match anywhere on the plane okay so how would I write this how do I represent these meshes these are triangular meshes okay so typically you have a list of vertices plus a list of faces these are points in X Y Z and these are vertex indices I J k so this would have vertices I don't know one negative one and it's going to have another one one two right it's going to have all these vertices listed and then it's going to have a face saying that vertex one and vertex two are connected in 3D it would have three numbers but in 2D it's got two numbers right it's got another one that says vertex two and vertex 3 are connected this is my list of faces and this is a perfectly reasonable format on disk you'll find CAD formats that are basically just this right OBJs are basically just this okay so I would like to now take a scene point and somehow correspond it to this face so I want to correspond to the face and then have the math be the closest distance point to plane there's a bunch of different ways to do that you probably know the equation of the distance between a point and a plane you can absolutely write that into your algorithm and work from there I'm going to show you an optimization version of it which I like a little better okay so let's try this before I even write the full optimization I'll make one point here instead of saying that you know I could compute sort of the point to plane the normal distance I can write this
equation for the point to plane distance but let me instead say I'm going to correspond to this point this red point with an equation that describes all possible points on that line segment so let me say that carefully if a point on a face is the sum over Alpha I of the vertices and I've got a notation that I liked here bi and face f okay subject to all of my Alpha I's being greater than zero and the sum of my Alpha I's equaling one then a perfectly good way to describe all points in this set in this vertex face representation would be as a linear combination of the points on the vertices that sums to one okay that's just a way to parameterize the set based on the boundary did I write that clearly enough yeah okay so if Alpha One is one and the rest are zeros it might be at this point if Alpha two is one and the rest are zeros it might be at this point and if I go between them I'll get Alpha 0.5 0.5 somewhere in the middle okay that's a standard sort of parameterization of any convex set and it works for a plane for sure okay so now let's try to write minimize over x0 in se3 going to minimize over my scene points this time I'm going to use the X transform the other way so I'm going to modify my scene points into my model coordinates so that I can write um sum over I Alpha i j p vertex J in face I squared okay there's one big point you have to get here the details are less important to me okay the big Point here is that if you contrast this to the CPD where I had a coefficient Matrix off of the front which was a hard optimization because I was multiplying c times my other decision variables this is a clever trick where this term on the inside is linear in these decision variables and it's also linear in these decision variables so I'm optimizing over this and I'm optimizing over Alpha okay that's very nice it looks very nice I
have to still say Alpha i j for all i j Alpha i j greater than zero and sum of alpha i j over which one did I do it over over I over the face equals one that's something to work with okay now remember that this optimization if I didn't have these constraints still has a solution via SVD unfortunately once I put these constraints in it does not so we have to open up an optimization Playbook that I've only given a few tools towards but not the full Playbook this is another more complicated form of optimization that you can use to try to solve this harder joint problem okay it turns out remember there's also these hard constraints hiding in here the R R transpose equals identity and the determinant of R equals positive one it turns out that I don't know how to solve this big problem well but if you're willing to relax this constraint to a softer version of this constraint then we have nice solutions for this and this is actually the Crux of a lot of the point Cloud algorithms that are trying to use heavy optimization to solve this kind of problem and the picture actually I think is very intuitive okay you remember this picture which is my ICP objective the quadratic Bowl is a beautiful object for optimization and it's still present here this is still a quadratic objective the red circle is a horrible object for optimization it just happened that we had a special case that we could solve with SVD if I start adding other constraints on top of this that might look like lines through this or something like this then I don't have a solution with SVD the things we know how to do with optimization are typically about convex sets so the standard relaxation that people do for this sort of a constraint in the 2D case is precisely changing the circle to a disk okay I know I'm only intending to give you the fringe I know I got a few furrowed brows but I want the
geometry of this is that the the relaxation of this hard optimization problem turns that Circle constraint into a disk and so when you hear people talking about semi-definite programming relaxations of point-cloud algorithms that's what's happening it's happening in high Dimensions it's hard to think about but it's just turning the circle into a disk and I think if you're willing to say so remember what happened before is I have a I have some uh some rotation matrices that describe my data as well as possible in the simple case they ended up just Landing you know within the noise free case they landed directly on the circle okay with noise they might move away from the circle a little bit the circle pulls them back if you change the circle into a disc that there's one type of noise you reject very well if you're outside if you're going outside the disc then your relaxation is tight if you're inside the disc you're going to possibly get things wrong you're going to come up with rotation matrices that are not true proper rotation matrices the orthonormal vectors are a little too short roughly okay so I know I haven't equipped everybody with that but I wanted to just make those connections that this picture I gave you before which I hope you did understand when it first came up actually is the lens by which you can look at much more complicated versions of the algorithm okay where you can do things like attempt to find correspondences at the same time as poses okay and they typically fall under the heading of semi-definite programming type relaxations okay of Point cloud of Point registration algorithms and the theory will say that they're tight in some settings and that's typically The Noise free setting they're tight okay and uh when you get noise they become loose all right so I think like 10 of you are happy with me for that uh maybe maybe a couple years you'll be like oh man it was worth him saying that but okay good I said that um all right so I think we did a 
pretty good job with our agenda yeah so you guys know what ICP is have some intuition about when it works when it doesn't some of the biggest sources of noise in our Point clouds dropouts partial views outliers and then a little bit of noise but that's I think a small factor I hope the soft correspondence has landed that was a that was a pretty smooth transition I guess and then there was a bunch of different algorithms right the the pairwise distance was a good one and this sdp relaxation is a is another powerful one okay I'll see you next time look good job
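The closed-form SVD solution to the rotation subproblem mentioned above (the orthogonal-Procrustes / Kabsch step inside ICP, with known correspondences) can be sketched as follows; this is a minimal illustration, not code from the lecture, and the variable names are mine:

```python
import numpy as np

def kabsch(p, q):
    """Best-fit rotation R and translation t mapping points p onto q,
    minimizing sum ||R p_i + t - q_i||^2, assuming known correspondences."""
    p_bar, q_bar = p.mean(axis=0), q.mean(axis=0)
    # Cross-covariance of the centered point sets
    W = (q - q_bar).T @ (p - p_bar)
    U, _, Vt = np.linalg.svd(W)
    # Force det(R) = +1 so we return a proper rotation, not a reflection --
    # this is the "determinant of R equals positive one" constraint above
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])
    R = U @ D @ Vt
    t = q_bar - R @ p_bar
    return R, t
```

Note this only solves the registration step given correspondences; the joint correspondence-plus-pose problem is exactly what the semidefinite relaxations discussed above are for.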
Robotic_Manipulation_Fall_2022
Fall_2022_642102_Lecture_23_Final_Presentations.txt
hey thank you guys for coming early uh I'm going to start pretty much right off because we have a lot of you and a lot of good projects to get through if you um just one logistical note uh I hoped it was clear we said if your video was public that's in the queue right now if it's unlisted that means you don't want me to show it today I think a bunch of people based on emails you sent in the last 20 minutes might have marked your video unlisted and expected to be shown today by default that's not happening you can talk to Boyan and we'll see if we can figure it out yeah talk to me immediately I have queued up all of the videos marked public and we're going to play them and because we have so many of them and uh it's kind of nice actually I can just tell YouTube to play it exactly the allotted time and it stops and every once in a while there's going to be one that's like about to show the results and then it stops but I'm not a bad guy I just programmed the embedded URLs to do this okay okay so thank you guys for all the hard work and uh without further ado let's start oh we have to figure out yeah everything in the simulator with physics like gravity and you know what how am I going to get you to hear the audio well up there that's something we haven't worried about too much before hmm I don't think this is actually making it louder up in the room is it this is not exactly the high tech I was hoping for but I'm going to put the mic kind of close to the computer we'll see how that goes you think there's one here that doesn't look promising it's on the PC audio if you go into the system settings in your MacBook to get the option to output with those speakers sound as well as the limitations of the robot to achieve the perpetual juggling motion with some additional work we could even load this program onto a real iiwa robot and have it juggle in real life nice to start we need to choose a point at
which we're going to throw and catch the ball to start we need to choose the point at which we're going to throw and catch the ball to build a robot that juggles we modeled everything in the simulator with physics like gravity and friction as well as the limitations of the robot to achieve the perpetual juggling motion with some additional work we could even load this program onto a real iiwa robot and have it juggle in real life to start we need to choose a point at which we're going to throw and catch the approach we use to find such two positions is forward kinematics we first decide on a set of comfortable joint angles of the iiwa and then calculate the corresponding spatial position of the gripper so now that we have our throw and catch positions and their comfortable joint configurations we'll use the formulas of projectile motion to calculate the start and end velocity given the max height of the trajectory we also want to get the corresponding throwing velocities we first calculate the Jacobian matrix for translational velocities at the joint configuration by multiplying the pseudo-inverse of the Jacobian with the spatial velocities we're able to obtain the joint velocity for both catch and throw and now it's time for the cool part how do we get our robot to go from being here with this velocity to here with this velocity at exactly 0.639 seconds later we can frame this as an optimization problem we want to minimize the length of the path subject to the following constraints first we constrain the positions and velocities at the throw and catch points second the entire duration of the arm trajectory needs to be exactly the duration the ball is in the air for and third we ensure that the joints on the iiwa robot don't exceed their position velocity or acceleration limits at this point we've done a lot of the groundwork we're super excited we put it on our robot and this is what we see everything is going great then it lets go of the ball at completely the wrong position with
the wrong velocity graphing the desired positions and velocities as well as the measured positions and velocities we see that the measured values lagged behind the desired values by 0.0538 seconds originally we were sending our desired positions to a system that interpolates the desired state with discrete derivatives the desired state is then fed into the inverse dynamics controller which spits out the force commands to send to the arm note that this system doesn't feed desired acceleration to the inverse dynamics controller at all to improve tracking we removed the state interpolator and took analytical derivatives of our position trajectory to get our desired velocities and accelerations and fed those into the inverse dynamics controller directly we can see that with these changes the measured values now converge to the desired values we put our new controller on the robot but somehow it is still not able to catch the ball the gripper is trying to catch the ball where we thought it was going to be instead of where it actually is to account for this we need to change our planner so that it updates the trajectory based on the actual position and velocity of the throw right after releasing the ball we put this on the robot and after some debugging it finally works we tried a bunch of different heights and found that it can juggle up to two meters high to make the jump to juggling multiple balls we had to modify our planner when trying to juggle two balls after throwing the first ball you have to catch and throw the second ball all in the time that the first ball is in the air so we have much less time to move between the throw and catch positions we can keep most of our planner exactly the same but after a ball is thrown instead of replanning the trajectory based on the ball you just threw we update it with the position and velocity of the ball that we want to catch and this is what it looks like one thing we had to do was to warm start the arm with the desired throwing
velocities at time zero because unlike in the one ball case it didn't have enough time for the actual velocity to converge on the desired velocities before it had to throw the ball next we tried juggling three balls but kept running into the problem of the balls colliding in the air with each other because now there are two balls in the air at the same time and because of the slight inaccuracies in the throws we were about to give up when we decided to try making the ball smaller and it worked on one hand this feels a bit like cheating but on the other hand it's also cool because our controller is precise enough to juggle something so small this was the limit though the kinematic trajectory optimization kept failing when we tried to juggle four balls and finally to put it on the real robot we need a perception system in the simulator we can always know the exact position and velocity of everything but in the real world we'll need to find a way to measure that we simulated three Intel RealSense depth cameras where we plan to combine the point cloud from each camera then subtract the gripper from the point cloud by best fitting a sphere to the leftover point cloud we would be able to estimate the center position of the ball from the fitted sphere where we would then be able to feed the position to the kind of that's pretty good that's I think we talked about a little bit in the manipulator control lecture but that's a beautiful way to get nice tracking very nicely done okay let's keep going I do have a schedule an optimistic schedule on the spreadsheet of like about when we think videos are going to come up we'll see how it goes stage of extracting the point clouds from the cameras but I haven't yet been able to subtract the arm or fit the sphere hey everyone our project is titled implicit neural representations for deformable objects our fundamental aim has been trying to find a smarter way to model deformable objects ultimately we could use these
models to do downstream tasks like key point estimation key point estimation is difficult on deformable objects due to the many different deformation configurations as we will show generalized implicit neural representations are naturally invariant to object deformations since they rely on topological features of the object what are implicit neural representations INRs are functions that parametrize conventionally discrete signals like the pixels of an image or points in a point cloud and learn a continuous function that maps the domain of the signal these functions are not dependent on the spatial resolution of the original signal unlike standard ML models which provide an estimate of each pixel within an image the caveat is that most INRs are trained on data residing strictly in Euclidean space so even though INRs learn functions that aren't dependent on spatial resolution they're still tied to some Euclidean space modeling deformable objects means we can't be tied to this kind of space generalized INRs can be thought of as applying INRs in non-Euclidean domains given the assumption that a continuous signal exists on some unknown topological space we can sample a discrete graph from the space we then compute a spectral embedding for each node in the graph this embedding allows us to approximate each node's location since there's no coordinate system we can then use a neural net to learn the mappings between the spectral embedding of each node to some predicted signal in our case this part is a classification problem where we wish to output one of n labels per node here are the first three components of the computed GINR embeddings for the Stanford bunny we see that the components vary smoothly over the bunny's surface because GINR embeddings only depend on the topology of an object they are independent of a coordinate frame thus we can use them for learning representations of deformable objects here we investigate GINRs for key point prediction here is the Stanford
bunny with a few key points labeled corresponding to the right ear the right eye and the tail our project seeks to answer the following key questions are GINR embeddings informative enough for key point prediction on objects can they generalize to unseen deformations of an object and finally can they generalize to undersampled slash oversampled mesh representations our experimental setup consists of manually labeling key points on an undeformed or canonical object of interest here we choose the Stanford bunny with five classes of key points note that the number of labeled key points is much smaller than the number of total points in the mesh we create deformations of the canonical bunny by simulating impact with the ground and manual deformation here is an animation showing the impact simulation of the bunny with the ground to perform the simulation we first tetrahedralize the 2D surface mesh to get a 3D volume mesh and we then simulated it in PyDrake by dropping the bunny from a height and recording the positions after impact once we have these deformations we consider additional modifications such as random addition of new points and edges oversampling or undersampling the resultant mesh all of these modifications affect the underlying topology of the object next we will present the results from our experiments here we show an example of the training data for the canonical bunny mesh with labeled points colored by their class on the bottom we show an example of a deformation of the bunny with predicted classes notably we found our model struggled with false positives but was able to correctly classify all the labeled points we also observed that once our model performed well on the training example it achieved identical results on the deformed example since their mesh structure was the same which demonstrates the utility of GINRs' inherent invariance to coordinate changes additionally we evaluated how our model was able to classify key points when the meshes were
modified we perturbed the meshes by randomly adding 10 new vertices to the mesh we trained our GINR using 15 sample meshes and evaluated it on five we saw that the GINRs did not require many examples to generalize well to perturbed meshes but having a smaller embedding size acted like a regularizer for our model we also tested the ability of our model to generalize to oversampled and undersampled meshes we trained our model on a single canonical example and observed that it was relatively robust to uniform oversampling but significantly underperformed on undersampled meshes we hypothesize that the undersampling has a greater influence on the spectral embeddings leading to this erroneous behavior in summary we show that GINRs are useful for learning implicit representations on deformed objects due to their natural invariance properties we see that they are able to generalize to some variations but have improved robustness when trained on various mesh perturbations in the future we would like to extend the study by investigating how our model performs on meshes which are inferred from unstructured point-cloud data we foresee that these meshes will contain a greater degree of variability and would like to see how GINRs are able to perform under more naturalistic conditions lastly we are interested in whether we are able to use the topological structure of the mesh to register known object shapes onto partially observed data maybe I'll just ask a basic one so instead of volumetric the spectral decomposition is parameterizing the surface of the mesh is that right so how would it behave under a large deformation mode like is that decomposition robust to really large changes if you really squish the bunny yeah I see I see so the vertices are going to get you the fact that you know where the vertices have deformed to already tells you the topology awesome thank you I'm leaving a little space for questions if anybody has any okay we have to keep
going that's great thank you an essential aspect of human life is social interaction in many cultures it is common to interact with people physically via interactions such as high fives and fist bumps as robots are becoming more integrated with human lives there's an increased need for robots to become more human-like that is for humans to feel safer around robots and see robots more as human-like counterparts rather than these mechanistic machines that have no capacity for social interaction however there are many associated challenges with essentially humanizing a robotic system in the first place it's hard to describe what constitutes a human-like motion but then it's also challenging to then translate those descriptions into raw code thus we present handbot we are Miranda Kai Jordan Wren and Aaron zoo and today we will be presenting our final project called handbot handbot is a simulation system that serves as a proof of concept to construct a basic human-like system without the use of learning which is a contrast as many other systems use learning in trying to design human-like systems specifically handbot is a simulation that is able to recognize and respond to certain hand gestures specifically high fives and fist bumps using only the basics of perception and motion planning in as human-like a way as possible handbot has three main systems that allow it to respond to human gestures first the perception system of the robot will use a camera to get scene points of a hand or a fist that is floating in front of it then use ICP to determine if it is a fist or not and its location in the world second the robot will perform motion planning to set keyframes of where the robot hand should be finally we use various controllers to control the robot and bring it to the target hand when setting up our environment we need to construct a robot that can effectively high five or fist bump a floating hand we opted to use the Allegro hand that is provided by Drake as well as the
Kuka iiwa 7-link arm we then welded the hand onto the last joint of the iiwa arm in addition because the hand's configuration would not change after it matches the target hand we fixed the hand joints on the robot arm such that it has no degrees of freedom to further simplify our environment we decided to keep it clutter free as perceiving objects in the scene and constraining motion planning would increase the time it takes for the robot to respond and in many cases there are no obstructions in the way of the high five or fist bump in order to identify which configuration the hand is in as well as its location we make use of ICP in our initial iteration we generated the model points by concatenating point clouds from multiple camera angles around the hand getting the full model view of the hand or fist however since ICP tends to get stuck in local minima if the model points are not in a configuration similar to the scene points we instead opted to generate model points from angles close to the front of the hand as seen in the figure on the left this makes it so that the scene points almost always match the correct portion of the model points we then obtain scene points from a camera next to the iiwa arm and perform ICP for both the fist and hand points whichever ICP error is the smallest gives us the hand configuration to use and the transform of the hand to approach after learning the orientation and position of the target hand we can construct the robot arm's planned trajectory by interpolating 10 frames between the robot's initial state and the desired final state we decided to use a linear trajectory for simplification reasons which worked well over the short distances being traveled to reach the target hand once we have this trajectory the system will use a differential inverse kinematics controller to translate these poses into joint positions to execute the simulation as you can see we have designed a robotic system that is able to respond to human gestures
like high fives and fist bumps however while we attempted to create our system in as human-like a way as possible we acknowledge that our system is far from being perfectly natural while our system may not be perfect the results do serve as a stepping stone for future work on related tasks presenters there so uh just since people have come in now let me just quick logistics so I went through the spreadsheet you guys are awesome you're like saying I could be there between 3 30 and 3 33 or whatever I did my best to sort of I basically kept the order but I shuffled people who said I can be here only until 3 30 a little bit and it's all in the spreadsheet now we're going to go through it and if you thought if you marked yourself unlisted but were hoping to be shown here you tell Boyan uh quick because the plan was to just show the public videos that was the way you tell us you want it to be shown absolutely uh I will do that from my phone while the next video is playing and just one other you know comment I think there's people in the class that are like in their final year of the PhD and are working on robotics and there's people in the class that this is the first time they've ever touched robotics so I think it's just appreciate the diversity of you know what people have done and uh it's really awesome to see the whole spectrum and I have just been watching how much effort people are putting in how much learning they've done on that part so appreciate the whole spectrum with me hi today we're presenting to you the hanoiwa robot which is a robot that can solve the Tower of Hanoi my name is and we're two juniors studying computer science at the MIT EECS Department today we'll be going through four parts of our project the first is introduction and problem definition basically why should we care about this problem yeah well so simply to start to help people in their daily lives robots today must be able to
make intelligent decisions while performing manipulation tasks and the Tower of Hanoi is a perfect example of challenging the robot both to reason cognitively and also to perform strategic motion planning and finally of course to manipulate the object yeah previously we've seen robots assisting in household tasks as well as robots playing other intelligent puzzles such as Rubik's Cube or Jenga so we thought of applying it to the Tower of Hanoi which is another intellectual puzzle and basically the Tower of Hanoi is a classic mathematical puzzle where the goal is to transport a stack of discs from one peg to the other usually from the leftmost peg to the rightmost peg and the constraints or the rules of this game are that you can only move one disc at a time and no larger disc can be placed on top of a smaller one and we want to minimize the time or the steps required for this task and we'll be going through the approach or how we made the robot play the game first we had the whole system set up in Drake which is the simulation environment from our class where we represented the discs as box instances and we represented the pegs as flat cylindrical bases and the robot arm we use is the Kuka LBR arm the second step of course is to design our algorithm break it down and finally develop it so here is a basic flowchart of our program we break it down mainly into three separate steps the first is to use a recursive generation algorithm for us to actually figure out what steps to take to solve the Tower of Hanoi the second is to pass those steps to the robot gripper and for the gripper to iterate through each step to move each disc from its start to its designated location and the third step of course is once each step is completed as well as once all steps are completed we need to check for those conditions and tell the robot to stop because that's where we end the program and during this process we definitely
run into some difficulties and challenges which we'll talk about here well the first thing is that we figured out we cannot actually grab the discs from the top so what happens is that they might be just simply too large so as a simple solution we decreased the size of the different discs a second difficulty we ran into was with the geometry of the discs so as you can see here in the demonstration when they are all cylinders the robot arm slips when it is grabbing the disc and this is due to the antipodal grasps with a curved surface it's harder so we changed to using boxes so that it doesn't need to grab specifically at the two diameter points right and the third thing is that we figured out that because we are using time to keep track of our simulation sometimes the same step which should be only executed once is executed repeatedly so that's why we incorporate a tracking condition to determine has the gripper got to the beginning position has it got to the end position if both yes then we say the step is complete and we execute the step and this is the moment that you've been waiting for how well can our robot play the game so as you can see here this is a 15 times sped-up version of our robot playing the game it actually manipulates pretty effectively there's no waste of time and also you can see the stacking is pretty accurate and you can see at the end all discs have been transported from the leftmost peg to the rightmost peg and since this is also a three layer model we can easily generalize it to higher layers and some future work so what areas can be improved about our robot the first is perception right now our robot is basically getting the information about location good very very nice now I remember I talked to you guys not too long ago and the robot was barely moving and then all of a sudden it's doing the whole Tower of Hanoi I thought that was like incredible how fast that was good uh I love it I love it any questions
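The recursive move-generation step the Hanoi presenters describe can be sketched as follows; the function and peg names here are my own illustration, not the team's code:

```python
def hanoi_moves(n, source="left", target="right", spare="middle"):
    """Return the minimal sequence of (disc, from_peg, to_peg) moves that
    transfers n discs from source to target, never placing a larger disc
    on a smaller one. Disc 1 is the smallest, disc n the largest."""
    if n == 0:
        return []
    # Move the top n-1 discs out of the way onto the spare peg,
    # move the largest disc, then stack the n-1 discs back on top of it.
    return (hanoi_moves(n - 1, source, spare, target)
            + [(n, source, target)]
            + hanoi_moves(n - 1, spare, target, source))
```

For the three-disc tower in the demo this yields the minimal 2^3 - 1 = 7 moves, each of which the robot can execute as one pick-and-place.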
no okay in this project we seek to enable a robotic arm to catch a free-flying spherical projectile using only an RGB-D sensor and a bin welded to the robot's end effector we study this problem because it is an example of time constrained robotic manipulation all perception planning and control must take place within the time of flight of the object this is in stark contrast to many other manipulation tasks such as pick and place this presents challenges in both perception and control as we must localize the object in real time and plan an efficient catch motion in order to be successful our system is split into three phases first perception where we localize the object then planning where we select an interception point based on our estimated trajectory finally control where we execute the catch to localize the object we note that any two points in the RGB-D point cloud must be equidistant from the object center this constraint can be formulated as a quadratic optimization cost and then after summing this optimization cost over all pairs of points we can use quadratic programming to find an optimal center we find that the center is robust to noise typically present in modern day RGB-D sensors given multiple of these object poses we can estimate the trajectory by correcting for the quadratic term in the projectile motion equations and fitting a line to the corrected data points in order to predict the initial object position and velocity to select an interception point we simply take the closest point on the trajectory to the current bin position by using inverse kinematics to command the bin to the interception point we can already achieve limited success in the catching task specifically when the object's arc is very high and slow moving however for fast moving projectiles the object tends to either bounce out or roll out of the bin after impact to correct for this we introduce a compensation trajectory consisting of two parts first a linear path minimizing the
relative motion between the projectile and the bin and second a tilt meant to prevent the object from rolling out of the bin with the compensation trajectory our system is able to adapt to varying object poses velocities and angles however our system is not perfect the two primary failure modes are significant error in the estimated pose of the object resulting in a complete miss the next is IK failure during the compensation trajectory which prevents us from compensating for the object motion properly to conclude our system achieves reasonable success on the catching task in simulation future work could be done to improve both the robustness so did you solve IK you said you said IK from here to here and then did you linearly interpolate between them yeah yeah cool no it's good you got them I really love it yeah uh yeah so basically it was just a piecewise linear interpolation between so basically at a window of time around the interception point we linearize the trajectory of the ball really good okay this is a summary of my final project on planning prehensile pushing in the horizontal plane using rapidly exploring random trees and motion cones pushing is a deceptively complex task and I chose to focus on a simplified case pushing a predefined object in a horizontal plane with a point contact i.e. pushing an object while it rests on the ground we can split this task into two main steps developing a friction aware planner that can find a series of feasible pushes that move a square block from its initial position to some desired goal and implementing this trajectory on a simulated robot in order to evaluate performance we limit the positions around the edge of a block at which a push can be applied to it as we learned in class a pusher at each of these locations can exert a force on the block that lies within a friction cone determined by the coefficient of friction between the pusher and the block if the pusher attempts to apply a force on the
boundary of this cone then it will slip motion cones can be viewed as an extension of friction cones we use them to represent the set of possible twists that can be induced on an object by an external wrench this diagram shows the general process for determining the motion cones for our particular task in the case of pushing in the horizontal plane the motion cone is composed of two wrenches one from the pusher and one from the supporting surface i.e. the ground the pusher wrench can be found by transforming the set of feasible pushes to the block's own coordinate frame the set of possible support wrenches can then be represented by an ellipsoidal approximation of a limit surface the limit surface can be visualized as the capability of the ground to resist loads applied to the object this is itself a product of friction between the ground and the object in this video we can see that the roll of tape doesn't move until the load being applied exceeds some threshold this is an indication of the support wrench resisting our efforts to move it once we have a set of possible pusher and support wrenches we can combine them to find the direction of possible motions in order for our block to move we must apply a pusher wrench that respects the friction constraints and this pusher wrench must be sufficient to overcome the corresponding support wrench we can find the motions that result from this process via vectors perpendicular to the limit surface at the locations where the pusher wrench intersects it the set of all of these possible vectors forms a motion cone which we can finally represent in the world frame using motion cones to validate potential pushes I successfully implemented a simple RRT planner however due to the stochastic nature of this approach there are occasions when the planner fails to find a valid path towards the goal I introduced an acceptable tolerance of misalignment with the goal in order to improve the chances of finding a feasible trajectory trying to follow the
planned trajectories in the simulator led to promising results due to the feedforward nature of our controller we observe an accumulation of error for longer trajectories these errors result from inaccurate pusher positions is there like a some some time we really must watch the last little bit oh nice okay well that is a really nice that paper that you built off of is a really nice example of planning through contact that we didn't have that lecture but for planning with dynamic constraints of friction and everything like that that's a really nice paper to build off of yeah thanks come back okay since we do have a few people that we have to add to my already full schedule what I'm going to do is I'm going to prioritize people that are in the room right so if we go and the presenter is not here then we can come back if someone shows up but I think we should prioritize the people that are in the room yeah okay so you go to the grocery store just to see a huge line the alternative is self-checkout but that comes with its own set of issues like Cody faces over here anyways please why would I do uh-huh please Facebook oh goodness we wanted to make this a far better experience I'm Sonia and I'm Vishnu and we created checkout bot a robot to automate the checkout process at grocery stores here's the scenario we envision a shopper will fill their cart with items and push it right next to the robot the robot will then pick up the items one by one put them on the counter for scanning and then finally pick up the scanned items and put them in the final cart for the shopper to push out of the store now creating the simulation required us to create the models for several different items and here we have the individual grocery items that we modeled ourselves it's a box with packaging on most sides and then a QR code on the final side and we repeated this process for several different grocery items lastly we imported a model for a shopping cart
like you can see on the right, to better create the feel of a shopping environment in our simulation. Now here's a video demo of our robot in action. It comes to this first shopping cart and picks up each item one by one. Here we have Raisin Bran, and it positions it right in between these three cameras that you see at the top; it has the QR code on this side, and here's the view that the camera sees, where it says "scanned Raisin Bran," at which point it proceeds to pick up the box again and deposit it in the final cart. Now, this is our robot working at real speed, but we'll speed it up so you can see it scan the other objects as well. Here we have Eggo waffles, where it repeats the process, and then here in the camera view you'll be able to see "scanned Eggo Waffles" when it recognizes the QR code. I'll just fast-forward the video so that you can also see how it picks up the Cheerios over here, drops them on the counter, picks them up after scanning, and then lastly says "scanned Cheerios." Finally, at the end, it also outputs an itemized receipt of everything that it's scanned, along with the final price. Now I'll pass it over to Vishnu to talk more about the technical details. So, in the control flow for our simulation we had several points of emphasis. One was retrying based on the mode of failure: whether the item was dropped while picking up from the counter or while picking up from the shopper's cart, we have code to retry several times. One of the main parts of our project was the scanning of the QR codes. As Sonia explained, we applied a QR code texture to each of the objects we created; these are ArUco QR codes, and we used OpenCV's library to do both the detection and the identification of these codes, keeping a mapping from codes to their object items, and we store these items. We also had code to reset the robot angles when it got stuck, such as when it got stuck by hitting the camera, and we had end-of-simulation behavior such as generation of the receipt.
For the receipt, we took the information we stored from scanning the QR codes and generated an itemized receipt with a final total for the user. So, in summary, we created a control flow for a checkout bot, and we had some effective methods for scanning the shopping items using OpenCV and ArUco QR codes. Future work would include speeding up the robot's transitions, adding collision geometry to items such as the shopping cart, and using a more appropriate robot arm and gripper to deal with the more diverse set of items we'd encounter in a grocery store. Thank you. [Applause] How frustrating was that? It was difficult, you said, right? And I saw it go through itself a couple of times; it even went around itself. Was that a major bottleneck for you, or did you power through it? To deal with that, when we generated the posed trajectories, we modified the clearance so that it would stay far away from itself, and the kinematic reachability of the iiwa was pretty limited for getting into that. Yeah, so you had to kind of drop the cart. I love it. And I love the textureless shopper. I couldn't read what it said in the banner; what did it say in the banner? That's awesome, so good. Of course, yeah. Also, the simulation itself didn't actually line up with what Drake was seeing with the RGB cameras, so even though in the simulation the QR code was on one face, because of the camera misalignment it actually saw the QR code on [Laughter]... oh man, that's like the texture coordinates were different or something. Yeah, okay, send that to me after; we'll fix that. All right, for our 4212 final project we developed Tetrispot, which is an end-to-end robotic system that plays Tetris. The motivation behind developing Tetrispot is that we've seen that the combination of unrealistic game elements that require complicated simulation and fine-grained pick-and-place is pretty rarely explored, and we chose Tetris to explore this because it
satisfies both these criteria: things like row clearing and piece teleportation require complex simulation, and at the same time we need highly precise manipulation in order for pieces to be placed right next to each other. Tetris has the potential to impact a lot of robotic games and interactive simulators, such as Connect Four, chess, or Jenga, and as far as we're aware this is the first Tetris-playing robotic system. As you've seen in this class, there is a wide range of projects being explored in the space of robotic manipulation, so we just wanted to highlight some that we felt touched on similar aspects as Tetrispot did. In class we learned about the dishwasher-loading robot, where we saw aspects of object identification, grasp determination, and trajectory optimization. In the shopping cart example, along with the localization, there was also this aspect of optimal piece placement, which is very important in Tetris. Finally, looking at more of the gameplay side, you have Jenga with that adversarial, uncertain gameplay, which is a very important part of Tetris as well. So here's a brief overview of the methodology of Tetrispot. We first construct and teleport a random piece, and then we detect said piece using a convolutional neural network. We pass whatever our detection algorithm returns to the gameplay algorithm, which returns a rotation and a position; we transform that into board coordinates, place the piece at the desired location, and then update our board state if necessary. Going into a little more detail: in order to construct and teleport a new piece, we require a simulation loop, which is pretty crucial for this entire project. That's because certain aspects of Tetris can't be simulated using the physics engine alone, which is what we've traditionally used in class so far: things like being able to spawn a piece at a given
location, or replacing every piece, once it is on the board, with a series of unit cubes in the same locations to make things like line clearing easier, are things that we can't do without breaking out of the simulation. Once we have that randomly generated piece, we use a convolutional neural network to detect which piece we've teleported in. We use a MobileNet CNN as our base network, and we train a couple of layers on top of that to determine which of the seven pieces we've randomly generated. Once we've figured out what piece we have, we can use our gameplay algorithm, which determines the optimal position to place that piece on the board, using heuristics like the height of the board after any line clears resulting from that placement, or the actual placement of that piece on the board, like how low that piece can go. These are heuristics that are used by players that are playing Tetris, and we thought that they would be effective at teaching the robot how to play Tetris effectively. For the final two steps we go into a bit more detail in the next two slides. For picking and placing, we plan a trajectory for each piece, which composes four key poses: the initial pose, a grasp pose, an intermediate pose, and finally a drop pose. The grasp pose is fixed for all the pieces and is able to pick all of them up, and the drop pose incorporates information from the gameplay algorithm: the rotation of the piece and the desired row and column to drop it in. After dropping the piece, we update the board state, which involves populating cubes at the locations of that piece, and also, if any rows are full, we clear that row; in these images we see the before and after of a row clear. Here's a demo of our Tetrispot in action; we can see that we have a successful end-to-end... oh, there's three of you? I got the wrong time, you guys get another moment, I'm sorry.
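The placement heuristics described above (board height after line clears, how low the piece can go) can be sketched roughly like this. The weights, the 0/1 board representation with row 0 at the top, and the hole penalty are assumptions for illustration, not the team's actual code.

```python
def rotations(piece):
    """All distinct rotations of a piece given as (row, col) cell offsets."""
    shapes, cells = [], piece
    for _ in range(4):
        cells = [(c, -r) for r, c in cells]
        mr, mc = min(r for r, _ in cells), min(c for _, c in cells)
        shape = sorted((r - mr, c - mc) for r, c in cells)
        if shape not in shapes:
            shapes.append(shape)
    return shapes

def landing_row(board, cells, col):
    """Lowest top-row at which the piece can rest when dropped in `col`."""
    rows, cols = len(board), len(board[0])
    if max(c for _, c in cells) + col >= cols:
        return None
    last_ok = None
    for top in range(rows):
        if all(top + r < rows and not board[top + r][col + c] for r, c in cells):
            last_ok = top
        else:
            break
    return last_ok

def apply(board, cells, col, top):
    """Place the piece, clear full rows, return (new_board, lines_cleared)."""
    b = [row[:] for row in board]
    for r, c in cells:
        b[top + r][col + c] = 1
    kept = [row for row in b if not all(row)]
    lines = len(b) - len(kept)
    return [[0] * len(b[0]) for _ in range(lines)] + kept, lines

def score(board, lines):
    """Lower is better: column heights plus buried holes, minus line clears."""
    rows, cols = len(board), len(board[0])
    total_height = holes = 0
    for c in range(cols):
        filled = [r for r in range(rows) if board[r][c]]
        if filled:
            total_height += rows - filled[0]
            holes += sum(1 for r in range(rows) if not board[r][c] and r > filled[0])
    return total_height + 4 * holes - 10 * lines

def best_placement(board, piece):
    """Search all rotations and columns; return (score, col, cells, board)."""
    best = None
    for cells in rotations(piece):
        for col in range(len(board[0])):
            top = landing_row(board, cells, col)
            if top is None:
                continue
            b, lines = apply(board, cells, col, top)
            s = score(b, lines)
            if best is None or s < best[0]:
                best = (s, col, cells, b)
    return best
```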
Here's a demo of our Tetrispot in action: we can see that we have successful end-to-end grasping and placing of each block, and each one lands where desired on the board. We can see that all the pieces are being identified correctly and placed correctly, and the gameplay is pretty reasonable. As we were working on this project, there were a couple of extensions that we were thinking about. Obviously we want to make a more robust, more optimal gameplay algorithm using elements such as reinforcement learning; we thought it would be interesting to make a multiplayer version of Tetris that has multiple iiwas playing against each other on the same board; and we'd also like to make a more true-to-life board, either vertical or slanted, so that we can utilize gravity and simulate actual gameplay, with increasing gravity perhaps. Thank you. [Applause] Questions for the Tetris folks? The windows version now simulates exploding blocks, some fracture mechanics or something like this. Yeah, it's awesome. So what was the hardest, what was the most surprising part of that whole pipeline? Like, what was the part that you thought was going to work better than it actually worked? Uh-huh, super nice. Yeah, that's a good idea, interesting; we could speed that part up, but that's good. No, that's super insightful, I like that. Hi, this is Michael and this is Nico. For our final project we tried to get the iiwa to skip a rock. Skipping is a relatively simple dynamic task that can be done by many people, even the kid in this picture, and we thought it would be pretty cool to see if we could get a robot to do the same thing. The first challenge of our project was to model the dynamics of skipping. Skipping occurs when a rock collides with water and a surface force propels the rock upwards. We modeled this collision using a drag force outlined in a previous paper; this force is directly proportional to the contact
area of the rock and favors a flat geometry and velocity, which is intuitive for how we understand skipping. For our simulation setup, we have the iiwa arm positioned next to the water surface and a table. Based on the dynamics, we chose a flat hockey-puck shape for our rock and centered it on top of the table. Behind the scenes, we implemented a force system to apply the spatial force from the water. Here is the general flow of our system: we specify parameters such as the known rock location, desired throw velocity, and release height; we then manipulate the arm to pick up the rock and throw it with the desired parameters; lastly, we simulate the skipping dynamics. Our setup consists of a state machine to switch how we command the arm, and for each of these states we have a different underlying control method, as shown. For the pickup state, we utilize simple kinematic planning and differential inverse kinematics to drag the rock to the edge of the table; this enables us to get an antipodal grasp on the rock, as shown in the video. Once we pick up the rock, we move it to an ideal initial throwing position. We initially thought to try a similar approach with kinematic throwing by creating radial poses, as seen in the picture, but we were unable to throw the rock any faster than six meters per second. This occurred since differential inverse kinematics only utilizes the very next pose, which would lead to disadvantageous joint positions. To overcome this, we switched to kinematic trajectory optimization. We knew we wanted a similar radial throwing trajectory; specifically, we wanted this trajectory to pass through a release pose and a final pose, as shown in the image. Since the release of the rock matters the most when throwing, we added orientation and velocity constraints at this point; in the image you can see these poses as well as the yellow trajectory line created from the optimization. Here is a video of the arm throwing the rock, and here is a video of the rock skipping after being thrown.
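A toy version of the skipping dynamics described above can be written as a crude point-mass integration: below the waterline, a surface force proportional to contact area and the square of the horizontal speed (a rough planing model) pushes the rock back up, while a tangential drag term slows it down. All parameters here are invented for illustration; this is not the paper's model or the team's force system.

```python
def simulate_skip(v0, h0=0.3, mass=0.1, area=0.005, rho=1000.0, c=1.0, dt=1e-4):
    """Toy 2-D rock skip. Returns the number of times the rock pops back
    out of the water before it slows down and sinks."""
    y = h0
    vx, vy = v0
    skips = 0
    submerged = False
    for _ in range(int(5.0 / dt)):
        fx, fy = 0.0, -mass * 9.81
        if y < 0.0:
            planing = c * rho * area * vx * vx
            fy += planing          # upward surface force from the water
            fx -= 0.2 * planing    # tangential drag slows the rock
        vx += fx / mass * dt
        vy += fy / mass * dt
        y += vy * dt
        if y < 0.0:
            submerged = True
        elif submerged:            # popped back above the surface: one skip
            submerged = False
            skips += 1
        if y < -0.5:               # too deep: the rock has sunk
            break
    return skips
```

Consistent with the results in the talk, a fast throw skips at least once while a slow one sinks immediately, since the planing force can no longer exceed the rock's weight.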
We ran our implementation over a desired set of throwing velocities. Throws were not perfect: the actual release velocity of the rock was much less than expected, likely due to the rock colliding with the gripper on an imperfect release. We just used the two-finger gripper, which is not necessarily optimal for skipping a rock. However, we still did get the rock to skip, and saw that higher velocities led to more skips; we can also see the contact force decreases with each impact. After many long days, we actually did get it to skip. However, we ran into a lot of problems along the way. We saw from our results earlier that rock velocities below 10 meters per second would not skip, but we kept running into problems where the rock would fly out of the gripper at desired velocities over 25 meters per second. We tried a few things to prevent this, but none of them helped: we tried different grasps, increasing grip force, decreasing the simulation dt, and even increasing friction, but the rock would still slip out, as shown. Outside of problems with the gripper, we had some problems planning. Originally we hoped to test how release height may affect skipping; the dynamics would tell us that a lower release height is best because this leads to a flatter velocity, but our trajectory optimizer had a hard time solving for various release heights. [Applause] These are the guys that were asking on Piazza constantly, like, how do I remove every possible limit from the iiwa model, right? I don't want force limits, I don't want acceleration limits, I don't want torque limits. So what was your biggest surprise in the end? That's robotics, yeah, nice. Okay, I think it was just the sheer flight velocity itself; we thought a lot of things might have been the reason why, we thought it might have been friction, but I think the gripper was just not meant for throwing at high speeds. Awesome. Design choices for dual-arm robotic manipulator control: I'm Alex, and I'm
working with Marcel and Edon. So why dual-arm robots? You have access to a whole bunch of more diverse tasks that require cooperation between both arms, as well as increased throughput. The challenge is that you have the added complexity of twice as many degrees of freedom, plus planning around the other arm. There are three possible approaches to tackling this problem: one, two independent controllers, one for each arm; two, independent controllers with a communication channel between the two arms; and three, a single unified controller. Here's a demo of two independent controllers. Neither controller has any notion of what the other arm is doing; they're working completely independently and asynchronously, so one arm picks up objects from one side of the bin and the other picks up objects from the other side of the bin, and they place them into the target bin. Here is a more detailed diagram of how we did this. We geofenced the robots, so the robot on the left picks up an object from the bottom half of the bin, goes to clearance frame one, and then from clearance frame one follows the fixed path to the drop-off location; the robot on the right picks up objects from the top half and follows a fixed path through clearance frame two and then down to the drop-off location. The advantage of this method is that it's very simple to implement and works completely asynchronously, so you don't have to wait for the other robot. The disadvantages are that you have to manually define a fixed path for each new scenario, and the path is not ideal, because you have to move out of the way to satisfy the clearance requirements and not hit the other robot. One failure modality is that there's a grasp boundary that no robot can get to: because the grippers have some finite width, some objects may not be graspable if they're right on the grasp boundary. For the next approach, we kept the controllers separate from each other but added a communication channel between them.
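The geofenced allocation and fixed clearance-frame paths described above can be sketched like this. The boundary, frame coordinates, and gripper width below are placeholder values, not the team's actual workspace.

```python
# Hypothetical workspace constants: an arm-specific clearance frame and a
# shared drop-off pose, each expressed as an (x, y, z) position.
CLEARANCE = {"left": (0.4, -0.3, 0.5), "right": (0.4, 0.3, 0.5)}
DROPOFF = (0.7, 0.0, 0.3)

def assign_arm(obj_y, boundary=0.0, gripper_halfwidth=0.05):
    """Geofenced allocation: the left arm takes objects on one side of the
    boundary, the right arm the other. Objects within a gripper's width of
    the boundary can't be safely grasped by either arm (the failure mode
    mentioned in the talk)."""
    if abs(obj_y - boundary) < gripper_halfwidth:
        return None
    return "left" if obj_y < boundary else "right"

def fixed_path(arm, pick_pose):
    """Fixed path: pick pose -> arm-specific clearance frame -> drop-off."""
    return [pick_pose, CLEARANCE[arm], DROPOFF]
```

Because each arm only ever visits its own clearance frame before the shared drop-off, the two controllers can run fully asynchronously without ever checking the other arm's state.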
To study this approach, we took the task of passing an object from one robotic arm to another without putting it on the ground. The communication channel carries relatively simple information: the first arm grasps the object, carries it to the handoff location, and sends a message to the other arm that it's ready, along with its location; the other arm grasps the object and then sends a message that the first arm should release it. The main technical challenge here is finding a good grasping pose in the air: unlike the algorithm that was taught in the class, here the point cloud includes both the object and the gripper, and we need to separate them before coming up with a good antipodal grasp proposal. Overall, this approach allows us to have simple controllers that still have some cooperation between them; on the other hand, it's good only when the chance of collision is low, and is not suitable for everything. The previous methods could also solve only a limited number of tasks and were not generalizable. Next, we tackle this problem by designing a single controller and proposing a new algorithm for path-planning optimization for both arms at the same time. In this case, the benchmark consists of picking and placing multiple objects from one bin to the other. This task becomes hard because the trajectories of both arms are constantly crossing each other. In the video you can see we successfully achieve this task, but let's look at how we solved it. The first idea was to design a state machine that forces synchronization on both arms, meaning that each arm has to wait for the other to finish its subtask before starting the next sequence. The second idea consisted of designing two collision-free paths from one bin to the other: we propose a novel algorithm that recursively finds two frames, one for each arm, such that they are not colliding, as we can see in the visualization. The main advantage of using a single controller is that now we can achieve any feasible task
and it is space-efficient. However, there are some disadvantages too: the controller becomes harder to design, the optimization becomes harder too, and the synchronous movements slow down the time to success. Nevertheless, we have to say that this method is still not perfect, and we found two main issues. First, we only constrain the end-effector positions to not collide, so we can still have collisions on the rest of the arm; second, the arm can sometimes tangle with itself due to the differential IK solver. The solution for both of these problems would be to add constraints and optimize in joint space instead of end-effector space; our algorithm could perfectly incorporate this improvement, and we leave it as future work. To conclude, in our project we presented three different design choices for controllers of dual-arm robots. Independent controllers are simple to program and can work asynchronously, but are space- and time-inefficient. Adding a communication channel allows more collaboration between the arms and solves more tasks, but the number of tasks is still limited. Finally, with a single controller we should be able to solve any task, but the optimization problem becomes harder due to the curse of dimensionality. Thank you for listening. Okay, so what surprised you? Did you use the diff IK or did you use the pseudo-inverse Jacobian? Yeah, good. Okay, any other questions? I do think dual-arm planning is still hard; if you see a dual-arm demo in the world, there are typically some hacks that make it work. Sorry, it's still hard to solve the real problem of, you know, collision-free motion planning. All right. Alongside me are Hanchi and Areef, and today we're going to discuss our project, which is physics-based throwing using inverse dynamics control. The motivation for our work stems from the fact that throwing objects is a very important skill for robots, as it is a prerequisite for many more complex and dynamic robotic
manipulation tasks. Throwing can increase the efficiency of manipulation as well as expand the robot's workspace. So our goal is to construct a method for robots to successfully throw objects between one another. This problem involves solving the problems of perception, grasping, trajectory planning, inverse kinematics, as well as inverse dynamics control. Here's an example of our system at work: first the robot selects the grasp, executes the grasp, and then it proceeds to plan a trajectory to execute the throw. As you can see, this is the trajectory that it's following, which it has planned in real time, and then the throw is executed; this time it was successful, and this process keeps repeating itself over and over again. As you can see, it is not always successful, but we'll go into more detail later on why. First, let's talk about the system design. At a high level there is a task planner, or state machine, broken up into different states. The first state is perception and grasp selection, where it determines grasps, executes those grasps, and then plans a throwing trajectory. These trajectories are then fed into an inverse kinematics solver to convert 3D poses into joint angles, and the joint angles are then fed into an inverse dynamics controller to achieve precise trajectory control. So first off, starting with the perception and grasp planning: point clouds are sampled from cameras and then combined and downsampled; from this downsampled, combined point cloud, normals are obtained and oriented towards the camera. With this, grasps can be selected and scored, and the best-quality grasp is then taken as the final grasp pose for the robot to execute towards. For trajectory planning, the complete throwing trajectory consists of three parts: the rotation part, the linear movement part, and the circular throwing part. The robot arm will rotate to the throwing orientation first, and then it
will move linearly to the starting position of the throwing trajectory, and then we generate the last segment of the throwing trajectory using our self-designed circular throwing trajectory. For the projectile analysis: our robots are throwing objects in the 3D world, but we can reduce this throwing behavior to a two-dimensional problem, as you can see in the right figure; when the plane is aligned towards the target, we can set the throwing trajectory in the direction of the target point. [Music] For the trajectory planning, we used a minimum-energy-consumption trajectory, which is a circular-arc design idea: since we know the starting position of the throwing trajectory, the core of our solution is not to find the release position and velocity direction directly; instead, we focus on the radius of the arc and the release angle, and on the right side are the cost function and the relevant constraints. To achieve precise control we use the inverse dynamics controller, because throwing involves large accelerations and velocities, and the robot needs to achieve a certain velocity to achieve a good throw; naive controllers that are not aware of velocities and feed-forward accelerations can lag behind. Such controllers include differential inverse kinematics and position controllers, so we need to incorporate the velocity and acceleration, which motivates us to use inverse dynamics control, because we can use the known dynamics of the system. We use inverse kinematics to generate joint angles in joint space from trajectories in 3D, and then we use shape-preserving interpolation between these to generate a continuous trajectory; then we can differentiate the trajectory to generate velocities and accelerations to feed into the inverse dynamics controller. One metric we can use to evaluate our controller is trajectory tracking performance; I use this because it is closely related to throwing performance. On the left we plot the tracking error over time, and we can see that the tracking error is high near 21 seconds, which is when the throw happens.
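The inverse dynamics (computed-torque) idea can be illustrated on a 1-DOF pendulum: with the model known, the feed-forward acceleration term lets the controller track a fast reference closely, exactly the property the talk argues position controllers lack. The gains and parameters here are arbitrary; this is not the team's iiwa controller.

```python
import numpy as np

def computed_torque(q, qd, q_des, qd_des, qdd_des,
                    m=1.0, l=0.5, b=0.1, kp=100.0, kd=20.0):
    """Computed-torque law for a point-mass pendulum:
    tau = M*(qdd_des + kp*e + kd*edot) + gravity + damping compensation."""
    M = m * l * l
    e, edot = q_des - q, qd_des - qd
    return M * (qdd_des + kp * e + kd * edot) + m * 9.81 * l * np.sin(q) + b * qd

def simulate_tracking(traj, dt=1e-3, m=1.0, l=0.5, b=0.1):
    """Track a reference (q, qd, qdd) trajectory; return max |tracking error|."""
    q, qd = traj[0][0], traj[0][1]
    max_err = 0.0
    for q_des, qd_des, qdd_des in traj:
        tau = computed_torque(q, qd, q_des, qd_des, qdd_des, m, l, b)
        # Plant dynamics: M*qdd = tau - gravity - damping.
        qdd = (tau - m * 9.81 * l * np.sin(q) - b * qd) / (m * l * l)
        qd += qdd * dt
        q += qd * dt
        max_err = max(max_err, abs(q_des - q))
    return max_err
```

When the model cancels exactly, the error dynamics reduce to a stable second-order system, so even a fast sinusoidal reference is tracked with tiny error.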
We also plot the desired accelerations commanded to the inverse dynamics controller over time on the right. [Applause] Did you see the same sliding effect that the rock skippers saw when you were getting to high velocities? Yeah, that makes sense. Very nice. I feel like I need to find a high-friction end effector for everybody, or get some demos of an Allegro hand throwing or something like that. Yeah, what collision geometry was the gripper? It looked like there was... unfortunately the object was... did you guys look at the collision geometry? I guess maybe for the rock skipping, and even the original first throwing video. On the collision geometry, I'm pretty sure that the model that everybody would have found is the one that has multiple collision points. I think when David took the class I might have given out a collision model that had only one contact point per finger, and I had some regret about that, but I fixed it for this year, so I think you guys must have had a few contact points, but maybe it could have been better still. You guys need, yeah, lots of friction and force. Hi everyone, my name is McCoy Becker. Hello, we are Kamiro and Michael, and today we will be presenting our robotic manipulation final project: the teleoperation of the Allegro hand via kinematic hand synergies in Drake. The motivation for our study came from the Learning Dexterity project conducted by OpenAI: they used reinforcement learning to learn how to reorient a cube using the dexterous Shadow robotic hand. However, we were curious if the problem could be simplified. Specifically, we hypothesized that knowledge of human hand motor control could lead to easier learning of dexterous manipulation via behavior cloning.
But before we could test this hypothesis, we needed to develop a teleoperation system that can be used to study human hand manipulation; this ultimately became the goal of our project. Specifically, this goal was broken into three sub-goals. The first sub-goal was to teleoperate the Allegro hand in a simulated environment. The second sub-goal comes from the idea that while the hand has many degrees of freedom, humans rarely control each joint individually; rather, they control many of the joints simultaneously in a coordinated fashion. This idea is known as a synergy. In fact, reducing the controlled degrees of freedom can greatly simplify the control input required to learn a manipulation task; thus our second sub-goal was to teleoperate the Allegro hand using known kinematic hand synergies. And with those first two sub-goals, our third sub-goal was to grasp various objects using teleoperation of the hand, with and without synergistic control. This work is important to the design of prosthetics, robotic rehabilitation devices, and dexterous manipulation. The environment we are controlling contains the Allegro hand, the iiwa arm, and various objects. Our system was designed such that the Allegro hand and iiwa were controlled by separate pipelines. Starting with the hand: first, hand video is captured with a single uncalibrated RGB camera; using MediaPipe Hands, software that can track hand kinematics, we obtain the joint positions of the operator's hand. These are passed in as the desired states of the hand; then we pass those states into the inverse dynamics controller to output torques for each of the controllable joints of the hand. Once this is passed into the plant, the simulator shows the Allegro hand moving like this. On the iiwa side, we came up with a set of poses that we wanted its end effector to follow; using inverse kinematics we find the joint positions of the arm, and the same workflow as for the hand is repeated to obtain the simulation of the arm's motion. With our teleoperation
pipeline, we were able to accurately track the positions of our hand joints and command the Allegro hand to follow our motions, composing itself into a peace sign, a fist, or other desired configurations. We control the system in real time, but our controller occasionally overshot its intended target before stabilizing. Each digit can be commanded individually, and inputs from the pinky are ignored because the Allegro hand only has four fingers. We were also able to control the position and orientation of the hand by using sliders to command the iiwa pose via an inverse kinematics solver, which we used during our grasping tests. Here we show that we were able to successfully implement teleoperation using kinematic hand synergies: we see the human teleoperating the hand to make a peace sign; however, in the case where there is one synergy, all the joints move in a coupled manner simultaneously. As we increase the number of synergies in the controller, we see that the Allegro hand begins to look more like the peace sign that the human hand is prescribing; that is because as we add synergies, we are effectively adding more individual control degrees of freedom. Through teleoperation we are able to maintain a firm grip on a cube resting in the palm of the Allegro hand; however, without accurate force feedback from the simulation, we often relied on exaggerated movements to achieve our desired grip, fully forming a fist instead of a gentle grip on the cube, for example. Unfortunately, when we attempted to grasp objects placed on a table, we had serious difficulties forming a stable grip and often inadvertently applied very large forces to our test objects when trying to grab them; this would cause the simulation to become unstable, sometimes to the point of crashing the system. We showed our implementation of teleoperating a multi-finger robotic hand in Drake. We used human demonstrations captured by an RGB camera, and we used synergies to simplify the degrees of freedom needed to control the robotic hand.
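Kinematic synergies are commonly extracted as the principal components of recorded joint-angle data, so that a few synergy coefficients reconstruct a full hand configuration. The sketch below uses synthetic data and an assumed joint count; the team's actual basis and pipeline may differ.

```python
import numpy as np

def synergy_basis(joint_data, k):
    """Extract k kinematic synergies (principal components) from a matrix
    of recorded hand joint angles, shaped (samples, joints)."""
    mean = joint_data.mean(axis=0)
    _, _, Vt = np.linalg.svd(joint_data - mean, full_matrices=False)
    return mean, Vt[:k]

def project_to_synergies(q, mean, basis):
    """Map a full joint configuration onto the synergy subspace and back,
    i.e. reconstruct q from only k synergy coefficients."""
    coeffs = basis @ (q - mean)
    return mean + basis.T @ coeffs
```

With more synergies, the reconstruction recovers more individual degrees of freedom, which matches the observation in the talk that the peace sign only emerges as synergies are added.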
Although some aspects of our implementation were unsuccessful, we think that this project paves a promising way forward to understanding robotic control for dexterous manipulation. We want to improve our contact simulation to achieve stable grasps, and ultimately we want to apply this work to conduct behavior cloning of manipulation tasks based on human demonstration. [Applause] I watched most of these this morning, but what was the thing when the Allegro went unstable and the simulation shot off? Something flew into the real image; what was that? There was, like, someone threw a t-shirt through the camera or something, dude. So what do you think would be the thing that would give you more... you suggested force feedback would be necessary; do you think you could do with less, could you just use VR? All right, do you think the Shadow hand, the more dexterous hand, would be better? All right, future work. Very good. Cool, and you guys have a haptic glove to play with later, yeah. All right, systems problems. Okay, is Nikita here? Oh yeah, okay, good. Hi everyone, my name is Nikita, and in this talk I would like to introduce my work on a brick-laying robotic system. The problem is the following: given a floor plan that is specified as a set of dashed lines, the system should be able to construct a building of a certain height by layering bricks on top of each other according to the floor plan. There might be a single robotic arm or multiple robotic arms working together, and the number of bricks can be as high as hundreds. After parsing the floor plan and constructing poses for each brick, the first step is computing trajectories under a set of constraints that will guarantee gentle placement of the bricks, collision avoidance, and also avoidance of high velocities and unrealistic accelerations in the robot joints. As a first attempt, I tried using global kinematic trajectory
optimization on joint positions to optimize end-to-end trajectories from the source to the destination, with a large set of constraints shown below. Unfortunately, the resulting optimization problem turned out to be very complex, and it was not able to cover most of the bricks. Then I decided to simplify the optimization problem by decoupling trajectory construction from solving inverse kinematics. I also split the end-to-end trajectories into three regions, each with different constraints; for example, the grip and move-return regions have relaxed constraints on the grip rotation, while the approach region strictly constrains both the poses and velocities to allow gentle placement of the bricks. The most challenging part here is planning the move-return trajectory, as this part of the trajectory should avoid collisions with the previously built walls and with the robot itself. For this project, I simplified the problem by constraining the move-return zone to always be higher than the height of the destination brick, and I'm also using simple tangential trajectories in order to bypass the robot body. The trajectories constructed here are expressed as grip poses and don't correspond to actual joint positions, so the next step is to solve inverse kinematics to define actual trajectories for the robot. The problem is formulated as joint-centering optimization with a set of constraints based on the part of the trajectory from the previous slide. The algorithm for solving inverse kinematics for all bricks is shown on the left; in short, we construct several possible collision-free trajectories for each brick and interpolate them, then we try to solve constrained inverse kinematics for each point of the trajectory, and if there is a trajectory for which inverse kinematics is solvable at every point, we commit it; if there are multiple such trajectories, we commit the shortest. This slide visualizes the coverage with point clouds: green clouds correspond to the reachable bricks, red to unreachable ones.
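The commit-the-shortest-solvable-trajectory loop described above can be sketched as follows. The toy `solve_ik` used in the test is a stand-in for the constrained IK solver in the talk; in the real system it would return a joint configuration or fail.

```python
import numpy as np

def path_length(traj):
    """Total Euclidean length of a sequence of gripper positions."""
    return sum(np.linalg.norm(np.array(b) - np.array(a))
               for a, b in zip(traj, traj[1:]))

def plan_brick(candidate_trajectories, solve_ik):
    """For each candidate collision-free trajectory (a list of gripper
    poses), try to solve constrained IK at every point; among the fully
    solvable trajectories, commit the shortest.  Returns the joint-space
    path, or None if the brick is unreachable."""
    feasible = []
    for traj in candidate_trajectories:
        qs = []
        for pose in traj:
            q = solve_ik(pose)
            if q is None:
                break           # IK failed at this point: discard trajectory
            qs.append(q)
        else:
            feasible.append((path_length(traj), qs))
    if not feasible:
        return None             # counts toward the uncoverable bricks
    return min(feasible)[1]
```

Running this offline over all bricks yields exactly the coverage metrics mentioned next (covered bricks, uncovered clusters), which a higher-level search over robot placements can then optimize.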
reachable bricks red to unreachable the whole trajectory planning and inverse kinematics optimization can be solved offline and the output can easily be summarized in metrics such as the number of covered bricks the number of uncovered clusters and so on these metrics can be used to build higher-level optimizations on top for example to search for the best position of the robot or multiple robots that results in the best coverage and in this work I tried this so in general there might be many different possible approaches to optimize coverage with multiple robots and in this work I tried a simple greedy search so the idea is that was going to happen but I was prepared to show a couple of these last a little bit by optimization however the placement of the bricks is highly constrained and here the manipulator always puts bricks so here the second robot starts to work at the same time so this comes with better coverage you said global trajectory optimization at the beginning what did you mean by global trajectory optimization so like end to end in time you're doing the whole thing maybe multimodal or something like that yes but it's still with SNOPT or whatever it's still a local method the optimization is local and subject to local minima yeah because of the kinematics because of the optimizer being too weak because the kinematics are too limited nice okay any other questions yeah I know you didn't get the full story but I was very impressed by the number of things you got in there okay person's here yeah the peg-in-hole problem is a classic problem in robotics where an object is inserted into a hole that fits the object tightly for this project I chose to work on a variation of the problem with a square peg and a square hole the goal was to first be able to perform the task knowing the precise locations of the
block and the hole and then find out how much harder the problem becomes when some inaccuracy is introduced here we have the setup used for the project with the iiwa arm the gripper a red block that will be our peg and a hole formed from blue blocks that are welded in place also there's a camera floating high above the block that we can use for determining the X and Y coordinates of the block here a pseudo-inverse controller is used to control the spatial orientation of the gripper the exact positions of the block and hole are known so a valid trajectory can be commanded to the robot to put the block in the hole the gripper rotates to avoid hitting the hole grips the block above its center of mass avoids obstacles lowers the block into the hole and lets go now we introduce vision to the problem these are the points seen from the camera that passed through an RGB filter and they represent the top of the block pretty well however notice a row of points to the left hanging off of the block due to that rebellious row the average of the points will not be the exact center of the block the insertion fails but two corners of the block go in showing that at least the y-coordinate of the block was correct to solve this predicament I devised a spiral search method if an upper bound for the error in the position of the block can be determined that gives a square area spanning out from the attempted insertion point where the hole must be located so the idea is to move to one of the corners of this square area and then move along a square spiral trajectory spiraling to the center of the area all the while the block is being pressed down into the surface of the hole the termination condition would be when there's a drop in normal force on the arm unfortunately since not all rotations of the block are constrained when the block is held by the gripper the block rotates when pressed down into the hole and since it is partially inserted the gripper cannot move to perform the
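the square spiral search just described can be sketched as a simple waypoint generator the half-width and step size below are illustrative assumptions and a real implementation would terminate early on the drop in normal force rather than running the whole spiral

```python
def square_spiral(center, half_width, step):
    """Waypoints of an inward square spiral over an uncertainty square.

    Starts at one corner of the square of side 2*half_width centered on
    the attempted insertion point and spirals in toward the center; the
    real termination condition is a drop in normal force on the arm.
    """
    cx, cy = center
    x, y = cx - half_width, cy - half_width  # start at a corner
    pts = [(x, y)]
    side = 2 * half_width
    dirs = [(1, 0), (0, 1), (-1, 0), (0, -1)]  # right, up, left, down
    d = 0
    while side > 0:
        dx, dy = dirs[d % 4]
        x, y = x + dx * side, y + dy * side
        pts.append((x, y))
        if d % 2 == 1:       # shrink the side length every two segments
            side -= step
        d += 1
    pts.append((cx, cy))     # finish at the center of the square
    return pts
```

every waypoint stays inside the original uncertainty square so as long as the hole center is within the assumed error bound the pressed-down peg must pass over it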
spiral search also the block has rotated away from the correct orientation so all hope is lost for future work I would love to try this with gripper arms that have a stronger grip or have a higher friction coefficient if that doesn't work I would like to try it with a gripper with four arms I have faith that this can be made to work [Applause] awesome you just need to squeeze the heck out of it that's what we learned from the other one uh why is it sinking in so much is that the collision geometry being recessed or no so what surprised you the most in that that's fair how difficult everything is cool all right so I think uh is Radhika here all right nice yes I can't see that far hi my name is Radhika Ghoshal and today I will talk about my project on RRT planning for push manipulation this project focuses on generating trajectories to push a box from a start configuration to a goal using sampling-based planning we re-implemented prior work by Zito et al. titled two-level RRT planning for robotic push manipulation and learned a lot along the way here's a quick summary of the two-level planner the outer loop of the two-level planner samples box configurations in its C-space to get a series of configurations like this the start config is marked in red and the goal is marked in green and the inner local push planner operates in the C-space of the robot to find feasible push sequences between the sampled box configurations the local planner checks for feasibility by running forward simulations inside it note that the original paper uses purely kinematic methods to generate these push sequences and the quality of pushes here isn't that great let me show a few more for the implementation the iiwa push planner outputs the trajectory of desired end-effector poses which is then fed to the differential inverse kinematics controller to convert to joint position commands turns out the purely kinematic local push planner doesn't work well for us due to its poor quality pushes RRT ends up
requiring a large number of samples to find push sequences to the goal this path usually ends up being circuitous and unusable in the RRT plot above it wasn't possible to find a path even with a large number of samples so we decided to use a Cartesian force controller this consistently produces high-quality pushes against the box during testing but we didn't have time to integrate it into the full RRT loop for future work we'd like to complete the integration of the force control model into the full RRT loop finally I realized that force control is awesome and often makes things easier than position slash velocity control thanks for watching so how are you going to put the force controller into the loop is this going to be an RRT extend that is going to be using the controller yeah yeah and yeah nice thank you it happens yes but for the differences in orientation we used a quaternion so it's just the differences in the pose of the brick yes cool okay I think Jared said he was going to come later I'm trying to sure yeah okay I'll come back to anybody who shows up later um I think Annie Rood and Ken are here Kenneth there he goes good all right our project is automated robust stacking of prisms also known as prism bot I'm Ethan I'm Kenneth I'm Annie we care about stacking because stacking is a cognitively challenging task there are applications of stacking in construction and fabrication we focus our problem on stacking prisms prisms are easier to grasp and stack compared to arbitrary objects due to their simple geometry and flat surfaces all right so our initial method was mostly adapted from the bin picking notebook but as you can see it was not very robust it could not robustly grasp objects or even place them down precisely enough to build a tall stack um so in order to mitigate this we use a state machine the state machine robustly clears the space where the stack is going to be built by detecting when the stack
falls over so that we don't end up stacking over fallen debris which would render an unstable stack um we also use self-correction when a motion tracking error is detected so that the trajectories that we execute are the trajectories that we expect we also do center of mass calculations so that we can precisely place a block such that their centers of mass line up vertically recall from class that we can estimate the center of mass by lumped parameter estimation if the object is held still in fact it's possible to separately calculate the mass and the center of mass of the held object just by looking at the external torques of the iiwa the only problem is that only the X and Y coordinates of the center of mass of the object are identifiable in the world frame since the Z coordinate is in the direction of gravity the solution is to just measure from two poses and use a mathematical program to find the optimal center of mass that matches up with the measured external torques so another problem we ran into in our initial method was problems related to grasping our main problem was that we would often try to grasp points that were near edges so our grasps would either fail or if they succeeded they would often be wobbly and would result in poor placing so the way we fixed this was to develop an edge heuristic so we can avoid grasping near edges so the way it worked is that after we picked a target grasp point we would sample the 25 closest points spatially from our point cloud and collect their normals and then across these 25 normals we would find the minimum dot product between any pair and we would have a cutoff because um for a face this heuristic would give us a large value since all the normals would point in the same direction and we would get a lower value near edges because we'd have normals from two different faces which would result in a lower dot product we also improved our perception by adding segmentation we use the DBSCAN
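the edge heuristic just described with its minimum pairwise dot product over the normals of the 25 nearest neighbours can be sketched as follows the cutoff value here is an illustrative assumption of mine not the project's tuned threshold

```python
import numpy as np


def edge_score(point_cloud, normals, grasp_idx, k=25):
    """Minimum pairwise dot product among the unit normals of the k
    spatially nearest neighbours of a candidate grasp point.

    On a flat face all normals agree and the score is near 1; near an
    edge two face orientations mix and the minimum drops.
    """
    dists = np.linalg.norm(point_cloud - point_cloud[grasp_idx], axis=1)
    nearest = np.argsort(dists)[:k]
    n = normals[nearest]
    return (n @ n.T).min()


def is_graspable(point_cloud, normals, grasp_idx, cutoff=0.5, k=25):
    """Reject grasp points whose neighbourhood looks like an edge."""
    return edge_score(point_cloud, normals, grasp_idx, k) >= cutoff
```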
algorithm to segment objects spatially and we also segmented them via hue to account for the case of two objects touching each other and this let us detect which objects were in the stacking zone and move them away and it also helped us pick the order of blocks to stack for our final results we evaluated our system by testing it on 10 trials with varying numbers of objects and object types so you can see here that we tested two three four and five rectangular prisms and the same for pentagonal prisms we have a high success rate for less than or equal to three objects um which is a greater than ninety percent success rate however as we increase the number of objects we can see that the success rate drops and for pentagonal prisms the success rate also drops one common error that we saw especially for a large number of objects was the simulator crashing due to the high number of collisions that are involved when the stack gets taller here you can see our system working in full first our robot clears the objects from the stacking cylinder so that it can replace them with more precision now you can see for each object our robot rotates the object a bit in order to do the center of mass calculation to again place it precisely on top of the preceding one in this simulation our robot is able to successfully stack seven objects on top of each other and we think that this is actually the maximum that is possible for our current simulation as for the sixth block the top of the block is currently not visible to the cameras thanks for listening our code is publicly available on GitHub for others to build on top of [Applause] so did you hide the center of mass at random places inside the block is that what you did uh so what was the variability that gave you different I mean was it only the bricks the same initial conditions or where did you uh it was I guess random positions yeah very nice I'm glad you got the center of mass
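the two-pose center-of-mass estimation being praised here can be sketched under an idealized wrist force/torque model the measurement model frames and least-squares solver below are my assumptions the project actually solved a mathematical program against the iiwa's measured external torques

```python
import numpy as np

g = 9.81


def skew(v):
    """Cross-product matrix so that skew(a) @ b == np.cross(a, b)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])


def wrench_from_pose(R, m, c_body):
    """Simulated sensor-frame gravity wrench of a held object (toy model)."""
    f_s = R.T @ np.array([0.0, 0.0, -m * g])  # gravity in sensor frame
    tau_s = np.cross(c_body, f_s)             # torque about sensor origin
    return f_s, tau_s


def estimate_com(measurements):
    """Least-squares mass and body-frame CoM from (R, force, torque) triples.

    One pose only constrains the CoM components perpendicular to gravity;
    stacking a second rotated pose makes all three identifiable.
    """
    m = np.mean([np.linalg.norm(f) / g for _, f, _ in measurements])
    A, b = [], []
    for _, f_s, tau_s in measurements:
        A.append(-skew(f_s))   # tau = c x f = -[f]x c, linear in c
        b.append(tau_s)
    c, *_ = np.linalg.lstsq(np.vstack(A), np.hstack(b), rcond=None)
    return m, c
```

rotating the object between the two measurements is what moves the unobservable gravity-aligned component of the CoM into the observable plane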
estimation working that's perfect yeah any other questions all right Jared I knew you were coming at 3:30 we just passed you but I'm going back I'll be talking to you about my final project which is called greenbot it's a manipulation system capable of identifying picking and placing recyclable waste so how exactly does it work it uses a custom-trained Mask R-CNN model for object segmentation given a labeled and segmented point cloud it can find a near-optimal antipodal grasp for this object it uses a slightly modified version of a pseudo-inverse controller to move around and of course it uses some fun 3D models that I found on the internet there's the Mask R-CNN model it's trained the same way that we approached it in class which is that we start from a pre-trained model using the COCO dataset and we strip the last layer of nodes and retrain it such that the output is the labeled and masked images corresponding to the set of objects that I've given the simulation from here anytime the robot actually wants to pick something up and make a decision it can query two different RGB images from cameras on either side of the picnic table and then it can pass off these RGB images to Mask R-CNN and from here what it gets is labeled and segmented depth images from Mask R-CNN which you can project to XYZ space and get a comprehensive point cloud for a given object once it has this point cloud we generate around 25 grasp candidates these candidates are found by choosing a random point on the point cloud calculating its normal projecting this normal such that it's planar with the XY plane and then aligning the gripper x-axis with this projected normal from here it wants to select the most antipodal grasp which is it tries to maximize the alignment of the normals of the point cloud with the gripper x-axis from here the trajectory planning is a pretty simple scheme we just simply interpolate between a few different key poses such that we have you know a home pose
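a rough sketch of the grasp-candidate generation and antipodal scoring just described the candidate count gripper width and the scoring proxy below are my assumptions rather than greenbot's exact implementation

```python
import numpy as np

rng = np.random.default_rng(0)


def grasp_candidates(points, normals, n_candidates=25):
    """Sample candidates: pick a random cloud point, project its normal
    onto the XY plane, and align the gripper x-axis with the projection."""
    cands = []
    for _ in range(n_candidates):
        i = rng.integers(len(points))
        n = normals[i].astype(float).copy()
        n[2] = 0.0                       # keep the approach planar
        if np.linalg.norm(n) < 1e-6:
            continue                     # normal was (anti)parallel to z
        cands.append((points[i], n / np.linalg.norm(n)))
    return cands


def antipodal_score(points, normals, x_axis, center, width=0.05):
    """Mean |n . x_axis| over points near the grasp center, a simple
    proxy for how antipodal (force-closed) the pinch would be."""
    near = np.linalg.norm(points - center, axis=1) < width
    if not near.any():
        return 0.0
    return float(np.mean(np.abs(normals[near] @ x_axis)))
```

picking the candidate with the highest score favors pinches where the two contacted surface patches face directly into the gripper fingers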
for the iiwa we have a pre-pick pose which is just slightly above the object of interest we have a grasp pose which is generated from the grasping scheme and we have a pose just above the desired drop-off bin from here we can differentiate this linearly interpolated trajectory and use a pseudo-inverse controller to move around in order to actually test greenbot we asked how many objects can greenbot sort in 60 seconds that is we gave greenbot four potential objects on the picnic table and gave it 60 seconds to sort as many as possible of the 56 valid tests that we ran 67 percent of them performed perfectly sorting all four objects in the allotted time and around 88 percent of them performed near perfectly sorting at least three of the four objects correctly in terms of future work I think mobile manipulators are really cool so one potential application of this is instead of using a stationary iiwa arm use some sort of cheap mobile manipulator in simulation and try to get it to actually move around the environment traverse pick up items and navigate over to the waste bin to drop them off thanks so much for listening to my presentation and I'm happy to answer some questions [Applause] I have a totally nerdy question how did what was the obj file that made the transparent bottle work in the RGBD renderer I didn't know I could do that I see it shows that yeah that's right okay my world is consistent at least yeah nice I mean I would love for it to render transparent objects but all right questions what was the hardest part of that whole pipeline following the book I was like I guess this goes here and I guess that goes here okay so there were some wrong computations I tell you when I trained it I was like there's no way this is gonna work and then it worked on like the first try that was the most important surprise nice okay thank you speedcubing is the task of solving a Rubik's Cube as fast as
possible cubing is of great interest to robotics as it requires precise manipulation for example if the face is not properly aligned after you turn it can prevent the next face turn making turns on the Rubik's Cube quickly is an even greater challenge currently there are two notable cube-solving robots the first robot was made here at MIT wow that's really fast but isn't that kind of unfair to humans we don't have six hands let alone six limbs and our wrists can't rotate like motors the second is OpenAI's robot hand its hardware is more fair to humans but it is really slow introducing speedcuber bot it aims to be a balance between the two robots with only an arm and a gripper it has a hardware limitation like the OpenAI robot but it aims to make fast turns like the MIT robot before we get started we need to understand move notation U means to turn the upper face 90 degrees clockwise U prime means to turn the upper face 90 degrees counterclockwise and U2 means to turn the upper face 180 degrees before we can make a turn we need to constrain the layers that will not turn initially I was planning to have a second iiwa arm hold the cube in place however after receiving the model of the Rubik's Cube and finding out the grippers cannot really grip onto the cube I decided to make and use a box that held the bottom layers in place because of this speedcuber bot will only be able to turn the top face to make a U move we move the gripper to the grasp position for a U move grasp the cube make a clockwise turn and ungrip the cube to make a U prime move it is the same except we move the gripper to the grasp position for a U prime move and make a counterclockwise turn to make a U2 move we can do two U moves or two U prime moves our goal is to make turns quickly so we should turn in the opposite direction of the previous move this is because after completing a turn for a U move the grasp position for the U prime move is closer than the grasp position for the U move and
vice versa with this plan we can model the motion of speedcuber bot as a state machine additionally to actually get the gripper to those positions we solve inverse kinematics as an optimization problem to get the joint positions for the desired pose putting it all together we can give speedcuber bot a sequence of up-face moves and it will execute them the worst case turn time is 0.9 seconds but thanks to the optimizations we made for U2 moves and the fact that going from U to U prime and vice versa is faster the average turn time is 0.77 seconds while a 0.13 second time save on each turn seems very little it adds up when doing a sequence of turns and speedcubers are looking to save every little second [Applause] we actually heard a story just the other day Sava who just left was telling us that Ben Katz's robot which is awesome apparently it could go so fast that the Rubik's Cubes would explode did you know that okay the cheaper cubes would just explode all right that was an awesome video thank you for doing that yeah he's got a question yeah so how can you then measure it to get a good grip on the cube so when you grip it just perpendicular it would slip a bit when you try to turn it before you get into the corners then it would start turning the face so it's better to start at the corner nice oh yeah another question um I think I couldn't get an IK solution so I just did two quarter turns I also saw you when you were programming it in office hours one time or something it looked like you had made a cool teleop setup like some way to transition between teleop and autonomy like you'd teleop for a minute and then you would turn on the autonomy for a minute then you'd switch back to teleop how'd you do that did I see that right it was all teleop yeah yeah you fooled me yeah okay nice okay um hey Jenny you guys are next Jenny
and Daniel ping pong is a fast-paced sport that requires fast processing and precise paddle control in this project we created ping pong bot a pair of Kuka iiwa arms that can perform a controlled rally for multiple hits this project is an iteration upon a previous project by Dylan Zhao and Chaitanya Ravuri for robotic manipulation in 2021 we used their framework and simulation as a starting point but chose to re-implement the control logic state machine and adjusted several simulation parameters to improve performance and simulation realism as you can see in the previous iteration of this project the simulated ball is not nearly as bouncy as a real-life ping pong ball to fix this Professor Tedrake helped us learn to use some of the Drake-specific contact parameters in our SDF to control the bounciness and make it look a lot more realistic here we have a simple diagram showing the high-level state machine flow each iiwa arm is running a separate instance of this controller and both begin in the away state using the velocity vector of the ball we know if it's traveling toward or away from a particular side of the table and set the away or toward state as appropriate after the ball bounces on the same side of the table as the arm we transition to the prep state and finally when the ball is close to the projected contact position we go to the hit state if at any point the ball velocity is pointing away from the paddle the system resets to the away state in the away state the desired linear and angular velocity of the end effector is calculated by targeting the home pose of the paddle centered and slightly behind the edge of the table with the paddle tilted up to check whether the ball is actually traveling toward the arm and trigger a state transition to the toward state it calculates the dot product of the velocity vector of the ball and the vector from the end effector to the ball position if the dot product is negative the ball is traveling towards the paddle and
the controller will transition to the toward state when the controller transitions to the toward state it uses the X and Y velocity components of the ball and projects the y-coordinate of when it reaches a specific x value in the world frame this calculation is only performed once and the target pose is set to the same as the home pose except with the updated y-coordinate from the projection using the time before the ball bounces to move to this intermediate pose is essential for getting the robot roughly close enough to the actual pose needed to hit the ball correctly without this step there's generally not enough time after the bounce for the arm to move to the prep pose before the ball has already reached the contact point when the ball has a positive Z component of velocity and is slightly above the height of the table we know that the ball has just bounced after the ball bounces on the table the arm enters the prep state in the prep state our controller uses the ball's current position and velocity to predict the ball's trajectory it then chooses a contact point along the ball's trajectory where the paddle will hit the ball it uses a contact point a little past the edge of the table to minimize the chances that the paddle hits the table which we found causes the arm to start flailing wildly the controller then attempts to find a paddle pose that will hit the ball toward a target point on the other side of the table we use the equations of projectile motion to calculate the ideal velocity of the ball after the collision so that it lands on the target point we use the mirror law along with the incoming and outgoing ball velocities at the contact point to calculate a paddle pose that will correctly reflect the ball towards the opponent finally the paddle uses this calculated pose to move to a pre-hit pose slightly behind the contact point giving it space to accelerate before the hit when the ball is close enough to the contact point we transition to the hit state to start
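the projectile-motion and mirror-law computation described here can be sketched as follows ignoring drag and assuming the paddle normal simply bisects the incoming and desired outgoing velocities both simplifications of mine relative to the real controller

```python
import numpy as np

g = 9.81


def outgoing_velocity(contact, target, flight_time):
    """Drag-free projectile velocity carrying the ball from the contact
    point to the target landing point in flight_time seconds."""
    v = (np.asarray(target, float) - np.asarray(contact, float)) / flight_time
    v[2] += 0.5 * g * flight_time   # compensate for the gravity drop
    return v


def paddle_normal(v_in, v_out):
    """Mirror law: the paddle normal lies along v_out - v_in, so the
    paddle plane bisects the incoming and outgoing directions."""
    d = np.asarray(v_out, float) - np.asarray(v_in, float)
    return d / np.linalg.norm(d)
```

in the real system the incoming velocity comes from the trajectory prediction in the prep state and the flight time is chosen alongside the target point on the opponent's side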
accelerating the paddle towards the ball intuitively the faster the ball is traveling the slower we want the paddle to move otherwise we're adding too much energy to the system if the ball is bouncing too low we also want to tilt the paddle upwards to add more vertical impulse after successfully hitting the ball the velocity vector should now be pointing away from the paddle I thought that was really good [Applause] questions for these guys I wonder about the parameters you had to change that Russ helped you with to make it more elastic to make it bouncier the parameters had to change to make the ball more elastic yeah correct nice which part was the biggest surprise for you of that pipeline was the state machine logic like pretty good or was it actually because of the contacts sometimes ah for sure great okay uh are you guys here yeah all right golf is the sport in which a player uses a set of clubs to hit the ball into the hole with as few strokes as possible playing golf requires precise club control and remains a difficult problem even for human beings in this project we propose a framework for a robot arm to stroke the ball into the hole with only one stroke we established our simulation environment using PyBullet with uneven terrain as the golf course including hills and flat ground with a little hole in the ground randomly placed within a 3 meter by 3 meter square the ball is initially placed still on the tee in front of the robot player the robot player is a Kuka iiwa arm with a set of golf clubs as its end effector applying our framework the robot first switches to a pre-hitting configuration and approaches the ball slowly to put itself at the desired hitting position we calculate the desired initial velocity of the ball and control the arm to hit the ball if everything goes right although a lot of times it doesn't the ball will form a beautiful curve forced by gravity and
air resistance and finally land in the hole to make everything work we need to plan backward carefully first we need to constrain the landing angle theta because a large landing angle will make the ball bounce out of the hole when flying in the air the ball follows the aerodynamics of spin gravity and air resistance and nothing else this makes it possible for us to do direct shooting to calculate the initial velocity of the golf ball we aim at minimizing the flying time of the ball subject to a set of constraints including landing constraints a landing angle constraint and dynamics constraints we solve the problem using the CasADi solver and the solved initial velocity can 100 percent make the ball land in the hole after solving the initial velocity we can calculate the desired hitting configuration of the robot we manually assign the hitting point on the club and build the hitting frame such that the y-axis is parallel to the ground and the z-axis aligns with the initial velocity of the ball and the contact normal applying a series of frame transformations we can get the desired pose of the club then we do optimization to solve the desired joint angles and use kinematic trajectory optimization to control the robot to reach the desired configuration next step we want to control the robot to hit the ball theoretically when the contact normal is aligned with the target velocity we just need one degree of freedom to reach the target velocity to make the system more robust we allow two joints to be movable during the hitting process in this way we just need the mapping from the ball's target velocity back to the joint velocities however the hitting process isn't differentiable or continuous so it's hard to do the mapping we propose to use data-driven methods to tackle the problem we collect data by randomly sampling joint velocities and recording the corresponding ball velocity we observe that the data has a strong linear correlation with some outliers we apply RANSAC to filter out the outliers and
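the flight model of gravity plus air resistance can be sketched with a simple forward integration spin is omitted here and the drag constant is an arbitrary assumption of mine a direct-shooting solver like the one described would wrap an optimizer around a rollout like this

```python
import numpy as np

g = 9.81
k_drag = 0.05  # assumed quadratic drag coefficient per unit mass


def simulate_flight(p0, v0, dt=1e-3):
    """Integrate ball flight under gravity and quadratic drag until it
    returns to the ground plane z = 0.

    Returns the landing point and the landing angle in radians below
    horizontal, the quantity constrained to keep the ball in the hole.
    """
    p = np.asarray(p0, float).copy()
    v = np.asarray(v0, float).copy()
    while True:
        a = np.array([0.0, 0.0, -g]) - k_drag * np.linalg.norm(v) * v
        v = v + a * dt
        p = p + v * dt
        if p[2] <= 0.0 and v[2] < 0.0:
            landing_angle = np.arctan2(-v[2], np.linalg.norm(v[:2]))
            return p, landing_angle
```

direct shooting then means searching over v0 so that the returned landing point hits the hole while the landing angle stays below the bounce-out threshold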
try both linear regression and neural networks to fit the mapping model at test time we include the calculated initial ball velocity to get the desired joint velocity and do velocity control to hit the ball our full method was at a success rate of 19 out of 100 rollouts the failure cases can be categorized into two cases the stroke is not successful or the landing point of the ball is not accurate enough this inspires us with two future directions the first is to improve our formulation for the pre-hitting and hitting process controlling the robot in a continuous way to avoid sudden accelerations the second is to simulate the hitting process more reliably and that was impressive cinematography right pretty good so sorry you used CasADi for shooting for the ball tell me how those two you said you used kinematic trajectory optimization for the arm and then direct shooting for the ball is that right how did those two go together yes perfect but why did you need shooting for the ball is it because of the aerodynamics you decided there was drag what's that and that's why it was hard I see I see okay very nice and then you have a state space okay you did that yourself you said the aerodynamics or that's just in bullet yeah yeah yeah cool okay Pascal and Robbie are here hello we are Pascal Spino and Ravi Tejwani and this is our final project for robotic manipulation we built a pipeline for an iiwa arm to execute a trajectory over a curved surface while maintaining contact with that surface and maintaining force normal to the surface there's much prior work on robots drawing on two-dimensional surfaces but for our project we wanted to explore drawing on a three-dimensional curved surface for two-dimensional surfaces there are many simple robots that are well suited for the task but drawing on a three-dimensional surface requires many more degrees of freedom for which the iiwa arm is well suited the
pipeline of our system is as follows we take as inputs a two-dimensional trajectory plan and a three-dimensional instance of curved geometry with accurate collisions we first estimate the point cloud corresponding to the surface of this geometry through an array of four cameras then we estimate normals over this point cloud we then take our inputted trajectory and plan a three-dimensional trajectory over the surface while respecting the normals and then we extract from this a series of poses which we turn into a piecewise pose trajectory and then pass to a differential inverse kinematics controller to execute on our iiwa now for collision geometry we wanted to simulate a variety of shapes which included non-convex shapes in order to simulate these shapes in Drake the method we chose was to perform convex decomposition which essentially takes a non-convex shape and turns it into a series of convex shapes that approximate that surface here is a video of our iiwa arm executing the pipeline that we described in the earlier slides we are taking a predefined trajectory shown in the top right of the screen and executing it over this non-convex curved geometry as you can see portions of this predefined motion are in contact with the surface while portions are intentionally not in contact with the surface in the bottom right of the screen you can see a visualization of the trajectory that we are executing on this slide we show three experiments each performing the same trajectory from the previous slide and shown on the top of this slide but over an array of different surfaces with different curved geometry some convex some non-convex as you can see in this slide there exist a couple of failure modes with our approach these include when aspects of the iiwa arm collide with the geometry when the curvature of the geometry is too severe or when the planned trajectory extends beyond the bounds of the curved geometry we envision multiple applications of this control pipeline beyond the aspects of
drawing tasks for example window washing on curved windows or washing curved dishes or even applying massages to humans essentially any task where you must apply force normal to a surface and that surface is not necessarily flat one more thing we would like to consider in the future is the [Applause] gripping of the chalk yeah oh so I see you welded it to one finger because I saw one point where it came open but it was welded to the finger that it moved with ah you tricked me good that's nice any other questions for these guys yeah okay we're actually not far off time but public service announcement I don't technically have the room after four I asked for it they said no it's booked but it's been booked all semester and only like three times people have come in at 4:15 so I do have a room across the hall as a worst case if someone comes in and looks really mad we'll just politely go to the room across and keep going yeah because there's a lot of good presentations coming up the people who said they were arriving at three oh no that's good it's good that's good yes okay quick stretch session that sounds good I think there's a couple people that are already queued up here but I can open slides for the students who don't have their slides okay is ribbon here from ancient history to modern scandals robotic chess has captured the awe and curiosity of humanity in order to play all systems must understand the current piece locations generate a feasible move and execute the move in the physical world in addition any chess-playing system must have a complete simulated environment chess solvers and planar pick-and-place machines have a variety of well-documented
solutions however the issue of board perception remains open current perception methods include using top mounted cameras as shown left or using custom built chess boards as shown right unfortunately these methods are physically complex awkward to install and gather limited information about the game state ideally a chess system would be physically simple while understanding both the integer coordinates of all pieces as well as their spatial location we now present Chessbot Chessbot is a deep perception and manipulation system for all your robotic chess playing needs using a single side mounted RGBD sensor Chessbot can read the state of an arbitrary simulated chessboard producing both exact piece locations and reconstructed point clouds of all pieces including occluded geometry using this Chessbot determines the optimal next move and executes it using its robotic arm Chessbot is designed as a continuous loop first the user enters their move then this move is sent to the simulator which teleports the user's piece to the appropriate location then Chessbot utilizes its deep perception system to segment and reconstruct the modified chessboard this allows Chessbot to determine what move the user made finally Chessbot queries its chess solver to generate a feasible move which it promptly executes let's take a closer look at Chessbot's perception system we first separate an RGBD image into a depth mask and color image using a fine-tuned Mask R-CNN we segment and label the RGBD image this allows us to create a set of partial point clouds next we use the labels from Mask R-CNN to determine what piece type each point cloud belongs to we use ICP to fit a reference point cloud corresponding to this type of piece to the original partial point clouds then we use the point clouds for board inference and manipulation however no system is perfect including Chessbot in testing we found two primary failure modes kinematic failures and perception failures on the left we see that
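The ICP step described here (fitting a reference point cloud of the labeled piece type to the partial cloud from the RGBD camera) can be sketched as a minimal point-to-point ICP. This is an illustrative NumPy version, not Chessbot's actual code; `reference` and `partial` are assumed to be `(N, 3)` and `(M, 3)` arrays.

```python
import numpy as np

def icp(reference, partial, iters=20):
    """Minimal point-to-point ICP: align `reference` (N,3) onto `partial` (M,3)."""
    src = reference.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(iters):
        # Nearest-neighbor correspondences (brute force, for clarity only).
        d2 = ((src[:, None, :] - partial[None, :, :]) ** 2).sum(-1)
        matched = partial[d2.argmin(axis=1)]
        # Kabsch/SVD step: best rigid transform mapping src onto matched points.
        mu_s, mu_m = src.mean(0), matched.mean(0)
        H = (src - mu_s).T @ (matched - mu_m)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:      # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_m - R @ mu_s
        src = src @ R.T + t
        # Accumulate the total transform: x -> R_total x + t_total.
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total, src
```

A real pipeline would use a KD-tree for correspondences and outlier rejection for the occluded geometry; the SVD update itself is the standard closed-form rigid alignment.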
a kinematic failure resulted in a piece being knocked over on the right we see that a piece was perceived in the incorrect location while these are annoying for gameplay we think that more sophisticated move detection algorithms may circumvent these issues we hope you enjoyed learning about Chessbot thank you for watching very impressive which was the hardest part um yeah tuning the KD values and why did you use the WSG instead of the Panda gripper huh yeah fair um it's all good anybody else oh yeah okay let's do Reuben hi my name is Ruben Castro and I'm here to present my final project for robotic manipulation which is a volleyball setting robot now this is interesting because so far we've been focusing mostly on quasi-static tasks but I suspect that as we get into more complex tool usage we will need to be agile and more powerful and the question asked when it comes to this is what happens if the pipette slips if that's the case in 0.1 seconds it could fall around five centimeters so if we can't re-grasp it in time gravity is in control and we could reach failure the volleyball itself is pretty interesting because it's fast it's forceful and it's fun we only have around 80 milliseconds to control the ball and we're reaching the upper bound of our torque limits there's three main components to this we need to sense where the ball is coming from we need to absorb it and then we need to relaunch it to the desired position my research focuses on making actuators for robotic hands so I took the parameters from my fingers and input them as parameters for the simulation itself there are three main building blocks to our project the manipulator the robot arm and the ball tracker the manipulator itself consists of two fingers with two degrees of freedom each there's a Y shape at the end for stability and centering while catching the ball and we run operational space impedance control on them this allows the ball to act as a mass spring damper
system where we can dynamically change the stiffness depending on how much energy we want to input into the ball originally we were hoping to just use the fingers to launch the ball however we quickly ran into torque limits which means that we needed to use the arm itself to also launch the ball the robot arm is an iiwa which is capable of reaching a desired pose by using a Cartesian space PID controller now the fingers by themselves cannot input enough energy into the ball to get it to the desired height so we have to figure out what speed we need to launch the arm at and we can do that using the simple energy equations from there we simply follow a linear trajectory using constant acceleration in the desired direction of the ball until we reach the launch velocity now the ball tracker is pretty simple it fits a parabolic trajectory to a few sampled points in the air we assume perfect state estimation and we have a tolerance of around three centimeters now we have all the pieces together and we have a robot that can set the volleyball out of 10 trials that we did with different initial parameters we averaged around seven sets with a high of 12 sets and a low of 2 sets the output height was not as accurate as we were hoping it to be we failed to reach the desired height by around 0.5 meters in conclusion we were able to show a robotic system that uses constraints based on real world manipulators and is capable of playing the agile task of volleyball second of all we have seen that impedance control plus compliant hardware has allowed the high level to be simpler we are simply using a constant acceleration linear trajectory [Applause] any questions for Reuben the collision geometry of the end effector do you have it with you yeah we have like 30 seconds but if you can hold it up and show us yeah I see nice but for this they're just rectangular so it's super simple the way that you see it in the visual is exactly how it is so it's very simple and I
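The two calculations the setter relies on — fitting a parabola to a few sampled ball positions (the tracker) and solving the energy balance for the required launch speed — can be sketched as follows, under the talk's own assumptions of perfect state estimation and point-mass ballistics:

```python
import numpy as np

G = 9.81  # gravitational acceleration, m/s^2

def fit_parabola(ts, zs):
    """Fit z(t) = a t^2 + b t + c to sampled ball heights (the tracker step).
    Returns coefficients (a, b, c), highest degree first."""
    return np.polyfit(ts, zs, 2)

def launch_speed(delta_h):
    """Speed needed at release for the ball to rise delta_h meters:
    (1/2) m v^2 = m g delta_h  =>  v = sqrt(2 g delta_h)."""
    return np.sqrt(2 * G * delta_h)
```

For example, to set the ball one meter above the release point, `launch_speed(1.0)` gives roughly 4.4 m/s, which is the target the constant-acceleration arm trajectory accelerates toward.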
also have little spheres at the fingertips which also make things work and do you think the impedance control was essential or could you have done the same thing with position control you think I mean at the arm I think that if you did position control it gets a lot more complicated mainly because your time constants are only milliseconds so if you're trying to control the ball throughout the whole time we need a lot of points in between to do that I don't know you could do some sort of trajectory optimization but it would just compute too slowly so if you're trying to do this in real time I think impedance control allows it to awesome thank you yeah okay so um I think Jacob's next here start hi everyone this is me and Thomason today we're talking about large language models for abstract pick and place task planning in Drake uh so the introduction and motivation is that abstract task planning in robotics is really hard and that humans take a lot of our abstract and semantic understanding of tasks and the way we communicate tasks for granted these are in no way obvious to the robot receiving the text and so the idea is to use a large language model such as GPT-3 which is trained on vast amounts of text data that encode kind of human reasoning into the text data and these large language models demonstrate impressive high-level abstract reasoning capability because of their training set to act as a robot's brain our approach is to use abstract pick and place task planning in Drake we prompt a large language model with the environment context and the goal and we don't do any other prompt engineering which is kind of surprising and the large language model generates a series of pick and place actions note that the large language model's output is quite diverse and it's natural language so we need some functions to standardize the output and extract pick and place specific actions so here's an example of a prompt that would
provide the manipulator I think it's important to note that our Drake environment is just a modification of the chapter five bin picking example and so we just have a simple tabletop environment with our manipulator in the center and a few objects that are generated around it so the context we provide the LLM are the objects in the scene so in this case we have the four blocks and the four plates or disks and then the goal that we provide to the LLM is that we want to sort the blocks onto the disks and while it's not explicitly stated the idea here is that the LLM will notice the relation between the blocks and the disks and place the blocks on the corresponding color and not surprisingly it did what we guessed it generates step-by-step instructions about where to put each block exactly by their color which is great for translating into instructions so let's look at some example videos there are actually three tasks and the first task is place the blocks into a square formation and you can see it understands the square has four corners the second task is to sort the blocks onto the disks which we have already illustrated before it is sorted by color the final task is to stack all the blocks on the red disk which demonstrates it understands what is meant by stack and what is meant by the geometric relationships uh so the most important takeaways from this project are that with a simple helper function LLMs are capable of directly producing this series of pick and place locations for your planner and this is surprisingly without prompt engineering so you can kind of just use a large language model out of the box for this um and so you can kind of treat this as a trajectory providing black box because you can compute the trajectories on each of the provided pick and place locations and so each time your planner then completes one of those trajectories we can then re-prompt the LLM for a new task and because we're only
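The "small helper function" that standardizes the LLM's free-form step list into pick-and-place primitives might look roughly like this; the exact output format of the model is not given in the talk, so the regex and the `(pick, place)` tuple convention are illustrative assumptions:

```python
import re

def extract_pick_place(llm_output: str):
    """Parse lines like 'Place the red block on the red disk.' into
    (pick_object, place_target) tuples. The phrasing matched here is an
    assumed convention, not the project's documented format."""
    pattern = re.compile(
        r"(?:place|put|stack)\s+the\s+([\w ]+?)\s+(?:on|onto)\s+the\s+([\w ]+?)[\.\n]",
        re.IGNORECASE,
    )
    return [(m.group(1).strip(), m.group(2).strip())
            for m in pattern.finditer(llm_output)]
```

Each extracted tuple can then be handed to the planner as one pick-and-place trajectory, and the LLM re-prompted once that trajectory completes.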
operating with trajectories here in Drake and the rest is handled by the LLM um this enables some more complex tasks because they're often parameterized just in the trajectory and so what's really cool about this is this enables real time that's that's crazy so maybe I missed the simple helper function part but how do you not have to do SayCan kind of things how did it come up with the actions you know how to take move object X onto object Y and so with a small helper function we can extract the form that we need to attach to our planner so the helper function is you translating a slightly more diverse text into your pick and place primitives yeah awesome any other questions well cool all right is there like a line of people about to come into the room yeah you're good you're good okay I think Catherine is up hi everyone and welcome to Caribot which is our final project in making an iiwa draw caricatures in simulation using Drake and Meshcat so we decided to choose caricature drawing as our task because it presents a really interesting manipulation problem so as you can see in this video demonstrated here caricatures inherently have a technical and creative component in that you really want to emphasize certain features of the image or character you're trying to present at the same time you can't lose recognizability without sacrificing you know the comical nature of the character itself so for our project we really focus on the technical precision required to generate and then draw a caricature of a given input image and so in the future this could have use cases including facial image drawing where drawing features are important like police suspect drawings or courtroom sketches or even drawing AI generated faces and the precision itself could be useful in non-facial drawings like architecture or manufacturing now to move on to our approach so Caribot consists of two
subsystems so given an input image we feed it into our image processing system and then that generates a set of trajectory points that the manipulation system then uses to draw the actual output image now the first step in our image processing subsystem is to create a caricature from a given input image and to do this we refer to the caricature generation paper by Gu et al. and recreated their machine learning model in Google Colab using this model we generated caricatures of different input images and as you can see there's a variety of different warping techniques that are applied pretty randomly to generate a diverse set of possible caricatures here are some other results that we have and basically our pipeline displays five randomly generated caricatures and allows the user to select one that they want to draw such as this one the second step was to process the caricaturized image using the Canny edge detection algorithm which through testing we determined was the best way to pick out a set of definitive contours that our robot should draw from an image so the steps of this algorithm just very quickly were first to convert the picture to grayscale secondly to apply Gaussian blurring to reduce noise in the image third using intensity gradients definitive high contrast edges were identified and then finally using a thresholding process the edges were more clearly defined and basically here are some results of the Canny edge detection algorithm as you can see the new image has a bunch of lines that are somewhat unnecessary to fully recreating the image so to deal with this we also filtered some of these contours by arc length to remove small ones that were not necessary and end up with a better picture of the subject and this was the original input photo that we ran the algorithm on now moving on to the manipulation system we'll also add in a video of Caribot drawing a caricature on the side here since
the process takes quite a bit of time for Caribot's manipulation system we took Russ's drawing example combined it with the robot painter class from a previous pset and changed the controller to diff IK we welded the chalk to the robot's right finger and when the chalk comes into contact with the chalkboard a line is drawn following the robot's trajectory we chose to base it off of the robot painter code since it was the pset that featured the iiwa following given points which encapsulates one of the main features of Caribot to generate the actual trajectory we take the points from edge detection and then apply scaling offsets and rotations in order to translate the image into the world frame we also had to insert additional lift points at the start and end of the line in order for the iiwa to transition between distinct lines without drawing on the chalkboard here we can see the output image of the iiwa drawing a caricature of The Rock we also faced many challenges in both subsystems during the project for the image processing subsystem we had trouble filtering out the lines that we did not want since it varied quite a bit between images and often produced very questionable results then during the manipulation we had a lot of trouble controlling how the robot arm lifted up from the chalkboard which caused a lot of extraneous lines and now that Caribot is done drawing let's take a look at the results here we have Russ and his many many stages and here's the final result in the future there are many areas of improvement for Caribot to work on here are some of them such as noise reduction and more cartoon-like caricatures as well as different applications for Caribot such as using different art utensils and doing non-facial drawing tasks thank you for your time [Applause] I'm sorry I don't look like The Rock so how did you go from so the edge detector gives you an image right so how do you get the strokes from the image I missed that part oh it does okay right from Canny
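The four-step pipeline described earlier (grayscale, Gaussian blur, intensity gradients, thresholding) can be sketched in plain NumPy. This simplified version omits Canny's non-maximum suppression and hysteresis stages and is not the team's actual code; it only illustrates the structure of the computation:

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.4):
    """Normalized 2D Gaussian kernel for the blurring step."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    return k / k.sum()

def convolve2d(img, kernel):
    """Naive 2D convolution with edge padding (slow, but dependency-free)."""
    kh, kw = kernel.shape
    padded = np.pad(img, ((kh // 2, kh // 2), (kw // 2, kw // 2)), mode="edge")
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = (padded[i:i + kh, j:j + kw] * kernel).sum()
    return out

def simple_edges(gray, low=0.2):
    """Steps 1-4 of the talk's pipeline: blur, Sobel gradients, threshold."""
    blurred = convolve2d(gray, gaussian_kernel())
    sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    gx = convolve2d(blurred, sobel_x)
    gy = convolve2d(blurred, sobel_x.T)
    mag = np.hypot(gx, gy)
    return mag > low * mag.max()
```

The arc-length filtering mentioned in the talk would then operate on contours traced through this binary edge map, dropping strokes below a minimum length.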
with the realistic pictures it's harder because with the cartoonish caricatures the lines are so clear it's a lot easier to do edge detection versus you know with the human face it's hard to understand which parts are more important especially if you're trying to control for the arc of the contours because if it draws really small lines they don't really show super nice any other questions yeah with the edge detection did you run into the issue that it would give you contours that technically are one singular stroke and for those we sort of just kept the very long ones yeah there's a lot of playing around okay let's keep moving but that's awesome thank you my name is Lucian Covarrubias to find when the projectile first gets near enough to the robot I use a linear optimization where I minimize the decision variable t while constraining the projectile to stay within reach of the robot this effectively finds the earliest time where we could possibly grab the object by instead maximizing t with the same constraints we can instead find the exit time of the projectile which is the last possible moment where we could catch it now that we have the time of entry for the projectile and since we know that the projectile has a constant orientation and geometry we can calculate a comfortable pre-grasp pose for the object a comfortable pre-grasp pose is one where the gripper is always facing away from the robot so as to maximize range and movement capabilities while catching the gripper trajectory module calculates a full end effector trajectory from initial pose to final grasp pose phase one is the transition from initial configuration to pre-grasp pose which was calculated by the previous module for the position component I interpolate linearly as there are no obstacles to avoid for the orientation I used a quaternion slerp to avoid gimbal lock and find a smooth trajectory phase two is the transition from pre-grasp to grasp pose where we must
track the projectile's motion while moving closer in order to grab the object the position component can be determined by the physics equation from the first module with an additional offset as the gripper moves closer to the projectile the orientation at this point is constant because there's no rotation in the object the final module is responsible for translating the end effector trajectory into an executable joint trajectory to send to the robot the approach used in this system is to solve an inverse kinematic optimization problem which attempts to minimize the distance between the current joint configuration and a preset nominal configuration while constraining motion to follow the end effector trajectory when forming the bounds for this optimization I was more lenient with the upper bound of the Z component as projectiles normally came from above now we can observe the full system at work [Music] thanks for watching and I hope you found it interesting [Applause] I think your beginning was the one with the chopsticks wasn't it yeah that was hilarious any questions you didn't deal with the I mean it looked like it was just barely staying in the fingers there are small spheres at the tip of the gripper that come down to where it actually is making contact with the block so sometimes if it would catch it it would be a little bit off it's only in contact with these two spheres I see at the beginning where it was resolved awesome very nice I liked the physics parametric equation for motion so you can find the entry time when it gets close enough to grab then the exit time as in when it hits the ground or leaves your graspable space and then with those times you can generate all the time constraints going from initial to catching and then to be able to stop awesome I'm sorry to keep moving but so I don't see Tom
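The entry/exit-time search just described (minimize or maximize t subject to the projectile staying within reach) can be illustrated with a brute-force scan over the ballistic trajectory. The talk formulates it as a linear optimization, so this dense-sampling version is only a stand-in with the same constraint:

```python
import numpy as np

G = np.array([0.0, 0.0, -9.81])  # gravity vector, m/s^2

def reach_window(p0, v0, base, reach, t_max=3.0, dt=1e-3):
    """Return (t_entry, t_exit): first and last time the projectile
    p(t) = p0 + v0 t + 0.5 g t^2 is within `reach` of the robot base,
    or None if it never enters the graspable region."""
    ts = np.arange(0.0, t_max, dt)
    ps = p0 + v0 * ts[:, None] + 0.5 * G * ts[:, None] ** 2
    inside = np.linalg.norm(ps - base, axis=1) <= reach
    if not inside.any():
        return None
    idx = np.flatnonzero(inside)
    return ts[idx[0]], ts[idx[-1]]
```

The two returned times bound the catching phase exactly as in the talk: the trajectory from initial pose to pre-grasp must finish before `t_entry`, and the grasp must close before `t_exit`.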
I don't see him here either and Kasra's here okay I'm gonna go to um Shruti's here all right Rubik's Cube the puzzle that has taken the world and us by storm it requires smarts and practice but more importantly it requires precision in motion and complex planning to physically do and that is why our project is a bi-manual Rubik's Cube manipulator what does that mean well it means that we have put two iiwas in a simulation as Meshcat will show and made a Rubik's Cube float in space to avoid dealing with gravity and then we've used low-level motion planning to turn faces on the cube we wanted to make a controller that could take in a list of moves from an algorithm-based solver and then execute those moves in order there's a lot of cool robots out there that solve Rubik's Cubes already as we saw but we wanted to closely examine the local motion planning that goes into manipulating a Rubik's cube without all the fancy add-ons to give you a better understanding of our setup we decided to use a two-arm system because humans have two arms and two arms allow for greater mobility and we put them diagonally from each other so that we can get greater coverage both the arms also plan independently without any collision avoidance which surprisingly worked well enough for our task we also welded the center of the cube and defied gravity so that we can ignore physics and focus on precisely manipulating the cube we had a lot of goals for the project but as expected it was harder than anticipated we succeeded in getting one face to turn applying that process to other faces rotating the cube and our final system can do about 1.5 moves as shown on the slide we hard-coded four key intermediate poses we interpolated between these poses and passed them to the inverse kinematics optimizer to get the trajectory um this allowed us to be precise in our trajectory and have control over it so that we didn't knock into the cube we also learned to get the gripper above the
cube in the current position for the same reasons the important factor of this was also understanding which layers to hold we had to hold exactly two layers with one gripper and one with the other to make it work this meant that we spent a lot of time calibrating one of the biggest challenges we ran into was gripping the cube because of the inertia of the cube the gripper kept slipping when it was rotating we tried increasing the force but that kind of squished the cube we tried many other approaches and in the end we decided to simultaneously turn the iiwa responsible for turning and turn the iiwa responsible for holding the other way to get a full turn we now have the tools necessary to execute one turn let's see if it works it does the way we chain moves together is that we pre-calculate their joint trajectories and then just join them together as you see this does not work after we got one face to turn we applied the same process to turn other faces as well to evaluate our system we chose the accuracy of the turns and the drift as our two metrics since our system depends on the pose of the cube after analyzing all our moves we found that our system has a 72 percent turn accuracy and an average drift of 2.4 centimeters some natural next steps of our system include unwelding the cube adding gravity fixing the sequencing of moves maybe even exploring other ways to plan trajectories like random sampling trees and implementing some sort of feedback control for better precision overall what we set out to do with the project was to really understand low level motion planning which we definitely have it seems consistent that we have a slippery cube yeah that's interesting we have to look at that more carefully so what surprised you the most that it's not that easy even the Rubik's Cube if you think about the mechanism for the Rubik's Cube it's actually a pretty slick mechanism any other questions okay I think uh I had it a minute
ago I think Isabella's here yeah hi I'm Carlina my partner was Marissa we're doing a project on egg breaking cracking eggs with inverse kinematics um so there have been other projects that dealt with eggs such as cooking and baking one of the most noteworthy is Yoshida et al. which used one arm to crack the egg against the side of a pan dragging the half of the shell that it was holding backwards and using the pan as a leverage point to separate the shells in most cases though they disregard the process of cracking eggs by using a separate machine to either break the eggs entirely or using pre-made mixes which include powdered eggs the egg breaker seeks to emulate the more intuitive way in which humans may crack eggs a trajectory of poses through the entire motion is created with varying speeds throughout to move controllably yet create enough force to crack the egg when hit but not to crush it the trajectory is followed by generating iiwa arm joint positions with inverse kinematics on the left we have our simple trajectory which goes directly to the egg but it enters at a non-ideal angle and doesn't allow for a nice grabbing position on the right we have another trajectory but instead of following the trajectory nicely our arms as you can see bounce back and forth egg breaker calculates a trajectory from intermediate poses and time stamps and then uses inverse kinematics to follow this trajectory so we control the velocity of motion as well by timing out our time stamps which allows us to pick up the egg without going too fast and pushing it away and hit the egg quickly enough that it breaks the egg but not too fast that it crushes it entirely additionally we're changing the angles of the grippers to pick up and empty the egg at different points of the trajectory uh so this trajectory is shown as follows here so we start out by initializing and we move directly above the egg but not going directly to it so that way we don't crash into the
egg or the bowl at a weird angle then we're lowering to the egg level uh here we wait for the grippers to close around the egg by putting two subsequent positions at the same spot so there are two different time steps with the same position right in a row now we're raising the egg up above where the egg was initially located then bring it back down to hit the egg against the table raise the egg up again and then over atop the bowl so you don't crash into the bowl and then rotate the grippers to free the yolk and the egg whites we've successfully commanded the trajectories and picked up the egg with both grippers simultaneously in order to hit the egg with sufficient force against the table we approximate the required velocity by both intuition and video analysis at 0.07 meters per second our model hits at 0.1 meters per second which we calculated using the base of the iiwa at roughly 200 millimeters and giving the gripper two seconds to get from the top of the trajectory to the bottom so here's our final successful attempt we follow the same trajectory as mentioned before rotating the grippers to simulate separating the eggshells from each other our design successfully emulates a human cracking an egg with inverse kinematics this may come in handy when creating robots for human robot interaction as well as learning from other human processes of manipulating objects some areas for improvement include finding the contact forces between the egg and the table directly rather than approximating the velocities as well as finding the force between the grippers and the egg to make sure we're not crushing the egg between the grippers as well as a better egg model rather than our solid egg we have right now perhaps representing it as two halves and also modeling the insides the egg white and yolk um so in the future [Applause] you got a lot working that's awesome any questions I want to be a little shorter with the questions just because we're dialing into
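The impact-speed estimate quoted in the egg talk is just distance over time, using the numbers the presenters give (iiwa base length of roughly 200 mm as the drop distance, two seconds from the top of the trajectory to the table):

```python
# Quick check of the arithmetic from the talk; both inputs come from the talk itself.
drop_distance_m = 0.200   # roughly the length of the iiwa base
travel_time_s = 2.0       # time allotted from top of trajectory to the table
impact_speed = drop_distance_m / travel_time_s   # = 0.1 m/s

# The talk's video-analysis estimate of the speed needed to crack the shell:
required_speed = 0.07     # m/s
```

So the commanded motion hits at 0.1 m/s, comfortably above the estimated 0.07 m/s cracking threshold while staying slow enough not to crush the egg outright.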
five o'clock pretty fast okay great work um hello I'm Isabella and I'm going to be talking about my project on simultaneous localization and motion planning using belief space approaches robots get information about the world using sensors but real world sensors are almost always noisy as shown with the Intel RealSense depth images here thus an important problem in robotics is motion planning under this uncertainty one way of addressing this is to perform actions that maximize the information content you get from your sensors for example if you're trying to get somewhere in a dark room you might touch the walls to get information about your environment so the question is how can you make robots do something similar one approach is to make trajectories accounting for the uncertainty of a robot state or belief space planning to make the concept of belief spaces concrete here's an example this is the distribution of a robot's position in a 2D world updated by a particle filter that takes in haptic information and a robot's velocity notice that the distribution is quite non-Gaussian which is important later in terms of prior work a common method for robot localization is the extended Kalman filter but it's not optimal in systems with non-linear dynamics past work in belief space planning also assumes a Gaussian belief state and maximum likelihood observations and so some questions are how do we generalize to high dimensional non-Gaussian states and how can we make the algorithm filter agnostic the idea suggested by Platt et al. in their paper Efficient Planning in Non-Gaussian Belief Spaces is to sample from the belief space to approximate it here while the robot is not at the goal it gets the state with maximum probability x prime and samples some other states from the belief space it then creates a plan maximizing the information difference between x prime and the other states which is an optimization problem solved by quadratic programming it then executes the
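The histogram-filter belief tracking used in this pipeline boils down to a discretized Bayes update; below is a one-dimensional sketch with an assumed Gaussian measurement model (the talk does not specify the noise model, so `sigma` and `meas_fn` are illustrative):

```python
import numpy as np

def histogram_filter_update(belief, positions, measurement, meas_fn, sigma=0.05):
    """One Bayes update of a discretized (histogram) belief over robot
    positions, given a noisy scalar measurement such as a laser distance.
    `meas_fn(positions)` gives the expected reading at each grid cell."""
    predicted = meas_fn(positions)
    likelihood = np.exp(-0.5 * ((measurement - predicted) / sigma) ** 2)
    posterior = belief * likelihood
    return posterior / posterior.sum()
```

In the experiment described next, the grid would be positions along the y-axis and the key feature is the gap between the boxes: a long-range laser reading is only consistent with cells in the gap, so a single update concentrates the belief there.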
plan while tracking the belief state using a histogram filter if the belief state deviates too far from the trajectory then we replan to evaluate this algorithm I set up an iiwa arm facing two boxes at known positions and we have a laser pointer sensing the distances to the boxes the only unknown of this system is the position along the y-axis of the robot and the overall goal is for the robot to localize and reach the goal position which in this case is the middle of the gap applying Platt's algorithm this is a possible trajectory we get the robot starts off with its laser pointer at the lower box so it generates a trajectory that detects a key feature in the environment which is the gap and then it goes to the gap so overall the algorithm is quite successful with a 93 percent average task completion rate and I observe that if it starts closer to the goal it has a higher chance of success however there's still some oddities in the results if you look at the plot of the trajectory [Applause] so when I was talking about belief space planning someone I think Leroy asked do we think that trajectory optimization without all the stochastic belief space everything is easier or harder than the belief space version do you feel like the numerical optimization was fairly successful or was it pretty brittle I think the thing is when I ran a bunch of trials within my code the optimization succeeded like 93 percent of the time but the things that I saw were that the inner optimization still failed a lot hmm that seems like something we should figure out awesome okay I think we have the next presenters here hi everyone my name is Frank Gonzalez and I'm Robbie Cato and our project is on manipulating and grasping industrial tools nowadays manufacturing is turning more and more
towards automation using robotic systems to efficiently create high quality products in order to support this effort we investigated creating a system that could autonomously grasp a tool in such a way that it was ready to use for this project we simulated a KUKA iiwa-7 arm with a WSG gripper and two tools a hammer and a screwdriver to demonstrate the capabilities of our project this block diagram shows the overall robotic system we implemented with the two red boxes indicating modules that were not fully integrated for this project we developed a stack using Mask R-CNN and ICP pose estimation and an inverse kinematic solver for state determination ideally we would use a decision maker module to choose which tool to pick up however with there only being two tools in this situation the decision maker was de-prioritized and thus not implemented regardless the key poses were passed to slerp for generating pose trajectories we end by passing the computed state trajectories into the simulator and Frank will show the results from just this block with everything else being pre-computed in these two videos we can see the end result of a full stack implementation with the robotic arm navigating grasping and lifting the hammer on the left and doing the same for the screwdriver on the right here we can see a slightly different perspective on the grasps after the iiwa arm has come to rest from these images we can know that these are indeed usable grasps and not a random grasp on the tool putting the system in a prime position to continue forward with whatever task it might be provided one key pitfall we had was the Mask R-CNN model the image on the left shows the top five bounding box results from our model all of which are labeled as scissors and many of which don't capture the full extent of the tools this is likely due to the images we used to train the model an example screwdriver from one of the data sets is shown on the right the images in that data set were close
cropped and rarely showed the full tools just like the hammer handle here it's also important to note that scissors were the most represented object in that data set as well this highlights the importance of finding training data representative of the expected environment in our case a cluttered table with most of each tool pictured a critical next step for this project would be to generate our own training data set in the simulation the new data set would likely give better labeling and segmentation results that could be fully used to integrate Mask R-CNN briefly touching on some of the lessons we learned throughout this project for starters over time we both definitely became more comfortable working with the Drake environment at the beginning there were definitely some bumps along the road that made getting started pretty challenging but once these were overcome progress became much smoother as for me I learned quickly that sometimes simpler is better I ran into several issues trying to create proper definitions for the tools resulting in awkward physics simulations due to incorrectly defined inertias this difficulty came from using awkwardly defined links and trying to determine the inertias myself which didn't go well at all the moment I switched to simple geometries everything worked much much better and that summarizes our project thank you for listening to our presentation that's terrible oh my gosh apologies all right any questions for these guys yeah basically we know that grabbing a hammer works but then there's like some rotational symmetry very similar to the pset where you just have the arm go and like grab the handle for the door like awesome okay hello my name is Oswin and I'm working with Rachel today we'll present our project on collaborative planning for multi-arm manipulation of particle systems imagine you're a teppanyaki chef pushing fried eggs around the iron griddle to keep them from burning or maybe you chop up
carrots and you're collecting them into a pile on the cutting board for particle systems like these it is often difficult to find a good state representation to describe the dynamics instead of working with the state of each particle one can choose to represent the state using a density image and perform planning directly on this density image space this is an approach that has been done before in a paper by Terry and Russ for the case of single arm pushing in this project we choose to extend the single arm case to the multi-arm case why is this interesting first you get squeezing behaviors you also get sharp corners and finally you get inter-arm collisions leading to both arms moving in different directions moreover it's difficult to generalize single arm algorithms to the multi-arm case a higher dimensional action space means that there's bad scaling for action space synchronization and for algorithms that use sampling based optimization and the inter-arm collisions are something that only happen in the multi-arm case and not in the single arm case um so how do we solve this problem first we use the density image state representation next we design and learn dynamics directly on this density image and finally we use a model based control approach um where we use sampling based optimization to minimize some control objective function to guarantee the controller's stability we have tried three different methods for learning the dynamics of our particle systems currently the non-deep methods including the heuristic model and the least squares method work but the neural network does not work well our heuristic model contains four parts first we are given an initial state and the action command then uh based on this state and the command we generate a mask representing the pushed area by our heuristic function later we generate a mask for weight reallocation uh based on the area mask and finally we clear the pixels according to the area mask and
reallocate these cleared weights according to a weight mask you can see that our prediction is quite close to the ground truth however predicting the collision is still very difficult all our three methods suffer here compared with the single arm case collision will only happen in the multi-arm case it adds non-linearity to the dynamics and more dependency on the initial distribution the heuristic model gives an example if there are no particles the pushers should not collide with each other with particles as the horizontal pusher pushes too many particles and those particles collide with the other pusher it gives a large force back to the initial pusher and stops it and this also makes the dynamics more sensitive a slight change in distribution might result in a huge change from collision free to collision the deep dynamics model does not work well it does learn something in that it decreases the weight in the pushed area but it does not quite understand how much it should decrease and where these decreased weights should be added back on the other hand both non-deep methods can predict the image well the heuristic method has a stronger structural bias meaning that the error is more sparse while the least squares method is more accurate overall finally we combine both working dynamics models with sampling based optimization from the Terry and Russ paper and synthesize a controller that pushes particles into the target circle surprisingly despite the much larger action space compared to the single arm case we still obtain a stabilizing controller using both non-deep dynamics models in conclusion the basic problem of learning general dynamics is too hard to solve and it is not necessary to fully solve it for planning oh I see that was not your end okay nice so what's the conclusion would you stay away from deep learning going forward or do you think the deepness I see
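The clear-and-reallocate step of the heuristic model described above can be made concrete with a few lines of NumPy. This is an illustrative sketch, not the project's code: the function name and the tiny 1x4 "density image" are made up, and the push and weight masks are assumed to be given (in the project they come from the heuristic pushed-area function).

```python
import numpy as np

def heuristic_push(density, push_mask, weight_mask):
    # One step of a (hypothetical) heuristic density-image dynamics model:
    # clear the mass under the pusher, then redistribute exactly that mass
    # according to the normalized weight mask, so total mass is conserved.
    cleared = (density * push_mask).sum()
    out = density * (1.0 - push_mask)        # remove mass in the pushed area
    w = weight_mask / weight_mask.sum()      # normalized reallocation weights
    return out + cleared * w

# Tiny 1x4 "image": pushing right clears column 1 and piles its mass onto column 2.
density = np.array([[0.0, 0.4, 0.6, 0.0]])
push_mask = np.array([[0, 1, 0, 0]], dtype=float)
weight_mask = np.array([[0, 0, 1, 0]], dtype=float)
new = heuristic_push(density, push_mask, weight_mask)
# column 1 is emptied, column 2 now holds 0.6 + 0.4 = 1.0, total mass unchanged
```

Mass conservation is the structural bias the presenters mention: the model can only move density around, never create or destroy it, which is why its errors stay sparse.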
what's your inner space the states but that's not conditioned on the control of course with the property nicely said okay we have four left if you guys are willing to stick with us we're going to get everybody in yeah four left assuming everybody's here thank you guys for your patience and I'm loving it so I can avoid blasting YouTube all right yes we learned about some powerful techniques in motion planning including kinematic trajectory optimization as we discussed however they often tend to avoid collisions too conservatively in addition real world tasks frequently have multiple potential solutions in this project we take inspiration from cognitive science models of how humans plan and make decisions and explore how we can obtain robot trajectories that are good or more precisely near optimal and that also take varying approaches to traversing the environment my name is yavana and my friend Stewie and I are going to present our project on Markov chain Monte Carlo in short MCMC motion planning for Boltzmann rational trajectory optimization in this project we implement five MCMC algorithms of zeroth first and second order and combine random sampling and optimization to generate good and diverse trajectories we evaluate the trajectories and perform ablation studies on 2D navigation and 3D manipulation problems we show that zeroth and first order methods prove sufficient for 2D problems and that solving 3D manipulation tasks benefits from second order derivatives finally we suggest that our MCMC motion planners may be a helpful way for robots to model humans next we discuss what we learned in our process of getting there we learned that Markov chain Monte Carlo is quite a powerful method for generating samples from desired distributions indeed even for the simplest of the algorithms that uses no derivative information we were able to obtain a surprising number of viable trajectories we experienced how defining the cost function for the problem can be
difficult and for just a small change in how we penalize obstacle collision compared to the length of the trajectory we observed trajectories going straight through the obstacles and away from the goal we compared the algorithms with respect to their time complexity and the kind of trajectories they generate and we found that for 2D environments the unadjusted Langevin algorithm requires the least tuning to generate diverse obstacle avoiding trajectories we learned that sampling from a Boltzmann distribution which is commonly used to model human decision making allows the trajectories to explore multiple potential solutions and as would make sense intuitively we found that the variance of the path scales nicely with the number of obstacles the beta parameter of the Boltzmann distribution commonly referred to as the rationality coefficient intuitively corresponds to how strong the preference for low-cost trajectories is next we performed experiments on a 3D manipulation environment involving motion planning to a desired pose of a seven degree of freedom robotic arm here we compare two trajectories sampled from Newtonian Monte Carlo a second order method with beta equals 0.1 um and beta equals 10. we can see that for beta equals 0.1 there's quite a bit of shaking and sub-optimality while for beta equals 10 we see a smooth interpolation to the desired final pose we also compared to Hamiltonian Monte Carlo a first order algorithm that's widely considered the gold standard for MCMC with beta equals 10.
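The unadjusted Langevin algorithm mentioned above fits in a few lines. This is a generic 1-D toy targeting a Boltzmann distribution p(x) ∝ exp(-beta·c(x)), not the authors' trajectory-space implementation; the function name, step size, and quadratic cost are chosen here just to make the beta-controls-variance point visible.

```python
import numpy as np

def ula_samples(grad_U, x0, step, n, rng):
    # Unadjusted Langevin algorithm targeting p(x) proportional to exp(-U(x)):
    # x <- x - step * grad U(x) + sqrt(2 * step) * standard normal noise.
    x, out = x0, []
    for _ in range(n):
        x = x - step * grad_U(x) + np.sqrt(2 * step) * rng.standard_normal()
        out.append(x)
    return np.array(out)

# Toy Boltzmann-rational "cost" c(x) = x^2 / 2, so U(x) = beta * c(x).
# For this quadratic the stationary variance is close to 1 / beta:
# higher rationality beta concentrates samples near the cost minimum.
beta = 10.0
rng = np.random.default_rng(0)
xs = ula_samples(lambda x: beta * x, x0=0.0, step=0.01, n=20000, rng=rng)
```

In the motion-planning setting `x` would be a whole discretized trajectory and `grad_U` the gradient of the (scaled) trajectory cost, which is exactly why the zeroth/first/second-order distinction in the presentation matters.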
and we observed there's quite a bit of wild oscillation this is consistent with our quantitative results which show that for this 3D motion planning problem only the second order algorithms are able to optimize well enough to discover low-cost trajectories uh at the same time Newtonian Monte Carlo also produces the most diverse trajectories and therefore does the best at generating diverse and near optimal trajectories in this project we successfully used MCMC algorithms to produce diverse and approximately optimal trajectories for motion planning problems we also consider several directions for future work most importantly a more complete evaluation with more diverse and complex experimental environments um second uh in handling constraints we perform a projection step which causes us to lose the technical condition of reversibility and theoretical convergence it would be interesting to see how we could restore these with more advanced methods lastly the direction that we are most excited about is in using our motion planning algorithms for Boltzmann rational trajectory prediction and human robot collaboration problems uh thank you for listening to our presentation and we hope you enjoyed it that was super clear thank you so it was joint space in the 3D examples that you're planning in right so the jump from the 2D examples to the 3D examples was actually a jump from two degrees of freedom to like seven degrees of freedom yeah nice no that was really really well articulated thank you okay Tom's been waiting here we go hi my name is Tom and today I'm excited to present my final project for robotic manipulation on augmenting ICP using Dense Object Nets with applications in surgical robot perception state-of-the-art robots for vascular surgery offer minimal autonomy surgeons maintain direct control over wire-based surgical manipulators and many surgeons are deterred by the associated learning curve a major obstacle to higher autonomy surgical manipulators
is developing highly robust simultaneous localization and mapping ultrasound is the primary imaging modality utilized and we're interested in developing a SLAM algorithm capable of mapping an unknown vessel geometry in vivo ICP is a common front-end algorithm used in acoustically based SLAM however as we learned in class ICP is susceptible to convergence to local minima in the case of poor initialization or minimal point cloud saliency intraoperative ultrasound suffers from both of these issues though exploitation of 3D priors from pre-operative imaging has the potential to improve ICP performance therefore I propose the use of contrastive correspondence learning between adjacent depth images simulated from a prior CT mesh these correspondences will be used to inform registration tasks on real ultrasound point clouds representing successive poses of the robot prior to intervention the surgeon would run the following data generation pipeline first we select a random pose inside the pre-operative CT scan to represent the surgical end effector and generate an offset pose to simulate motion of the optical frame we utilize the combination of raycasting and an inverse pinhole model to generate simulated depth images and ground truth pixel wise correspondences this data set is then fed to a Dense Object Network which is a similar framework to what we learned about in class finally we calculate pixel wise loss in the descriptor space given knowledge of ground truth correspondences to illustrate quantitative results first I show the pixelwise correspondence precision the network did not perform as well as the original architecture proposed by Florence likely because we're working with sparser depth images in comparison to rich RGB information it was noticed that when the probe views surfaces from far away interesting structures like vessel sub branches and significant variations in curvature are evident from depth images therefore in future work it may be possible to
apply a larger weighting to high intensity depth values in the pixel-wise loss function here I demonstrate the registration performance for vanilla ICP and network-augmented ICP across 40 point-cloud pairs DON-ICP had a lower average inlier error though the variations were not found to be statistically significant to explore underlying factors for this I'll now discuss some qualitative registration results [Applause] all right so the big money question is does it help solve the global correspondence problem oh um I'm glad it worked okay last two taking it home here we go hi we're Martin Fiona and Hannah and we're presenting our final presentation on scooping for 6.4210 fall 2022 robot manipulation as with Dexai Robotics' Alfred which we talked about in class they use a trajectory optimization scooping technique um however besides some niche applications of scooping in industry there's little academic work on scooping um the best we could find was scooping with a flat spatula off of a flat surface um whereas we're going to be exploring scooping with a convex tool um and based off of all of the previous research as well as some previous conversations with Russ we decided to try and pursue our pre-compute and choose approach where we pre-compute appropriate trajectories for different bin states that is empty half full and full and then use perception to try and determine which trajectory to select um and so this is the direction that we eventually hope to take this project although we didn't quite finish pursuing the perception selection process the first part of our project was setting up a scooping environment um it's pretty similar to the bin picking setup the main difference with our iiwa arm setup is that there's a measuring cup that we have welded onto the end of the iiwa in addition we are using spheres instead of bricks for collision geometry um and this is because they're way easier to simulate and we were running into significant issues
with our simulation speed even after we updated to use the SAP contact solver which is faster than the default contact solver um and actually the simulation speed constraint becomes a pretty strong constraint for how we thought about the granularity of our poses in the future we pursued two approaches in parallel and the first one was our geometric scooping this is adapted from the robot painter notebook that we saw in class and the key idea here is we're using simple geometries to plan the motion of scooping from one bin and pouring into the other so for the scoop trajectory we started with a circle but it was really hard to balance getting deep into the bin and also being able to turn the scoop upright at the end so we ended up using an ellipse to have more control over both the width and depth of the scoop for the pouring this motion was less constrained since we're starting out of the bin and then moving the objects in so we could still use a circular path or other approach um the second approach was record and playback and the idea was um if we could scoop successfully using teleop we could save those trajectories and play them back later so this is two steps one recording and two playback um the recording was pretty straightforward because we um tried to set up a good interface for us to um record with so it was pretty much teleoping the robot as you would usually and then um when you're done you would press the save poses button and it would save to the file um from the user standpoint you know pretty easy um but yeah behind the scenes it took quite a lot to um make it happen um and the way that we saved the poses was we used a file representation with basically one pose for each line where each pose is represented by six values um you have the rotation through three numbers and then XYZ for the other three um they're not like perfect descriptors but they work when you're trying to just get from one pose to the other for like a long list of poses um the main
components to using the teleop recording for future automatic playback are cleaning up the teleop path reading the files that they're stored in and actually moving the iiwa arm based on the information that we've read so a single trajectory file will usually hold an ordered list of desired poses and there's often more than a few unnecessary positions within this file um just because of the nature of teleop and so some manual cleaning is sometimes desired um in addition uh deleting or adjusting poses can sometimes help with resolving issues from the iiwa getting stuck or having joints that are fully extended and we parse these clean files to create paths based on the movements between poses rather than the positions themselves um and then the iiwa can then be directed to execute these movements now we'll talk a little bit more about our results so here we have a video of a full scooping and pouring path with our 50 simulated spheres so we sped this video up by five times because as you can see there's a big slowdown when the scoop starts interacting with the spheres and with this trajectory we also tend to push some spheres out of the bin but we can reliably pick up three spheres every time after the scoop we have an intermediate frame to make sure the scoop stays upright so we don't drop the spheres we just picked up and then finally we move on to pouring into the other bin both approaches yielded successful scooping and pouring results while the geometric approach created relatively smooth trajectories it proved difficult to navigate the bin environment and plan what geometric shapes would work best while the teleop approach creates much more flexibility and can quickly plan trajectories for new situations but maybe a little bit less smoothly so some of the challenges uh along the way and lessons we learned uh we found that even though yeah this is a robotics class a lot of the stuff we did to help us out was actually um just like software development and some
future work um is that we settled on our scooper early on but it's actually not the best scooper for scooping um the walls are actually pretty high on the scooper so it's tough to get it to go into a bin full of spheres and also we didn't get to um automatically choose the trajectories through perception uh there's also room for trajectory optimization um the trajectories you get are ultimately only as good as the trajectories that you can teleop yourself part of that is probably like making a cost function for scoops I'm not sure if this is something for Drake or something for um users of Drake speaking of that speeding up the simulation would make things better yeah that's our scooping project so um you know it was tough we learned a lot thanks for watching I can definitely help you speed it up if you want if you care we can speed it up okay Ryan looks like he's ready all right and thanks for joining us today so Garden Buddy is a robot arm that controls an unfamiliar hose in an unfamiliar garden the information here is unknown beforehand to the robot arm and so the speed of the water is unknown and the location of the plants is unknown here's a little demonstration that we'll um you know dive deeper into later in this presentation so the two main components of the project there's a perception side which is exploring this unknown environment and there's a motion planning side which is commanding the robot planning the trajectory and tuning the controllers so here's the overall approach so we start by getting the perception and then the perception module passes on the information to pose optimization which passes on the information to the interpolation module which goes to the ik module and then the controller module and this is a closed loop so we begin with the perception component which takes in the scene and needs to find the target plant locations as well as the droplet speed to find the plant positions we use our RGB-D sensor to get a depth image and then
filter for pixels with lower depth as those represent the plants excluding the iiwa itself and once we get this filtered depth image we perform a graph search to find the clusters which represent the plants as well as the centers of the plants which represent our target locations shown here to find the droplet launch speed we begin by executing a sequence where the robot launches droplets horizontally and then using kinematics knowing the height of the robot we just need to find the average location of where the droplets land to do so we take a sequence of five color images taken 0.1 seconds apart and compute the difference between them to find the motion of the droplets and then we perform a convolution and filter to remove any other noise from motion leaving just the droplets and thus we can find the average location and using the kinematic equations find the speed feeding this into the motion planner along with the plant locations given the positions of the potted plants and the speed at which droplets leave the hose we want to find the optimal poses for the gripper to be in such that water coming out of the hose reaches the potted plants we do this by solving a constrained optimization problem using mathematical program we want to find a pose that's very close to the one the robot is currently in but such that water droplets coming out of it intersect with the Z height of the potted plant in the same X and Y position as where the plant actually is so we do this by using equations of projectile motion that we've seen in 8.01 we find the time t star at which the droplet reaches the correct Z height and then we find the positions x star and y star where that happens so we constrain our optimization problem for the error cost to be the difference between the candidate pose and the current pose and we set the constraint that the X and Y position of the landing is the same as the position of the potted plant next we interpolate the sequence of poses into a smooth sequence of keyframe poses
for the robot to follow currently we move from pose to pose directly this leads to jerky movement and also means that our robot does not stay on one plant long enough for the plant to be watered fully we propose a solution where we interpolate with two more segments in between we first move to a comfortable Z height where the robot does not have to move through itself to get from one pose to another and after the second segment we aim to be at the goal pose we spend the third segment at the goal pose so that the plant is watered fully this three-part solution leads to a smoother trajectory which leads to good results we pass this as a piecewise pose trajectory into inverse kinematics so that we can convert these into joint angles here's an example of the pose optimization actually working and you can see the frames and the robot snapping to the frames so for the inverse kinematics we use the ik solver manually and this allows us to go from desired poses to the joint angles and we update these every 0.1 seconds here without any optimization you can see that the robot just jerks around and the trajectory is not optimized at all for the controller we decided to use an inverse dynamics controller this allows us to go from joint angles to the forces that we need the inverse dynamics controller specifically allows us to use bigger time steps and it allows us to tune the PID gains manually here's the whole simulation being run and this is a whole closed loop so you see that the water shoots horizontally which allows us to get the speed and then we can water the plants knowing the speed and this entire simulation is a closed loop so the robot doesn't know anything beforehand um the camera gives this information and then the robot can know where to go from there thanks for watching our presentation and here's a link to the deepnote [Applause] from pyvirtualdisplay what was the hardest part um that was weird okay yeah right very nice thank you everybody for an
awesome semester that was really really really fun and for those of you that stuck in here the whole time that's awesome I'm happy to stick around for a little bit but uh yeah
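The projectile-motion step from the Garden Buddy presentation is easy to make concrete: given a launch position and velocity, solve z0 + vz·t − (g/2)·t² = z_target for the descending root t*, then propagate x and y linearly to get the landing point the optimizer constrains. A minimal drag-free sketch (the function name and example numbers are illustrative, not from the project):

```python
import numpy as np

def droplet_landing(p0, v, z_target, g=9.81):
    # Where does a droplet launched from p0 with velocity v cross height z_target?
    # Solve z0 + vz*t - (g/2)*t^2 = z_target for the descending root t*,
    # then propagate x and y ballistically (no drag).
    x0, y0, z0 = p0
    vx, vy, vz = v
    disc = vz ** 2 - 2.0 * g * (z_target - z0)   # must be >= 0 to be reachable
    t_star = (vz + np.sqrt(disc)) / g
    return x0 + vx * t_star, y0 + vy * t_star, t_star

# Horizontal launch from 1 m up at 2 m/s, landing at ground height 0:
# t* = sqrt(2/g) ~ 0.45 s, so the droplet lands roughly 0.9 m downrange.
x_star, y_star, t_star = droplet_landing((0.0, 0.0, 1.0), (2.0, 0.0, 0.0), 0.0)
```

In the presentation's optimization, x* and y* computed this way are constrained to equal the potted plant's position while the pose stays close to the current one.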
Robotic_Manipulation_Fall_2022
Lecture_2_MIT_6421064212_Robotic_Manipulation_Fall_2022_Lets_get_you_a_robot.txt
so anything else i should worry about before i start or just start okay welcome back everybody hopefully you guys can hear my audio it sounds like the audio going through the hdmi cable seems to be intermittent that's awesome okay everybody so um i brought some props today uh maybe i'll even try to uh leave a little i don't know if there's a class coming in right afterwards or not last time last thursday there wasn't so we actually had a little time afterwards um i don't know if there's i think sometimes there's a seminar on tuesdays at four i don't know if it started yet this time of the year anyways i'd love for you to come down and uh be able to play with the robots that i brought like one of every robot hand i had in the lab and the kuka we're going to talk a lot about so maybe if there's a bunch of people that come into the room at four and we have to get out then i'll just sneak through this door you guys could hang out with me there but it's not glamorous there but there is another door and that's what we're going to do okay so the goal of today is to you know start digging into the material most of the time this semester we're actually going to talk about software about ai about algorithms right but just this one day and maybe a little bit again when we talk about cameras and if we talk about some of the tactile sensors later but mostly just today we're going to talk a bit about hardware and i'm not going to talk about mechanical design or other things but what i'm going to talk about is the features of the hardware that we have to work with that are going to impact us and our algorithms okay so mostly i'll try to highlight the features that are going to affect the way you control your robots so the high level goal is first i want to tell you a bit about robot arms it turns out the hardware matters uh the choices that you make when you pick a robot and we've picked the iiwa
for the class matter there's a bunch of things that are properties of this robot that are not properties of all the robots and we'll lean into those a little bit uh we're going to talk a bit about how do you simulate a robot like this including some of the you know the level with which we're going to simulate the details of that hardware and then i'll talk a little bit about robot hands and about mobile manipulators uh and the like too i have in my mind you know this is kind of like you're booting up into your robot class right it's kind of like you know you gotta outfit your mech you know you get to pick your uh your different tools unfortunately i picked for you and we have just one robot and one uh hand but that's kind of what i'm thinking of here we're tooling you up for the class i'm going to run a handful of notebooks during the lecture all the demos that i'm putting together for class i'm putting into the links right from the notebooks so if you are reading the notes and you follow along with the deepnote links you should be able to run anything i run okay almost always okay so you may or may not know that um i mean we've clearly had robotic arms for a long time you've seen robotic arms doing things on factory floors uh they've been helping us weld cars together for a long time right uh but the robot arms are changing i don't know if you realize that it's like they're evolving right something's happening and um you know the robots that we used to think about and we used to build we still build of course for many applications are these relatively big scary robots where often there's a cage between the human and the robot right or they're confined to a very particular sort of warehouse environment and they're extremely good at what they're doing but they're really not designed to be around people and one of the big trends in robot arms today is that they're
trying to be more and more comfortable uh you know we're talking about cobots these days you know people and robots cohabitating or working together uh to get the job done this is one in particular that's the rethink robot and that's rod brooks who's uh one of our emeritus faculty he was the director of the lab for many many years and that was a big deal for him he was really making a point when he posed for that photo i'm not afraid of the robot draping its arms over me because we built a robot that's fundamentally safer to be around and actually you know while this was originally targeting very much a factory environment the idea behind the rethink kind of robots was maybe a small shop like a mom and pop owned bakery and they want to be able to program a robot you know sort of a little bit closer to the humans and of course we're all dreaming of the robot that we can take home and that will hug us good night right uh and people are working on that right there's uh active research on trying to build baymax you know in various forms and we'll talk about it when it becomes relevant so if however you go around to the various research labs working on manipulation these days there's a pretty common cast of characters right so you'll see kind of a handful of robots maybe a handful more that are on this page but you just normally see sort of one of these a handful of robots this is the universal robot series you can see the universal robot three which is a three kilogram payload you can see a 15 you know they can get big but they all look about the same like that uh rethink had their baxter which was their official one their first one then they had sawyer which was one arm kuka you see here uh kinova the jacos are actually a great robot in particular because they are mobile this one as you can see has got a pretty serious infrastructure that goes around it i
would not consider this a mobile arm it's got a you know it took us a freight elevator and a half uh and then some pushing and grunting to get it down here uh you know the kinova was built to be mounted to a wheelchair originally and has been one of the more popular mobile manipulators uh this is an abb yumi the franka panda these are all you know the robots that you'll see in the research labs and they have different properties and the biggest thing maybe that differentiates them is the way you control them typically falls into one of two uh modes we have position controlled robots and then we have robots that have torque sensing and torque control so we tried to learn a few lessons from last time the screen was too dim on the uh on the video stream we thought oh we'll just turn the lights down that'll fix it turns out the big skylight is the offender not the lights in the room so we probably didn't fix it and i apologize we posted on piazza though hopefully the link to the slides if you want to follow along that way i also thought i wrote too small last time so i'm going to try to write comically big you know we'll see how that goes okay so it makes sense that there's uh you know sort of the words make perfect sense i could command my robots by sending position commands when i say a position in the um sense of a robotic arm the positions i'm actually sending are the joint angles of the robot so right i could command saying go to this joint angle go to this joint angle follow this time series of joint angles you know these are the ways you talk to a position controlled robot that is very different than saying i want you to apply these forces or these torques at the joints okay and in order to do torque control you have to have a certain type of robot in fact if you care very much about torque control and torque sensing that sort of quickly uh reduces the field of robots that are viable for
you and only a few of these are actually torque sensing and torque controlled robots uh and i wonder if you know why that is like why is it that so many robots are position controlled right why are so many robot arms position controlled it's actually fairly um you know there's a fairly sophisticated argument behind it i'll give you a light version of it there's a slightly more dense version of it in the notes if you care to read but there's a couple big ideas that i think do affect the way we talk to our robots that i want you to understand all of the robots on this screen are driven by electric motors okay i think that's true yeah so the core thing that is sort of supplying power to our joints is an electric motor an electric motor you would think the standard model of an electric motor would be that there's some sort of simple relationship the current i put into the motor should be proportional to the torque at the joint okay similarly the voltage you'd expect to be you know proportional to the speed of the joint and these are fairly simple relationships when you're in the sort of right spot in the torque speed curves then these things actually are pretty good models of how the motors we build today operate um okay so if we have this sort of nice proportional relationship i mean it's often even just a linear relationship or affine relationship between current and torque then it seems kind of silly to say well most robots today aren't actually torque controlled because why certainly i could just control the current i'm sending to the motor right why can't i then control the torque of the motor and the reason is that electric motors like to spin fast thousands of rpms right and robots you probably don't want that guy moving at thousands of rpms right and they don't like to produce a lot of torque right so actually uh it's very important to put a big transmission a big
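As a quick sanity check on the idealized motor model above (torque proportional to current, voltage roughly proportional to speed), here is a minimal sketch; the constants `Kt`, `Ke`, and `R` are made-up illustrative values, not taken from any particular motor:

```python
# Hedged sketch of the ideal DC motor relations from the lecture.
# Kt, Ke, R are illustrative example constants, not real motor data.

def motor_torque(current_amps, Kt=0.05):
    """Shaft torque proportional to current: tau = Kt * i."""
    return Kt * current_amps

def steady_state_speed(voltage, current_amps, R=1.0, Ke=0.05):
    """Speed implied by the back-EMF relation V = i*R + Ke*omega."""
    return (voltage - current_amps * R) / Ke

# A couple of amps gives only a tenth of a newton-meter, but hundreds of
# rad/s -- motors like to spin fast and don't like to produce much torque.
print(motor_torque(2.0))
print(steady_state_speed(12.0, 2.0))
```

The numbers make the lecture's point directly: the bare motor is torque-poor and speed-rich, which is why the big transmission comes next.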
gearbox something that looks like this you know this is just a particular planetary gearbox but we typically have between the motor and the actual joint that's moving a big gearbox okay which we'll often just call the transmission and that gearbox is meant to turn the super high revolution count of the motor into a low revolution count on the joint and to similarly amplify the torques that you can apply and it's very common on the robots that we see today to have this be in excess of you know 100 or even a thousand to one uh gear ratios okay now that turns out to have a profound effect on the way that we think about the dynamics of our robot okay for a handful of reasons i want to make sure i get it right so i want to you know call them out highlight them carefully here so the first thing is that it turns out that some of the gearbox dynamics are hard to model okay why are they hard to model because if you think about what's happening inside there there's a lot of friction of gears rubbing against each other there's actually backlash do you know what backlash is right so you have teeth of our gears going like this and when they're pushing in this direction everything's good they're applying sort of a constant force if you change directions there's a momentary gap where they move and make contact with the other teeth for instance right and if you don't model that then you're going to get weird effects okay there's all kinds of things that happen inside there there's flexing of the gears and stuff like this okay in particular friction backlash okay but because of these you know there's already hard difficult to model effects and then the thing that happens is that you have a compounding uh sort of importance of these effects because you take some of these dynamic features and you multiply them by some big numbers and suddenly they're a very significant part of your dynamics okay there's
another effect that goes along with having this big gear ratio which is called reflected inertia okay i'm going to tell you what that means in a second here and what these sort of all boil down to is that it turns out that on these robots position control via for instance a pid controller where this is a proportional integral derivative controller works really well um in a strange way the magnification of the gearbox actually makes pid control work better than you would expect if you didn't have the transmission or the big gearbox okay so i want you to understand in a minute how these things combine so pid which is a very simple you know position control idea in this sense becomes a dominant force in sort of controlling robots and it takes a lot of work and cost actually to do better and um you know to actually achieve some sort of torque control so most robots out there if their goal is to do precise motions over and over again are perfectly happy to stick with the stuff that works very well which is position control okay so let me just step through that in one level of detail not in its full glory but let's just make that argument and make sure you understand some of those points okay so um i say that transmissions are difficult to model okay gearboxes are hard to model so what is the implication of that now some of you today you know say i know how to train a neural network to model anything i'm not afraid of some hard to model gearbox and i actually love that uh because people are starting to make progress here where traditionally we've just said don't try to model the gearbox it's too hard some people are making progress in modeling these really hard things and we've seen some success there and i actually think that can be great uh so we might see a revolution in those kind of technologies but classically we've said those things are hard to model uh don't even
try and so if you don't try to model that then your alternative is to add another sensor so basically if i'm applying current and voltage at the source of the motor and i want to regulate the position and i've got something difficult to model what i need is a sensor on the other side of the difficult to model event right okay you'll see i guess you'll know in two minutes or something okay the most common and easiest sensor to add after the motor is a position sensor back in the day we had a lot of potentiometers these days they're mostly encoders okay and then we can use feedback a simple feedback rule with this pid that's what the pid is to regulate the joint angle right so i don't have a perfect model of the gearboxes but i know some very basic properties like if i apply more torque then you know i have a monotonic relationship between the torque i'm applying at the motor and the output right it's not that it's gonna somehow suddenly go backwards or anything like that so actually it's enough to add a simple feedback loop around it and do some basic control okay what's interesting though is that um you know there's actually a science of trying to do control without the big gearboxes so um there was a time where people were saying this path we're going down with big gearboxes seems wrong-headed maybe we can actually just scale up our motors big enough that we can actually get very low gear ratios and avoid some of this and then achieve high bandwidth torque control and actually the leaders of that are on our faculty in mechanical engineering that's harry asada and kamal youcef-toumi i don't know why i picked a picture of him with fish but uh i think it was hard to find something different he's you know he doesn't always have fish but he's uh often found in building two i guess and they wrote a book it was actually kamal's thesis that became a book
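The encoder-plus-PID loop described above can be sketched with a toy joint model; all gains and the joint's inertia and friction numbers below are invented for illustration, and the hard-to-model gearbox is deliberately left as a black box behind the feedback:

```python
# Minimal sketch of a PID position loop on one joint. The controller only
# sees the encoder angle on the joint side; gains and the crude joint-side
# model (inertia, viscous friction) are illustrative, not from a real robot.

def simulate_pid(q_des=1.0, kp=40.0, ki=2.0, kd=9.0, dt=0.001, steps=5000):
    q, v, integral = 0.0, 0.0, 0.0   # joint angle, velocity, error integral
    inertia, friction = 1.0, 0.5     # stand-in joint-side dynamics
    for _ in range(steps):
        err = q_des - q
        integral += err * dt
        tau = kp * err + ki * integral - kd * v  # PID on the encoder reading
        a = (tau - friction * v) / inertia       # crude plant
        v += a * dt
        q += v * dt
    return q

print(simulate_pid())  # settles near the commanded joint angle
```

Because the feedback only needs the monotonic torque-to-motion relationship mentioned above, the unmodeled transmission details get absorbed by the loop.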
about direct drive robots okay and they're saying keep your gear ratios under 10 for instance and the reason is and the analysis they did in that book which i think is extremely important to understand is that if i look at the equations of motion of my robot and this is worked out in a little bit of detail in the notes then i get when you see these equations i want you to basically see f equals m a so m a equals a bunch of forces okay and instead of the mass he's writing j arm which is the inertia of the arm which is the mass like quantity theta double dot is his angular velocity and he's relating that to the torques that come from gravity friction coriolis terms and stuff like this okay and then n in this is the gear ratio the transmission ratio i said oh you're right good lord so he wrote alpha double dot equals his angular acceleration i'm going to call it q double dot everywhere so let me use that here angular acceleration thank you for catching that okay and n is the gear ratio and the only important thing i want you to get here is that the gear ratio pops into this equation in what i thought initially was a surprising way uh it multiplies some of the terms in the equation by n squared i would have thought okay if i've got a hundred to one gear ratio then i'm getting some terms in my equation that are scaled by a hundred that's pretty bad right turns out the scaling on some of the terms is actually you know the square of that right so um a dramatic change to the dynamics in particular so j rotor is the inertia of the motor j arm is the inertia of the arm and because the gear ratio affects the arm but not the motor even though if i'm looking at the robot i'm thinking okay you know the dynamics of my robot are dominated by where the mass is on my arm it's actually not like that if you look at the dynamics of the robot from the viewpoint of the motor all the stuff
that's happening down at your arms is reduced by the square of the gear ratio and just the inertia of your motor moving around is actually on par with the inertia of your arm moving around your motor is a simple thing that's sort of spinning around its axis it's not changing dramatically depending on the configuration of the robot so it has a big effect on the dynamics of the robot it turns out you know when i go to pick things up you'd think i would need very different control gains if i'm picking up something heavy or not but on a lot of these robots if you have a big gear ratio you can just use the same control gains everywhere because picking stuff up is actually kind of lost the gear ratio you know squishes that out and in fact the dynamics look fairly constant over the workspace because the coordinate varying terms are getting squished out okay even more the dynamics end up being diagonalized okay so you can almost think of controlling every joint independently instead of all the couplings between the joints again because the coupling terms relatively get damped out okay so it means yeah please no you're good that's great no no it's good i appreciate you calling me on it okay so think of this schematic here i've got a big motor oh this is actually a tiny motor okay and i've got the robot arm a tiny arm in this case okay and then i've got a whole bunch of gears in between it so i'm going to call this my motor this my arm and this my transmission in between when i say the gear ratio is 100 to one that means that every 100 turns of the motor is going to only turn the arm once fully around so that's my gear ratio and what i'm saying is that if you think about the physics of this system there's some inertia in the arm it's going to take some torque in order to start causing accelerations here at the arm okay it turns out that even if
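To put rough numbers on the reflected-inertia argument (all values here are illustrative, not from any real arm): with a 100:1 ratio, the rotor inertia seen at the joint gets multiplied by N squared, which lands the tiny spinning magnets on par with the whole arm:

```python
# Worked numbers for the reflected-inertia point. Illustrative values only.
J_rotor = 1e-4   # kg*m^2, the small spinning magnets inside the motor
J_arm   = 1.0    # kg*m^2, the big heavy arm
N       = 100    # gear ratio

# Seen from the motor side, the arm inertia is divided by N^2...
J_arm_at_motor = J_arm / N**2
# ...equivalently, seen from the joint side, the rotor inertia is
# multiplied by N^2 and ends up the same size as the whole arm.
J_rotor_at_joint = J_rotor * N**2

print(J_arm_at_motor)    # tiny -- the arm almost disappears from the motor's view
print(J_rotor_at_joint)  # on par with J_arm
```

This is why the dynamics look nearly constant over the workspace and nearly decoupled joint to joint: the configuration-dependent arm terms are divided by N squared while the configuration-independent rotor term is not.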
there's a lot of mass over here it has a relatively small effect because the gearbox makes the effect it has on current at the motor small compared to just the magnets that are in here that have to move around those magnets have some inertia now you'd think if it's tucked inside my robot and i've got a big heavy robot arm and a little motor inside here clearly the mass of the arm should dominate the motor but it's not the case it turns out that the relatively smaller magnets even though they're a small percentage of the total mass of the robot actually have an inordinate effect on the dynamics of the robot when you have a big gear ratio thank you and you know the direct drive robot story was actually let's see if we can build robots differently let's keep the gear ratio extremely small and over the years there have been various ways to accomplish that the first ones actually back in kamal's thesis had enormous armatures they had these big old motors in order to achieve direct drive right um people have done it with cable drives there's a famous series of robots like the barrett wam if you've heard of it the whole arm manipulator that achieved it by having a very low distal mass okay so if you keep the weight of your robot very low and you put your motors on the table and you run cables then you can reduce the torque requirements and get away with relatively smaller motors that's another way people have done it and the reason this is actually coming back now in 2020 is that there are more motors out there that are working extremely well these outrunner motors if anybody knows from hobbyists you know the uav world has popularized outrunner motors it's just a different configuration of the motors which are capable of or happier producing higher torque for their mass okay and so we're starting to see some new robots being designed
again that are trying to be closer to direct drive but most of the research robots you see right now are in the very high gear ratio regime yeah that's a great question so i'll just repeat it for the video um yeah so the question is that sounds great you're taking all the complexity of the world you're kind of driving it down to small and the dynamics become easier why would you want to do anything else besides that right so your ability to control the forces on the world then is also diminished right so if you want to control the forces you're applying to the world or be more force sensitive or other things that's where this starts becoming a problem if you're just trying to control positions then it's a great thing to do yeah so when you want to hug rod brooks uh then you need to be a little different i think okay um this is the blown-up picture of one of the iiwa joints okay so they were taking a different approach and uh there's a series of robots that do a similar thing but i actually think that this robot was one of the first ones that really changed people's minds about this approach being high performance and viable it was originally done at the dlr the german space agency and then kuka turned it into a product right so they basically said well we can keep a big gearbox that keeps the ergonomics of our motor uh where we want them but in addition to putting a position sensor on the joint let's go ahead and put a torque sensor also on the joint now that seems like an obvious thing to do why wouldn't everybody do that well torque sensing is a bit of a black art and they did an extremely good job to make it work and make it packaged well okay so the iiwa it's actually written all in lower case and it drives me nuts but that's how they write it and i'll try to respect that um okay so the iiwa still has a high gear ratio but they've added both position and torque
sensing at the joint side or across the transmission okay so if they can measure the torque directly then they can close a feedback loop on the torque and try to regulate the torque now to do that they had this beautiful design with strain gauges okay strain gauges again you know i think people have gotten better at it but they're generally hard to do to get high performance and hold calibration and all these things okay if you're trying to measure force there's always going to be a trade-off in deflection how flexible your shaft is versus how rigid and your ability to measure torque so uh the key thing that happened on this robot is they were able to make what they call a flexible spline and they think of this as a flexible joint robot they put a component in that shaft which is actually a stiff spring okay when i say stiff spring i think it's like i don't know 5000 newton meters per radian or something like that okay and they achieved performance in terms of position commands and other things that would make anybody who wanted to use it on a factory floor still happy but they were able to still get torque control for people who wanted that okay they did that not only with the beautiful design but with some really good control which we'll talk about later when we talk about force control and the like okay keep going down this spectrum there's another type of robot out there which actually baxter is a version of which uses series elastic actuators okay so you could call this a series elastic actuator but we typically don't because this is a very stiff spring and we want to think of it as a flexible joint and you know admit that it's flexible but mostly think of this as something that's capable of doing very high bandwidth control so if you needed to follow a very fast trajectory you could if we're saying that we're in a
different operating regime when we're operating around humans and we don't need the ability to control very high frequency things then you can maybe make the problem easier by having a soft spring taking this down a huge range into a much softer spring so you know these are more like 100 newton meters per radian orders of magnitude softer let's say okay and then really you can even just use position sensors on both sides to measure the deflection of the spring and have a torque sensor okay this is the whole idea in series elastic actuators and it owns a certain part of the design space you wouldn't want to be doing like i said super high performance super high bandwidth things with series elastic actuators but i guess for hugging rod brooks it was appropriate right does that make sense any questions on sort of the high level architectures of these yeah this is a great question why couldn't you do something maybe high performance so from a linear systems perspective you basically have a low pass filter the spring is going to look like a low-pass filter between your motor and the shaft so if you were to try to do something very fast with your motor you'd only see the decayed response at the output shaft because that spring you know to first order looks like just a low-pass filter and so yeah you do give up your ability to do high bandwidth control yes correct that's exactly right the maximum torque is not affected it's the rate at which you control that torque yeah and so some people say series elastic actuators or any elasticity almost makes the robot safer but that's a little bit of a dangerous argument because you can actually store a lot of energy and you can't stop applying that energy very fast right so you might couple that with weaker motors and other things that keep you in a
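The series-elastic idea above reduces torque sensing to measuring a spring deflection with two position sensors, and the same spring caps the control bandwidth; a hedged sketch with an invented stiffness `k` and joint inertia `J`:

```python
# Sketch of the series-elastic actuator idea: torque from spring deflection.
# k and J are illustrative values, not from baxter or any real SEA.
import math

def sea_torque(theta_motor, theta_joint, k=100.0):
    """Spring torque tau = k * (gearbox output angle - joint angle)."""
    return k * (theta_motor - theta_joint)

def bandwidth_hz(k=100.0, J=1.0):
    """Rough cutoff of the spring's low-pass behavior: sqrt(k/J) rad/s."""
    return math.sqrt(k / J) / (2 * math.pi)

print(sea_torque(0.1, 0.0))  # a 0.1 rad deflection reads as 10 N*m
print(bandwidth_hz())        # a soft spring => only a couple of Hz
```

The trade-off in the Q&A falls out of the same numbers: a soft spring makes the deflection, and hence the torque, easy to measure, but pushes the low-pass cutoff down so fast torque commands decay before reaching the output shaft.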
safety i think it's not enough you don't make a safety case purely by saying i have series elasticity you need some extra requirements to be met good question so once you have a robot that has torque sensing then you know they did make a safety case with this this is when it was at the dlr that's sami haddadin and he was trying to argue that the torque sensing on this robot is good enough that it becomes suddenly safe to be around humans the torque sensing is good and the bandwidth is high so if you were to sense did that end already oh here we go okay so he made an impact with this work by basically just starting to have the robot hit him at high speeds okay and showing that even if it somehow collides with him at high speeds it can so quickly measure the torque realize it's made contact and stop those things combined made it an effective safety case if you go on he starts hitting himself in the head and then if you look hard enough on the web i think you can find something with a knife not his head but you know and this was one of the first arms to get certified by some industrial standards committees in europe in particular this was the german project okay and that's a really big deal if you want to be around humans for me too i care a lot about you know robots let me stop that so i can have your attention so uh if you think about trying to do delicate control even if you're not manipulating a human but you're manipulating objects and you're trying to control the contact forces in order to crack an egg or other things like this i have chosen the iiwa in our lab and in the class here because it gives you the ability to do that potentially now that's a fairly expensive arm that's like 80k or something for that arm the hands add up too okay so it's not on the low cost side but it's on the high performance side for what we want to do the way you know oh yeah do you have a question yeah right i
think that once you have that big transmission the current is a very poor indicator of what's happening at the output shaft so basically if you write the equations of motion you say i've got a motor torque and a current uh there's some terms in there that dominate because they get multiplied by the big number and you just can't trust that relationship anymore yeah so you know in addition to hitting yourself the way you make a rock star uh torque control robot demo is you convince people that you can pretend that your robot's not there okay so this is gravity compensation let me restart that a little bit this is like on the kuka website right the standard thing you'll see when people are trying to show you they built a robot that's capable of accurate torque control is that they model the equations of motion of the arm and they try to cancel them out so therefore you can take this big heavy robot push it around as if it's not there okay and if you can do that it really is a fairly good test i mean for him to push it with a pinky or something like that you know there's a lot of transmission dynamics here that are being cancelled out by the ability to sense torque and close that feedback loop so that's very impressive the earlier arms that claimed to have torque control would have you know been better than a rigid robot but nowhere close to that okay uh right so that was called gravity comp in their video gravity compensation but they're actually doing a little bit more than that they're canceling out friction terms and other aspects too great um i'll take more questions if you have them yes um good so the question is what is the difference between the stiff and the soft so the soft um the iiwa is an expensive carefully engineered system that achieved highly accurate torque sensing even though it had stiffness you can get away with
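A toy version of the gravity-comp demo described above, for a single link; the mass and center-of-mass numbers are made up, and a real arm would cancel the full multi-joint gravity and friction terms rather than this one-line model:

```python
# Toy gravity compensation for one link pivoting in a vertical plane.
# m, l_com are illustrative values; angle q = 0 means horizontal.
import math

def gravity_torque(q, m=5.0, l_com=0.4, g=9.81):
    """Torque gravity exerts about the joint at angle q."""
    return m * g * l_com * math.cos(q)

def gravity_comp_command(q):
    """Feed-forward torque that cancels gravity, so a pinky push moves the arm."""
    return gravity_torque(q)

q = 0.3
net = gravity_comp_command(q) - gravity_torque(q)
print(net)  # with perfect cancellation the net joint torque is zero
```

With the gravity term cancelled, any leftover torque the joint feels comes from the person pushing, which is exactly why the arm floats under a fingertip in the demo.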
much cheaper designs much less accurate designs much cheaper sensors if your spring is soft because you just have to measure a large deflection and so it's just easier to measure torque you may imagine if i apply a certain torque and my spring you know only changes a hundredth of a degree then i need a really accurate sensor if i change the same amount of torque and it flexes like this then it's a simple sensor to get the job done electric motors aren't the only game in town although they're winning they're definitely winning oops i'll go out of order but atlas for instance is a torque controlled robot mostly we actually used position control in the arms but its legs were torque controlled but that was a hydraulic robot this is the earlier version or actually even the new version of atlas okay uh so they're pumping fluid through valves and they're measuring the pressure in the fluid the differential pressure of the fluid across a valve is roughly proportional to the force that's being exerted so that's another way to classically achieve a torque controlled or force-controlled robot is with hydraulics um but electric motors are definitely winning even at boston dynamics you know the humanoid is still hydraulic but the quadrupeds are now electric and i wouldn't be surprised to see an electric humanoid coming so now the ability to do that torque control it's important if you want to hit yourself in the head um it's also just important in practice for the type of robot manipulation we want to do so as an example of this if i remind you of this loading example and in particular that one move right there we don't know accurately the position size everything of the dishwasher in fact we actually had a bunch of these robots doing the task and every robot was a little different every dishwasher was in a little different place relative to the robot right all we had to do was sort of
get the robot tool to get its hand around the handle that wasn't too bad but then as we're moving through the arc of the dishwasher we're in a very compliant mode we're using those torque sensors we're letting kuka's low level feedback controller put the robot in a relatively compliant mode the joint angles are probably deviating from our planned trajectory quite a bit but they're complying to the dishwasher door and that's just very important right there's a lot of tasks like that you can see robots that kind of get jammed right if there's rigid on rigid you know you think bad things happen uh and the ability to do this sort of soft thing and let the world go with the flow a little bit uh is a big deal for making these things work people are doing well with position control robots too i'm just singing the praises of torque control okay so let's think about i just you know argued that the hardware is important and even the way they write a controller around the hardware is important so let's just connect that back to what you're doing on your problem set you know we have this manipulation station you've been looking at the inputs and outputs right the iiwa in this manipulation station system right there are input ports that take the position so if you want to send a position command it will close a feedback loop around position that's fine it's capable of doing that too but the extra feature is it has this optional feed forward torque okay so actually inside the system inside this big box here they're running their own low-level controller that is trying to regulate out the gravity regulate the friction and they're allowing you to think about only the extra torques beyond what's required to move the robot so this is the feed forward torque in addition okay and then you can measure position velocity torque right you have the torque commanded torque
measured and some sense of external torque which is the difference between the torque they expected given their model of the robot this is the torque that was required to move the robot and these are the other torques that came from the world so if i measure some torque and i subtract out the robot's uh dynamics then you're getting the external torque the extra forces from the world and you'll see that as you play with it um more okay but if we go to simulate this let's just think about how do you actually simulate that the first piece of simulating this of course is the physics engine we need to have the equations of motion of the robot so let's simulate first the iiwa okay and actually if i want you to take one thing away from this i put it on that title slide and i'll put it up again at the end here um it's that simulating the iiwa will require a physics engine no doubt but it's more than simulating the physics it's somehow more than just simulating physics physics is the first step but having a physics engine is not enough to simulate a robot of this complexity okay you have to simulate the controllers the sensors all that other stuff in order to have a good faithful simulation okay in practice in drake the physics engine is called the multi-body plant okay this is doing the sum of the forces equals mass times acceleration you know including contact forces it's including uh the friction these kinds of things okay if i take a description of the robot and put it into the multi-body plant this is how i do it okay uh in practice all you have to do is you just say you know make a multi-body plant add iiwa from file there's a few different collision models we have a lot of times we ignore the collision on the arm but just put the collision on the hand it just keeps the modeling simpler if we just add an iiwa into the physics engine and you say simulate then guess what's going to happen the iiwa is going to fall
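The external-torque estimate described above is just a subtraction: measured torque minus what the model predicts for the observed motion. A toy version with a trivial single-joint model standing in for the real arm dynamics (all numbers invented):

```python
# Sketch of external torque estimation: tau_ext = tau_measured - tau_model.
# The one-joint model below is a stand-in, not any vendor's dynamics model.

def model_torque(q, v, a, inertia=1.0, friction=0.5, gravity_term=0.0):
    """Torque the model says is needed to produce this motion."""
    return inertia * a + friction * v + gravity_term

def external_torque(tau_measured, q, v, a):
    """Whatever the model can't explain is attributed to the world."""
    return tau_measured - model_torque(q, v, a)

# Free motion: measurement matches the model, so tau_ext is zero.
print(external_torque(1.5, 0.0, 1.0, 1.0))
# Contact: an extra 2 N*m shows up that the model can't explain.
print(external_torque(3.5, 0.0, 1.0, 1.0))
```

This is the same subtraction behind the collision-detection safety demos earlier: a sudden jump in the unexplained torque is treated as contact.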
to you know into the abyss right so you need one more line which would say and by the way why don't you weld it to the table okay or weld it to the world at the origin is what this is doing and then go okay and what's happening behind the scenes there is that uh somebody kuka you know provided a description file in one of the standard robot formats of the iiwa dynamics typically people have had to clean them up okay but if you go in and dig in or if you need a new robot or a new environment and you want to add to it there are these description formats that you might have seen sdf urdf and mujoco format are the three that we handle directly and it's just a simple description file actually it's not as simple as it should be xml is kind of gross but it's a description format that allows you to say what the mass is what the inertia is what the geometry looks like importantly you can set a different visual geometry from a collision geometry maybe you want the robot to look like one thing but for the physics you want to actually use let's say simpler geometry so you don't have like some weird artifact in your mesh that causes you to get your arm caught on the table or something like that okay and then you just list the links and the joints it's a pretty simple description format normally the robot providers give those to you uh in practice the robot providers often provide something and then the community cleans it up a little bit and you can find online something good you know beware if you find one online uh a lot of times they're pretty bad i'm sort of shocked at how bad they are um a lot of even the kinematics can be wrong but almost certainly the inertias are often wrong in fact mujoco which is another simulator by default i think ignores the inertia in the file and just recomputes its own because that's certainly an option you can turn on i think it's on by default just because they don't trust there's so
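For concreteness, a hypothetical SDF-style link fragment showing the fields mentioned above, mass, inertia, and separate visual versus collision geometry; the names, numbers, and mesh path are invented, and real iiwa description files are much longer:

```xml
<!-- Illustrative fragment only; values and file paths are made up. -->
<link name="example_link">
  <inertial>
    <mass>4.0</mass>
    <inertia>
      <ixx>0.02</ixx> <iyy>0.02</iyy> <izz>0.01</izz>
      <ixy>0</ixy> <ixz>0</ixz> <iyz>0</iyz>
    </inertia>
  </inertial>
  <!-- what the robot looks like -->
  <visual name="visual">
    <geometry><mesh><uri>meshes/link.obj</uri></mesh></geometry>
  </visual>
  <!-- simpler shape used for the physics, as discussed above -->
  <collision name="collision">
    <geometry><cylinder><radius>0.06</radius><length>0.3</length></cylinder></geometry>
  </collision>
</link>
```

Note the inertia entries are exactly the numbers the lecture warns about: they have to satisfy physical constraints (positive mass, a valid inertia matrix), and files found online often violate them.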
many bad inertia files out there you can write numbers into the sdf the description format which are not possible for any physical system right there are constraints that these numbers have to satisfy to be you know generated by physics and often they don't satisfy them right and sometimes you put a file in so you know drake for instance will say you told me a nonsensible inertia and mujoco would just be like i'll just ignore that and simulate a different one they're different design philosophies but um you know these files are out there they're sometimes wrong okay and then um you know the other thing that you saw in your files and you will be able to use in your projects and the like is we just have a simple shorthand yaml language that just says if you want to add the robot with some bins for instance and then you add a foam brick you'll see there's one extra little modeling language that makes it fast to add lots of different models together into one simulation okay now so multibody plant is the physics engine you also need a geometry engine okay it's called the scene graph in many gaming engines and in drake also it's called scene graph so the multi-body plant handles the masses inertias and kinematics but the scene graph handles all the geometry queries and if you want to talk to a renderer if you want to render a high quality picture if you want to talk to the visualizer if you want to compute collision geometries that's the geometry engine that's in scene graph okay they both manifest themselves as systems in your system diagram so multi-body plant the physics engine is just a system it has a lot of mostly optional input and output ports okay scene graph is just a simpler system where you can just make connections from other systems
saying "I'm going to declare some geometry, and I'm going to tell you its pose," and then you can ask questions about collisions, about rendering, and so on. It's actually interesting to think about why we separated those two; you could argue they should all go together in one physics engine. But there are a lot of cases where you'd like to have powerful sensor models and geometry rendering but use a different physics engine. In the underactuated class we often write our own simple dynamic equations, or if you have an autonomous driving project you probably want a very simple model of the car; you don't want to simulate tire mechanics, and the full physics model would be overkill. So you can have multiple physics engines all feeding their geometry into a single SceneGraph. That's why there are two systems, if that seemed weird. Okay, so you put those together and you have a basic simulator. Let me pull one up. If I populate my system with an iiwa model, first recall what I said about the context: the context is just the state, the time, the input. What's the state of the physics engine for the iiwa, a seven-degree-of-freedom robot? You can just print out the state: it has 14 states. For most physics engines the state is going to be the positions and the velocities. It also has a bunch of parameters; you can change the parameters and take gradients with respect to them and so on. I'll cruise through this: if you simulate with just the MultibodyPlant, you can see how the state evolves. The physics engine is complete in that sense, but there's no rendering yet, because I haven't added the SceneGraph. If I want to visualize the scene, I'm going to add two systems: I'm going to add
the MultibodyPlant and the SceneGraph, and then I can call publish, and suddenly I've got a rendering of the iiwa in the visualizer. And now if I simulate, this is what happens. I think I could play that back; no, I didn't save it, but the next cell shows how you do an animation, so you can save and record in the player and everything like that. Okay, so this is what happens when you simulate just the physics of the iiwa. The real robot will never do this, thank god. Simulating the physics is not enough to simulate a robot of this complexity. This model says: give me your torque input (currently the torque input is just set to zero, because there's nothing else happening), and given that torque input I'll compute the equations of motion, figure out how the positions and velocities change, and then send them to the geometry engine. That's all we've done so far, and it's not enough. In practice, there's a big box down at the base that's running their controller, doing something like gravity compensation and friction compensation, and we need to add that iiwa controller into the system. That's an additional bit of complexity; a lot of simulators don't provide the infrastructure to write all these controllers too. So you get a bigger class diagram: now we have a PID controller, an inverse dynamics controller, and the like, connected to my MultibodyPlant and SceneGraph. But it's just a diagram; the dynamical-systems language puts everything together. And now if I send the zero command not directly to the plant but to the iiwa controller module, the robot simulates like this, which is much more like what happens on the real robot. Okay, there
are levels of fidelity here. You could simulate all of the details of the controller. In fact, someone asked on Piazza: there are actually mechanical brakes inside the arm. If you just had a motor trying to hold position on an arm for a long time, that motor would heat up and burn out, so a lot of robots designed for these kinds of operations will engage a brake as soon as the robot stops. We largely ignore that in the simulation of our robot. You could model it, but from my perspective, as soon as I send a command the robot starts moving; something down in the details locks and unlocks that brake, but it has never influenced the motion of the robot at the level of detail I've looked at. If we needed to model it, we could. The same goes for the detailed flexion in the joints: their low-level controller cancels that out well enough that I typically ignore the flexible-joint dynamics. If we really wanted to move at the limits of the robot, we would add that in. Okay? So the manipulation station, this thing that has the input and output ports, is just the combination of the controller, the SceneGraph, and the dynamics: the three things you're almost always going to use. I have my MultibodyPlant and SceneGraph (physics engine and geometry engine) and my controller, implemented in a few pieces, and all I do is put a diagram around them and expose input ports at the level of abstraction you want. This is the thing you've seen and are probing on the problem set. All it is is a diagram that wraps the details of the robot and exposes some ports so that you have a new level of abstraction: you can think of the manipulation station as one system with all those details inside it. The cool
thing is, of course, that I can take that system out of my code, put in a different system that talks directly to the robot, and the same inputs and outputs will just drive the robot around. We do have a handful of these robots upstairs. There was an early prototype version of this course, before COVID and before you all multiplied, where everything was going to run on hardware, and I'd say: at the end of the year, if you've demonstrated something convincingly in simulation and want to try it on the hardware, the sim-to-real gap is small enough that we could consider doing that. Okay, so that's the power of the modeling. Computer science is all about abstraction, and in dynamical systems, block diagrams are how you accomplish that abstraction. Questions about that? Good. I chicken-scratched the diagram because the details are on the slides and in the notes. I wrote "iiwa position measured," but that's actually one of many input ports, and there are many more output ports. And once you put a hand on the robot there's another detail: the controller for the hand is also going to be in there, a few more little systems that provide that total abstraction. Okay, so let's talk about hands; that picture has the answer. We talked about arms, and we talked about how physics is only a subset of simulation. Let's talk about robot hands, and why I picked this simple WSG. Of course, when people think about robot hands, they think about a dexterous hand always holding a light bulb or something fragile, an egg, in the glamour shots. This is the Shadow Hand; I don't have one of those here. I do have the Allegro Hand, in the middle
there. This one costs a lot more money; that's why I don't have it. The Shadow Hand is the one from the famous OpenAI demo: first just spelling letters, and then they did the Rubik's cube after that. I think they were operating at the very limits of what that hand was capable of, and they spent a lot of time fixing the hand and working with the hand provider to make that endurance testing happen. But there's an argument out there, one Matt Mason used to make maybe most strongly, that a lot of what we want done with manipulation in the home could be done with simple grippers. If I gave you one of those grabbers from the toy store and sent you into my home, you'd be pretty useful, right? Restricted to a little two-finger gripper, you'd be way more useful than a Rubik's cube twiddler. So there's something to be said for this: our robot-hand technology will mature and will enable great things, but I don't think we can say robots aren't good at manipulation yet because of the hands. If you put a powerful enough brain behind the hands, we should be expecting more than we're seeing so far. One of the best examples that made that point was the PR1 (some of you know the PR2 robot; this was an early prototype). The robot went into a home with simple two-finger claw grippers and did all kinds of super useful things: it cleaned up the living room, there's another clip where it gets a beer out of the fridge, it mops; incredible things. What's the secret? Teleop. It was all driven by somebody behind the scenes moving the arms. But the hardware was capable, and they demonstrated that a long time ago. And that was, I think,
just really eye-opening: we can't blame the hardware, because simple hardware can do a lot of useful things. So in that spirit, we've gone with a simple but high-quality hand for most of the experiments. We can play with dexterous hands (I put some in the notebooks if you want to play with the Allegro Hand, and we're doing some research on more dexterous hands), but I think a lot of the manipulation problems that get toward intelligence can be studied while avoiding the complexity of the hand, focusing on the complexity of the manipulation with a two-fingered gripper. So this is the Schunk WSG 50. It's in the iiwa class: way too expensive, but with high-quality sensing and torque control; it's force control now, in the fingers. Actually, the Schunk gripper is an amazing example of reflected inertia. I said that the motor's inertia, reflected through the joint, looks bigger than it should be, because it's multiplied by the square of the gear ratio; similarly, the inertia of the arm reflected back to the motor side is much smaller, divided by the square of the gear ratio. The WSG makes that point beautifully. These are tiny little fingers; they weigh very little in terms of mass, but if I push on them, they feel very inertial. What's happening is that there's a big gear ratio inside: the fingers are moving slowly while I'm pushing hard, so most of my work is going into turning the motor, and it feels like there's a large mass. That is the effect of the reflected inertia. In fact, we don't simulate that super well in the first notebooks
that I released, and I'm embarrassed, because there's a newer version of the dynamics engine I could turn on that would simulate that reflected inertia beautifully. Right now (how many people actually ran the teleop demo in the first notebook? Everybody else, run it; I worked really hard on that), if you go far enough, you'll get into situations where the fingers look kind of wiggly and loose, and these real fingers will never look wiggly and loose. The difference is that the dynamics of that simulation are dominated by the light little fingers. I have to choose a small time step; the size of the time step I choose to simulate the dynamics is dictated by the light mass of the fingers. If I add the reflected inertia, they feel much more massive, and I can take bigger time steps and simulate faster. Speed wasn't an issue for those little simulations, but that's what dominates. It actually reminds me of a story. When we were doing the DARPA challenge, the first part was running our code on somebody else's simulator in the cloud. We were working super hard on balancing control, and part of my game, if you get to know me, is to try to understand the mechanics, understand the structure of the mechanics, and write better optimizations that exploit that structure. We worked really hard and did fairly well in the competition. But I heard a talk later from the people who wrote the simulator, and they said: "we realized somewhere in the middle that it's pretty hard to simulate a heavy robot with light fingers, so we just took some of the mass from the body and threw it into the fingers." And I was like, I'm a
pretty chill guy, but all the blood was rushing to my head: what did you do to my beautiful dynamics? That's not how you should simulate it. Simulators do weird things to make things work, but physically the right way to model this is as a reflected inertia, because if you add mass to the fingers you change the lifting behavior. When I lift, I should only feel the mass of the motor and the mass of the fingers; but when I push, I should feel the extra inertia of the motor through the gearing. You can't just add mass and get the same effect; it's wrong. Okay, but there are these beautiful hands out there, and I brought a series of them. One is the Sandia Hand, a big, dexterous hand with cameras in its fingers; that was a pretty fun hand to work with. This is the iHY, one of the most successful early underactuated hands. If you know Right Hand Robotics (a mature startup at this point, here in town), they were the original designers of this hand; they created the company and are doing logistics with a newer, much better version of it now. This is the Robotiq three-fingered gripper, an incredibly clever hand. It's got these four-bar linkages; they're hard to see, but you can come down and look afterwards. If you just squeeze, it has fewer degrees of freedom than joints, but the four-bar linkages mean that when it closes on an object, it adapts its geometry to the object. The iHY does that with tendons; this one does it with rigid links. There's a great series of hands out there; I put descriptions of them in the notes. This one is maybe, out of the box, one of the
cooler ones, an unconventional gripper. It's just a balloon full of coffee grounds, and the idea is that when you pull a vacuum on the coffee grounds, they go through a phase transition: the thing is very compliant and conformant when it's loose, and when you suck the air out, the granular media jams and holds its shape. They can use that to pick up basically anything with this bag of coffee grounds (and there's always an egg in the demo). That's one of a handful of really cool hands out there. And you'll see more and more soft hands; I think soft hands are moving toward the point where they can be more and more dexterous. This one was a play on the OpenAI demo, but with a hand that's effectively balloon-actuated: soft materials where the actuators work by expanding and contracting the air inside the fingers. Who knows; I would have said before that soft hands are awesome but aren't dexterous enough to button my shirt (they'd be good for picking up an egg, but not for buttoning a shirt), and people are trying to challenge that. We'll have a session later about tactile sensors; I haven't talked much about sensing in today's hardware lecture, but we'll cover cameras and tactile sensing later. One of the big trends in tactile sensing is actually sensing with a camera behind the skin; they call it visual-tactile sensing, and we'll talk about what's good and bad about that when the time comes. Okay. The other thing: you can certainly simulate these for your projects, but I won't put emphasis on the mobile manipulator case, even though it's an extremely
important part of manipulation, and sometimes I feel bad about it, because I think some problems are artificially hard on a robot with a rigid base. Tomás Lozano-Pérez likes to tease me about this: you can easily run into failures of the kinematics. The kinematic problem is like solving a puzzle when you're a rigid robot with exactly seven links, or even worse six, trying to manipulate something on a table or reach into the kitchen sink. That gets pretty hard, and if you just add a mobile base, there are so many more solutions to the kinematics problems; he just thinks I'm working too hard on the wrong problem. On the flip side, once you can drive around, you can get into all kinds of trouble. So this is the PR2, the second version of the robot that made those examples and got a beer out of the fridge; this is the Fetch robot; this is Toyota's HSR; this is the Everyday Robot. I think Leslie and Tomás haven't truly been happy since the PR2 died; they've never found a complete replacement. It was a really good robot that enabled a lot of research in a lot of labs, but it's extinct now. I think pretty much every spare part that could be purchased online has been purchased on eBay, so it's pretty much dead. (You broke a PR2? Don't brag about that; it's like you just killed off a species. No, they were really good robots.) And this is the video I failed to show you last time, an amazing mobile manipulator. My slide was hidden last time and I only showed you the failures, but it actually, most of the time, successfully takes your order, drives through the grocery store, and completes the order, combining all the perception; and obviously adding the mobile base made this task possible. All right, any high-level questions? I've got
to end with my favorite robot videos of all time, but before I do, are there any other questions about what we've been talking about? Yes. Great question: if I say I can't simulate the gearbox, but there's still some depth in the simulation, where is that happening? I'm modeling the closed-loop dynamics of the low-level feedback controller, which measures sensors on either side of the transmission. That controller provides a contract to me, and the contract is what I'm modeling: the closed-loop performance of the feedback around that messy gearbox makes it look like I control torque. But torque alone is not enough; their low-level controller also tries to compensate for friction and for gravity, so that's the model of their controller we're simulating. We're just not getting into the messiness of the gears, because it's hard to model. Yep, correct: gravity, friction, and contact forces, and contact forces are a big one. The thing that makes simulating manipulation much harder than previous wheeled or legged robots is, again, what I said about the light fingers: I have a heavy robot picking up light objects, with contact forces that can change very fast with small changes in geometry. This is what makes the numerics of the simulation very hard, so most of the effort in manipulation simulation, in the physics engine, is about simulating the contact accurately. Great. Okay, favorite robot of all time? It's like asking me to choose among my children, but this is really awesome. This is the Ishikawa lab in Japan. They basically took their electric motors, took off all the safeties (and probably burned them out, I would guess), and overclocked their motors in
order to make a series of just jaw-dropping high-speed video demos. Look at the footage; this is from a long time ago. They did very high-speed tracking first, for vision, and then high-speed motions of their robot, and in my mind they completely changed what was possible in terms of manipulation, in a narrow sense; I don't think this will succeed every time, but you've got to see what it does. Here's dribbling; this is high speed, slowed down. This is from the early 2000s, I think. It's flipping me off. Pen spinning. So there is some good hardware out there. They can throw and catch, but let me get to my favorite one. Here we go: this one is with a cell phone. What?! That is so good. I met the people who worked on that, and I asked how many times it actually works; like, once, but that doesn't matter to me. And by the way, OpenAI got so much press in 2019 for their Rubik's cube, but this was 2017, and these guys were doing Rubik's cubes way faster. It's almost not fair that nobody knows about this one. Anyhow, if anybody wants to come down and see the robots, check it out. Yes, you can touch the fingers; go for it. This one is the four-bar linkage and this one's tendons; you can see the tendons, and they're fragile. We've broken things on that hand, but this one is rock solid: we dropped our humanoid on it a few times and it was still fine. I'm going to bring the robot down for proper demos later, but right now it's just a statue, for silly reasons; we brought the wrong pendant, and I was only planning to pose it, in a slightly more elegant position than that. They can be pretty expensive; even the Schunk, which is
the simple one (in some sense the high-end simple gripper), is 15k. Wow, yeah. One of these hands would be even more than that. The Allegro, which would otherwise be a pretty expensive one, is designed to be a low-cost dexterous hand, so it's actually using Dynamixels, which are like hobby servos; high-end hobby servos, but the appeal is that it's low-cost. And direct drive means you have to have a very big motor, so it gets very, very heavy. It's just a matter of keeping your robot light, keeping the cost down, and fitting into the packaging.
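The reflected-inertia point about the WSG fingers can be made concrete with a few lines of arithmetic. This is just an illustrative sketch: the motor inertia, finger inertia, and gear ratio below are made-up numbers, not actual WSG 50 specifications.

```python
def reflected_inertia(motor_inertia, gear_ratio):
    """Motor rotor inertia as seen at the joint output: I_motor * N^2."""
    return motor_inertia * gear_ratio ** 2

I_motor = 1e-6   # kg*m^2, tiny rotor (assumed value for illustration)
I_finger = 1e-4  # kg*m^2, light finger about the drive axis (assumed)
N = 100          # gear ratio (assumed)

# Effective inertia felt when pushing on the finger: the finger's own
# inertia plus the rotor inertia amplified by the square of the gear ratio.
I_effective = I_finger + reflected_inertia(I_motor, N)
# The 1e-6 rotor contributes 1e-2 at the output, dwarfing the finger itself,
# which is why light fingers can feel "very inertial" when you push on them.
```

This is also why adding the reflected-inertia term helps the simulation: the fastest dynamics are no longer set by the bare finger mass, so the integrator can take larger time steps.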
Robotic Manipulation, Fall 2022
Lecture 13: Motion Planning, Part 1
We'll be doing another one a week from Friday. If you have questions about your projects, whether you should run things on Deepnote or on your local machine, or you're having trouble installing stuff, just ask. And consider, by the way, if you're asking questions about Drake: you can of course ask them on Piazza and we'll try to answer them, but I love it when people ask them on Stack Overflow, because then the answers build up an answer base in general. There's a wealth of answers on Piazza that are locked away from previous years; if only those had been on Stack Overflow, the questions would already be answered. And consider checking Stack Overflow; your question might already be answered there. Okay, so today we're going to start talking about motion planning. I think we've already motivated it fairly well, but I'll remind you of one particular motivation: we built this clutter-clearing example that took YCB objects from one bin and dropped them off in another bin, and I got it to a moderate level of robustness; it would run all night long without crashing. But there were some pain points, some failure modes, and the one that was most painful for me to watch was that it would occasionally do a couple of very bad things. With the simple version of planning we did, which was just straight interpolation of the end effector, it would occasionally smack into the cameras; admittedly, I did put the cameras right in the middle of the workspace to try to make the point clouds good. The other thing is that if it happened to pick something in one bin and place it in the other, it would try to almost drive itself through the base of the robot, and differential IK would do its best, but it
would sometimes kind of fold in on itself and get out of whack, and it was not very pretty. So there was a recovery maneuver in there that would just say: things are bad, let's go back to home and start again. Both of those problems we should expect to resolve with better motion planning, and that's the goal for this week. We're going to talk about the two mainstream approaches to motion planning: one is based on optimization, which we'll talk about a lot today, and we'll talk about the sampling-based version of motion planning a bit on Thursday. There's also some work that tries to bring them together to get some of the best of both worlds. One of the other high-level motivations for using optimization for motion planning is that it can make a real difference. This is an example from Dexai, a company here in town and a spin-off from MIT. They've got robots that make salads and do all kinds of food preparation; you might have seen them in some of the kitchens around here. For them, the business model is roughly: how many salads (or sandwiches, or whatever we can dish out) can I make per hour? So it's not just about not running into cameras; they want to eke every ounce of performance out of that robot. The rate at which you can move food is partly due to how fast you can move it without it falling out of your gripper, and partly due to the velocity limits on your robot, as well as acceleration, even jerk limits, and torque limits. These are dynamic limits that are hard to think about, and when you're doing well, you're almost always moving up against them. It was fairly recently that we were working with them and they started doing
trajectory optimization to improve their scooping, and their success story was: we're moving twice as fast now and can earn twice as much money, or something like that; I don't know the exact business model. And that's true in a lot of applications: it's very nice to think about not only planning motions that are good, but motions that ride right up against the limits and get every bit of performance out of your robot. Okay, so I want to get into planning, and into that sort of example, by first thinking a little bit about the nonlinear-optimization view of inverse kinematics. Let's start by solving IK, but solving it well, and understand some of the subtleties of that cost landscape and why it sets us up for understanding the motion-planning version. You're very familiar with this at this point: forward kinematics is a function f that takes the joint angles q and maps them to the pose of the gripper, and inverse kinematics, we might say, is trying to invert that function. Of course that is one definition of inverse kinematics, but we'll talk today about how that might not be the right way to write it down. First of all, maybe the explicit inverse should never exist in code directly, and maybe this isn't as rich a specification of inverse kinematics as we want. There's a lot to know about this problem. Even when we talked about differential inverse kinematics we saw some of the subtlety: we tried to use the gradients of f, and we observed that the Jacobian matrix wasn't always full rank, which implied that even locally the problem could have many solutions, or zero solutions, depending on what's happening. So this is not necessarily a great function, and you can see that if
I have a desired pose for my gripper: with a seven-degree-of-freedom robot I've got many possible solutions (I can move my elbow, for instance, while keeping the same pose). And you can certainly ask for poses that have no solution, if they're too far outside the workspace, and everything in between can happen. This is obviously such an important problem for robotics that there's a long and storied history of solving inverse kinematics. We'll do the optimization-based version in a second, but let me say a few things about the history first. One thing you should know is that there's a special case for a six-degree-of-freedom manipulator. Just saying a robot has six degrees of freedom is not enough, but for a standard six-degree-of-freedom arm with all revolute joints, and a few other well-understood cases, people have an exact understanding of that inverse map. In those cases there are closed-form solutions, and in particular there are finitely many solutions; you can't do what I did with my elbow, which requires seven degrees of freedom. In six degrees of freedom you can count how many possible solutions there are and enumerate them, and people use this often. If you're in the ROS world, you might have used it through IKFast, a common and familiar
ROS package that is basically using the closed-form solutions. It's actually a compiler: you give it the description of the robot and it'll compile down these solutions when that's possible. It doesn't enumerate all of the solutions; it has some heuristics to pick its favorite solution, but it'll give you one when it exists. It's super powerful. If you look at the work behind IKFast, you might be surprised where it comes from, but it actually comes from algebraic geometry, the study of polynomials and polynomial equations. There's a deep history, I'd say, of algebraic geometry for kinematics. It turns out that the kinematics of our robots can be described perfectly as the solution to polynomial equations, and that's a fundamental math question: given some complicated polynomial equations, how do you find the zeros of those equations? There are math packages that will do that, some of them have been optimized for the kinematics case, and they can solve really interesting and cool problems. For instance, think about a four-bar linkage, and ask: if I change the angles of this four-bar linkage in 3D space, what is the path that gets carved out by, effectively, the end effector of this linkage? It can carve out crazy paths, but those are actually the solution to a set of polynomial equations, and even the manifold of solutions can be returned by an algebraic geometry package. In fact, there are even ways to design your four-bar linkage in order to be able to execute a certain kinematic path. So there is a history there of very powerful tools for understanding this problem. I would say in the 1980s and 90s these were mainstream things. It's not the main topic of inverse kinematics now, because although these are very powerful, as I'll try to convince you, this
is, I think, still a slightly limited view of inverse kinematics. Only trying to invert that function f might be an insufficient description of the problem. Really what I want is to find a solution that gets my hand here, but I also want to respect my joint limits. Joint limits already complicate a lot of this math: in the land of polynomials, people like equality constraints; you don't like inequality constraints, and joint limits are inequality constraints. So that gets you into semi-algebraic geometry, for which there are many fewer results. But you might also have things like collision avoidance constraints, and that really screws things up. So I want you to know that if you want to design a Stewart-Gough platform, there's a right set of tools to do that, but for the more general problem we're going to turn to more flexible but less reliable solvers. Okay, so let's think about inverse kinematics as an optimization. This is just a fun demonstration of doing interactive versions of the inverse kinematics problem on our humanoid, and I'll show you the KUKA version of that running on my computer here in just a second. Let me even put this back so you don't watch that too much. But instead of writing this, remember that even for differential inverse kinematics we recommended trying to solve an optimization problem, so that we could put limits on and things like that. So instead of solving this, what I want to start thinking about is: what if we say I want to minimize over q. Maybe I want to find a comfortable q, something close to some desired configuration. With my iiwa I always pick, like, the initial condition. I guess my arm is not quite an iiwa, and my shoulder is certainly not what it used to be, but okay. So I pick some comfortable position, I'd call that my nominal position, and when I'm trying to choose between all the possible
solutions, I'd like to pick one that's as close as possible to my friendly, happy position, as opposed to picking one that goes like this. And so I'll do something like this, but then I'll say: subject to the constraint that my forward kinematics satisfies this solution, plus joint limits, collision constraints. If I only had this, it would just be a slightly more general view of that problem, which tells me what to do if there are multiple solutions; the objective defines which one I pick. But then, because we've moved to the language of optimization, we can also add other constraints. So this is the object I want to study for the next little bit. Questions about that? What does that look like as an optimization? This is a quadratic cost, which we like in general; it's nice and simple, it's even a positive definite quadratic, so that seems good. But this is potentially a nonlinear, non-convex constraint in almost every case. Joint limits are simple and linear, but collision avoidance constraints are very nonlinear. So we're quickly in the land of nonlinear, non-convex optimization. The non-convexity is coming from the constraints, not from the objective, but the picture is kind of still like what I was talking about here, where you could expect your optimizer to find minima, but it won't necessarily find the global minimum. That's the picture I want you to have in your head. Really what's happening is a little bit more like: there's a big quadratic form, but because of the constraints I'm only allowed to go here, or maybe here, and the thing that defines those sets is so complicated that I can't expect to get to the best version of it. But intuitively it's really the same as having a landscape like that.
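To make that structure concrete, here is a minimal sketch of the optimization view of IK. This is not the course's Drake/SNOPT setup; it's a hypothetical planar two-link arm solved with SciPy's SLSQP, with the quadratic comfortable-pose cost, the forward kinematics as an equality constraint, and joint limits as bounds:

```python
import numpy as np
from scipy.optimize import minimize

L1, L2 = 1.0, 1.0  # hypothetical link lengths for a planar 2-link arm

def fkin(q):
    """Forward kinematics f(q): end-effector xy position."""
    return np.array([
        L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1]),
        L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1]),
    ])

def ik(p_des, q_nominal, q_guess):
    """min_q ||q - q_nominal||^2  s.t.  f(q) = p_des, joint limits."""
    res = minimize(
        lambda q: np.sum((q - q_nominal) ** 2),           # comfortable-pose cost
        q_guess,
        constraints=[{"type": "eq", "fun": lambda q: fkin(q) - p_des}],
        bounds=[(-np.pi, np.pi)] * 2,                     # joint limits
    )
    return res.x, res.success

q_sol, ok = ik(np.array([1.0, 1.0]), q_nominal=np.zeros(2),
               q_guess=np.array([0.1, 0.5]))
```

Among the (generically two) elbow-up and elbow-down solutions, the quadratic cost is what breaks the tie; a different `q_guess` can land in a different local solution, which is exactly the non-convexity being discussed.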
Thank you. So how do we solve that problem, and how do we connect it to what you already know? We have differential inverse kinematics. How does differential inverse kinematics work? We solved that reliably with convex optimization, and we did that even with some of these other constraints by taking a linearization of those constraints; that was the differential IK version of it. In fact, the way we often solve this is very much like solving the differential IK problem over and over and over again. The nonlinear solver we tend to use for these problems is SNOPT, although some people think IPOPT is better; I actually believe them, but you pick your weapon and you learn how to use your weapon very well. I've used SNOPT more, so I can make SNOPT do good things, and people who use other solvers know how to do that better. That's just the name of a semi-commercial solver, and it solves using sequential quadratic programming. So actually, when SNOPT is solving a problem like this, it's effectively solving a differential IK problem multiple times, trying to rapidly move to a solution. The differential IK work that you've already done is happening behind the scenes. Okay, so this gives us a potentially rich language of costs and constraints, and we're going to look at some of the cost landscapes and everything, but this is what we're talking about for Atlas. So maybe we have somebody saying 'I want my hand to be at a certain place'; that's what this little marker is doing, someone in the interface is just pulling around the hand. But Atlas also had joint limit constraints, collision avoidance constraints, even self-collision avoidance constraints. We needed things like gaze constraints too: the way that you move your hand
maybe should be subject to where your cameras are, because you want to pick up things you can see. Which turned out to be a really funny thing for Atlas, because on the first version of Atlas, its head was like this, it could see here, and its hands could reach down here, and the region where it could reach and the cameras could see was surprisingly small. It was really irritating. If you look at Atlas now, they kind of took the arms off and flipped the shoulders around, and now it looks a little like an ape, but at least it can reach where it can see. That's good. When we were doing it on the humanoid, we also wanted it to keep the feet in the same place; we didn't want the feet to move around while we were solving the IK problem. We also had, for instance, the constraint that the center of mass had to be inside the support polygon so the robot didn't fall over. But these are just a list of more and more things, and once we've gone over the hump of saying we're going to solve a nonlinear, non-convex optimization, it opens up a huge library of these types of constraints you can add. You still want to add things that are smooth, nice functions, but it really opens the floodgates. Okay, so let's think about a couple of simple examples. Let me run interactive IK for this simple robot here. By the way, I always run on M1; people were like, maybe it doesn't work on M1, but it works on M1. Okay, so here's my interactive inverse kinematics in MeshCat. I've got my KUKA iiwa and I'll just move around the end effector positions. There are a couple of points I want to make here. It's solving, and it certainly can solve at real-time rates; I think the delays here are probably more the network or something than the solver, it can definitely solve fast enough. But every time I move the sliders, it's like it's waking up for the first time and solving a global
version of the problem. So the way that can get you is that there's no real guarantee it's going to find smooth solutions: as I move it around, I might get it to jump between different solutions, and probably if I move it towards itself it'll start jumping around. That was cool. I mean, that's a lot to ask, to move right through yourself, but okay. It's finding the solutions, it's pretty impressive, but I wouldn't execute that on the robot. The cool thing, though, is that when you start adding more and more of these constraints, it gets more and more powerful. So here's the same interface, but I put a pole in. Like, if someone put a pole in front of your robot, you should be mad, that's not a reasonable thing to do, but it's kind of a reasonable demo for the IK here. So now I've done almost exactly what I wrote here: the joint limits but also the collision avoidance constraints. And the fun case, I guess, is when I try to make the robot move that way. What's it going to do? So I guess that should be positive y; it's trying to reach... oh, and then it snapped over. Again, don't execute that, because of the consecutive poses on the robot, but as an independent solve it's pretty good. It's reasonable that there are some places where it's not going to be happy with the solution. In fact, in the place where it couldn't find a solution... it actually did well that time, and I'm like, I wanted to make my point... oh, damn it... oh, okay, there, right there: the solver will say in that case 'IK failure'. It knows that it failed, so that's comforting, I guess, a little bit. Okay, so this is a pretty powerful toolbox, but let's see what we can do with it. One of the most important lessons I want to give you here is that this specification, in addition to not being what you want
because, you know, it didn't account for these, there's another view I want to give you, which is that saying this is almost always more than you need to say. For most manipulation tasks, picking a desired grasp and constraining the robot to reach that grasp in x, y, z, yaw, pitch, roll is overly specifying the task. As you use optimization more, you will learn that writing a minimal version of this constraint, one that constrains exactly what you need and no more, opens up the flexibility for finding solutions, and it's a better way to write your inverse kinematics problem. So let me make that point with this version here. Now the robot is supposed to be grabbing the cylinder, and this time I put the sliders on the cylinder instead of on the robot. So I'm not worried about the inertia of the cylinder in this case; my objective is just to have the cylinder somewhere in my hand. But I'm saying that the location along the cylinder shouldn't matter, and the rotation... I picked a cylinder so I could argue that the rotation shouldn't matter. So if I write the problem right, as I move the cylinder around, the robot actually shouldn't move until I get to the end, and then it has to follow. Let's see this from a different angle too. And similarly, it should be willing to rotate itself: there's no reason why it had to come in at exactly that orientation. Any orientation where the cylinder is in the middle of the hand should be sufficient. So when you want to level up your inverse kinematics, you should write just the very minimal version of that constraint. Does that make sense? So how would you write that constraint? What does it look like, how would you author it? This is how I did it; there are various versions of it, but let's
see. What I said was: I want a constraint where the decision variables (I should have written a q in here, that would have helped) are inside the thing that defines my relative transforms. G is the gripper frame, C is the cylinder frame, and what I said is that there are two points on the gripper frame that I would like to be inside the cylinder. So, in the cylinder frame, this is the center of the cylinder in x and y, and cylinders are by default along the long axis in almost every robotics package, I guess. So saying I'm between -0.5 and 0.5 is just saying there's some point of interest on the gripper frame that I would like to be on the center line of the cylinder, anywhere within the length of the cylinder. And then the points I would like to place there are two points on the gripper frame. Remember the gripper frame here; I grabbed this old picture just so you remember: RGB, so the y axis is the one along the gripper, so 0.1 puts me somewhere around here in y, and then I did z a small positive number and a small negative number, which is this and this, along that blue axis. Is that clear? So what I'm saying, and I guess I have a round chalk here: I've got a gripper frame, and I'm going to pick up... this is hard... okay, I've got a point right here and a point right here that I'd like to be inside the chalk. And that's enough: it means that I'm going to be aligned with it, and I'm free to slide along it. We actually have ways to write orientation constraints directly, but I thought this was a little easier to put on a slide and consume: just think about points. Put a point on the center line of the cylinder, and if I put two of them, I constrain myself exactly the way I want and not in other ways that I don't want. Cool. And so that works pretty well, and I think it's a general strategy for picking up objects.
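As a plain-numpy sketch of that two-points-on-the-center-line idea (the frame names G and C and the 0.1 and small-z offsets follow the lecture, but this is a hand-rolled feasibility check, not the course's actual Drake constraint):

```python
import numpy as np

def make_pose(R, p):
    """Homogeneous transform from rotation matrix R and translation p."""
    X = np.eye(4)
    X[:3, :3], X[:3, 3] = R, p
    return X

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

# Two points fixed in the gripper frame G (homogeneous coordinates):
# 0.1 along the gripper's y axis, plus/minus a small offset in z.
P_G = np.array([[0.0, 0.1,  0.025, 1.0],
                [0.0, 0.1, -0.025, 1.0]])

def grasp_constraint_ok(X_CG, half_length=0.5, tol=1e-3):
    """True if both gripper points lie on the cylinder's center line
    (x = y = 0 in cylinder frame C, axis along z), within its length."""
    for p_G in P_G:
        p_C = (X_CG @ p_G)[:3]              # point expressed in cylinder frame
        if np.hypot(p_C[0], p_C[1]) > tol:  # off the center line
            return False
        if abs(p_C[2]) > half_length:       # past the end of the cylinder
            return False
    return True
```

Sliding the gripper along the cylinder axis or spinning about it keeps the constraint satisfied, while tilting the gripper off the axis violates it; that is exactly the minimal specification being argued for.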
Whatever your grasp selection strategy is, you should try to get one that allows that extra flexibility, because there are going to be many, many more solutions. Yeah, did you have a question? But only in the one axis; that was my goal. So it's true that I could have allowed grasps like this; I chose not to, I wanted it to be like this, but it's free to rotate in that axis. So I guess it constrains two of the rotational degrees of freedom and leaves one free. Okay, so that's our language, and in fact it's a rich language. This actually translates directly into code: we can just say AddPositionConstraint. It uses the multibody notation: you say, I'm in the gripper frame, I'd like my point (which is this) to be in my other frame, which is the cylinder frame. There's a pretty direct mapping from that math into the code. And we have a pretty rich library of costs and constraints: position constraints, position costs, orientation constraints, orientation costs, gaze target constraints, all the things we've used in the past. There are ways to take your frame logic, saying your points in one frame need to be related somehow to other points in some other frame, and add that as a cost or constraint into this problem, and then hit solve and you're good. There are some pretty interesting ones, actually. The one that I used for collision avoidance is a minimum-distance constraint, which is the most complicated one in there. It adds a single constraint saying that all of the collision pairs in the world are at least some minimum distance apart. And it's written in a particular way: you could write it yourself, but the difference between a thing you'd write yourself and what's happening in here is that this is leveraging the more advanced features of collision detection, so
that objects that are far away get culled immediately with axis-aligned bounding boxes, and it does some clever smoothing as the closest pair of objects switches: if you switch from being closest to the table to being closest to the laptop, that would be a non-smoothness in the cost landscape, and it tries to smooth that out. There are a lot of details behind there to make that work pretty well. It's still not bulletproof, because it's a complicated landscape, and that's kind of what I want to tell you next: let's think about the landscape that shows up in these problems. But first I'll just show you that we use this a lot, actually, and my favorite one was the DARPA Robotics Challenge. We had to do a number of things; the robot had to drive a car. The government furnished us with an Atlas (we won the right to get an Atlas), and then they told us we had to drive a car, and they gave us a small car, a Polaris. They gave us an enormous humanoid and then a tiny car, and we had to invent rich inverse kinematics just to figure out how to fit the robot in the car. It turns out it didn't really fit behind the driver's seat; you had to sit kind of in the middle, kind of on the passenger seat, put your leg across the console, and go like this and drive like this. It was awkward and embarrassing, and we actually fell out in the process at one point. But that's how we turned the steering wheel. And we did all the workspace analysis and everything using this inverse kinematics pipeline. The good thing is, it's such a general pipeline that once we had it working on Atlas, when we had a chance to work with the NASA Valkyrie robot, everything just worked. The same code: you swap out the Atlas model at the beginning, put in the Valkyrie model, and we could do all the same
stuff. Okay, so it's a pretty powerful toolchain. Now let's appreciate for a minute what's happening in the geometry. I want to visualize a few simple configuration-space regions. Here's the example I carved up to try to convince you: let's take a two-link arm, angles theta-1 and theta-2, lengths L1 and L2, and at the bottom of this two-link arm I have (I've got some colored chalk here somewhere) a hand with radius r. And then, like a jerk, I'm going to put this robot in a constrained environment, with walls that are just W apart. This is the simplest possible sort of kinematics problem, and we can solve it in closed form. We can say exactly which angles are in collision and which are out of collision, because the x position of the gripper is just L1 sin(q1) + L2 sin(q1 + q2), and I want this to stay in the limits where the sphere is not intersecting the wall, so it works out to be that. And if you plot that region, which I've done here, and change the lengths and the radius or whatever, it gets funky. This is as simple as it gets, but it's not some simple convex region. Sorry if I didn't say this clearly: the region in the middle, the green one, is the feasible region; the way I could plot it in Desmos, I plotted the two infeasible constraints, but the region in the middle is the feasible region. Zero-zero is feasible, and as I move things around, the shape of my feasible region can change pretty dramatically. Whoops, that would be the walls crushing the robot. Okay, that's a toy example. Let's do it for the iiwa. I spent some time trying to figure out how to visualize this for you.
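The toy two-link feasible region described above is easy to reproduce numerically. A quick sketch (hypothetical numbers for L1, L2, r, and W; the lecture's actual plot was done in Desmos):

```python
import numpy as np

L1, L2, r, W = 1.0, 0.7, 0.15, 1.0   # link lengths, hand radius, wall gap

def hand_x(q1, q2):
    # x position of the hand center for the planar two-link arm
    return L1 * np.sin(q1) + L2 * np.sin(q1 + q2)

def feasible(q1, q2):
    # the hand sphere of radius r must stay off the walls at x = +/- W/2
    return abs(hand_x(q1, q2)) <= W / 2 - r

# rasterize the (q1, q2) configuration space to see the feasible region
q = np.linspace(-np.pi, np.pi, 201)
Q1, Q2 = np.meshgrid(q, q)
free = np.abs(hand_x(Q1, Q2)) <= W / 2 - r
```

Plotting `free` over (q1, q2) with any image tool reproduces the funky, non-convex bands from the Desmos picture: the straight-down configuration (0, 0) is feasible, while sticking the arm out sideways is not.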
Um, this is the example I came up with. I'm going to lock out four of the joints of the iiwa and just plot three of them, because I can make animations in three dimensions. So I'll leave the three joints of the iiwa that are in the plane, and then I'm going to ask it to reach into the shelf, doing the same thing we just did, but with the real geometry: all the real collision geometry of the shelf and the iiwa. And this is what it looks like. So the inverse kinematics problem has two bits; let me try to help you make sense of this. The first thing was, I wrote an objective saying that I want the point in the hand to be at a point in space, and that constraint is this green region; it would be an annulus all the way around, truncated only by the joint limits. So this would be it trying to go like this, you know, like this, in the plane... and yeah, I've got to work on my shoulder mobility before the next time I give this lecture. Okay, so that's the feasible set if the shelf wasn't there, which is already sort of a terrible thing, right? That's a scary kind of landscape. It would be smooth if I'd plotted it exactly; the way I made this was I just sampled a bunch and called marching cubes, so it's a slightly bumpy version of the true surface. By the way, SNOPT, our inverse kinematics code, does find the optimal solution here: I used this as the initial guess, and it found its way into that nook and cranny to find the solution which I showed you in the picture. But what's this thing? This thing is the marching-cubes version of the C-space boundary between the configurations that are in collision and out of collision. So this is the boundary where you switch from being in collision to being out of collision, and it's horrible, it's absolutely
horrible. And in fact, if you want to find the solution, it has to be feasible; it's tucked down in this little... well, there it is. That's the question we're asking SNOPT to find an answer to for us. So when it fails, be nice: it's a really hard problem we're asking. In fact, I got a little crazy with this, so let me show you the cost landscape we actually give to SNOPT. Let's see if I can move this over. This is the same problem, but now I'm going to plot it. We don't give SNOPT the boundaries as just true/false values; we make it smooth. The minimum distance doesn't have a cliff that it falls off when you're in collision; we have the minimum-distance function, which tells me how far into collision I am, or how far away from collision I am. So that becomes a smoother function. And you can look at all the pieces of this. First, the objective is beautiful and smooth: it's my quadratic, which has its goal back at the comfortable position. Awesome. So this is... I'm plotting in 3D because I have three joint angles, so it's not Cartesian x-y-z; q1, q2, q3 define the three axes of this plot. Thank you for asking that. So a point there means a particular choice of joint angles. Perfect, thank you for asking. And so this is telling me I have a favorite joint angle, which is my comfortable home position, I'm quadratically penalizing things away from it, and the solution turns out to be here on the landscape, that blue. But if I start turning on the constraints, one at a time: the position constraint, which was that annulus... I don't give it that annulus directly, I give it the x-y-z location and ask it to be inside the constraints. So if I plot the x-y-z... the way I've plotted this here is, for each of the constraints, I've plotted a region which is
red if it's infeasible and blue if it's feasible, but it's a smooth function, you see. So I've only got a small annulus of possible feasible solutions, but they are the level set of some curve, so SNOPT knows it needs to get this function inside some band, and it's allowed to use gradients and the like to try to move down into that band. There's also a constraint in y, which is trivially true because the robot can't move in y, so all of the joint angles satisfy it. And then in z there's another band, and those two only intersect in a small little set. It's using the gradients of those individual functions to try to find it, but it's a nightmare of a problem. The bounding-box constraint was just the joint limits, and then the minimum-distance constraint is the big scary one. It's a little less scary when I show it to you as the full distance computation instead of just the cliffs, but it's still a hard problem for SNOPT to solve. So be nice to your solver; it's solving a hard problem. Questions about that? IK is, I think, a workhorse; you'll use it a lot, and mature manipulation tools will use these queries. They can be made fairly robust, but even the best people in the field still complain about IK failures and the like. So there are problems in this space that we would like to solve but can't quite solve reliably enough to, say, ship in a product. There are versions of this that try to solve the global optimization problem; we have one of them implemented in Drake, and you can play with it if you want. It consumes a smaller vocabulary of possible constraints, ones that we know how to do stronger optimization on, but it solves much slower; it would not solve at interactive rates. For the dual arm it's less than a minute, but it's more than a second. Okay. So, kinematic trajectory optimization... actually, let's take a break; this
is a good time to take a quick stretch, before I jump into the trajectory optimization version. Feel free to think about questions while you're stretching. I love that the landscape isn't just hard for SNOPT, it's apparently hard for Safari [laughter]: 'Your web page is using significant energy; closing it may improve the performance of this device.' And that's just drawing it, not even solving for it. Okay, so my promise at the beginning was that we were going to stop smacking into the shelf, into the cameras and the like, and so far we've just been talking about inverse kinematics. But my claim now is that if you understand that, you've actually solved a lot; you've gotten yourself most of the way to solving trajectories. And the idea is pretty simple. Let me use that simple landscape (oops, that was my pendulum landscape) where I had a bunch of feasible solutions. So far I've been saying: find me a point somewhere in this landscape that satisfies some criteria. The motion planning problem is going to be: find me a bunch of points that all satisfy the criteria. You can put different costs and constraints on the points, and you're going to add some sort of conditions that ask these points to have some continuity between them. That's the main thing I want you to think about when you're extending inverse kinematics into kinematic trajectory optimization: I'm just going to find many points and ask them to be connected by a curve. Remember my inverse kinematics solve: as I pulled it around, there wasn't any guarantee that it would find smooth solutions; it could go like this and then suddenly like this. So I'm asking it now to not only find independently good points, but to have them connected by some simple constraints. Okay, so maybe I'll show you it working first, and then we can get back to how it works. This is kinematic trajectory optimization. Let's do the reaching-into-the-shelves
example here, which is this one. So I think you understand how to write an optimization to find a hand in this pose, or in the red pose, but what it's found this time is a sequence of poses that goes from one to the other, and I can just move along that as a trajectory that goes back and forth. In a slightly different order here, I can also do, for instance, the clutter-clearing example. Remember, the problem I had was that if I was reaching here, I might sometimes smack into the cameras going around. So now I find solutions... I actually thought it was going to find a solution inside the camera, but it found a solution that went around the cameras. I mean, it's my fault for putting the cameras right in the middle of the reasonable space. But it satisfies all the constraints and finds a nice smooth motion, and you can put velocity constraints on the start and the end if you want. So let's look at this a little more carefully. The collision geometry it's using in that minimum-distance constraint: I can turn that on, and this is what it looks like. I have a simpler geometry for the robot, I just made boxes, and I put a big sphere for the hand. Not only because of the hand itself (the hand could be a box, that's fine), but it's going to pick up some things, so I wanted a conservative region. Probably the mustard bottle would stick outside there, so I'd still smack into it; I could have done a better job on that. It was late (it actually was early). But within that approximation, it does a pretty good job of solving the problem. I have to say, writing up these examples was a nice reminder not only of how well this can work, but of how annoying it can be when it doesn't. So let me show you what I guess Marc Raibert would call the unvarnished truth, or the dirty laundry, if you will.
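Here's the flavor of "many IK points connected by a curve" in toy form. This is not the Drake kinematic trajectory optimization used in the demo; it's a hand-rolled waypoint version for a hypothetical planar two-link arm between walls (all numbers made up), so you can see the structure: a shortest-path cost tying the waypoints together, fixed endpoints, and the collision constraint evaluated at every waypoint.

```python
import numpy as np
from scipy.optimize import minimize

# Planar two-link arm moving between walls (hypothetical dimensions).
L1, L2, r, W = 1.0, 0.7, 0.1, 1.4

def hand_x(q):
    return L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])

N = 8                              # number of waypoints along the curve
q_start = np.array([-0.5, 0.4])
q_goal = np.array([0.6, -0.8])

def unpack(z):
    # decision variables are the interior waypoints; endpoints are fixed
    return np.vstack([q_start, z.reshape(N - 2, 2), q_goal])

def path_length_sq(z):
    # shortest-path-style cost: sum of squared steps between waypoints
    Q = unpack(z)
    return np.sum(np.diff(Q, axis=0) ** 2)

def clearance(z):
    # >= 0 at every waypoint keeps the hand sphere off both walls
    Q = unpack(z)
    return (W / 2 - r) - np.abs([hand_x(q) for q in Q])

z0 = np.linspace(q_start, q_goal, N)[1:-1].ravel()  # straight-line guess
res = minimize(path_length_sq, z0,
               constraints=[{"type": "ineq", "fun": clearance}])
```

Each waypoint is essentially its own little IK problem, and the cost is what supplies the continuity between them; exactly as in the demo, the quality of the initial guess `z0` is what decides whether the solver finds a sensible curve.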
Um, the code, I think, is fairly clean, in the sense that you just add the natural costs and constraints to the curve. You basically say: I'm going to make a kinematic trajectory optimization, I'm going to add my joint-limit constraints, I'm going to add some velocity limits; those just come straight out, and I think it's fairly readable. I say I want my start to be in this position constraint, my end to be in some position constraint. It's all very natural, easily justified. The only cost is shortest path, basically: I say I want the time to be small and I want the distance to be small. And this is a strong recommendation I want to make to you in life: don't jam all kinds of things into your objective and constraints. Be very minimal in the way you write your constraints, and try to write exactly the cost function you want. If you start cost-function tuning, life gets sad; some of you know what I mean, and RL in particular has that trap. Okay, here's the one thing I don't like about the example as it is: I actually call solve twice. I first solve pretending the shelf is not there, and then I solve again, using that as the initial guess, with the shelf that is there. If I don't do that... let's see if I can adjust that enough... let me just take that out and see what happens. This is the shelf example. Okay, to be fair, it said it fails; I would never have executed that, but it was just unable to find a solution. In fact, I can do less severe things to it. So: trajectory optimization failed, SNOPT was unhappy, it was unable to satisfy the constraints. It wasn't pretending it had succeeded; it knew it didn't solve the problem I asked it to solve. In fact, if I change the sphere on the hand to be a box, then think about what happens: the straight-line trajectory, if the shelf wasn't there, would go from here straight through to down here. If I have box collision for my
hand and a box for the shelf, and it goes into penetration, there's nothing to help it know which direction to get out. So this was my intuition, which made me say: well, if I put a sphere on there, then it'll have some sense of which directions it could move to get out. But this is the dirty laundry, right? So let's just see what happens if I take that off. Oh no, it still failed. Okay, what did it do? Crazy. But this curve actually is the visualization of the solve as it's happening, so if I were to run it again you can actually see it struggling; it's trying all kinds of crazy solutions. How did it solve it pretty well that one time? I should have left it and looked at that, but clearly I just got lucky. What is that? Oh no. Oh, I see, I've seen that one before too. Yeah, so it went up, down, okay, whatever. The point is, I think these tools work extremely well when you have a reasonable guess, but they're not solving the really global optimization problem. Okay, yes? For sure. So, in fact, I had to be a little bit careful. I make this minimum-distance constraint saying that I want everything to be at least one millimeter out of collision, and this is my "don't even consider bodies that are more than 10 centimeters out." But I had to actually go through and say I want you to evaluate that constraint at 25 points along the length of the trajectory. I could have added it at just the beginning or just the end, and I can choose where to add it. The particular parameterization I've implemented here for the kinematic trajectory optimization (I'll say a little bit more in the next bit) actually separates out the path that it's optimizing from the time parameterization, and that allows us to write a few more things convexly. So that's the only subtlety to your question. So yes, you can definitely say: halfway through the execution I
don't want to be in collision, but later I can. And there are other formulations that might be more natural: if you know that you have a ball, for instance, flying through the air and you want to grab it at a certain time, maybe you don't use that separate parameterization, you combine it. You can write combined constraints even in this formulation too. I emphasized the things that could be convex when they could be, which is the reason, but you could certainly write the non-convex constraints in this solver. I actually worked pretty hard on this; I got a little obsessed with trying to make this good, partly because last year I didn't have this for you guys, and I felt that a lot of projects would have benefited from it. This is actually the code that's running. I'm going to push it to Drake and go through code review on it this week, so it should be locked in. It's got full test coverage; it's pretty mature and good. What happens when I don't solve it twice? So, okay, if you do nothing, then it's just going to pick q as some slightly non-zero trajectory, just to avoid the degenerate solution at zero; basically the default trajectory knows nothing about your problem. For the second problem I actually didn't have to solve it twice, but I can show you. Yeah, let me uncomment this, just to see the initial guess. I didn't know you were going to ask that, but I thought it was interesting, so I put it in just in case. I just made an initial guess: a trajectory that went from here to here, just a comfortable position, where I only rotated the base. And the reason I needed to do that at all was because the solver kept trying to go the other way around; it would be like here, and hang itself, basically. So I thought, okay, I have to at least tell it I wanted to go that way, and that was enough, and it found good
solutions. The question is why I picked that particular cost. So yeah, I did minimum time and minimum path length. There are a couple of other formulations that are naturally convex in this parameterization, so I biased myself towards the convex objectives. Acceleration we don't know how to do convexly, ironically; you would think if you could do one derivative you could do two and three, but we know how to do positions and velocities, and we don't know how to do accelerations, because t squared makes things not convex, roughly. So that is reflecting a bias, not just as a roboticist but as the guy who had to type the code in, and I know it makes the solvers better. There are other ways, which I will talk about at some point, that you can minimize acceleration convexly if you choose: after you've optimized your path, if you lock the path and then just optimize the time trajectory, then you can put even more constraints on accelerations and the like and optimize that. So for instance, for the Dexai use case, I would recommend to them to solve the best optimization you can with the weaker constraints, but then at the last instant solve it again with the path fixed, so you're just making sure you're moving exactly along the rails of that path. So there's a whole toolkit here, and a lot to know, but those are good questions. Other questions about that? Yes? That's a great question. So I have done nothing in this formulation to talk about uncertainty; it's just a hard constraint: be in the object or not. It is super useful to talk about uncertainty, and there are some forms of uncertainty that you can put in in nice ways, in convex ways and the like; typically Gaussians are a good thing. But if you move a complicated robot through its nonlinear kinematics, Gaussian distributions don't stay Gaussian and the like. So yes, there's a topic that I would lump under
belief-space planning, or planning under uncertainty, or planning for information gathering, that would address that very nicely, but it typically means harder optimization problems. Yeah, yes? Is this fast enough to do the trajectory optimization at each time step, on the fly, if the thing is changing? That's a really good question: can you solve this fast enough to solve it online? That idea would be called model predictive control, MPC, if you see that name: roughly, re-solve at each time step with the shifted data, as you move and as you know what you've executed. I would say I don't have evidence for this example one way or the other, because for the MPC problem it would be naive to solve the problem from scratch at every time step. You want to use your previous solve not only as an initial guess for the next solve; typically there's a lot of problem data that your solver collects while it's solving the first one that you want to pass along. That would be called a warm start for your solver: you say I'm going to solve an almost identical problem, I just changed it a little bit, so reuse as much as you can. And in that regime I do think this would be real-time compatible, but again, it's going to be limited to local changes. If it suddenly had to go a different way around the bin, or around the cameras, it's unlikely; you can't guarantee it's going to solve that. In fact, in general, solving the nonlinear optimization on the fly is a little risky, I'd say. Yeah, for Atlas, for instance, we did great trajectory optimization offline, but we didn't do it online, because we didn't want to be running along and then all of a sudden it says "I can't find a solution." What are you going to do, right? We didn't consider that robust enough. The people who do MPC online typically restrict themselves to convex problems. Actually, there's a ton of autonomous driving companies that are solving the nonlinear version
on the Fly which is terrifying but but uh but uh so maybe it can be made robust enough but typically it's in some envelope where it's been sufficiently vetted okay so let me tell you a little bit about the way to write these continuity constraints I think I said most of my dirty laundry there's probably a little bit more dirty laundry it's a pretty good example okay so how do I write these um continuity constraints so remember when we did the piecewise pose when we designed our keep our sort of key points keyframes by hand and then just interpolated between them we interpolated those using there was the class was piecewise pose because that was trying to do something clever about the quaternion interpolation on the rotation but in general this is a piecewise polynomial it was in those cases it was typically a cubic and and it's often called cubic splines okay and so so roughly speaking for each interval of the piece for each piece you know a piecewise polynomial would just say I've got some coefficients that I'm trying to fit these are my decision parameters in this case times my coefficients which would typically be something like this I to zero to three okay something something that is just a polynomial in time around some nominal um you know relative to some starting place in the in the segment and it's just a a this linear coefficient on some non-linear function of time this is a way to just say you know I've got some polynomial in time which the parameters of that polynomial this is like you know t plus 2T squared plus three T cubed or something like that okay the extra logic of making it a piecewise polynomial just says I'm going to make some intervals here and in each interval I'll ask this to be a polynomial of degree three this would be a degree three polynomial this degree degree three polynomial and you could put constraints on that make them smooth okay so this is just a piecewise polynomial representation of a trajectory if you ask in your optimization 
formulation: if you represent this as a piecewise polynomial, and instead of choosing q at time 0 and q at time T or whatever as your decision variables you make the coefficients of this spline your decision variables, then that parameterizes a curve through space, and the decision variables fit right into our optimization problem. I can, say, make the forward kinematics at a certain time satisfy my objectives and constraints: find the alpha that makes the forward kinematics at a certain time satisfy them. I don't feel like everybody's with me, right? What I'm saying is: take the forward kinematics, let's say at time zero, as a constraint in my optimizer. I can just write x_G = f_kin(sum over i of alpha_i t^i), with t set to zero. Evaluating the time terms just gives me some constants, and I put that through my kinematics; that's just another smooth nonlinear function of the parameters alpha. So instead of choosing q directly, I'm going to choose the coefficients of my spline; those are the standard decision variables in an optimization. It turns out that people make different choices here. The land of polynomials is a rich land, right? There are lots of different polynomials. These piecewise polynomials, representing just t to some power, have some expressive power and some numerical properties, and there are different classes of polynomial parameterizations that have different expressive power and different numerical properties. You might have heard of Chebyshev polynomials, Legendre polynomials, Bezier polynomials (Bernstein and Bezier are the same thing), and they all have different properties. But I think this is a super simple one to understand: forget everything else, what if I just wrote alpha_i times t to the i? I should have just written that; that's easier. I mean, for each segment you need to subtract off the
interval, but I'm just saying alpha_i times t to the i. This would be the simplest piecewise polynomial. It turns out there are other ways to write it: instead of the simple monomials t^i, you can use some other polynomial functions of t that give you different properties, and they can parameterize similar curves. The one I chose for this kinematic trajectory optimization was a B-spline, whose pieces are Bezier polynomials; not the only choice, but it was my choice. It's again just a particular form of a curve like this, where each piece is still polynomial, and the basis functions turn on and off in an annoyingly complicated way. The reason I chose these Bernstein polynomials, these Bezier polynomials... it's mostly called Bezier, and I'm pretty sure Bernstein is the same thing, but let me not write that in case somebody walks in and says "what is he talking about." What's that? The B in B-spline stands for "basis"? Oh, is that right? Okay, that's lame: he says it stands for basis, not Bezier, but the basis is a Bezier curve. Yeah, okay, I believe you. The reason I chose this representation is that it has a couple of nice properties. The biggest one is a property that's called the convex hull property, which says, basically, that the decision variables, the alphas that are in this curve, have a geometric interpretation: alpha_0, alpha_1, alpha_2, alpha_3 are the control points. The decision variables become the control points. And I have a guarantee (for a particular order and degree of the polynomial) that within each segment of the polynomial the curve will be some convex combination of the control points, so it stays inside their convex hull. Then I get another region for the next segment, and I know that
the curve is guaranteed to stay inside there. Because of that, this is a nice property: if I want to write that q(t) is inside the joint limits, between q_min and q_max for all t, then I can say that if all of the control points are inside my joint limits, I can guarantee that for all time I will never exceed a joint limit. I don't have to sample exhaustively to guarantee I haven't violated any joint limits; I can leverage the convex hull property. It also has the property that the derivatives are still B-splines, so I can also enforce the joint velocity constraints, v_min <= qdot(t) <= v_max for all t, completely with this parameterization. That would be hard to guarantee with a piecewise polynomial represented in the monomial basis, but it's possible to guarantee with a Bezier-type spline. Collision-avoidance constraints: I do not have that guarantee. Once I apply a nonlinear transformation to my q's, I cannot guarantee that this curve will not run into a table. So a common problem would be: I've sampled, say, 50 times along my trajectory to not be in collision, but somewhere between time 37, which was out of collision, and time 38, which was out of collision, it went right through. That happens. So the more robust solvers, even if they're going to use this kind of a tool, will write those constraints, maybe give some margin, but then after the solution they will sub-sample, potentially densely, in order to see if that happened. The code that we used at TRI would do that, and if it ever found a violation, it would add a new constraint and re-solve on the fly; you can add layers of robustness like that to try to avoid those possible pitfalls. But in this formulation, corners happen: there's no rigorous certification that you won't clip a corner. So this is actually what we used heavily in the dish loading. That's not completely true.
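The convex hull property described here is easy to check numerically. Below is a small sketch (not Drake's implementation; it uses scipy's BSpline, and the six control-point values are made up for illustration) showing that a clamped B-spline never leaves the min/max range of its control points, and that the same trick applies to its derivative, which is again a B-spline:

```python
import numpy as np
from scipy.interpolate import BSpline

# A clamped cubic B-spline for one joint; the control-point values
# here are arbitrary, just for illustration.
degree = 3
control = np.array([0.0, 1.2, 0.4, 1.5, 0.2, 1.0])
n = len(control)
# Clamped knot vector: repeat the boundary knots so the curve starts
# and ends exactly at the first and last control points.
knots = np.concatenate([np.zeros(degree),
                        np.linspace(0.0, 1.0, n - degree + 1),
                        np.ones(degree)])
q = BSpline(knots, control, degree)

# Convex hull property: q(t) never leaves [min, max] of the control
# points, so joint limits on the control points bound the whole curve.
t = np.linspace(0.0, 1.0, 1001)
assert np.all(q(t) >= control.min() - 1e-9)
assert np.all(q(t) <= control.max() + 1e-9)

# The derivative is again a B-spline, so velocity limits can be
# enforced the same way through its (finitely many) control points.
dq = q.derivative()
m = len(dq.t) - dq.k - 1  # number of derivative control points
assert np.all(dq(t) >= dq.c[:m].min() - 1e-9)
assert np.all(dq(t) <= dq.c[:m].max() + 1e-9)
```

This is exactly why bounding the control points certifies joint and velocity limits for all time without exhaustive sampling, while collision constraints (which pass q through nonlinear kinematics) get no such certificate.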
we used both this and a sample based method that I'll show you after but this is a a nice example where it had to solve some pretty hard problems so the mugs were placed in this video to show the trajectory optimization they were placed in a few different configurations in the sink that one was the mug was sideways and it was able to pick a grasp that was and it was if successfully found a path that went all the way from the mug in the sink all the way to the the rack but when it's top down it can't find that solution so it had to stop set down the mug pick it back up and that was because the kinematic trajectory optimization could not find a solution that would jointly satisfy the constraints we put on the pickup and the set down when the mug was in a certain pose so it had to do this extra step of rotating the mug which was a the slowest thing about the dish loading robot we used to we actually had people come in and we would time the people versus the robots and people can do in hand reorientation and it's like an unfair Advantage so like so we had someone tied their arm tied behind their back and they could still beat our robot because they pick up a mug and they just like turn it around and stick it in and we'd had to go and set it down and then move the arm like this in order to pick it up and um you know that was the best we could do with that hand I yes motion padding and going into task planning and then coming back into motion plan that's great so so there was an element of task planning there too the the absolutely right so the task planner would check conditions based on whether the trajectory optimization could succeed there's actually there's a simple version of this which is what we used at the task planning level you could you can actually just solve if you forget about this there's a there's like a shortcut you can solve just to ask whether you should solve the problem so can I find the same grasp on the mug that satisfies the conditions here and 
here so you put a cons the only constraint you put between the two separate Solutions is that the relative pose of the mug compared to the hand is the same so so that would be whereas so far we've done grasp selection where we just looked in the sink and and forgot about what we're going to do and just said can I find a grasp but if you could say I have to find a grasp that I'm going to be able to set down later then that puts additional constraints on how you pick your grasp and that's a quick way to verify you know to to say I should even Explore that solution or not even Explore that solution a smaller optimization problem this one I think gets stopped between um but we have uh we have stronger methods that can go through that right uh let me let me not be it's possible to be planned all the way through there's some planning time pauses in that one the question is does it take time to plan here is it going all the way through yeah it might have I think he did plan all the way through we have to ask honkai so the question about the task planning is actually a really good one I was going to try to um make that point too so there's there's versions of this trajectory optimization problem that do creep up into trying to solve the task and motion planning problem one of my favorites is from Danny and Mark um where they're solving you know this sort of multi-step optimization problem using trajectory optimization okay um they require they do have a higher they have a trajectory optimization compatible higher level planner that turns on and off constraints in a branch and bound kind of uh kind of way and we I think will depending on which Boutique lectures we pick towards the end I we might spend an hour and a half talking about task and motion planning okay but just to say this this is one of the approaches that can sort of go at the distance good I'll call that a day kinematic trajectory optimization is basically ik where you just push a polynomial through your ik 
solver; that's the big message. Okay, see you Thursday. I'm happy to answer any project questions if people have them, too. There's probably a good answer for that depending on which defects we're talking about. Yeah, that's good. Okay, I'm happy to take that question if someone wants to send me an email, or just give us the context on the YouTube at a certain time or something. Yeah, I tried it again: I opened the connection and it didn't work yesterday, and then AJ says it's a good thing... yeah, it's possible, but as long as it's working. I'm pretty sure it becomes an unlimited number of Meshcat instances once you go through the nginx things. What you guys saw was basically a 502 Bad Gateway from nginx. Okay, the time that that happens is when people... if the output of your notebook was saved from a previous session and you don't start Meshcat again and you click on that, that's when I sometimes see it. I thought every time you start up the notebook you have to run your start
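The overall recipe from this lecture, shortest-path costs over a polynomial path with collision constraints sampled along it, plus the "solve twice" warm-start trick, can be sketched without Drake. The following is a toy scipy version (the 2-D point "robot," the circular obstacle, the sizes, and the waypoint count are all invented for illustration; Drake's KinematicTrajectoryOptimization optimizes B-spline control points rather than raw waypoints):

```python
import numpy as np
from scipy.optimize import minimize, NonlinearConstraint

# Toy 2-DOF problem: waypoints q_1..q_N in the plane, with a circular
# obstacle standing in for the minimum-distance constraint.
N = 12
q_start, q_goal = np.array([0.0, 0.0]), np.array([1.0, 0.0])
obs_center, obs_radius = np.array([0.5, 0.0]), 0.2

def unpack(x):
    # Interior waypoints are the decision variables; endpoints fixed.
    return np.vstack([q_start, x.reshape(-1, 2), q_goal])

def path_length(x):
    q = unpack(x)
    return np.sum(np.linalg.norm(np.diff(q, axis=0), axis=1))

def clearance(x):
    # Signed distance of every waypoint to the obstacle (>= 0 required),
    # sampled at the waypoints only -- corners can still be clipped.
    q = unpack(x)
    return np.linalg.norm(q - obs_center, axis=1) - obs_radius

# Stage 1: solve while "pretending the obstacle is not there".
x0 = np.linspace(q_start, q_goal, N)[1:-1].ravel()
stage1 = minimize(path_length, x0, method="SLSQP")

# Stage 2: re-solve with the obstacle, warm-started from stage 1
# (nudged off the degenerate straight line so gradients have a
# direction to push out of penetration).
guess = stage1.x + 0.05
con = NonlinearConstraint(clearance, 0.0, np.inf)
stage2 = minimize(path_length, guess, method="SLSQP", constraints=[con])

q_opt = unpack(stage2.x)
assert stage2.success and np.all(clearance(stage2.x) >= -1e-5)
```

Note how the straight-line guess sits exactly in penetration, which is the planar analog of the box-on-box failure from the demo: without a warm start there is no useful gradient direction out of the obstacle.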
Robotic Manipulation, Fall 2022
Lecture 9: Grasp selection
Okay, let's do it. This is day two of our mini-series on going from one object in a bin to a cluttered scene, with a lot of the complexity that goes along with that. Just to make sure I say it: once we get this far into the course, I've learned over the years that most people appreciate me just sort of saying, okay, what have we done, what are we doing, why are we doing this today. So I'll try to put these up most of the time, if I remember. Right, we started with the basic pick and place, where we said we had a single known object and we even assumed we knew its pose; someone told us its pose a priori. When we did the geometric perception, we went to a single known object where we still had the model but we had to estimate the pose. And this week we're trying to do many diverse and unknown objects with unknown poses. The task is relatively simple, though; it's just going to be a clearing-the-bin kind of task, and that looked like this. It turns out, I looked at this afterwards, and actually the Spock one is right there; that's the one right there, and you'll see that it's going to get picked up in a second. There it is. But sometimes it gets thrown. Actually, I want to watch this for just a second longer this time, because this video shows some of the subtleties. To get the ones out of the corners, it had to go in and sort of nudge things out of the side, because it wasn't able to find a stable grasp. And at some point it tried a few times, gave up, and started switching to the other side. We actually got to the point where we would almost always clear all the bins, and we'll do a little bit of that with you guys too, talking about the sort of task-level reasoning and how you program that. But we're
going to be today thinking about the easy version the easy picks the ones where we can just go in and and grab and don't have to do anything more sophisticated so I started this section talking a little bit about contact mechanics and even contact simulation and partly because our tools are very optimized for it we're going to start off by just you know dumping a bunch of things in the bins we'll dump a bunch of red bricks or Cheez-It boxes or or mustard bottles into the bin and my preference would be to actually you know design an algorithm that would solve a small optimization problem to find a static equilibrium and I have a little bit of comments about that in the notes but that actually is a very hard problem to solve and it's a much easier problem to just let the contact simulation engine do its work and initialize things out of penetration and just let physics bring you into a nice static equilibrium so we talked a little bit about that but I use that also as a chance to introduce a few of the concepts from contact mechanics most notably you know normal forces the contact frame and the friction cone and we're going to build on that today we're going to use the friction cone idea a lot because we want to think about what makes a good grasp where should I go in and grasp in the bin and there's a few different views of this it's changed dramatically over the last 10 years but for many many years there was a sort of the classic view in fact this I would say a lot of manipulation research and Robotics fell under the the umbrella of sort of grasp analysis or grasp optimization if you look in the handbook of Robotics under grasping you'd find a couple nice chapters that go through I'll give you a a sense of what they talk about but actually this isn't as big of a topic anymore because most of these analyzes assumed a lot about the object you assumed a lot of knowledge about the object and in practice when you're getting uh getting your information from a camera you 
don't have enough information maybe to to do some of these analyzes that are so clean and so so nice but maybe not as well suited to the wild west of a robot in the home okay and when when the Deep learning Revolution started um oops oh I put that in the wrong place but we'll come back to that okay uh when the Deep learning Revolution started a different approach showed up which was we're basically going to just um train a deep net to basically tell me where is a good place to grasp given a point Cloud coming in or an image coming in even tell me where I should grasp okay and there were a handful of things that a lot a handful of approaches that came out around the same time that did that one was by Rob Platt and and his students there was Dex net there's Dex net four or five by now but uh that was 2.0 was the one that was doing a lot of these grasp analysis and training grasp methods there were a handful of approaches like this okay but they changed the picture they changed the landscape of what we were seeing our robots do into this sort of um instead of one known object that you're trying to reason about very carefully this was the first time we started seeing videos that looked more like this where the manipulation research was dump a pile of stuff random stuff and start operating on it and that was I think a really important shift for the field to make was to to just go to the sloppy manipulation side of things outside of the world okay hmm so this is roughly you know what I've already shown you but but this was one of the maybe the first version of that these methods took um you know a completely deep learning approach they just trained possible often in simulation to try to find what was a good grasp and just uh or or used some simple heuristics to do it uh you also see it in uh picking sort of versions of this this is from Alberto's group and part of their Amazon picking challenge they had very much the same kind of thing they had a deep Network that was 
taking RGB images in saying these are the places on the image where these are the hot spots where you should probably try to pick and then a relatively simple algorithm that would just go down and with suction it was very simple to just go down and suck there okay but this is um around the same time this is uh Lucas who was a student in my group we were kind of uh we're saying do people do we really that's a great problem formulation but you do you actually need deep learning for that I mean there's there's actually there's pretty simple strategies that are that would do pretty much the same thing so as almost a thought experiment um Lucas coded up roughly what we're going to talk about today which is a no deep learning just geometric reasoning version of the problem and it turns out as the as things have evolved people are now people often combine these two worlds so they'll use for instance the geometric reasoning with known with some perfect Point clouds or something in simulation to give a score for the Deep Learning System to learn but I think you'd be surprised how far you can go with just the the pure geometric version of this thank you okay so I want to start digging in and talk about what makes a good grasp what do we learn from our our older grasp analysis and what can we take into the unknown object case and what can we have ultimately we'll talk about what do we do how do we do it with deep learning so okay so the um there's of course layers of thinking about uh the complexity of grasping the simplest case would be if you have just from kinematics alone the ability to know that you've got a good grasp so let's say a kinematic only analysis extreme form of this is a notion called form closure Okay so it starts with an object of some interesting shape I guess I'm making a z I didn't really know where that was going when I started but okay um and then in the case of a dexterous hand I might summarize the case of my dexterous hand holding this z-like object 
with the location of a handful of points my fingers Okay and like we talked about last time most of our analysis uh is actually summarized through Point models of content okay so I'll assume I have some fingers here in a purely kinematic analysis I'm not actually going to think about forces at all I can ask a simple question which is have I completely caged the grasp okay and I'll make it formal in a second but the kinematic question would be is there any perturbation to the pose of the object that I could make such that such that I could I could move without causing a violation of my non-penetration constraints if it is the case that no matter what I moved what I tried to do to move I would always be going into penetration on one of the fingers then I've completely caged I mean it's if I imagine if I had fingers everywhere then it's it's trivial that there's like nowhere I could move I've completely pegged this thing down there's no directions that it can move okay that would be called a form closure form closure so let me make that more precise so let me say it this way if the fingers are held fixed this is all in the object coordinate right or relative to the object my fingers are fixed then the object cannot move in any direction this is um you know this is a very conservative notion of a successful grasp it requires me to have like completely enveloped the the object but this is the maybe the um the simplest to understand right and the way that you can write those conditions down we already have the language for right so if if we were to use for instance our sine distance function which is a function of Q right so this would be let's say the sign distance of the ith contact then you can actually check the conditions form closure by just analyzing the variations of of that side distance basically and so the way it's typically written in the form closure world would be let's distinguish separate Q into Q object and Q robot we'll split those two in half and I'll 
assume that q_robot is fixed. What I'd like to ask is: if I were to make a change to q... so let me write this as q_object and q_robot, just a more verbose way to write the same thing, to make that distinction. The question is: can I find a Delta q_object that moves the object while keeping all the non-penetration constraints satisfied? Formally: if, for all i, phi_i(q_object + Delta q_object, q_robot) >= 0 implies Delta q_object = 0, then I have form closure. That's a kind of funny way to write it, but the standard reading is: my goal is non-penetration, so my signed distances have to be greater than or equal to zero at all the fingers, and my robot doesn't move. If I consider all possible Delta q's which satisfy this non-penetration constraint, and basically the only Delta q that satisfies it is zero, then I've got form closure. That's the formal definition. It's an enveloping, bracing contact. It's actually fun to think about, and if you go to the Handbook of Robotics kind of text, you'll see all kinds of interesting cases. So it's written like this. This suggests that taking the gradients would be the right way to look at it, to look at the gradients of these functions; and if the gradient is full rank, you would expect that sort of condition to hold. That's true, but it's actually richer than that, because you can find some interesting situations. The classic one is the hourglass, where I have an object that looks like this and I have fingers that are here, for instance. It's possible to have form closure, but you can only see that it's a form closure if you consider the curvature of the surface; this is a case where you actually have to take second derivatives of those constraints in order to see it. Okay, so that's the definition. The definition is less important to me than that you understand the basic concept: I've completely caged it, this thing
can't go anywhere; because I put my fingers down rock solid, the object can't move. But that's not going to get us where we want to be here, because it would require you to somehow get under the object completely, and that's not what's happening: we're coming in from above, we're making a grasp, and we want a less conservative version of stable grasping. So let's loosen up a little and think about static analysis next. You could think of form closure as the limit where the fingers have to do the work without friction helping; it's the frictionless case, and the grasp is still guaranteed to be stable. But if I go to a full statics analysis and think about friction cones, then I can loosen my requirements and just say: as long as the friction cones are getting the job done, the contacts with the help of friction, then I can still call it a stable grasp. So we're going to go from form closure to force closure, if you've heard that term, but let me get there through contact wrenches and statics analysis. Do you understand when I say statics? There's kinematics; there's dynamics, which is F equals ma, where you have accelerations; and there's statics, somewhere in between, where you do think about forces but you assume your accelerations are zero and you're looking for static equilibrium conditions. We could go to full dynamics, and in fact we should, but we have a nice stepping stone here through statics. So we're going to think about: if I now have point contacts with friction cones, what can I say about the total stability and robustness of the grasp? To do that I want to go through the slightly more general form of the friction cone, which is the contact wrench. You remember your spatial velocity; we already talked about one important spatial quantity, the
spatial velocity, which in general has components for the rotational part, the angular velocity, as well as the translational part, and an algebra that goes with it. Well, today we're going to use spatial forces. It's notation-heavy again, but the point is that you don't just have forces expressed in a coordinate system; you have forces that are applied at a certain point. To capture that, the monogram notation uses this form, which I'll spell out here: the force on body A, applied at point P, expressed in frame C. What's important is that it has an algebra and it has six components: once again a torsional part (a torque, a moment, all the same word) and the translational force. So in 3D it's a six-by-one vector. All of these spatial vectors have similar algebras. For instance, if I want to change the expressed-in frame, I can do that with a rotation matrix; that's the common operator. The way I change the point it's applied at is also a pretty simple thing, which I hope is kind of familiar. If I had a force applied to a body here, and I wanted to instead summarize it as a force applied here, if this was P and this was Q, and I wanted to write F applied to body A at Q, how can I write that as a function of the force applied at P? I don't expect you to answer, but I'll give you a second to think. It's probably going to involve a vector, the vector from P to Q. The force components themselves, the x, y, z pieces, those stay the same; so if I break this down in my coordinates, the force here is just f A at P, unchanged. But if I apply a force here that had zero torque, for instance, then up here it looks
like a force, but it also provides a torque, and that torque is going to be the cross product: the cross product of this vector with this force. So I could summarize the same force as a force plus a torque here, just as well as a pure force there; that's our basic cross-product force computation. So, to get it right: tau A at Q equals tau A at P, plus the vector from Q to P crossed with f at P. I think I got my signs right. That's the important point: if you don't love statics, that's okay, but any spatial force that I'm applying anywhere on the body can be summarized as an exactly equivalent force on the body at a different point, with a different torque and force, having exactly the same effect on the body. And in particular, when I sum many forces up, I can summarize their total contribution as a single force and moment at a particular point: I just put them into the same coordinate system and add them together; adding is part of the algebra. So let's think about our friction cone. Any questions about that? Let's think about how to reason about different friction cones and different robustness, now that we have the basic language of spatial forces, and we're reminded there's probably a cross product involved. By the way, this cross product also shows up in the transformations of spatial velocity. I heard that maybe we didn't get that through on the problem set; a lot of people got that one wrong, people didn't use the cross-product term, so we'll post a little summary of the right answer to make sure people get it.
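The shift operation just described (same translational force, the torque picks up a moment arm) is a one-liner. A minimal numpy sketch; the helper name is mine, not Drake's API:

```python
import numpy as np

def shift_wrench(tau_P, f, p_QP):
    """Re-express a spatial force applied at point P as an equivalent
    spatial force applied at point Q.

    tau_P : (3,) torque component when the force is summarized at P
    f     : (3,) translational force (unchanged by the shift)
    p_QP  : (3,) vector from Q to P, expressed in the same frame

    Returns (tau_Q, f): the torque picks up the moment arm p_QP x f.
    """
    tau_P, f, p_QP = map(np.asarray, (tau_P, f, p_QP))
    tau_Q = tau_P + np.cross(p_QP, f)
    return tau_Q, f

# A pure force of 1 N in +z applied at P, summarized at a point Q
# that sits so the vector from Q to P is [1, 0, 0]:
tau_Q, f = shift_wrench([0, 0, 0], [0, 0, 1], [1, 0, 0])
print(tau_Q)  # the shifted wrench now carries a moment about the y-axis
```

Summing many contact wrenches at a common point is then just shifting each one there and adding componentwise, exactly the algebra described above.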
Yeah, these cross products are standard fare in spatial notation; they show up whenever you're mixing rotations with translations. Okay, so let's start with a familiar case. Remember the example where the box was on the ramp? I had a box with a couple of little feet on it, so that I knew exactly where the contact forces were, a box with mass m, and we said we had friction cones that looked like this, and then a force of gravity applied at the center of mass, like this. You remember the notation: the normal force was here, the friction force was applied in these directions, and the Coulomb friction constraint was that the magnitude of the friction force is bounded by the friction coefficient times the magnitude of the normal force. So the question of whether this thing can be in static equilibrium can be written as: can I find contact forces balancing mg, which written as a spatial force applied at the center of mass has zero torque and force mg? Can I find an equal and opposite set of forces at the contacts that sum up to equal this? Maybe as an optimization problem: find forces at the contact points, f at contact i, such that the sum of the contact forces equals mg, and the torques balance too, in general. I'm trying to find a balance between these things, subject to each contact force being inside its friction cone (and note the torque terms become non-zero because of the coordinate transform we just did). And it turns out the geometry of making these sums match is easy: summing forces is easy, and summing friction cones also turns out to be easy. If you want to ask whether there's a set of forces inside the friction cones that will resist this, you could write that as a
very simple optimization problem: find me some forces, subject to the friction-cone constraints, so that the sums match. But I was also trying to say last time that there's a beautiful geometric picture of this. We said: if that gravity vector lands inside the friction cone, then there's a static equilibrium, and if it doesn't, there's not. So I want to talk you through the slightly more general version of that now. Do people remember that statement? If your task is to find a force that has to live inside this cone, where the sum of these forces has to be equal and opposite to this, there's a geometry to that. The sum of forces is the element-wise version of it, but I can also take the sum of sets, and ask whether this vector is an element of the sum of those sets. That's called the Minkowski sum. If you don't know the Minkowski sum, that's fine; I just want you to get the intuition, but if you're new to it, it's super useful to know how general this idea is. Given this friction cone and this friction cone at their locations in space, with the basic spatial operators I can tell you exactly which forces and moments I can resist at this point. The Minkowski sum of these two, if they're in line, is actually just going to look like a cone like this, and if the negative of gravity is not inside that, then I'm not going to be able to resist it. The Minkowski sum says a vector is in the sum if I can take one element out of this set and add it to any one element out of that set and get the vector I want. It's just a notion of set addition, and it allows us to summarize the entire collection of friction cones into a contact wrench set. So this
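For the block-on-a-ramp picture, the feasibility question has a closed form once you exploit the symmetry and split the load evenly between the two feet. This check is a sketch under that simplifying assumption, not a general solver:

```python
import numpy as np

def ramp_equilibrium(theta, mu, m=1.0, g=9.81):
    """Can two symmetric frictional feet hold a block on a ramp of
    angle theta (radians) with friction coefficient mu?

    Works in the ramp frame: gravity splits into a normal component
    m*g*cos(theta) and a tangential component m*g*sin(theta).  By
    symmetry each foot carries half of each.  Equilibrium is feasible
    iff each foot's force stays inside its friction cone,
    |f_t| <= mu * f_n.
    """
    f_n = 0.5 * m * g * np.cos(theta)   # normal force per foot
    f_t = 0.5 * m * g * np.sin(theta)   # tangential force per foot
    return f_n >= 0 and abs(f_t) <= mu * f_n

print(ramp_equilibrium(np.deg2rad(10), mu=0.5))  # shallow ramp: sticks
print(ramp_equilibrium(np.deg2rad(60), mu=0.5))  # steep ramp: slides
```

This reproduces the classic condition tan(theta) <= mu: the block is in static equilibrium exactly when the gravity vector lands inside the friction cone.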
thing here is a contact wrench cone, or contact wrench set, and it's a beautiful object, in particular because even for very complicated bodies and very complicated friction interactions, if you know the contact points and you know the friction coefficient mu, then this is still a convex set. Given convex cones for your friction at known positions, it's actually a really nice object to work with. So we've lifted the question of whether this block will slide into a bigger question: does the force of gravity live inside the contact wrench set? I feel I didn't say that super well, so please ask questions. Yes: is it possible to step through an example of adding those two cones together? Okay, so the question is whether I can step through an example of adding those two cones together. The problem is I'm in 2D on the board, and it's almost always trivial when you have unbounded sets, but let me try to convince you that it's trivial, and then maybe I can find a bounded-set version of it. Let's say I had a box, and I had a friction cone here, and a friction cone (I'll do a red one) over here. And let's just ignore rotations for now, because I'm making up an example on the spot and I don't know how to draw torques out of the board, although the picture is actually really good. All right, so we're in 2D and we're ignoring rotations. Then for the question of what the Minkowski sum is, if I wanted to ask what forces I can resist with those two frictional contacts: first of all, because pure forces just translate, I can move them to the same coordinate frame with no extra operations. I'm just going to move this one over, because the forces don't depend on the position; only the moments depend on the position. And now if I add those two sets together in the Minkowski sense, meaning for any vector,
can I find one vector in this set and one vector in that set whose sum is, let's say, this vector? The way to think about Minkowski sums is to take one set and drag the other set along with it; you're going to convolve the other set. So you draw: I've got a point here, but I also have a point here; if I chose this point in the blue set, I can still choose any of these points in the other set. It's the shadow that gets swept out when you drag the second set across all points of the first set. Those are the forces that are admissible given those two friction cones. And I think it's always trivial in 2D in this sense: as long as I have non-degenerate friction cones, even if they're not lined up, I can resist any translational force, because the cones are infinite. You could take your brick and just squeeze the heck out of it and resist any force; there's no force you couldn't resist in this simple analysis. But if you allowed orientation changes, then the way I've drawn it, this would be susceptible to torques like this, because both of these forces are trying to resist motion in this direction, and these forces are trying to resist motion in the same direction. If you applied a torque like this, my fingers wouldn't be able to stop you without moving, so you'd actually be able to break free of me like this. What shape would that be? It turns out, because of the relationship with forces in the Cartesian frame, if I wanted to put them into the frame of the origin, then my forces here, through the cross product, would be resisting things like this. That would be a vector either coming out of the board or into the board; there's a sign that matters here, but basically both of these are going to put a vector, let's call it
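In 2D, the Minkowski sum of convex friction cones is itself a convex cone generated by all of the edge rays, so membership can be checked by trying to write a target force as a nonnegative combination of a pair of edges. A small sketch under those assumptions (point contacts, unbounded cones; helper names are mine):

```python
import itertools
import numpy as np

def cone_edges(normal, mu):
    """Edge rays of a 2D friction cone with the given unit normal."""
    n = np.asarray(normal, dtype=float)
    t = np.array([-n[1], n[0]])          # tangent direction
    return [n + mu * t, n - mu * t]      # the two extreme rays

def in_cone_sum(target, edges, eps=1e-9):
    """Is `target` a nonnegative combination of the edge rays?
    In 2D it suffices to check every pair of edges."""
    target = np.asarray(target, dtype=float)
    if np.allclose(target, 0):
        return True
    for e1, e2 in itertools.combinations(edges, 2):
        A = np.column_stack([e1, e2])
        if abs(np.linalg.det(A)) < eps:
            continue                     # parallel edges, skip this pair
        coeffs = np.linalg.solve(A, target)
        if np.all(coeffs >= -eps):
            return True
    return False

# Two upward-facing contacts under a box, mu = 0.3:
edges = cone_edges([0, 1], 0.3) + cone_edges([0, 1], 0.3)
print(in_cone_sum([0, 1], edges))   # reaction to gravity: resistible
print(in_cone_sum([1, 0], edges))   # pure sideways push: not resistible
```

This matches the lecture's point: identical upward cones can resist a cone of near-vertical forces, but not arbitrary ones; cones that oppose each other cover far more of the wrench space.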
out of the board, and if I'm off by a sign I apologize: both of these are going to give me a cross product that's pointing out of the board. So a torque that's pointing into the board (I got it right, thank you) is one I would not be able to resist, because the Minkowski sum of two vectors coming out of the board cannot produce a vector going into the board. Is this solely directional, talking about infinite forces? Yes, great question. These analyses, thinking of it as a cone, assume unlimited amounts of force can be applied. There's a more sophisticated version where you have torque limits in your hand, which would imply a force limit here, so you'd get a bounded cone; we can do that with Jacobians and the like. But there's really this beautiful geometry for reasoning about all possible forces that I could get out of a set of frictional contacts. Now, what makes a good grasp? Sorry to have taken that a little longer than I meant. What makes a good grasp is that you have a large contact wrench cone: when I put all of my frictional contacts together, you're able to resist all of the possible disturbances that you might have. In general, maybe I don't even know the mass of the object, so no matter what the mass is, I want to be able to resist the force of gravity. That would mean I need a contact wrench cone that can resist arbitrary forces in the vertical direction. Maybe I think there's some guy that comes by every once in a while and applies torques about the y-axis, an adversary that walks into the lab every Tuesday at three and applies torques around the y-axis, in which case I'd want my contact
wrench cone to be robust in that direction. If your contact wrench cone contains all possible forces and moments, then you've achieved force closure; that's the definition of force closure. And as you can see from this simple example, it is possible, and not even that hard if you have enough fingers: in six degrees of freedom you need a handful of fingers and maybe a lot of friction, but force closure means I can resist arbitrary spatial forces. Spatial forces are also called wrenches; this object that is the stacked torque and force is a spatial force, or a wrench. So that's a beautiful idea. Yes, please? So, force closure is conservative: it says you must be able to resist all possible forces. If you are not able to achieve such a force-closure grasp, or if you have torque limits or other constraints on your robot, then you might not be able to ask for pure force closure; you might say, let me restrict my attention to the forces that I think are actually going to happen in this scenario. Good. So I hope the high-level message at least comes through: there's this contact wrench object, and there's a beautiful geometry to it. If you want to get into the optimization of contact forces, there's a beautiful geometry through the Minkowski sum. But there's also a very important practical lesson, and actually this picture is not so bad for telling it. One of the strongest lessons from this is about how to be robust. Let me make it not a box, let me make it a slightly more interesting object. If I were to pick places where I could possibly make contact, then, all things equal, if you want a big contact wrench cone it's really good to choose points that are collinear and pointing at each other. If I were to pick this set instead of this set: these are called collinear antipodal points, antipodal because the normals associated with those points, the normal here and the normal here, are
pointing like opposite poles, and they're lined up. You can write this analysis out in general, but it tells us that, all things equal, try to pick your points directly across from each other, and not at arbitrary places. This one is directly across from the other, but its normal might not be pointing at it; ideally you get your friction cones pointing right at each other, and that's a very robust grasp. If you're going to get two points of contact, try to pick them like that. Yes: what about shape, in the 2D case, no matter what orientation they are? So I think my example proved to be too simple. Even if the contacts aren't lined up, with no rotations you're going to resist any translational force; if you lined them up, you would also be able to resist any torque, but in this situation you were not. So with orientation included, this example is enough to see why it's better to be like this than like this. And the reason is that when the contacts are lined up, those two contribute opposite signs of torque, whereas in the other case they contribute the same sign; they're both torquing in the same direction around some point. It doesn't have to be the center of mass: as long as there's some point on the body about which they contribute the same sign of torque, you're susceptible to somebody coming along and applying a torque to you. Okay, so I think a very reasonable thing to do, if you want to pick up Spock the duck, is to take this important lesson from grasp analysis, and it can be applied with very little knowledge of the object. If I want to prove that the gravitational vector is inside a cone, that requires knowledge of the center of mass, knowledge of the object in some sense. But if I don't know that, then as
a good heuristic, I'm going to look for antipodal collinear points. What we're going to try next is this: you just look at a point cloud, with no notion of what's an object, what's not an object, or how much things weigh, but you try to find places on the object where you can get good antipodal grasps. That's a pretty darn good heuristic for getting a stable grasp, and we'll see how far it goes. Yes, your question. Let me make sure I understand: you're saying I pick some strange points, but then I turn my hand so that they are lined up? Good. So this is not about the current force you're exerting; this is about the normal of the geometry, which sets limits on what forces you could possibly exert. The shape of the object is not your current choice of grasping strategy or force. Okay, so let's step through it. The next thing I'm going to do is step through going from our point cloud: we're going to figure out how to estimate some normals, and we're going to apply that to doing grasp selection. I've got a demo for this. All right, it turns out you've already seen normal estimation a little bit: you saw plane fitting in your RANSAC problem, and we're going to have you do the normal-estimation details. We just gave that as a method in the plane fitting, but we're going to give you the problem of doing normal estimation; I think we picked that as one of the problems. But this is the basic operation: I've got my mustard bottle, I've got the point cloud of the mustard bottle straight out of the camera, and what I'd like to do is estimate the local geometry at each point. Here's how it works; let me try to get it centered here. Basically, first we have to process the point cloud into a reasonable form to do these operations. If it's too
noisy, then plane fitting might not be optimal; you want to do it on the densest possible point cloud, and you want to do it on all the points from all the cameras. So let's step through that sequence. The basic operation is: given a point cloud, I'm going to estimate its normals by fitting a plane to the n nearest neighbors. That plane defines the normal vector, and we'll see it actually also defines the directions of curvature, with the same plane-fitting algorithm. It's a very useful operation for trying to decide whether you're going to put your finger there or not. This single-camera view is not going to have antipodal grasps, because it's got no backside, but we can do it with multiple cameras and we'll do better. Okay, so let's figure out how that works: estimating normals from point clouds, that's over here. Actually, the first step is just some processing that you have to do. The processing I went through in that particular setup: first I took the point cloud from the cameras around the bins, and I cropped it, which is an easy thing to do based on an xyz bounding box, because the raw point cloud comes in with the other cameras in the view and the bins in the view, and that's all pretty confusing. So I crop to an xyz bounding box. Then we do normal estimation. Let me first say what the other steps are: then I'm going to merge the point clouds from multiple cameras, and then I'm going to downsample the point cloud to make it more reasonable for my grasp analysis. But the order of these things is important. The reason I do normal estimation before I merge the point clouds is that, as you'll see, fitting a plane to these points will tell me the normal but won't tell me which direction the normal points. So before I get rid of the correspondence between these
points and the camera, I have to flip the normals towards the camera. There's a simple little step: you estimated the plane, and you don't know whether the normal is this direction or that direction, but if the camera was over there, then you know the normal should be pointing towards the camera. So you need to know which points came from which camera in order to do that, and you prefer to do the normal estimation on the dense point cloud. Then you merge the point clouds into one big cloud, and then you can downsample and use more efficient algorithms from then on. It's kind of crazy to think that we do this all the time: compute all of the normals for all of the points coming in, using k-nearest-neighbor queries at every point. This was one of those examples when I started working with point clouds, I don't know how many years ago; I thought, no, surely you don't do it for all the points, all the time, on every frame? Yeah, they do. That's just what you do. You can multi-process it if you want, you can put it on a GPU now, but it's actually surprisingly lightweight computation and people just do it all the time. It seems inelegant to me, but that's standard fare. Okay, once we have our cropped point cloud, let's make sure the plane-fitting algorithm is clear, because it's actually super clean and super nice, and I want you to think through it with me. Actually, I guess this is a good stretch time; you guys want to take a quick stretch before I write down the next set of equations? People seem to still like it, the three of you that are actually commenting on the survey. Everybody should write comments; I know you're tired at the end of the problem set, but I want more comments, I want to hear what you think. All right, let's estimate our normals. How do we fit a plane to a bunch of points? It's super simple; I really like how clean
this is. So I'm going to write it as an optimization, because I always do. I have a bunch of points around my point p, and I want to estimate the normal vector; I'll call it n. I want to find the n such that, over all of my nearest-neighbor points i, the dot product between (p_i minus p) and n is zero, where p is the point at which I'm evaluating the normal. Ideally that dot product is zero for every neighbor, so what I'm going to do is minimize the squared sum of the dot products (the n whose dot products with all of my vectors are as close to zero as possible is the one I want), and I'll constrain n to be a unit vector. So what's the picture you should have in your head? I've got my point of interest here, I'll make it blue, and I have a bunch of other points in the scene that are my nearest neighbors. For each of those, the vector p_i minus p looks like this, and I want the dot product of each with n to be as close to zero as possible. And of course I'm only going to get the answer up to a symmetry: minus n is an equally good answer. The optimization is really nice, actually really simple. The way to see it is to rewrite it; I'll give you the geometric version, and of course the answer is, again, the singular value decomposition, because it always is somehow. I'm going to write this slightly differently. The dot product is a scalar, a vector dotted with a vector, and we square that scalar; I could just as well have written that
like this: the scalar times its own transpose, n transpose times (p_i minus p) times (p_i minus p) transpose times n. That's just the trace trick; it's exactly the same quantity, I just unrolled the square one more step. So let me write it as the sum over i of (p_i minus p)(p_i minus p) transpose; we'll call this W, the data matrix. There are no decision variables in there, that's coming straight from the point cloud. So our problem is just: minimize over n, n transpose W n, subject to the norm of n equal to one. Remember what we're doing: we're just trying to fit a plane by finding the normal vector. So let me draw that in 2D. We've got n1, n2; I want n to be a unit vector, so this circle is the unit-norm constraint. The objective n transpose W n is just a quadratic form, a positive one, because it's the sum of these nonnegative terms, so it's going to be a bowl centered at the origin with some long axes and some short axes; say the long axis is like this. These are the level sets of that bowl. I'm plotting two of the axes of a 3D picture; think of the bowl as coming up out of the plane towards you, where the axis coming out is the cost. So what's the optimal n? It's the eigenvector corresponding to the smallest eigenvalue, absolutely right. In this picture it's going to be the direction where the bowl is the most elongated, because the cost goes up fastest in one direction and most slowly in the other, the place where
it's elongated. So there are two optimal answers, plus and minus n star, and they're right there, as we expected; we didn't expect to be able to pick between them, both are optimal. So that's exactly it: n star is the unit eigenvector corresponding to the smallest eigenvalue of W. The way to estimate the normal at a point of a point cloud is just to assemble this little data matrix and take eigenvectors and eigenvalues, and you've done your normal estimation. You can do this with, say, a k-d tree as the data structure for nearest-neighbor queries: pick your 30 nearest neighbors, go through all the points in the point cloud, and compute your normals. That's what people do, and you get these funny-looking point clouds with their normal vectors all over them. And again, the way you pick between the two signs is one extra operation afterwards: my camera is over here, so I pick the normal pointing at the camera; if it was pointing the other way, flip it around, because you can't see behind objects. Yes? So this is just a more robust way to do it. If I had exactly two points, like you said, I could do that computation, but now imagine that the point clouds are noisy, or it's not a perfect plane; great question. This is saying: find the best plane that summarizes, in a least-squares sense, the 30 nearest neighbors. So it's going to be more robust to small variations in the pixels, or even, maybe the real point cloud was perfect and noise-free but there's just a corner there; this is going to be more robust to that. Great question. Super interesting thing, let me just get one more in. If I were to do almost exactly the same optimization, but I
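The estimator described above fits in a few lines of numpy. This sketch assumes the nearest neighbors have already been found (say, with a k-d tree):

```python
import numpy as np

def estimate_normal(p, neighbors):
    """Fit a plane to the neighbors of p and return its unit normal.

    Builds the data matrix W = sum_i (p_i - p)(p_i - p)^T and returns
    the eigenvector of the smallest eigenvalue, which minimizes
    n^T W n subject to ||n|| = 1.  The sign is arbitrary; flip it
    toward the camera afterwards.  (The eigenvector of the *largest*
    eigenvalue is the direction of least curvature.)
    """
    d = np.asarray(neighbors) - np.asarray(p)  # (N, 3) offsets p_i - p
    W = d.T @ d                                # the 3x3 data matrix
    eigvals, eigvecs = np.linalg.eigh(W)       # eigenvalues ascending
    return eigvecs[:, 0]                       # smallest -> normal

# Noisy samples of the plane z = 0:
rng = np.random.default_rng(0)
pts = np.column_stack([rng.uniform(-1, 1, (30, 2)),
                       1e-3 * rng.standard_normal(30)])
n = estimate_normal(pts.mean(axis=0), pts)
print(n)  # approximately +/- [0, 0, 1]
```

Note that `np.linalg.eigh` (for symmetric matrices) returns eigenvalues in ascending order, so the first column is the normal and the last column is the flattest tangent direction.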
were to maximize that over n instead, maximize the sum of squared dot products, then in the picture the answer is over here, and by the exact same computation, the direction with the maximal dot products is the direction of least curvature: it's the eigenvector corresponding to the largest eigenvalue. Yes, what's up? That's a good question: we did use RANSAC for the plane estimation of the table. That was a case where you really want to get the normal of the table right, so it was worth the extra cost of doing RANSAC. Here, where I'm just going to run this over my whole point cloud, we typically don't; I think that's a somewhat arbitrary decision about where the computation falls, and you're right, you could do better normal estimation with RANSAC in this step too. Great question. What do I mean by least curvature? So, if I were to look at the side of my Stanford bunny, I've got some point cloud here; viewed from the side it has some curvature, and there's another direction where the bunny is more curvy. I have my normal here, and I have my (p_i minus p) vectors like this; along one axis they look like this, and along the other there's more spread. The axis where the dot products align the most, where I can pick a particular vector whose dot products are highest, is the direction where the bunny is flattest, and the one that's left over is the direction where it's more curvy. And since we know this is a symmetric matrix, and symmetric matrices have orthogonal eigenvector bases, we know that even the third axis, the intermediate eigenvector,
has an interpretation as this last axis but the I think the math tells us that the dot product that is maximal is the direction that's flattest if it was completely flat then you'd expect you would just pick something they would they'd be equivalent eigenvalues okay so um let's put this all together so you can sort of Imagine then how I go through this step I'm going to look into the bin I'm going to crop my point Cloud I'm going to run this normal estimation over I'm going to get my um my normals and I'll get my curvature if I wanted you to use it I'll pull them all together into merged Point clouds and then I'm going to down sample down sampling is another one that um it's a relatively efficient algorithm the way people typically do it would they would just be to make a voxel grid over your point cloud and summarize all of the points that landed in the same voxel as a single point that would be the standard voxelized down sampling and you can keep your normals around you'll average your normals which is kind of weird but you just average your normals of all the points that landed in the same voxel but they're standard algorithms and there's libraries like open 3D that are just these you know lists of they have all mature implementations of all the the normal the best point Cloud algorithms before that there was PCL PCL wasn't maintained for a while it looks like it's being maintained again but I don't know by whom so um but but there are standard Point Cloud algorithms you know I went through we've implemented the the simple ones in Drake but Drake doesn't isn't trying to cover all these just enough that you guys don't have to install also open 3D because it was a big install that broke all the time okay so um oh yes so now this is enough for us to to do a basic um grasp selection algorithm okay here's my grasp selection algorithm at work I'm gonna load up my gripper load up my scene okay I'm just hallucinating the gripper and assuming I can fly around a little 
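(In code: Open3D's `voxel_down_sample` does this for you; the sketch below, with invented names, just illustrates the bucketing and the normal-averaging described above.)

```python
import numpy as np

def voxel_downsample(points, normals, voxel_size):
    """Summarize every point landing in the same voxel as a single point
    (the centroid of the bucket), averaging and re-normalizing the normals."""
    buckets = {}
    keys = np.floor(points / voxel_size).astype(int)
    for key, p, n in zip(map(tuple, keys), points, normals):
        buckets.setdefault(key, []).append((p, n))
    out_points, out_normals = [], []
    for entries in buckets.values():
        ps = np.array([p for p, _ in entries])
        ns = np.array([n for _, n in entries])
        out_points.append(ps.mean(axis=0))
        n_mean = ns.mean(axis=0)              # averaging normals, as in the lecture
        out_normals.append(n_mean / (np.linalg.norm(n_mean) + 1e-12))
    return np.array(out_points), np.array(out_normals)
```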
bit and I'm going to evaluate potential grasps with no knowledge of the object by doing a little bit of point cloud math so first I've gone through I've taken my point cloud and I've computed its normals and downsampled and everything like that but now for any potential nominal grasp I'm going to do another small batch of point cloud computations so first of all since I want to find antipodal grasps what I'm going to do is I'm going to take the place where my fingers are likely to touch and I'm going to crop the point cloud so that the red points are the ones that are the points of interest okay and then if I were to have both screens up but maybe that's hard right now but every time I'm doing this it's actually printing a cost for every possible candidate like this it's computing a sum of the normals that are in those red points basically it's saying I'm going to reward normals that are antipodal and pointing in the axis of my gripper and I'm going to just sum up the more normals I have that are pointing right at my gripper inside those red points the better and then I add a few more costs these are now just very ad hoc you know but I want to come down from above I don't want to pick it up from inside the table and I call it infinite cost if the gripper command collides with the point cloud like this that's infinite cost or if it collides with the bins okay but the algorithm now is just going to be so I'm doing this manually so you could see and if you run the notebook yourself you'll see in the console it's printing out the costs and whether it knows it's in collision or not and our grasp selection algorithm for the day which assumes almost nothing about the objects is I'm just going to pick a bunch of points in the point cloud put my hand around them and then just evaluate the cost I'll pick 100 of them and I'll take the best right and
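(A hedged sketch of that cost, in the candidate gripper frame with the fingers closing along x; the crop box and function names are my inventions, not the course code.)

```python
import numpy as np

def grasp_cost(points_G, normals_G, crop_lower, crop_upper, in_collision=False):
    """Score one candidate grasp. points_G / normals_G are the cloud expressed
    in the candidate gripper frame G; the crop box is the region the closing
    fingers sweep (the "red points"). Lower cost is better: normals pointing
    along the closing axis (x) are rewarded, i.e. antipodal pairs, and
    collisions or an empty crop get infinite cost."""
    if in_collision:
        return np.inf
    mask = np.all((points_G >= crop_lower) & (points_G <= crop_upper), axis=1)
    if not mask.any():
        return np.inf   # nothing between the fingers
    return -np.sum(np.abs(normals_G[mask, 0]))
```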
in this case I'm going to just take 100 of them and and plot I think it runs I forgot to run it yeah if I generate grasp candidates then there we go I got random objects falling in the bin and I just drew like the five best candidates and it looks like I didn't simulate it for it looked like the things are still falling down uh that was smart I I let it simulate for like a second it looks like it didn't quite settle and then I said find some point clouds and it does its thing okay and then I just picked the best one I go ahead and grasp wash rinse and repeat okay that's a surprisingly effective algorithm I I'm going to show you by the by the next time sort of the full uh the full version of this I'll maybe give you a preview of it now I wrote it very recently I hope it runs on this machine see how we go and I'm going to keep working on it but yeah okay so this is basically the the end-to-end demo using exactly that and it uh you know it'll go down to just using this strategy it happened to pick there on the Block it'll go over using exactly the plans we did before with my clearance height whatever it'll go drop it off in the other bin uh wash rinse and repeat just go back as soon as it goes down it looks it takes another new Point cloud chooses the the grasp and we'll add more and more logic to it to to know if it fails multiple times to give up and go to the other one that's a so that's a that is an antipodal grasp it's not the one I would have picked if I had written a better algorithm right there are limitations to the antipodal metric actually that's a great example I want to talk about the the limitations here in the last few minutes so why was that a ridiculous grasp what do you know that it doesn't know what's that the corner is not stable so it's pretty cool so it was evaluating the Reds the red points they were arbitrarily close to it was it wasn't looking like one step away from the Reds right okay so that's one thing you know is that there's a notion of 
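(The sample-100-and-take-the-best loop, as a sketch with the candidate generator and cost passed in as callables; the names are mine.)

```python
import numpy as np

def select_grasp(points, make_candidate, cost_fn, num_samples=100, seed=0):
    """The day's grasp selection: center candidate grasps on random points of
    the cloud, score each one, and keep the cheapest."""
    rng = np.random.default_rng(seed)
    best_cost, best_grasp = np.inf, None
    for _ in range(num_samples):
        center = points[rng.integers(len(points))]
        candidate = make_candidate(center)
        c = cost_fn(candidate)
        if c < best_cost:
            best_cost, best_grasp = c, candidate
    return best_grasp, best_cost
```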
robustness that isn't captured by that metric what's another thing torque center of mass right you know like you should pick objects around their center of mass because it'll be able to resist more torque this is oops I ran into the bin that'll be fixed by Tuesday yeah good but yeah so it has no notion of mass it doesn't know what the objects are right so it has no concept that can do that and it ran into the camera all right let me just call this a working project oh look at that that'll be better by Tuesday good so to what extent and this is actually great so there's a couple ways that what I've said was inconsistent with the fact that I just said you should know where the center of mass is right so one was that yes I think where you picked the object up is not going to affect the motion of the arm because of the gearbox thing right so you're absolutely right I said the reflected inertia of the gearboxes can dominate the mass so that robot's going to move mostly the same whether it's picked up a red brick two red bricks whatever okay whether the red brick stays nicely inside the hand is about whether the gravitational wrench is inside the friction cone that you've achieved now there's one more point which is that I'm actually commanding that gripper to have a particular amount of gripping force so it's not actually an infinite friction cone it has limits on how much force it's going to apply partly because I don't want to crush this you know crush Spock right that's what I was trying to say yeah if I don't know anything about an object then maybe grabbing it with infinite forces isn't a good idea and the hand isn't capable of infinite force so that is also a way that the mass could matter more than the clean analysis we've done would suggest great example great question yes correct spot on so he says so if there's two there's no sense in which it knows
the two separate point clouds are connected yeah so I have an example of that was that your point yeah yes so what are some limitations of only using geometry and actually did you see that one right there that was exactly your case let me start over again there's two objects there and they happen to have their backs kind of close and they had antipodal grasps like that but it wasn't the same object it just got it wrong and it had no reason to it didn't think about that it found antipodal grasps they seemed about the right width apart and went in for the squeeze so people call those double picks for instance for exactly that reason it also will do things like if you'd run this long enough and you throw things in the bin you'll see it do you know what we saw in the red brick but you see it in more ridiculous cases like pick up a hammer or something you know by the edge right it has no concept of center of mass it has no concept of object so it's not great but it's surprisingly good right the other I'd say another big limitation of the approach as I've advertised it here and I think a place where the deep learning versions really do outperform is on partial views right so my point cloud is only going to see what my cameras can immediately see if there's a back to the object that was occluded or I didn't have a camera over here then I've lost my ability to reason about that antipodal grasp if I were to run the algorithm a bunch of times and just train a network to predict this it could in some sense hallucinate the back of the object right and so I think these partial views are one of the big places where the deep learning based approaches do outperform this but you know you get pretty far with just a little bit of geometry processing transparent objects there's a couple other cases where I think this doesn't work well but it's surprisingly good cool um does
that make sense any questions any more questions about that so the weak part which is the part I didn't I didn't code up yet well was is the higher level logic and that's what we'll talk about on Thursday because it's some or Tuesday no not Tuesday it is Thursday because we have no class next Tuesday um which says lots of time for me to fix the demo uh but at some point when you start writing this higher level logic of like okay I picked it three times it's time to go try the other bin we need more Machinery to start dealing with that and we'll do it next week
Robotic Manipulation, Fall 2022. Lecture 10: Programming Tasks
my part was correct after a few minutes now we'll see if it's useful that's the question I'm going to look at everybody's faces and be sad if nobody seems to be impressed or informed okay 2:35 let's do it so today is part three of our little segment on manipulation in clutter and I have a bit of an agenda today just because we have a couple things I want to start by just talking about final projects for a second since it's starting to be project time when I was getting good questions at the end of the lecture last time I was like geez if I just made a 3D visualization this would all be clear so I made a 3D visualization after last time and we'll see if that lands a little better I'll just spend a few minutes trying to say what I said at the end last time better today but most of today is about authoring more complex tasks you know this clutter clearing kind of example of doing bin picking that we've talked about that's the part I haven't given you any tools for yet and we'll talk through the basic tools that we use and give you a preview I'm not going to give a full description of PDDL if you guys know PDDL the planning language but I want you to understand where it fits in the context and we'll talk a bit more about state machines and the like okay so final projects I'm happy to discuss some of you have been coming with ideas already that's great some of you've been posting on Piazza or sending emails that's all great if you reach out to us but it is time to be thinking seriously about your projects so a week from Friday will be the first draft of the project proposals the goal of that the reason it's a first draft is not because well the staff will actually give you feedback on that part of it but for me it's a chance for you to tell us what you're thinking and us to give you serious feedback on the scope
of the project you know maybe you're saying that you know the thing you think is going to be easy is actually going to be really hard is a common thing that I end up saying or maybe you could do a little bit more if you've got multiple people we can try to we want to try to understand if you have a plan for how the different people are going to be able to collaborate these are things that you know we've seen enough projects I've seen enough projects go by that I can give some feedback like that so the more time you put or the more thought you get into your first draft the more feedback you get right so so please take that as an opportunity and then you know we have a a future draft a final sort of proposal just to kind of lock it in and oftentimes if your pre-proposal was was great then you just say I'm good we'll be clear about whether you should resubmit this same thing or not that's a common thing is that people have worked out what they want to do and we just say you're good to go but if you give you some feedback saying try to increase the scope decrease the scope or did you think about this then we'll have one more chance to revise that there are rubrics on the website about how it should relate to your research or not in general it's great if it relates to your research but if it is exactly your research with no changes then that's not so good so the the goal is I mean the dream for me is to say like I'm working on this thing in research it's kind of related to the class if I hadn't taken this class I would have never tried this new idea that I learned in class right and it completely complements your research it could use the same robot simulation you know it could be a it should be something about manipulation please um but uh but think broadly about that and if you're not um you know doing research in this area that's fine too and we should have a lot of project ideas we there's project ideas already on the website my goal is actually to just take one 
more pass on that put a couple new ideas in and then we'll post on Piazza saying hey take a look if you if you're still looking there's nothing wrong with the list up there I just haven't put a few last few ideas that we we could put up there for this term and I'll do that tonight or tomorrow any questions about the the project there's links oh yeah please you can do either an individual or a team you can do teams across undergrad and grad that was a good question we got just as you understand the logistics uh you know they have different requirements at different times so um but we're fine with that we're happy to deal with that and you can do you can do more with a with a team of course but you should have some sort of a plan for how they how the multiple people will interact uh no bigger than three is what we've been saying yeah two is very common three is fine if you can have a good plan any other questions I love the projects it's it's the best right so it's partly because you get to go with some idea a lot deeper than we can possibly give you in lecture or in a problem set but also because you guys come up with such good stuff and it's like the most rewarding for me and I learned a bunch from you guys from seeing what worked what didn't work what you tried maybe you tried to implement a paper and we learned I get to learn about the paper by your implementations so I really really enjoy it and I hope you do too yes absolutely yeah so there's actually a list on the on the project page on the website I almost want to put it in big bold but there's a list like the last bullet says here's some projects from previous terms and I have a list of some of the I mean they're the ones I I put up are the ones that people gave me permission to put up and there's and there were some of the ones that were the most successful in some ways so they're not like every project that has come out that that it was an a you know is on that list but it gives you a sense of some of the 
things that people did were really amazing so take a look at that if you still have questions I'm happy to try to give more examples okay so last time I was drawing Force diagrams I was like trying to use my thumb coming out of the the Blackboard and um I really wanted to have a little bit better picture for you so I went and made that over this is what I did with the long weekend you'll also see the State machines I came up with too but let me try again just to say the friction cone Force cone contact wrench story with a little bit better pictures okay so we talked about this the the forces I'm going to just do it on slides because it's mostly what I wrote on the board last time and it'll go faster I if I don't write it everything again but we talked about we talked a lot about forces and in general the spatial Force the generalization is of both the torque plus a force right so we said uh you know torque plus a force this is a applied at some point in space you can name it it can have an expressed in frame okay you can add them all the there's there's the spatial algebra of spatial forces right which which sort of fits in but the most um important one that came up was that there's this um if you're considering a force applied on a body I'll draw just the simple version of this to foreshadow my animation here right I have a a force being applied directly at a body it's kind of weird if you might think it's weird to think about you know how can I apply a torque right that's that's that's not why the torques appear maybe at every point that there's a force being applied it might be just a 3D force with no torque immediately but I can summarize the result of this Force at any point on the body and if I want to think about its effect as if there was something applied at this point on the body then I can write this as a force plus a torque okay and if I want to come think about the net effect of multiple forces applied to multiple bodies I can move them to the same 
point of reference sum them up okay so the way that you move between thinking about a particular Force being applied at a particular point on the body to a different point on the body the torques come from the cross product which is something you know from physics from intro physics kind of stuff right so the cross product of this Vector with the force gives me the torque okay and then we talked about please ask any questions if you if you have them I know that's I said that last time and then we drew these diagrams and we talked about the friction cone and I got a couple good questions after just like you know what is going on with the cone like what is why is it not just one force why can't the world decide and pick one force right and the world will of course decide and pick one force but the the laws of friction that we describe are saying that friction will do whatever it needs to to resist motion but it'll pick one element from this set which is the range of possible forces they all live in that cone right that's the way to think about this friction cone the rules of friction say I will pick whatever Force inside that cone that will keep me from moving at all relative to the point that it's being applied and if it and then if it's if it is moving if it's just unable to completely stop motion it will dissipate energy as much as possible so for as an abstraction to think about the things that friction could do to me it's useful to think about a whole cone of possible possible forces and if I say there exists a force inside the cone then I know friction has the ability to stop my motion right so it's a little bit weird to say I'm not picking a force right up off the bat I'm going to think about all the things that friction could possibly do in order to resist to resist motion and that's what the friction cone is okay get into my Innovation here okay so yeah I guess I answered the check yourself that was that's a flood um the interesting question then is if you 
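(In code, the shift-and-sum algebra is just a cross product and a loop; a sketch with the frame bookkeeping elided and names of my own choosing.)

```python
import numpy as np

def shift_wrench(torque, force, r):
    """Re-express a wrench applied at point q as an equivalent wrench about a
    new point p, where r = q - p (all in one common frame). The force is
    unchanged; the torque picks up the cross product r x f."""
    return torque + np.cross(r, force), force

def net_wrench(forces, application_points, reference_point):
    """Net effect of several pure contact forces, summed about one reference."""
    tau = np.zeros(3)
    f_net = np.zeros(3)
    for f, q in zip(forces, application_points):
        t_shifted, f_shifted = shift_wrench(np.zeros(3), f, q - reference_point)
        tau += t_shifted
        f_net += f_shifted
    return tau, f_net
```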
think about the set of possible things that friction could do to you to resist your motion and you think about the the basic algebra of moving forces applying cross products the algebra given that the points of contact are all fixed is all linear operators these these equations if p is known you're not moving the point of contact then the way that forces change through these equations is always linear and if you take up if you replace a particular force with a whole cone and you ask how a matrix changes the cone it turns out it stays a cone and that's the magic of what I was trying to talk about last time but I I didn't have the animation to help me okay I'll write the I didn't write this on the board last time but I'm putting it into the notes more carefully the point is that you can think about shifting so the friction cone right at the point of application you might say there's no torsion there's no torque at the point of application and the cone at the point of application is just defined by the X Y components in my contact frame being less than the the coefficient of friction times my Z component that's what defined my cone that's just the friction cone definition if I want to shift that whole cone right if I want to shift this whole cone of possibilities to here Then I then I can just I can apply my same same linear operators of just shifting the force and then getting a cross product to that whole cone and that's what I'm going to animate now Okay so okay here's my little animation all right so I've got a box I'm not even gravity it's just a box stuck to the world okay and I'm applying a point on the on this box the green is the immediate friction cone and the red is the friction cone if I thought of it from the point about as if the point of application was the center of the body is that clear now here's the weird thing is right the the friction cone has um six elements so I can so how do I how do I animate Six Degrees of six you know a cone and six degrees 
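(The cone membership test itself is one line; a sketch, with the small tolerance my own addition, that also checks closure under positive scaling, which is the property that linear maps of the cone preserve.)

```python
import numpy as np

def in_friction_cone(f_C, mu):
    """Coulomb friction cone in the contact frame C (z along the normal):
    the tangential magnitude must not exceed mu times the normal force."""
    fx, fy, fz = f_C
    return fz >= 0 and np.hypot(fx, fy) <= mu * fz + 1e-12
```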
of freedom I'm going to just cheat a little bit right and there's a place where it's already going to be misleading I'm going to draw this cone the three XYZ cone I'm also going to drive draw the torque cone as if it was independent but it's not actually independent so it's like the projection of the six-dimensional cone into two cones I think that'll be clear in my picture but that's what's happening is I'm going to draw two cones to represent the six-dimensional space okay and then I can move the body around and what you see here is that no matter where I apply the forces on this body I still get the same x y z component of that Force at the body that doesn't change what changes is this is the torsional component as I move depending on where I move around the body I get a different torque a different wrench that's representing my cone and it's a funny shape right it's a it's it's low dimensional you have to look at it from here to see what's happening why is it low dimensional well we know every element in that set has to be orthogonal to the cross product it has to be orthogonal to the line from here to here you can't produce torque except for or you know orthogonal to that line and as I move out further away right I'll get this elongated code I'm drawing a truncated cone the real cone if there's no limits would go on forever but in those directions does that picture help at all oh man does that picture help at all right so why is it torque over here now is that if I'm applying a force that goes that goes here then my friction cone should be in is that I got the right direction I better get the right direction I have I can only resist Motion in one axis right I resist motion like that yes that's correct so so if I were to come up to this body and apply a perturbation and you ask what can that frictional force resist if the perturbation torque I applied is inside that cone then friction can stop me so if I apply a torque that's pushing me into the finger it can 
it can stop me but there's a whole other direction if I applied a torque in the other direction where I'm just going to move away from the finger the finger won't stop me that's why it's one-sided right point I wanted to make last time was that the antipodal grasping is a good strategy I think with this picture I can I can land that idea I'm not as confident now as I was a few minutes ago but um but let's try I need a second finger okay two fingers now okay I can move them around okay if they're down here um what happens is that the friction the torsional effect that that friction can have they're both in the same direction right that's not gonna there's a whole motion that I can't resist whatsoever but if I move it up and I'm more antipodal and something beautiful happens so not only are they going in opposite directions that's good but because there's a they're coming from a different cross product they span a different space right so this one can accommodate wrenches that are in that plane and this one can accommodate wrenches that are in that plane it makes a nice little butterfly kind of looking thing the rules of now saying can I find uh something that resists me in either of those forces are using what's What's called the minkowski sum if you're trying to ask is there if I apply a new wrench can the sum of the two wrenches resist my motion then that's equivalent to asking if the if the wrench is in the minkowski sum of those two and now that these are spanning the space I have a nice strong wrench that I can resist with okay so maybe you're totally on board now and that would be awesome but at very least I think when things line up they look pretty right I mean they they look like they're covering a lot of space right they're they are it's an effective grasp and that it's different if I move them off to the side but the story is still sort of the same right so as a general strategy you know of all if all things created equal I'd like to be nicely aligned 
around the center of mass because then there's no gravitational torque to resist whatsoever and I only have to worry about external forces but if I don't know much about the object what we did last time was we said let's just go ahead and pick antipodal grasps because without knowing more that's a pretty good strategy yeah so because of the Minkowski sum thing it's like the one blue cone dragged across the other blue thing exactly does that make it crisp that's what I want that's the picture I want the details are just a little bit more subtle so the question was let me repeat so the Minkowski sum you should think about as taking one element from the first set and then applying the entire second set to every element of the first set so it's like you can drag it around so take this cone here and apply it to every possible force in there and so yes it looks from this picture like you can create the whole space the only thing that's a little bit misleading is because I've done the projection of the six dimensional space into 3D you don't get to independently pick XYZ and torque so you have to be a little bit more careful about saying that but to first order absolutely the picture I want you to have is that the Minkowski sum of that resists all torques yeah pure torques I can say with confidence right from this picture you can resist pure torques all pure torques yes right so think about how that would happen right the reason is that you can produce arbitrarily large torque as a function of your normal force it means you'd have to squeeze harder but if you're willing to squeeze harder then under this friction law you can resist all of those torques I think the difference about spanning space is I think the coupling becomes more important right I think the optimality of antipodal grasps maybe isn't completely visible in this
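(A worked instance of the squeeze-harder argument, as a hypothetical construction: two antipodal point contacts on the x axis, and forces inside each friction cone whose net wrench is a requested pure torque about z. The geometry and names are mine, not the lecture's code.)

```python
import numpy as np

def antipodal_squeeze_for_torque(tau_z, mu, half_width=1.0):
    """Two contacts at (+w, 0, 0) and (-w, 0, 0) with normals pointing into
    the object (-x and +x). Returns per-contact forces and the minimum normal
    (squeeze) force N so that both forces lie in their friction cones while
    the net wrench about the origin is the pure torque (0, 0, tau_z)."""
    t = tau_z / (2.0 * half_width)   # tangential (y) force per finger
    N = abs(t) / mu                  # squeeze needed to keep |t| <= mu * N
    f1 = np.array([-N, t, 0.0])      # applied at (+half_width, 0, 0)
    f2 = np.array([N, -t, 0.0])      # applied at (-half_width, 0, 0)
    return f1, f2, N
```

The larger the torque you want to resist, the larger N gets: that is the "willing to squeeze harder" trade-off in one line.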
projection if you think about only expanding the spaces but maybe the magnitude already tells you something that it could be good okay it's a little better maybe but that's that's this wrench visualization you can play with it I mean it's it's uh I pushed it to the deep note so you can play with that and and uh tell me if it makes sense or doesn't make sense I want to hear it was worth it there was a weird I spent like way too much time making this animation because uh there was a bug in in or not a bug but a known limitation in the way the webgl works in the browser I was like I'm sure that my math was wrong for hours and then it was it was not my fault directly anyways so you better have learned something from that okay and the the minkowski sum is sort of the the beautiful part of that okay um so let me tell you now about back to the Clutter clearing so um we were talking about this project you know with the robot moving things back and forth that's kind of our short-term goal is just be you should be able to program a robot to do stuff like this with no knowledge of the objects with dense clutter right and to do its thing so that's the other thing I did hopefully it'll load in a reasonable amount of time here is I finished that example remember how I showed you my halfway example and it ran directly into the camera and it dropped the thing and you guys all laughed um you made a big file so let me I can run it locally too okay I'll run it locally instead it's taking a long time to load I made a Big File partly because I replaced the the red bricks with the other ycb objects I thought that would be a little bit more interesting but those stupid meshes are so big or the the texture maps are so big that it takes a long time for it to come in the browser when I'm in campus Wi-Fi okay here goes random ycb objects thrown in a bin now it's going to start it's going to do our whole pipeline right so there's cameras here they are going to do the point Cloud processing it 
actually it waited for a second for the initial conditions to fall it's going to then take the point clouds do some point cloud processing find the normals take random samples do the antipodal grasp selection it tries 100 random grasps it picks the best one by our little grasp metric right and then it does everything we've talked about so far it makes a simple plan of the gripper frames it interpolates that into a piecewise pose trajectory it then runs differential IK this is the whole tool chain okay and it's going to do its thing it's going to pick up and slowly move all the objects it will still fail every once in a while but it's pretty good and actually the failures are to the point now where I think they're pedagogically interesting right it's not like I didn't spend enough time with the code it's like we haven't done motion planning yet and so we need a better tool for this and this will go all day long every time you run it it'll have different initial conditions it also has some fairly not sophisticated but basic recovery maneuvers so sometimes it'll pick remember the antipodal grasps are just a heuristic sometimes it'll pick right on the edge of a soup can or something like this and see that was kind of a double pick that it started to do right there it rotated it didn't drop this one but if it does fail to grasp it'll lift up it'll realize it failed to grasp it'll go back down right it also you know when it can't find a grasp I'm going to write all this down when it can't find the grasp in the first bin it'll transition to the second bin and start picking from the other bin um oh see look it stopped and it realized that it did that and it's going back to pick a different thing right that extra level of robustness makes this something but it knew it see it realized it I think yeah come on and try it again see that was one where
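(The tool chain just listed, reduced to a control-flow skeleton with each stage passed in as a callable; nothing here is Drake-specific, and all the names are invented.)

```python
import numpy as np

def pick_once(merge_clouds, estimate_normals, downsample, sample_grasps,
              grasp_cost, num_candidates=100):
    """One bin-picking iteration: merged point cloud -> normals -> downsample
    -> score candidate grasps -> return the best one, or None if every
    candidate had infinite cost (the caller can then switch bins)."""
    cloud = merge_clouds()
    normals = estimate_normals(cloud)
    cloud, normals = downsample(cloud, normals)
    candidates = sample_grasps(cloud, normals, num_candidates)
    costs = np.array([grasp_cost(g) for g in candidates])
    best = int(np.argmin(costs))
    return None if np.isinf(costs[best]) else candidates[best]
```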
That was one where I was like, you know, that failed, but that's a lesson, right? And it'll keep going, to the point where I just let it run at night. The funny thing was all the little things you do to make it robust. For instance, I actually put an invisible floor underneath there. Why? Because every once in a while it does drop objects, just like the real robot did, and if they fall to negative infinity, that makes my contact solver fail; it's numerically bad to have a negative infinity next to a number around zero. So now there's a floor, and objects can only fall to, like, minus one meter. And then, because there are soup cans, I had to put lips on the floor, because they would fall to the floor, roll off the side, and then fall to negative infinity. All the little things got figured out, and now it will just run all night, and with some probability, depending on how long you sleep, you'll come back and many of the things will be on the floor, but it didn't crash and it kept going.

The other interesting one, which I kind of hope we see but have mixed feelings about, is that differential IK can get itself into a bad state. See, it didn't try to grasp there; the hand is unable to get where it's trying to grasp, because differential IK is commanding one thing and getting something different. But look: it'll try, I think, five times, and then it'll give up and say, "I'm done with this bin, I'm going to go do the other bin." Maybe this is five... see? That's like intelligence right there.

There is one other super interesting thing it does. Because diff IK is not thinking about the joint constraints of the robot, it can choose a trajectory that goes very close to the base, and then the arm starts getting a little funky in the Jacobian. It gets itself tangled up with just a diff-IK view of the world: it's only thinking about moving a hand through the world, and that's an impoverished view of what the robot has to do. So the last thing I had to do to make it nice and robust is this: if it got itself all tangled up, if the tracking error of the hand compared to the commanded hand got large, then it would say "okay, I give up," turn off differential IK, turn on joint control mode, and just return to its comfortable home position and start again. Things got bad; I've got a safe home pose, and I'll come back in again.

Yes, this is all running in real time right now, on this laptop. So that's 1x. The only thing that would be different on the real robot is that physics would definitely run at real speed; here it just happens to run at real-time speed. The limitations of the motion planning are partly because it's simple. I could spend more time making motion plans, and then you might see the famous robot pauses, where it stops to think about a motion plan and then moves, which the community is getting better at and which we should aspire to not have. But there are no pauses here. Okay, there is one little thing that cheats: in this version of the simulation, the simulation clock can stop while I think. So if it looks like it's frozen and even the physics is frozen, that's because it was thinking for a long time. But I don't think there's any computation that's really substantial, and I'm simulating at a pretty conservative simulation dt, just because I wanted nothing to blow up. It can crash (I will list the failure modes in a little bit), but really I've only seen it crash at the initial conditions: if my random initial bin arrangement puts things in deep penetration, it could potentially blow up.

What's stopping this from running faster? I could make the trajectories faster; I chose relatively slow trajectories. If I did that, then, yes, that's a great question: there are joint velocity limits on the robot that would prevent it. A lot of our factory robots (this was originally a factory robot) are geared to move fairly quickly, but not super fast; they're not supposed to be throwing things across the room, though some robots are. So on the iiwa in particular you can hit the joint velocity limits. If I just used this differential IK controller, I would also worry, as I got close to those limits, that my tracking error might increase, so my pick success would drop or I'd run into things more. A better controller at the lower level, or tighter gains if everything's stable, could make it run faster. I hope to show you some beautiful motions on the iiwas in hardware soon.

Before we get too far into the class: you know the algorithm. It looks at the point cloud (it doesn't know anything about the objects), finds antipodal grasps from the normals, runs my scoring function, and takes the best antipodal grasp by score. So it could do anything, and there's actually a pretty funny failure case. If it drops the object immediately back down in the bin, it backs off, but not enough (I could easily have fixed this, but I didn't), and then the hand is still in the camera view, and it sees a point cloud with some really appealing antipodal grasps on it. So it starts grasping at thin air, trying to pick itself. But after that it gets out of the way, so it's just a one-time snafu, and I let it go.

Since I know where the bins are in this application, I feel like it's not cheating to just crop the point cloud to the interior of the bin, so we won't find grasps on the hand. Although, that one object in the corner is just going to hit me every single time in this run, because it's never going to move, and every time it's going to be the last one it tries to pick; every time it's finished with the X bin, it's going to try this five or six times. On the real robot, we did something where we kind of pushed objects out of the corner; that was a pretty good way. The other thing we did, to your point, was to pick up the bin and shake it. That was another good way to get things out of the corner, but I didn't let it do that here.

So you have the entire toolkit for this, except for the high-level recovery-type planning, and I want to talk about that now. (Lesson learned: I won't put an enormous file in the middle of my presentation. I'm just going to leave it there, even though I'm not quite ready for it, so that the browser stays responsive.) The way that I programmed that was with a state machine (there's my chalk over here), which is the simplest type of programming at the task level. So I want to talk a bit about programming at the task level, as people often call it: not the individual motions. As opposed to motion planning, task planning is the high-level question of what I should do, and in what order, to accomplish my long-term goals. Should I be picking out of the bin on the x-axis or the bin on the y-axis? Did I drop something, and do I have to stop and do a recovery? These are the high-level plans.

The simplest approach we'll talk about is just writing a finite state machine. You've seen these in an algorithms class; they're called FSMs, and a theory-of-computation class thinks about state machines and what they can describe. This demo has four states. It would probably be better if it had more, but I implemented four and got pretty far with that. The first one is just waiting for objects to settle, my warm-up phase, because remember, I initialize the objects in the sky and they fall down. Until I put this mode in, it would take the point cloud while things were still falling and then try to pick the objects out of the sky, but by the time it got there they were long gone. That was ridiculous, so I added an initial setup phase where it just waits for a second or two. Then there's a mode where I'm picking from the X bin, which is just what I'm calling the bin on the positive x-axis because I didn't have a left or right, and obviously a different mode for picking from the Y bin, which is actually on the negative y-axis. And then I have this go-home mode, for when things get bad: it shakes itself out of problems and does the right thing most of the time.

I can think about how those modes interact by drawing a simple state machine diagram. Technically there's one more state, which would be "done." I didn't call it a state because I just used an assert statement: if it tries like six times and there's no place to pick, it just asserts failure. But I've got my wait, my pick-from-X, my pick-from-Y, and my go-home. It's useful to organize the behaviors of the robot in a graph-type structure. From waiting, after the objects have fallen, I allow it to transition to either of the two picking states; it prefers to go to pick-X (I'll tell you the conditions in a second). Certainly, if I'm picking from X and I've decided there's nothing left to pick, I can transition to picking Y; if I'm picking from the Y bin and can't find any grasps, I'll switch back to picking from X. My logic is basically deciding: what would I do to continue here, and do I need to transition to the other mode? Similarly, from either of those states, if I get into large tracking error, I can just give up and go home; hopefully that doesn't happen too often. Go-home technically could go to pick-Y if there was nothing in X, but really it goes to pick-X and transitions from there; that's how I coded it.

So this edge here: what are the conditions on it? The condition is, am I done waiting (more than one second has elapsed), and have I found a grasp in the X bin? This is simple stuff, but that's the logic I encoded on that edge, effectively. Similarly, I transition from pick-X to pick-Y if I can't find any more grasps in X and I do find a grasp in Y; that way we'll repeatedly pick from Y until it's roughly done, staying in that state. If I had no internal mode logic, it might pick from X and then just find X again next time, and it would be kind of annoying if it just moved one object back and forth. I want some persistence: pick from X until you're done, then pick from Y until you're done. So I make these edges not transition until you've failed to find a grasp; that's the "no grasp" condition. This turns out to be a powerful methodology. It's a very simple version of the basic state machine machinery, but you'd be surprised how many robots you've seen out there are probably using something very much like this to program the task level.

The reason you have to think about it this way: we're used to writing procedural code. Do this, then do this, then do this; for loops, while loops. That's the way we're all most used to writing code. You don't get to do that if you are a robot, or in a simulation, where you have to have an answer, an action you must output, at every time step. You can't be somewhere in the middle of a while loop doing your thing; you must have an answer at every time. That's what changes the programming paradigm from pure procedural logic to something organized more like a state machine, or behavior trees, or something like this: the requirement that you must say what to do at every time step. You're not allowed to go off and think forever.

So how do you implement that as a system? Because ultimately we're going to write our planner as a system, and it's not that hard to do. Actually, I'm very interested: maybe at the end of the lecture I'll ask what you think would help you the most in your projects; maybe you could even tell me in your project proposals. I've always felt that for low-level control, motion planning, even perception, we've given you a lot of tools, but people don't feel limited in their projects by this task level, and I haven't given you lots of tools for it.
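The four states and the guarded transitions sketched on the board can be written down almost directly. The guard conditions below are simplified guesses at the lecture's logic, not the actual notebook code, and the one-second settle time is illustrative:

```python
from enum import Enum, auto

class PlannerState(Enum):
    WAIT_FOR_OBJECTS_TO_SETTLE = auto()
    PICKING_FROM_X_BIN = auto()
    PICKING_FROM_Y_BIN = auto()
    GO_HOME = auto()

def next_state(state, t, found_grasp_x, found_grasp_y, tracking_error_large):
    """One step of the task-level transition logic."""
    picking = (PlannerState.PICKING_FROM_X_BIN, PlannerState.PICKING_FROM_Y_BIN)
    if tracking_error_large and state in picking:
        return PlannerState.GO_HOME            # give up, shake it out at home
    if state == PlannerState.WAIT_FOR_OBJECTS_TO_SETTLE:
        if t > 1.0 and found_grasp_x:          # done waiting; prefer the X bin
            return PlannerState.PICKING_FROM_X_BIN
        if t > 1.0 and found_grasp_y:
            return PlannerState.PICKING_FROM_Y_BIN
    elif state == PlannerState.PICKING_FROM_X_BIN:
        if not found_grasp_x and found_grasp_y:  # persist until no grasp left
            return PlannerState.PICKING_FROM_Y_BIN
    elif state == PlannerState.PICKING_FROM_Y_BIN:
        if not found_grasp_y and found_grasp_x:
            return PlannerState.PICKING_FROM_X_BIN
    elif state == PlannerState.GO_HOME:
        return PlannerState.PICKING_FROM_X_BIN   # recovered; resume with X
    return state  # default: self-transition, keep doing what you're doing
```

Note that the default case returns the current state unchanged: that self-transition is exactly the "persistence" property, so the robot keeps emptying one bin instead of ping-ponging between them.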
You could tell me what is most useful. But the simplest version of these state-machine diagrams is very easy to write in the systems framework. There are more advanced versions that do event detection to decide when to transition; let's leave that out for now. Let's just say that every time step dt, I wake up in one of these modes, take whatever action is defined there, send it to my output port, and decide whether to transition to the next mode for the next time step. That's a simple discrete-time difference-equation-type system. It happens to have potentially complicated logic in the conditions for changing state, so it's not a linear dynamical system, but it's a perfectly reasonable conditional difference equation. You do that by just telling the system to update itself periodically; in that example I updated every 0.1 seconds. The simulator is taking time steps of about a millisecond, so the physics updates very often, but the planner only updates ten times a second, because why would you need to decide things like whether you dropped something at a hundred or even a thousand hertz?

So basically I say: call my update function every 0.1 seconds. Then I declare that I have some state. I used a Python enum just to make it readable, and if you want to use a Python enum as your state, we call that an abstract-value state. I can tell you why it's called abstract value; maybe you care. It's a programming paradigm called type erasure. Basically, I want to be able to pass things through the systems framework without the framework understanding the type; think of converting it to something like a void* pointer. If I take any kind of type and want to turn it into something I can reason about in the systems framework, I can make it an AbstractValue. It erases the type and passes it through, and the readers know how to put the type back on. That's what making an AbstractValue and getting its value do: erase the type, pass it through, then add the type back. So if it looks weird in the code, it's doing something clever but standard.

Okay, so I just say I've got a state here, of this planner-state type, whose initial value is wait-for-objects-to-settle. Simple. Then in my update I can get it out of my context. That declares something that lives in the context which has an erased type; I don't even know what it is, but the context can reason about it with the type-erasure idea. I get it back at the beginning of my update from my context, and I write whatever my new mode decision was back to my context at the end. In the middle I can do all kinds of logic about which state to go to next and what my actions are during that state. That's the basic way you write a difference equation over some abstract type, like the mode I'm in. Do people like it when I talk about that, or is this "not real robotics"? Okay, good.

The other piece that makes this work: when I transition from wait to pick, or into pick-X, pick-Y, or even go-home, I call my planner only on the transition, not while I'm inside the state. On the way in, I decide, from where I am right now, how to go down and pick and place. It uses exactly the stuff you already know; I pulled it right out of the previous notebook. Make gripper frames, make gripper pose trajectory, make gripper command trajectory, right out of the existing examples. But I want to save that plan so I don't have to recompute it, so I add it to my context too: I take the whole plan, a piecewise pose or a piecewise polynomial, and register that with the context as well. That way it's available while I'm inside the state; I can pull up my plan and just evaluate the trajectory at the current time. That's the architecture. You can imagine assembling pretty complicated things: I just write if statements to implement the state machine logic, make my plans, and declare whatever internal state these things need within the systems framework.

Then, whenever you can be more modular in the code, pull things out, so you can reuse them in the next example. The grasp-selection algorithm is pulled out into its own system. It expects three cameras to tell it the point clouds, and I use two copies of it, one for the Y bin and one for the X bin. I just tell one of them, "you're going to use cameras zero through two," and the other, "you're going to use cameras three, four, five." Body poses is one of the outputs of the manipulation station, which roughly tells me where my gripper is; it's the internal state estimate of the manipulation station. The grasp selector runs the sampling algorithm and outputs the best cost and the best pose; if that cost is infinite, I consider the plan to have failed. Now, the cool thing is that the systems framework tries to optimize some of these operations for you. If nobody asks for the output of the grasp selector, it doesn't do any computation; it only computes when you say "give me your output." So the algorithm here just says sample 100 points.
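The replan-on-transition pattern just described (wake up every 0.1 seconds, always emit a command, recompute the plan only when the mode changes, and cache the resulting trajectory as state) can be sketched without Drake at all. This is a plain-Python stand-in for the Context/AbstractValue machinery, with a made-up one-dimensional "trajectory"; none of these names are Drake API.

```python
class TaskPlanner:
    """Minimal stand-in for the pattern above: in Drake, both the mode
    and the cached plan would live as abstract state in the Context."""

    def __init__(self, period=0.1):
        self.period = period
        self.mode = "wait"
        self.plan = None  # cached trajectory, set only on transitions

    def _make_plan(self, t0):
        # Stand-in for make_gripper_frames / make_gripper_pose_trajectory:
        # a trivial 1-D "trajectory" ramping from 0 to 1 over 2 seconds.
        return lambda t: min(max((t - t0) / 2.0, 0.0), 1.0)

    def update(self, t):
        """Called every `period` seconds; must always return a command."""
        if self.mode == "wait" and t > 1.0:
            self.mode = "pick"
            self.plan = self._make_plan(t)  # replan only on the transition
        if self.plan is None:
            return 0.0                      # hold still while waiting
        return self.plan(t)                 # otherwise just evaluate the plan
```

The key property is that `update` returns an answer at every call, even mid-plan, which is exactly the constraint that rules out purely procedural code.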
But it's not running most of the time; it only runs when I transition in the planner, or when I decide to transition and ask "what was your grasp?" So there's just a mental switch of asking, how do you write code that gives an answer every time? State machines are a way to think about that, but otherwise it's pretty standard stuff. The planner has a bigger job. The X-bin grasp and Y-bin grasp come in from the two grasp selectors, but it also has to know where the arm is and the current gripper state; one input is the gripper pose and another is the gripper open/close. It makes its plan at the transitions; after that it looks at what time it is, looks at the plan it has already saved, pulls out X_WG (the commanded gripper pose), and hands that down to differential IK. The other three outputs are just there so you can stop using diff IK and go home if you need to. I have to be able to switch between using diff IK or not downstream (that's my control mode), I have to send the iiwa a position trajectory when I'm doing that, and I also have to tell diff IK to stop trying to integrate and just look at the current state of the robot. Otherwise, if you turned diff IK off for a while and then turned it back on, it would have no idea where you are and would command something completely wrong. So you have to be able to say, "hey, diff IK, look at your current state; don't try to integrate forward." Those are details, and I was borderline about whether to even put them in, but I think they add so much robustness, and exercise the ideas a little more, that the example is useful.

Yes. So, as you were asking your question, I was anticipating it and realized I forgot the self-transitions in the diagram, but you asked something different, so I will ignore that arrow. The placement is, well, hard-coded in structure. Just to make the demo interesting, I pick a random point in the bin to put the object down, and it goes through a stereotyped gripper trajectory; this is the make-gripper-trajectory code, where I just have keyframes. Every time I'm going to put something down, I pick a random place somewhere in the box, and then I do the pre-place and post-place moves to set it down. Every once in a while you'll see something ridiculous, like it stacking mustard bottles, because it randomly picked the same place a couple of times in a row, and that wasn't a good idea. But if that's a tall mustard-bottle tower, it'll tend to knock it over the next time, and randomness has been restored; entropy is restored. Good. I want you to feel like you can do this. You can pull this example up and adapt it; I want that to be available for you, because you really can author pretty complicated things pretty quickly, and people do.

So let me say my failure modes first. I made a list as I was watching it of what actually makes it fail. I could burn down any one of these and make it never happen, but I thought it's interesting at this stage of the class to say what's actually failing. The first one is my initial conditions, which, like I said, are kind of silly: with some probability the mustard bottle could be initialized inside the soup can, and then MultibodyPlant says "I can't find forces that will get me out of collision in one step" and it fails at time zero, basically. If it does that, forgive me; just start it again, it's all good. I could make a better initial-guess sequence to resolve that. Second, motion planning becomes a real bottleneck: this simple heuristic of grasps is too simple.
It will occasionally bump into things. It'll occasionally collide the object it's picked up with the bins. It'll occasionally go too close to its own base: if it chooses to go from that corner to that corner and I just make a straight-line interpolation, it'll pass way too close to its own base and the arm starts doing I-don't-know-what. We haven't respected the joint limits of the robot, and so differential IK becomes problematic for those kinds of reasons. Ergo, we will spend a week on motion planning in a few weeks, and we'll make that way better. Perception we talked about before, even at the antipodal level, and you see it in this demo regularly: it'll pick at the corner of an object and then the object will be swinging around. Why? Because the wrench generated by the gravitational force is outside the current grasp's friction cone, so it swings. It does do double picks, like we talked about, because it doesn't know what an object is. Then the phantoms: like I said, it tries to pick itself. If the hand was in the point cloud, it goes "oh, that's a good place to pick," and it goes up and picks the phantom hand. That one would be really easy to fix, but I thought it was hilarious, so I left it. And then the other failure modes are actually super interesting, because it's very hard to do much better in this case. We will try, but we'll use force control and what's called non-prehensile manipulation to do things beyond pick-and-place: we'll have to use the side of our hand to push objects into the corner, and apply forces to tip an object up so we can get our hand under it. It's not just about getting your hand around something and squeezing; it's going to be much more than that. And when objects get stuck in the corner... actually, I had to remove the Cheez-It box from my object list. Cheez-It boxes are not allowed in this demo, because they would constantly block all the other objects, and they're too big for the gripper to pick up from the top. I was just like: no Cheez-It boxes. That's it; that's for next week.

Okay, so we got to a certain level of robustness, and this is like the weakest thing I would ever call robust, but it's actually interesting to think about what I would do if I were to start applying our stronger tools to make it more robust. There are actually very strong tools you could apply, and if you've had an internship at an autonomous-driving startup or something like that, you might have used some of them. That's the difference between a little toy project and something that has to work every time, which this is far from, but the tools are all available if you choose to use them for your projects. For instance, when we were working on this kind of system, it was also very complicated at the task level, and one of the things that happens when you're doing so many different things, with the task level interacting with low-level controllers and all the rest, is that really bizarre things can go wrong. When you're writing code, you should almost always write unit tests or component-level tests; that's the only real way to write code, making sure every small function does what you want. But for the weird interactions between components that happen when you take on all this complexity, you have to do more: some level of system integration testing. And we're already at that point with a simple clutter-clearing demo. So the way we do this, and again this is part of the motivation behind the systems framework (declare your state, declare your randomness), is with a Monte Carlo test suite. You can run lots of clutter-clearing simulations off in the cloud. They're all deterministic given the initial conditions of the initial context, which includes the random seed that governs everything afterwards. You just let it go off into the cloud and run, and you write a little thing that detects success or failure, and it tells you the next morning how often you've failed. It'll even make a little movie of the last few seconds and put it in your inbox if you want, and say: these are the statistics of failure, and these are the specific failure cases. And that's how you just hammer on it. You make your Monte Carlo tests harder by making the initial conditions more diverse or adding more sensor noise. On that project it was actually very interesting, because we had a red team trying to make the noise models more aggressive and the simulation harder to pass, and then we had the people trying to make the robot better. There'd be a night where you'd see, say, git commit 1492, and suddenly the score would go way down because it started failing more, and you'd realize, oh, they added new noise models to the dish-rack detector. Then you'd see, three commits later, someone fixed it, and the score goes back up. My dream was that we would hit 99.99. That's not what happened; we stayed basically flat, because the red team was just as fast as the good guys. But that was good, because on the real hardware we got to 99.9, and we had made the simulation so aggressively bad. It'd be like autonomous driving always trying to drive through downtown New Delhi, or bumper cars, or something like that.
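A minimal, seeded version of that Monte Carlo loop might look like this. The "simulation" here is just a placeholder coin flip standing in for a full deterministic clutter-clearing run, and the reporting structure is invented for illustration; the real suite ran in the cloud and recorded movies of failures.

```python
import random

def run_clutter_sim(seed):
    """Stand-in for one clutter-clearing simulation: fully deterministic
    given the seed, since the seed governs everything downstream. Here
    the 'simulation' is just a seeded coin flip (~90% nominal success)."""
    return random.Random(seed).random() > 0.1

def monte_carlo(n_trials, base_seed=0):
    """Run n_trials seeded simulations and report failure statistics,
    like the cloud test suite described above (minus the movies)."""
    failures = [s for s in range(base_seed, base_seed + n_trials)
                if not run_clutter_sim(s)]
    return {"trials": n_trials,
            "failures": failures,
            "success_rate": 1.0 - len(failures) / n_trials}
```

Because each trial is deterministic given its seed, any failure in the morning report can be replayed exactly, which is what makes the red-team/blue-team loop described next possible.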
The simulation was throwing everything at you all the time, and the real robot was way easier by the end. So you can use the Monte Carlo simulation suite if you'd like; we don't have the cloud version of it publicly in Drake, but we're thinking about putting it out. There were some really subtle bugs it would find. We found them in simulation and said "no way that could happen on hardware," and then sure enough we saw them happen on hardware. Actually, when we first started turning all this on, the first thing we found was all kinds of bugs in our physics engine. Not easy bugs, like getting F = ma wrong, but numerical issues when things fall to infinity and the like: really subtle, or rather really annoying, things that were bad failures. But as we kept running it, we burned down all the sim-to-real stuff and got to the really good failures, the very subtle bugs. This is my favorite little example. There was a case where the robot would pick up a mug (we saw this in reality), go to set it down, set the mug down, and then pick the mug back up, set it back down, pick it up, set it down... effectively an infinite loop. And we were like, okay, that's a super-low-probability event; that was never going to happen. It happened on the real robot. And this is why. It was the night the guys added the sensor noise to the dish-rack locator. It was a very simple noise model: on every step, the perception system would adjust the true dish-rack location by some Gaussian. And there was a particular location of the dish rack that found a weak threshold somebody had put in their code. With very high probability, when it started the motion, it would think the dish rack was out far enough to put a mug down; but over the time it took to actually put the mug down, with very high probability it would get at least one sample saying the dish rack was too far in, and it had to go back and set the mug down. So the whole planner would go, "okay, fine, put it down," but by the time it tried again, with very high probability the rack looked out far enough again, and the thing would just oscillate. The day we saw that on the real robot... the simulation was great, and this is what I mean about the crazy-hard simulation: in simulation, the robot was trying to put dishes away while the dish rack was jittering all over the place, which is nuts, but being robust to that conferred robustness in the real world.

Okay. So this type of programming paradigm is very simple, but it's actually used all the time. My favorite example (I used it in Underactuated too) is Marc Raibert's simple controller for a hopping robot. The best thing about it is that it fits completely on one page, and it looks like this: he's got a flight phase, then a landing phase, then a compression-of-the-spring phase, and then it pushes off with the leg, unloads, and is back in the flight phase. He used the language of state machines to talk about his controller, and used that language to describe a fairly complicated controller on a page. It is in his book, and it's one of my favorite examples of rich behavior coming out of a simple state machine. For me, it's a goal we should aspire to with our best control designs: something so simple achieving something so complex.
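That one-page hopping controller is, at its core, a cyclic state machine driven by sensed events. Here is a sketch of the cycle as just described; the state and event names paraphrase the lecture's description of the phases, not Raibert's exact terminology.

```python
# One hop's worth of transitions, keyed by (state, event).
HOPPER_TRANSITIONS = {
    ("FLIGHT",      "touchdown"): "LANDING",
    ("LANDING",     "loaded"):    "COMPRESSION",
    ("COMPRESSION", "bottom"):    "PUSHOFF",
    ("PUSHOFF",     "liftoff"):   "FLIGHT",
}

def hopper_step(state, event):
    """Advance the hopping state machine by one sensed event; any event
    without a matching edge is a self-transition (keep doing the same)."""
    return HOPPER_TRANSITIONS.get((state, event), state)
```

In the real controller, each state also carries an action (position the leg, thrust with the spring, and so on), and the events come from sensor feedback rather than a script, which is what makes such a small table produce such rich behavior.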
optimization-based control, we don't get that simple stuff out. I don't know if you've seen this version; Boston Dynamics puts out awesome videos all the time, and this was one from a few years ago: Spot opening doors. Andy was a student here, so maybe that's why I like it so much. Spot is really good at opening doors. I've never seen the code; I know a lot of people that work there and we have informal conversations, but I don't know anything for real. Still, I'm pretty sure that's a state machine doing all the same kind of logic we're doing here, where it's like, "if someone pulls my..." You know, probably that case wasn't specifically hand-coded, but it's surprisingly robust behavior in the real world, on a real robot, built by what I like to call robot whisperers: people who know how to write these state machines really well and can make really robust behaviors. Now, our goals as researchers are probably to do less hand-designed, state-machine-type control like that, but it's getting it done in industry today. And if you look in the robotics open-source toolbox, there are toolkits like SMACH; it's actually pretty dated now, but people still use it, and it's a state machine toolbox right in ROS. People have state machine toolboxes and the like. If you look at the individual skills in the dish-loading task (I should run that again as I say it), there was a simple state machine there. It was actually written as kind of a mix of procedural logic and state machine logic, but it's non-trivial every time you pick up a mug: you approach, then you see that little visual servoing (that's ICP, remember I told you about that before; it does a little ICP saying "I thought the mug was going to be here, I'm going to use my hand cameras to adjust"), and then insert the grasp, retract, move to my pre-place, all the stuff we're doing here. And that was the low-level control for the dish-loading demo. My favorite one, actually, that programmed a really rich behavior, was when it picked up plates. So this is the sim, obviously, and this is real. That was a pretty complicated maneuver, getting this big robot hand to reliably pick up plates like that, and it was coded with the same sort of logic: the way we had it authored in code was kind of a mix of procedural and state machine logic, but a simple script. It's kind of what you'd expect. The art is in picking the parameters of these little transitions and the like, but the script is exactly what anybody would write down, pretty much. For instance, I asked the engineer who wrote this to describe it, and he's funny; he's like, "yeah, you kind of scoop the gripper into a solid grasp". And these steps are terminated with sensor-based feedback: it would scoop up until it felt collision with the hand, in the palm, and then it would stop that and transition to the next thing. But these state machines are real, and they allow you to program pretty complicated things.

Those couple of examples were all at the sort of low level. People find that state machines, this view of the world, kind of work when you have, I don't know, ten states or something like that, or a simple structure; maybe that one's relatively sequential, so you tend to go forward until you break out. But if you get more serious and try to build a more and more complicated robot out of this, then this paradigm breaks. Roughly, it breaks in a couple of ways. It becomes very complicated to author a big diagram and get all of the transitions you'd want correct. And people found out that it was very hard to write one state machine and reuse pieces of it: maybe you have a subgraph
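As an aside, the scoop skill just described fits in a handful of lines when written as a sensor-gated state machine. This is only a sketch: the states and the canned "sensor" predicates (step counts standing in for palm-contact and grasp-quality checks) are hypothetical, not the real skill code.

```python
# Minimal sensor-gated state machine for a "scoop the plate" style skill.
# The termination conditions here are canned stand-ins for real feedback.
def run_skill(contact_steps=3, grasp_steps=2):
    state, t = "APPROACH", 0
    trace = []
    while state != "DONE":
        trace.append(state)
        t += 1
        if state == "APPROACH":
            # advance until the palm reports contact with the plate rim
            if t >= contact_steps:
                state = "SCOOP"
        elif state == "SCOOP":
            # rotate the gripper under the plate until the grasp is solid
            if t >= contact_steps + grasp_steps:
                state = "RETRACT"
        elif state == "RETRACT":
            # lift straight up, then terminate
            state = "DONE"
    return trace
```

The art, as the lecture says, is in the transition parameters; the structure itself is exactly what anybody would write down.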
here, and you think, "I'd really love to take that subgraph and use it in a slightly different application", to have a kind of modularity in the state machines. And something about the state machine architecture made that very hard. It's very hard to take a piece and reuse it, because you have to sever all the edges coming in, which changes the behavior of the state machine, and then you have to rewire it to all the new places; that is so delicate that it was very hard to do. So the new machinery people authored actually came from the computer-games world: gamers were writing big state machines to make their Unreal games and the like, and it just broke, so they invented a new paradigm called behavior trees, which are very state-machine-like. Since it's late I won't talk about the details here; we have a problem that will help you think it through. But it's another programming paradigm, similar to this, where the computation is organized a little bit differently: at every time step you run through the behavior tree, and each node can tell you whether you're done and should stop, or whether you should continue on to the next one. Each of these nodes has a little bit of programming logic, and the consensus has been that this slightly different paradigm allows you to write much bigger machines that are much more modular, where people will frequently take a big chunk out of one behavior tree, stick it in another behavior tree, and immediately be happy with the behavior. It really does come out of gaming, that's where it started, but roboticists use it too. In fact, to prove that, I found another reference: the ROS behavior tree package. This is py_trees for ROS, and it's actually a really nicely architected software package. And you can make a behavior tree fit in the same way
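To make the contrast with state machines concrete, here is a minimal hand-rolled behavior tree in that spirit (a sketch, not the py_trees API; real behavior trees also have a RUNNING status, omitted here). The point is the modularity: a subtree like `stow_mug` can be dropped into another tree without rewiring any edges.

```python
SUCCESS, FAILURE = "SUCCESS", "FAILURE"

class Leaf:
    """Wraps a no-argument function returning SUCCESS or FAILURE."""
    def __init__(self, fn): self.fn = fn
    def tick(self): return self.fn()

class Sequence:
    """Succeeds only if every child succeeds, in order; fails fast."""
    def __init__(self, *children): self.children = children
    def tick(self):
        for c in self.children:
            if c.tick() == FAILURE:
                return FAILURE
        return SUCCESS

class Selector:
    """Tries children in order; succeeds on the first one that succeeds."""
    def __init__(self, *children): self.children = children
    def tick(self):
        for c in self.children:
            if c.tick() == SUCCESS:
                return SUCCESS
        return FAILURE

# A reusable "stow the mug" subtree: try the top rack, fall back to the counter.
stow_mug = Selector(Leaf(lambda: FAILURE),     # top rack is full
                    Leaf(lambda: SUCCESS))     # counter works
# Drop the same subtree into a bigger tree, unchanged; that's the modularity.
load_dishes = Sequence(Leaf(lambda: SUCCESS),  # open the dishwasher
                       stow_mug)
```

Ticking `load_dishes` runs the whole tree each step, exactly the "run through the behavior tree at every time step" pattern described above.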
we did a state machine: it could be a system that's reasoning about all the discrete logic, and it fits right into the systems framework. I haven't made it easy for you to do that, so again, at the very end, or over time as you think about it, I could try to make some of these things easier to do. Okay, I'm just going to keep going a little bit; I won't do the stretch right now.

So the interesting thing is that state machines, and even behavior trees, get used up to a certain point, but oftentimes people need more. The reason you tend to need more is when very long-term planning is required: when there are consequences of your actions now that need to be sequenced in a certain order to achieve a long-term goal. Maybe I need to have all the mugs on the top shelf in order to accomplish my dish loading. That takes us to a different paradigm, the planning paradigm, where you tend to take these low-level skills and combine them into some sort of skill framework. We'll work on skills and motion planning a lot going forward, but I want to foreshadow it here by thinking about how it's related to state machines. For the dish-loading example, there were a bunch of skills: pushing things out of the corner, picking up mugs, picking up silverware. These were all authored as different skills, and then, rather than writing down one big automaton, one big state machine, we would define the rules of interaction and use a planner to decide what the action was on every time step. That's got a long history in AI. STRIPS is the original name of this planning architecture, where you just say: I've got an initial state, I've got a goal, and (I basically think of it as graph search on a discrete graph) I'm going to do this action, and then this action, and then this action, in order to accomplish a long-term goal. You have an initial state, a goal state, and a set of actions, where each action is authored as a potential edge: what conditions must be satisfied for me to do, say, a "pick x", and what conditions will be satisfied after I do "pick x". STRIPS is the old version; PDDL, the Planning Domain Definition Language, is the newer, richer language for these things. I'll move past the details, but I think it's super interesting to look at how this worked in the specific application of the dish loading.

We had our skill concept, our action primitive. To write an action primitive in the framework, you had to basically say: given the current state, could I run my skill right now, yes or no; and if I were to run that skill, what would the new state be. That's pretty much it. And this is the entire list of actions we implemented: open dishwasher door, close dishwasher door, start dishwasher. That one was pretty funny: the dishwasher wasn't actually plugged in, because we weren't allowed to run water to the lab in that particular way (which is probably a good idea in retrospect). But it was a smart dishwasher, so we built a whole skill with a capacitive sensor: you needed something like one of those styluses, you know, a standard stylus, to make the capacitive panel work, and we mounted it on the back of the iiwa, and we had a skill that would just go and boop, and the dishwasher would start, and we're good. Someone asked: is there a pessimistic one too? Yeah, so my guess is... I actually don't know, I would go look, but it's possible to get unjammed by these kinds of things; there was an aggressive version that was more optimistic. That's pretty funny; I copied the list but didn't actually look at that one carefully. Ah, yes, exactly; that's a great point: there's no learning so far. My last slides are about how you put learning into this, but this is not learning so far. Yep, it's a hard problem, and yes, one of the big limitations of this is that you need to define a symbolic state of the world, you need to somehow have the perception system tell you what that state is, and you need a model for how that state is going to evolve. That is a really hard thing to do, and it's a limitation of the framework; I'll put it on a slide in a minute. Goals are typically authored in a logical way, but they are often impoverished relative to all the things that can happen in the world. The dishwasher state was things like: are the clean items put away, the number of clean items put away, the number of dirty items available. It was a very discrete state summarizing what the robot perceived in the sink, and that's right out of the code. But if you take that discrete logic and author yourself a bunch of skills, then you can do a little planning on the fly and get all the robustness that you see. This is the example where someone went and closed the dishwasher rack, and it had to set the mug down; that's exactly what it was. The robustness here is also what made us susceptible to the crazy noisy racks. Boston Dynamics, they kick the robot; we just close the dishwasher. We could have kicked the dishwasher, I guess. But this time, instead of a state machine providing that robustness, it's a planner: you just say these are the actions I can do, this is when I'm allowed to do them and what the outcome is going to be, and I roughly do a graph search, a richer version of graph search,
to make that happen, and you can get the same levels of robustness out. This is exactly what you were asking: the problem with this approach, its limitation, is that it requires you to somehow take those discrete states and acquire them from perception, and to have a model of how that state will evolve if I apply certain actions. These can be very brittle in the real world. There have been famous debates about this, and one of the most famous, maybe, involved Rod Brooks, who wrote a nice paper called "Elephants Don't Play Chess", arguing against this; you know, "Intelligence Without Reason". There was a line of argument he made, basically saying the AI community was going in the wrong direction by trying to summarize the world with symbolic states; it's just too hard. Elephants don't play chess, they don't do planning like this on every time step; that's the chess analogy. And he was saying this right before he went off and started iRobot and made the Roomba, which was highly successful without playing chess, just by kind of bumping around. The way the Roomba did that, initially at least (it's probably evolved now), was Rod's version of this, called the subsumption architecture. It was a precursor to behavior trees; the Roomba was much more like a behavior tree. It was saying: don't rely on planners, that's too brittle for the real world. Grounding symbols in physical reality has rarely been achieved, says Rod.

Then I'll just end by saying that I do think one of the new challenges, and you'll see lots of papers about this right now, is how to make this sort of skills framework, this long-term decision making, work when the skills are learned. The way we're picking up plates at TRI now is with policies, controllers, that were acquired by learning, and we have demonstrations that are much more robust to perception failures and other things like this; they can pick up plates very robustly coming out of machine learning. The question is: if I want to put that into a skills-type framework, so that a high-level task planner can reason about it, or a state machine can be written around it, or whatever, how do you combine the new tools for learning with classical symbolic planning? That's a big topic we'll talk more about. The last thing I'll say is that, because you have your project proposals next week, the problem set will be a little smaller this time to accommodate. But please, again, ask us about project proposals, and use the pre-proposals as a chance to get good feedback. Okay, thanks.
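Before moving on: the precondition/effect planning described above really is just graph search, and a toy version fits on a slide. This sketch uses made-up dishwasher predicates and action names, not the actual skill framework. Each action lists preconditions, add effects, and delete effects over a set of symbols, STRIPS-style, and breadth-first search finds a shortest action sequence.

```python
from collections import deque

# Each action: (name, preconditions, add effects, delete effects),
# all over a set of true/false symbols -- STRIPS-style, names illustrative.
ACTIONS = [
    ("open_door",  {"door_closed"},               {"door_open"},   {"door_closed"}),
    ("load_mug",   {"door_open", "mug_in_sink"},  {"mug_loaded"},  {"mug_in_sink"}),
    ("close_door", {"door_open"},                 {"door_closed"}, {"door_open"}),
    ("start",      {"door_closed", "mug_loaded"}, {"running"},     set()),
]

def plan(init, goal):
    """Breadth-first search over symbolic states; returns an action list."""
    start = frozenset(init)
    queue, seen = deque([(start, [])]), {start}
    while queue:
        state, steps = queue.popleft()
        if goal <= state:                         # goal predicates all hold
            return steps
        for name, pre, add, delete in ACTIONS:
            if pre <= state:                      # action is applicable
                nxt = frozenset((state - delete) | add)
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, steps + [name]))
    return None

print(plan({"door_closed", "mug_in_sink"}, {"running"}))
```

Starting from a closed door and a mug in the sink, this prints the four-step plan: open the door, load the mug, close the door, start the dishwasher. Swapping in richer actions, or a pessimistic versus optimistic variant of the same skill, is just editing the table.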
Robotic Manipulation, Fall 2022
Lecture 22: Planning Under Uncertainty; Wrap-up
All right, okay, welcome back, everybody. So today I'm going to try to split the lecture roughly in half; I'll probably do a little more than half on planning under uncertainty. There's a lot to say there, but in the spirit of this being a boutique lecture near the end, I'm going to give you some of the reasons why you might want to learn more, and just some of the key ideas of planning under uncertainty. And then I'd really like to spend a little bit of time summarizing what we did. We picked up a lot of tools, and I think the connective tissue that puts them all together is still forming, so it's useful to make some of those connections again: remember what we learned, why we learned it, how it has come up in your projects, and wrap it up like that.

Let's give a few examples of why you might want to do planning under uncertainty. Maybe one example would be at the task level. If I asked one of these chatbots, or, you know, whatever: "hey robot, get me..." (I have to think of something that has uncertainty) "...get me the mustard". Maybe, because of our massive understanding of the way kitchens are laid out and where people put things, there's, I don't know, a 40% chance that it's in the fridge (I don't know if you refrigerate your mustard or not; I know it's a personal choice), maybe a 30% chance it's in the cupboard, maybe a 10% chance there's one in the pantry, and maybe a 20% chance we're just out and you've got to go to the store. A planner at the task level that's only thinking about, let's say, the most likely scenario is going to be impoverished in its ability to accomplish the task, for sure; not only in the total success (if it's not in the fridge, it fails), but also because, if the robot is already in the pantry for something else, then even at low probability, maybe it's worth a quick check before taking the time to drive over to the fridge. So reasoning about probabilities at the level of decision making is hugely important, and at the task level I think that's sort of clear. I guess with all the large language models that William was talking about, maybe there's like a 10% chance that it'll tell you to pour water over your head or something like that too, but that's the Wild West.

Okay, but it turns out it's also super useful down at the dexterous-manipulation control level. Let me think of a good example; I actually brought a plate, so I can make one of my favorite examples, one that came up when we were loading dishes a lot. (I would have brought the dish rack, but that didn't work out.) Imagine this is the rack in the bottom of the dishwasher. The way our robot loaded dishes was very characteristic: it would grab the plate, get a pretty good estimate, line the plate up above the rack, and go straight down. And that always bugged me, because humans don't do that whatsoever. They do it much faster, first of all, but they also take a fundamentally different strategy: they come in, they almost always make contact, they're compliant, and they go in like this. That makes sense, right? Instead of a straight-line trajectory, where you estimate, possibly with some accuracy, the location of the tines in the tray, line up the plate perfectly, and try to put it straight down, I think a human would come in at an angle, intentionally make contact here, and then passive mechanics would have it rotate up a
little bit, and it would kind of slide down. The only explanation for that is something about being more robust, something about uncertainty: not needing to accurately know the orientation of the plate or the position of the tines. It's a fundamentally different strategy at the level of the controller, because we are implicitly thinking about uncertainty. We saw another one, too, in our journey here. Remember the example of pushing the book? That was the example of force control, where you wanted to control the friction cones between the finger and the book, and also between the book and the table. Right there: what's that motion right there? Then he's going to go around and pick it up. The only justification for that second motion was that it reduced the uncertainty in the orientation of the book. As he slid it around (he even rotated the book), it was all about the friction cones: the exact orientation of the book was some relatively subtle function of the friction cones, but by coming in with a known position of the fingers, even if the book started at a fairly different orientation, pushing lines the book up nicely. It's a very robust strategy that gets the book to the end of the table at a known location. That's the only justification for that middle move; otherwise it was completely worthless. So even at the level of planning and control, reasoning about uncertainty really matters.

I think the hallmarks of a system that is reasoning about uncertainty in planning and control are pretty clear. One of them is certainly the level of robustness you can obtain by thinking about all the things that could possibly happen, not just planning optimistically through the world. There are various ways to talk about robustness, and some of them I would put under this umbrella, but there are other approaches that can achieve robustness, I guess. The absolute hallmark, the identifying feature of a system that's reasoning about uncertainty when it makes decisions, is information-gathering actions. We will see examples throughout the lecture of cases where the robot does something fundamentally different (and you need to program it fundamentally differently), not for the sake of accomplishing the task, getting the book to a certain orientation, but just for the sake of gaining information: reducing uncertainty, and using that reduced uncertainty to accomplish the task with higher confidence. Really, if you don't have uncertainty reasoning flowing through your system, you will never see robots taking actions just to gain information; that's a property that only appears if you're trying to think about optimizing uncertainty, or something like that.

Okay, so you need a whole stack to start reasoning about uncertainty: you need to think about uncertainty at the level of perception, and then (what we're going to talk mostly about) how to use it in planning and control. Luckily, our perception systems are actually pretty good; there are lots of ways we already have probabilities flowing through state-of-the-art perception systems. The image recognition was putting out the probability that it was a sheep, the probability that it was a dog, the probability that it was a cat, and so on; that's just one example. We talked about pose estimation, where it was outputting entire distributions over possible orientations of the mugs. We talked about keypoint estimators that weren't putting out the XYZ coordinates of the keypoints but were actually putting out a belief; I mean, a heat map over possible locations of the keypoint. And over and over again you'll see people have, I mean,
certainly neural networks are capable of putting out lots of interesting things, and oftentimes those interesting things include distributions over possible outcomes. I don't think there's a big barrier anymore to asking your perception system to tell you a little bit about how confident it is. There are different types of uncertainty (you might have heard about aleatoric versus epistemic uncertainty), and there are different ways to ask neural networks to address them. I won't dig into those, but at the perception level I think there are many good ways to think about uncertainty. The challenge, and the thing we'll focus on here, is how to consume those estimates of uncertainty down through the planning and control stack, because we haven't said anything about that yet.

So, how do you make long-term decisions that reason about uncertainty? Well, if you're going to make long-term decisions, if you're going to make a plan, you need a model: a model of how your uncertainty will change over time if you make certain decisions. If I make this decision, my uncertainty evolves one way; if I make a different decision, it evolves another way; and I want to choose my actions based on how the uncertainty is going to evolve. So we need a model for the dynamics of the uncertainty. There are lots of ways to think about this (this is stochastic processes and stochastic dynamics), but we've actually already written down everything we need. When I write the general form x[n+1] = f(x[n], u[n], w[n]), with the randomness coming in through w, I already have a model of uncertainty and how it propagates through the system. So far, though, we've only used these dynamics in limited ways: for simulation, we take a random draw of w and evaluate the next x; for planning, we've so far pretended that w is just zero and done deterministic planning. Today I want to do planning where we admit that w is not zero. It's easiest to think about that first, if only for notational reasons, in the case of a finite state x, finite actions u, and finite noise w: the standard, sort of tabular, Markov decision process, or the partially observable Markov decision process we're going to do. So to start, let's say that states, actions, and observations are all discrete and finite: a finite number of states I could be in, a finite number of actions I could take, and a finite number of observations I could see. That's not what our manipulation systems look like, but that's how I'll write the problem; I'll just avoid writing more probability notation than I need in order to tell the basic story.

There's nothing in those equations that says x or u is a continuous variable; I could write exactly those equations where f is just a transition map. But when we talk about belief-space planning, we normally write this in a slightly different way. I'll write the probability over initial conditions as a distribution p(x[0]). And I'll write the dynamics, which had the random variable coming in, as the probability of transitioning to some new state given the current state, p(x' | x, u): it depends on u, but w no longer appears; these are my transition probabilities. Similarly, I can write my observations as probabilities conditioned on my state and my action, p(y | x, u). I don't know how comfortable everybody is with probability notation, but I want to make clear that x[n+1] = f(x[n], u[n], w[n]) is a sort of deterministic way to write a stochastic equation: there's a random variable coming in, but f itself is a deterministic function. I could equivalently write it by asking: given x and u, what is the distribution over y? That removes the noise from the argument, and I have a distribution over possible outcomes. Is it clear how I can write the same thing in these two different notations? For instance, if w were Gaussian and I pushed it through a linear equation, I could either write the output as a Gaussian distribution (not a function of w), or make a specific draw of w and report the specific value that comes out the other side. And the reason it's so nice to do the fully discrete case is that I can represent each of these objects with a finite set of numbers. How would I represent a probability over possible x's? If x is drawn from some finite set of values (I should clear up a notational overlap here), then my probability over x[0] can just be written as a vector: element i is the probability that x[0] equals the i-th value. Similarly for the transition probabilities: it's almost a matrix, but since I'm conditioning on two variables it's actually a little easier to think of it as a tensor. Actually, I
wouldn't have done that a few years ago but now I can just say tensor and everybody's good right think about that as a tensor for for every you I can I have a matrix of that maps from my current X to my next X Prime and all the Machinery goes through very nicely when you just have tables of numbers and the question of how do you represent a probability distribution is just is not there when you have a finite list of numbers okay so what's going to happen now is we're going to have um we're gonna we're gonna watch how this system can evolve under the probability distributions what what is the sort of State evolution of my probabilities sorry I moved it twice the state of the system I would say is clearly you know the state of the plant is clearly X right that's what we've always been calling the state but from the perspective of the Observer someone who's trying to to track what's happening going on with with x we need a little bit more than just X to summarize what's going on okay so it actually gets very deeply into the things we've been talking about with we talk about different how to learn different state representations what makes a good state do people know what's the definition of a state what is the fundamental property that sort of defines a state okay so he says minimal and sufficient condition information to predict the next state yes I mean almost right so so I so we could argue about whether States need to be minimal or not I think you could talk about a minimal representation for State I'll uh I'll forgoing minimalism for now okay the question is so yeah a state is something that lets you fully predict the next state that's a that's a good super good definition but it's not actually the definition we want here right because the system is stochastic we can't perfectly predict the next state so we need a slightly richer notion of what state a slightly richer definition but completely consistent is that a state is a set of information a set of numbers 
for instance that lets you forget all of the other things you've seen it's a sufficient statistic for all of the history of your observations okay so if I wanted to write the evolution of this system if I wanted to predict for instance what is the probability of why at the nth step being I don't know the ith y conditioned on all of the things I've seen so far U zero U1 U2 up to let's say u n minus one but also y zero y 1 up to y n minus 1. potentially the next y I expect to see or with a distribution over y's I expect to see is a function of all of the things I've seen in the past okay what we want is to summarize all of the things I've seen in the past so that the prediction based on a state that represents this is the same as if I had all of the histories right so I want to say the probability over y n equals y i conditioned on some b I'm going to call it B here BN and maybe u n since that's coming in right now okay is it is equivalent to having all that prior information okay so this state here what does it mean to be a good state it means it's a sufficient statistic for the history or a sufficient summary let's say of my entire history of observations okay so for the purposes of the Observer the state that you want to track this state this thing called B is called a belief state and we want to make sure we get our head around that and then use it for planning okay so a belief state is some efficient hopefully not always but some let's say numerical summary of all the things that I've done that I've seen in the past that is sufficient for me to predict what's going to happen in the future and for these sort of Markov processes and dynamical systems of this form there's a natural choice for the belief state it's not a unique choice it's certainly not a minimal choice in most cases okay a minimal choice would be to say that the belief let me say the I'll use it as a vector again since everything's nice in the continuous thing so I've got a belief vector and the 
Element i of that vector is the probability that x_n = x_i, conditioned on u_0, u_1, ..., y_0, y_1, and so on and so forth. So the belief, the sufficient statistic that allows me to forget everything I've seen, is a probability distribution over all the possible states I might be in. That's a super powerful thing. It says that no matter how long my history was, no matter how many observations I have, if I just summarize my current estimated probability of being in state zero, state one, state two, that's just one vector, and if I keep track of that, I have everything; I don't need to remember anything else about the past. And because it's sufficient to summarize the past and predict the future, it's also sufficient for optimal decision making. It turns out that the optimal controllers, the optimal policies, must be of the form u_n = π*(n, b_n): potentially a function of n and of b_n, even when the system is partially observable. In the case where y just shows me x without noise, for instance, then b can just be x, and all the things we already know still work; they still fit in this framework, because my probability distribution collapses to a single point. But in general I have to keep an entire distribution over x. Yeah? I wish I had a one-liner for that, but it is true: you can derive it recursively from these equations; if I wrote out all the equations you could see it in the algebra. It's also the tenet of filtering: the Bayes optimal filter, for instance, takes exactly this form. But without all that machinery, I have to ask you for a little bit of a leap of faith. Thank you for asking. Okay, so now our new commodity is to traffic in these beliefs.
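The recursive update alluded to here is just Bayes rule pushed through the dynamics: predict the belief forward through the transition model, then reweight by the observation likelihood and renormalize. A minimal sketch for a finite-state system; the transition tensor and observation matrix below are made-up toy numbers, not from the lecture:

```python
import numpy as np

def belief_update(b, u, y, T, O):
    """One Bayes-filter step: predict through the dynamics, then correct
    with the observation likelihood and renormalize."""
    b_pred = b @ T[u]          # predict: b'_j = sum_i b_i P(x'=j | x=i, u)
    b_post = b_pred * O[:, y]  # correct: weight by P(y | x'=j)
    return b_post / b_post.sum()

# Toy 2-state, 1-action, 2-observation system.
T = np.array([[[0.9, 0.1],     # action 0: state 0 stays put with prob 0.9
               [0.2, 0.8]]])
O = np.array([[0.8, 0.2],      # state 0 usually emits observation 0
              [0.3, 0.7]])     # state 1 usually emits observation 1

b = np.array([0.5, 0.5])       # wake up maximally uncertain
b = belief_update(b, u=0, y=0, T=T, O=O)
print(b)  # belief shifts toward state 0 after seeing observation 0
```

Note the belief vector always stays a probability distribution; the whole history is compressed into those two numbers.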
Now, the problem, caveat, you know, spoiler alert: sometimes it's hard to write a distribution over big complicated things. It might be the shape of the mustard bottle, it could be the time of day; there are a lot of things to potentially keep track of in the world, and it can become untenable to keep a distribution over everything that could possibly happen. But in smaller problems, and with selected, targeted reasoning about uncertainty, you can do very well with this. Okay, so this is amazing. I'll give you a few examples. There's a classic example people talk about for the partially observable Markov decision process: discrete worlds with discrete observations. One of them is a cheese maze. Silly thing, but there's cheese here, and the mouse has to go and find the cheese. There's a discrete number of places the mouse might be, and there are observations when you're in a certain place on the board. I'm not going to draw a mouse; well, maybe I could draw a mouse, something with a tail and, uh, ears. Okay, there's a mouse running around the maze. Luckily the mouse can see the numbers we put down, which are like signposts that tell it where it is. And the interesting cheese mazes are the ones whose observations don't tell you exactly where you are: they give you an indication, they give you information about where you are, but they don't instantly determine where you are, because maybe the number two appears in multiple places. Something like this is a classic one: maybe there's a five in all these places, a six, a seven, a six. Okay, so what is the evolution of this belief going to be? If the mouse wakes up and is following Bayes optimal reasoning, a Bayes optimal mouse, then it's keeping track of a finite list of probabilities.
Right: b at time zero. Maybe there are 11 cells here, so maybe there's equal probability everywhere; it could be anywhere on the board. Then (maybe I should have started it somewhere more interesting) maybe it sees a two at the first time step, and after one observation it has collapsed its belief to zero in most places, but there's still half the probability that it's in this place, and half wherever the other two is, and the rest are zeros. And as the mouse moves through the board, it's updating its probability distribution over possible states. The recipe for updating that falls directly out of Bayes rule applied to those forward dynamics. For a more complicated, more robotics version of that, people might know a lot about state estimation and Monte Carlo filtering and the like; this is one of the early examples that popularized probabilistic robotics. So this is a robot, a trash-can robot, moving around with a sonar, because that's what we had back in the day. You can think of this as a much more complicated cheese maze, where the observations are now the depth returns of the sonar. If I start it over again, it starts off with probability all over; these are sampled versions of that probability. As it gets sonar returns, it gets information about where it is, but they don't completely determine where it is, and it has probability mass, which is like that vector, all over the space. As the belief evolves, the robot can do pretty complicated things; it's just a more sophisticated version of this super simple example. Okay, now here's what's essential; this is the point of this part of the lecture. What's essential is that the rules governing the update of the belief distribution have dynamics that we can write down. It's just another system.
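The maze step just described can be worked through numerically. The signpost layout below is invented (the lecture only says "2" appears in two places), and the observation model is deterministic, so the Bayes correction reduces to zeroing out inconsistent cells:

```python
import numpy as np

# Hypothetical signposts for an 11-cell maze; the number 2 appears twice,
# so seeing a "2" cannot fully determine where the mouse is.
signs = np.array([2, 5, 2, 6, 5, 7, 6, 5, 6, 7, 5])

b = np.full(len(signs), 1 / len(signs))  # wake up: uniform belief

# Observe a "2": Bayes rule with a deterministic observation model means
# zeroing out every cell whose signpost isn't 2, then renormalizing.
b = b * (signs == 2)
b = b / b.sum()
print(b)  # 0.5 at the two cells labeled 2, zero everywhere else
```

One observation collapsed eleven hypotheses down to two, exactly the half-and-half belief described above; further moves and observations would disambiguate the rest.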
You can write it down: through the Bayes optimal filter, b_{n+1} is just some function f(b_n, u_n, y_n). It's a system that looks like this: u going in, observations coming in, with an internal state b inside it. You can put b on the output port if you want; that's not essential here. And if you have goals specified over beliefs, for instance that I want to reach a certain belief, I want to be where the cheese is with high probability, or be in some room in the map with high probability, then the task is just like what we've done before: a task of choosing the u, subject to the dynamics f, that moves around the belief b. As a result, all of the tools we've already talked about, with some caveats, but the basic tools, can work. Once you have this problem, you can do trajectory optimization, and I'd say the dominant approaches for large-scale things would be a trajectory optimization kind of approach or a sampling-based motion planning type approach. Now, we've talked mostly in this class about kinematic trajectory optimization; this is really a dynamic trajectory optimization. That's the biggest caveat I have for you: you have to think about the fact that you can't take arbitrary paths through b. The dynamics of this function f do limit you. It's an underactuated system, and that's actually interesting and hard, because you don't have enough actuators to control your entire belief. So the trajectory optimization versions we do in Underactuated are actually more suitable than the kinematic kind, but it's a small extension from the types of things we've done.
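Treated as a system, the belief update really is ordinary dynamics you can roll out and optimize over. A sketch with made-up numbers, using one common trick for making the rollout deterministic (assume the most likely observation at each step, in the spirit of the "optimistic about y" idea the lecture mentions):

```python
import numpy as np

# Toy finite-state model: belief dynamics b' = f(b, u, y).
T = np.array([[[0.9, 0.1], [0.2, 0.8]],   # action 0
              [[0.5, 0.5], [0.5, 0.5]]])  # action 1: a mixing action
O = np.array([[0.8, 0.2],
              [0.3, 0.7]])

def f(b, u, y):
    """Belief dynamics: one Bayes-filter step, b_{n+1} = f(b_n, u_n, y_n)."""
    b = (b @ T[u]) * O[:, y]
    return b / b.sum()

def rollout(b, us):
    """Roll a candidate action sequence through the belief dynamics,
    assuming the most likely observation at each step."""
    for u in us:
        y = np.argmax(b @ T[u] @ O)  # predicted observation distribution
        b = f(b, u, y)
    return b

b_final = rollout(np.array([0.5, 0.5]), [0, 0, 0])
print(b_final)  # the belief concentrates as evidence accumulates
```

A planner would score candidate action sequences by a cost on the resulting beliefs (for instance, high probability mass on the goal state) and optimize over `us`, which is exactly the "choose u subject to the dynamics f" problem above.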
So you can do trajectory optimization over u, subject to constraints: b has some initial condition and final condition, you can put a cost on b, and so on and so forth. Now, almost. There's one important difference here: y, as I've written it from the system's perspective, is still a random variable coming in. It's a function of x and u, but it could be a noisy measurement. So you actually have to do a form of stochastic trajectory optimization, or you can make a choice to be optimistic about your observations y. People have studied nicely how you can do this. You could do stochastic trajectory optimization; if you've heard of iterative LQG, that actually would be, if I were to recommend one thing to solve these problems, I would recommend iterative LQG. We had some work that tried to be optimistic about y and used deterministic trajectory optimization to do it. But the flavor of this is very much that trajectory optimization gets you pretty far. Okay, so let me tell you that version of it. This is a toy version of the problem. Imagine you're a point robot starting here; those are the initial conditions. We call this the light-dark domain. It's the simplest kind of instance of a problem with state-dependent observation noise. The basic setup is that it's dark over here, and your position sensors are noisy when it's dark; over here it's light, and the position sensors are pretty accurate when it's light. Your goal is to get to the origin. If you didn't reason at all about uncertainty, and you felt like you were at the mean of your initial condition distribution, then you'd take a straight line here. But if you have process noise too, for instance, you might end up actually very far from there. If you write an optimization to say: I'd like to get here, but I'd like to get here with some confidence, I'd like my belief to be narrowly distributed around that goal,
then it actually makes sense to go into the light in order to come back to the dark. This is explicitly an information-gathering action, one you don't get from deterministic reasoning but do get from reasoning over belief state. The reason there are two curves is that we're talking about two different solvers: this one is based on linearizing the whole equations and doing basically one step of the iterative LQG kind of algorithm, and this one is based on a direct trajectory optimization, a dynamic trajectory optimization. But the principle is the same: only because you're reasoning about uncertainty did you choose to go into the light before coming back. [Question: but ten steps down the line, how can I plan the whole trajectory right now, all the way to the end, if I don't know what I'll see?] So this is a very deep question, thank you. The question is: I can't know how my distribution is going to evolve, because I don't know what sensor measurement I'm going to get at, say, time three in the future. Well, you can know how the distribution over distributions is going to evolve. You either have to think about the random variable over all possible measurements you might get at time three, or you can say: I'm going to propagate where I think I'll be at time three, and then assume I get a particular measurement at time three in order to keep going. But if we agree that we have a dynamic model of how things go and of what my measurement noise is, for instance, then I do understand things like: if I were to look around here, I would have a different view of what's behind here. I don't know what's behind this paper, but I know I'd get more information, reducing my uncertainty about what's behind it, if I were to move here.
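One way to see why this works: in a Kalman-filter setting, the covariance update depends only on where you measure, not on the measured value, so the uncertainty trajectory can be planned in advance even though the measurements themselves are unknown. A 1D sketch with an invented noise model; here it is "light" (accurate sensing, small R) for x > 2 and "dark" (noisy, large R) elsewhere:

```python
def R(x):
    """State-dependent measurement noise: accurate in the light, noisy in the dark."""
    return 0.01 if x > 2.0 else 100.0

def final_variance(path, P0=1.0, Q=0.01):
    """Propagate the belief variance along a planned path of measurement
    locations: predict (process noise grows P), then the Kalman covariance
    update at each location. The measured values never appear."""
    P = P0
    for x in path:
        P = P + Q                  # predict
        P = P * R(x) / (P + R(x))  # measure at x
    return P

straight = [0.0] * 10              # head straight through the dark
detour = [3.0] * 5 + [0.0] * 5     # go into the light, then come back
print(final_variance(straight), final_variance(detour))
```

The detour ends with a far smaller variance at the goal, which is exactly why the optimizer, asked for a narrowly distributed final belief, chooses to go into the light.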
And that turns out to be very powerful; enough that it causes you to take information-gathering actions. And because you might be surprised, and what you find there might very much determine what you do, we often use this in a replanning cycle: you plan, but if you ever see something that dramatically changes your view of the world, you just replan. But that's a great question. The particular objective here, just to think about it: instead of representing the belief as a table of possible locations, the representation here was a mean and covariance over possible locations, and the goal was to have my mean at the goal with my covariance as small as possible. Find a trajectory (there was some cost on action too, I should have included that), and it would go across here and then come back in with covariance as small as possible; it was better to go into the light than to take the straight path. Okay, so that's still a little abstract; here's a robotics version of it, a manipulation version. Let's say you know there are going to be two boxes in front of you, but you don't know the size or location of the boxes. Let me just read it carefully: the robot must localize the pose and dimensions of the boxes using a laser scanner mounted on the left wrist. It's relatively easy when the boxes are separated, but when they're squished together, like in (c) on the right, it's actually pretty hard. So this is a simple example where, if the robot is taking information-gathering actions, it will actually do something different in order to increase its confidence in the location of the box before it picks it up. You put this into the trajectory optimization formulation, where you take measurements as the laser scans come in, and it actually decides to go off and push on the left in order to get a better sensor reading of the right. It's tracking a distribution over possible poses of the box and the like,
and it makes the decision, just with trajectory optimization, to take that information-gathering action, to reduce its uncertainty, and then it goes to pick up the box. But that same algorithm, if the boxes had started off separate, would do its first scan, find it was fairly confident, and just go in and pick up the box. Same thing here: you can actually see a rendering of the distribution over those possible locations. Of course it's a high-dimensional thing, so it's plotted down in a way that's a little hard to read; the reason it looks periodic is that it's a higher-dimensional thing projected onto a single line. And the big robot would make those decisions. [Question about local minima.] Yeah, it's a really big question: is this kind of trajectory optimization more sensitive to local minima in some way? I actually think it might be less sensitive, even though it's solving a harder problem, because, for the same reason we talked about with randomized smoothing, I think putting distributions over possible outcomes smooths out some of the kinks in the cost landscape. The big local minima will still be there, but it might get rid of some of the small local minima. [Question: the objective is what gives the gradient that organizes information gathering; do you feel that RL, or other learning algorithms optimizing an objective, would in practice also end up doing something like this?] Awesome, yeah; that's actually the last point I want to make, so that's really good. But just to be clear: this is doing information gathering, right? That push I would consider valuable only for the sake of gathering information. The goal here is to,
with high probability, pick up the box; the only reason it does the push is to reduce its uncertainty, to gather information. But the second part of Leroy's question was actually the biggest last point I want to make here, which is that this really does have deep connections to the state realization questions we've been talking about. I'll put it on a different slide so we're not watching that. Okay, so now imagine I'm doing system ID, input-output system ID for instance, to try to learn a state representation in order to accurately predict my future observations. In the linear system ID setting we sort of thought about this as: we've got a deterministic system we're trying to recover, and it did recover the A, B, C, D matrices pretty well. But if I have a stochastic system that it's trying to recover, and its objective is to predict y with the highest confidence possible, then the state it has to learn is, I think, better thought of as a belief state. Now again, belief states are not unique; all the stuff we talked about with similarity transforms and the like is still present here. And it might be that the belief state is not minimal: tracking all the things the real state might do might be more than you need to predict y. We talked about approximate information states; that's exactly the idea of finding a state representation here which is an approximate belief state. Similarly, if I'm doing RL, say via policy gradient, or a policy gradient with a dynamic policy like an LSTM or something, or if you have a value function or a Q-function that has some dynamics, some state... by the way, I think if you do that you've walked away from RL theory. People are working on that theory now, but that's not the standard thing to do in theory. But people do it in practice now;
they'll put an LSTM representing the value function or the Q-function. And the state that this thing has to acquire in order to accomplish the task (say we had an oracular agent that would just solve the RL problem and solve the representation along with it), the state in the controller, is probably best understood as an approximate state of the belief space of the system; that's what's required to make optimal decisions. It would be a task-relevant approximate state, and similarly the state in the value function would be just the part of the belief space you need in order to accurately predict values. So I do think RL is potentially doing this, and I think the language of belief is exactly the right way to think about what RL is doing in those cases. So you should learn more about belief space planning; it's good stuff. Okay, let me step back and just cover the course again in a few minutes. I think it's really helpful to connect it all together. We've done a lot of things, covered a lot of tools, sometimes at a level where I wish I could have spent four lectures and spent less, but I hope you came away with a lot of tools, and it's been very rewarding for me to see you guys hit some of the subtler points in your projects, for instance. So let's do a kind of "where have we been". People also asked in the survey for things like: predict manipulation 40 years in the future. That's hard, but I'll try to say a few things about where it's maybe going. Okay. So we started off, after the basic introduction, with just basic kinematics, Jacobians, stuff like that. The multibody notation is something I doubt too many people will carry forward as part of your life, but if you do,
you'll be happier, I promise. I've seen bugs in notebooks, I've made my own: if you find yourself frustrated that the Jacobian you got out of diff IK is in the wrong frame, or your forces somehow seem to be in the wrong space, or something like that, more careful use of multibody notation will save you. Consider it; it really does help, I think. And the general view I tried to push was less about the mechanics of the kinematic equations, which is the slightly more standard treatment, and more about thinking of it as a spatial algebra and understanding the basic operations of how rotations affect frames and so on. That's a lesson we came back to multiple times. And I really do think a lot of you found that the differential IK pipeline became a workhorse for your projects; a lot of people are using it, and some of you really got to appreciate it. Or maybe you're mad at it, but a month from now you'll be really appreciative of it. Maybe. For instance, one of my regrets is that a bunch of people copied the iiwa painter notebook, and that had only used pseudo-inverse control, not the full diff IK, because that was sufficient for that notebook, and I hadn't pictured everybody copying it and trying to use it for more than it was good for. The pseudo-inverse controller can run into singularities, right? And some of you did, and it blows up. Unfortunately, the way it blows up is that it causes MultibodyPlant to say "I can't", or the integrator runs into a time step of 1e-14, and that's not a very clear message. But fine; that was just the pseudo-inverse being insufficient, and if you switched over to the diff IK, the least-squares interpretation which allowed you to have constraints, then those issues went away. So differential IK as an optimization is a workhorse, and a thing I hope you feel like you learned.
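The blow-up near a singularity is easy to reproduce. A sketch with a made-up, nearly singular 2x2 Jacobian: the pseudo-inverse commands enormous joint velocities, while a damped least-squares step (one simple stand-in for the regularized, constrained diff-IK formulation) stays bounded:

```python
import numpy as np

# Made-up Jacobian, almost rank-deficient: the two rows are nearly parallel.
J = np.array([[1.0, 1.0],
              [1.0, 1.0 + 1e-6]])
v_desired = np.array([0.0, 1.0])   # desired end-effector velocity

# Pure pseudo-inverse: solves J q = v exactly, at any cost in joint speed.
q_pinv = np.linalg.pinv(J) @ v_desired

# Damped least squares: min ||J q - v||^2 + lam * ||q||^2 stays well-posed.
lam = 0.01
q_dls = np.linalg.solve(J.T @ J + lam * np.eye(2), J.T @ v_desired)

print(np.linalg.norm(q_pinv))  # enormous joint velocity near the singularity
print(np.linalg.norm(q_dls))   # modest, at the price of some tracking error
```

The least-squares view also makes it natural to bolt on velocity and position limits as constraints, which is exactly what the pseudo-inverse formulation cannot express.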
We jumped into geometric perception (there's a party going on somewhere, of course). We learned iterative closest point and its variants, and I'd say both the kinematics and the ICP kind of work helped me start talking about many of these problems as optimization problems too. And I think the takeaway some of you are seeing when you play with perception on the project is that these point cloud processing algorithms are very good for refinement if you have a known geometry, but they're not great for the global part of the perception problem, and you really want to bring in the deep learning pipeline to help with that bigger part of the problem. They require models, and they're great for accuracy, for refinement if you will, but they need an initial guess. And remember I said that if you told me I could only have RGB or only have depth, my answer would have flipped: a few years ago I'd have said take my RGB, keep my depth; now I'll keep my RGB. We built up clutter clearing as the running example, for a few reasons: we started to talk about perception in clutter, richer perception that could handle the occlusions and things like that, about more complicated simulation mechanics, and even about programming at the task level. This was scaling up the basic recipe into a really much more sophisticated version of the problem. It also helped make the point that we didn't need to estimate the pose perfectly in order to be successful, because that clutter-clearing demo was just using antipodal grasps; it wasn't even thinking about what the objects were, and it went pretty far. We jumped into deep perception: we talked about Mask R-CNN and the like. That was the first workhorse; if you're starting to do perception in the real world, you might very well still be using Mask R-CNN.
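The "refinement, not global search" point about ICP can be seen in a minimal 2D sketch: nearest-neighbor correspondences plus the closed-form SVD alignment (Arun's method), which snaps a pose to near-exact alignment only because the initial guess is already close. The point cloud and perturbation below are invented:

```python
import numpy as np

def icp(src, dst, iters=20):
    """Bare-bones 2D point-to-point ICP: match by nearest neighbor, then
    solve the best rigid transform in closed form, and repeat."""
    R, t = np.eye(2), np.zeros(2)
    for _ in range(iters):
        cur = src @ R.T + t
        # nearest neighbor in dst for each transformed source point
        nn = dst[np.argmin(((cur[:, None] - dst[None]) ** 2).sum(-1), axis=1)]
        mu_s, mu_d = cur.mean(0), nn.mean(0)
        U, _, Vt = np.linalg.svd((cur - mu_s).T @ (nn - mu_d))
        dR = (U @ Vt).T
        if np.linalg.det(dR) < 0:        # guard against reflections
            Vt[-1, :] *= -1
            dR = (U @ Vt).T
        R, t = dR @ R, dR @ t + (mu_d - dR @ mu_s)
    return R, t

# Toy cloud, displaced by a small known rotation + translation.
rng = np.random.default_rng(0)
src = rng.uniform(-1, 1, size=(50, 2))
ang = 0.05
R_true = np.array([[np.cos(ang), -np.sin(ang)], [np.sin(ang), np.cos(ang)]])
dst = src @ R_true.T + np.array([0.02, 0.01])

R_est, t_est = icp(src, dst)
res = np.abs(src @ R_est.T + t_est - dst).max()
print(res)  # residual after refinement
```

Start the same routine from a badly wrong initial pose and the nearest-neighbor matches lock onto the wrong points, which is exactly why the global part of the problem gets handed to a learned front end.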
We talked about deep pose estimation, and the category-level versions of this with, for instance, dense descriptors and keypoints being an alternative to actually estimating the pose: maybe keypoints are enough, or dense descriptors. These are super powerful methods, and they're getting better; I mean, they're data hungry. I think if there's one thing we're seeing as today's trend that will continue, it's that a lot of the pipelines that started off hugely successful based on supervised learning are now turning into self-supervised versions of these problems. Finding good ways to train a visual representation that's sufficient for these kinds of downstream tasks using unlabeled data is the big new trend; not even that new anymore. We did motion planning; we covered a lot of stuff there. It started with a richer treatment of inverse kinematics, all the power you can use, and a lot of you are actually using inverse kinematics only, calling a lot of sequential inverse kinematics calls. A handful of times I've been saying: maybe you should turn that into a kinematic trajectory optimization. Why? Because solving a bunch of inverse kinematics calls independently is good, but it doesn't actually ask the solutions to be related to each other in any smooth or subtle way. And so, again, I tried to say that kinematic trajectory optimization is just inverse kinematics with the constraint that the inverse kinematics solutions are consistent with each other, that they can all be described by one spline. And we talked about sampling-based motion planning too: some powerful tools. I threw in some stuff about graphs of convex sets, that's there too, of course, but if you remember RRT and PRM, then you've got the basic vocabulary.
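That RRT vocabulary fits in a few lines. A bare-bones version in a 2D unit square with no obstacles, just to pin down the loop: sample, find the nearest tree node, steer a bounded step toward the sample, add the node; real planners add collision checks, goal biasing, and smarter nearest-neighbor structures:

```python
import math
import random

def rrt(start, goal, step=0.05, tol=0.1, iters=5000, seed=0):
    """Minimal RRT in the unit square (no obstacles)."""
    rng = random.Random(seed)
    nodes, parent = [start], {0: None}
    for _ in range(iters):
        sample = (rng.random(), rng.random())
        near = min(range(len(nodes)),
                   key=lambda i: math.dist(nodes[i], sample))
        nx, ny = nodes[near]
        d = math.dist((nx, ny), sample)
        ratio = min(1.0, step / d) if d > 0 else 0.0   # bounded extension
        new = (nx + ratio * (sample[0] - nx), ny + ratio * (sample[1] - ny))
        parent[len(nodes)] = near
        nodes.append(new)
        if math.dist(new, goal) < tol:
            path, i = [], len(nodes) - 1
            while i is not None:        # walk parents back to the root
                path.append(nodes[i])
                i = parent[i]
            return path[::-1]
    return None

path = rrt((0.1, 0.1), (0.9, 0.9))
print(len(path))
```

The returned path is jagged, which is why in practice you follow a sampling-based planner with shortcutting or a trajectory optimization pass.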
Then all the stuff about the different ways we're doing control on the manipulator: our next foray was into force control and manipulator control. Do you remember why PID control is good, but inverse dynamics control is better if you have a model? And why we actually use joint stiffness control in a lot of cases for the robot? We like to think about executing joint trajectories, but with a relatively low stiffness controller, so that if we bump into stuff we're still compliant enough to keep moving and not break our robot or the environment. But we also talked about direct force control, where you're thinking explicitly about the forces, and indirect force control like Cartesian impedance control or Cartesian stiffness control. One of my favorite examples that came up with that: a few people are doing writing projects, and I gave them the MeshCat painter, a little thing that says weld a chalk to your hand, if you want, and draw some lines. And it was interesting to have the conversations with people, because in the case where the chalk is welded to your finger, the difference between force control and just diff IK, for instance with joint stiffness control or inverse dynamics, is small: you just put yourself into a reasonable amount of penetration, you move yourself around, and that's all good. You might have to do a little tuning to not push too hard (otherwise the chalk will get stuck) and to push hard enough (or it might not draw), but pretty much you just tune once how deep to push, you follow your trajectory, and life is good. But the people who switched from welding the chalk to the finger to holding the chalk had a different experience. Yeah: as soon as you push down, the chalk might move in your fingers as you draw, and suddenly, if you just picked a nice trajectory and started moving around, you might have drawn for a little while and then stopped drawing,
because it moved in your fingers, and it's hard to know where the chalk is in your fingers. So this is actually a beautiful case where, if you think in the space of forces, you just say "I'd like to be pushing down with a certain amount of force", and then even if the chalk moves, the end effector will move for you in order to keep it in contact with the table. We talked about controlling not just the robot but the objects in the world; I used the language of visuomotor policies to talk about that. I really think something great happened when we started putting cameras at high rate into our controllers, and we need to understand it better. Right now I'd say our ability to get visuomotor policies is still a little weak: we did it with behavior cloning, we talked about it with RL, with policy search for instance, but we should have more powerful, reliable ways to get visuomotor policies. They're very good; we're still working on it. But this is the stuff that's making the rock-star manipulation demos right now. I showed you rolling dough; there are all kinds of things that visuomotor policies can do that are surprising. And then we wrapped up with intuitive physics, learning models, task and motion planning, and a little bit of belief space today. So that's a lot of coverage. We've covered a lot of things, some of them carefully and some just quickly at the end, but I think it's a pretty good representation of what's happening in a modern manipulation system. When I reflect on the class, and what I'll maybe do next time: the one thing I think this overly de-emphasized, and I wish I would emphasize more... I think I'm going to put mobile manipulation earlier in the class, because I think it opens things up. I didn't realize it, but I think the tools are actually not that different for mobile manipulation; the math is the same,
but the ideas you would have for your projects, I think, are going to be different. I think Brian's lecture yesterday really emphasized that: you wouldn't ask a chatbot something like "go get me a Coke" if you're limited to the world of your table. And the open-vocabulary ideas, the anything-could-happen-in-the-world, you're-going-to-send-your-robot-off-to-do-anything ideas: wheels help you think about that. I don't know; you could bring a lot of things to yourself on a conveyor belt, but it's not the same. So even though the math is actually very similar, I'm probably going to put a bigger emphasis on mobile manipulation next time. There are some different parts of the math, where people think about navigation and mapping and other scene-level kinds of perception problems, that would come along with that, but the biggest thing for me is needing to think about the open-domain, open-vocabulary part of the world. And I'm saying this partly so you can agree with me or disagree with me; anonymous feedback is fine, or you can shout it out right now, that's fine. The other thing I want to emphasize, and I said it on Tuesday: I want to give a few more tools that you could use, in your projects for instance, for the task-level reasoning. I think if you could have just written a PDDL specification... you might not love writing PDDL, it's kind of weird, but it's very powerful to be able to think about longer-term, more abstract tasks. And I'm thinking that the presentation focused a little bit more on the dexterous part of manipulation and a little less on the world part, but you can leave knowing that there are other parts. In fact, it's interesting to think about that dichotomy.
You know, it just happens that at TRI the org chart is kind of telling: there's a dexterous manipulation team, but there's also a separate mobile manipulation team, and they really are complementary. There are a lot of problems you get into... the mobile manipulation team, I showed you their grocery store robot; they were happy with a suction gripper for a lot of things, and they weren't thinking about the dexterity required, but they're moving through the world and experiencing things that my robot on the table is not experiencing. And once I said that, I realized: okay, well, I haven't said enough about soft robots. There's a soft robotics team, and there's also a human-robot interaction team; I'll write it out. We mentioned soft robotics, and I offered to spend a lecture talking about tactile sensing, but we didn't get to that one. And human-robot interaction is hugely important; it's just not really my expertise. Any thoughts or questions about that high-level scope? Feedback? [Question: what would be the next steps for us, as robot programmers, as students?] Yeah, there are a lot of really good classes; I don't know which of them you've taken. I'll be teaching Underactuated, which I've advertised a few times, in the spring, and there are great classes by Luca Carlone about perception and state estimation and the like. In fact, I could just summarize a list of some of the great classes in a Piazza post; I'd be happy to do that. We have a lot of good classes on campus; maybe not enough, actually, I would love to see more. [Question: is this a research toolkit, or an "I need the robot to move today to make my startup work" kind of toolkit?] That's a great question. I think there are a lot of robots that do things you'd consider to be manipulation that
don't use a big part of the stack but they are the the places where the world is more constrained so the classic example would be a factory floor where you're welding or something like this it uses maybe force control a lot of position uh programming and the like but it doesn't need to think about perception it doesn't need to think about all the uncertainty and complexity or even planning that comes with the fact that the world could be very diverse and I think in industry startups you know big companies are now investing a lot in the next generation of robots starting with more flexible manufacturing flexible logistics the Amazon problem the delivery problems and I think that they are hitting this straight up this is core material for that kind of a job and then absolutely there's research that is taking every one of those and pushing farther but I think as soon as you start needing to perceive the world in order to do your manipulation and that's driven by the task then the old stuff isn't getting it done and this stuff is bread and butter yeah thank you so people asked me to predict the future I can't do that but I'll give you a few thoughts if you want so um in fact you know Rod Brooks another famous roboticist you know went off and was lab director and uh then started a company but I took his class when I was a student embodied intelligence I think it's called embodied intelligence yeah he always says that people have a tendency he reminds us it's not his quote I guess but people have a tendency to overestimate the importance of a new technology in the short term and dramatically underestimate the potential in the long term so it just means I'm just saying everything I'm about to say is wrong but uh but I do think there's some huge trends that that we've seen enough of to lean into right um I'd say actually Brian's talk last time is one of the biggest ones the idea that we could have more
common sense priors to make decisions with robots I think is the biggest change coming to the field in a long time maybe and it's happening you know it's starting we've always wanted it and I don't think we're quite there with with large language models but I'd say like the large language models and the visual language models and the like um that Brian talked about are that's like the first compelling approach to say we're going to get something kind of that smells like an unnatural intelligence common sense right and I don't even know how to measure the potential change of that that's going to happen with that it's going to be um it's going to be weird I can guarantee that that's that's a high probability prediction right but it's I think it's really one of the the biggest things that's going to change what we're doing um it's sort of interesting the people I talked to about this um they actually say that maybe they're just trying to make me feel better but uh but they say it's interesting because there's so many people are excited about this and they want to think about how to make robots do these multi-level tasks that in some ways it actually puts a premium on motion planning that just works and feedback control and skills and other things all you know the stuff I did maybe emphasize a lot in the class is suddenly really important because there's you know Engineers everywhere that don't know that yet and the robots don't work all the time but if they did we could do incredible long-term tasks now so I actually think in a weird way this um you know not manipulator equation driven thing is probably going to put a premium on some of the core manipulation skills let me see the lower level slightly lower level stuff including you know dexterous style manipulation I would say that um we're going to see so I obviously like simulation um but I think we've turned a few Corners with simulation and I would expect that the use of simulation is I think it's just like at the 
beginning of what we're going to see in this field and it's going to continue to change rapidly um I think some percentage of the robotics population is converted and says I believe that if it worked in simulation I'd have a pretty good chance of it working in reality the people that are training perception systems on simulated data are pretty convinced I think I think less people are convinced about the contact mechanics um I mean we focused on it more than a lot of people and they're the Sim to real Gap in the in the contact mechanics and you definitely have to be a skilled user of simulation to make that transfer you could set parameters wrong but uh but I think if you if you're a skilled user simulation then more and more people believe that you can do your work in simulation the bottleneck there is content right how do you get your robot your art assets your objects you want to manipulate into simulation and I think there's going to be just probably a huge uh change in content we're already seeing it with um it's funny when someone says like five years ago let's say 10 years ago just to be safe people said um said I have I built a simulator they'd mean they wrote like f equals m a down and they would maybe they wrote a renderer that's part of a simulator right but now if someone says I've got a new simulator they don't even they they built something on top of a physics engine and they don't even cite the physics engine but but uh but it's they're like now you know there's these I think very important content aggregators right people that just say I've scanned a bunch of houses and I've put a bunch of different objects in those houses and that's my new simulator offering and I think that's value that's hugely valuable so we're seeing people generate that data in lots of different ways sometimes with manual effort sometimes with procedural generation you can make a program that spits out random living rooms right and increasingly what we're seeing is real to sim 
kind of work right um I think this is just going to be a huge component the fact that you can drive around Stata with just an RGB camera come out with a perfect neural radiance field representation of it you know and then so what do you do with that how do you get that into a simulator uh it's not enough it turns out to feed the simulator but people are thinking about this now right how do I just ingest so that the robot every time it sees something new it adds it to the simulator and we build the Matrix um I predict that it's going to just ramp up more and more and more I guess along that route I think maybe an easy one to say but let's just think about it for a minute I think big data hasn't come to robotics yet but it's coming it's come through large language models and visual models but the thing that we're waiting for is uh let me say interaction data right data that has forces we talked about in the system ID world that if I watched an object fall on YouTube there's limits to what I can learn about it right I can't learn its mass for instance right and I think um we're getting to the world where people are deploying enough robots and thinking seriously about how to aggregate that data um fleet learning is a huge potential that you know all the robots on the edge as edge nodes pool their understanding pool their models pool their data to learn something more about the world than they could learn by surfing the web right and that's coming but every year we say it's coming and it's still taking a long time considering how important it is it's taking a long time it's sort of frustrating that we don't quite have it yet it's hard because the data that you generate on your robot that's not exactly the data I want to generate on my robot and so it's not immediately useful you have to think about off policy RL and all these things but even the distribution shift can be really tough okay but
we're gonna there's gonna be a crossing point where we have enough robots and they're similar enough or we have enough copies of the same robot and maybe we consolidate hardware or something where suddenly I'm going to program my robot completely differently because you generated a lot of data right also the same thing too is that a lot of the work we're doing here we're kind of programming the robot as if it's the first time it ever experienced this uh and we think a lot about learning as okay I started with my policy parameters as you know random numbers around zero how do I do that and that's not the world we're going to be living in right we're going to be living in a world where um there are many robots that have already done most of these things and I should start with their hive mind uh you know global model and maybe specialize for my current situation so that's definitely coming to this neck of the woods too and maybe just to say a last one um I think uh I've said it a few times but I'm just very optimistic about theory of ML RL control um you know coming together with empirical stuff I think the empirical success of these things raced ahead but you know now we have many of the best theorists in the world that are excited about understanding those better and I think that is just going to be a very harmonious future I mean we have uh Scott Aaronson right our quantum computation guy he saw GPT-3 and now he's at OpenAI I think I hope I'm not wrong Scott um you know but to have the quantum computation people get so excited by these large models that they have to go figure them out that's good that's great right that's like bringing all the really great people together and I'm just very I mean the controls people are so smart they're so so smart you know and they now see some of the things that have happened in RL and they're moving in that direction right and I'm very optimistic about
that and how that changes things so if I were to just like at a meta level uh try to convince you of something uh maybe it's I think it's in this space which is and I said it on day one and I'll say it again to close this off here um I mean for me this class even and the notes as they slowly evolve and uh you know the way I think about it I think um because the systems we're building here are so complicated we have to think rigorously about them and I think having a foundation of the things we know and rigorous thinking about the things that we're still inventing is just so important and I think if you talk to the best empirical machine learning people and the most influential papers and you look at the authors or you look at the style of the papers you know they're extremely rigorous I think people get the impression that um that you can put a quick algorithm together you can make some curves and you're good but no those aren't the papers that are having massive impact and so I really want us to take the time to think deeply about these problems and build a foundation you know across these complicated disciplines and you know push that I think that's what's going to push the field forward maybe more now than in some other times there's just been such a bubbling up of ideas and it feels to me like it's time to consolidate a little bit and then push forward again but good okay so um that's it for me it's your turn so Anthony sent out the logistics for Tuesday but basically I think and his text is the gospel if I say anything different right now but the basic gist is please come at two because it's going to take longer than an hour and a half to do it if you come at 2:30 that's fine but if you can come at two it's great it really is like the best part of the class and please when you're presenting or making your videos you know the um think about what you learned that you wished other people knew
that's the value and that's why I get tons of value out of that of learning about the things that you thought were going to work that didn't work you know um algorithm you tried that we we haven't covered or I haven't thought about that much I I hear your experiences and I understand things better because of them and so that's what I think you can all get that out of each each other next week um you know the goal is so once you put your name on the sheet we're going to March down the sheet people in the room will put up first uh we've had a few times where someone would sit in the room for a really long time watching people who aren't there and so so if you're here in the room then then and you've marked your uh video as public right so you can when you upload to Youtube you can make it public or unlisted the public videos means we're going to show it and we'll show it even on on the live stream because some people will watch remotely if it's unlisted it's still on the spreadsheet so you can watch everybody's videos I mean the the con the class you remember your your members of the class are your audience whether you're listed unlisted or or public but the broader world is is only your audience if you mark your video public okay and uh you know I've got a room until five we'll see what we do we're going to March through as many as we can try to give a little space for questions there's a lot of you and it takes time to March through the videos but please come it's it's really really a fun part of the class and I know there's people like all over that are going to be watching because they've they've seen awesome projects in the past right and I I think they're going to see awesome projects this time and it's not about how well your robot works it's about how much you learned and how you can communicate that okay okay good see you Tuesday I'm excited
Robotic_Manipulation_Fall_2022
Fall_2022_642102_Lecture_16_Manipulator_control.txt
and then we'll run a basic simulation with a couple examples so hopefully that'll be useful okay let's do it thank you as always for your feedback on the surveys someone actually said they want more jokes and if it wasn't an anonymous survey I would like instantly give an A for that but uh unfortunately it was anonymous so that one person will never know but it made me happy okay today we're gonna do the second half remember I talked last time about a point finger with the promise that at some point we'd put the robot back in and the goal today is to put the robot back in and I'm going to do that in a few steps I want to tell you about you know unfortunately there's a bit of a zoo of different manipulator control ideas they are all I think very related and very simple so my task today is to try to keep that organized for you in your head on the board and try to make it so it's not confusing okay and I'm going to try to do that by uh I'll put the outline here and then go back to it a couple times okay so we'll do first just joint space control and I want in joint space for you to basically just lock in what it would mean to do PD control what it means to do stiffness or impedance control what it would mean to do inverse dynamics control okay and then we're going to go into Cartesian or end effector space and we'll primarily think about what it means to be doing stiffness or impedance control there okay and I'll make sure I talk a little bit at the end about some limitations and extensions okay so they're very related ideas very simple ideas I hope that by the end you'll understand you know the difference between those and then understand really the beautiful trick about making the entire robot program the dynamics at the end effector okay now last time we did it on a point finger and just partly to review that and to build into the next one the interesting thing about the point finger case is that your joint space is your Cartesian
space so we can do we did in some sense almost we didn't do inverse dynamics but we did these and I'll just write them again and we'll launch from there okay so we thought about our robot as just being a point with some mass okay so the configuration of that was really just the x y z position of that okay it turns out that I could have equivalently said that was the position of the finger using my multibody notation the position of the finger in the world coordinates and since we're going to go back and forth between joint coordinates and end effector coordinates it's a weird thing that in this particular example they're the same okay more generally those are going to be different and we're going to want to go back and forth between them but I'll highlight here that they're the same and just show the various components when they're the same are very simple okay so um the dynamics of this point finger were very simple we just had m q double dot equals tau g our gravity vector let me write it in the slightly more general form this time this is the gravitational vector which for the simple robot is just 0 0 negative m times 9.81 you know negative mg if you will okay and then we allowed ourselves to have some fictitious forces being applied like our actuator was coming in and acting you know like a jet pack basically that could apply forces anywhere and then I could potentially have some external forces that were applied at the finger and we talked last time about how to regulate the external forces right either directly or indirectly when you see this by the way the reason I always choose to write it like this I've said this once before I'll say it again because we're going to build on it today here but I want you to see this equation and the reason I always put some terms on the right hand side some terms on the left hand side is because I want you to see this as just mass times acceleration on one side and these
are all the forces right this is the sum of the forces so the gravity is a force a torque right my control input is another force that's being applied and any external forces so this is just f equals m a okay and I always try to write it where I have m a on one side and force on the other side unless we start trying to be fancier but the original governing equations should be like that okay so in joint space there's an interesting question first of how do I track a trajectory so we talked about force control and we'll get back to force control but let's forget about interacting with the world let's just say I want to move my finger around and I spent some time maybe with my kinematic trajectory optimization made a beautiful trajectory q of t okay I've got some beautiful q of t I'll call it q desired of t okay maybe the result of trajectory optimization there's a problem in the manipulator control world which would just be trajectory tracking right if I have some q desired of t how do I make it so q of t tries to track you know converges on q desired of t and maybe tracks it with high precision and for this I'll just say the external forces are zero for a minute and we'll put those back in okay so what is a good controller for tracking a trajectory that you might want to run on a robot okay even in the point finger case it's just so simple that I think it's worth writing it here first so we can see how those same ideas manifest themselves in the full case we only have a few different cases okay so the one that we've seen a few times I feel bad for the people that have a robot directly in the way but I guess of all the things to be occluded by robots are good uh okay so uh maybe I could do a PID a PD control right PID control is perfectly good too but I'll just do PD control for now so what if I did kp q desired minus q kd q desired dot minus q dot that's one of the systems that we've seen before you're right sorry uh no it should
be plus over here yeah because I put the desired in I put a minus here sometimes I do q dot it was absolutely an error thank you um good so by the way this you know this is well defined if I have a trajectory a priori I could take the derivative of that trajectory right and I could have a desired velocity at every instant in time too if I'm tracking some trajectory okay so what happens if I put this controller even into my little mass um system let's say I have a desired trajectory that was just a sine wave or something like this let's say q desired of time was just I don't know 10 times sine of t I could certainly you know figure out I could take the derivatives of course right q desired dot of t is just 10 cosine of t and I could run this controller how well will it do at tracking what are the good and bad things about that assuming I've chosen pretty good values for let's say kp and kd how would you expect this to perform yes good so there's a question about delay um which we're gonna that's actually the second point I want to make so yes there's going to be a little bit of lag here what about even if what if I just said the desired trajectory was zero what's going to happen that's even the simplest case what if q desired was just zero what's this controller going to do will it be 90 degrees out of phase that's a good question I mean if I've chosen a critically damped thing and I say the desired is zero so q desired is zero q dot desired is zero then I will have a system that has a critically damped response assuming I've chosen this but the question is does it get to the desired zero and it doesn't necessarily because there's an offset term here in the gravity and this is fundamentally an error driven controller okay so if q equals q desired and q dot equals q desired dot then the torque is zero and gravity is going to pull me away it requires some error here in order to resist gravity and balance and I made the trivial simulation just so we could see it
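A minimal version of that trivial simulation can be written in a few lines. This is an illustrative sketch with assumed values for the mass and gains, not the course notebook: a 1-DOF point mass under gravity, regulated to a setpoint by pure PD control.

```python
import numpy as np

# A 1-DOF point mass under gravity, regulated to q_desired = 0 by pure PD
# control (mass and gains are assumed values). The controller is error
# driven, so it needs a nonzero error to produce the force that balances
# gravity.
m, g = 1.0, 9.81
kp, kd = 100.0, 20.0
q_des, qd_des = 0.0, 0.0

q, qd = 0.5, 0.0
dt = 1e-3
for _ in range(int(5.0 / dt)):
    u = kp * (q_des - q) + kd * (qd_des - qd)
    qdd = (-m * g + u) / m    # physics: m*qddot = tau_g + u, with tau_g = -m*g
    qd += dt * qdd            # semi-implicit Euler
    q += dt * qd

print(q)  # settles at -m*g/kp rather than the setpoint: PD sags under gravity
```

Adding the known gravity force m*g back into u (the gravity compensation of the stiffness controller) moves the equilibrium exactly onto the setpoint, which is the fix discussed next.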
okay uh this is the simple case let's even make it easy to see when I just set the amplitude to zero okay if my desired trajectory is just a flat line and I have a point mass finger and I have gravity and I run a PD controller then it will converge to a steady state but it's not necessarily going to drive the error to zero a PID controller would drive that to zero if I put the integral term back in but when we start tracking fast trajectories the integral term is going to have a more complex effect so I'm going to leave it out for now there's a different way besides the integral term that we could take care of that and I want you to appreciate that that actually is what happened when we wrote the stiffness controller before okay so when we wrote a stiffness controller before we wrote almost the same controller but we also added in actually subtracted off the gravity comp term right and we called that our stiffness control before and I almost feel silly calling these things different things in the point finger case they're so simple okay but I think this is an important idea one thing you can do if you know the mass of your finger is you can just subtract it off okay and the reason that that intellectually matches stiffness control is because then I can say that the resulting closed loop dynamics were m q double dot equal um plus I'll do it like a mass spring damper so I'm going to change my sign on that I think Lee Ray was looking for this is a dot here now q desired equals zero in fact if there's no external force right the PD controller has an extra nagging term from gravity the stiffness controller by virtue of trying to act like a spring is actually canceling that out but in the simple case the only thing that's different between the stiffness controller and the PD controller is gravity compensation okay and of course if I add that gravity compensation into the simulator then I get a nice response that will converge to the desired
when the desired is zero at least if I do something more interesting like have it be the sine wave again then this actually will do a fairly okay job at tracking the orange is the desired the blue is the actual I started a little bit off the nominal and although there's a little bit of lag certainly at the beginning here it actually does a pretty good job I think I'd have to put up the bandwidth to see the phase uh I'd have to increase the frequency to see any notable lag okay it turns out you can do better still you can get around some of the lag with just one additional idea and guess what that additional idea is what happens in inverse dynamics control in the point finger case it's all extremely simple what would you do to try to get around that lag if you will which is there's an extra piece of information that we have that we haven't given our tracking controller which is we know arbitrary derivatives of q right of q desired if we tell it where it's going to go and give a feed forward term that uses q double dot then we can do even better we can get better tracking performance still okay so there's a couple ways that you could add that in the way that it's typically done in inverse dynamics control I wish it was exactly the thing you'd expect but for an important reason it's just a little different than what you'd expect I'm going to say it's q double dot desired and I'm going to go ahead and multiply that mass by all of my terms here flipping my signs let's do it like this so this is a feed forward term and fundamentally it's what you need to do to get around this error driven control you want to be able to send it so that if your current system is exactly on the trajectory it will get the command it needs to stay on the trajectory you don't have to wait for error to occur to get back towards the trajectory the way to do that is by giving instantaneous information about
where that trajectory is going to go and this is what happens in inverse dynamics control okay super simple ideas especially in the point finger case and they're just going to map directly over to the full joint robot case and of course I will try to convince you here that if I were to put the feed forward term in things get even better that I get beautiful convergence to the nominal trajectory and I'll stay on the nominal trajectory I won't deviate on every oscillation once I'm there I'll stay on there it's actually easy to see that okay the reason to do this and have mass actually multiply my I'm going to scale my kp and kd also by the mass is because then if I write out the closed loop dynamics I can say that I have m q double dot um I can pull this whole thing on the other side here minus this I'll just pull that on the other side okay um I'll put it in spring mass damper form again put my damping equals zero okay that's the resulting equations of motion and if I were to just call this term e if I define e to be my error I'll call it q minus q desired then I could write this same equation and you'll see this often in the multibody manipulator control world I could write the same equation as just a second order spring mass damper on the error okay so the error will converge to zero and stay at zero and that's beautiful okay that wasn't true until I put the feed forward accelerations in I want to make sure that these ideas are clear because they'll get more you know same ideas but just with more terms and the like when we go to the manipulator case yes excellent good so the question is what are the requirements for the controller in terms of knowing the system right so in this case I only need to know the gravity terms right if I have a model of the gravity terms of the robot then I can execute this controller here I applied a controller that had the mass also inside it
mass and the gravity term okay so that does ask me to know more about my system but you can ask an interesting question about if this is approximate how sensitive is it uh this is actually relatively not terribly sensitive um you know you can do an error analysis of what happens if I put m tilde in like this if you have controllers that try to invert mass and stuff like this it can get a lot more sensitive there's different ways that mass can enter this one's not as terrible okay the q double dot is um so this controller doesn't need to sense q double dot on the robot it's just if I have a trajectory I can differentiate it twice that's my motion plan I differentiated twice as long as I could do that that's okay so I don't feel that I added a new requirement in terms of sensing in that but there is a new requirement in terms of the model great question this controller is actually the one we've been mostly using in simulation it has torques coming out in the you know Drake systems framework you'll see this as the estimated state coming in the desired state coming in and then you can send a feed forward acceleration coming in that's the inverse dynamics controller you'll see that in the manipulation station stack that you've been running but we actually don't send that we just leave this disconnected because iiwa won't accept it and I'll tell you about that at the end maybe why okay if I put force back in then um you know it's really not so different if I were to put the force back in then I'd get a trailing force term and it's interesting to think about what happens if I put the force back in now I have a second order damped oscillator but with some driving external force okay so maybe that's still a pretty reasonable thing to do yeah good okay sorry again this is q double dot desired this is good that's great so this is me analyzing the closed loop system I don't have to this is the
controller that I implement this is what I have to type in the result of combining my controller with physics is something that uses q double dot because physics uses q double dot but I don't have to implement that for q dot it's going to be I mean depending on the sensors but there's you know rotary encoders there positions and derivatives we typically think of as being pretty clean and we try to avoid accelerations as a general rule of thumb you don't want to do high bandwidth control with accelerations um so iiwa you know just uh something I was going to say later but iiwa allows you to send q desired but it doesn't actually allow you to independently send q desired and q desired dot um I believe I think they made the safety argument with that um and that somehow maybe you could imagine a bad user sending inconsistent q desired and q desired dot that might break their safety proof for instance um so they instead take a sequence of q desired and they add a little bit more delay even and they'll take finite differences to estimate q dot and then send that as a command but you're only actually allowed to talk to the robot with q desired over time all right so let's blow this up now so we understand in the simple case PD stiffness and inverse dynamics it's just adding one thing at a time yeah for which part ah they shouldn't um the stiffness control also has a tau g so they should both be minus tau g thank you because uh yes the tau g is on the right hand side with the u's so I have to subtract it out
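That progression on the point finger, adding one term at a time, can be sketched side by side. This is an illustrative simulation with assumed mass and gains, tracking the 10 sin(t) trajectory from earlier:

```python
import numpy as np

# Point-finger progression, one added term at a time (mass and gains assumed):
#   PD:               u = kp*e + kd*edot
#   stiffness:        u = kp*e + kd*edot - tau_g              (add gravity comp)
#   inverse dynamics: u = m*(qdd_d + kp*e + kd*edot) - tau_g  (add feed forward)
# with e = q_desired - q, tracking q_desired(t) = 10 sin(t).
m, g = 1.0, 9.81
kp, kd = 100.0, 20.0
tau_g = -m * g   # gravity force on the point finger

def track(controller, T=20.0, dt=1e-3):
    """Simulate m*qddot = tau_g + u; return max |q - q_d| over the last period."""
    q, qd = 1.0, 0.0                      # start off the nominal trajectory
    worst = 0.0
    for k in range(int(T / dt)):
        t = k * dt
        q_d, qd_d, qdd_d = 10*np.sin(t), 10*np.cos(t), -10*np.sin(t)
        if t > T - 2*np.pi:
            worst = max(worst, abs(q - q_d))
        e, edot = q_d - q, qd_d - qd
        if controller == "pd":
            u = kp*e + kd*edot
        elif controller == "stiffness":
            u = kp*e + kd*edot - tau_g
        else:                             # "inverse_dynamics"
            u = m*(qdd_d + kp*e + kd*edot) - tau_g
        qdd = (tau_g + u) / m
        qd += dt * qdd                    # semi-implicit Euler
        q += dt * qd
    return worst

for c in ["pd", "stiffness", "inverse_dynamics"]:
    print(c, track(c))   # tracking error shrinks with each added term
```

Pure PD carries both the gravity sag and a phase lag, gravity compensation removes the sag but keeps the lag, and the feed-forward acceleration of inverse dynamics drives the error essentially to zero.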
are the Coriolis terms but they really go together these are this is and you should think of this together as M.A if you were to change the coordinate system then they would both change uh equivalently this is this is m a okay and then we get the sum of the forces so you get the gravity forces which in general are some function of the configuration I'll assume that I've got U's everywhere so I'll just say I can command every coordinate and then I can have I'll write it as a torque now a Tau external okay so the first thing I want to ask is just in joint space how should I do trajectory tracking if I have a desired Q D of T and now I have these equations then what's the analogous thing to do right and it follows exactly from what we do we could do PD control right which actually looks identical in this case it's just now it's identical I shouldn't even spend my time writing it maybe but the stiffness control also looks effectively identical it's just canceling out a more interesting gravity term minus Tau G which is a function of Q now and the resulting dynamics of that are a more interesting version of what we've done before but it's still a it's just a more complicated spring Mass damper system and it happens in The Joint coordinates so what does that mean is if I put a resistive Force if I if I push on a particular joint I expect it to feel like a spring when it's pushing back at me okay and these terms are the mass of that spring Mass damper system and they're a more complicated object but they it should still feel like a if I were to move one joint at a time for instance it should feel like just I'm pushing against it and we're going to actually do that Terry is it okay to do it now yeah let's do the so I want you to at least see me feel the um I mean if we'll do the joint impedance control okay now so iwa does it's called Joint uh impedance control okay but like I said before it actually it doesn't shape the mass of the robot it only shapes the rotor Mass so 
unless you get to the level of modeling the rotor inertia and the elastic joints at our level of modeling it looks like a stiffness controller okay now let me just say something about safety here so this is we're only running the very simple controller that has been actually certified and Terry's got his hand on the big red button but I don't advise in general people running up and touching powerful robots okay um I'm going to just do this carefully so right now this is a joint impedance control mode so that means if I were to apply it's got something like a 50 newton meter per radian gain I think was the gains we put in here on any one of those joints it's going to let me move it right I can kind of move the robot around it's going to resist it's going to drive itself back to this nominal joint configuration but it's in a complicated space right the response is a complicated function of the kinematics of the robot because every joint independently is looking like a spring okay it's pretty beautiful and it feels very nice and natural and smooth which is a testament to the hardware okay good we're going to do the other one in just a minute if that's okay yeah so that's I think at our level of modeling this is kind of what we should think about iiwa as doing if you wanted a higher fidelity simulation model if the reason your robot was failing to pick up a coffee cup was because of the dynamics of the elastic joint then you have to go dig deeper than this but for almost all the manipulation research I've done we haven't had to go to that level of modeling power okay you can do inverse dynamics control too and we often write it down it's interesting though that um iiwa doesn't do it and I think it does go back to the requirements as you say of what's happening um the inverse dynamics controller would send in um I'm going to take a mass times the entire signal here I'll send in q double dot desired as my
feed forward term and I'll put inside here my KP plus KD outside here I'll take my tau of gravity okay and the result is once again a beautiful system I keep switching my e dots and whatever but it looks like this the error dynamics in joint space will converge like a second order spring with a more complicated mass matrix um it actually has the Coriolis terms too like I said those two go together those are still there okay but it's a more complicated mass spring damper system and the error converges to zero so this is a nice way to do high-end trajectory tracking control if you had a torque controlled robot and you wanted to do extremely accurate trajectories then I recommend sending in the q double dot feed forward I guess I think it's only doing the first derivative it could be taking two finite differences of my q command but I don't think the advantage is there when you've already got a lag so I suspect it's not okay so the interesting differences now become when we start thinking about how the forces enter the joint equations okay the multibody equations because forces naturally live in Cartesian space and everything here is in joint space in joint space it just looks like a more complicated uh finger if you will and everything still goes but let's see what happens if I add forces I showed you once before when we were talking about friction cones I showed you these equations without fully justifying them um the way that forces enter the multibody equations I wrote this down sort of quickly without justification that it looks something like this yeah that I have a Jacobian transpose times F where this is a Cartesian force but these equations live in joint coordinates right if you have a robot with some joints and some equations of motion and you apply a Cartesian force here and the question is how does it affect torques at the links or vice versa if I were to apply torques to the links what sort of force
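(Another aside: the inverse-dynamics, or computed-torque, controller described above can be checked numerically. The toy numbers below are random and purely illustrative; the point is the exact cancellation that makes the error obey a linear second-order spring.)

```python
import numpy as np

# Inverse-dynamics control, with the lecture's convention
#   M(q) qddot + C(q,qdot) qdot = tau_g(q) + u.
# We lump C(q,qdot) qdot and tau_g into vectors at the current state.

rng = np.random.default_rng(0)
n = 2
A = rng.standard_normal((n, n))
M = A @ A.T + n * np.eye(n)          # any symmetric positive-definite mass matrix
Cqdot = rng.standard_normal(n)       # C(q,qdot) qdot at the current state
taug = rng.standard_normal(n)        # tau_g(q) at the current state

Kp, Kd = 100.0 * np.eye(n), 20.0 * np.eye(n)
q, qdot = rng.standard_normal(n), rng.standard_normal(n)
q_des, qdot_des = rng.standard_normal(n), rng.standard_normal(n)
qddot_des = rng.standard_normal(n)

e, edot = q_des - q, qdot_des - qdot
# Feed-forward qddot_des plus PD inside, scaled by M, with the model terms
# added/subtracted so they cancel against the true dynamics:
u = M @ (qddot_des + Kp @ e + Kd @ edot) + Cqdot - taug

# Plug u back into the dynamics: the closed-loop acceleration is exactly
# the commanded one, so the error satisfies eddot + Kd edot + Kp e = 0.
qddot = np.linalg.solve(M, taug + u - Cqdot)
print(np.allclose(qddot, qddot_des + Kp @ e + Kd @ edot))  # True
```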
would I be applying at a point on the world okay and the relationship here which is there but more generally is that you have this relationship that torque is Jacobian transpose times force and that's uh that one I want you to remember that's like a good thing to know at parties I guess and um it's like yes I think that's core knowledge I would say how many people know why torque is Jacobian transpose times force yeah okay I'm going to add to your core knowledge uh it's really simple okay it's just a power argument okay so if you think about the work actually done at the end effector this is the way we think about it so if I have a robot in some configuration I want to think about the incremental work done at this point on the end effector work is force times distance right this is an argument of virtual work work is force times distance okay I'm going to compute the work done at the end effector in two different ways I'll think about a virtual change in x and I'll think about a virtual change in q okay these are two different ways and I should get the same answer okay so the total work done by the force should have an equivalent work done by the torques okay so if I say the force let me use my correct small f dotted with some delta change in x must equal the torque dotted with some virtual change in q okay so this is a virtual displacement in x this is a virtual displacement in joints okay this delta notation and that's a dot product since we also know that delta x is related to delta q by the Jacobian then if you put this together and say it has to work for all delta q I have f and I'll go ahead and multiply out the transpose I'll say f transpose J delta q equals tau transpose delta q and this has to work for all delta q then it is equivalent to saying tau and I'll take a transpose on both sides is J transpose f so anytime you want to go between a force computation
at a particular point and a torque at the joints the Jacobian is exactly the mechanism you need these Jacobian transposes okay it's just an argument about virtual work it was I think a major uh advance in rigid body mechanics when people started thinking about these virtual displacements that's d'Alembert's principle and all the good stuff all the variational mechanics work so there's a lot of depth there and the reason for those annoying virtual displacements is actually really important and the like but I think it makes for simple algebra and you get tau equals J transpose f of course okay so uh let's think about now um trying to live in this space remember the amazing thing we want to do if we're thinking about forces is we want to make the big complicated robot act like a you know be able to control the forces as if it was a point finger so given we have this relationship and we see how it enters the multibody equations how do we make that happen it turns out this translation between the Cartesian space and the torque space with the Jacobian transpose can actually be applied to the entire multibody equations and the result of doing that is one of the most beautiful results on our list here which is this idea of writing Cartesian space dynamics task space dynamics okay so um now the location of my finger I'll call it an end effector more generally here E will be my end effector frame we know that that is related to q by the kinematics my velocities are related by the Jacobian my accelerations also have a relationship V dot of E which we actually have multibody notation for calling that A and that gives me if I take the time derivative one more time I get this plus an extra term J dot of q times q dot the tricky step here is if I take the multibody equations solve them for q double dot and insert them into this equation then I actually get a new set of multibody equations the derivation is a little bit
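(Aside: the virtual-work argument for tau = J transpose f can be checked numerically on a toy planar 2-link arm with unit link lengths, my own example rather than the lecture's robot: the incremental work computed in Cartesian coordinates matches the work computed in joint coordinates.)

```python
import numpy as np

# Numerical illustration of tau = J^T f via the virtual-work argument.

def fk(q):
    """End-effector position of a planar 2-link arm, unit links."""
    q1, q2 = q
    return np.array([np.cos(q1) + np.cos(q1 + q2),
                     np.sin(q1) + np.sin(q1 + q2)])

def jacobian(q, eps=1e-6):
    """Finite-difference Jacobian dp/dq."""
    J = np.zeros((2, 2))
    for i in range(2):
        dq = np.zeros(2)
        dq[i] = eps
        J[:, i] = (fk(q + dq) - fk(q - dq)) / (2 * eps)
    return J

q = np.array([0.7, -0.4])
f = np.array([1.0, 2.5])       # Cartesian force applied at the end effector
J = jacobian(q)
tau = J.T @ f                  # joint torques that balance that force

# Work computed two ways for a small virtual displacement dq:
# f . delta_x in Cartesian coordinates, tau . delta_q in joint coordinates.
dq = np.array([1e-3, -2e-3])
work_cartesian = f @ (fk(q + dq) - fk(q))
work_joint = tau @ dq
print(work_cartesian, work_joint)   # agree to first order in dq
```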
more detail in the notes so certainly all the terms are written out there I'll keep that off just to keep it simple so what is this equation this is the manipulator dynamics from the point of view of the end effector okay this is my command input I can just achieve this with my Jacobian transpose and I can actually write the dynamics as it's viewed from the finger okay from the end effector and these equations you know this thing has the original mass matrix in it it has a couple of Jacobians in it it has a couple of inverses in it but it's always well posed assuming um yeah it's actually always positive definite it has some nice properties okay and all of these are just functions of the original equations and the Jacobian it's basically the Jacobian transpose applied to all those equations but these equations looking at the dynamics of my robot through the lens and in the coordinate system of the end effector look so similar to what we've done before that I can use the same kind of control in fact I could write a stiffness controller the same way I did before I just will cancel this out I'll write the PD terms on this and the resulting equations will be an end effector dynamics that looks like a spring mass damper system okay that's the amazing thing so this is Cartesian stiffness this is the analysis but the controller is simple it's still just I'm going to do my KP I have to modify it into the correct space but in this coordinate system the controller is simple I'm going to write it as KP times p E desired minus p E minus KD p dot E and then minus this f of gravity okay now you see why I was worried about snowing you with the details right but I hope you see by the simple analogies that this is really just writing the PD controller canceling the gravity but we're doing it in the coordinate system of the end effector dynamics it's fantastic okay should we run it Terry is that
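(Aside with a sketch before the demo: the Cartesian stiffness controller just described, a PD law in end-effector coordinates mapped back through the Jacobian transpose, looks like the code below. The 2-link arm model, the gravity term, and the per-axis gains are illustrative assumptions of mine, not the iiwa's.)

```python
import numpy as np

# Cartesian (end-effector) stiffness control on a toy planar 2-link arm:
# the spring-damper lives in task space; J^T maps the virtual force to
# joint torques, and gravity is cancelled in joint space.

g = 9.81

def fk(q):
    q1, q2 = q
    return np.array([np.cos(q1) + np.cos(q1 + q2),
                     np.sin(q1) + np.sin(q1 + q2)])

def jacobian(q):
    q1, q2 = q
    s1, s12 = np.sin(q1), np.sin(q1 + q2)
    c1, c12 = np.cos(q1), np.cos(q1 + q2)
    return np.array([[-s1 - s12, -s12],
                     [ c1 + c12,  c12]])

def tau_g(q):
    q1, q2 = q
    return np.array([2.0 * g * np.cos(q1) + g * np.cos(q1 + q2),
                     g * np.cos(q1 + q2)])

def cartesian_stiffness(q, qdot, p_des, Kp, Kd):
    J = jacobian(q)
    p, pdot = fk(q), J @ qdot
    f_virtual = Kp @ (p_des - p) - Kd @ pdot   # spring-damper in task space
    return J.T @ f_virtual - tau_g(q)          # map to joints, cancel gravity

# Different stiffness per axis, in the spirit of the 250 / 50 demo:
Kp = np.diag([250.0, 50.0])
Kd = np.diag([20.0, 10.0])
q = np.array([0.7, -0.4])
u = cartesian_stiffness(q, np.zeros(2), fk(q), Kp, Kd)
print(u + tau_g(q))   # ~ [0, 0]: at the setpoint only gravity is cancelled
```

Displacing the end effector from `p_des` now produces a restoring force that is stiff along one Cartesian axis and soft along the other, independent of which joints have to move to produce it.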
good I showed you the KUKA folks running this uh you know early and they looked happy so I figured I should try maybe I'll be happy again this is just one of the basic certified controllers okay and uh which one are we doing first the translational so the way to make this interesting is we put a different translational stiffness in x y and z yeah so I think it was 150 250 and 50 or something like this right so one of them is going to be really stiff that one's the 250 and then this is the 50 newtons per meter okay there's two parts of that question so the controller that I've written here I can actually just command Jacobian transpose times this virtual force so I don't have any ugly inverses in the control but you're right there's a null space of that control if I have more degrees of freedom than the thing I'm commanding then I should do something in the null space to not leave that undefined right but the actual forward mapping looks good there's no inverses in the forward mapping okay so just once more what is also different compared to before remember before I was doing the joint by joint and the response in the end effector was actually pretty complicated because it was living in joint space the springs were living in joint space but now it really does feel you know obviously different and obviously you know kind of linear I mean I don't know if I can feel linear but uh in the end effector coordinates right that's why he was happy it's pretty good okay and then we can do the same thing in rotational coordinates so all the same things work if you were to use our spatial vector notation and say that the thing I'm trying to control is not the position but the orientation or in general you know spatial velocities and frames in the end effector then you can put a stiffness you have to be a little careful at how you write stiffness in six degrees of freedom I actually cited
your paper actually about maybe an interesting way to write a six degree of freedom stiffness okay but now if I wanted to push it sideways he made it very stiff in the XYZ direction so that end effector wants to stay there but it's allowed to be soft in the out of plane and I've got a smaller moment arm to do the last one but yeah the math works there are different ways to write the stiffness how do you parameterize the stiffness in that box is it just what does it take the um orientation stiffness is it a diagonal matrix on roll pitch yaw yeah I think it is but I actually don't remember yeah they do RPY okay yeah so I won't try to get to pi over two awesome thank you Terry okay questions about that yes acceleration commands correct good that's not a naive question that's an advanced question I would say um so remember how I said you can mix this with um the hybrid position control so iiwa will actually also accept a feed forward force we normally live in joint space I know better what it does in joint I assume it could also take a feed forward force in Cartesian but actually I'd have to look to make sure we almost always use joint space control for a limitation I'll talk about in just a second so yes in the limit you could set KP and KD in the joint space impedance controller to zero you can say I know everything I should know it will still compensate the gravity torque for you and it actually does a little bit more it compensates friction too in a very clever way but then you can command forces directly to add in therefore you could do the acceleration based stuff through that I think the bandwidth needs to be considered so expecting it to track super high bandwidth through that force command maybe you know there'll be limits to what you can do does that answer the question yeah great okay um yeah so this is super powerful there's only one thing that I don't like about
it um does anybody know what's the one thing I don't like about it I kind of alluded to it in that question I mentioned it once before Wyatt which is why it's a fair question what's that the null space yes um I think that's manageable so um in fact I should probably have lectured about that uh because it's sort of a thing um it's called operational space control right and uh the idea so we talked about joint centering in differential inverse kinematics there's a joint centering I did put it in the notes there's a joint centering sort of version for this um I can say it off the top of my head here how would you do joint centering to take care of that null space um if you're normally commanding this with your command that comes from stiffness control and you want to take care of the null space then the standard thing people would do is the null space projection and then write something like KP times q desired minus q lots of symbols but the point is you could write it like a PD controller have it in the joint coordinates and uh project it into the null space of this so that's a really beautiful idea that says in some sense and the way people talk about this this is the first priority task the torques you pick should absolutely create the virtual stiffness that you want at the end effector but in the null space of that Jacobian in any extra degrees of freedom then I'd like the rest of my joints to act like they're a PD controller going back to the original okay and this mixing of joint space and end effector space control was originally uh you know it's called operational space control and operational space control has grown into a whole rich library of ways to prioritize different costs different tasks and constraints and there's humanoid versions of it and the like but operational space control that was originally that's just you know let's mix joint and
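(Aside: the null-space joint-centering idea just described can be sketched kinematically. Operational-space control proper uses the dynamically consistent inverse built from the mass matrix; the plain pseudoinverse below is the simplest stand-in, and the 3-link planar arm, task force, and gains are my own toy example.)

```python
import numpy as np

# Joint centering in the null space of a task, kinematic sketch.
# A 3-joint planar arm doing a 2-D task has one redundant degree of
# freedom; the secondary posture torque is projected so it contributes
# nothing in the subspace that creates end-effector force.

def jacobian3(q):
    """2x3 end-effector Jacobian of a planar 3-link arm, unit links."""
    a = np.cumsum(q)
    s, c = np.sin(a), np.cos(a)
    J = np.zeros((2, 3))
    for i in range(3):
        J[0, i] = -s[i:].sum()
        J[1, i] =  c[i:].sum()
    return J

q = np.array([0.3, 0.4, -0.2])
q_center = np.zeros(3)
J = jacobian3(q)

f_task = np.array([1.0, -2.0])             # first priority: task-space force
tau_posture = 5.0 * (q_center - q)          # secondary: drift toward center

N = np.eye(3) - J.T @ np.linalg.pinv(J.T)   # projector off the range of J^T
tau = J.T @ f_task + N @ tau_posture

# The projected posture torque produces no end-effector force:
print(np.linalg.pinv(J.T) @ (N @ tau_posture))   # ~ [0, 0]
```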
force into the same framework great question but that wasn't my biggest concern I have exquisite control I can write the dynamics beautifully but I have to know the point about which I'm going to write my dynamics that's the thing that drives me nuts okay all of this is based so heavily on the Jacobian and it's the Jacobian that gets me from robot joint coordinates to a particular point on my robot okay so if I'm in the factory or something and I'm applying forces exactly at the end of the orange knob on the robot then life is good okay I can act exactly like I want at the end of that orange knob it's actually the center of that little orange sphere I think is where the coordinate system is for us today all right but if the robot bumps into something halfway up its elbow then we haven't solved that problem we haven't programmed the response that the robot's going to have it's turned off right now but if I were to go up and knock it like this none of my math has told me what that response should be if I programmed the response down here and it's for that reason that we tend to live in joint space because if I get a perturbation anywhere it might not have a beautiful interpretation at the end effector but it has a logical interpretation in the joint coordinates okay you would like to say there's a richer problem of trying to say how would I program the response for whatever contact I happen to be experiencing that's a rich problem and I'll maybe jump ahead and say that too I didn't even show most of my videos here okay so there's an interesting problem of contact estimation if we know J transpose f equals torque then you can ask the question if I'm feeling torques that I didn't expect at my robot where on the robot could that force have come from okay and you can imagine if you can estimate by looking at your joint torque sensors where on the robot you've made contact then you could program
the response to interact at that point with some impedance or some stiffness but it turns out that's a really hard problem I think this picture was one that Pang made to try to make that point if you are experiencing some joint torques and you try to map that back to the possible locations on the arm and you admit that there's a friction cone on the arm so the direction is not directly imposed by the location even if I'm pushing here it could be any of these vectors and you try to solve the inverse problem for a particular torque there's a bunch of places on the robot that could have possibly explained it this wasn't even the worst one I think but um you know there's a lot of different locations on the robot that could have explained the same torques and in general I think you can make some progress the people that have done really nice contact estimation have done it well enough that you can stop when you're about to hit something but not well enough that you can program the response at an unexpected location right that's a hard problem especially if I were to allow the fact that there could be multiple points of contact then it gets very complicated people have been asking about human robot interaction right if I bump up against something and it's a wall then the thing I should do might be very different than if I bump up against something and it's a person in both directions because actually the person might be trying to command me right by pushing me around and I should be responsive submissive I guess to the human right or it could be other cases where I should try to get the task done even through some contact so it's a very hard uh problem I think the world is sort of agreeing that maybe the way to solve that is with tactile skins and that's one of the things we're excited about with the soft robot project at TRI is we're trying to build
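(Aside: a stripped-down version of the contact-localization problem just discussed. For each candidate contact point we solve a least-squares fit of tau = J_p transpose f to the observed residual torque; this toy 3-link arm ignores the friction-cone constraint on f, which is part of what makes the real problem so ambiguous.)

```python
import numpy as np

# Toy contact localization: which candidate contact point best explains
# an observed residual joint torque via tau = J_p^T f ?

q = np.array([0.3, 0.5, -0.4])
angles = np.cumsum(q)
s, c = np.sin(angles), np.cos(angles)

def tip_jacobian(k):
    """2x3 point Jacobian for the tip of link k (0-indexed), unit links.
    Joints beyond k do not move that point, so their columns are zero."""
    J = np.zeros((2, 3))
    for i in range(k + 1):
        J[0, i] = -s[i:k + 1].sum()
        J[1, i] =  c[i:k + 1].sum()
    return J

f_true = np.array([2.0, -1.0])
tau_obs = tip_jacobian(1).T @ f_true      # true contact: tip of link 2

residuals = []
for k in range(3):
    Jk = tip_jacobian(k)
    f_hat, *_ = np.linalg.lstsq(Jk.T, tau_obs, rcond=None)
    residuals.append(np.linalg.norm(Jk.T @ f_hat - tau_obs))
print(residuals)   # the true candidate explains the torque (near) exactly
```

With only three joints and a single contact the true location stands out here; with many candidate points, friction cones, and possibly multiple contacts, many hypotheses explain the same torques, which is the ambiguity the lecture's figure illustrates.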
sensing skins and I think at some point you have to try to estimate the contact location with a richer set of sensors than just your joint torques I think we understand that the joint torques are not going to get it done unless you have a lot of links okay so I'd say that's the biggest limitation of this the impedance control stiffness control end effector view of the world needs you to know where the end effector is and that's why um you know so if I now go back to this one example I talked about with key points it's kind of exciting to think about maybe combining some of the tools from perception with some of the good old tools from control right so um the simplest version of this I would say would be that if we're looking at a tool and we want to apply a force at the end of the tool instead of the end of the end effector if I use my sort of key point estimation pipeline if I want to control a stiffness in multiple degrees of freedom maybe I'll have an oriented key point or a handful of key points so that I can get the orientation of the object too okay and then if I assume the object is fixed to the hand I just have a slightly different end effector on my robot I have a Jacobian that's just slightly different but based on the location that came out of the key point estimator and I can suddenly apply impedance or stiffness control at the end effector of the tool right and that's what made the you know the examples I showed of erasing and plugging in and things like this this was impedance control plus key points I think there's probably lots of potent combinations of the different tools but you do need to know where the location of the contact is in order to get that done okay yeah and so what do we do on the iiwa um we are sending in only a q desired trajectory like I said and uh you know iiwa's controller is differentiating we don't get to send the feed
forward we tend to do joint stiffness control despite the elegance of the Cartesian stuff the joint stiffness um is you know it's more robust to unknown contacts I think or a more reasonable response principle of least astonishment if you will for responses to the unknown and an important point that I maybe forgot to make but the difference between the stiffness control which is canceling out gravity versus just a PD control is if I can cancel out gravity with a feed forward gravity term then I can choose my stiffnesses to be much softer I can choose KP and KD to be much smaller if they don't have to also fight gravity and that's how we got um you know compliant motion for things like opening up the dishwasher door the trajectory of this was I would say carefully planned but with a highly imperfect model of where the dishwasher door is where the hinge on the dishwasher door was and we are heavily relying on the ability for that robot just like when I pushed it in joint space mode to deviate from its planned trajectory in order to execute that task if those gains were too high and the robot was pulling down on this and it was kinematically bad the hand could get jammed and uh things would be bad yeah you blow a fuse these things are pretty robust any other questions about them did that organization help with the litany of different versions okay cool I don't mind ending a few minutes early and um I actually have one other thing if anybody wants a Drake sticker someone gave me Drake stickers and I have laptop size Drake stickers and uh cell phone size Drake stickers and if you guys come to class you get to get Drake stickers I mean just take one or two but okay see you next time
Robotic Manipulation, Fall 2022
Lecture 7: Geometric Perception, Part 3
up again all right that is recording too yeah Okay so today I want to wrap up what we've been talking about with with Point clouds and that's what I've been calling geometric view of perception okay so um just to recap a bit we started off by talking about let's say maybe day one in addition to uh introducing Point clouds and cameras and the like we talked about the point registration problem where you have two point clouds and you're trying to find the pose that Maps one into the other so it's a point set registration or Point registration we went from there into the iterative closest point algorithm that's snuck into day two but that was the start and then day two we started addressing the fact that real Point clouds are messy right we talked about the various ways that they could be messy and we talked about generalizing the notion of Correspondence as one way to address this right and things like more clever ways to deal with outliers right and we generally toyed around with generalizations of this point set registration problem you know we're trying to find we've got two point clouds we have to guess the correspondences or somehow be told the correspondences but we're going to roughly try to register these two point clouds together but I don't want to leave this sort of part of the course until we until I tell you that I don't think Point registration Point set registration is sort of the only way to think about this and it's not clear we started talking about about it a little bit at the end last time that this idea that you should just find the pose that makes two point sets close together is is always the best objective it might be lacking in important ways so today I want to go beyond just registration and start thinking about some of the other aspects of what makes a good uh perception system just even if you're given just point cloud data what are things that you need that maybe is beyond the metric which just says are my points close together okay 
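(Aside: the SVD "inner loop" of the point-registration problem just recapped, finding the rigid transform that best aligns two point clouds with known correspondences, is short enough to write out in full; the round-trip check below uses a random cloud and a made-up transform.)

```python
import numpy as np

# Point-set registration with known correspondences: find (R, t)
# minimizing sum_i || R a_i + t - b_i ||^2 via SVD (Kabsch/Umeyama).

def register(A, B):
    """A, B: (N, 3) arrays of corresponding points.  Returns R (3x3), t (3,)."""
    a_bar, b_bar = A.mean(axis=0), B.mean(axis=0)
    W = (B - b_bar).T @ (A - a_bar)          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(W)
    D = np.diag([1.0, 1.0, np.linalg.det(U @ Vt)])   # guard against reflections
    R = U @ D @ Vt
    t = b_bar - R @ a_bar
    return R, t

# Round-trip sanity check on a random cloud and a known transform:
rng = np.random.default_rng(1)
A = rng.standard_normal((30, 3))
theta = 0.6
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.5, -0.2, 1.0])
B = A @ R_true.T + t_true

R, t = register(A, B)
print(np.allclose(R, R_true), np.allclose(t, t_true))  # True True
```

ICP wraps this closed-form solve in a loop that re-guesses correspondences, and it is exactly this "points should be close" objective that the rest of the lecture argues is not the whole story.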
and I can start with that you know and this is still in the context of sort of geometric perception unfortunately there's a lot of perception that's beyond geometry too but even in the landscape of sort of geometry I think there's things that are missing from our basic algorithms and I want to start talking about them today so here's what happens when you start using ICP and all these variants right you have a beautiful model of your mustard bottle or your coffee mug or something like this it's sitting on the table you've got a couple cameras pointing at it and ICP tells you roughly that you know here's the table and here's my mug and it's telling you that your mug is in the table right for instance so it's got some returns it did its best to fit and it's giving you a pose back that says you know the mug is in the table I guarantee that will happen it'll happen for a bunch of different reasons maybe because you have points down on the table that are pulling it down you know or we have outliers or something like this but this is a very classic thing to happen you're looking at it you say that's a silly answer um clearly the mug's not in the table or you'll get another one where you know the mug is floating in the air right and again you know something that the point set registration algorithm doesn't know which is that mugs don't spontaneously hover in the air right I mean maybe when they're in motion but if the scene is static then this shouldn't be a reasonable answer and another one that you'll get often is if you're trying to register multiple things it's sort of a version of this if you're trying to register multiple mugs at the same time you'll get them overlapping for instance right it'll say I've found two mugs in the scene for instance and they're you know in penetration okay so there's clearly things that
we somehow haven't yet expressed in our points should be close in a euclidean distance objective okay there's another really clever one about just knowing that the camera you know if the camera is getting point clouds here or points here and the camera is over here right then you also know for instance that there can't be an object anywhere here between the camera and the points right we haven't captured that yet either so the goal for today is to think about ways to capture those kinds of events yeah so this one here I'll call the free space constraints this would be I guess both this one and this being in the table these are non-penetration constraints and this one's actually interesting right um this fact that mugs don't fly um normally this has to do with physics right it requires something about the equations of motion to know that that's uh not an okay solution so in the case of a static scene I think the simplest version of this would just be to say this is a static equilibrium constraint right you somehow know that if you were to draw the forces for a free body diagram those forces should balance and you should have equilibrium right and we haven't yet told our perception system how to think about that okay so all of these are possible some of them are harder than others but to do it we're going to have to give up a little bit on the beauty of our formulation right when we were up here we were solving this beautiful problem in point set registration right you know we were solving this sort of problem with some correspondences known and we used primarily the singular value decomposition in our inner loop right our heavy hitter was the ability to if I give you two point clouds and I know the correspondences then the inner loop could be I can find that pose using singular value decomposition and that's
beautiful and it always works you know you can give it noisy point clouds it'll find the best effort it'll solve this least squares problem okay and every time you call it it'll give you the same answer it's good stuff we're going to give up on that we're going to have more approximate methods these days that will work sometimes not always but can handle a richer specification of problems okay so I realized last time I went off the rails a little bit which is see this is why it's good to come to lecture even when the videos are online because I saw all of you going like what right and that's feedback for me to know I went a little fast at the end so let me just make sure I put this in context right we've been talking about some of these optimization problems uh we've been talking about convex optimization problems and the examples I gave you before for instance when I had a quadratic objective right if I had let's say x1 x2 and I wanted to minimize over x something that looks like Ax minus b squared right subject to some linear constraints you've been playing with this now as a quadratic program okay so this is a beautiful type of optimization problem where I don't have to even worry too much about having an initial guess for x all I need to do is find the minimum of this cost landscape which satisfies these constraints and I know that I found the optimal answer to the problem this is just one example of the space of convex optimization problems which we're going to use but not study in the glory and the depth that they're worth studying but I think you can use them very effectively with just having this basic understanding of um you know things that have quadratic objectives can be handed to a quadratic programming solver for instance if you had a linear objective but still linear constraints that would be a linear program some of you have known about that there's other examples so linear
programming is a really important class LPs you might hear second order cone programming LP SOCP and I mentioned too quickly last time semidefinite programming which is SDP okay so there's these important classes of objective functions which look more interesting they might look like ice cream cones instead of like a bowl but they all have this property that if you write the problem down and you can fit it into one of these frameworks then you are sure that your solver will be able to find a minimum and give you the global answer okay and we'll use some of them again as users throughout the class but um I want to mostly make a distinction between those and what we're going to start doing today which is more general non-convex optimization okay so now I'm gonna think more generally about minimizing some function subject to some constraints right I still have my objective function up here my constraints down here and my decision variable is x but now when f is arbitrary I've lost this beautiful picture right I can suddenly start having cost landscapes that are much more complicated maybe f is doing this and the expectations we should have for the algorithm are therefore a little bit less right we're going to basically have algorithms that will say if I start from an initial guess they'll find a minimum right maybe I start from this initial guess and I'll walk down the hill right and it will return a point saying you know this is the best I could find right some minimum we'll call this a local minimum but in general unless you know something more about the class of these curves we lose the ability to say that I guarantee I found the best solution okay so this is just a very high level setup but what I want to make sure lands is that the problem of registering two point clouds where the objective is just this nice quadratic distance between the points we were able to use strong convex optimization kind of ideas
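(Aside: the "convex, always works" regime on the board can be made concrete. The unconstrained version of the quadratic objective, min over x of the squared norm of Ax minus b, has a closed-form global optimum; adding the linear constraints makes it a QP, which still has a guaranteed global optimum but needs a QP solver, such as the ones wrapped by Drake's MathematicalProgram used in this course. The matrices below are random and purely illustrative.)

```python
import numpy as np

# Unconstrained least squares: min_x || A x - b ||^2.

rng = np.random.default_rng(2)
A = rng.standard_normal((10, 3))
b = rng.standard_normal(10)

x_star, *_ = np.linalg.lstsq(A, b, rcond=None)

# Global-optimality check: the gradient 2 A^T (A x - b) vanishes at x_star,
# and no random perturbation achieves a lower cost.
grad = A.T @ (A @ x_star - b)
print(np.linalg.norm(grad))   # ~ 0
for _ in range(100):
    x = x_star + 0.1 * rng.standard_normal(3)
    assert np.linalg.norm(A @ x - b) >= np.linalg.norm(A @ x_star - b)
```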
When you start adding these other important things, which are obviously important to rule out nonsensical solutions, then we are almost always going to leave this picture and enter this picture. So you will have perception systems that maybe don't stick the mug in the table, but they might not give you the best solution. Are there questions at that level? I know some of you know this well, and for some of you that's a very fast introduction to a big topic, but I'm trying to walk that line. Yes? That's a great question. So the question is, when humans are doing perception, what are we doing, are our senses better, or whatever. Of course I don't know exactly the answer, but I think some things are clear. I think we are bringing so much extra information to bear on the problem, common-sense-type information, that changes the way we perceive the world in ways that these geometric algorithms are not capturing, and even deep learning algorithms are not capturing, although they're getting closer. As we start using foundation models and big large-scale models that really have some broader understanding of where objects can be in the world, maybe there's hope for some common sense. You know, if I open a refrigerator, I have tons of priors about what I expect to see in the fridge and what I don't expect to see. If there were, I don't know, a gorilla in my fridge, I would be surprised and my perception system would probably fail, at least in the short term; I would probably run, I don't know. But if you start reflecting on what you're doing as you're going through the world, before you open your eyes you're bringing in so many initial guesses. I think we probably don't have super accurate geometric reasoning; I mean, people are good at 3D reasoning, but
not like depth-camera, millimeter-accuracy kind of good. I think robots should be far superior to humans in terms of accuracy and those kinds of computations. But I think certainly we are able to rule out silly cases; the computational machinery with which we do that is hard to know. Yeah. Yes? Okay, that's a huge question, I love it. The question was about tactile sensing. Let me bite off a version of that question: how would a tactile sensor, for instance, fit into this? Since we're talking about point clouds and geometry, a particular type of tactile sensor that's very popular these days is when you actually put a camera underneath your finger, or palm, or something like this, and you can actually get a point cloud. It's a very special point cloud that never sees past your skin, for instance, but you can imagine actually using these tools almost out of the box with a tactile sensor; think of it as a camera that has a maximum range that is your skin. But of course, bringing in things like non-penetration becomes very important when you know immediately that things are going to be touching before you see them. So I think this lecture is actually very well motivated by tactile sensing. And I think you're right to continually press on why tactile sensing. I think tactile sensors have more potential than they have realized so far; everybody in the field would agree with that. But somehow we have massive data sets and a massive computer vision community and stuff like that, and we have only a few people making tactile sensors. A new one came out today, the GelSight Mini, if anybody saw that. That's cool, maybe a new form factor. But there just aren't as many; they're not in everybody's iPhones. It's a smaller scale; that community is still growing. Okay, good. So I want
you to know, actually, that although there's a ton to know about these different problem classes, and about when you go from non-convex to convex and things like this, when you're writing code, for instance with MathematicalProgram, there are a handful of optimization parsers which try to do some of the heavy lifting for you. So if you write in costs and constraints in the language of MathematicalProgram (add cost, add cost, add constraint), MathematicalProgram is actually doing a lot of work behind the scenes to decide whether you've stayed in the realm of a convex optimization, and when it can detect that you have, it will call a special solver that's extremely efficient for convex optimization. When you add a more general constraint, it'll bounce over to a different solver; there are still custom solvers behind there that solve nonlinear problems, but they use a different set of algorithms behind the scenes. For instance, if you add a quadratic cost in MathematicalProgram, it doesn't even have to guess at this point: it knows the objective is quadratic when you say "add quadratic cost". If you add a cost like x dot x, where x is the decision variable, then because x is actually symbolic, it knows enough to parse that and realize you've added a quadratic cost, and it will call a QP solver; those are the same. But you can also add arbitrary functions as costs or constraints, and we'll do that today: just define a Python function. Here it no longer has the ability to know; even though this one happens to be quadratic, you could have put an arbitrary function behind there, so it's going to call a nonlinear optimization solver, and when it calls your function, it's actually going to pass in a version of x, a variable that is an autodiff
type, meaning automatic differentiation, so that it can also take gradients of your function and use those gradients to get down there as fast as possible. Yes? Yep, that's a great point; there are two points there. Let me repeat the question. We've been talking mostly about: you woke up, your eyes are open, the world was still, how do you understand it? In that case, something like a static equilibrium constraint is appropriate. But the problem is actually different when things are moving, in two ways, I think. First, if you open your eyes and admit things could have been moving when you started, then you don't want to be using static equilibrium constraints; you could use more general dynamic constraints. You know that things are not going to be falling faster than the acceleration due to gravity, for instance; that could be a constraint on your perception system. There's also a separate part of that question: if I'm not opening my eyes and taking a one-shot approach, but rather tracking, then that also changes the problem. You can take an initial guess and expect that the answer only moved a little bit, and the same way that, instead of inverse kinematics, we did differential inverse kinematics, you could start taking Jacobians of some of these things and expect a linearization of these problems to work fairly well. So in both ways, the tracking problem and the dynamic problem are pretty different. I'll mention tracking again at the end just to close that loop, but that's a very good point. Great. Okay, so let's first think about the problem we already know, but in a way that gets us into non-convex optimization. If I had parametrized my point registration problem using theta instead of the pose written out with rotation matrices and the like,
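The "autodiff type" mentioned here can be sketched with a toy dual-number class. This is only an illustration of forward-mode automatic differentiation, not the actual type a real solver framework would pass into your cost function:

```python
import math

class Dual:
    """Toy forward-mode autodiff scalar: carries a value and a derivative."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.der + other.der)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # product rule
        return Dual(self.val * other.val,
                    self.der * other.val + self.val * other.der)
    __rmul__ = __mul__

def cos(x):
    # chain rule through cosine
    return Dual(math.cos(x.val), -math.sin(x.val) * x.der)

# The solver seeds der=1 on the decision variable, then calls your cost
# function; the gradient comes back alongside the value.
x = Dual(3.0, 1.0)
cost = x * x            # value 9.0, derivative d/dx x^2 = 2x = 6.0
slope = cos(x)          # derivative -sin(3.0)
print(cost.val, cost.der, slope.der)
```

Writing your cost as an ordinary Python function over these objects is exactly what lets the solver evaluate both the cost and its gradient from the same code.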
then already I have a non-convex formulation of the problem. So let me say that carefully; it's a good way to bridge to the more advanced versions. Before, I was minimizing over p and R, where R was in SO(3), an objective like the sum over i of ||p + R m_i - s_i||^2, matching model points m_i to scene points s_i. I could write it like that, or I can write it with R transpose R equals I and determinant of R equals plus one. By writing both of those I feel I've over-specified it, but at least it's super clear. So the decision variables here: R is a three-by-three matrix, p is a three-by-one vector, so I had 12 decision variables; or in 2D it was a two-by-two matrix and a two-by-one. What if we instead said: I want to parameterize this by minimizing over just p, which is the two-by-one, and theta? I'll write the same objective; now theta enters through the rotation matrix, where R(theta) is my standard [cos theta, -sin theta; sin theta, cos theta]. So I've changed my decision variables from the entries of the matrix to just one variable, theta; it's just a scalar. That seems good, I've got fewer decision variables, maybe, but my beautiful quadratic bowls are not beautiful quadratic bowls anymore; I've got some sines and cosines in here that are changing it. In this simplest example, we can still think about what that landscape looks like. So let's do that, and just so I can draw it on the board, let's do the rotation-only case first: I'll say the positions are known, or maybe we played our trick of using relative positions, so we did everything relative. So let's think about having our model. We had an accident last time and my blue chalk is no longer with us, so the model points are now green, sorry for that, I had a nice thing going there. It's my non-symmetric shape; these are my model points in 2D,
and I've got my scene points, which are related by rotation only for now, because we know we can subtract the translation out. I've got my scene points like this, and the question is: if I plot the objective as a function of theta, what does this cost look like? Is it going to be terrible, or is it going to be sort of okay? I think it's actually not that bad to draw; it takes a thought experiment. Let's take any one of these points; the correspondences are known in this case. If I have my model point and my scene point, then my cost is the distance here. If I had drawn it perfectly, they would lie on a circle, because they're only allowed to rotate, so those points are just going to move along a circle. So if I have an initial guess, then my claim is that this point is going to contribute one cost term, which is that chord distance, and that distance is a function of my angle theta and the distance from the origin. I can just use the standard law of cosines to figure out what it is: if I know this length and this length are both some radius r, then as a function of this angle, the squared distance should be r^2 + r^2 - 2 r^2 cos(theta). Let me just make sure that checks: if theta is zero, then cos(theta) is one, and the whole thing is zero; I'm pretty happy with that. Let's say that's the distance for the i-th point. Now the total is just a sum of all these terms added up, and the r_i's are constants from the point of view of this optimization; I'm just optimizing theta. So I actually just have cos(theta) times a big sum of, you know, 2 r_i
squareds, plus whatever this is. That's my cost landscape. Even though the shape is kind of interesting, in polar coordinates it's actually really easy to write down. So if I'm searching in this parameter space, the cost just looks like a cosine, roughly: adding a constant moves it up and down, multiplying by a scalar scales it, but roughly my cost landscape for reconstructing the orientation in 2D looks like this. That's supposed to be a nice cosine, not a lumpy cosine, which is sort of important to the point I'm making here; maybe I'll just make that a little lower, but those are the same. If I start a nonlinear optimization whose basic behavior is to go downhill until it finds a local minimum, then in this setting it's actually not bad. It might tell me the angle I expected, or it might tell me something that's 2 pi away from the angle I expected, but that's still right, it's not wrong; and the answer it gives me might depend on what my initial guess was, but it's actually going to solve this problem very nicely. In particular, there's a trendy way to say this: this function is non-convex, but all minima are global minima. We like to try to say that about neural networks too. I think this is a new trend: understanding cost landscapes that have this property, that are not the simple convex picture but are somehow still good optimization problems. So: all minima are global minima; they all achieve the same cost. So it's not crazy to parameterize 2D estimation problems in terms of theta. This is a little bit too rosy a picture, though: in 3D it's going to get more crazy, and when you also have to search for the correspondences, we know from ICP that there are going to be cases where it can get stuck in local minima.
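The cosine-shaped landscape derived above can be checked numerically. A hedged sketch with hypothetical radii, using plain gradient descent as a stand-in for the nonlinear solver: every starting guess reaches the true angle, up to a multiple of 2*pi.

```python
import math

# Toy radii of the model points; theta_true is the rotation we want to
# recover. (Hypothetical numbers, just to illustrate the cosine landscape.)
radii = [1.0, 2.0, 0.5]
theta_true = 0.7

def cost(theta):
    # sum_i (r_i^2 + r_i^2 - 2 r_i^2 cos(theta - theta_true))
    return sum(2 * r * r * (1 - math.cos(theta - theta_true)) for r in radii)

def grad(theta):
    return sum(2 * r * r * math.sin(theta - theta_true) for r in radii)

# Plain gradient descent from an arbitrary initial guess.
theta = 2.0
for _ in range(500):
    theta -= 0.05 * grad(theta)

# Converges to theta_true, possibly plus a multiple of 2*pi: a non-convex
# landscape in which every minimum is a global minimum.
err = (theta - theta_true + math.pi) % (2 * math.pi) - math.pi
print(theta, err)
```

Starting from theta = 2.0 + 2 * math.pi instead lands on the minimum one period over, with the same zero cost.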
We'll see that happen too, but I want to establish that, out of the box, it's not terrible to think about searching directly over this kind of parameterization, and these are the kinds of tools you have in non-convex optimization. Yes? Yeah, I just need one more color for that. I was just trying to find a function to compute this distance, the distance between the point on the model and the corresponding point in the scene, and my claim was that that's an easy thing to compute. If I call this r, then in polar coordinates this is also going to be r, since it's just rotating around the origin; that's all I've given it the ability to do, if I've subtracted out the means properly; and this is r. So my math was just to compute this distance as a function of my rotation angle theta. Yeah. Yeah, so, good: when I get the correct theta, my cost is zero; I've lined up all my points, in the noise-free case. When I go all the way to the opposite side, I've got the biggest possible distance and my biggest possible error, but if I keep wrapping around to 2 pi, then I get zero error again. So yeah, I should have drawn it that way; in fact, that would have been more insightful on my part. I didn't think of it that way; I just looked at it, and the algebra said there are some constants, so it could sit anywhere, but actually that should be zero: it touches zero when things line up perfectly. Good. I drew it with a zero here; I forgot. Okay. That's our toolbox. Now we want to start asking: if that cost function isn't rich enough, or if I could potentially use constraints, how do I want to change it in order to capture these richer phenomena? Remember last time we talked about generalized correspondences, but we still did that in a two-step optimization. So let me write that down. For instance, when we
talked about coherent point drift, CPD, last time, my claim was that it was minimizing over some pose some function like this, using a Gaussian kernel in an iterative algorithm. The same way ICP set the correspondences based on minimum distance and then solved this problem, CPD set the correspondences to be soft correspondences using the Gaussian kernel and then solved the SVD problem: set, solve the SVD problem, and alternate. The dream would be to solve for the correspondences and the poses at the same time. Now, if we're willing to go to non-convex optimization, we can do that; we can write it down. Our mileage may vary, because there can be local minima, but we can at least write it down. So today we can do that same sort of thing. In fact, the more general way to write this would be: minimize, I could do it in terms of x, but maybe I'll minimize directly over theta now, for instance, the sum over i, j of a nonlinear loss function (I'll be careful about this in a second), and maybe it's a function of the transformed points still; let's keep that structure. I restricted myself to quadratic functions before; now I'm going to allow an arbitrary loss function. The quadratic form is best for optimization, maybe, but an arbitrary loss function lets us capture things like outliers and other features. So here are some standard choices you might hear about. Huber loss. Gaussian: you could use the Gaussian kernel, which is roughly what CPD was doing, and it's nice because the idea there is that points that are far away have no effect on your optimization. It's a little subtle: I drew this the way you would normally see it in terms of a loss function; the Gaussian doesn't go to zero in the way I've drawn it, but because it's flat,
it has no effect on the optimization. Points that are far out here, if you were to move your guess a little bit, from here to here, don't change your cost. So even though in a cost landscape setting they might shift you up or down, because the loss doesn't go to zero, they don't change the shape of the landscape. I'm saying something simple, and I hope I didn't say it in a complicated way: we were a little restricted by the quadratic form before, and we had to play games with the coefficients; in the more general case, you just write the function you want directly into the loss. Any questions about that? Yeah: could we have done this from the get-go? Maybe I'll just try my answer; hopefully it's clear. The quadratic form is certainly one instance of this; this is just a more general form. If I choose L to be the quadratic form, take whatever's inside this function, square it, and multiply it by c, then that is a special case of this. Because this is more general, it can do things like taper off on the sides, which is a general way to handle outliers. It's the same way we were handling outliers before; we now have machinery that handles outliers like this. Yep: before, we were taking the losses we got from corresponding points, and if the points were very unlikely to be corresponding, we were making that loss effectively zero. Here, in the same way, we're saying that if the points are too far away, with, say, the Gaussian loss, they have no effect on my cost. It's flat; it's not quite zero, but it's flat, so it has no effect on the shape of my cost. Yeah, and those points that are far away are judged given our current guess; this is always going to be based on your initial guess.
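The losses discussed here can be written out directly. A sketch with hypothetical parameter values, showing that the truncated and Gaussian losses go flat for far-away residuals, which is exactly the outlier-rejection behavior described:

```python
import math

# Sketches of the robust losses mentioned above, applied to a scalar
# residual e. The parameter choices (delta, c, sigma) are hypothetical.

def quadratic(e):
    return e * e

def huber(e, delta=1.0):
    # quadratic near zero, linear in the tails
    a = abs(e)
    return 0.5 * e * e if a <= delta else delta * (a - 0.5 * delta)

def truncated_ls(e, c=1.0):
    # truncated least squares: caps the penalty for outliers
    return min(e * e, c * c)

def gaussian_loss(e, sigma=1.0):
    # flat (but not zero) far away, so distant outliers stop
    # shaping the cost landscape
    return 1.0 - math.exp(-e * e / (2 * sigma * sigma))

# Far-away residuals barely change these losses: the landscape is flat there.
for loss in (truncated_ls, gaussian_loss):
    print(loss.__name__, loss(10.0) - loss(11.0))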
You're saying: my model is here, my scene is here; I'm going to give no weight to that difference. I'm not going to worry about trying to make this one match that one; that one's just too far away to worry about. I'll worry about the points that are closer, where my model-to-scene distance is smaller. Yeah. I had a request. Just to convince everybody that I do read all the surveys, I really do. In the surveys you tell me how many hours it takes, which is a little bit high this time, but a few of you also write comments, and I read them all, and someone asked for a stretch break. All right, so let's take two seconds to stand up and stretch. It's a good time for it; you'll have to listen better now. Thank you. Good. All right, they also said it could be small; they just wanted to stretch their legs. So we're back at it. Okay. There are a bunch of things to know about writing loss functions and writing code that uses these kinds of loss functions. In particular, when we're operating on this Euclidean distance a lot, I mentioned that there are data structures that are really good for that. If you want to find nearest neighbors in some Euclidean-distance sense, a natural data structure is a k-d tree, or other data structures you've learned about in computer science classes, just to make nearest-neighbor queries efficient. When you get into these more general loss functions that are functions of distances, there are other clever tricks. One of the ones I like best is to use signed distance functions. It depends on exactly what you're computing, but a common data structure that's useful to make these things fast is the SDF. Not the Scene Description Format; it's a different SDF, too many of them flying around. Okay, so the signed distance field, or function; again, the Stanford bunny comes up often.
You can pre-compute: if I have, let's say, a mesh, and I want to compute the closest distance from any point in space to that mesh, and I'm going to be doing a lot of queries that just ask "what is the distance from a point to the mesh?", then an efficient implementation will, for instance, pre-compute on a grid all of the distances from points in space to the mesh, throw that on a GPU, and then be able to access it super fast, to do lots and lots of point queries, lots of distance computations on these 3D objects. It's signed because you have a positive distance when you're outside the mesh and a negative distance when you're inside the mesh, because it's important to distinguish those two, and zero when you're on the boundary. It's just another representation of geometry, but it happens to make these distance-based objectives and queries very efficient. Yes? Yeah, that's a good question. The way I've written it here, you're going to move the model every time, as you change the decision variables, but remember I said you could have also written everything the other way around. The question was: which one do you pre-compute the signed distance function for? Typically you have a model that you can afford to pre-compute with, and the scene comes in fresh every time, so it makes sense to do your pre-computation on the model, not the scene. If you flip over to the representation where you're moving the scene points around to match the model, then that makes this more efficient. It's not crazy, actually, to take a signed distance function and rotate and translate it as well, but I think this is more natural. Great question. Yes?
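The grid pre-computation idea might look like this in miniature, with a circle standing in for the mesh; the grid bounds and resolution are made up, and a real system would store the grid on a GPU:

```python
import math

# Sketch of the signed-distance-field idea: pre-compute, on a coarse grid,
# the signed distance to a shape (here a unit circle at the origin, a
# stand-in for a mesh), then answer distance queries by grid lookup
# instead of fresh geometry computations.

def sdf_circle(x, y, r=1.0):
    # positive outside, negative inside, zero on the boundary
    return math.hypot(x, y) - r

# Pre-computation pass over an 81x81 grid covering [-2, 2] x [-2, 2].
lo, hi, n = -2.0, 2.0, 81
step = (hi - lo) / (n - 1)
grid = [[sdf_circle(lo + i * step, lo + j * step) for j in range(n)]
        for i in range(n)]

def query(x, y):
    # cheap lookup: round to the nearest pre-computed cell
    i = round((x - lo) / step)
    j = round((y - lo) / step)
    return grid[i][j]

print(query(1.5, 0.0))   # ~0.5: outside the circle
print(query(0.0, 0.0))   # ~-1.0: deep inside
```

At query time the cost is a couple of arithmetic operations and an array lookup, regardless of how complicated the original shape was.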
for a function it would have to be above the function okay so um so for let's take the truncated least squares is the is the most extreme point if I have this value here and this value here then because I can the straight line between them is below the function it's actually not a convex function and that this looks benign and there are cases where uh kind of like the low you know all Global Minima are okay there are some non-convex functions that are benign but once you get into this space and you start adding them and shifting them and multiplying them and having multiple together you can quickly get into things that have local Minima great question yeah so this this this one actually the Huber loss probably is convex yeah I think I have to look at exactly but that sure looks convex to my eye but uh um but the others are not necessarily okay so yeah I think um you know some of the more efficient algorithms for doing these sort of especially tracking will actually pre-compute sign distance functions for the for the different models and put them on a GPU okay so um so we can so that's nice we don't have an iterative algorithm in the sense of correspondences then implementation you know but uh and then as SVD we can write it all as one optimization and hand it directly to a non-convex solver and get to some local Minima using this kind of formulation but so far we're still in the space of objectives we already had roughly the real power of going to non-convex optimization and the real motivation to do that is to handle those non-convex those those constraints that that I'd said weren't fitting nicely into the standard uh you know our previous formulation the non-penetration the static equilibrium and the free space constraints so um just as an example of sort of non-penetration let's say I have let's say I have a bunch of scene points let's say I've just got a box that I'm trying to find okay I've got a bunch of scene points like this and I want to fit my model to 
my scene, with my green chalk; I was looking for blue right then. But I want to say that my transformed model should not be in penetration: the box, I know, shouldn't be in the table, and it shouldn't be in the wall. How can I write that as an optimization? I would like it to come up with something that's as close as possible, sort of trying to match the data, but that refuses to go into the wall. We can do the same thing we did before; let's do it with just p and theta in 2D, so the objective maybe isn't so bad yet, but subject to the constraint that all of the model points, once transformed into world coordinates (let's say the corner is at the origin here), satisfy: the x component is greater than zero, and the y component is greater than zero. This is a non-convex constraint, because, if I write it for all j to flesh it out, it's still a function of my decision variables, and it's not a linear function in this case. But we know how to write it. This landscape is going to be not as simple as that cosine; I've suddenly added new constraints on admissible thetas and p's, but the initial picture isn't that crazy, isn't that different, and we can definitely ask our solver to do that kind of thing. And if you do, I did, and I got this: here's my salmon-red scene, my blue model; I drew the correspondences in. If I run an ICP sort of loop, but with that constraint, then I get a good solution out. So once we go to non-convex optimization, we can start writing rich constraints. What I said last time, too quickly, was that there are some ways to do convex optimization for this type of constraint; there's a whole world of trying to make the best convex approximations of these, and I'm a fan of
that world, but it's a more subtle point that I failed to make well last time; it's in the notes if you want it. Okay. So how would you write this down, to make it actionable? I have decision variables p, two of them in 2D, for my x-y position, and theta, one variable, and I can add my cost just like this. R is now a function of theta, so this is a nonlinear, non-convex objective, which means I can't say "add quadratic cost" anymore: I add a cost and hand it a function, and then I add a constraint and pass it a function for that bottom one. Those functions are easy enough to write. I can just compute the position in the world: I take R as [cos theta, -sin theta; sin theta, cos theta] and do the normal math; and similarly for the squared term, I can just multiply the two vectors together in numpy, and mostly it'll just work. There's a little cruft about getting your variables into the function and back out, this whole business of using partial and then unpacking with split; I'm not a huge fan of that, and we'll make it better someday, but that's the tunnel that goes into the solver and comes back out, so there's a little bit of boilerplate to make it good. That's what solved the previous problem, and it can solve much more complicated problems. The generalization of this idea: you can have multiple models, or maybe the world is part of your model; it could be a function of the decision variables, or it could just be welded to the world, that's fine. In general you might say: I want to search for q, the positions of my kinematics, such that the bodies match the points I see (I think of this as a kinematics problem), subject to the constraint that there are no collisions, no penetrations.
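As a sketch of the whole pipeline, here is a toy stand-in that replaces the real nonlinear solver with a penalty term and numerical gradients. The model points, true pose, step size, and penalty weight are all invented for illustration; a real implementation would hand the cost and constraint functions to a solver instead.

```python
import math

# Toy 2D registration with known correspondences plus a non-penetration
# constraint (transformed model points must satisfy x >= 0 and y >= 0),
# handled with a simple penalty term and numerical gradients.

model = [(0.0, 0.0), (1.0, 0.0), (0.0, 2.0)]            # model points m_j
true_p, true_th = (1.0, 1.0), 0.3

def transform(p, th, pt):
    c, s = math.cos(th), math.sin(th)
    return (p[0] + c * pt[0] - s * pt[1], p[1] + s * pt[0] + c * pt[1])

scene = [transform(true_p, true_th, m) for m in model]  # synthetic scene

def objective(x):
    px, py, th = x
    cost = 0.0
    for m, s in zip(model, scene):
        wx, wy = transform((px, py), th, m)
        cost += (wx - s[0]) ** 2 + (wy - s[1]) ** 2
        # penalty for poking through the "wall" (x < 0) or "table" (y < 0)
        cost += 10.0 * (min(wx, 0.0) ** 2 + min(wy, 0.0) ** 2)
    return cost

def num_grad(f, x, h=1e-6):
    g = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        g.append((f(xp) - f(xm)) / (2 * h))
    return g

x = [0.0, 0.0, 0.0]   # initial guess: identity pose
for _ in range(2000):
    x = [xi - 0.02 * gi for xi, gi in zip(x, num_grad(objective, x))]

print(x)   # close to [1.0, 1.0, 0.3]
```

Because the synthetic scene here is consistent with the constraint, the recovered pose matches the true one; if the data pulled the box into the wall, the penalty term would hold the solution at the boundary instead.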
So there's a huge library of these tools that you'll build up on the kinematics pipeline. They're all doing something like this under the hood, but they call the kinematics methods to make it effective. So you can compute the signed distance between closest points on two bodies; those are queries you can just ask the geometry engine for. Even better, you can just say: I want the distance between two bodies to be at least some value. There are actually a bunch of details inside there to make that good for solvers. The computational geometry of doing those queries is subtle; I'll talk a bit about it next week when we discuss simulation, but if I have two bodies, you can already imagine that finding the closest distance between them is an interesting computational geometry problem. It seems sort of manageable when I draw it like that; it gets worse when they penetrate and you have to handle the interior, where the answers are typically not clean, even for relatively simple objects. In particular, for a solver, if you have multiple bodies, and the closest point flips from being on this body to that body, then that gives you cost functions that are typically continuous but not differentiable. So you can have things like this in your cost landscape. If it's at the minimum, that's one thing, but it could also be on the way down to a minimum, and it could do some silly things like this; these all make it harder for the optimizer to find its way to a good minimum. So there's hard work in the middle there that these functions do for you, to smooth that out a little bit: not enough to change your answer significantly, but enough to make the numerics good. There are a lot of details behind it. It's all open
source, so you can look at it, but the difference between writing your own like this and getting it all right, versus calling one that's been hardened a little bit, is in all these little nuances. Okay. Non-penetration, this example of bodies moving around and not being allowed to penetrate, is a classic example where you will have local minima; you don't expect to have the nice picture here. Let me try to make that point. I've got to get all my colors right. I've got some scene points over here, my box is here, and I have some part of my world here, say "don't run into the table or the wall" or something like this, and my current guess is that my model is over here. This sort of optimization is going to try to pull it in here, and it's going to stop when it hits the constraint, and there's nothing powerful enough in the description we're giving it to suddenly change its mind and try something fundamentally different. In general, even if your bodies are convex bodies, in the sense that the interiors of the bodies are convex, because we're working in the space outside those bodies, collision-avoidance constraints are the classic example of local minima, of multiple minima, in optimization. So don't expect miracles: it's going to get stuck here, it's going to say "this is the best I could do"; it's not going to pop over here. But when you're close, it can do wonderful things with complicated constraints. I mentioned three constraints to begin with. I mentioned non-penetration, which I gave a very simple example of there. I mentioned static stability constraints; I'm going to talk about that more, and you'll even do a problem set on it, I think, when we talk about the physics engine, because you'll use some of the equations of motion to write that kind of constraint. But the last one, which I think is so important, and maybe
one of the biggest reasons why the point registration problem I think is not sufficient is that gaze constraint the free space constraint right and I think it's so clever so this is the free space constraint so if I have my scene points over here and my camera is over here right I guess I didn't get any returns over there because it's occluded then I should immediately be able to say that any solution that's putting my model over here is a bad solution right even if it's close in the sense of getting those points close it's violating something that I know about the problem which is that to have gotten this point from that camera there must not be anything here okay so there's various ways to write that um the one that convinced me first of how important this was used signed distance functions on a GPU to make it fast and the way that they did it was they actually made a new obstacle they turned it into a non-penetration constraint so they basically said I'm going to go ahead and make an obstacle I'll call it my observation obstacle which is not the convex hull it's the body defined by the rays between my camera and those points and they say I have a new body like this I can compute the signed distance from that body on the fly and ask that the points in my model have a positive signed distance with respect to that constraint okay and I think that's just such an important cue that if you write an objective which says make my points match in a quadratic sense you just can't capture that in fact if you've only reasoned about point clouds you've already lost that piece of information right the depth image in some sense had more information than the point cloud the point cloud has thrown away where the cameras were okay but that is a hugely rich source of information to rule out a lot of possible strange candidates in the perception problem and the folks that did that I think best and for the
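One minimal way to sketch the free-space check described above (the function name, the angular tolerance, and the brute-force loop over rays are all illustrative, not the GPU signed-distance-field version): a model point violates free space if it sits between the camera and an observed depth return along approximately the same ray:

```python
import numpy as np

def violates_free_space(point, camera, ray_dirs, depths, angle_tol=0.05):
    """True if `point` lies inside observed free space: nearer to the camera
    than a depth return along (approximately) the same ray."""
    v = np.asarray(point, float) - np.asarray(camera, float)
    dist = np.linalg.norm(v)
    for d_hat, depth in zip(ray_dirs, depths):
        along = np.dot(v, d_hat) / max(dist, 1e-12)
        if along > np.cos(angle_tol) and dist < depth - 1e-6:
            return True
    return False
```

A point behind the return, or off to the side of all rays, is fine; only a point carving into the ray volume between camera and return gets flagged.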
one that I learned it from was this project called Dart which was trying to solve for Q does that play oh yeah okay they compute these signed distance fields right that's a visualization of the signed distance function and they do it for each body and then they search over Q and they make these incredible demos of even tracking real people's hands with an approximate model of a hand where they're just using this basic idea to solve that problem subject to the kinematics constraints we're using theta instead of pose non-penetration constraints and free space constraints written as non-penetration constraints all with signed distance functions I put it in the middle because I had some cool visualizations of the optimization at work fast forward here okay so when you're seeing that what are you seeing right you're seeing the solver take an initial guess it has some Q it's somewhere on the landscape and it's walking down the landscape and it's slowly snapping into place and they did incredible tracking demos with this kind of work even tracking humans super impressive okay so the property of these algorithms is that if you start with a good initial guess it can snap in but in the context of this demo you actually saw it lose the arm for a second did you guys notice the frame where it lost the arm so if it gets confused then it's sometimes very hard to get back so these things will work incredibly well until they don't and then they fall off the rails so don't put it in a safety critical application maybe but they're super powerful okay and that's an example of using it recursively to solve the tracking problem like we talked about so we took the initial guess we found some minima and then we took a new snapshot from our camera the problem data moved a little bit but hopefully the minima didn't
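That snap-into-place behavior can be imitated in a toy 2-D version (this is not DART, just a sketch with made-up names and a circle standing in for the body's signed distance field): warm-started finite-difference gradient descent on a sum of squared SDF values, solving for a translation the way a tracker re-solves from the previous frame's estimate:

```python
import numpy as np

def circle_sdf(points, radius=1.0):
    """Signed distance from 2-D points to a circle of given radius at the origin."""
    return np.linalg.norm(points, axis=-1) - radius

def track_translation(model_pts, guess, steps=200, lr=0.05, eps=1e-4):
    """Warm-started tracker: minimize the sum of squared SDF values over a
    2-D translation by finite-difference gradient descent from the last estimate."""
    t = np.array(guess, dtype=float)
    cost = lambda t: np.sum(circle_sdf(model_pts + t) ** 2)
    for _ in range(steps):
        grad = np.array([(cost(t + eps * e) - cost(t - eps * e)) / (2 * eps)
                         for e in np.eye(2)])
        t = t - lr * grad
    return t
```

Starting from a guess near the answer it converges quickly; starting far away it can get stuck, which is exactly the good-initial-guess caveat above.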
move too much and we just keep solving and it's trying to track the moving minima in the landscape no the limbs are all using the same tracking method the comparison is I think only one of them is visualized in that yeah so the parameterization is Q which is the joint the generalized positions and it's just searching through Q in order to solve this okay so that was you know the importance of non-convex geometric reasoning in there like I said there are more advanced topics in geometric perception um for instance I really like the convex relaxations where you try to find versions of the hard problem that you can solve very reliably I put a very simple example of one in the notes I think this notion of tracking gets into a nice set of different tools where you can solve the tracking problem differently than you'd solve just the one-shot perception problem like I said you can use differential kinematics instead of inverse kinematics you know and dynamic obstacles if you will and I would say dense reconstruction is also a nearby problem here if you wanted to build a map of the world or build your object model from just having a camera moving around it these both are sort of um you might have heard of SLAM simultaneous localization and mapping in robotics it's really based on the same foundations that we've talked about here very dependent on getting correspondences correct but hugely powerful and important and successful applications of these so if I just reflect back on sort of what we did and where does it fit in the space of manipulation tools um I think geometry is incredibly important I think our depth sensors are superhuman in their accuracy for instance so in terms of refinement of an initial guess they're incredibly valuable but um I remember a few years ago when we started doing more work with deep
learning perception and we were using RGB as much as we were using depth and I remember asking the guys in the lab I said okay if you were to give away either the depth or the RGB if I can take one away which one would you keep and that answer flipped and I think nowadays if people have to pick just one they would say take away my depth and leave me my RGB because there's so much value there's so much information that is not captured in the XYZ values of the point cloud you know context about where objects start and where they stop but also much much more than that um you know we haven't talked enough about RGB but we will okay I do think that it's very natural to combine guesses based on data-driven methods you know and RGB and then refine them with the geometric methods and I think we learned a bunch of cool geometry stuff it's good okay uh see you next time
Robotic Manipulation, Fall 2022
Lecture 11: Deep Perception for Manipulation, Part 1
okay thank you all right how are people feeling about their uh project proposals good shape thumbs up thumbs down still figuring it out no idea somewhere that thumbs down I think I'll stick around a little bit after lecture if anybody wants to chat right now about it but uh just reach out to us and we'll give you any feedback or ideas we can this week okay well welcome back everybody so we're gonna uh talk today about the deep learning version of perception and actually this is one of the harder lectures for me to give because I think the variance of experience in the room is the highest of all the topics we cover right I can sort of assume it might be a few years since you've drawn a free body diagram but everybody's drawn free body diagrams at some point right and here you know I think everybody's heard about deep learning some of you might be training deep networks right now on your laptop while we're in lecture some of you haven't dabbled yet right um and there are plenty of courses here that can teach you the details of deep learning which I would highly recommend I'm not going to try to do any of that even computer vision right there are whole courses on computer vision I'm not going to try to cover that what I'm going to try to do is dial in a few of the key topics enough that if you haven't seen it you can use it effectively and if you have seen it I hope to bring some ideas from manipulation that you maybe haven't thought about from the computer vision pipeline perspective okay more than any other lecture I would say I'm going to be reading you guys you know and trying to speed up or slow down based on what you guys are feeling feel free to ask questions feel free to be like yeah within reason okay so um I do think that the needs of perception for manipulation put
particular pressures on our deep learning pipelines that are unique and interesting let me just remind you of the motivation we talked about we've done a lot of work with geometric perception and we had a whole pipeline for clearing clutter out of the bins that didn't use anything from deep learning right and it's surprisingly good like it can just pick up objects all day long you could throw any objects in the bin and it'll do its thing but it doesn't do everything that we want right it has flaws even if your goal is object agnostic you don't care what objects are moved you might still have problems because it's not just the fact that it doesn't know about objects it can make silly decisions about where to put its fingers right it might pick up the hammer from the corner and that's just a bad strategy because gravity exerts a large wrench that would make the grasp fragile right so when you go to decide what to do where to pick things up you're bringing a lot of background information into the picture that geometry alone doesn't tell you right even physics alone doesn't tell you maybe don't touch the sharp part of the knife for instance right um okay uh in practice in the particular tool chain that we gave you there are quirks like picking up two objects at once because you don't even know where the extents of an object are right and so that's a problem these geometric methods have they take their best effort with partial views uh if you have cameras only on one side there's a back side of the object that you can't see you know your point cloud reasoning is only going to get you so far at some point inherently the only way to get farther is to have previous experience which tells you what's on the other side of the object right so a data driven method becomes sort of a key approach right and it
happens that because of our sensors today it turns out that some of our geometric reasoning falls down uh for transparent objects and other things like that but fundamentally you know there are tasks that just require you to understand objects if you say I don't care about the Cheez-It box I don't care about the spam I want to move the mustard bottles over because someone just bought a mustard bottle and I need to put it in the box and ship it right then that fundamentally requires knowledge of the objects okay so um there's been a revolution in deep learning over the last few years it was powered mostly by data right among other things and compute and good ideas and a lot of things but everybody talks about the big data so one of the first topics I want to talk about here is how do we get to big data for manipulation so the watershed moment for deep learning for computer vision ran through the ImageNet data set and the ImageNet challenge where the team acquired labels for like 128 million images right a lot of images is it 1.28 million yes that's right I was off by a couple orders of magnitude but it's a big number that's the point yeah right so the idea in ImageNet was to crowdsource image labels right but it took a lot of acquiring images cleaning images and then labeling images to make one big data set which powered a lot of computer vision if I want to pick up mustard bottles I don't want to start by labeling 1.28 million mustard bottles so what are we to do right so let's just talk that through how do we get there for manipulation and to tell that story let me just make sure I define our basic concepts right so when you're talking about the standard computer vision tasks um in learning we have to distinguish between a couple different categories right so the first one would be image
recognition I just say is there a sheep in the image is there a dog in the image with what confidence would I say that there is a sheep in the image or a dog in the image okay and that was the classic first task for computer vision that's what ImageNet had a lot of labels for initially ImageNet also got labels for object detection right which is to say not only that there is a sheep but here's a bounding box around the sheep so the output would be two sets of numbers four numbers the pixel location of the lower left and the pixel location of the upper right for instance we'll talk about exactly how that's done too and then there are two different types of segmentation which are very common uh there's semantic segmentation I think the picture tells this very well semantic segmentation says make all of the pixels that are sheep pixels blue and the dog pixels red for instance in this case versus instance level segmentation which is to say you know for every different sheep I want a different label okay so that's the background uh for this which of those do we want for a manipulation pipeline if we just want to pick the mustard bottles out which of those is going to be the most useful object detection is going to be super useful right at least we could then go in and use our geometric reasoning on the point cloud inside the bounding box for instance if we can get a pixel-wise segmentation we could do even better right maybe we can ignore you know we've talked about the limitations of ICP for instance with outliers if you could really take away all of the points that are not associated with the mustard bottle then that's even the dream so instance segmentation is actually the one that's proven to be the most transferable I would say from the computer vision world directly into the manipulation context we're going to spend most
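The take-away-all-the-other-points idea is easy to sketch. A minimal pinhole back-projection (the intrinsics fx, fy, cx, cy are placeholder parameters, and this is an illustrative helper, not the course's actual code) that keeps only the pixels an instance mask selects, so ICP or antipodal grasping runs on a single object's points:

```python
import numpy as np

def masked_point_cloud(depth, mask, fx, fy, cx, cy):
    """Back-project only the pixels an instance mask selects (pinhole model),
    so downstream ICP or grasping sees one object's points, not the whole scene."""
    v, u = np.nonzero(mask)          # pixel rows/cols inside the mask
    z = depth[v, u]
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=1)
```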
of the time on Thursday talking about all of the things that computer vision people normally don't do that are more specific to manipulation but first we'll try to understand how to get instance segmentation into our manipulation pipeline okay and there are actually two ways we'll talk about it at the end so I'd say in both of our pipelines so far we could use and people do actively use instance segmentation so we'll take an RGB image we'll pick out the pixels and we'll then do for instance ICP to find the pose of a known object and then pick it up you know use model based grasp synthesis but you could also use instance segmentation with our clutter clearing grasp selection so if I just took the points in the point cloud that were left after segmentation and then did my antipodal grasping that'll work too both of those are super powerful pipelines okay so ImageNet was mostly about object detection and image recognition the Coco data set was the watershed moment for instance segmentation they got many fewer hundreds of thousands of labels but at the pixel-wise level now imagine going through and labeling every pixel of every image right for a hundred thousand images it takes good annotation tools right and a lot of people on Amazon Turk at the time right and they got it done but that's a laborious task okay uh some of the first annotation tools came out of CSAIL Antonio Torralba upstairs he's got a lab that's done some of the really defining work in this and when this was all starting and he was educating the rest of us about crowdsourcing image labels and stuff like this he said if you look at the quality of the image labels you pay people like a penny to do that right and they're incredibly good he's like if you paid me a penny I'd be just like you know let's go to the next one but somehow people are just really really
meticulous about getting every single pixel right and he said if you looked at the statistics there was an anomaly of someone who had labeled way more than anybody else and it turned out it was his mom he said my mom's an incredibly good labeler yeah okay but that was this revolution where people started to get crowd-sourced large-scale data sets for image segmentation one of the magical things you know if you look at the Coco data set for instance it's got a bunch of different categories you can just go to the website and list them right it's got bicycles cars motorcycles traffic lights that's useful for autonomous driving right birds cats dogs horses sheep those aren't the things that I want to manipulate most of the time there are a few manipulation specific ones there we go plates that's a useful one plates and bottles and cups and forks I'd say if you take something that's pre-trained on Coco it's going to call most of the things in your bin a mug or a bottle or a fork or something like that okay um so this is super useful but it's not quite enough for us to do most of the things we want in manipulation one of the biggest ideas and I don't think anybody really saw it coming it's even hard to justify still given our best theory of deep learning but one of the amazing things that happened in deep learning is the story of transfer learning right so Coco is a hundred thousand image data set ImageNet was 1.28 million images the crazy thing is that if you train on ImageNet first even though it's only got image recognition and object detection labels and you take those weights and then you retrain on Coco instead of starting from scratch and only using Coco you can actually do better on Coco because you trained on ImageNet before okay so this is the idea of transfer learning or fine-tuning okay training on one data set pre-training let's
say on one data set almost always ImageNet for instance because it's big and diverse in the right way improves performance on a downstream task let me just say on Coco here it didn't have to be that way right why would you do better having trained previously on ImageNet than just training directly on the objective from the optimization point of view that's a super weird thing to think right if I were to say if you solve an inverse kinematics problem on a Panda and then you move over there you're going to solve an inverse kinematics problem better on a different arm I'd look at you like you're crazy because that's just a crazy thing to say okay but that is a property that we've seen in the deep network architectures that people are using day in and day out so a standard thing to do even if you change tasks a little bit from object detection to instance segmentation is you take your original let's say ImageNet-trained deep network with many layers right and the last layer is a mapping from some weird you know neural representation to the labels of ImageNet you don't have the same labels in your Coco data set okay so we rip off the last layer and replace it with a fresh last layer which has outputs for the labels that you want in the Coco data set okay and then um you just retrain but you don't retrain from scratch I'll talk a little bit about training but not too much but when you're training you take the weights that you already acquired from ImageNet and you just fine-tune them for these layers and you train from scratch the last one by just running more gradient descent on Coco this is the magic of transfer learning yeah that's right up to the last layers now what we're going to see for instance is for the instance segmentation you actually put a pretty sophisticated last layer that's different but still using the front half of the network
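As a toy stand-in for that recipe (plain NumPy, not an actual ImageNet backbone; every name, size, and number here is illustrative): freeze a "pretrained" feature layer, rip off the old output layer, and train only a fresh last layer on the new task with gradient descent:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

# Stand-in for a pretrained backbone: a frozen feature layer whose weights
# would, in the real recipe, come from ImageNet pretraining.
W1 = rng.normal(size=(16, 2))

def features(X):
    return relu(X @ W1.T)   # frozen: W1 is never updated below

# A new downstream task: the label depends only on the sign of the first input.
X = rng.normal(size=(200, 2))
y = (X[:, 0] > 0).astype(float)

# Rip off the old output layer, bolt on a fresh one, and train only it
# (logistic regression on the frozen features).
F = features(X)
w = np.zeros(F.shape[1])
for _ in range(5000):
    p = 1.0 / (1.0 + np.exp(-F @ w))
    w -= 0.1 * F.T @ (p - y) / len(y)

accuracy = np.mean(((F @ w) > 0) == (y > 0.5))
```

In a real pipeline you would additionally fine-tune the earlier layers with a small learning rate rather than leaving them perfectly frozen; the structure of the trick is the same.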
from ImageNet for instance is enough to do better on the full task now the intuition might be that ImageNet was big enough that you learned something about natural images you learned some intermediate representations that captured the diversity of natural images and this put you in the right part of the neural network parameter space such that staying near there while finding the best instance segmentation was better you leverage the diversity of the ImageNet data set to do better at Coco that's too hand-wavy for my taste but that's the view of it okay so this fine-tuning is one of the biggest things that happened in deep learning it didn't have to happen it's also what provides us the ability to do similar things with smaller data sets in manipulation so now the prospect is I don't have to label around 1.28 million images for manipulation it turns out in many cases you can label tens of images hundreds of images and do surprisingly well sometimes zero right but you have to you know at least that output layer needs to be trained with your new data set okay so how do we make the instance level training data for manipulation there are a few sort of standard tools that I'd say almost every manipulation pipeline is using something kind of like this you know if you want to come up with a lot of pixel-wise labels of objects that you're going to manipulate this is one called LabelFusion and let me tell you the steps I mentioned it once before when we were talking about ICP but now you have more context here okay so you've got a drill in the lab there you want to somehow use this to create a training data set with pixel-wise labels of the drill okay the steps are pretty simple we're going to first just move the camera around the drill in lots of different ways then we're going to do a dense
reconstruction which people do you know a few years ago it was always with point clouds and these fusion algorithms which are a lot like ICP but basically you're going to make all of these views with RGB data into one big point cloud the same way we've used our multiple camera views just imagine doing that with a moving camera right join all the point clouds together it happens that that step is pretty effective at also estimating the pose of the camera we assumed we knew where the pose of the camera was but this will just estimate the pose of the camera okay now we said that ICP isn't strong enough to label the drill if you just give me a huge point cloud which we wanted it to be it's not okay but it turns out with a good guess then ICP is fantastic right so the pipeline is basically make a user interface so that after taking a whole video of images a human clicks like three times they say here are three points on the drill CAD model here are three points in the generated point cloud that's an initial guess humans aren't very accurate at that but then it just snaps into place with ICP now you know a ground truth you know up to ICP resolution pose of the drill but we want instance labels out okay so given the CAD model and given the videos you can just render the drill back into all of the images you took and now suddenly you have a bunch of now these are very correlated images so it's not as good as having a hundred thousand completely different images but although they're very correlated you have perfect labels almost perfect labels of the pixel-wise mask of that drill okay and there are various ways to make that pipeline but something like that has been used over and over again to generate training data for relevant manipulation items what's amazing is that relatively small amounts of training data with a pre-trained instance
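The three human clicks above amount to solving for a rigid transform from three correspondences, which ICP then refines. A minimal sketch of the standard SVD-based (Kabsch) least-squares solution (the function name is made up; this is the textbook method, not LabelFusion's code):

```python
import numpy as np

def rigid_transform_from_correspondences(P, Q):
    """Least-squares rotation R and translation t with R @ P[i] + t ~= Q[i];
    three non-collinear clicked points are enough to pin it down."""
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    pc, qc = P.mean(axis=0), Q.mean(axis=0)
    H = (P - pc).T @ (Q - qc)                 # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, qc - R @ pc
```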
segmentation network works incredibly well in practice yeah does that make sense okay so that's one way that we get ground truth labels for our manipulation the other big way is synthetic data okay over and over again now people have been turning their pipelines more towards using simulation based data generation to train deep learning systems that are going to work in the real world so I motivated our clutter clearing example by this case but if you look at the RGB-D sensor that we've been using the whole time right we've been using the color image and the depth image in order to make our point cloud but there's another image that comes out which is the label image which the real camera of course doesn't have but this is designed entirely for generating training data and this is just a random example of me dropping the objects in the bin and then making perfect pixel-wise masks from the label image it's super interesting that um I mean this doesn't look very realistic right uh we can do a better job we have better renderers that are slower uh and I would absolutely recommend you use that if you want your system trained in simulation to work in reality but even the best rendering is going to have some we call it a domain gap right most of the time a human eye can tell you which one was simulated and which one was real there's a really interesting trade-off that happens though so you can give me some amount of hand labeled data maybe using the LabelFusion kind of pipeline where a human has annotated it it's close but it's slightly imperfect but they're realistic images or you can give me arbitrary gobs of basically free data that are perfectly labeled down to the pixel level with a domain gap it turns out that I think the standard recipe now is to use a lot of simulated data and a very little bit of real data but a lot of times the
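Turning a rendered label image into training masks is nearly free; a minimal sketch (treating instance id 0 as background is an assumption of this sketch, not a convention the lecture specifies):

```python
import numpy as np

def instance_masks(label_image, background=0):
    """Split a rendered label image into {instance_id: boolean mask};
    perfect pixel-wise training masks straight from the simulator."""
    ids = np.unique(label_image)
    return {int(i): label_image == i for i in ids if i != background}
```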
simulated data is actually enough to outperform the real data having a domain gap but absolutely perfect labels can actually be better than having real images that are imperfectly labeled there's a well-known story that most of the big real data sets even MNIST which is the handwritten digit data set that everybody starts with in deep learning have errors in the labels right so there's somehow a ceiling on the total performance the learning system can get unless it learns the errors which were probably almost random right uh so human labels are imperfect synthetic generation can get around them so I went through and exactly from the clutter clearing setup I just dropped like 10 000 images worth I randomized the initial conditions picked from the random bin dropped the objects waited until they settled took a picture rendered both this and the object labels the masks the object instances a handful of sort of metadata and I just made a big data set and you'll use it on your pset and we'll use it in the examples today okay I did 10 000 images which was probably way more than I needed but I was just going for it you know rather have too many than too few and we're going to use this to train our Cheez-It box and mustard bottle detector okay questions about that so far yeah good good question so if I could render so that there was really like no domain gap I think I would probably always pick that but the domain gap um is more subtle than you might think so um I emphasized the rendering quality you know so oftentimes like the shadows would look just a little artificial or the lighting you know is a little too spotlight and the real image is more diffuse you know whatever um it's almost always the material properties not the geometry that makes the rendering hard but that's not the part that um I think the bigger
part of the domain gap that you might not think about is just the random way that I made these um images right I dropped objects from the sky and they landed in some initial condition but if you look at real sinks probably people kind of put the plates down first and then the mugs there are some statistics of the environments that I probably didn't capture perfectly in the sink and I used 10 objects right the ones I had 10 CAD files for and the real world is open world anybody could put anything in the sink so I think that's the domain gap that's the bigger one it's more about the art assets and the distributions over initial conditions than the render quality these days yeah sure that's an awesome question so yeah what about the noise people have been increasingly making higher fidelity noise models a standard thing to do that tends to work really well is if you're doing physics based rendering PBR right you can actually just render from two viewpoints and compute the depth from the pair and add noise that way you can sort of capture even the fact that transparent objects are missed you can capture the fact that sides of objects are often low quality people are making those renderers and the better simulators the rendering based simulators do it I do think it makes a difference um another thing people have done is they've trained networks to noise the image so you take a perfect image coming out of a simulator and you just basically make it more artificial maybe you just learn the noise model those can work too yeah I think it does make a difference good okay so in most of the deep learning for perception world things move so fast that if I told you about a particular algorithm today it'd be obsolete by tomorrow right but actually in instance segmentation it's not true there's an algorithm that came out in 2016 2017 that's
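A hedged sketch of such a depth-noise model (the quadratic-in-depth error and the dropout rate are illustrative choices standing in for a calibrated sensor model):

```python
import numpy as np

def corrupt_depth(depth, rng, sigma=0.001, dropout=0.02):
    """Toy depth-camera noise: Gaussian error that grows with depth squared
    (roughly how stereo error behaves) plus random missing returns (zeros)."""
    noisy = depth + rng.normal(loc=0.0, scale=sigma * depth ** 2)
    noisy[rng.random(depth.shape) < dropout] = 0.0   # 0 encodes "no return"
    return noisy
```

A more faithful model would also drop returns preferentially at depth discontinuities and on grazing or transparent surfaces, as mentioned above.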
exactly when it came out and we're still using it today it's definitely going to win some sort of test of time award okay it has had more staying power than anything else in the deep learning world and that algorithm for instance segmentation is Mask R-CNN and people use it in robotics all the time so let's make sure you get the user's level perspective of what's happening in Mask R-CNN how many people know Mask R-CNN how many people don't know Mask R-CNN all right yeah so that's my dilemma for the day but that's good okay so if you're thinking about making a neural architecture let me say a few of the high level interesting things about it right so we think about neural networks typically as you have some image coming in here object detection for instance would just be a label coming out maybe it would be a vector that's like a cat a dog an elephant for all the Coco tasks right just one vector coming out that's not what we're doing here we're doing something more clever here which is you have an image coming in and some variable number of outputs right it's going to tell me one output per potential object recognition and for each of those object recognitions it's going to tell me what it thinks the pixels are right so the first question you have to ask yourself is how do I go from something that maps an image to a scalar to something that maps an image to this rich output and it's pretty simple but it just takes a few steps the first step was fully convolutional networks which proposed the architecture that seems to have staying power using convolutional kernels to go from an entire image in through my neural network to an entire dense image out okay pixel-wise images out the second component which is pretty common in a lot of the object
detection and visual-recognition work, is these region-based vision systems, which is what I started to show on this next slide. The simplest way to think about it: imagine I want an arbitrary number of detections to come out. One way to achieve that is to run your algorithm on a fixed-size window, a fixed-size image, but run it for lots of possible windows all over the image. Maybe I'm looking for a car; every time the sliding window lands on top of the car, I get a car detection that's above some threshold, so I create a new output for that window. That's the basics of a region-based convolutional architecture, and R-CNN was the one that Mask R-CNN obviously builds on. Mask R-CNN does something much more clever than that, because you don't even know what size your detection is going to be, right? You might have to try small boxes, large boxes, whatever. There were algorithms for proposing regions that would just look at the image, look at its statistics, look for edges, look for blobs, and propose boxes that are likely to contain a detection, without knowing anything about the object; they would just guess: hey, try this box, try this box, try this box. That's what the R-CNN kind of architectures would do: take a first step that proposes a bunch of boxes, then do the object recognition, the object detection, inside each of them, and you get a variable number of outputs. Getting fancier than that: in the first versions of this pipeline, even when the recognition side was using deep learning, the region proposals were classical computer vision algorithms that, like I said, were looking for blobs or edges. If we go
from Fast R-CNN to Faster R-CNN, they switched: they pulled out the part that was classical computer vision and put another neural network in there to propose the regions. So the first neural network looks at the image and just says, here are some possible regions to consider, and then the next part takes each of those regions and runs the basic recognition on it, because that bounding-box proposal is very general and potentially very weak. The other important step that happened: inside the region of interest, the network might not only say that this is a sheep, it might also say you should refine your bounding box by this amount. So if I had some bounding-box proposal like this, it might say, yes, there's a sheep, and your bounding box really should have been this inset box. That made a big difference in getting more accurate boxes: your region-proposal network doesn't have to do all the work; it just has to get you close enough that a correction can make the box tight. And Mask R-CNN did that, plus adding the natural next step: now you have pixel-wise labels for all of the possible detections, fit in each of the regions and labeled appropriately, and the whole thing goes through the deep learning pipeline to get trained end to end. And you get amazing networks out, like really amazing. I have a daughter that was doing FIRST robotics, the FIRST LEGO League, and I was like, just go on Colab, Google Colab, take the PyTorch tutorial, take a bunch of photos of the Legos with your phone. I showed her how to annotate maybe 100 of them, and she trained a Mask R-CNN, and by God if it didn't recognize the Legos really well, right off the PyTorch tutorial. It's incredibly good, incredibly robust. So many people have success right out of the box with Mask R-CNN's default parameters.
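The bounding-box refinement step described above ("your bounding box really should have been this inset box") is usually parameterized as center shifts plus log-scale size changes. Here's a hedged sketch of applying such predicted deltas to a proposal; the (dx, dy, dw, dh) convention follows the common Faster/Mask R-CNN style, but exact conventions vary between implementations.

```python
import numpy as np

def apply_box_deltas(box, deltas):
    """Refine a region proposal with predicted deltas, Faster/Mask R-CNN style.

    box:    (x1, y1, x2, y2) proposal corners.
    deltas: (dx, dy, dw, dh), where dx/dy shift the box center as a fraction of
            the width/height, and dw/dh change the width/height on a log scale.
    """
    x1, y1, x2, y2 = box
    w, h = x2 - x1, y2 - y1
    cx, cy = x1 + 0.5 * w, y1 + 0.5 * h

    dx, dy, dw, dh = deltas
    cx, cy = cx + dx * w, cy + dy * h        # shift the center
    w, h = w * np.exp(dw), h * np.exp(dh)    # rescale width and height

    return (cx - 0.5 * w, cy - 0.5 * h, cx + 0.5 * w, cy + 0.5 * h)

# A proposal that is centered correctly but too big; the head predicts a
# tighter box by halving the width and height:
refined = apply_box_deltas((10, 10, 50, 50), (0.0, 0.0, np.log(0.5), np.log(0.5)))
# refined is approximately (20, 20, 40, 40): the inset box.
```

This is why the region proposals only need to be "close enough": the learned deltas do the final tightening.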
We didn't do any tuning of learning rates; we didn't do any tuning of region-proposal parameters, anything like that. It just worked. It's incredible, and six years later people are basically still using it; they've made some refinements, there's a version two, but it's incredibly good. I don't want to talk too much about the architecture, but at a high level, does that tell the Mask R-CNN story a bit? Yeah. Okay, so I similarly don't want to tell the entire deep learning optimization story, but I want to say a few words to connect this to the type of optimization landscapes we've talked about, since what's happening here is an optimization problem being solved. When we talked about nonlinear optimization earlier, we said we're going to minimize f(x), for instance, and maybe f(x) was this complicated landscape. One of the ways you can do that is with a gradient-descent kind of algorithm: you have some initial guess, you go downhill, and you land at a minimum. It's not guaranteed to be the global optimum, but it gets you somewhere. For inverse kinematics this can be a real problem: we can get stuck in bad local optima; there might be a good solution for inverse kinematics and we don't find it with gradient descent. Deep learning is very much using gradient descent too, a stochastic version of it. The standard way it's stochastic is just by subsampling: if I have a huge dataset, my 10,000 bucket images, I take a small subset of them, 32 of them or something, at a time, pass them through the network, and pick a random 32 each time. That gives me a random evaluation of my gradient, and I do stochastic gradient descent to go downhill. When you hear people talk about SGD, that's stochastic gradient descent. But for the purposes of our discussion here, it's almost the same.
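A minimal sketch of the minibatch idea just described, on a toy least-squares problem instead of images (the linear model, the data, and the batch size of 32 are stand-ins for the real pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy supervised problem: recover w_true from noisy linear data.
w_true = np.array([2.0, -3.0])
X = rng.normal(size=(10_000, 2))            # stand-in for "10,000 images"
y = X @ w_true + 0.01 * rng.normal(size=10_000)

w = np.zeros(2)
batch, lr = 32, 0.05                        # minibatch of 32, as in the lecture
for step in range(2_000):
    idx = rng.integers(0, len(X), size=batch)   # pick a random 32 each time
    Xb, yb = X[idx], y[idx]
    grad = 2 * Xb.T @ (Xb @ w - yb) / batch     # stochastic gradient of the MSE
    w -= lr * grad                              # go downhill

# w should now be very close to w_true, despite never seeing a full-batch gradient.
```

Each step's gradient is a noisy estimate of the true one, which is exactly why the descent path meanders but still lands near the optimum.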
What can happen with the stochastic version? In a general optimization problem you would think it has a lot of the same properties: it might walk downhill a little slower, it might take a meandering path down to the optimum, and it might also bounce out of a local minimum by luck, but it's roughly the same sort of picture. So, just like fine-tuning and transfer learning were amazing things that happened in deep learning, something else amazing happened, which is that my ability to train this with high confidence and get good solutions, despite it solving this nonlinear optimization, somehow works. That's been one of the mysteries, and people have been doing a lot of work in deep learning theory to understand why. Do people know the basic story of that? There are a couple of big ideas. One of them is overparameterization. The idea is that the pictures I drew here are wrong; they're not the pictures for deep learning, because we have so many parameters, even compared to our data, that the landscape is so high-dimensional that even though it has many nooks and crannies, with high probability they're connected in some weird way in the super-high-dimensional space. I can talk as much or as little as you want about this, and there are people who know much more about it for sure, but the basic story, the simplest version, is this: imagine the second-to-last layer in my network is really big, let's say a million possible neurons, and then I have a function I'm trying to learn from x to y. Even if the first part of the network is completely random, if I have random vectors there in some high-dimensional space, then I can actually, with just my last
layer, fit most functions almost perfectly. And this last layer is typically just a least-squares problem, so I can expect that to work, and I can expect my training error to go to zero for big, complicated networks, just because I started with a ridiculously large, even random, network. That idea is called the neural tangent kernel, if you want, or ultra-wide networks. And then the second thing that seems to happen is something people refer to as the implicit regularization of stochastic gradient descent: let's say after I've gotten my training error to zero, with sort of random vectors in the early layers, gradient descent seems to do something good in the null space of the optimization. It makes the weights solve the problem not just in an arbitrary way; it somehow chooses vectors that are not random, in a way that generalizes incredibly well to new problems in the fine-tuning story. And that's been a big object of study in theoretical machine learning: trying to understand why and how the things we're accidentally doing with gradient descent on these datasets lead to strong generalization. But there are two points I want you to have from the user perspective. It is not the case, and some of you might disagree with me on this, that you can put an arbitrary cost function on the end of these networks and experience success. This pixel-wise predictive loss, the pixel-wise cost they use in Mask R-CNN, and the particular architecture they used, leveraged these ideas in a deep way and was very successful; you could mess it up very easily. It can't learn everything arbitrarily; if it could, then I would be using it for inverse kinematics and I would probably be mining Bitcoin or something like this, right? There are things that we've figured out it does extremely well, and there are things that don't fit in that framework yet.
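The "random first part, trained last layer" story above can be demonstrated in a few lines. This is a sketch under the lecture's assumptions: random, completely untrained ReLU features play the role of the very wide second-to-last layer, and solving a least-squares problem for the last layer alone drives the training error to (numerically) zero.

```python
import numpy as np

rng = np.random.default_rng(0)

# A target function the "network" is never told about.
x = np.linspace(-3, 3, 200)[:, None]
y = np.sin(2 * x[:, 0]) + 0.5 * x[:, 0]

# "Completely random" first part of the network: a wide layer of random ReLU
# features, never trained. Far more features (2000) than data points (200).
n_features = 2000
W = rng.normal(size=(1, n_features))
b = rng.normal(size=n_features)
Phi = np.maximum(x @ W + b, 0.0)        # random hidden activations

# Training only the last layer is just a least-squares problem on those features.
w_last, *_ = np.linalg.lstsq(Phi, y, rcond=None)
train_err = np.max(np.abs(Phi @ w_last - y))
# train_err is essentially zero: the overparameterized model interpolates.
```

With 2000 random features and only 200 training points, the feature matrix almost surely has full row rank, so an exact fit exists and least squares finds it; that is the training-error-equals-zero half of the story in miniature.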
A good example, actually, is pose estimation; I mentioned this before. If you trained a network for pose estimation and you parameterized rotations badly, the network would have a very hard time learning, but other pose parameterizations work well, and that's because the landscape, even in the high-dimensional space, is more suitable for learning. So unfortunately the story is a little bit complicated, but it is all connected; there's one truth here, which is that I'm trying to do nonlinear optimization, and these neural networks are setting up a rich landscape that works shockingly well when my kid wants to identify Legos. Okay, questions at that level of detail? Yes, we're increasingly understanding this; these are good things, and people would argue about which are the most important features, but I think both of these ideas have a lot of consensus behind them. On the overparameterization story, there are two different pieces of the puzzle. The first is that for most deep learning problems we put ourselves in a regime where we get training error equal to zero: on your training set, you expect to basically perfectly recover your desired outputs. The reason that's possible is the overparameterization story; that's the training-error-equals-zero part. Why does it generalize to new things outside your training set? That's the implicit regularization story; that's the generalization part. Both are amazing, and both deserve more study. Great question, yeah. So, okay, you've probably heard of overfitting. The classic picture of overfitting would be: if I'm trying to regress some points, maybe the right function is something simple like this, but the points had a little bit of noise; really, I wanted some simple function to come out.
Given that the data was generated with a little bit of noise, my overfit solution, if I'm really trying to get the training error to be almost zero, might, in order to fit the function, have done something wiggly like this, which is not the solution I was looking for, but it set the training error to zero. The story of implicit regularization is that training tends to somehow find solutions that are more like the simple one, that seem to generalize better, and not the wiggly one. There's very good theory on the details. I mean, the noise story: how does it do both of those at the same time? It sounds inconsistent, but it's actually not. People understand that the function it is learning, even with noise, at least in my mental image (and I think this is a continually moving story), is actually something like the smooth function, but if you zoom in it's learning little delta functions that explain the noise. So it does actually get the training error to zero, but it learns these smooth, beautiful, generalizing functions. The theory of deep learning is an awesome topic; this is a poor representation of it, but maybe just enough for you to put it in context. I'm happy to take more; those are useful questions. Okay, so can we play with it for a second? I made three notebooks; all of them are there now. Deepnote is completely awesome in almost every way: they give free compute, which is incredible, it's got a good interface, and most of the time it works. Their GPU support is not free; that's the one catch if you need it for your project. Honestly, there's a chance I could ask and get you a special deal, because they really do like the class; it happens that one of the people high up in the company did robotics, so we've got an in. Okay, so that's great.
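The overfitting picture from the regression discussion above, in code: a polynomial with one coefficient per data point interpolates the noisy points (training error essentially zero), while a simple line does not, which is exactly the wiggly-versus-smooth trade-off. The specific degrees and noise level are illustrative choices, not from the lecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Noisy samples of a simple underlying function: truth is a line, plus noise.
x = np.linspace(0, 1, 8)
y = 2 * x + 0.1 * rng.normal(size=x.shape)

lo = np.polyfit(x, y, deg=1)   # the simple model we really wanted
hi = np.polyfit(x, y, deg=7)   # one coefficient per point: can interpolate

def train_err(coef):
    """Worst-case training residual for a fitted polynomial."""
    return np.max(np.abs(np.polyval(coef, x) - y))

# The wiggly degree-7 fit drives training error to ~zero by also fitting the
# noise; the line leaves residuals about the size of the noise. Off the
# training points, the wiggly fit is the classic overfitting picture.
```

Implicit regularization is the claim that SGD on overparameterized networks behaves more like the line here than the degree-7 interpolant, even while matching the interpolant's zero training error.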
But for these three notebooks we actually point you to Google's Colaboratory instead of Deepnote, and the reason is that Colab is just another online server, from Google, that happens to have a different pay structure and gives you GPUs; for training a deep network, you want a GPU. I'm going to run it locally here, but it runs fine on Colab. Okay, so there are three notebooks. One of them is the data-generation notebook, which you probably don't ever want to run; you can look at it, and when you want to use it for your own pipeline, that's great. It's the thing that runs the clutter clearing for a while and generates a huge file on my disk with all the labels and everything. Then there's the training notebook, which runs for a long time and trains the neural network given pre-trained weights from COCO, which were themselves pre-trained from ImageNet; that works better than training from scratch. The last one is the inference notebook, which is what I'm going to run now: I put a new image in, I drop my bins again, new image in, and render the output, and we'll see how it works. It's interesting. This is using PyTorch; if I didn't say it, we're using PyTorch for these parts of the class, although one caveat I was asked to pass along: PyTorch is great when you're training, but when you're running on the robot you should use TensorRT or something else that compiles it into much faster code. PyTorch is not the fast inference engine; it's the great, flexible training thing, and you should compile it down into more optimized code for runtime. Okay, the output of Mask R-CNN: if you just give it an input, what's the output? (Oh, sorry, that's the model.) If you look at the output, it gives you this big
dictionary. It can do multiple images at a time; for each image it gives you a list of boxes that are possible detections, a list of labels for those boxes, which are the numbers I assigned in my training data, and scores for how confident it is. It looks like in this one it was very confident, very confident, very confident, and then not very confident at all, so we'll probably see something ridiculous on the last detection, because it's at 0.05 confidence compared to 0.99 for all the others. And then it gives me the images, which are the masks. That's a crazy thing to come out; let's just appreciate for a second that it's absolutely nuts that a network would produce all that stuff. I'm old now, so I remember we had projects with Yann LeCun, for instance, a while ago, and Yann LeCun used to come to our meetings and he would bring a camera around and train a little convolutional neural network on the fly, and his demos were always awesome. But I completely admit that every night I would go home and be like, okay, but that just doesn't scale. What are you going to do, have a million outputs for all the different possible labels on your network? No one's ever going to do that. And I was just wrong; people do do that. You have cats and dogs and elephants and everything. He always had three labels that he was training on the fly, and I thought that was great, but these things are enormous: massive, millions of parameters, millions of outputs, and it's all on the GPU and super fast. I'm running on the CPU now, by the way. So this is my image in, and these are my masks out. Take a look: mask number one, amazing, right? It found a mustard bottle. Mask number two is probably my Jello, the other Jello. And the last one is ridiculous, because they told me it was going to be ridiculous.
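Since that output dictionary and its confidence scores drive everything downstream, here's a small, hypothetical helper for filtering one image's detections by score. The dict layout mirrors the boxes/labels/scores/masks structure just described, using plain Python lists for clarity (torchvision's Mask R-CNN actually returns tensors, so treat this as a schematic):

```python
def keep_confident(detections, threshold=0.9, class_names=None):
    """Filter one image's Mask R-CNN-style output dict by confidence score.

    `detections` is assumed to look like the dict described above:
    {"boxes": [...], "labels": [...], "scores": [...], "masks": [...]},
    with one entry per candidate detection.
    """
    keep = [i for i, s in enumerate(detections["scores"]) if s >= threshold]
    out = []
    for i in keep:
        label = detections["labels"][i]
        out.append({
            "box": detections["boxes"][i],
            "label": class_names[label] if class_names else label,
            "score": detections["scores"][i],
            "mask": detections["masks"][i],
        })
    return out

# Mimicking the lecture's example: three confident detections and one at 0.05.
fake = {
    "boxes": [(5, 5, 50, 90), (60, 10, 100, 80), (20, 40, 70, 95), (0, 0, 10, 10)],
    "labels": [1, 2, 1, 3],
    "scores": [0.99, 0.99, 0.99, 0.05],
    "masks": [None, None, None, None],   # stand-ins for the per-pixel masks
}
names = {1: "mustard_bottle", 2: "gelatin_box", 3: "cracker_box"}
confident = keep_confident(fake, threshold=0.5, class_names=names)
# The 0.05-confidence detection is dropped; three remain.
```

The class names and box values here are invented; the point is just that a score threshold is the user-level knob for "ridiculous" low-confidence detections.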
Right, that one looks weird... oh, wow, that's the occluded mustard bottle, and it gave a pretty darn good mask; look at that, you can actually see the box cut out of it. That's actually incredibly good, and I didn't change anything; this is just the default parameters of everything. Okay, now let's check the object detections. Okay, that's a bit embarrassing: it completely missed the Cheez-It box on the side. That is pretty funny, but all right, the rest of them are incredibly good. I wonder, maybe I should have changed my region-proposal network parameters; maybe it didn't have a proposal big enough to get the big flat Cheez-It box. It's probably dialed in for things that are about the size of a dog in a picture. Amazing, right? So each one of those gives a bounding box, which is the corners in pixels, and the label, which I can associate back with my text label. You can run it a few more times: it finds spam cans, it finds the other cans, the potted meat, the sugar box. It's incredible, absolutely incredible. Amazing. Okay, so obviously we should use this in robotics; it's just so good. Any questions on that? We could poke at it if you want; I can't retrain it on my laptop now, but I could do any inference queries you're curious about. Yeah, sure. So, okay, let me do it so that it has the right image. The caveat here is that there are on the order of a thousand or more proposed regions, and I didn't visualize all of them. I think your test is extremely good, but... oh, okay, well, we did learn here that the boxes are at least big enough, so my concern about them not being big enough was wrong. But I can't say that it didn't have a box around the Cheez-It; I would guess it probably did. There are on the order of a thousand region
proposals. It could be that I never had a flat Cheez-It box in my dataset; it could just be that it's not a perfect network. That is the one thing: as powerful and amazing as it is, when it doesn't work, the only recourse we really have is to add more data. You could change parameters and retrain and do some hyperparameter sweeps, but for a lot of the stuff we're talking about in this class, if it doesn't work, we can tell you why and we can tell you how to debug it; this one, I can't. Yes? So the question is, why wouldn't it produce corn flakes or something from the COCO dataset? That's a good question. It will never do that, because I've ripped off the COCO head, and the last layer is specific to my dataset, so it will only ever say the things in my dataset. It might be biased towards things that were in the COCO dataset because of its pre-trained layers. And it has confidence thresholds: it will only report that an object was there if it was above some confidence threshold, so it might be that there's a great box right around the Cheez-It and it's just slightly below that threshold. Good. Okay, I want to land a few more high-level ideas. Well, let's take our quick stretch today; that's a good time. Okay. So, if you've learned one thing so far: I think Mask R-CNN is going to be a tool that you will use, and if you understand its inputs and outputs, you already have an incredibly powerful tool at your disposal; maybe you also picked up a few of the buzzwords from deep learning theory that I would encourage you to study further. But I still haven't told you the complete story about how to get big data for robotics. I told you two examples: the LabelFusion kind of idea, where we annotated our lab-captured data, and then the synthetic data. Both of those are
somewhat limited, because, for instance, I only have a handful of different object models that I put in my simulator; I can generate as many images as I want, but I don't have the diversity of the real world, and similarly, what I can capture in lab is not going to represent that diversity. If I want to think about open-world manipulation, I want a robot that I program, that leaves, and that can manipulate anything in your house. I haven't given you an answer for that yet, and I don't have a complete answer, but this is what people are working on hard now: how do you build this kind of tool chain so that it can manipulate incredibly large classes of objects? There was one more point I was going to make, but I think you know, roughly, that you can go from the Mask R-CNN output to the model-based grasp selection or the antipodal grasp selection; I'll come back to those at the end to close things. The big new trend is self-supervised learning; I think many of you will have heard of that. In particular, we have this amazing property that if I've trained on ImageNet, on object detection for instance, then I can use those weights to help me do better on pixel-level instance segmentation: I trained on one task, on a relevant dataset, and did better on a different task. So if you open up your mind and say, well, why did I pick object detection, which required human labels, for my first task? Why don't I pick something that doesn't require human labels, that I can auto-label, for my first task? I just need some surrogate task such that the network, in order to achieve it, learns something relevant in those first layers to copy over. So the new thing is: find clever new tasks that don't require human supervision, unleash them on the entire internet, and use those backbones as pre-training for your runtime system. One of the most famous examples is SimCLR, where the idea is very simple: I'm going to take my
original image of my dog and just start perturbing it in lots of different ways: crop, resize, whatever. Google basically took a shotgun approach to the research, which is powerful and good: let's try every possible perturbation, then take the ten that worked best, roughly, and call that our algorithm. They absolutely tried all kinds of crazy stuff. One of the dominant ways to do self-supervised learning is to set up something where you take the training data and compare and contrast things that you know to be true or false; this is the contrastive learning paradigm. The animation is a little annoying, but hopefully it gets the point across. Instead of having labels for the dog and labels for the chair, it turns out to be enough to say the dog is not the chair. If you can say that this image and this image are the same, because they're just perturbations of the same image, which you know by construction to be true, and that those are not the same as the perturbations of the chair, then you don't need any human to annotate that. And what's amazing, the general trend, is that these often don't do quite as well in peak performance, but they come for free; you can feed them so much data that they do incredibly well, and at some point they can actually outperform some of the original human labels. Another example that's a little closer to robotics, which tends to learn representations that are more about 3D understanding of the world, is monocular depth estimation. This is actually one of the ones that works best right now, but the original idea is even simpler: imagine I have two cameras.
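Backing up to the contrastive objective for a second: "pull perturbations of the same image together, push different images apart" can be written as a small cross-entropy over similarities. This is a simplified NT-Xent-style sketch in NumPy; real SimCLR also contrasts against within-batch negatives and trains a full encoder, and the embeddings here are random stand-ins rather than network outputs.

```python
import numpy as np

def contrastive_loss(za, zb, tau=0.5):
    """Simplified contrastive (NT-Xent-style) loss for paired augmentations.

    za[i] and zb[i] are embeddings of two augmentations of the SAME image;
    every other pairing is treated as a negative. This sketch only contrasts
    za against zb, a simplification of the full SimCLR objective.
    """
    za = za / np.linalg.norm(za, axis=1, keepdims=True)
    zb = zb / np.linalg.norm(zb, axis=1, keepdims=True)
    sim = za @ zb.T / tau                  # temperature-scaled cosine similarities
    # Row i should "classify" its own partner i: cross-entropy vs. the identity.
    logprob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(logprob))

rng = np.random.default_rng(0)
anchors = rng.normal(size=(8, 16))
positives = anchors + 0.01 * rng.normal(size=(8, 16))  # "same image, perturbed"
loss_matched = contrastive_loss(anchors, positives)
loss_random = contrastive_loss(anchors, rng.normal(size=(8, 16)))
# loss_matched is much lower: matched pairs are easy to tell apart from negatives.
```

Notice that no class labels appear anywhere; "these two are the same, those are different" is the entire supervisory signal, which is the point made above.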
From the two cameras I can figure out the depth using a stereo algorithm, and I want to train a network to be able to guess that stereo depth from one camera. So I just carry my two cameras around and use the pair to compute the stereo depth as the ground-truth answer, but train the function from only one camera: monocular depth, from an image, to predict the depth. The newer architectures are a bit more complicated: they're doing reconstruction; you take your current frame and a second frame, either adjacent in time or alongside, and try to predict the relative geometry. These work incredibly well, to the point where people can take just an RGB camera and move it around as if it's a depth camera, except that, by the way, it still does well on blank walls, and it still does pretty well on transparent objects and things like that. It's incredibly good. Sorry, did you have a question? Can you explain this? Yeah: the cost function, instead of saying "it is a dog" or "it is a chair," says that the dog is not a chair, and that the pieces of the dog image that have been perturbed are the same. In these contrastive learning setups you typically take two images and push them through the network; the ones that are the same, you try to make close together in some representation space, and the ones you know to be different, you push apart. Couldn't you take a table and a chair that have features which are a lot alike? I wouldn't ever say you couldn't do that with a deep learning perception system; what I would say is you might need more data, or a bigger network, or more hours of SGD. It's a matter of levels of accuracy, yes. Okay, that's a really good point. So yeah, the point, to say it into the microphone, is that I could have accidentally picked two
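The free "ground truth" in the stereo-supervision idea just described comes from the classic pinhole relation depth = f * B / disparity. A tiny sketch of computing that supervision signal (the focal length and baseline numbers are made up but plausible):

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Classic pinhole stereo relation: depth = f * B / disparity.

    The idea from the lecture: carry two cameras, compute disparity (and hence
    depth) from stereo matching, and use that as free ground truth to supervise
    a network that predicts depth from just ONE of the two images.
    """
    disparity_px = np.asarray(disparity_px, dtype=float)
    depth = np.full_like(disparity_px, np.inf)   # inf marks "no stereo match"
    valid = disparity_px > 0
    depth[valid] = focal_px * baseline_m / disparity_px[valid]
    return depth

# A 600-pixel focal length and a 6.5 cm baseline (illustrative numbers):
d = depth_from_disparity([39.0, 78.0, 0.0], focal_px=600.0, baseline_m=0.065)
# Approximately 1.0 m, 0.5 m, and inf (no match) respectively.
```

The monocular network then never sees the second camera at runtime; the pair only existed to label the training data for free.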
pictures of the same dog in my dataset, and then the contrastive learning is actually sort of wrong, because I could have said this dog is not that dog. The answer is basically that that happens rarely enough in a big dataset that it's okay. Good point. And note there's no labeled dog, no labeled chair, anywhere here; it's just raw images. Good point. Okay, so Leroy has been playing with these self-supervised paradigms and asking some of the harder questions about what representations mean for manipulation, and it's actually super interesting. Everybody's on this quest for finding the big-data moment in manipulation, and we've been working with Amazon, who has big data and is doing manipulation, and Leroy started asking: do they have enough data, and how should we process that data, in order to learn representations that would be incredibly powerful? The interesting thing that happens in a lot of these real-world applications is something called distribution shift. You can imagine, if you have similar robots deployed in different warehouses, that they have very similar data coming in, but there's a big question of whether you should train on all of it, because the warehouses are a little bit different; I'll show you a couple of specific ways they differ. Should the robot in New Jersey train on the perception data from Boston, yes or no? It seems like probably yes, it's more data, but the boxes in Boston might be statistically a little different than the boxes in New Jersey, or the boxes at peak season might be more dense on the conveyor belt than the boxes off-peak. They have real shifts in distribution, so there's this question:
more data is not always better. If you have data from a different distribution, it can sometimes hurt you; sometimes it doesn't, but we know that it can. So it's a big question: how much should you share data, how much should you scale up, how much should you specialize? And even more interesting, though I won't talk much about it, is that you're doing this training in a decentralized way: you have many different copies of the neural network potentially living in different places, so how do you do the gradient-descent updates on that? But the distribution shift is very real in their datasets. They have different lighting conditions, different densities, different robots and upstream handling systems; the sensors respond differently at different altitudes and things like that; they have different suction grippers in different places. If you look at a few different locations, you might see very different types of packages or densities of packages: this place does mostly those envelopes that you can almost recycle, and this one has more boxes, for instance; sometimes the scene is very dense, sometimes very sparse. So it's this interesting question: self-supervised learning seems to work, but how exactly do you deploy it at that scale, and is it going to get us big data in manipulation? And this is the takeaway: if you just train directly, with supervised learning, on image segmentation on your local data, you can actually overfit. I said overfitting isn't as big an issue anymore, but you can still overfit to your data and have worse performance when you apply the model across a distribution shift; you have limited robustness to distribution shift. If peak season comes,
your distribution moves, and you can have overfit to your data. And there's a really big thing that seems to be happening; you've seen it with GPT-3 and DALL-E and Stable Diffusion and all this craziness. There's something about these self-supervised objectives that seems to learn something more general about the internet, or about the data in the warehouses, and they tend to be more robust to distribution shift. This is the big question in self-supervised learning for manipulation: what's the right way to learn these representations, using as much data as possible, so that they work for lots of downstream tasks? The self-supervised objectives seem to learn representations that transfer more generally than the supervised ones. It's almost like saying "this dog is not a chair" forces me to learn something general about images, whereas saying "it's a dog" lets me special-case the dog. And the last thing I'll mention: all the GPT-3-style stuff is coming into manipulation in a big way. Even if we can't get enough labeled or self-supervised data locally, people are asking the big question of how to take everybody else's foundation models, the models that have been trained on huge language corpuses, huge vision corpuses, huge text-and-vision corpuses, and somehow use that information to augment our small data in robotics. CLIP is the one of the big models that I think most roboticists have picked up. CLIP is a vision-to-text model, and immediately every roboticist said, oh, I could take my image and put it into that encoding, or I could take a sentence and put it into the encoding, and people are finding lots of different ways to use it. The takeaway is that, compared to the Mask R-CNN pipeline I talked about, which does reasonable things on the labels you've trained, these foundation models are getting us to
the point where out of the box you now have potential labels from the entire internet like kind of the captions that everybody has put on the internet you could just walk around the lab and point it at stuff and there's a chance it will label some it will give you a sensible label out of the box in the open world right not it's not perfect it's not perfect but it's mind-blowing it's mind-blowing so how do we use even for the instance segmentation problem how do you leverage super large data trained with self-supervised learning even on a surrogate task to make us pick up any object and manipulate any object okay good so the the instance segmentation is a very much a geometry computer vision task it is not enough for manipulation if I want to know how much the object's weight right Flickr doesn't have a data set that where people everybody I mean actually probably does you can probably say how much is that way and it would say you know 2.7 kilograms it would probably be like almost right but I'm not going to count on that for my pipeline right um so if I if you want to know like how what's the friction where is a good place to pick this up right that I don't think we have the answer directly from this that's why I tried I wanted to say in this lecture I wanted to give you like a super fast overview of what computer vision relative to you know the standard computer vision pipelines for manipulation can look like on Thursday we're going to say that the computer vision pipelines don't answer all the questions that we need we did more than computer vision to pick things up there's other there's other properties of the object that we care about rather than just what it's every pixel label is okay so we'll talk about that on Thursday any other big questions about that there's going to be an entire lecture on it uh a little bit later yeah so but on the control Side Learning is having a big impact too for sure and the biggest impact is I is connecting with vision so are a 
lot of our classic Pipelines didn't have a way they're incredibly good but they didn't have a way to talk to cameras because that's handing a 640 by 480 RGB image into a PID controller doesn't make sense so um uh so we found we're getting new tools for connecting those wires yes good of what uh yeah I didn't put it in he told me I should put in his this yeah I read the paper I read the paper but propose so many regions for like a calendar like in fact like uh if there are more than models that do this much much better and it's like you input the text and like it can find the object really accurately with fine grain detections errors for example like some let's say I put a sticker a yellow sticker on my laptop and now like I take a photo of a room with all the laptops and acquire with a laptop with yellow stickers in the latest the state of our model can do it pretty well it will only give you like it design the highest probability to the laptop with yellow sticker and uh I think it's going to be very very impactful in robotics because previously we could only do like a fix a fix the list of like 20 categories or like 100 categories now it's just anything described as soon as you can describe it with language awesome yes thank you for saying that so yeah I think that is a part of us in my mind a zoo of ways that people have found to put these large language models and connect them to manipulation right good any other I mean any other big commentaries that it's good okay I'll hang around outside if I think we have a lecture coming in so I'll hang around outside if anybody wants to talk about projects but I will see you Thursday
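The CLIP-style trick the lecture describes, embedding an image and a set of candidate captions in a shared space and picking the caption with the highest cosine similarity, can be sketched as follows. Note that the encoders here are toy stand-ins (fixed vectors invented for illustration), not the real pretrained CLIP model; in practice you would get the embeddings from a pretrained vision-language model.

```python
import numpy as np

# Toy stand-ins for CLIP's image and text encoders. These fixed vectors are
# assumptions for illustration only; a real pipeline would call a pretrained
# model to produce the embeddings for an actual photo and actual captions.
def encode_image(image_name):
    toy_images = {"photo_of_mug": np.array([0.9, 0.1, 0.0])}
    return toy_images[image_name]

def encode_text(caption):
    toy_captions = {
        "a photo of a mug":    np.array([1.0, 0.0, 0.0]),
        "a photo of a laptop": np.array([0.0, 1.0, 0.0]),
        "a photo of a chair":  np.array([0.0, 0.0, 1.0]),
    }
    return toy_captions[caption]

def zero_shot_label(image_name, captions):
    """Return the caption whose embedding has the highest cosine similarity
    with the image embedding: the core of CLIP-style zero-shot labeling."""
    img = encode_image(image_name)
    img = img / np.linalg.norm(img)          # unit-normalize the image vector
    scores = {}
    for caption in captions:
        txt = encode_text(caption)
        txt = txt / np.linalg.norm(txt)      # unit-normalize the text vector
        scores[caption] = float(img @ txt)   # cosine similarity
    return max(scores, key=scores.get)

candidate_labels = ["a photo of a mug", "a photo of a laptop", "a photo of a chair"]
print(zero_shot_label("photo_of_mug", candidate_labels))  # prints "a photo of a mug"
```

The key design point, as the lecture notes, is that the candidate labels are just free text: swapping in a new category requires no retraining, only a new caption string, which is why open-vocabulary queries like "a laptop with a yellow sticker" become possible with the real models.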
European_Civiliization_16481945_with_John_Merriman
1_Introduction.txt
Prof: I'm John Merriman and this is History 202. I'm here every Monday and Wednesday, 10:30 a.m. to 11:20 a.m. The way this course is, these are all really major themes. I'm going to go over this a little bit, and I'm going to talk about some of the themes. I kind of lecture on things that I think complement what you're doing. Let me give you an example. When I talk about the New Imperialism, why it is that Europe basically took over the entire world between the 1880s and 1914, you can read the chapter in A History of Modern Europe, which I had fun writing, but I lecture on the Boy Scouts. I often say that I lecture on the Boy Scouts because I was thrown out of the Boy Scouts in Portland, Oregon, when I was a kid, because I didn't manage to accumulate a single badge and was totally useless after sports seasons ended. But that's not why I do it. To understand the New Imperialism, why Europe took over essentially all of Africa, where they had places that were totally uncharted that suddenly became highly contested between British, French, German, and Italian conquerors, one has to understand the culture of imperialism. The origins of the Boy Scouts in Britain have a lot to do with that. Why generations of British youth and their counterparts in Germany, and even Australia, New Zealand, and other places, began to think that it was important to be able to look at a map in their schoolhouse that had the color red for Britain increasingly taking over the map of Asia and Africa, and lots of other places as well. So, instead of -- at the very beginning of that lecture, I'll say, "Look, there are three things you really ought to know about the New Imperialism, why they did this." Then I talk about the Boy Scouts, so that those two things will fit together. Or, when I talk about World War I, and we'll have two lectures. My friend and colleague, Jay Winter, is doing one of them on the Great War and modern memory.
Instead of trying to do the entire war, and there is, I think, a quite sporty chapter on that in the book, I'll talk about trench warfare. You'll see a film called Paths of Glory. That's an early Kubrick film about the mutinies in 1917. I'll talk about the mutinies in 1917 when people just said, "Enough is enough. There's no sense dying for nothing. We won't go over the top." Which is to say that it's important to come to lecture, and it's important to come to sections. I've cut back on the reading. I used to use about four more books than I use now, but it's better to concentrate on what you're doing. The books are A History of Modern Europe, second edition, which I wrote for people like you. Then you'll read Persian Letters, not all of it. That would be a rather lengthy day or so. You'll read excerpts in Persian Letters, and Montesquieu talks about relations between West and East, and it's a phenomenal moment in the history of the Enlightenment. Then you have a pause where you're basically just reading me, for better or for worse, but I hope for better, until you get to Émile Zola, his great novel, Germinal, which is a classic. Zola was the first sort of naturalist novelist, at least in France. When he wrote Germinal, germinal means budding, like the budding of trees. But he means the budding of people being aware of themselves as workers. He went down to the mines in the north of France in the Anzin. One of the heroes of the book is a woman called Catherine, who is fifteen years old, but has seen a lot of life for being fifteen years old. When Zola wrote Germinal, he went down into the mines to look at fifteen-year-old young women, barely older than girls, working in the mines twelve hours a day. It's a book that resounds with reality. It's really kind of an amazing book, and I think you'll like that. I hope you will. Then Helmut Smith's The Butcher's Tale is about accusations of ritual murder in a German town. 
It's about the second German Reich, and it's about anti-Semitism in a small place with bigger consequences. Then there's George Orwell's Homage to Catalonia. Orwell went off to fight the good fight in Spain during the Spanish Civil War, where it was sort of a dry run for an even more horrible war, and even more horrible fascists. It's about his engagement and disillusionment in the Spanish republican forces, the loyalist forces, and about the tensions and the duplicity of Stalin's folks undercutting the Trotskyites and undercutting the anarchists. It's one of those classics that's a classic for a very good reason. It's really a marvelous read. Finally, there's Ordinary Men. I go to Poland a lot. In the last couple of years I've been there four or five times for various reasons, and I'd never been to Auschwitz. I went to Auschwitz a year ago. I don't know, some of you have probably been there. As you're going through the horror of it all, and as you're seeing empty suitcases with people's names on them, people who don't exist anymore, and you're seeing baby shoes and things like that. You think, "Who could have done this? Who could have gone out and simply, in an assembly line way, killed people?" Or in fields around Lodz, which was a large industrial town and still is in Poland, simply gone out and blown the brains out of mothers, babies, grandmothers, and anybody they found. Who could have done it? Well, the answer that Chris Browning has is ordinary men. And he had the quite brilliant idea of looking at a German unit, essentially policemen from Hamburg, the port town of Hamburg, an old important Hanseatic port. And he follows them from the lives of very ordinary people into the killing fields, it was nothing less than that, of Poland. It's also short. Germinal is long, but these other ones are short. It's gripping. It's quite amazing. So, those are the books. I think that A History of Modern Europe--I hope--is fun to read. I think you will enjoy that.
The lectures kind of--you see the themes speak for themselves. Sections, everybody likes Wednesday night sections. One of my colleagues has only Wednesday night sections. We've gone increasingly to that, because sometimes you don't find a large audience even on Friday morning 10:30 slots. We've abandoned that. So, tentatively, we're going to have two at 7:00 p.m., two at 8:00 p.m., then Thursday at 1:30 p.m., and Thursday at 2:30 p.m. I don't know. When are we starting sections? Sometimes we don't do it until the second week. It depends on what day. What day is this? Wednesday. I don't know. Maybe we'll start them next week. Maybe we won't. Who knows? But they will happen, and there's also a short, sporty paper assignment. By short I don't mean two pages, but something like seven pages, eight pages on something that you want to write about. Now, let me give you some examples just off the top of my head. If you have any interest in painting, for example, it would be interesting to take works by, say, two Impressionist painters like Pissarro and Renoir, and to see how they viewed the transformation of nineteenth-century Paris, the big boulevards and all of that. Or you could take another novel. Germinal, one of the interesting things about it is that it's a document of history. It's a novel, so these are invented people, but it's a document of history in some ways, as is lots of the great literature of World War I. There isn't any period in modern history that has so much gripping literature about it as the Great War, the British war poets like Siegfried Sassoon. And a lot of these people were dead after they wrote. Sassoon wasn't, at least not immediately. I can't remember if he dies in 1918 or not. But to take some of the poetry, or the writing of the war, and write a paper about it. Or, if you're into diplomatic history, or something like that--I don't know, a paper re-evaluating the origins of the Crimean War.
That might put you to sleep before it puts your TA to sleep. But you can imagine a good paper on that. You can do whatever you want. When I do the Enlightenment, borrowing from my good friend Bob Darnton, I'll do a thing at the beginning about why the Enlightenment was important, what it is. There's secularization, rational inquiry, and all of that, stuff that you may already know, maybe not. But it's in the book. But then what I do is I look at some of the third string, or the third division in the European football sense, of Enlightenment hacks, and what they wrote about royalty, and about aristocrats, and the way they kind of undermined those traditional hierarchies that would be swept away, to a large extent, by the French Revolution. Or you could take somebody out of the French Revolution, such as the steely Saint-Just, who ran off with his mother's silver at age sixteen or something and went on the grand tour of France, and talk about him on the Committee of Public Safety that signed away the lives of lots of people, but may have also saved the revolution. You can do whatever you want. Well, it should have something to do with the course and in the time period we're talking about. Nothing on the Red Sox or something, but you would work with your teaching participant. I'm an email animal. I'm always available on email, and I have office hours as well, but people don't come much anymore. They're doing NBA.com, because email has made office hours sort of oblivious. I mean, irrelevant, not oblivious. But people are oblivious to my office hours. But, anyway, they are Mondays, 1:00 to 2:30 p.m. It used to be 3:00 p.m., but I just sit there by myself, 1:00 to 2:30 p.m. in Branford College, K13. There are also two other movies when we get to fascism, when we get to Adolf Hitler. He was only one of a whole bunch of dictators. There were hardly any parliamentary regimes left in continental Europe by the time 1939 comes. 
A woman called Leni Riefenstahl, who just died in 2002 at age 102, when she was a young woman did a propaganda film for Hitler. Hitler, like Mussolini, believed in high tech. He was one of the first people to use the radio. Franklin Roosevelt used the fireside chat of the radio. But Mussolini was already there piling falsehood upon falsehood, and Italians who could barely afford to eat all had their radios. The same thing happened in Germany as well. She did a movie, a documentary called Triumph of the Will, about Nuremberg. It is truly chilling. It's amazing, it looks like a political convention or something in some ways. All of these movies you can see in the privacy of your luxurious suites in Branford or Pierson College or wherever, because they're available now in ways I don't even understand, but on your Internet. We used to actually show them here. I used to use a great movie called The Sorrow and the Pity, Le Chagrin et la pitié. It was four hours long. People described it as a two six-pack movie. The janitors complained because there were so many beer bottles rattling around. But, of course, this was before the drinking age was raised. So, of course, I don't show that movie. I take that back. I don't take that back, but what the hell. Anyway, I don't show that movie anymore. But I do show Triumph of the Will, and you can watch that at home. The other one is Au revoir les enfants. Because one of the last lectures I talk about resistance and collaboration in Europe, and because I live in France much of the time, I talk about France. Au revoir les enfants, Goodbye Children, some of you have probably seen. It was made by Louis Malle, who just died a couple of years ago. It was about when he was in collège, so he was the equivalent of 7th and 8th grade. There was a new boy that shows up at school during World War II in Fontainebleau, which is just southeast of Paris. He's a boy who hadn't been there before. He's a Jewish boy.
It's about his friendship with this boy, and what happens. At the end, it's not a happy film, but it's a great, great film. What else? What to say? There's a midterm. I don't like to waste a lecture giving a midterm. I would rather give a lecture, but we have to have something to report to you. If you tube it, if you don't do very well at all, we don't count it as much as if you do well. People ask these questions, I know. How much is it worth? Geez, there's more to life than grades, but it's something like twenty-five percent and the paper is twenty-five percent. Section participation is ten percent, whatever we work out, then the final. It's an exercise in seeing how you're doing. It really is no big deal, but it will help you pull the themes of the course together. It's no scary situation. We all live in this sort of A-, B range. I'll tell you, a couple of years ago I ran into this student. When I run into students, I'm a friendly guy and I see people, and I say, "Hi, how are you?" I ran into this one person. I said, "Hi, how are you?" And she went, "Oh, hello." Oh? I remembered her name and I went and looked it up, and there was the B. It wasn't that, "Hi, how are you? A- or A," but whatever. I'm sure she had all A's in the other courses, and a B is not the end of the world, and most people get A's, but whatever. You have to take the midterm. That's the way they run it here. That's not my idea, so that's what we're going to do. Okay, now I'm going to talk about some of the themes. At the end, I'm going to read you a poem. I started history in a serious way because I read this poem. So, I'll leave that until the end of it. I didn't go to Yale. I went to the University of Michigan, maize and blue forever, very sad since last weekend. I came from Portland, Oregon. I don't know if any of you come from Portland, Oregon, but that's where I'm from. When I went off to Michigan, I'd been at a Jesuit high school.
Jesuit high school was a sports factory, in part, but it was a very good school, but it was very repressive. I went off to the University of Michigan after having been in Jesuit school for four years. It was wine, women, and song. There weren't enough in the middle and probably too much of the first. My first semester I got a 1.93 grade point average, and my mother asked me if that was on a two-point scale. I'm serious. I had an F. I shouldn't laugh at myself. My kids say, "Oh, my god, not the same story again." But I got an F, and I got two C's, and I got a B. The people I hung around with in Ann Arbor were so unaccomplished, some of them anyway, that they thought I was smart because I got a B. I'd go by in the dining room and they'd say, "He got a B." They asked me to tutor them. Can you imagine that? Some of the people that I hung around with were amazing. You may even know people like that, but I don't think so. But one of the guys that I knew, I've got to get back to the topic in a minute, but I just thought of this, was sort of the king of malapropisms. One day he was going on and on. These are the people I hung around with. He was going on and on about this good meal that he had of one course after another, and it was fantastic. It was a really good restaurant, and somebody snuck him some wine. Finally, I'm tired of the whole thing and I said, "Was it gratis?" He said, "No, it was chicken." Those are the people that I hung around with at the University of Michigan. But I've taught here a long time and I stand by maize and blue, but I love Yale. One of the things I love about Yale is being able to teach people like you. And I mean it, and I love this course, so I hope that you will enjoy it, if indeed you take it. What about some of the themes? What kind of stuff are we going to do? Could you get some syllabi for some of those folks back there? They're up on the thing. Thanks a lot. A couple of themes. 
I don't believe, and I've never believed, that history is a series of bins. I guess I wrote that in the book, but that you open up and you say, "Well, there goes the Enlightenment. Shut that baby down." Then you open up the next one, and here comes eighteenth-century rivalries, and you shut that baby. Then the French Revolution, "Oh, I know all about that now." Pretty soon you go on to Russian Revolution eventually, and all that. To do a course like this where you're going to learn much of what is important to know about western civilization, I do believe, if you do the reading and stuff, and if you enjoy the lectures, there have to be some threads that go all the way through that make it worth it so you learn something. One is certainly state-making. Even if you take a sort of federalized, decentralized state like with this very bizarre electoral system like the United States, that the growth of modern states, it doesn't really just come in the twentieth century with the welfare state beginning in England, and even before that in some other places, insurance programs and things like that. It begins with the consolidation of state power in the late Middle Ages with territorial monarchies, the Spanish, and the French, and the English monarchies. It has a lot to do with the growth of absolute rule. That's what I'm going to talk about next time, absolute rule, absolutism. The growth of standing armies, huge standing armies, never seen before, of big forts built on frontiers. It has a lot to do with bureaucrats who could extract resources from ordinary people. A lot of the rich didn't pay anything or hardly anything at all. It has to do with an allegiance, a dynastic allegiance that could be transferred later to a nation, the idea of nation. That starts in the eighteenth century. It doesn't start in the nineteenth century. It starts in the eighteenth century, at least in Britain. That's an argument that we'll make also. 
In 1500, which is kind of when that book gets rolling--they only start in about 1648--there were about 1500 different territorial units in Europe. Some were no bigger than an archbishop's garden in Germany, and some were larger states--not yet what they are now in terms of size, such as France, which expanded under Louis XIV into Alsace and Lorraine, and various other places. But there's about 1500 territorial units. In 1890, there were thirty. So, the consolidation of state power, which is looking at it from the state out, or the emergence of an identity where you see yourself as German as opposed to Bavarian, French as opposed to Gascon or Provençal, Spanish as opposed to Castilian or as opposed to Catalan. The Catalan language was illegal until 1975, until Francisco Franco finally croaked in 1975, in November. This is a great phrase; I wish I'd said it originally. I don't know who said it, but someone once said that a language is a dialect with a powerful army. That's it. That's true. In France at the time of the French Revolution, half the people in France knew French. There was bilingualism. You could know Catalan. You could know Auvergnat patois. We live in the south of France where a lot of old people still speak a patois, though that's mostly dying out. How does it come about, that identity, a sense of allegiance to a state or a country? Not everybody, but how does it come to 1914 when people go marching off to get killed singing the Marseillaise, the French national anthem, in pretty good French? How does that happen? How does a state increase its reach? How is the modern world created? We call this process, it's a clumsy word, but state-making. How do states form? The other side of this is how do identities change? In the sixteenth century, seventeenth century, ask somebody who they were. Say, "Who are you?" They'd say, "I'm so and so. I'm of this family."
Or, "I am Protestant," if it was the sixteenth century or late sixteenth century, any time after the 1520s or 1530s in parts of Germany. "I'm Protestant. I'm Jewish." In much of the Balkans, "I am Muslim." In most of Europe, "I am Catholic." In Eastern Europe, "I am Russian Orthodox. I live in a mir (village) in Russia." How does it happen that by the end of the nineteenth century people have this allegiance, even in Russia, as they're starving to death, starving in the famine that Tolstoy, the great writer, called the world's attention to? A lot of them died in fields thinking, "if only the czar knew that we were starving, and that his ministers were treating us badly, how angry he would be." Well, they didn't get it. They didn't know that the czar couldn't have given one damn. But the allegiance to the czar, the sense of being Russian or being dominated by the Russian czar, is something that had to be constructed. So, the state constructs its ability to extract taxes, extract bodies for national armies, also to provide resources, but identities are transformed. So, I give this as an example, because state-making is one of the themes that kind of ties everything together. This course ends in 1945, but look at the problems in the post-communist world of state-making. Look what's going on in Georgia, which is more complicated than the newspapers present in very many ways. Look at the horror show of the Balkans in the 1990s. A lot of the issues, religious hatreds that we thought would only be limited to Northern Ireland. That's another theme that's very important to the whole thing. Another, of course, is economic change. Obviously, this is not a course in economic history, but the rise of capitalism, that's what it's called, capitalism or large-scale industrialization. It changes, in ways that we'll suggest in the reading, and then I'll talk about a little bit, the way people live in very fundamental ways. There's lots of continuities, but there's lots of big changes.
Everybody doesn't end up in the assembly lines right away. There are other ways of rural production. Women's work remains terribly, terribly important. I'll spend some time doing that. A very dear friend of mine, my mentor indeed, Chuck Tilly, who just died a couple months ago, to my great sadness, once said that "it's bitter hard to write the history of remainders." Lots of people were left out of all of this. I'll do one lecture when I talk about popular protest. I'll take three examples of people rebelling. I stand back and say, "What does this mean? What is going on here?" I take the example from the Pyrenees Mountains, a place called the Ariège. You're not responsible for that name, would never be. But where suddenly men dressed as women carrying guns, or carrying pitchforks, came down out of the mists, out of the snow and drove away charcoal burners and drove away forest guards. Why? Because they'd lost access to glean, to pasture their miserable animals. Because the wealthy, big surprise, got the law on their side as the price of wood goes up. They didn't walk around saying, "Well, I'm a remainder. Eventually, I'm going to have to move to Toulouse and my great-great-grandchildren will work at Aérospatiale, in the air industry there." They didn't say, "I'm remainder number 231." But they fought for their dignity, and for a sense of justice they thought existed at one time that had been taken away by these economic changes they couldn't control. Then I take an example from the south of England, from the same time, 1829, 1830, when they find people dead with only dandelions in their stomach, dead of hunger. Then these people start marching: the poor, the wretched poor. Rural laborers start marching and threatening people with threshing machines. Why threshing machines? Because threshing machines were taking away their work as harvesters.
And one day they found a sign that said, "Revenge for thee is on the wing from thy determined Captain Swing," suggesting that they were many. They were righteous. They were just. They were armed. They were ready. Did Captain Swing exist? Of course not. They were weak and they lose. They get defeated. Some of them are hung. Lots of them are sent to Tasmania to the prison at Port Arthur, Tasmania. They're sent to Australia. That's why when the Australians play the English, a lot of the Australians sing that old Beatles' song, "Yellow Submarine", which you don't remember, which I vaguely remember: "We all live in a convict colony, a convict colony, a convict colony." Captain Swing's men, they lost, but they went down fighting. It's bitter hard to write the history of remainders. But when you look up from that, you say, "Look what's going on here." When you look at people fighting for grain, fighting for food, they're fighting a larger process that they can't control. But it tells you a lot of what's going on over the big picture. That's another one. Then there is, I'll just take one more, maybe another ten minutes. I'm going to read you my poem. Then you can go. But I hope you come back. War--war as a dynamic of change. Warfare changes with Napoleon. There were already changes in the eighteenth century, but it's still basically professional armies or people getting conscripted in the British navy, because they were drunk at the wrong place at the wrong time outside of a tavern in Portsmouth or something. The next thing they know, they're throwing up on a ship bobbing off toward the English empire. But warfare changes with the nation state. The French called it levée en masse, that's mass conscription, the sense of defending the nation. There's this magic moment where the artisans of Paris defeat a highly professionalized army at a windmill called Valmy in the east of France. It changes the way things were.
The Napoleonic wars are arguably the first total war, a war against civilians where there are no longer the traditional limits between fighting against civilians and fighting against armies. Those limits hadn't existed in the Thirty Years' War. I'll talk a little bit about that next time around. But the wars are very different. There are famous Goya paintings of peasants being gunned down by French soldiers, and atrocities against peasants in Calabria in the south of Italy. So, warfare really changes, but it becomes a dynamic of change. If you think about the Russian Revolution of 1917, the Russian Revolution was inconceivable without World War I, but it was also sort of inconceivable without the Russian Revolution of 1905, and the defeat by the Japanese, an extraordinarily shocking event, at least for Europeans, in 1904 and 1905. And World War I provides opportunities for dissidents in Russia to put forward their claims. So, when the whole thing collapses on the czar's head in February 1917, and the Bolsheviks come to power, the war itself was a dynamic of change as well. And what a war. What wars. There had been nothing ever like it. A few journalists who had been in the Russo-Japanese war had seen trenches in Manchuria that had been built. But nobody could have imagined that the war that was supposed to be over in six weeks was going to destroy four empires--the Ottoman empire, the Austro-Hungarian empire, the German empire, and the Russian empire, and, arguably, we can talk about this and we can debate this, the British empire. Because lots of people who had fought, Indians who had fought in the war, or people now we would call Pakistanis who'd fought in the war, or people from Kenya who'd fought in the war, are no longer going to be satisfied with simply arguing that they're part of the great empire, even though they have hardly any rights and no money, and simply work for the big guy. So, the war transforms Europe by destroying these empires.
What it also does, and it's very possible to argue this, and my friend, Jay Winter, who is a great expert on World War I, and Bruno Cabanes also, who's on leave this year, would agree with this. You could see the whole period 1914-1945 as a new and more terrible Thirty Years' War. Because Europe is in depression all through the 1920s and '30s, agricultural depression the whole time. Only between 1924 and 1929 is it not a big industrial depression. The poisoning of the political atmosphere--I'm going to do a whole lecture on Hitler and the national socialists. World War I created Hitler. He was already just this pathetic guy with grandiose plans, no friends, and sort of a sad sack going to the theatre in a threadbare coat and droning on and on about all he knew about Wagner, whom he loved, and the theatre. But World War I transforms him into an anti-Semite. He was already an anti-socialist. It transforms him into an anti-Semite. The troops that came back, many of them simply kept on marching. They'd survived the war and they kept on marching. The poisoning of the political atmosphere was something that was simply extraordinary. To understand fascism, this is terribly, terribly important, you have to understand what happens in World War I. Great expectations were dashed by the Treaty of Versailles, which only the great British thinker, John Maynard Keynes, really got right, predicting the disaster that came out of it. There's no more fascinating period in history, in my mind. It's absolutely fantastic. What a war. It's all obvious. Everybody's seen these films from the Imperial War Museum -- which has been kind of wrecked the way they've done it now, it's sort of too high-tech -- in London. But I leave you with just a couple thoughts. The Battle of the Somme in 1916 that started on July 1st, when they blow the whistle and say, "Over the top, guys."
There are more British soldiers killed and wounded in the first three days of the Battle of the Somme, S-O-M-M-E, three days, three days, than there were Americans killed in World War I, Korea, and Vietnam combined. In three days. Where are the great British leaders of the 1920s and the 1930s? They're all dead. They're hung up on that old barbed wire, as one of the war poets put it. They're hung up on that old barbed wire. One guy, a soccer player, said, "We'll get some enthusiasm." He tried to dribble a ball across these trenches, across the craters. He doesn't make it. He's killed. In 1914 on Christmas Day, the Germans and the British soldiers, some would say, "Enough of this stuff" for the day. They sing to each other. They actually play soccer; they play football. In 1915, a British soldier said, "Let's do the same thing." They put him against the wall and shoot him. The horror of the war transforms Europe, every aspect of Europe. It's impossible to understand the growth of the agrarian sort of semi-fascist regimes in Eastern Europe, very much under Nazi influence, without understanding World War I. The war that was supposed to end all wars; of course, it doesn't do that at all. That's a big stop on our agenda as well. We used to read All Quiet on the Western Front, but everybody's read that. Then we read Robert Graves' rather long and self-indulgent Goodbye to All That. That was pretty long, so we don't do that. But we will try to rock. Let me just read you my poem and then you can go. Well, you can do whatever you want, but anyway. I remember this. I remember reading this poem back at University of Michigan at 2:00 on a Saturday, trying to figure out what I'd done the night before. But, anyway, no. This is Brecht, the great East German poet. It's called "A Worker Reads History." Let me begin by saying that we're going to study "great," I mean really "great" men, "great" women. Hitler is obviously not a great man. He's awful, just awful. 
But the people who are thought to have made history: Napoleon, Peter the Great, other people. I do talk about the folks that you read about in textbooks, including mine. But I ask the same question and pose to you the same question that Brecht poses. It's a short poem, so just hang on.

Who built the seven gates of Thebes?
The books are filled with the names of kings.
Was it kings who hauled the craggy blocks of stone?
And Babylon, so many times destroyed,
Who built the city up each time?
In which of Lima's houses,
The city glittering with gold, lived those who built it?
In the evening when the Chinese wall was finished
Where did the masons go?
Imperial Rome is full of arcs of triumph.
Who reared them up? Over whom did the Caesars triumph?
Byzantium lives in song. Were all her dwellings palaces?
And even in Atlantis of the legend
The night the sea rushed in,
The drowning men still bellowed for their slaves.
Young Alexander conquered India. He alone?
Caesar beat the Gauls. Was there not even a cook in his army?
Philip of Spain wept as his fleet was sunk and destroyed.
Were there no other tears?
Frederick the Great triumphed in the Seven Years' War.
Who triumphed with him?
Each page a victory. At whose expense the victory ball?
Every ten years a great man. Who paid the piper?

So many particulars. So many questions. If you hang with us this semester, we'll get at some of those. See you. Thank you. Thank you.
European Civilization, 1648-1945, with John Merriman
Lecture 20: Successor States of Eastern Europe
Prof: Today I'm going to do a fairly impossible task, which is to talk mostly about Eastern Europe in the interwar period. I sent around a rather lengthy list of terms, so I don't have to write it on the board and you can't see it anyway. I'll work from that, so it'll help you understand. But the big points are clear, and the maps will help you as well. Just a couple things at the beginning, which are perfectly obvious. In 1914, few people could have imagined that the war would sweep away four empires, etc., etc., and take the lives of millions of people. In 1918 and 1919 there was the Great Illusion. The Great Illusion, held by Wilson and lots of other people, was that wars were started by evil people in high places, which may often be the case. But the problems left by the Treaty of Versailles were basically insoluble. The 1920s and 1930s are basically a continuation of the war. You can look at the entire period from 1914 to 1945 as a thirty years' war. Europe was in depression basically the entire time between the wars, as we'll see when I talk about Eastern Europe and East Central Europe. That has a lot to do with the chronic instability of the period. Western Europe and the United States were really not in depression between 1924 and 1929. Then the thunder comes in 1929. The United States doesn't get out of the Depression until basically World War II. The war economy helps them do that. But Eastern Europe, in the places where the instability and the lack of parliamentary traditions were so important, was in agricultural depression the entire time. By 1939 in Central and Eastern Europe, only one state, Czechoslovakia, remains a parliamentary regime. In all of the others, the Eastern Europe of little dictators, and fascist parties, and rightwing agrarian populist parties--some of them didn't start out rightwing--poisoned the political atmosphere. 
At the Treaty of Versailles, when these delegations meet--including the future president of Yale, Charles Seymour, who was in the American delegation there--they were convinced that they could put an end to all wars. They would get Germany to sign on the dotted line saying, "We started it all." As I'll argue next week, and it's perfectly clear, that was a catastrophic mistake. Germany arguably had a greater role in starting the war than the other places, but this guaranteed the perpetual hostility of an ever-increasing number of rightwing parties, of which the most vicious and the most successful would be the Nazis, who opposed the very existence of the Weimar Republic. Germany became, as I'll explain in a minute, a revisionist state. A revisionist state is one that wanted to revise the Treaty of Versailles, because people of their dominant ethnic group had ended up on the wrong side of the frontier. If you had fought a war that was based upon national claims in 1914, in which aggressive nationalism was one of the root causes of the war, the successor states that are created out of these collapsed empires find themselves facing the reality that you could get all the maps you wanted, and you could get all the cartographers that you wanted, and geographers, and bring them all to Versailles, and bring them all to the French suburbs--Trianon, and Sèvres, and Neuilly, and the others, Saint-Germain-en-Laye--after which the treaties with the individual powers were named. But you couldn't draw lines around national groups that were going to incorporate everybody within the country of their choice. You couldn't do it. That leaves a permanent factor for instability. If you don't believe me, look at the Balkans in the 1990s, which saw the worst atrocities since the death camps in World War II, in which ethnic cleansing and rape as a means of waging war became a reality again, and in which those old hatreds had never been extinguished. 
So, the Great Illusion was that there wouldn't be any more wars. In the case of Germany, as we'll see, the troops who were demobilized, who came back, they kept drilling in their basements, the Freikorps, the Free Corps in Germany. Lots of people among them--just one of millions, the young Adolf Hitler--the view that they held was that the problem wasn't to have fought the war in the first place, which was a Wilsonian view of the war, but the problem was not to have won the war. The problem is, how do you explain at home that you've lost the war when your troops are still far, far outside Germany? I'm getting ahead of my story, but it's just such a complicated subject, all this. It's a little hard not to. The other problem was, if you're punishing losers in World War I, you're often punishing them in a way that seems to violate the very principle that you hope to espouse, of each people, more or less, their own country. The powers that become the revisionist powers, almost all were on the losing side. They're the ones that, by the very principles espoused at Versailles, basically just simply get screwed. As the Hungarians put it, "No, no, never." That was their response, "No, no, never." These powers, of which the most dangerous, ultimately, is Germany, far more powerful in defeat than France is in victory, basically vow to get even. Not the Weimar Republic, but those people who wanted to destroy the republic. Then Eastern Europe will be full of little Hitlers, little racist dictators who are also convinced that "in the next one, we'll get it back." "We'll get it all back and we'll bring it back with a percentage of interest as well." Increasingly, a point that I'd better make in a while, they begin to look at Germany, in the Europe of extremes, as a very compelling model. The Eastern European states, which certainly had reasons to increasingly fear Germany, begin to see Germany as a rather successful model. 
The Europe of the extremes, as Eric Hobsbawm has called it, is basically--if you exclude the Soviet Union, which is another kind of totalitarian state, and if you exclude the role of the Communist parties as a destabilizing force in many of these countries--a Europe of fascism. They just keep right on marching, because fascists are better at describing how they will take power--marching, violence--and whom they hate, than what they will construct afterward. What they will construct afterward in all of these places will be a totalitarian, fascist state that is based upon the principle of totally over-the-top, aggressive nationalism and anti-whatever the minorities are, particularly the Jews. Anti-Semitism becomes an important part of all of this. So, the guys with the maps, and the pencils, and trying to draw the little squiggly lines--it doesn't really work out very well. Wilson comes back in utter defeat and the American Congress doesn't approve the Treaty of Versailles anyway, and America enters, as you know, a period of isolationism, at least until the next time around. That was a rather lengthy introduction to what I was going to talk about. Let's try to be more specific now. What are the big-time revisionist states? First, let me just start out with one that didn't lose, but is one in which, as you'll see when you read the chapter, fascism is first saluted and then takes power. That is Italy. Italy wins. They're open to offers, open to the highest bidder, and they join in 1915 because the Allies can promise them more. They promise them part of the Tyrol between Austria and Italy, and they promise them much of the Dalmatian Coast. Italy went to war for that reason, but also because in a country in which a sense of national unity basically didn't exist, there was a strong feeling that war would make Italians out of all these different people. That's not a real good reason to go to war, but they do go to war. 
Of course, they were being systematically denigrated by the other Allied leaders for woeful aspects of their army, which also took huge losses fighting the Austro-Hungarian forces. So, when they come to Versailles, Orlando, who is their representative, he's a junior partner in this. They don't pay a lot of attention to him. Wilson says, "You can't give them the Tyrol, because they don't have Italian majorities there and they certainly don't have Italian majorities in the Dalmatian Coast," which is populated, logically enough, by Croats. So, they don't get what they want. Mussolini, who began his career as a socialist, he was the editor of Avanti!, "Forward," which was the socialist paper, he becomes one of the originators of fascism, and he appeared, as you'll see in the book, on the cover of Time magazine eight times. He's the guy that got the railroads to run on time in Italy, but only the ones to the ski resorts. Anyway, I don't have time to talk about him now. So, they're kind of a revisionist power. Mussolini's discourse about how he's going to turn the Mediterranean into an Italian lake, revive the Roman Empire, and all that has to be seen in that context. Italy was not "a loser" in World War I, but rather an aggrieved winner. I guess that's a good way of putting it. The revisionist powers are those that lost. Revisionist powers, of course, no one was more revisionist than Germany, and the German Empire is destroyed. Germany, again it's hard to summarize all of this, but Germany from the very beginning says, "If you're going to argue that national ethnic groups and where they live should determine the drawing of boundaries, then what the Allies did to Germany seemed extremely unfair," and not only to the far right in Germany. This leaves aside the question that I'll come to next week. John Maynard Keynes, who was a brilliant, brilliant guy, he's the one who saw that this was a recipe for disaster, the Treaty of Versailles. 
He's the one who said, "This is just a truce. It's not the end of the war. If you make Germany pay for the whole war, based on the fact that they've signed the war guilt clause, you're going to so destabilize this country that eventually the right will take over." That's exactly what happened. That's exactly what happened. From the point of view of Germany, the most egregious loss that they suffered was the Polish Corridor. If you go to Gdansk, which is a wonderful city, which was destroyed during the war like most every city in Poland except for Krakow, which really got lucky. I can remember going to Warsaw, au temps des camarades, when it was still under the communist regime, when I was a kid. You could see where there once had been boulevards. The whole place was just absolutely razed. Krakow was very lucky because it survived. What they do is Gdansk gets rebuilt. Gdansk was a German city. The Polish population was extremely small. You may know of Gdansk because that's where Solidarity began in 1979 and 1980. There's an important port there, and that's where Lech Walesa got his start, and the whole Solidarity movement, including some of my friends, historians, who were young printers for Solidarity in those days. As I've said a couple of times, I go to Poland all the time, in the last couple of years five times or something like that. Anyway, Gdansk, from the point of view of the Germans, was German. The vast majority of the population was German. What the Germans called "the Polish Corridor" divided the rest of Germany from Pomerania and East Prussia. It was resented by the Germans because there was a strong German population that remained. Of course the Poles, when they look at Gdansk, they look back to when Gdansk was an important port then, too, in the Polish-Lithuanian Commonwealth. In fact, Poles are quite insistent. 
Polish historians argue that the whole concept of sovereignty that emerged in the Netherlands and in England in early-modern times was also being constructed in and around Gdansk. The other big wound for the Germans was Czechoslovakia. The second largest ethnic group in Czechoslovakia--newly one of the successor states, along with Poland and Yugoslavia, and I have the statistics in what I sent around to you--was the Germans. The first were the Czechs, at about fifty percent. Germans, if I remember correctly, were twenty-three percent, and Slovaks were sixteen percent. The others are other minorities. They're Ukrainians, and Poles, and all sorts of things. Many of the Germans were concentrated in Prague, though there weren't as many of them as before, but above all in the whole region of Bohemia, in that region called the Sudetenland--the Sudetenland Germans. The Allies capitulate to Hitler's demands that that part of Czechoslovakia--then of course he launched the whole thing--be passed into Germany. One of the reasons that they appease is they said, "Maybe he's got a point. In 1918, we couldn't really put the people where they were supposed to be. There are German majorities in a good percentage of Bohemia, within Czechoslovakia. Maybe he's got a point." That was an excuse. That was the rationale for appeasing him, but at a time when he might have been stopped. His generals were just scared to death that the Allies were going to fight them, because they weren't ready for war. Germany is, above all, the big revisionist power. More about this when we talk about Adolf Hitler next week. The other big one--I can't find it but it's in there--is Hungary. Hungary loses--I've got to remember these statistics. I might have put it around. I think I can remember them. They lose twenty-five percent of the Hungarian population to other states. 
They lose between a half and two-thirds, I should remember but I don't, of the land of the old Hungarian domains when they were in Austria-Hungary, when they were, supposedly after 1867, the equal partner of Austria. They lose the greatest percentage of that Hungarian population to Romania. The tensions between the Romanians and the Hungarians were extremely great. The linguistic differences are enormous, because Hungarian is such a difficult language. It's an isolated language. I have a friend who retired here many years ago from Russian and East European languages and literatures who knows eighteen languages. He knows most every Central European language. I said, "Do you know Hungarian?" He said, "No, it's too hard." That accentuates this isolation from the Romanians. Again, in 1989 the great groundswell against Ceausescu and his horrendous wife, the dictators in Romania, began with Hungarian dissidents whose families had been stuck, from their point of view, in Romania for generations since World War I. When you're trying to look at each of these countries and trying to figure out why some rightwing maggot, some rightwing dictator, takes power, it's because, as in the case of Adolf Hitler, if you say the same thing over and over and over again, pretty soon you get people to believe you. Admiral Horthy, who was an admiral in the Austro-Hungarian Empire, he becomes an extraordinarily vicious dictator and egregious collaborator during World War II, sort of an admiring junior partner of Hitler and the Nazis, feeding the frenetic, aggressive Hungarian nationalism. You can go to Budapest, go along the Danube River in Budapest and see where they've made a sad monument out of the shoes of Jews who were shot or just simply pushed into the swirling waters of the Danube by the Hungarian fascists in 1944. I'm not saying Hungarians as a people, obviously not. I have Hungarian friends and I love Hungary. Budapest is my third favorite city. 
But the damages done by revisionist claims in Hungary were simply amazing. Of all of the--after the Germans, arguably even more than Germans, they had the most to be aggrieved about, because so much of their country was awarded--because they had lost--to other places. Now, the case of Austria, also. If you go to Vienna, it's such a wonderful, huge, musical place full of baroque, baroque, baroque. It's really a great city. You think, "Oh, this is an enormous city for such a little country." What happens when the Austro-Hungarian Empire is dismembered is that Austria becomes a small and overwhelmingly German-speaking state. There are, comparatively, very few ethnic minorities living in what became Austria after World War I. It's an imperial city. It's an imperial city not reduced in size, but reduced in importance. So, Hungary has huge reasons to be extraordinarily angry about the whole thing. Yugoslavia. Yugoslavia doesn't get some land that they wish they would have gotten, but again, in Yugoslavia you have this tremendous ethnic complexity. In a way, the successor state of Yugoslavia--maybe some of you have or will take Ivo Banac's course. He's a great Balkan historian. The ethnic complexity, of course, is sort of a mini version of the Austro-Hungarian Empire, where Serbs were something like forty percent of the population of Yugoslavia between the wars. That would be really about the percentage until the whole thing collapses in the early 1990s. The Croats were the next largest percentage, followed by the Slovenes, whose region was the wealthiest and remained so until the end. The standard of living in Slovenia in the 1980s was about that of Italy, whereas if you went far, far down to Kosovo--where I've been, where so much has been going on, and which has now been proclaimed independent--it was absolutely impoverished. So, the ethnic complexity is, in itself, going to be a factor for destabilization. What about Poland? 
Poland had not been independent since 1795, since the Third Partition. You had, as I've already discussed in other contexts, you've got Polish intellectuals. You've got political militants in the 1830s and again in the 1860s who don't want to be part of Russia. They don't want to be Congress Poland. They don't want to be part of the Austro-Hungarian Empire. They don't want to be part of Prussia. They want and dream of an independent Poland. They get their independence in 1918, immediately. But the complexity is enormous there. You've already got these ethnic minorities who are there. The Germans are there. You've got huge numbers of Ukrainians. In fact, in eastern Poland, cities like Zamosc--where Rosa Luxemburg was born, which is a beautiful city--are Polish cities, but the vast majority of the rural population are Ukrainian. They are Ukrainian. This was a force of "instability." They will be killing each other off during World War II. The other minority is the Jews. How many Jews were there in Poland in 1918-1919? Before Israel, Poland was really considered the cultural heart of Judaism. There are three million Jews living in Poland. In the east, mostly Orthodox and Lubavitcher. By the way, just as an aside but it's a telling aside. When I was in Zamosc we were taken to a synagogue which had been turned into a place where high school students and middle school students exhibited their paintings. I asked the guide, who was taking these academics around, and press editors, and all this stuff. I said, "Look, what was the population of Zamosc in 1939?" He said, "The population of Zamosc in 1939 was 39,000." I said, "How many Jews were living in Zamosc who had been coming to this synagogue in 1939?" He said, "12,000." I said, "How many Jews live in Zamosc now?" "Zero. Zero." 
The others, if they were lucky enough not to have been killed in the death camps--and I just reviewed a book for the Globe about the ghetto in Lodz--they were likely to get out. Not to jump ahead, but the number of Jews who survived, out of three million, was about 300,000 Polish Jews who, I think, came back to Poland after the war or who had managed somehow to survive. The point of this is that these ethnic tensions, particularly in a part of Europe where anti-Semitism had been just replete. It's a Thirty Years' War. How can you not leap ahead into World War II? Some of the massacres of Jews, for example, during this horrible period were done by Ukrainians in Ukraine, Lithuanians in Lithuania. Many of you have seen just horrific pictures. I can't remember. We have this awful picture that used to be in the first edition--I don't even know if it's in the second edition--of Jews who were beaten to death. A former colleague here whom I don't really know, Jan Gross, wrote an important book called Neighbors about how Jews and Poles who lived in Poland in the 1920s and 1930s lived very peacefully in one village, and how the Poles, without apparent instigation from the Nazis--who would have been happy to kill them all and planned to kill them all--just simply one day started killing their Jewish neighbors. They shot them dead, beat them to death, put them in barns and burned the barns. These tensions are aggressive nationalism at work. I go to Poland all the time. It's amazing. There still is this undercurrent of anti-Semitism. I hope--since this is being filmed, I sometimes forget--they may get letters, but it's really incredible. I was being interviewed on Polish TV with this other guy and it was a pleasure to denounce our president. Anyway, I don't speak Polish. Another time we were all being interviewed and this guy said, "What do you think of the Jew problem in Poland?" I was about to kill him. I shouldn't say that, but take him out. Not really, but I was pretty mad. 
Then this woman who was there, who represents the Jewish community, such as it remains, in Israel, she said, "No, no, no. It's a question of language." But when we went to one of the Polish museums, which is the Museum of the Warsaw Uprising--they don't have a Museum of the Ghetto Uprising--we had to complain about how they depicted Jews in the 1920s and 1930s. This is not to dump on Polish--well, it is to dump on Polish anti-Semitism, but not on Poland as a state. These tensions are exacerbated between the wars. If you've got these frenetic rightwing leaders who are aggressive nationalists in all of these places, who are they denouncing? They're denouncing what? The Treaty of Versailles and the other treaties in conjunction with it. Who? Romanians, if they are Hungarians, etc., etc. You pick the nationality, and Jews most anywhere. It was there. These folks are often--like Horthy, they are preaching to the converted. They're preaching to the converted. It's an obvious, sad story. In these revisionist powers, all this stuff is going on. I know less about and don't have much time to talk about Bulgaria, which lost, but you have these same kind of tensions. Turkey was just completely--that's arguably the harshest treaty, that with Turkey. They lose most of what's left of Turkey, really, in the Middle East. This gets transformed into mandates under British and French control. As a losing power, they lose land to their bitter archenemy, Greece. Then this enormous exchange of populations begins, forced exchange of populations, followed by voluntary exchange, between Turkey and Greece. But the case of Turkey is special because Ataturk, whom you can read about, becomes the visionary president of a new Turkey, of a secularized Turkey, and does not go the way, for all his occasional stridency, of this sort of Europe of little dictators, the Eastern and Central Europe of little dictators. Even as I said--where are we? I kind of left my lecture behind, but that's all right. 
I'm doing the themes that we should be doing anyway. You can read about the rest. Even in Czechoslovakia--one can say, "There was a democracy that really truly functioned." But there are enormous tensions in Czechoslovakia as well. You've got your Czechs. Your Czechs are the dominant population in Czechoslovakia, but they are basically Protestant, mostly Protestant. The Czech part or what would become the Czech Republic is, as you already know, largely Bohemia, or much of it is Bohemia. There's Moravia also, which is poorer. But the Czech part is very industrial; it is much more prosperous. Slovakia is almost entirely Catholic and much more rural. It's basically a peasant society dominated by the Catholic Church, and particularly by the very rightwing aspects of the Catholic Church--as opposed to the case of Poland, where the Catholic Church has basically been a force for progress, except for anti-Semitic currents in some parts of the clergy. It's not surprising that in World War II, one of the most horrendous collaborators, cheering on the guards as they packed the Jews onto the trains to be taken away and killed, was a priest. Again, I'm not dissing the Catholic Church. I was raised at a Jesuit high school. But there were tensions within Czechoslovakia also. Even in the triumphant case, triumphant until the German legions start marching in and start killing Jews again there. Not again, because there really hadn't been pogroms and stuff like that there. Even there it's the complexity of the whole thing that is simply amazing. But the big point, it's not big news, but the big point is that ethnic contentions, contested borders, and diplomatic problems caused by or inherent in having these new states--Czechoslovakia, Poland, and Yugoslavia--will continue to be very important in a place that had virtually no parliamentary traditions. There were no democratic traditions, even in a country like Poland, which had been divided up between these three empires. 
So, it's pretty darn hard to suddenly say, "Now we are a republic," and try to make that work. It's very, very difficult. Indeed, there's a mistake in that book, or at least my Polish friends tell me it's a mistake. I have described Pilsudski--whose parents thought they were Lithuanian, and who himself thought he was Lithuanian at the beginning, but I already talked about that--as being rightwing. He began his career as being kind of leftwing. But he's the first to destroy the parliamentary regime. He does that in 1926. Pilsudski's a great hero in Poland, still. I was taken on almost a forced march to see his tomb. Why? Because of the Miracle of the Vistula--the Vistula is this monumentally important river in Poland. Trotsky's Red Army is moving toward Warsaw, imagining that they're going to move on toward Berlin and assist the revolution in Germany. They're turned back in the suburbs of Warsaw--the Miracle of the Vistula, by Pilsudski. This gives him a kind of a prestige and identification with the Polish state that is obviously important. So, in 1926 he says, "Look, this is impossible." He puts an end to the parliamentary regime, at least in reality. In 1929 or 1930, he arrests the progressive opposition. He behaves like these other dictators, except he's not putting people against the wall, or having them beaten to death by Iron Guards, and all these groups. That's one of the first parliamentary regimes to go. Compounding all of this is again, to go back to what I said at the beginning, East Central Europe--incidentally, the Poles no longer want to see themselves described as Eastern Europe. Then it was East Central Europe. They say, "You should go through your book and take out all references to Eastern Europe with regard to Poland. We are Central Europe." That's how they see themselves. Again, it's impossible to overestimate the hatred and still the fear of Russia. That's why they had this ridiculous idea of having American bases in Poland, which is just a crazy idea. 
Anyway, that's just my personal opinion, in parentheses. Compounding all of this is that you've got a peasant society. All of these are peasant societies. The vast majority of the population are peasants. I think it's about seventy-five percent in Poland. In Poland, Warsaw is already very big. Krakow is very, very big. I keep talking about Poland, because that's the one I know the best. Hungary would be less because Budapest is such a large city. Also, lots of the rural parts of Hungary have been amputated. It's a peasant society. What brings peasants into politics in the 1920s--and this is the argument of my good friend, Tim Snyder--besides hating the people not of the same ethnic group as them, depending on the place, and maybe being anti-Semitic because of the tradition of rural money lenders who happened to be Jewish, and Jewish storekeepers, in the peasant perception of the world, is one thing: the hope of land reform, of land reform. Of maybe breaking up the big estates, but at a minimum helping out poor rural people. What happens in the 1920s and the 1930s is that poor rural people are being screwed, to put it a bit crudely, by what? By the agricultural depression. The price of agricultural products--which is the economy, is the economy--plunges to practically nothing, and they can't get by. One of the factors for the rise of fascism in all of its guises--it's called fascism in Italy; it's called National Socialism in Germany; in France, a part of the extreme right called it francisme; in Spain, Franco's not really a fascist, he's a rightwing authoritarian, but he's still a murderer, period--is the economic situation. I'll make this clear when I talk about Germany. That's what drives the middle class, which is the first class to embrace Hitler. It's the big crisis of the great inflation in the early 1920s. 
What helps drive peasants in all these countries, the ones who become politicized, is that they finally say, "What did the parliamentary regime bring me and my family? Not much. We still can't get by." So, there we go. "These little dictator guys thundering away, they seem to be telling it like it is." It's the Jews, or it's the Bulgarians, or it's the Romanians, or it's the Hungarians, or it's the Serbs, or it's the Muslims. It's the Greeks. It's the Turks. You name it. You fill in the national group. It's a Europe of hatred. It's a Europe of fear, an absorbed, integrated fear. I suppose that's kind of a silly way of putting it, but not that bad after all. When they look around, what do they see? First, these countries are frightened. The new states ally. They say, "We're going to have to lie together," and some of them join up with France, and that won't do them much good in 1939. But there is this model that seems to be working in Germany. The French, what do they do in the crises of the 1920s and 1930s? They were loaning money everywhere before. They pull in the reins. They rein in the credit. Germany, particularly after it is Nazi Germany, after January 1933, provides this sort of model. They say, "We'll help you out. We'll help you out. The other countries are not buying your products. We'll buy even more of them. We'll loan you money. We'll organize this." It seems to be an orderly society. More about that later. Without jumping ahead, it's not just a society of coercion. Hitler seems to be providing things to the German people that they want. Work--the armaments factories are preparing for war. Order--they're arresting petty criminals. I'll talk more about that. And racial purity. They begin thinking, "Hey, that's a good thing. It's the fault of the Jews and the Poles. It's the fault of the Poles." When they invade Poland--I don't emphasize this as much as I should have in your book--they begin right away carrying out genocide.
They begin killing the Polish intelligentsia right away, and they kill the Polish generals right away. The Russians are doing the same thing, actually, the Soviets further on. It's a permanent source of instability, this agrarian depression, this economic depression. It's a factor for further destabilization. Talk about parliamentary regimes comes pretty cheap, but they disappear one after another. And in each and every case, with the fascists and their variants, the discourse is, "We, the real people of this place, do not want these other people here. We don't want them here." Of course, not all of these people carried it to the outcome of the Nazis, or of the Lithuanians and the Ukrainians who just started beating Jews to death along the way. But it wasn't just the Western states that had great instability. The willingness, indeed the eagerness of people like Horthy to collaborate with Hitler openly, enthusiastically, all the way through until the bitter end is, in part, a result of the fact that these rightwing movements become mass movements in these places, as they did in Germany. And as they did in the Netherlands, where eight percent of the population votes for a guy called Mussert, their little fascist guy, or in Belgium, a country of shopkeepers, where they support their guy Degrelle, who just died about ten years ago, on the Costa Brava. They all seem to die on the Costa Brava. They all basically get away with it and end up going to Spain, a lot of them protected by the Franco regime. They all seem to croak on the Costa Brava. Anyway, I got away from the text, but it doesn't matter. I think I made my points anyway. The points are that Europe is in a period of instability. That with the exception of the big powers in the West between 1924 and 1929, all these places are in depression, and that the sweeping away of parliamentary regimes in places that had virtually no parliamentary traditions at all was not all that surprising.
It was compounded by the outcome of World War I. Again, as I said before at least twice, the demons of the twentieth century emerged from the war. Nowhere more tellingly, more appallingly, with greater costs, with greater devastation, with humanity sinking to an all-time low, than that in Nazi Germany. That's what I'm going to talk about next Wednesday. Monday, another cheerful topic--Stalinism. We will go from there. Have a wonderful weekend. I'll see you.
European Civilization, 1648-1945, with John Merriman
Lecture 10: Popular Protest
Prof: I'm going to talk about some of my favorite people today. Over the weekend I was at a memorial service and a conference in honor of my late mentor, the great historian and sociologist Charles Tilly. When I first knew him, which was a long time ago, he was working on collective violence. He's someone who in his career published literally fifty-one books and over 600 articles, but above all he was a generous mentor to a whole bunch of people, including yours truly. He was working on collective violence. He once told me--in fact, I couldn't find exactly where he wrote this, if he did--he once told me, and I mentioned this the first day, "It's bitter hard to write the history of remainders." Some of the people I'm going to talk about today didn't see themselves as remainders in history, but they're people who didn't quite fit in, and were really overwhelmed and ultimately defeated by economic, social, and political processes. If one of the themes of the course--of any course, really, I suppose, that deals with the modern world experience, especially in this day of globalization--is the dynamic duo of capitalism, or large-scale economic change, and the state, you're going to see that today in some of the folks that I'm talking about. I once used the example of trying to get you to imagine parachuting over the European continent in a very slow descent. Let's say now from maybe the mid-sixteenth century until the mid-nineteenth century. If you could see every incident of collective violence, of political protest, popular protest that occurred--and all protest is ultimately political--by far the most prevalent would have been the grain riot. This itself is terribly significant, and so is its disappearance, given what I just said about the state and large-scale economic change. Another way to imagine this is if you had a Richter Scale that registered every incident of collective violence.
Tilly and his whole team were counting up every single incident of collective violence that they could find in Europe between the mid-eighteenth century, in terms of his study, or even earlier than that, and 1936. They would come to the same conclusion that you would, were you floating over this great continent for that period of time. What I'm going to do today is sort of a trilogy, talk about three things, and they're all related. First, grain riots. Second, the Swing movement, Captain Swing, drawing on the classic book, written a long time ago, by Eric Hobsbawm and George Rudé, two truly great historians along with Tilly. Third, talk about something that I did, the Demoiselles of the Ariège, and I'll make that clear in a while. All three fit together, I think, very nicely. One of the underlying themes, which you'll see, is that popular protest and the collective violence attached to popular protest is not random. It can be spontaneous, but it's not illogical. There's a logic to popular protest right through the ages, in the early-modern period as well as in the nineteenth century, and ordinary people put forth their demands by protesting and, in doing so, hope to effect change, to appeal to authorities who hopefully will do the right thing, often imagining a world in the past where a sense of justice prevailed. In terms of grain riots I'll talk about the just price, because they talked about it. Having said that, let me enter the world of grain riots. I'll give you some examples. This is from Spain, 1856: with the pretext of the high price of bread and for lack of work, the workers of Valladolid and Burgos rose and burned flour stores, mills, and inspection offices. The civil governor intervened to put down the rising, but the rebels overwhelmed him and attacked the chiefs of his forces. The sacking and burning continued. Most likely, as a consequence of the spreading of news of the incident, disturbances also spread to the countryside and to other cities.
The governor of Palencia tried to hold back the uprising in his city, but he had to retreat before a hissing crowd. In Benavente, Rioseco, and along the Castille Canal, the disorders recurred. They had the characteristics of the old type of rebellion aimed at speculators and hoarders, among whom the masters of workshops were counted. In their hatred, the insurgents set fire to shops and storehouses with the cry of, "cheap bread, cheap bread," and attacked the boats that served for the transport of grain, as well as putting the torch to grain not yet harvested in the fields. If you back up almost a century, to France, look at the memoirs of a royal official from May 3, 1775: "The musketeers, who had been warned the day before, hurried to the markets. The fleeing rioters overturned baskets full of bread and blocked the way to the horses. It was 9:00 a.m. The watch was supposed to be getting its orders at that hour and the people had already gone to the bakers and seized the bread they found in the shops. That pillage had a special character. People did it without violence. The shops of the bakers were emptied and those of the pastry makers and the dealers in other foods, which were equally exposed, were left untouched." Or 1816 in England, East Anglia: in early summer came the surprise that the agricultural laborers of East Anglia (whom we will come back to in a little while) had come out in revolt. Conditions had worsened since the end of the Napoleonic war and riots and disturbances were everywhere in the towns. The point is that I could give you examples from virtually every country over a very long period of time and the grain riot would dominate. This is France-based, but it's true of almost everywhere--again, do not write this down--men fight for food in the following years particularly.
They come in waves: 1693-94, 1698, 1709-10, 1728, 1739-40, 1749, 1752, 1768, 1770, 1775 (a big one), 1785, 1788-89, 1793, 1799, 1811-12, 1816-17, 1829-30, 1839-40, 1846-47, 1853-54, and then never again. By never again, I mean in France never again, and slowly the grain riot disappears as a form of political protest, of collective violence or collective nonviolence, depending on the case. That's the big question. What's going on here? Popular protest is a way of finding out what's going on when you look at all this stuff. Why do grain riots disappear as a form of collective violence? Why? Here's a placard, that is a poster, scrawled in the town of Vaville in the West of France in 1709. "We are dying of hunger. We must absolutely order you to set prices on bread and grain or else we will break from our homes like enraged lions, weapons in one hand, fire in the other." Arson--fire--is, by the way, one of what my friend Jim Scott at Yale calls weapons of the weak. A match to a harvest or to the roof of a chaumière, a thatched cottage, can do some serious mischief. What these grain rioters--very ordinary people: men and particularly women (remember, women are responsible for the household economy), also young people and children--want to do is put forth claims. They want the government, the monarchies, the administrators, the officials, the intendants, the governors, the sheriffs in England, to set the price of bread, as Maximilien Robespierre and the Jacobins had wanted them to. Why? To set the price of bread. To keep the price of bread low so that everybody would have access to buying bread. Bread--literally, that commodity, whether dark bread in the poorer regions of central Europe or in the south of France or in parts of Spain and southern Italy, or white bread, which is more associated with more prosperous peasants--represented more than half of the expenses of ordinary people; not just food, but bread. Bread is what people ate.
Black bread in the poorer areas; white bread in the wealthier areas, to make a generalization. There were all different kinds of bread. Here's how I can approach this. In the town of Liège, which is in Wallonia, that is, in what's now eastern Belgium--Liège, famous for its coal among other things, and now part of the rust belt of eastern Belgium--they had a municipal statute, that is, a municipal regulation, from about the fourteenth century. The date 1317 sticks in my mind somewhere. I must have read that decades ago. It said that on market day merchants from other places would not be allowed in to buy grain until the third day of the market. By the way, the term they used for merchants was engrosseurs, which has a sense in French of people who would make themselves fat. Why? Because they could afford to buy bread at a price that nobody else could afford. They wouldn't be allowed in until the third day of the market. But, of course, that's not what happens through most of European history. The pressure from these crowds, the logic of these crowds, is to force municipal authorities in the name of order--but perhaps, who knows, in the name of justice--to set a price of bread so that everybody could have a shot at it. As some wag once put it, criticizing this kind of social history, "Well, it's pretty obvious that grain riots occur on market days in areas in which there is grain being exported out of a region." But that is precisely the point. Peasants or townspeople--ordinary people--don't necessarily riot when the price of grain reaches its absolute maximum. They riot, they seize grain, they pillage shops, at the moment when they see grain being taken out of the community, removed from their sense of moral authority over something upon which they depend to live. Women, as I said before, played the major role in grain riots. Why? Because they're responsible for the household economy.
Right through this whole period there's a familiar scenario. People pour into town for market day. They see the stagecoaches, the diligences, the wagons carting the grain away and they stop it. They stop it. It's the same format everywhere. It's as if you had some sort of Internet, or CNN, or something like that telling people, "Here's how you grain riot." But they don't rip off the grain and they don't rip off, as the one example I gave you, fancy pastries and stuff like that. They take the grain often to a communal piece of property, such as the commons or the shed, the covered market. Some of the most fantastic examples are in the south of France, but all sorts of places, too. They sell the grain to ordinary people at what they consider to be the just price. They use that expression, "the just price." There's a sense of moral outrage that some forces they can't control are taking away what they need to survive. In 1789, a year you know, what's the big collective action in Paris? It's the seizing of the Bastille, of course, but above all it's the attack on the customs barriers, the tax offices that ring Paris, which forced the price of food, grain, bread, everything up higher. They attacked them as a symbol of what they considered to be an unfair economy that's depriving them of the right to have enough to eat. So, these wagons that are carrying grain away, who's in these wagons? Who are these folks and what are they doing? They're merchants and they know that when they're buying up grain, where are they taking it? They're taking it to Berlin, or Stuttgart, or Munich, or to Milan, or to Paris, or to Lyon. Why? Because that grain will command even a higher price there, where you've got all these people. What is the interest of monarchies and other forms of tyranny, if you will? It's their interest to feed the cities first. 
The growth of cities, the growth of bureaucracies, of the state, the growth of garrisons who have to be fed increases the pressure on grain in times of harvest failure. Grain riots not only follow the timing of markets, when grain is leaving the town; the subtext, obviously, is these cyclical harvest failures. The harvest fails. Credit is withdrawn. The price of bread goes up and the riots start. If you look at where the riots start in any of these countries that I've talked about, it's in response to grain being taken out of rural regions and taken to cities to get higher prices. You've got the merchant on the wagon, too, and you've probably got his driver. Who else do you have there, increasingly? You've got the Guardia Civil in Spain, or you've got various police in the Italian states, or you've got the tough, hardened Berlin police, or the Prussian army, or you've got the maréchaussée, as they were called in the eighteenth century, the gendarmes in the nineteenth century. Here again is a way of looking at this theme. The state and capitalism are on the wagon here. You've got the merchant and you've got the police guarding him. That's the dynamic duo of change over the long run. To be sure, people who have big plots of land in Pomerania, or in northern Italy, or in the Beauce south of Paris, around Chartres, or someplace like that, these people are not out grain rioting. What are they doing? They're hoarding. They're waiting until the price of grain goes even higher. That's why a form of popular protest, of collective action throughout this whole period, are attacks on hoarders. 1789--you read about it in the book. The famine plot: the idea that wealthy aristocrats are trying to starve out the poor to get their way, and that hoarders have huge sacks of grain, which they often did, in their chateaus. They're holding it from the market and laissez faire says, "Let the market decide the price."
"Okay, let's keep that stuff back. People will go hungry. Too damn bad for them. They should have more money." Over and over and over again in all of these places the grain riot is the most important form of collective violence, of popular protest. Then it just disappears. Again, France is the most studied, but it disappears earlier in Britain. You'll know why already. You already know. I'll tell you again in a minute. There ain't no peasants left in Britain. In France, the last wave is 1855. Now, there are protests at the end of the nineteenth century against the high cost of food. One shouldn't imagine people surrounding the Stop N Shop and blocking it with their little pushcarts that they're putting their frozen food into, but they're the equivalents of that. There are protests against the high cost of food. Food still is an important dynamic in protests. In World War II, for example, irritation about the rich doing even better than ever, and rationing cards and all that business. But the grain riot simply disappears as a form of popular protest in Europe, period. It doesn't mean that bread wouldn't be terribly important in the Russian Revolution. It is. There are riots in Russia against the high price of bread. But, beginning in Western Europe and moving east, the classic, the quintessential expression of popular protest just disappears, period. Why? The battle's been won. The merchants, the police, the gendarmes, the troops, they're already there. Beginning again west to Eastern Europe, you've got the depopulation of marginal rural property, marginal rural lands. People can't make it anymore producing little bits of this and that, and they plunge themselves into the ever-increasing urban world in order to find work. And, so, if you look at what people protest over, we can see this big economic change. The nineteenth century just transforms the way people live. This is for sure.
The nineteenth century didn't invent consumer culture. We already know--Jan de Vries has just published, within the last month or two, a brilliant book showing that ordinary families made all sorts of sacrifices to try to improve their lives beginning in the middle of the seventeenth century, participating in this consumer culture, buying soap, buying forks, and that kind of thing. But for these big-time economic changes, the nineteenth century is the crucial period in the whole thing. The disappearance of the grain riot as a form of popular protest is a fantastic demonstration of that fact. You don't want to complicate the thing or undermine the validity of what I've said by imagining, "Well, that's all pre-modern, this kind of protest. Then we've got modern protest and more strikes and all that." It's true that there are more strikes, and strikes become another classic form of political protest, but the world of the nineteenth century was changing. The big losers in all of this are rural people, rural laborers, peasants who simply couldn't make a go of it, and their world was transformed. That's part one of this trilogy. Secondly, let's look at the Swing movement. These next two things I'm going to talk about take place really at the same time. That's not a coincidence. It's not a coincidence at all. The first story is that of Captain Swing. I alluded to this the very first day, those of you who were here. It was in 1829-30. Those are the big years. The Demoiselles of the Ariège comes at just the same time. They are similar and fascinating. I think they're fascinating. I hope you will, too. They have so much in common with each other and with what I've just been talking about. It's all about winners and losers in the brave new economy of high-powered capitalism and all its incarnations in Europe, big and small. We see the faces, the remainders, of this kind of economic change. Again, the people who worked on Captain Swing were George Rudé and Eric Hobsbawm.
How do I put this? There was a real person called Ned Ludd in about 1816-17 in England. The English word Luddite means someone who's a machine breaker. Luddism is machine breaking. You break machines because machines are putting you out of work. You're a handloom weaver. There are famous cases in Silesia in Germany. Then machines for glassworks, and then later in the century the things I talked about last time, were putting them out of work. Ned Ludd busted up machines. He broke machines. 1829 was an awful economic year everywhere in Europe. A freezing winter and real hunger in Britain. That's my example. It's from Britain, from England specifically, from the south of England even more specifically. They found people who had starved to death in the fields with only dandelions in their stomachs, with nothing else to eat. If you were a Frenchman, or a German, or a northern Italian, and you went to England in 1829 or 1830, you did the reverse of the Arthur Young trek--Arthur Young was always wandering through France and discovering people he thought were sixty-five or seventy. It turned out they were twenty-nine. They were so battered and beaten down by hardship. There's a famous case of a woman he met in Champagne, near Reims. Arthur Young saw all of these peasants in Europe, but the counterparts of Arthur Young, people from the continent going to England, were amazed. There weren't any more peasants, virtually no peasants. You'd define a peasant as a small property owner existentially committed to the land--that's a little bit overly fancy--dependent upon a lopin de terre, a small piece of land, for family survival, the household economy. There weren't any more peasants left in England. There were gentry, including big property owners who were masters of all that they saw before them, in the portraits they had painted of themselves. You had gentry. You had yeomen, who were sort of smaller versions of gentry.
You had middle-rank property owners and this sort of thing. You had wealthy people from the city wanting to live in an aristocratic way buying as much land as they could take. There was no place in Europe in which such a small percentage of the population owned so much of the land. That's still true today in Britain. But there weren't any peasants. Why? Because the big fish had eaten the smaller fish. And because, beginning in the sixteenth century, the enclosure movement, which you've read about, I think, meant that basically, no surprise, the big guys get the law on their side. Parliament passes thousands of acts of enclosure which allow people to enclose and divide up the common land. The big fish eat the smaller fish and the peasantry is basically destroyed. What you have, as I said before in another context, in England, is you've got all sorts of textile workers. You've got all sorts of governesses and domestic workers, and you have hundreds of thousands, millions of agricultural laborers. You have agricultural laborers in other countries, too. But you also have small property owners. There's hardly any of those folks left in England. To return to the story, in 1829 ordinary people start participating in protest and collective action. They start threatening, and smashing, and burning threshing machines. Why threshing machines? Because threshing machines are taking their work. The way they survive is during harvest they go from place to place working, the way people still do in the wine harvest in the south of France, working from place to place bringing in the harvest, prosperous agricultural land in the south of England. This is how they get by. They don't live very well. They don't do very well, but they're lodged. They're fed. They have a crummy place to sleep, but they can do okay. Then the big guys, the big farmers start buying threshing machines. The threshing machines do the work of these people. 
Next time the guys come along in groups of ten, twenty, thirty--families, pals, friends--they come around and say, "It's harvest time. Here we are. Nous voilà." "Sorry, ol' chap. We don't need you. Maybe a couple of you, but we don't need you. We've got these machines. They do the job that you used to do. We don't have a problem with them working hard all the time. The machines work all the time at our command. See ya!" Are they mad? They're furious. They start burning these machines. They start smashing down the gates and going in and burning these machines. Then people started to find posters that had been written--scrawled posters, sometimes barely literate, because this wasn't a literate population. They begin to talk about a mythical figure like Ned Ludd, except this one wasn't real. My favorite--I had this former girlfriend a long time ago at Mighty Michigan, and we were going to write a screenplay about this. We never did. One of them said, they all said, "If you don't get rid of the threshing machine, your agony will begin. We will destroy your machines. We will burn you out." But the best that I ever saw was, "Revenge for thee is on the wing from thy determined Captain Swing." Who was Captain Swing? He was a sense of popular justice. He was what used to be called the moral economy. He did not exist. He should have. He still should exist. He did not exist. But Captain Swing gave this kind of paramilitary sense to "We are many. We are correct. We are right. We have God on our side. We are organized. We will win." The subtext is, "Maybe we better negotiate and see. Maybe you keep a few machines around, but we want our jobs back." Captain Swing was everywhere. He was in Kent. He was in Cambridgeshire. He was in Devon. He was in Wiltshire. He didn't exist, but he was everywhere, at least in the popular imagination. By the way, the Captain Swing folks had some allies.
They were the smaller farmers who couldn't afford the big-time threshing machines, and they thought, "Maybe if they burn the machines of my ravenous neighbor, that wouldn't be the worst thing in the world." So, they get moral support from folks like that. They spread. They're in all sorts of places. Do they win? Are you kidding? They lose. The sheriffs come in--in a country that only had a police force starting in 1829, in London, and that didn't like the idea of uniformed armies. That was something the French and the Spanish had. They bring their military contingents and they beat the hell out of these people. They put them on trial. They hang some of them, not all that many. They send lots of them to Tasmania or to Australia. "We all live in a convict colony," as they sometimes sing in Australia whenever they play the "poms," as they sometimes call the British. They send them a long, long way and they defeat them. It's bitter hard to write the history of remainders. It's fun, though, too. It's fun. Captain Swing disappears. The big guys win. Enclosure keeps rolling along. Tens of thousands of people are on the road. Oh, they get a little victory. The poor law of 1832 was probably influenced by this perceived threat of ordinary people voting with their feet, threatening but also negotiating, cajoling, trying to imagine a time in the past when everybody had a shot at doing well enough. The same thing as the grain riots, imagining a time when there was a just price for everything. Captain Swing was really part of that. Obviously, I find what they did thrilling, but I probably shouldn't say that. And, of course, they probably had a lot of time to think about what they did on that extraordinarily long trip to Port Arthur, not the Port Arthur in Asia, but the Port Arthur in Tasmania, or to what would become New South Wales or Victoria in Australia. But Captain Swing disappears. He never existed, but he sure should have.
A long time ago, when I was starting out, I was working in the archives in Vincennes on the edge of Paris, the military archives in the big Chateau de Vincennes. I was working on other stuff and I kept finding these incidents that were occurring at precisely the same time as Captain Swing, 1829-30, not in England but in France. I kept finding these reports from a part of France--I didn't bring a map, and I can't draw worth a damn either. Imagine la belle France. Nice, huh? Mountains. Toulouse here. Bordeaux down here, voilà. In a department called the Ariège, a mountainous department, I kept finding these reports. The capital is Foix, but that's in the plain, kind of. Then you go up in the mountains. You've got real serious mountains. Now people drive through those mountains to get to Andorra, to buy cheaper pastis, and cigarettes, and that kind of thing. It's very beautiful there. It's also full now of people from 1968 who went off and formed communes in the Ariège. Sometimes you'll see some of them staggering around there. I kept finding reports from the police, or from the gendarmes above all, or from mayors, saying that men dressed as women were coming down the mountain in the mists, and fog, and snow, armed with pitchforks, armed with rifles, and were chasing away two groups of people: charbonniers, charcoal burners--forest people cutting down trees--and forest guards--those people guarding the forests, employed by the state, or by the rural bourgeoisie, people living in Foix or Toulouse who owned a lot of land in the forest, or by communes if the forests were communally owned, to try to keep out ordinary people. These people were coming down the mountain taking shots at them, yelling nasty things at them, threatening them with pitchforks, and trying to drive them away. And then they would find notes saying, "If the forest guards and the charcoal burners come back, your agony will begin. Signed Jeanne, Lieutenant of the Demoiselles."
A demoiselle in French is obviously a young woman. These became known as the Demoiselles of the Ariège. I found so many of these in 1829, 1830, 1831, a few in 1848, a couple in 1872, and never again. Who were these people and what were they doing? What they wanted--again, think of Captain Swing and think of the grain riots--what they wanted was access to the forest. They'd always had access to the forest. It's cold in the Ariège. You need wood that's gleaned from the forest to make fire to stay alive. You need berries, roots to eat. You need places to pasture your animals: pigs, which eat right down through the root, goats like we have in our village in Ardèche, or sheep. If they're really rich, maybe cows, but that didn't happen very much there. The rich guys owned the cows. The peasants didn't own the cows. They'd always, for centuries as they well remembered, had access to the forest. Why did they lose access to the forests? They're looking back at an imaginary time. It wasn't imaginary. They always had had access. There was lots of forest--the whole place. Deforestation had been a big problem in France since the seventeenth and eighteenth centuries. But the Pyrenees have many forests, many mountains. They'd always been able to go there as they wanted. They didn't own the forest, but use and property were not distinct categories that meant anything to anyone. They're told they can't go there anymore. Some of them go to churches and they're looking for deeds that would have given people of those villages rights to be in the forest centuries ago. No. They don't find them. So, why can't they go in the forest anymore to pasture their miserable animals or to find something to eat or some fuel? Why can't they do that anymore? Aha! We're talking capitalism and the state. The price of wood has increased. Why? The metallurgical industry--what they call Catalan forges, as in Catalonia, small metallurgical operations. The price of wood goes up.
Suddenly the people that own the forest say, "Hey baby, we don't want those peasants and their animals in the forest anymore." They start hiring forest guards. They start hiring charcoal burners, the charbonniers, who are chopping down the trees, slicing them up, as people still do in the Jura, or in the Black Forest, or do anywhere that you can think of in Europe. In Oregon, where I'm from, logging was a good way to make money. I never did it, but when you're in college. And, so, the wood is leaving the forests. People are getting even richer and the peasants are out of luck. Why are they out of luck? Because what the big-money people do that own the forest is they do exactly what the wealthy did in England, the big landowners. They get the law on their side. What a surprise. They have their lobbyists. They get the law on their side. They pass a new forest code in 1827 that keeps ordinary people out of the forests to which they had always had access. They always had access. They can't go there anymore. They go up there and they find armed guards there. They are many and the guards are few and they scare the hell out of them and they drive them away. The mayors of these little villages, like Massat--I love to go there. I've written about it. The mayor of Massat was one of them. He's in a difficult position. He knows damn well who is causing the troubles, les troubles, in the forests, but he's got to live with these people. He isn't going to be telling on anybody. I followed this through. I read this and wrote something about it in a book I did a long time ago called 1830 in France, an edited book. So, what happens? Who is Jeanne, Lieutenant of the Demoiselles? Jeanne. Who is she? She didn't exist either. She's exactly like Captain Swing. She says, "We are many. We are right. We have justice on our side." By the way, these people did not speak French. It was hard for them to find somebody to write in French "Your agony will begin," because they don't speak French. 
They spoke a patois that's very much influenced by Spanish, that's not even really like Catalan, and certainly nothing to do with Basque--which doesn't have anything to do with anything, except vaguely Hungarian and Finnish. They write these things and they said that Jeanne, the Lieutenant of the Demoiselles, will toast you one day if you don't leave the forests. What they want is they want the government to restore their rights in the forest. Does the government do that? Are you kidding? Of course they don't. Now, 1830 comes along, "revolution, liberty, fraternity, equality, red, white, and blue." What do they do? They say, "Well, this liberty must mean that we can have our forests back, doesn't it?" One of the interesting things about this is they become petitioners just for a little bit. They get people who can write French to petition saying, "We hear about this liberty in Paris. That surely is our forests, isn't it?" Un-uh. The government says, "Oh, no. The Forest Code of 1827 is evermore in use and you can't have access to the forest." The forest guards return. The forest guards return to the forest, but so do the demoiselles. They begin to dress up as women again. They lose. What a surprise. They're driven away. Why do they dress up as women? One thing they do, mocking the charcoal workers also, they put charcoal on their face and they wear sheets. They try to make them look like dresses. Why as women? It's more than a disguise. What it is is an enraged carnival. If you think of carnival, think of Mardi Gras, anywhere in Christian Europe, what you do during carnival, look at the floats in New Orleans when everybody's getting wasted and dressing up in various things. The old version of that was that you dressed up like your exploiters. For the three or four days you could mock the judge who handed out unfair sentences. You could mock the big fat noble who had seigneurial rights over you. You could mock the gendarmes, who you thought treated you badly. 
You could pretend. It was a carnival, but an enraged carnival. During carnival you stand the world on its head. What you did in this case is you dressed up as women, reversing reality from their point of view. This enraged carnival works rather like the charivari, the shivaree in English, in which you pound on pots and pans outside the house of a couple or a woman, for example, who has married somebody from another village. You try to set things right again by pounding and calling attention to a misdeed that has violated the communal sense of justice. That's what these people are doing. They're saying, "Respect us. Respect justice." And, so, they're dressing up like women. They're standing the world on its head. It's an enraged, deadly serious carnival. They'll shoot at these people and beat the hell out of them when they catch them. So, Jeanne, the lieutenant of the demoiselles, represents justice and this kind of acting out of this enraged carnival. It's more than a disguise. But they don't win. Captain Swing doesn't win, either. They're arrested. They're put on trial. Some of them are put in jail. But in the end, they come back at the end of 1830, 1831, some in 1832, if I remember right, 1848 they're back, 1872, but then never again. The Ariège as a department depopulates rapidly in the second half of the nineteenth century. These people can't make it in the forest anymore. They can't go into the forest anymore. They can't survive with their little plots of land. So, they bail out. They get out. Their great-great-great-grandchildren, many of them work in the aérospatiale, the aircraft factories in Toulouse, or go to Bordeaux, or go to Paris, or go to Agen, or end up somewhere else. 
The demoiselles, like Captain Swing, like these grain rioters in all of these countries, in Spain, in Britain, in Prussia, everywhere, these are the remnants of what people viewed as a traditional way of doing things that at least was infused with a sense of the proper, a sense of popular justice. The demoiselles tried to stand the world on its head during carnival, and tried to get people to do the right thing, to return the forests to them, to get the threshing machines the hell out of the big farms in Kent and other places, restore grain and bread to a reasonable price so everybody could have a shot at buying some. But in the end, they lose out to this dynamic duo, this more powerful duo, the state and capitalism. It's bitter hard to write the history of remainders.
European Civilization, 1648-1945, with John Merriman
4. Peter the Great
Prof: Ok, I want to talk about Peter the Great today. The Russian empire is one of those empires that continued, arguably--not arguably, it was the case--after the four empires disappeared with World War I. The Russian empire continues, though it continues in a very different way as what became the Soviet empire. That's what it became. With the end of the Soviet Union in 1991-92, the Soviet empire collapsed, and I guess the only remaining empire in the world is that of the United States, which is a more informal empire, but still one that is out there almost everywhere with the military bases everywhere. So, the rise and fall of empires is obviously a theme of this course. The state of Muscovy had already expanded greatly, but it's really Peter the Great, the big guy, who expanded Russia's territorial size enormously. Muscovy had been one of the tributaries of the Mongols, who sacked Kiev in the 1230s, after pouring into Russia and what now is Ukraine. Muscovy was a princely state. It gradually expanded in size, reaching to the Southern Ural Mountains and the Caspian Sea, and emerging as a dynastic state. But Muscovy was considerably less important than the Commonwealth of Poland-Lithuania, which was much greater, and Muscovy was subject to struggles with and influence by that state. What Peter the Great did was push back the neighbors who had blocked the expansion of Muscovy: that is, Sweden--whom he defeats in a battle worth noting, at Poltava, it's in the book, 1709--Poland, and the Ottoman Turks. Peter the Great expands territory beyond the Ural Mountains and along the Caspian Sea at the expense of the Turks. Like all of his successors, he dreamed of conquering the Turkish capital of Constantinople, that is, Istanbul, which would have given him control of the Dardanelles Straits there, the passage between Europe and Asia leading to the Black Sea. 
All this stuff really matters, because of all these events now with the problems in Georgia. That's a very, very delicate, strange situation, where the great power now, the United States, finds itself rather incongruously arguing that Kosovo--which should be, obviously, independent--I remember going to Pec in Kosovo when I was a kid--it's a little hard to argue that Kosovo should be independent and that what was the territorial unit of Serbia should not be respected. I agree with this, Kosovo should be independent. And to argue that Tibet should be free, and I agree with that. Then to turn around and argue that the people in Georgia, who are not Georgians, should not have the same rights that the people in Kosovo have. The whole thing--the presentation in the press is absolutely hypocritical and just bizarre. But the only point of that little diatribe in parentheses was that the Black Sea really does still matter a lot, and that Peter the Great was the first of the Russian czars to dream of this access on to the Black Sea and then finally controlling the Straits of Constantinople. And Catherine the Great--they all like to call themselves "the Great"--she would make this an important part of her policy, but she doesn't get there either. In the nineteenth century, the Russian czars are still trying to get there as well. Peter's new fleet, we'll talk about his new fleet, which he oversaw and, in a very minor way, helped build himself, sails down the Don River in 1698 and takes the Turkish port of Azov, A-Z-O-V, on the Sea of Azov. This gave them access to the Black Sea. But then they're forced to back up. They lose, and they're forced to surrender Azov to the Turks after an unsuccessful war. So, Peter the Great, despite his dramatic expansion of the Russian empire, does not get this outlet on the Black Sea. But what does change, and what the battle of Poltava represents, is this: Russia's participation in European affairs had been totally minimal. 
There's a story often told, indeed told in the book that you're kind enough to read, that--I think it was Louis XIV's equivalent of minister of foreign affairs--sends a formal letter to a Russian czar--it couldn't have been Louis XIV, but one of those dudes--it might have been--he sends a letter to a czar who had been dead for twelve years. Russia was that far away. It was not in the consciousness of the great powers. But after one of Peter's victories, the Russian ambassador in Vienna reported that, with the news of Peter's victory, people began to fear the czar as they had formerly feared Sweden. It's very difficult for us to imagine a fear of Sweden. But that fear--the last real--Gustavus Adolphus, during the Thirty Years' War, was the last really major Swedish interlude in continental European affairs, although Poltava doesn't come until 1709, so voilà. But what he does is this: no other European state expanding its empire--overseas, in the case of the Spanish and the English--adds so much territory on land to its empire. Between the 1620s and the 1740s, the land of the Russian empire increases from 2.1 million square miles to 5.9 million square miles. Now, to be sure, in Siberia in the far reaches of the north, North Asia, this empire amounted to little more than a series of trading posts, and it took a very long time for any semblance of Russian authority from Moscow and soon from St. Petersburg, for reasons that we'll see, to reach there. But nonetheless, Peter the Great creates this huge empire that will have, over the long run, an enormous influence in European affairs. Because, after all, European Russia is part of Europe and will have an enormous influence on Asian affairs as well. Witness the Russo-Japanese War in 1904-1905 that the Russians lose. This is a key moment in the evolution of revolutionary politics in Russia as we shall see a little later on. Now, what about Peter the Great? 
I used to ask my friend Paul Bushkovitch to come in and give the lecture, and I got so interested in it that I did some reading on my own--Lindsey Hughes and various other people--and put this together last semester, because he is not a terribly engaging or warm personality in many ways, since he enjoyed watching people being tortured, including his own son; but he is an interesting person. One of the things he does, and I guess this is one of the things to be put in neon from this lecture, is he opens up Russia, which had no secular influences at all, to western ideas. This is an extraordinarily important and transforming accomplishment of Peter the Great. He emerges from the violent world of the Boyars and royal politics. Boyars, B-O-Y-A-R, are the Russian nobles, as the Junkers are the Prussian nobles. Peter was the first child of his father's second wife and thus a potential threat to the ambition of the relatives of the first wife. His mother and her allies among the Boyars, that is the nobles, overthrew the regents in 1689. Peter's rule is from 1682 until his death in 1725. There were no strict rules for the succession of the czar. It's basically just kind of an unseemly family battle royal, in which there were bloody settlings of scores. There was no foreign minister. The Boyars' council, that is the council of the nobles, literally met in the throne room of the palace. It became known as the Duma. Eventually in 1905, Nicholas II will be forced to grant a Duma, an assembly, to Russia. Then he withdraws, eliminates basically all of its rights, and then the Duma will come back later on. Peter is one of those cases in European history where one person's personality and interest does make an enormous difference. He is an absolute ruler. He can do what he damn well pleases. He is personally responsible for the reforms, the opening up of Russia to a considerable extent to western European ideas. This is something that he does himself. 
As a boy, he was very smart and he was very interested in science. He always wanted to know how things worked. He was fascinated with astrolabes and was interested in sailing. Russia doesn't have a port--it does have a port, but it's frozen all the time--so he learns about sailing on lakes, and ponds, and rivers. His travels took him into contact with observatories, museums, hospitals, botanical gardens. He's fascinated by gardens. When he goes to Europe on his big sortie, on his big-boy trek through Europe, he goes around and he visits all of these botanical gardens. He sketches things. He's constantly sketching the way things work, the way things are. He had this intellectual curiosity that defied the kind of orthodox religious skepticism about any kind of rational belief. This permeated not only the Russian Orthodox Church, but it permeates the Catholic Church as well. Look what happens to Galileo, who was lucky enough not to have been burned at the stake by his friend, the pope. He was interested in math and in geography, and thus in maps and map making. What he does is he takes this archaic state structure in which literally nothing had been written down. It's all just passed down by word of mouth. He transforms Russia into a European absolute monarchy with much in common with Frederick the Great, with Sweden, with Austria, the Austria of the Hapsburgs, with Spain, and with France. He tries not only to copy European absolutism, but to open up Russia to commerce, realizing that trade meant wealth and that wealth meant improvements in the lives of the Russian people. More about that in a while. He makes Russia a military power. Indeed, arguably a modern military power, at least in the seventeenth and early eighteenth-century sense, and he injects European culture into Russia. 
Now, just as an aside, but it's one that we'll come back to particularly in your reading: there is a tension in Russia between an absolute repugnance for western influence and the constant assertion that Russia's traditional ways of doing things are the right way of doing things, a position long identified with people who would be called Slavophiles. Their tensions with westernizers would last right through Russian intellectual history in the nineteenth century. I remember when I was a student, one of the biggest classes at the University of Michigan--go blue! I am so sad this weekend--was Russian intellectual history. We would read the Slavophiles, and we'd read the westernizers, taught by the late scholar of Russia, Arthur Mendel. It was fabulous to read these people as they debated what will happen to Russia, the westernizers and those in between. A lot of them were writing from Paris in the nineteenth century. Some of them were great, great writers. There was this kind of intellectual energy. But it came down to this theme that still is so important, and was already so important, which is, what is Russian that should be kept uniquely Russian and closed to outside influence? And what is Russian that should be modified by being open to non-religious influences that come from other places? This is not just a uniquely Russian tale, as you can see. What about Peter himself? Somebody figured out that he was at least 6'7". Now, that's just huge. That's tiny in the NBA now. But the guards, the "giants," they were called, who guarded Frederick the Great were giants because they were six feet tall. The average person was about 5'3" or 5'4" in France. Napoleon, who was always considered to be kind of a dwarf, a midget, really wasn't at all. He was just sort of increasingly corpulent, and he was the average height of most people in France. Peter the Great was a big guy, 6'7". 
He had extremely small hands, very small feet, which meant he sort of lurched and stumbled sometimes when he walked, particularly because he drank enormous quantities. He had these odd kinds of facial tics that he couldn't really help at all. We don't have a documentary showing Peter the Great in action, for obvious reasons. But the people that he visited when he was snatching huge roasts off the table at the fancy parties in London commented on these facial tics that he had that would bend his face. He had a misshapen lower lip and his head sometimes when he was talking would seem convulsed to the right. It would move to the right all the time. He was so much bigger than everybody else, so people really were not taken aback, because this was a time when physical imperfections were commonplace. You couldn't go anywhere without seeing people to whom nature had given, in many ways, a very bad deal along with crushing poverty. But the fact that Peter was a czar and was rather scary in his temperament, because he had a rather bad temper, too, meant that he could be alarming. He could be scary. His generosity was legend, but so was his cruelty. Several times at public executions in 1698 and 1699, he brushed the executioner away, grabbed the axe and did the dirty work himself, chopping off a head or two. In 1718, he ordered a man kept alive after being horribly tortured, so he could be tortured some more and suffer as long as humanly possible. All officials from the chancelleries that he had created and all of the officers were obliged to attend this torture scene as a way of warning them that, "You'd better not mess around." Remember, this is a time, as we'll see in a while, when the czar was on the road a lot, and distances in the Russian empire are enormous. When he is gone from Moscow and then St. Petersburg, there was always this tendency to have these sorts of cabales, to get together and sort of plot. 
His son has the bad idea of getting involved in this later, as we shall see. Yet there were incidents that his merciful side came through as well. But when it came to treason, he was less likely to be merciful, as the case of Alexis, his ill-fated son, would demonstrate. One thing is clear. Peter the Great had an enormous ambivalence about his role and his image as a czar. His second wife was a Latvian peasant maid. This horrified the Boyars, who thought that this was unbecoming. How can you marry a Latvian to begin with, if you're a Russian, and marry somebody who was a commoner? He was capable, and there are a lot of paintings of him dressing up and playing the role of a czar, dressing in fancy clothes. But there are more images of him identified with horribly worn boots with his toes sometimes sneaking through at the end that seemed to reflect his great personal thrift, wearing stockings, it was said, that he darned himself, and a battered hat that he kept on wearing that he had worn at the Battle of Poltava in 1709, complete with a bullet hole that supposedly tore through it, missing his head. He liked the company of ordinary people. This was a constant trait. He identified himself with the Russian people. More about this in a while. He avoided carriages. He liked to walk. He would leave the carriage behind, dismissing carriage drivers who were better dressed than he was, and his guards. He ignored carefully-crafted seating plans at dinner. He jumped from table to table, eating standing up--he didn't like to sit down very much, his back hurt--or walking around. He couldn't stay still very long. When he was lodged, he liked living in your basic Russian, wooden, peasant house, such as you could find on the outskirts of Moscow. One of the things that's very true about Moscow, right into the twentieth century, is that you had all sorts of peasants living on the edge of Moscow living in these wooden houses. He liked that. 
He said he slept well in these wooden houses and that probably is because of his very unhappy childhood listening to relatives shouting at each other and plotting to kill each other in the big house. And he never liked Moscow. We'll see more about that in a minute, in part because of its overwhelming religious influence, and that's one reason, besides the quest for a port on the sea, why he builds St. Petersburg. When traveling abroad, he refused the fancy lodgings that were reserved for such distinguished visitors. In 1717 in Paris, Lindsey Hughes reports, he went to a private house instead of the Louvre palace. The Louvre, now the museum, was a big palace, which is, as I said before, where the small boy, Louis XIV, lived, and then was burned in 1871. Instead he just went and rented a private house and fell asleep almost immediately. He loved sleeping on ships; he thought that the rocking of the waves rocked him to sleep, so he liked doing that. Legend has him eating peasant food--cabbage soup, porridge, meat with pickled cucumbers, ham and cheese, and kvass, which is a drink made from fermented black bread that I've actually tasted. But he was not indifferent to foreign food. Somebody found an order. He ordered 200 bottles of Hermitage wine. Hermitage is a really wonderful and now just horribly overpriced wine from the Drôme, on the left bank of the Rhone not very far away from where we live. But it's totally out of price. But Hermitage was a wine that was known by connoisseurs in the seventeenth and eighteenth century, and ever since then. For all the talk about eating cabbage and eating with ordinary people, even then Hermitage cost a lot of money. He ordered 200 good bottles of Hermitage wine. So, voilà. I wish I had been invited to these things. He was very informal. 
There is this legend that he used to grab these huge chunks of meat and start gnawing on them like he would on a hotdog or something like that at a Yankee game, except that other people were already dressed up and the meat and the gravy was flying over them as he just sort of walked around gnawing on this stuff. Sofia Charlotte of Brandenburg--I don't know who the hell that is, but it's got to be some royal hanger-on--wrote that "It is evident that he has not been taught how to eat properly." But she liked his natural manner and informality. King Frederick of Denmark found him ill-mannered and inappropriate. He liked to masquerade. Again, this is his personal ambivalence about his own role, hanging around with ordinary people, sleeping in peasant lodgings and that sort of thing. He liked to dress up as a sailor. One thing that he always did when he went on his grand tour is he always took fake names, as if he was signing into a hotel as Mr. and Mrs. John Smith. He would take the name of a commoner. He didn't sign into a hotel as Peter the Great or whatever, but he would go in with a name that wasn't his. Again, this is just his personal ambivalence about who he was, and his uncertainty about the role that he had in his own family. So, this was a play-acting that was part of his life. These ornate masquerades and charades that are part of his complexity reveal something about his identity. Then when you see these portraits of him, he looks very czar-like. He looks like one of these royal people that were painted all the time. In 1697 he goes west with the name of Peter Michaeloff. Even in his own account books, which he kept very carefully, recording how much money he spent--he wasn't radin, he wasn't a cheapskate, but he paid attention to royal expenses, personal expenses--he refers to himself with a variety of names and titles, as captain this, colonel that, general that. He's not trying to hide his identity from the future. It was very obvious who he was. 
His handwriting is recognized by experts. But role-playing takes on dimensions of the state. I mentioned, I don't remember in what context, the other day that the most famous example is the drunken assembly. It's sort of a mock parallel government with his buddies. It has personnel who are his buddies, those in favor, lots of eating, drinking, et cetera. It has its statutes, sort of the mock constitution, and its rituals that involve basically getting wasted. Again, this is part of his split personality about who he was. If Hughes is correct, this is part of him saying that being a czar is more than just dressing up, and playing the role, and going to fancy dances, and hanging around with fancy people who don't do a damn thing. You have to do the work. You have to walk the walk. That he did. You had to manifest strength, and firmness, and bravery, and worthy deeds that would be recognized as being real deeds by contemporaries. He constantly warns his son, who was kind of a ne'er-do-well, that, "You better work hard," or, "You better work a little harder and pay more attention to what you're doing. You'd better care about military things more than you do. This is what I'm telling you you'd better do. You'd better listen to what I'm doing." But sometimes he would drink a lot and eat a lot because he just needed to relax. Being a czar was a busy job. He got up at 4:00 in the morning. He was at his office before anybody else was. People began, like in any business, to be attentive to whether or not they were early enough. "Does he see that I am still here when it's getting dark?" Of course, it gets dark in the winter in St. Petersburg about noon. But anyway, he loved practical things. He loved firefighting, for example. He had a passion for fireworks, explosives, cannon fire. He played the drums. He loved dancing and he loved religious singing. He loved the choir music of Russian Orthodox services. He loved to play chess and he loved to play billiards. 
And, as I already said, he loved mathematical instruments and telescopes. He carried a telescope with him wherever he went. He knew how to use it and he knew what he was looking for. He loved globes. He loved to see where things were. He liked to see them map parts of Siberia where people didn't know what was there. It was rather like parts of Africa before the 1880s. You see these big blanks, because nobody had ever been there. He was interested in that. He was self-taught. You didn't have to go to some fancy school if you were a czar, or a czar in training. You had tutors, as all these folks did. He made spelling mistakes, which you can see. I don't read Russian, but he made spelling mistakes when he wrote. His handwriting was awful. Bad handwriting is the nemesis of historians, to be sure. But he built a private library, and it wasn't just full of religious books. There were practical books about fortifications, hydraulics, artillery, navigation, and shipbuilding. But he also sang religious music. He had many religious books, the kind of standard liturgical texts that were the stuff of religious enthusiasm, and more modern theological works in the Russian Orthodox tradition. Another point about this sort of ambivalence about being czar is that he often made a point of choosing his most trusted advisors from the ranks of commoners and gave them the right to become titled after a certain amount of time in the royal bureaucracy. But sometimes he had a tendency to pick people whom he liked a lot but who were totally unqualified, military commanders who weren't very good. But what he does is he cuts off Russian absolutism from this totally religious culture that represented 100 percent of the official culture, in a real sense the culture of pre-Petrian, that is pre-Peter, Russia. Russian culture was entirely religious. 
If you went to Poland, where I go often, as I've said before, if you went to Krakow, where Copernicus worked--I've been in Copernicus' workroom, and it's absolutely fantastic--in Krakow you had a major university. It was an important center of learning and of the diffusion of scientific ideas in the scientific revolution, and later of Enlightenment ideals. In Vilnius, which is the capital of Lithuania, you had a university as well. But there wasn't a university in Russia. There was no equivalent of the Royal Society that you're reading about in London that was very important in the diffusion of the scientific revolution. There's nothing like the Académie des sciences, the Academy of Sciences in Paris, which was founded in 1666. There's no legal tradition, so there's no law school. There's no medical school. There's no secular culture. Ninety percent of all the books that were published before Peter were devotional texts in the church. There was literally no word in Russian for the state, or for the monarchy, or for the government. They did not exist. The state was an abstraction, but in the person of the czar it was a reality. Now, the Boyars, in the 1660s and the 1670s, some of them began to learn Latin and Polish. Diplomacy is still in Latin until the end of the seventeenth century and then, as you know, it becomes French. So, what about his accomplishments, besides the ones I've already mentioned, to which I will return in a little bit? Muscovy had already conquered the Volga basin in the sixteenth century, where the nomadic Tatars were, T-A-T-A-R-S. This is important, because it's the black earth region of rich agriculture there. What happens in Russia is what happens in Prussia, as well, and in other parts of Eastern Europe, particularly in the Hapsburg domains--you have, to make a very bad pun, a re-serfing of the region--people forced into serf contracts. Not contracts. They become legally part of the land, literally. 
This follows the expansion of serfdom; serfdom expands along with the Russian empire. They already held Siberia and they reached the Pacific Ocean in the 1640s. So, I already said the number of square miles by which it increased; the population increases from six to sixteen million. They expand north to Archangel, so this gives them a port, but a frozen one. And, as I already said, south and southeast at the expense of the Turks. But Peter wanted a navy. It's sort of circular reasoning. If you want to have a navy, you have to have a port. And if you're going to have a port, then you have to have a navy. Why not? He first built a navy on rivers using Dutch shipmasters from Amsterdam. He once said that if he wasn't the czar of all the Russians, what he would want to be would be an English admiral. He learned Dutch--and Dutch is a very difficult language. In 1696, while the Turkish war, or one of them, went on, he went off to Western Europe incognito as an embassy soldier. Again, it was part of his gamesmanship, his pretend games. There he learned carpentry in a very serious way. He went to the Leiden Medical School, because he wanted to see how you dissected bodies. He went there, too. Then he went to London, along the Thames, the major port of the world then, along with Amsterdam. He learned shipbuilding there as well. Because the building of ships was, to him, the application of rationality, of reason, thinking, and experimentation, this got him interested in the scientific revolution. There's nothing too surprising about that; that's the essence of the scientific revolution. He may have even attended a Quaker meeting, but we're not sure about that. The problem was if you have a basically landlocked power and want to get to the sea, then you'd better have a navy. He'd learned to sail when he was young, but on the river. 
This also, by the way, gets him interested in Baroque and these kinds of Baroque masquerades that he had back in Moscow and then in St. Petersburg as well. So, in all of this, while he's building his navy and expanding Russia, he makes the Boyars--to use an expression that I've already used before in the context of absolutism--junior partners in absolutism. That phrase again. Now, there are only about 200-300 Boyar families. They own, by the way, 40,000 serfs--just these 200 to 300 families. They build huge houses in the seventeenth century in Moscow with very old-fashioned traditional Russian architecture. The Russian empire was rather like Charlemagne's empire in 800 in the coronation at Aachen or Aix-la-Chapelle and all of that. You have this bureaucracy that's--it's not really a bureaucracy, but you've got these royal officials representing the royal will, but the actual impact in this vast expanse of the Russian empire isn't that great. There's nobody telling people what to do on a day-to-day or even a month-to-month basis. Yet there's an enhanced sense of obligation to the czar of all the Russian people. There's an enhanced sense of state and of organization. That also is one of the things to put in neon. He creates committees of advisors that, in many ways, are not that different from the kinds of ministries that would evolve under western absolute rulers--in absolute states, and in non-absolute states as well. By 1708 and 1709 he has created a more European-style administration for this vast empire. He wants to build a capital city. That's where St. Petersburg comes from. It had been Swedish territory, and his victories give him this land. I once went to visit the summer palace that he created and that Nicholas II loved so much there. He constructs this new capital. What's important about the construction of St.
Petersburg--I don't know how many of you have ever been there; I haven't been there for a very, very long time--this city is not like Moscow at all. When you look at Moscow, the skyline is dominated by these old traditional churches, the influence of the Russian Orthodox Church. St. Petersburg is completely different. It is an example of classic great-power, absolutist urban planning. It has a long boulevard, the Nevski Prospect, very important in 1917. The most dominant buildings are not churches; they are state buildings. They are state structures. It's a different place. It reminds you of Madrid. It reminds you of Berlin. It reminds you of Versailles and it reminds you of post-Haussmann Paris, that is, the Paris of the 1850s and 1860s and after. It is an example of what I call the imperialism of the straight line, where you have large boulevards that you can march armies down to reviewing stands, and all of that, totally different from Moscow. The religious leaders did not like Peter, because Peter is bringing into Russian culture foreign elements. They were already suspicious of the implementation or the annexation of Baroque religious forms, architectural forms and liturgical influences from Austria and from Central Europe, and now they've got a guy who's telling the Boyars' women to dress like western European women. At a time when beards meant a great deal religiously, he's telling the men to shave off their beards. He's telling the men to wield forks and knives as well as weapons, and to adopt non-Russian customs, to bring them into Russia. There's tremendous tension with the church. But he remains--he is a true believer, but he is bringing into Russian religious culture changes that were deeply resented. Some aristocrats began to put on western-style wigs, such as you could find at the court of Versailles. Women had to wear high heels and they were tottering along and falling on the cobblestones wearing high heels and European-style dresses.
He promulgates decrees as czar about daily life. This is a big transformation. As to his son, his son was more under the sway of these traditional religious influences. He is plotting against his own father. Peter, on one of his trips, has to return to Russia. In 1716 and 1718 this came to a head. Alexis had taken his mother's side in the divorce and did not like the Latvian peasant second wife. He also didn't like military service. He was lazy. His father said, "I see you are spending more of your time in idleness than in taking care of business at this crucial time." But Alexis doesn't get the point. He begins to plot in various ways with dissident Boyars. He goes off and gets the support of the emperor of Austria to wage a war against his own father. Terrible idea. When he returns, his father orders him tortured. Under torture, Alexis--who probably dies of cold, not of torture, in a very frozen cell--named Boyar accomplices. These people are toast, obviously. The son probably died of TB, but it related to all this other business in his weakened state. That was the end of the son. But what lasted longer than Alexis was the Europeanization of Russian culture. Peter the Great has books translated from the west, including John Locke, into Russian. This, itself, was a remarkable accomplishment. After all, the Russian Orthodox churchmen had not been interested in the Renaissance at all, not interested in the scientific revolution at all; and, by 1710, Russian students are being sent abroad to foreign universities, particularly in Italy, but also in France and in England. They're studying practical things like marble work, and metal work, and copper work, and not just shipbuilding. They're also studying the life of the mind.
In a way, it's possible to argue, which is what I'm arguing and I'm not the first to do it, but Peter the Great was, in many ways, himself a child of European rationalism, of a scientific culture of rationality and of, at least in the earlier stages, the Enlightenment. He was not against the church, but he thought that people were wasting time being monks, and other people were all over the place in their Russian Orthodox equivalent. He believed what one wag once said of monks, "I sleep, I eat, I digest"--and they prayed, of course. To him this was useless, because it didn't serve the state. It didn't serve the interests of the dynasty, which he identified with the Russian people. He did not ever imagine the abandonment of the table of ranks, which set everybody in a hierarchy, not for a minute--we're talking about the end of the seventeenth and the eighteenth century. But he believed that it was important to take the tools of science, to take the tools of rational thought and apply them to the good of the state, even if you saw that good of the state as ships that could lob cannonballs even further against hostile ships, and that kind of thing. But he founds the first Russian museum, the first school of navigation, the first school of this and that. There are 100 times more books, pamphlets, prints, and maps produced in Russia in the time of Peter the Great than there had been in the whole previous century. Peter, as Lindsey Hughes has argued, was highly suspicious of any alternative to state service, especially the monastic way of life, as I've already said. But he thought that service should be channeled through state obligations, be they taxes or labor duties. At the same time, he's equally suspicious of the godless. So, he remained very Russian, but it was the importation of more western ways of looking at things that was very important.
He wrote once that, "The chief thing is to know your duties and our edicts by heart and not put off things until tomorrow," like his son did. "For how can a state government exist if edicts are not put into use?" et cetera, et cetera. Their lives should be better. The concept of the state was fundamentally new to Russia, but gradually came into existence, and his accomplishments had a lot to do with that. He wrote his son in 1704, he said, "I may die tomorrow, but be sure that you have little pleasure if you fail to follow my example. You must love everything that contributes to the glory and honor of the fatherland. You must love loyal advisors and servants, whether they be foreigners or our own people, and spare no effort to serve the common good." The common good comes right out of enlightened thought. It comes out of Locke and those folks. "If my advice is lost in the wind and you do not do as I wish, I do not recognize you as my son." That, in the end, is what came in the long run. He remained a fanatically Russian patriot, the father of his people. His admiration for foreign things and approaches was tempered by, as Hughes argues, his devotion to Russia, which he oversaw. The common good became a real concept and one that, unfortunately, some of his successors didn't take terribly seriously. In conclusion, he defiantly, deliberately, and effectively broke with tradition. In doing so he made himself sort of an outsider to traditional Russian ways of looking at things. This ambivalence that was part of his personal life, the way he lived, would be a constant theme in subsequent Russian history and still, in many ways, is today--between Slavophiles and westernizers, those who look inside Russia to find what they think to be eternal truths, and those who want to temper such looks with a look to the west. That, in neon, is what Peter the Great did above all and for which he shall most be remembered.
Next time, on to the Enlightenment, and some people who at first glance might not seem terribly enlightened. See you on Wednesday.
European Civilization, 1648-1945, with John Merriman
Lecture 18: Sites of Memory, Sites of Mourning (Guest Lecture by Jay Winter)
Prof: What I'd like to do today is to talk to you about what it is that distinguishes European ideas about the shared history of the last century from American ones--what makes Europe European, and what makes its sense of history different from ours. I think the primary difference between Europe and the United States will be seen in about six days, on the 11th of November, when Armistice Day is commemorated all over Europe. In fact, it's now commemorated in Eastern Europe as well as in Western Europe, since it took the fall of the Soviet Union to remind people that two million Russian soldiers died in the First World War, and that the Eastern Front was the place where the German army won the war and where the Russian Revolution came directly out of it. So, the First World War is what made Europe in the twentieth century European. And the war created a series of wounds that, to a degree, have never healed. The primary reason for that is the bloodshed, the staggering casualties of a degree and magnitude that no one had ever seen before. When we talk about losses on the scale of the First World War, we enter a surreal terrain. I have great difficulty getting my mind around figures of one million casualties for the Battle of Verdun, in 1916, or just about the same number for the Battle of the Somme. The Battle of Verdun, between February and November 1916, was the longest battle in history. It was ten months without a break. There was nothing like it in the Second World War. It pushed soldiers, human beings, beyond the limits of human endurance. The primary way in which this wound has been remembered is in terms of an array of commemorative practices which describe what European identity is, not only was, but is. I want to suggest that there are many reasons why the remembrance of the First World War is carried on throughout the twentieth century in a defining way. The first is technology.
It's an accident that the First World War happened at the very moment that the film industry became the centerpiece of mass entertainment. Hence, this was the very first filmic war. It was filmic in a fictional way. That is to say, the technology of the day provided motion picture cameras for all major armies, and indeed they were used in all kinds of ways. The problem was they never filmed battle, or almost never filmed battle. There are one or two exceptions. But the important point is that generals and their staffs didn't want cameras on the battlefield, partly because it might produce evidence that would be useful to the other side. The other part of it is that the film might get back home. If families got to see it, then what would happen? There's a famous story of the fictional film representation of war, which is one reason why I think it is so iconic as a descriptive element in European consciousness about the past. In 1916, in the middle of the Battle of the Somme--which was a six-month quagmire started by the British army on the 1st of July 1916 and ending roughly in November 1916, for no gain whatsoever and a million casualties--in the middle of it, the British propaganda office--it wasn't a ministry until 1918; it was all done more or less informally until then--decided to make a film to buck up public morale. What they did was they filmed the Battle of the Somme while it was being fought. But they didn't film the fighting. They filmed mock episodes where soldiers in training would go over the top in a totally fictional representation of war. The problem was that the people who saw this didn't know that it was phony. When the film was shown in the middle of 1916, in August-September 1916--the battle started on the 1st of July--it was shown all over Britain. Twenty million people saw it. That is half of the population of the country.
There has never been a film that was seen by half of the population of any country in the world before that date or since. It broke all box office records. What it showed in silence was the preparation for the battle, the huge artillery barrage, and then men going over the top. Because this footage was phony, there were men who went over the top, stopped for a moment and then slid right down again, which caused women in the theaters who saw that to faint. They didn't know that this was simply nonsense, that it was fiction. I think the critical point to bear in mind, therefore, is that as a filmic war, the war turned into myth at the very moment that it was being fought. Nobody had ever seen the landscape of the dark side of the moon that was created by industrialized war between 1914 and 1918. The way it was represented by film was completely fictional. Film comes straight out of theater. It has a proscenium arch and it has a vanishing point. Anyone who's ever been anywhere near a battle will realize that battles don't have vanishing points. People vanish in them, but they go in every conceivable direction. The representation of war became a matter of myth right in the middle of the war itself. It became a battle of myth to be remembered. I'll give you another example which makes the point really powerfully, very, very powerfully. In February 1916 the German army decided to push through French lines at Verdun. This big ten-month battle, which is the biggest of all time, took place. In the course of it a series of completely made-up stories turned into legend. One is called the Trench of the Bayonets. In fact, there were no trenches in the Battle of Verdun. There were isolated pockets of men in big underground forts. There was simply artillery barrage going on day and night for ten months. Little pockets of men would be caught in one part of the battle, and they stayed put to make sure that the Germans would not get through. The French line was ils ne passeront pas.
They won't get through and they didn't. One group of such men were almost certainly buried by a landslide. The weight of artillery barrage in the mud would mean the earth would move when, indeed, the artillery barrage hit a particularly wet part of the front. So, a group of men were buried alive, which was a very common occurrence in the course of the First World War. The German group of soldiers, the platoon that took it, put bayonets basically sticking up out of the ground to indicate to the Frenchmen where to find the dead, so that they could be buried during a lull in the fighting. The French didn't interpret it that way. What they said was, "Here are fifteen French men who stood with their bayonets there until they were buried alive and they didn't move an inch"--ils ne passeront pas. This is a completely made up story. But it became a sacred site commemorated every 22nd of February--1917, 1918, 1919. In other words, the war itself created a mythic set of representations of war that have come up to the present. The Great War created myth in other ways. Another one came from the landing in Gallipoli. Gallipoli was a Turkish peninsula south of Istanbul, Constantinople then. It's about a four-hour taxi drive in lousy traffic. It probably took longer then. The idea of the Allies was to knock Turkey out of the war, help Russia and possibly encircle Germany by not attacking directly through the western front, but coming around through Asia Minor. This landing was a catastrophic failure. It was the brainchild of Winston Churchill who, until 1940 when Hitler made him the great man that we all remember, was a complete failure, politically and in military affairs. Gallipoli was his idea, and he shared, in the form of what is now called Orientalism, a complete underestimation of the capacity of Muslim populations, Asian populations, brown people, to fight against Europeans. So, nobody had a look at the ground where the Allies were supposed to land at Gallipoli.
They didn't actually take account of the fact that there were very big cliffs to climb. When they got there, they just reproduced trench warfare that had already existed. It was a complete failure. The landing, though, took place on the night of the 25th of April 1915, and the people used for it were Australian and New Zealand troops, alongside British and French ones. That landing was the birth of the Australian nation. To this day, Anzac Day--the Australian and New Zealand Army Corps--Anzac Day is sacred. It's the 4th of July in Australia. It's the moment of celebration, through the shedding of blood, the winning of national pride. The point I'm trying to make initially is that remembering the First World War is remembering sacred themes that define nations. The oddity of the First World War is that these nations were defined first of all because they're a part of imperial powers, but this war was at one and the same time the apogee of empire and the beginning of the end of empire. Hence, nations that affirmed their loyalty to Britain by dying on the beaches of Gallipoli, or in the hills of Gallipoli, earned the right to break away from Britain. This sacred moment is how the Great War turned into myth. If you think this is light, you're mistaken. This is big-time politics to this day. Yesterday, in the Sydney Morning Herald, two Australian politicians virtually came to blows about how to remember Gallipoli, because it is at the core of the idea of what the nation has to be. The first point I want to make is remembering the First World War is remembering a series of myths. They're iconic in the sense that they describe not just what happened at a particular moment, but they describe what the rest of the twentieth century might become and did become. And that is the second point I think I'd like to draw to your attention.
What makes remembering the First World War so important is that it became the way in which war was configured throughout the twentieth century in Europe. In many respects, this is a defining difference between the United States and Europe. The Second World War in this country is quite different from the First. The United States didn't really suffer the injuries of any major European country in the First World War. One hundred thousand American soldiers died in the First World War. Perhaps 40,000 of them--and there's a dispute on this--died from the Spanish Flu, the worst influenza epidemic in history. It hit everybody. It hit civilians. It hit soldiers. But it particularly preyed, as many mutant viruses do, on young adults. So, it got soldiers. Well, 100,000 dead was roughly, just roughly, what the British army suffered in three weeks at the Battle of the Somme, in one battle. The scale of casualties in the First World War is what makes it everybody's business. The second reason why remembering the First World War is iconic is that it is universal in Europe. It's family history. Let me give you an example of what that means. If you ever visit the graveyards at the scene of the landings at Normandy, with the extraordinary power of their individual graves, you will find that there are graves of 3,000 American soldiers who died in the course of the first day of the landing on the 6th of June 1944. That landing on the 6th of June 1944 was terrible. It was an extraordinary day. It was a day that should be remembered and is remembered. If any of you see the Steven Spielberg film Saving Private Ryan, you'll see it. It is iconic. It should be. The first day of the Battle of the Somme, on the 1st of July 1916, saw 20,000 British men killed. The first day of the landing at Normandy, 3,000 Americans killed.
The landing at Normandy, compared to the Battle of the Somme, shows us that the iconic battle for Britain of the First World War was six times as murderous in one day as the landings at Normandy. That sheer scale of casualties means that remembering the Great War means remembering loss of life that became universal throughout families, throughout Europe. This is extraordinary in many respects. Until 1914, war was not democratic. Military service was not democratic. Either it was aristocratic and rural, in terms of the officer corps--that's why the cavalry mattered so much; they came from the land. Or it was more or less the men whom Wellington, the general who defeated Napoleon, called "the scum of the earth": either the unemployable overpopulation of major cities, or indeed the unemployable populations of rural life in Europe, as well. Now, what happens in 1914 is conscription; universal conscription antedating the war produced armies of a size that had never before been pulled together. These armies suffered casualties of roughly one out of eight killed and one out of three wounded. We're talking about seventy million men in uniform in the First World War, nine million men killed, roughly twenty-five million men wounded. One out of every two men who served in the First World War was a casualty. There were eight million prisoners of war. In those camps illness was likely to kill you more than anything else. The critical thing to bear in mind, therefore, is that the First World War created an astonishing and unprecedented challenge of commemoration. The first challenge was the missing. I want to take you through the commemorative forms that the First World War created, which created cultural practices that are still very important today. Anybody going to England today--and I mean today, I was there last weekend in Oxford--will see everybody wearing a little red poppy in their lapel.
This is what you buy for a couple of pennies, or whatever you want to give as a contribution to a charity called The Royal British Legion, the biggest charity in Britain. It is still to this day the biggest charity for those families, and indeed survivors, and successive generations of those who served their country and who were wounded or died in it. The critical thing to bear in mind about this is that the mythic representation of war, which came out of film, has been matched by what I would call a family representation of war that comes through cultural practices of remembrance. We should never ever get away from the fact that remembering is a business. People make money out of it. That's why films sell so well. The History Channel is dominated by stories about war. It's an important thing to bear in mind that people make money out of representing war. But that's too cynical to suffice for a discussion of how the First World War was remembered. It was remembered and still is remembered within families. The answer to "Why is that?" is the universalization of bereavement. What's the problem? The problems are threefold. The first is the missing. The second is, in some sense, the irrelevance of conventional religious practices. The third is the search for some kind of collective statement of why these men died. For what? What price, victory? The missing. Half of those men who died in the First World War, and we're talking about nine million men, have no known graves. Not a trace of them exists. This, by the way, is exactly the same proportion of those who were killed at Ground Zero on 9-11. Half of them have vanished completely. Traces matter a great deal to the families of the dead, the survivors who need something to remember, to mourn. The fact that roughly four million men died without a trace made commemorating war very, very difficult.
Conventional religious practices require a site, a grave, a place to go to where individuals can honor those who die and take their lives up once again, let the loss go. What possible ways do they have to handle this? During the war, nothing. Because the confusion over casualties of war, which always happens in wartime, was overwhelming. If a family got a message saying, "Your husband," "Your brother," "Your son," "Your fiancé is missing in action," it could mean anything. It could mean that the individual was in a prison camp on the other side of the line. It could mean that the individual was in a hospital. It could mean that there was a confusion of identity and that the person was still alive, but somebody else found his dog tag. It could mean that the person had been blown to pieces and there was nothing that remained of him. None of that could be sorted out until the end of the war, and even then it couldn't be sorted out. The loss of knowledge, the lack of knowledge about the most fundamental question of war is the most poignant origin of a series of commemorative practices that followed it. Given the scale of the losses, the conventional churches were not able to handle the problem of helping the bereaved or those who were, as it were, in no man's land, in Purgatory. In fact, Purgatory is an interesting idea. If you think about it, it's central to a certain kind of popular Catholicism in the nineteenth century. Purgatory means that people who are on the way to heaven have to wait a while, and maybe, maybe just the good works that you and I might do will help them get there sooner, rather than later. It's a medieval idea. The Catholic Church had to jettison the idea of Purgatory, which died in the First World War, because no one wanted to put up with the idea that an individual who died for his country had to wait for 100 years in Purgatory in order to be able to get to heaven. Religious practices had to change to handle the unprecedented losses of war.
Those who went to the churches for solace found very little, because there was very little the churchmen could do, could say. Why did I lose three sons? What did they die for? The phrases, the noble phrases of patriotism last only so long, and most of the time don't get you through the night. Well, what did individuals do? The first thing they did was to move to the pagan perimeters of Christianity. John Merriman referred you to a wonderful film by Abel Gance about the return of the dead called J'Accuse, I Accuse, which is accusing war, accusing the sun of not stopping war. It's accusing everybody of this insanity that had no end to it. Well, the pagan perimeters of Christianity are the areas where the occult lives. It's the areas where people believe, faute de mieux, because they have no choice, in extrasensory perception. This is a period not just of the film industry, but of the emergence of radio. It's a world where telegraphy was a quite normal means of communication, with underground cables; Reuters dispatched stories all over the world. For millions of families the idea of getting in touch with the missing or the dead seemed quite appealing. These are not fools. These are not people who are, as it were, bought by the Elmer Gantrys of this world. These are ordinary people or very intelligent people who are prepared to suspend disbelief about extrasensory perception in order to be able to find some solace, some way of understanding the world in which they live. There was an extraordinary efflorescence, a development of spiritualism, of séances. One of the great carriers of this message was Arthur Conan Doyle, who was the author of the great, the ultimate rationalist, Sherlock Holmes. When his son died and was missing, completely missing, he became one of the great figures in the development of the spiritualist movement. Churches have no part in that because speaking to the dead has no mediation. Christianity couldn't be interested in this.
Jewish religion has no time for it. Islam has no time for it whatsoever. But it just shows you that the scale of the catastrophe of the human loss of the First World War challenged conventional institutions and frameworks for understanding what was happening. If séances are a kind of collective remembrance, they created, as it were, the precedent for the ones that have left their most powerful marks, not only on Europe but beyond it, as well. These are war memorials. The need to create a substitute tomb, a substitute place in front of which to mourn, is what creates the extraordinary vogue of war memorials. You don't have to go very far to see them. All you have to do is go through Commons, and you'll see two war memorials that were created at the same moment, right after the First World War. When you go through Commons and see on the walls the names of Civil War vets who were Yale men, you should recognize that that was completed in the 1920s, at the same time as the façade, a war memorial in front of Commons with the names of the battles that American soldiers fought in. In front of that is a cenotaph. The cenotaph, an empty tomb, says that these men died for liberty, and so on. It is an empty tomb. This is the critical point to bear in mind. The enormous development of commemorative forms, in particular sculptured, architectural war memorials in the twentieth century, comes from the First World War. Anyone who goes--and I think it's marvelous that we can talk about this in this particular room. Anyone who goes to see Maya Lin's Vietnam Veterans Memorial will see an outcome of a lecture on First World War commemoration that took place in this room, where Maya Lin was a student. She studied First World War memorials in order to create the Vietnam Veterans Memorial. Why? If you go to the Vietnam Veterans Memorial you'll understand the genius of First World War commemoration. The only thing that matters are the names. The names are what matters.
The highly-polished granite surface of the Vietnam Veterans Memorial has your own reflection forced back upon you; to touch the names is the way to find a means, inadequate perhaps, symbolic perhaps, to bring the dead back home, to bring them to the center of American history in the middle of the Mall at the intersection between the Lincoln Memorial and the Washington Monument. It is an extraordinary gathering together of the bones, of the remains of the dead who were buried in Europe or never found. Why does this matter? It matters because the universalization of mourning, of bereavement in the First World War meant that these war memorials are all over Europe. They're everywhere. There are 38,000 of them in England. Every village has one. Every commune, I think, bar twelve in all of France, has one. There are 30,000 of them in France alone. These war memorials are extraordinary in many respects. I want to tell you about them today. They are places where, next Tuesday, on the 11th of November, there will be ceremonies. It's a public holiday in France. The mayor of the town will be at the head of a procession--this is choreographed all over the country--in which there will be 100 school children who will march in the rain and the sleet, it doesn't matter, to the local war memorial. What happens then is that the mayor reads out the names of those from a small village or from a town who died in the First World War. The children in the school, after the name "Cohen, Albert," will say "présent," will answer for the men who aren't there. This bonding between the living and the dead, the bringing back of the dead to their own villages, to their homes, was a substitute burial ceremony for the ones that could never take place. How did it all happen? The first point that has to be made is that the commemorative wave took place through political leadership. Politics means many things.
The first thing it has to mean is that there is a fundamental difference between the way in which men are remembered in the winners and in the losers. In the case of Germany, where there were two million soldiers who died in the First World War, this is an enormously difficult problem. The reason is that you not only need to remember the dead, but you have to find a way and a form to answer an eternal question. The question is: how is it possible to glorify those who die in war without glorifying war itself? The extraordinary wave of commemorative activity, the cultural practices of commemoration that were universal in Europe in the 1920s and 1930s have many different answers to it. Most of the time what happens is that politics became local, that small groups of people in small towns and villages took it upon themselves to answer the question: What will we do? How will we remember the men of our village? Given the numbers, we're talking about three, four brothers in agrarian towns. We're talking about fathers and sons who never came back. We're talking about the absolutely personal, face-to-face culture of village life. Everyone knew the names. Everybody knew the families. What this means is that it may be the case that high politics set out certain lines--the cabinets, the politicians, the generals. But what's extraordinary about Europe between the wars is how democratic commemoration was, and how much life there was in civil society in order to create forms that were separate. That's why I mentioned the poppy fund. This is a private organization. It's not a public charity. It's not the state. It's civil society speaking its compassionate language of remembering not only the fallen, but those left behind, the widows, the orphans, and so on. I'll give you an example of how civil society and state power differ and vary. 
On the 14th of July 1919, just two weeks after the signing of the peace treaty on the 28th of June 1919, when the Germans were forced to accept the terms of the Treaty of Versailles, there was a victory parade in Paris. That victory parade had a march past the Champs-Elysées, through the Arc de Triomphe. It's only happened twice in history and this was one of them, to celebrate the victory. The French were there. The Americans were there. The Brits were there. The Italians were there. All the Allies were there. There are two things that happened. One was that Georges Clemenceau, the French prime minister, decided in this spectacle, "We need a symbol of the lost generation." So, he had a papier-mâché catafalque built, a very big ornate plinth. On top of it was a cenotaph, an empty tomb, to symbolize the tombs of all those soldiers who died in the war, half of whom have no known graves. To start the victory parade, Clemenceau insisted that the people who lead the way are the most badly mutilated men of the war, the gueules cassées, men with broken faces, the men without arms, the men without legs. The use of this vanguard of the suffering transformed a victory parade into a day of mourning. This was extraordinary. It was absolutely extraordinary. The Brits decided if there's going to be something on the 14th of July, we'd better do something, too. It'll take five days for us to get everybody back over. On the 19th of July we need a victory parade, too. Three-quarters of a million British men died in the war, another 250,000 from the empire and so on, dominions. A million men from the British forces died in the First World War. We need a victory parade. So, they asked the architect, Edwin Lutyens, to put together another papier-mâché memorial called The Cenotaph, an empty tomb. 
They put it right in the middle of Whitehall, official London, right next to 10 Downing Street, next to Buckingham Palace, basically a small stroll down to the houses of parliament, right in the middle of official London. They had their parade. But that wasn't the end of it. Two million people came to it, and they all deposited whatever they had to offer to the dead of the Great War, because this was an empty tomb. It wasn't the empty tomb of Christ. It was a Greek form. This drove the churchmen leading the Church of England apoplectic. It meant that the language of commemoration was ecumenical and not Christian. Why should that be? Lutyens was the man who designed New Delhi. He was the architect of empire. He wanted a memorial that would suffice for Hindu soldiers who had died, Muslim soldiers, Jewish soldiers, Anglican, Catholic, Irish, whatever, people of no belief at all, and he found it. He found the simplest possible way. As a result of this extraordinary outpouring of feeling, literally flowers they kept on having to shovel away because there were so many things left. Understandably. These are families who finally found a way to express perhaps a form of symbolic exchange. It happens in the Vietnam Wall, too. People leave things. Why? These people whose names are on the wall, those people who died, represented by the cenotaph have given everything. I need to give something. Pilgrimage is hard. It's not tourism. It should be difficult. You should give, not just get. The critical thing there is that clearly the British people voted with their feet for the national war memorial. So, the cabinet said, "Lutyens, could you do it again, this time in stone?" He did. He did it again in stone. A year later when the Unknown Soldier was buried in Westminster Abbey, where did people go? They went and paid their respects. You can still do it today. The Abbey is the home of kings and poets. 
No, the people's monument is The Cenotaph in Whitehall, not the church of the kings, but it's the sacred space of the people. It remains so to this day. Now, that man, Edwin Lutyens, designed another set of war memorials that lead us directly to Maya Lin. Thiepval, T-H-I-E-P-V-A-L is a small village that no longer exists in the Somme, in northern France. There he was asked, eighteen years later, to do a memorial for the 73,000 British soldiers who died in that one battle, and have no known graves. What he created was an extraordinary arc, an Arc of Triumph that basically has small arches on top of it and then nothing. He reduced the Arc of Triumph to nothingness. The only thing you do is when you walk up to it, my eyesight is dreadful, but younger people do it too. It just depends. When you get close--for me it's very close, for other people it's further away--you all of a sudden see that the walls of this arch that he built in Thiepval are completely covered with names. There's a vanishing point where you suddenly see them. From a distance you can't see it. It just looks like a façade. There are the names. It's that which Maya Lin heard about in this lecture hall, when Vincent Scully talked about Lutyens and commemoration, that inspired her to create the Vietnam Veterans Memorial. She's told me what it felt like to sit in this room and do it. She actually submitted her design for that memorial as the design that was ultimately the winning design in the competition. It was anonymous. By the way, she got a B for it in her class. I'll leave that aside. "'Judgment is mine,' sayeth the Lord." All I can do is to tell you that the forms that were created in the cenotaph were ones that have endured throughout the twentieth century to describe how war is remembered. Now, The Cenotaph, as I say, is pre-Christian. It's one more move away from the institutionalization of religion. It's not that the sacred died in twentieth-century Europe; it moved out of the churches. 
It can be found elsewhere. One of the places where it will be found next Tuesday is in front of war memorials that were placed all over villages, towns, marketplaces, all over Europe. Let me return to that process. The first I said is political. Small groups of people, the busybodies, the committeemen that always exist in small towns with nothing better to do, retired men or men of leisure, sometimes women. What they did was, "We want to design this ourselves." The first thing you have to figure out is: How much does it cost? Hence, we should always recognize that commemoration is a business. The cost factor actually matters substantially. The reason why it matters substantially is that if you want something sculptural, if you want something like a piece of architecture, the cheapest possible form of stone is an obelisk. You don't have to do much. You just hack it here or there and that's it. It has a great advantage, which is that it's Egyptian; it's pre-Christian again. It doesn't require you to distinguish between Protestant, Catholic, Jews, or anybody else. It's an ecumenical form and it's the most popular one. The second problem is that in France, in particular--Germany has its own headaches--but in France, in particular, church and state had been separated in 1905, and rather violently separated. No crosses, except in some Catholic areas where they said, "I don't really care. We're going to have a cross no matter what," which is true in Catholic Brittany, in the northwest of the country. Most of the time there are not crosses. What they show primarily are two kinds of representations. The first is of a Gallic rooster, which again could be bought through a mail order catalog, or a soldier, the poilu. The British liked to see their soldiers shaved. For the French, the idea of a soldier should be somebody who's a hairy one, a poilu, somebody who never shaved. Having a beard is being masculine, being a tough guy, being a soldier who won the war. 
A poilu could be bought, again, on a mail order catalog. Overwhelmingly, and this is a very important point, overwhelmingly, the images are not triumphal. They are mournful. Again, these were decided by small groups of people who put together money in order to describe the ways in which war memorials should be organized, should be designed, and, indeed, should be paid for. They were paid for by popular subscription overwhelmingly, pennies, sous, francs, deutschmarks, whatever, whatever you had. That's the way it was done. What about the inscriptions? Once more I want to reinforce the point that I made earlier about the democratic nature of loss. Ninety-five percent of war memorials list people either alphabetically or by the year or the time in which they died, the sequence of their death. Only five percent of all war memorials in Europe that I've ever seen, and I've tried to collect material all over the place on this, have men listed by rank. There is a democracy of death and of commemoration in highly inegalitarian societies. It is something extraordinary that goes on when loss is so general it becomes apparent that it isn't possible to separate those who died in uniform, in high rank, from those who died as private soldiers. One important point to bear in mind is that once the choice of place was made, and the choice of form was made, and the money was gathered together and paid to the artist or the sculptor who would do this, then we come to the third part of the commemorative process. The first is political. By that I mean small politics more than big politics. The second is business, the money, the commissioning, the putting together of the project. The third is the ritual. What do people do when they stand in front of a war memorial? The answer is very different things. The first thing that happens in the front of war memorials, and it still happens, is that women enter the narrative of war. Women are at the center of the commemorative practice. 
They are not at the center of the narratives of war from the battlefield or, indeed, from the military perspective itself. There are those who believe that, indeed, the gendering of the narratives of war separates the stories told by soldiers in novels and memoirs from those of the societies for which they fought and for which they died. I'm not sure if that is true or not. But what we can say, and there are thousands of photographs that show it, is that the ritual that happens in front of memorials are rituals of families. In conventional terms, by that I mean historically overdetermined ways, women have been associated with mourning practices since the Egyptians. There are tombs in the Valley of the Queens in Luxor that show professional mourners, women who have tears painted on their cheeks, from the time of the pharaohs. Whether that is true or not, the notion of Mater Dolorosa, Stabat Mater, "His mother was there," is a Catholic trope of great power and importance in understanding how societies configure loss of life in war. So, the first point is that women and children, families are there. The second is there is a didactic function. School children come there. This, I think, is a very important point to bear in mind. Overwhelmingly, and this is true in Germany until the 1930s, Italy it's true until the late 1920s, and it's true all over Britain and France, and certainly in the dominions. The rituals have a by-word that dominates the message. It is "never again," the phrase we frequently associate with the Holocaust, with the war against genocide. Yes, that's true, but the phrase "never again" comes out of the First World War. It's what dominates the commemorative practices of the inter-war period. This is the war to end all wars. This is the war that makes war impossible. This is a war so dreadful that it is not at all the purpose of those who go to commemorative forms to prepare the next generation for their turn. On the contrary. 
The notion of commemoration in inter-war Europe is "never again." That explains why the commemorative power of the period around the First World War is not repeated after the Second. In France you can see this anywhere. In Britain it's there, too. In Germany there are more difficult reasons, obviously, to handle this. In Eastern Europe, where the massacres were so gigantic, it's almost impossible. What happens in Western Europe is that the names of those who die in the Second World War are tacked on to First World War memorials. Part of the reason is financial. If the First World War impoverished Europe, the Second World War bankrupted it. Without the Marshall Plan, who knows? The important point is that there's another reason. How many times can you say, "never again"? If the idea was that these men died to make war impossible, in other words, their sacrifices were such as to eliminate the need for their children to go to war again, then what do you do in 1939? This is true in Germany, too, where the 1st of September, 1939, the outbreak of the Second World War was not greeted by marching bands, and parades, and so on. It was a day of sadness in Germany as it was elsewhere, because everybody knew, and they knew the costs; the Great War had told them what war is. The conclusion I want to draw on is this. Remembering the First World War has taken many different forms. I've dealt with the filmic mystification of it. There's a big business in novels, in memoirs, in an area of what we might call factoids that are half fictional and half true. Robert Graves' Goodbye to All That is still in print eighty years after he wrote it. There are many such novels, All Quiet on the Western Front, that are enormous bestsellers. We should accept the fact that the media matters. 
I think the critical point to bear in mind is that the casualties of the First World War were so devastating that even the losses of the Second World War didn't change the landscape of remembrance that was constructed between 1918 and 1939. War means something in Europe that it doesn't mean in this country. The reasons can be found in all of these commemorative practices. It is clear to me that political culture follows history, follows the understandings people develop of the world in which they live. Europeans see war differently from Americans. That doesn't stop there being militaristic groups, and those like the Nazis who wanted to "get it right this time around," and reverse the verdict of 1918 under the Treaty of Versailles. But there's no doubt in my mind that the First World War message of "never again" survived the Nazis, survived Stalin, to create a different kind of Europe in which armies don't matter anymore. They're there. But in the question that a great historian, James Sheehan from Stanford, who just wrote a history of the twentieth century, put in his title, we have, I think, the final legacy of the commemoration of the First World War. His book is entitled Where Have All the Soldiers Gone? In late-twentieth-century Europe, states are defined in terms of the way in which they defend the wellbeing of their populations. No longer are states defined in terms of the military force that they can deploy in defense of their national interests or their imperial power. The First World War put, as it were, the beginnings--hammered in the nails in the coffin of the old vision that the state is that institution which has the monopoly on the legitimate use of physical force. The story of warfare killed the old idea of state sovereignty. It wasn't dead before the Nazis made it necessary for us to develop something different. 
But it is the remembrance of the First World War which left traces in families, which are the most powerful reasons why the First World War has become and remains the iconic disaster that has created a Europe that no one had ever seen before, and that was vastly different, in the minds of ordinary people, than the Europe that existed in 1914. Thank you very much.
European Civilization 1648-1945, with John Merriman
Lecture 21: Stalinism
Prof: Today I want to talk about Stalinism, and, in doing so--fifteen years ago, lots of what we now know about the Soviet Union through its entire history we didn't know, because the archives weren't open. When the Soviet Union collapsed, fell apart, disintegrated in the early 1990s, gradually lots of the archives were opened. What I have to say today--I don't work on the Soviet Union--draws upon the work of Peter Holquist. He used to teach at Cornell and now teaches at Princeton, and more recent work on Stalinism by Sheila Fitzpatrick. Let me just lay out the overview at the beginning. I sent around this morning--I didn't get home until real late last night, so I sent along at about 12:30 in the morning various terms, but I forgot a couple--democratic centralism, right opposition, and Bukharin--but this stuff is all in the book. I hope you're able to get that. I'll do the same for next time. The question remains: did Lenin inevitably lead to Stalin? That's a hard one. I guess basically the structure of what morphed into the Soviet regime was set, the way that the Bolshevik party operated, even before taking power. The fundamental concept was democratic centralism. That is the way that the Soviet state became organized as a top-down way of making decisions and sending out relevant communication. In principle, it was supposed to involve debate at the highest levels. Then, once decisions were taken, then they were communicated through the Communist Party. But, of course, the sheer paranoia of Stalin--he was, as you know, a clinical paranoid. His paranoia led to the deaths of millions of people. Debate itself became, as a concept under Stalinism, identified with anti-Soviet behavior. 
Essentially, what had begun as a popular revolution on behalf of working people--also, to an extent, on behalf of nationalities--became the dictatorship not of the proletariat, not of the proletariat but of the Communist Party, of the Bolshevik Party transformed into the Communist Party and the dictatorship of Stalin. Of course, as his paranoia increased the purges followed. The big show trials, some of which where people, Western communists went in, and sat and listened to hear people confess to having been in cahoots with Nazis, or with English royalists, or whomever, and confess to things that they certainly had never done not long before their execution. One of the points to be made today is, following Peter Holquist, that the structure of the Stalinist terror--there were antecedents that one could see in the civil war, and in the period of Lenin's domination in the early years of the Soviet Union. But very early on, the people who imagined that nationalities would have autonomy, those hopes were destroyed quite quickly. Stalin, he had been minister, or whatever they called the Commissar of Nationalities. The idea that workers' self-management, self-control, control of the means of production, would be implemented in this new brave world was shattered rather quickly with strikes and protests by workers, smashed by the police, by the Soviet state. The illusion would be perpetuated in the 1920s, and even in the 1930s, that this was a true workers' paradise, and that everything was groovy. But the talk was always about the "radiant future." Radiant was a word that they used a lot. I'll talk more about this in a while. The future would be radiant. It would be glorious. But the sacrifices had to be made now, and they had to be made now to protect the revolution against the Americans, against the French, and against the British, and the powers of capitalism, etc., etc. Of course, as you all know, it never came to be. That's the tragedy of the Russian Revolution. 
Well, more about this in a while. First of all, as backdrop to all of this, because of the civil war, and because of famine conditions, and because of just enormous economic hardship, trying to have a state--it was not clear how this could possibly be done. You've got a country with all these nationalities, which is basically still essentially a country of peasants, despite the industrial work in the Ural mountains, and in the mines, and in the town of Petrograd that became Leningrad, obviously, later. How are you going to know how to do this? The war itself, as in all of the countries that participated in World War I, had done enormous ravages. The Germans are fighting inside Russia, and the subsequent civil war that decimated large parts of the countryside left the country barely able to function. In 1921 and 1922, this is in what you're reading, more than seven million people died of starvation and sickness during that period. Remember, the wars go on. The war against Poland goes on until finally the situation is resolved with the Treaty of Riga in 1921, which I don't think is in your book. Lenin implemented the New Economic Policy. As he and the other Soviet leaders grappled with a country in virtual collapse, he recognized that the ideology of communism, which called for the abolition of private property, private ownership, and the destruction of the free market, would have to be sacrificed for the future. You simply had to be able to feed people. You had to have peasants not hoarding what they produced, or waiting for higher prices. You needed to have a free market system, at least for a while, because of the fact that people resisted what was called war communism, which was sort of a shock communist therapy that had proved not to really work at all. So, the New Economic Policy is promulgated in March of 1921, and the features of it you can read about. 
Basically, the state maintains its centralized control over the economy, and there's still centralized planning, but the NEP allowed peasants to use their land as if it was their own, and largely, in most cases it still was, and allowed them to market their products, and sell them at market prices in order to get food to people who needed it. Otherwise, they too would have died. The state maintained its control over heavy industry. But this is all going to be just a small retreat along the road to socialism. It succeeded. It succeeded. It works. Gradually the production of food reached prewar levels, and small-scale industrial production revives. Another name I should have written on the board--I was so tired when I did all this last night--is Kulaks. Two groups of people who profited during this period, these two groups of people would, in the long memory of the communist leadership, get theirs during the Five Year Plan. The first are those people who, for example, were small merchants--not in size, but they sold products on the free market and did very, very well during this period of the New Economic Policy. They became known as NEP men. NEP is the New Economic Policy, and the NEP men were people who did well during this period. The other, and it's a term, as we'll see in a while, that became a term of denigration, and indeed could lead you straight to the gulag if you were lucky and weren't executed before that, were Kulaks. Kulaks basically were prosperous peasants. They were well-off peasants. During the period of the New Economic Policy the Kulaks, the people with land who had something to sell, did well indeed, because their goods fetched good prices and they did very well. With the gradual ending of the New Economic Policy, which sort of trickles to an end after Lenin, they would become targets in the mass collectivization campaign that accompanied the Five Year Plan, that is 1928 to 1933, and would be themselves victims of the purges as we'll see. 
The tragedy of the Russian Revolution. There's a quote in there from--I guess it was somebody in the Red Army who said, "Did we do all this? Did we fight in the civil war? Did we try to save the revolution against some really basically horrible legions and the White armies during the civil war? Did we save the revolution in order to round up Kulaks, and put them in the middle of a field, and line them up in the middle of a field, and gun them down with machine guns to kill them? Did we do all of this to eliminate from the face of the earth these people who had, despite being relatively privileged, had struggled, and had managed to survive the whole thing?" Sadly, the answer is yes. It came to that. The question is: To what extent was this automatically--was this part of the system from the beginning? I'll give you some examples that Holquist cites in a minute. He says you can see this coming if you look at the first years of the Soviet period. Let me give you some examples of this. This is before the period where Stalin becomes the successor. Stalin worked very hard to make it seem--as Lenin had one stroke and then another--Stalin tried to keep access closed to other people. He worked feverishly to make it seem that Stalin was the chosen successor of Lenin. There are some famous doctored photographs where Stalin has had himself inserted next to Lenin, in famous poses of Lenin, who was a pretty good speaker, but nothing like Trotsky. Trotsky, with Jean Jaurès, was the greatest orator of the entire period. But Lenin wasn't too bad. Stalin literally had himself stuck into pictures where he would be there. He also took some of Lenin's writings and sort of "updated" them to make it seem like the mantle was there. As everybody knows who follows this stuff at all, Lenin was, in his final days, most concerned that comrade Stalin's leadership was potentially very dangerous. 
He expressed those fears in the letter, if I remember correctly, written with a very shaky hand of a man who had a stroke, a very serious stroke, and who was close to death. He had his doubts about comrade Stalin, not exactly from the beginning. Also, it's important to note that Trotsky--whom as I said the other day would finally be tracked down and assassinated in a garden in Mexico City--the differences between Trotsky and Stalin went beyond ideology. Trotsky, in what became known as the "left opposition," that was pushing for more active instigation of revolutions in other places, and, ironically, was pushing even more for collectivization even early on. But it went more than that. There was a rivalry between two men who both were extraordinarily sure of themselves, and who thought that they were the person who should first save, and then lead, the Soviet Union. Trotsky's role in the Red Army as a strategist was extremely important. But there was more than that also. There was more than a small trace of anti-Semitism in Stalin. When he would refer to "cosmopolitan enemies," and things like that. Cosmopolitanism was sort of a code word for Jews, for Jewish people within the party. There was more than that to that. Trotsky is expelled from the party, and then finally is tracked down and killed. Some of these that you know from reading Orwell, some of these factions and these differences play themselves out in this anticipation of World War II that was the Spanish Civil War. The followers of Trotsky are a very important faction in the Spanish Civil War. You know from reading Orwell the role of the Stalinists in all of this. Just a few things before I turn to Holquist's argument about how you can see some of the horrors coming early on. Let me just define Stalinism as a term. It's a set of tenets, policies, and practices that characterize the Soviet government during the period when Stalin is in power. Stalinism lasts until Stalin finally dies--when is it? 1953. 
That is when Stalin dies. The beginning of the Five Year Plan, that is 1928 to 1933, is really the real beginning of Stalinism. You can anticipate some of this, as we'll see in a minute. Stalinism not only takes a sort of democratic centralism of decision making, but what it does is it employs state coercion, and more than this, state terror, with the goal of transforming this still relatively backward society into a Soviet state that could sustain itself, that could build heavy industries. This was the obsession of Stalin and Stalinism, that heavy industries would have to be built, and that they would be built on the backs of the peasantry, who would lose their land and become industrial workers. There's a massive urbanization, as we'll see, in the Soviet Union during this period. The obvious central characteristics of Stalinism, as you've already seen, are the abolition of private property, first of all, and the end of free trade, the end of the market. The market is to disappear. If you're abolishing private property in a vast, vast state, in which two generations before you still had serfs, what you want to do from their point of view is collectivize agriculture. The massive collectivization of agriculture. This would, as you know, become a characteristic of the satellite states in Eastern Europe and Eastern Central Europe with varying degrees, varying degrees in those states. The economy is planned. It's run in a centralized fashion and predicated upon mass industries, rapid industrialization. Part of this was the liquidation--not a nice word--of those "exploiting classes," that is the bourgeois, the NEPmen, the Kulaks, the aristocrats, and the clergy. This involves deporting people to the gulags, or incarcerating them wherever they were, or incarcerating them in the gulags. The purge, the terror, the political terror against alleged enemies, including those who disagreed with Stalin within the leadership of the Soviet Union. 
Thus, the purges that you can read about of the left opposition of Zinoviev and Trotsky, and Bukharin's right opposition. With this comes the cult of personality. Stalin himself becomes, and I say this in quotes, "a czar-like figure." There are still truck drivers in Russia who have pictures of Stalin in their trucks, and other people as well. This leads, obviously, to kind of half-baked political scientists' interpretations, "Well, you have a czarist state. You have an autocracy. Inevitably it becomes another autocracy with a czar-like figure, the cult of personality of Stalin." When you think of Mao's China, and there the cult of personality, if anything, was even more than that with the Little Red Book and all of this business. To repeat the obvious, the Soviet Union was the dictatorship of the Communist Party, and the dictatorship of the Communist Party was the dictatorship of Joseph Stalin, and the paranoia of Joseph Stalin. Now, ironically, Stalin, as I'm sure you know, was not Russian, even though the Soviet Empire, and it was that, was largely run in the interest of Russia and Russians. Stalin was a Georgian. He was from the country in which all these things have happened in just the last five or six months. He started out as a seminary student. He was expelled for reading Marxist tracts. Like a lot of these people, he took aliases, because he robbed banks to get money for the Bolshevik party. Stalin means in Russian "man of steel." That was his alias. That wasn't his original name. Remember, he's a Georgian. As I just said, he becomes the administrator, the Commissar of Nationalities, but determined to snap the head off of the "hydra," of the danger of nationalist revival in these states. Communism, in theory, was antithetical to nationalism; although, ironically, as you all know, it doesn't really work that way. 
There were strongly nationalist communist states that still retained--Hungary, for example, or Czechoslovakia that retained, even within the satellite nature, kind of a pride in trying to make it work even in 1968, as Dubcek, before the tanks roll into Prague, tries to make a human-faced socialism with a Czech and Slovak face. Of course, it doesn't work. Through the whole period, as everybody knows, they execute millions of people. In World War II, the figure for the number of Soviets who die in World War II is about twenty-five million people. If you're thinking about Stalingrad, and thinking about the siege of Leningrad, which goes on and on and on and takes a million lives, but within that twenty-five million, lots of those people are people who died in the gulags. They did not die from reasons of war. They died in the gulags, and many of them were executed for being a Kulak, or being a NEP man, or whatever. Can you see this coming? Can one see this coming? Just a couple of points along the way. During World War I, states had increased their power, their ability to control what became sort of command economies to mobilize the resources of the state. In Russia in World War I, the imperial government, Holquist writes, "initiated a deportation of 'the Jewish element'"--remember, the rabid anti-Semitism of the czar and Alexandra--"on the borderlines as 'pernicious, harmful, and dangerous to the Russian people,'" and thus in wartime they are, just as in Italy. Italy went to war in part so that they had the idea that they could somehow make citizens Italian, feel themselves Italian. There is this nature as the war is being fought in Russia that you will increase the Russianness of the effort of the empire, even though all these other nationalities are involved, by excluding people, by excluding people. They're not excluding people by putting them up on the wall and shooting them. Nonetheless, the language is somewhat there.
For example, they describe the Whites, who were a nasty group, many of them, and just nasty, bloodthirsty group, not all of them, but many of them. They describe the Jews as "microbes." This is the Whites describing the Jews as "microbes" and Bolshevism as a "social disease." The kind of disease metaphors, the next step is if you have a disease, if you have a cancer, you cut it out. You exclude it by cutting it out. The language of exclusion is already there. At one point the Soviets, in the very first part of the Soviet Union, began a program of what they called "decossackization." They want to remove an entire Cossack population, which they viewed as potentially disloyal to the revolution. But you can't do that. It's just too hard to do that when all these other things are going on, so they can't do that. In the early years, the Lenin years, you still find these things. In 1920, when there's a campaign against banditism, that is bandits or people who don't support the Communists--that term is often used, by the way, in France under Vichy, that the resistors are bandits, or they're terrorists, etc. etc. Bandits become a dangerous epidemic. Again, the disease metaphor. They're dangerous because they are. In 1920, Stalin informed Trotsky, Holquist found, that an order would soon come directing "the total extermination of the White officer corps." Of course, total extermination is pretty strong language. That's not just simply putting people in jail or in a re-education camp. That is getting rid of them. They create camps which were called "filter spaces," where people could be kept until they had seen the light, etc., etc. They had all these White prisoners, some of whom were executed, and the Cheka are the police who oversee all of this. That's an obvious term. Examples from the civil war, in Holquist's words, "show the project of fashioning society by excising particular elements was an intrinsic aspect of Soviet power from the very beginning."
But it's not on the scale that would come later. There are lists of people drawn up in the early 1920s that would be used in 1937 and 1938, during the Great Purge. So, when the campaigns of collectivization, which are bloody, which are massacres, come along, the dekulakization campaign, get rid of the Kulaks, in 1929-1930, becomes on a more urgent and more paranoid scale. This is the kind of stuff that we now know from the archives. A memorandum from March 15 1931 states that with regard to the Kulaks, the goal of deportation from all regions was "to totally cleanse them of Kulaks." Another, slightly earlier, in February, calls for them to be "immediately liquidated. We will exile the Kulak by thousands and when necessary shoot the Kulak breed. We will make soap of the Kulaks. Our class enemy must be wiped off from the face of the earth." Strong language. Thus, in 1930, more than 20,000 Kulaks were sentenced to death. Many more are gunned down when they protest. And they protest. They kill their animals rather than turn them over to the commissars. They burn their harvest. They burn their farm. These are "weapons of the weak," as my dear friend, Jim Scott in the political science department, would call it. Weak indeed they were. They're confronting these enormous military forces. But they fight back. They fight back. They just don't go down in a heap without fighting. One of the interesting things about this is just as one of the key trends in the end of the nineteenth century is the origins of sociology--intellectual trends, the idea of counting, and figuring, and thinking about contemporary society, Max Weber and all that. Really important stuff, positivism and all of this. These kinds of censuses developed way before all this. The first really accurate census in France, outside of municipal ones, is in 1941. But what they do is they use sort of modern tools of censuses, surveys, and questionnaires, to get information on the entire population. 
You'd better be damn careful when you write down who you are on one of these forms. When they say, "Who is your grandfather?" "What did your grandfather do?" What are you going to write? You can't write down he was an industrial worker if he wasn't. What if he's a Kulak? What if he's a noble? What if he's an Orthodox priest? You're guilty by association. Once a Kulak, once a clergyman, once an aristocrat, by class identity you are guilty. You are guilty. They used the censuses of 1926, and 1937, and the last one before the war of 1939, as a way of deciding who should get passports and who shouldn't get passports, and who should be sent off to wherever. By 1934, twenty-seven million people in the Soviet Union had been monitored and given state ID cards. The French, you had to have an ID card, also, to go from one department to the next, to go from Marseilles to Nice you had to have an internal passport. What they do, it's the same state thrust, except that it has a murderous outcome. If you're classifying people, if you're counting people, if you're registering people, this is way before "the quiet violence of the computer," as Michelle Perrot memorably put it. The outcome is very different. They use archivists. Some of my friends were archivists. They're not going to be doing this kind of thing. These are French archivists. But they're using archivists who are fearful for their lives. If you're an archivist with writing and reading skills, you're potentially an enemy of the state, because you're from the wrong social class. They say, "We want to look at your archives. We want you to find out who's in what category in your region." You better do it. Archivists in 1939, according to Holquist, identify 108,000 enemies of the people. Once you're classified as an enemy of the people, baby you're toast. That's it. You're toast. Sheila Fitzpatrick is a wonderful historian. She was one of the first to study what she calls "the extraordinary everydayness" of Stalinism. 
What was life like in a place where the only way of getting anything, and potentially the only way of surviving, is your relationship to a bureaucratic figure? Stalinism, the essence of state collectivization, of state totalitarianism, is you have to have this enormous bureaucracy. It's the bureaucracy that calls the shots. Let me put some of the points that are important. In getting by, who are apt to be the militants in all of this? Who are apt to be the true believers in all of this, the most loyal to the project of creating this new world, this new world that never came? The answer is young people, younger people. There were many cases of younger people denouncing their parents, being asked to denounce their parents. But the most militant and the most faithful were people who had not, who in the 1930s, for example, if they were twenty-five years old, they didn't really remember the old regime. They didn't remember the czarist autocracy. They are more likely to think that there's nothing wrong with trying to decide who still has religious icons on their walls, whose parents religiously went to church. The young people were more apt to be the militants in all of this. If you were a militant, what you did was you denounced class enemies, these Kulaks and the priests, members of the pre-revolutionary nobility, former capitalists. Again, once a capitalist, you are always a capitalist. Once a Kulak, you are always a Kulak. People who had been declared as "non-toilers," that is people who are not really workers or really peasants, who are Kulaks, they are deprived of the vote, not that elections subsequently meant anything in the Soviet Union, as early as the constitution of 1918. These young militants undertake a war on bourgeois specialists. One of the problems with the campaign for rapid industrialization is they're really torn. You need these bourgeois specialists, because they're technocrats. They're the ones that have to keep up the production count. 
They have to keep it up there. Then you go into this period and you say, "You can't have a bunch of bourgeois specialists who are educated." So, those people get liquidated, maybe not killed but get removed. Then they will bring in and replace them with peasants who sometimes had absolutely no education, which is not their fault at all. The Soviets do educate people in this period. There's a huge increase in literacy in this period. But they're turning over important management positions within the Soviet Union, in this push for rapid industrialization, to people who can't read and write, and really just don't have the kind of finesse or the kind of ability to do it. That causes all sorts of problems. The bureaucracy is increasingly filled with people who are not competent, but are there because of their party loyalty. If you weren't loyal to the party, there was nowhere you were going to go. How does this affect ordinary Soviet citizens? There's constant propaganda, talk about the radiant future, that enormous sacrifices now will be worth it in the end. Marx, after all, said scientific socialism is going to take a long time. Thus, if you see these, and I've been in the Moscow subway a long time ago, but these sort of heroic murals of the Soviet worker, the Stakhanovite. Don't write it down. I think he's in the book, but this guy, Stakhanov, was a guy who had apparently set a world record by extracting the most coal any human had ever done. It was basically made up. But he became this kind of image of hard work. I'll tell you, you'll see a lot of these art deco murals in Detroit, Michigan. Or, for example, I'm not making this as an analogy, but there was the equivalent under National Socialism, also, the idea of the German worker toiling away and all that, with the interests of the state. Basically it's the idea that Russia could be moved by hard work out of backwardness toward this radiant future. It does keep people going. 
There's always this contrast between "then," the bad old days when these folks ruled--the NEP men, and the Kulaks, and the aristocracy, and all of that--and the inevitable future. The landlords were gone. There's collective ownership of the means of production, so everything has got to be okay. The motto is, "The party is always right," and you'd better believe it. The state shaped the way people lived. Part of it, those people lived through the purge. I'll tell you a story. I had this colleague a long, long time ago when I first came here who grew up in Moscow in the 1930s. His father was the Persian ambassador to Moscow. In the purges people were now seeing their parents, and people were being taken away in the night, and there were boots in the hall, and it was a pretty damn scary time to live. One day he was in this big school. He was in collège, middle school. He's twelve years old, basically. He is sitting in one of these big buildings there, and the bell rings to go from one class to the next class. The bell rang and the guy who was sitting in front of him, a twelve-year-old boy, gets up. He puts his cartable down, his book carrier down, and he goes out. They're on the fifth floor. He goes to the stairwell and he jumps over to his death, just like that. He stepped over the thing and fell. If you're twelve years old you're going to remember that kind of thing. He didn't know, because you didn't discuss such things, whether he had denounced his parents and felt badly about it, or whether his parents had been taken away and he didn't have any idea where they were. This was one of the tragedies also. A lot of it is self-deception. These were very poor people. You believe in the radiant future. This is a very, very poor place. You saw these Stalin skyscrapers. There's a big debate in Warsaw whether the one that's there should be kept. You saw more literacy. You saw sometimes products on the market. But the deception was there. The great hopes were there. 
The reality was completely different. You would have bizarre things happen. Suddenly, you have state planning. For a while there were all these red female stockings on the market. That was supposed to be cool. You had a lot of Western visitors who had seen these same female red stockings, which were very much "in" in Paris and Berlin. But this wasn't Paris and Berlin. This was Moscow. Suddenly, you have a lot of that. Someone thought, "Ketchup will look good to the outsiders when they come." So, they start producing ketchup. There's nothing to put the ketchup on. The most ridiculous example I've ever heard is that they started producing lots of bathtubs, because people are waiting in line to get apartments, which was true until 1992. They're waiting in line to get apartments, though things got a lot better after World War II, but still. If you're going to have an apartment that is of progress, the radiant future, you've got to have a bathtub. So, they produce all these bathtubs, but they forget to produce corks or stoppers. So, for a long time you had people that were lucky enough to have bathtubs, but the water just runs out and they could show them off to their friends. It simply doesn't work. But there is the illusion, and I'm going to have to end with this, because I went too long earlier. There's more to say but there's always more to say. There was the illusion. There were lots of true believers and people who also wanted a radiant future. In the late 1960s, please, not the early ones, and the 1970s, we were all dealing with our ideology of the weak. I had an uncle who meant a great deal to me who was a communist. He was trained in Berlin as a psychoanalyst. He worked for a Communist newspaper, and he claimed to have known Georgi Dimitrov, the Bulgarian communist leader. He was a true believer always. At the end of his life he ended up passing, or having his wife pass, Save the Whales petitions. He was no longer a communist.
But I remember when I was a little boy him telling me that the people trying to escape from communism were psychotic. That people trying to get over the wall in Berlin were psychotic for trying to leave this radiant future of a workers' paradise. There were a lot of true believers. These people, a lot of them, and I'm not dissing my uncle whom I loved deeply, and who meant a great deal in my life, and especially my aunt. They believed. People would go to these show trials and they would see people saying, "Yes, I was in cahoots with Romanian fascists," or with Dutch fascists, or with Georgian nationalists, or something like that. They would admit to all sorts of things, possibly hoping it was going to save their lives. It didn't. It never did. They were executed. Stalin executed them all. He executed the entire general staff practically of the army. One of the most amazing things about the Second World War is how the Red Army not only survived but won, and retrained people. There weren't any admirals left. There was nobody left. He killed them all. He killed them all. But people continued to believe. They believed. The whole phrase, maybe you've heard of the phrase, "a Potemkin village." I guess that's a good place to end--a Potemkin village. For example, if you're watching an old TV western and you see a façade, you've got the bar, and you've got Miss Kitty, and you've got all of this, and a few people punching each other out; there's nothing behind it. It's just all a fraud. It's nothing. They would bring these visitors from the West in this brave new industrial world, and they'd see parts of towns that had been reshaped. They meet the first literate people in a family. It was very true. Some good things happened, too. But they were far outweighed by the bad things. But a Potemkin village would be, you'd go and you'd see this façade and you'd be whisked through. "This is where the children's railroad will be." "This is where the kindergarten is going to be." 
It's always the "going to be," and it never happened. That, I suppose, is the tragedy of the Russian Revolution. Arguably, maybe, who knows, a good idea gone terribly, terribly bad.
European Civilization, 1648-1945, with John Merriman
Lecture 2: Absolutism and the State
Prof: So, what I want to do today--again, this is a parallel holding pattern lecture. I'm going to talk about absolute rule. This parallels what you're reading. It's just to make clear, with some emphasis, the importance of the development of absolute rule. Now, one of the points I made last week, for those of you who were here, is that one of the themes that ties European history together is the growth of the modern state, of state-making. This tends to be an awkward expression or term that is used by historians. If you look at the way states are in Europe now, whether they be relatively decentralized, such as Great Britain, or extraordinarily centralized, as is my France, the origins of the modern state must, in part, be seen in this kind of remarkable period of European history from the early seventeenth century through the middle of the eighteenth century. Now, we have a process in late Medieval Europe of the consolidation of territorial monarchies. You did have monarchies like Spain, England, and France, namely. Those were the three most important ones, in which rulers brushed claimants to power aside and consolidated their rule. But the period of absolute rule really begins in the mid-seventeenth century, and is to be found in those states that had specific kinds of social structures. This is a point we'll come back to, particularly when we're talking about the two most important states, two of the great powers of the period that did not have absolute rule. And which, in the case of England, the civil war was largely fought, to a great extent anyway, trying to prevent the English monarchy from taking on characteristics of those emerging absolute states on the continent.
I'll talk next Wednesday about English/British, because Britain doesn't exist until 1707, self-identity and how not being an absolute state is part of what emerged in the sense of being British and being Dutch certainly, arguably even more, had to do with that because of the proximity of the direct threat to the Dutch by the megalomaniac, Louis XIV, who modestly refers to himself as the Sun King. So, between 1650 and 1750, and this is right out of what you're reading, the rulers of continental Europe, of the biggest states, extended their power. And, so, there were two aspects of this. One is they extend their ability to extract resources out of their own populations; and, second, they work to increase their dynastic holdings at the expense of their neighbors munching smaller states, or by marriages, or by wars against their big rivals. One of the most interesting examples of that is the Thirty Years' War, which starts before this course and ends before this course or with the beginning of this course, 1618-1648, which I'm going to come back to a little bit in a while--they say while it begins as a religious war between Protestants and Catholics, it ends up being a dynastic struggle between two Catholic powers consolidating their authority over their own peoples, and expanding their dynastic domains, thus Austria and France. That's an important point, because it tells you what really is the big picture that is going to emerge. So, when we're talking about the growth of absolute rule, we're talking about France, that is, the Sun King; Prussia, particularly Frederick the Great about whom you can read; Russia, Peter the Great, about whom I will have something to say in a week or two, I don't know when; Austria, aforementioned; and Sweden. Sweden kind of disappears from the great power state when they're defeated by Peter the Great in--when is it?--1709. Now, what did it mean to be an absolute ruler? 
What it meant was that in principle, your power was greater than any challenge that could come from those underlings, those craven reptiles in your imagination over whom you ruled. But there's a balance to it that I'll discuss in a while. There really can't be a challenge to them from the state itself. So, they make their personal or dynastic rule absolute, based on loyalty to them as individuals and not to the state as some sort of abstraction. Of course, one of the interesting things that we'll hear about in a couple days is the fact that British national identity, which is formed precociously early in European history, arguably in the seventeenth century and for elites perhaps even before, has this sort of constitutional balance between the rights of parliament, victorious in the English Civil War, and loyalty to the monarchy. So, absolute rulers assert their right to make laws, to proclaim or to announce laws with the wave of their chubby hands, to levy taxes and to appoint officials who will carry out their will. So, it's possible to talk about the bureaucratization of medieval states if you want, but when you look at the long-range growth of bureaucracies as part of government, as part of state formation, that's why the growth of these bureaucracies is one of the characteristics of these absolute states in all of these big-time powers. So, what they do is--well, let me give you a couple of examples. One thing absolute monarchs don't want is they don't want impediments to their personal rule. What was a kind of impediment to their personal rule? One would be the municipal privileges. For example, in the German port towns, Lübeck and Hamburg and the others, they formed this Hanseatic League, and Germany remains decentralized. There are all sorts of states. Some are more powerful than others. But Germany is not unified until 1871.
But if you think of Spain, if you're hitchhiking through Spain or something like that, or through the south of France, or Eurail passes, and if you go to a town like Avila in Spain. Avila is one of the most fantastic fortified towns in Europe. Or, if you go to Nimes in the south of France, you'll see boulevards that people race motorcycles around all the time and they keep you up all night. There are no walls there anymore, because the king had them knocked down. So, what happens with municipal privileges, towns that had municipal privileges, these are eroded and then virtually eliminated by powerful potentates. In the case of Nîmes, N-I-M-E-S, which was largely a Protestant town, they knocked down the wall so the Protestants of Nimes could not defend themselves against this all-conquering Catholic monarch. So, municipal privileges--walls were put up for a variety of reasons around towns. Plague, for example. Dubrovnik, one of my favorite cities in Europe. Dubrovnik had these magnificent walls you could walk all the way around. They have a quarantine house where they would put people who were travelers arriving there, because walls kept out plagues. Walls keep out malfaiteurs--evildoers. They keep out bandits and things like that. The doors literally slam shut at night. There was a case of a very minor insurrection in an obscure Italian city in 1848 where the people of the town literally locked the ruler out of the town--and Italy remains decentralized. The tradition of these decentralized city-states that were the heart of the Renaissance. Italy is not unified--to the extent it has ever been unified--until the 1860s and 1870s. What these kings do, these kings and queens is they get rid of these impediments to their authority. Even take the word burgher or bourgeois. Bourgeois is a French word. It's more of a cultural sense, but it also has a class sense.
A bourgeois or a burgher was somebody who lived in a city and assumed that some of the justice that was levied against him or her would be the result of decisions taken locally. Now, big-time, powerful absolute monarchs don't want that. So, part of the whole process is the elimination of these municipal privileges and replacing municipal officials, to make a long story short, with people that they have appointed. They eliminate--the one privilege above all that the big guys want to get rid of is the right to not be taxed. Part of being an absolute ruler is being able to levy taxes against those people who have the joy or the extreme misfortune of living in those domains, and more about that later. So, what happens with all this is that absolute rule impinges directly on the lives of ordinary people more than kingly, or queenly, or princely, or archbishiply power had intruded on the lives of ordinary people before that. So, these rulers have a coercive ability in creating, and I'll come back to this, large standing armies that will be arriving not immediately, they're not arriving by train or being helicoptered in at some distant command, but they will get there if there's trouble. They will arrive and they will get there and they will enforce the will of the monarch. We'll see the statistics are really just fascinating about how big these armies become. The argument that I'm going to make, drawing upon again Rabb--he's not the only one that's made this argument, but he's made it more thoroughly than most people--absolutism may be seen as an attempt to reassert public order and coercive state authority after this period of utter turmoil. The English Civil War, the Thirty Years' War, in which in parts of central Europe a quarter of the population disappeared, were killed, murdered in ways that I will unfortunately show you in a while. 
More than this, what happens is that the nobles, who in all these countries going back to the Medieval period, had privileges that they were asserting vis-à-vis their monarchs, they will say, "We agree to be junior partners in absolutism in exchange for the protection that you, the big guy, and your armies can provide us, so that we don't have to lie awake wondering who is coming up the path to the big house. Is it peasants who have come to assert the rights of the poor against us?" And at a time of popular insurrections in all sorts of countries. Think of all the insurrections or all the people who followed false czars to utter slaughter in Russia. The nobles say, "All right. We agree to be junior partners in absolute rule in exchange for recognizing your supreme authority over us in exchange for the protection that you will afford us." Private armies are disappearing. The armies of the state, as you will see in a while, are growing, and moreover, "you, oh big guy, you will assert our own privileges. You will recognize our privileges as nobles." So, it's a tradeoff. But in absolute states, there's no doubt who rules and who helps rule. So, in absolute states big noble families are very happy to send their offspring to become commanders in the army and navy, where they never do a damn thing, or to become big bishops like Talleyrand, and to profit from the state while recognizing that the big guy, the king and the queen, have absolute authority over them. Now, the classic case, of course, Louis XIV you can read about. Louis XIV when he was a kid, he was about twelve or thirteen years old, he lived in Paris. He lived in the Tuileries palace along the Seine, which was burned in 1871 during the commune. There was a huge old insurrection called the Fronde, F-R-O-N-D-E. A fronde was a kind of a slingshot that Paris street urchins used to shoot fancy people with rocks as they rode their carriages through the muddy streets of Paris.
It's a noble insurrection against royal authority, and in Auvergne in central France you have people rising up against their lords saying, "Hell with you. We're not going to pay anymore." When he's a boy, he hears the crowd shouting outside of the royal palace in Paris. It scares the hell out of him. At one time they burst into his bedroom and he's a little guy. When royal authority conquers these rebels, the frondeurs--you don't have to remember any of that, F-R-O-N-D-E, it's good cocktail party conversation, or something like that, but it's important--he makes them, literally, he's a bigger guy then, they literally come and they bow down, and they swear allegiance to him in exchange for protection and the recognition of their privileges as nobles, as titled nobles. That's really the defining moment in absolute rule. What does Louis XIV do? He goes out and builds Versailles. He only goes back to Paris I think three times ever. He doesn't like Paris. Versailles is only eighteen kilometers away. It's about eleven or twelve miles away. The women of Paris in October, many of them will walk to Versailles to bring the king back to Paris. After that, he's essentially, well to put it kind of ridiculously, toast, French toast, when that happens. He builds this big--I call it a noble theme park, basically, at Versailles. It's not the most interesting of the châteaux at all. The most interesting is Vaux-le-Vicomte, which is southeast of Paris. It's a big sort of sprawling--gardens everywhere. Ten thousand nobles lived there. How boring! But the point was that they could be watched, that they're not going to--they can chase each other's wives and mistresses around, and they can eat big drunken meals. The château was so big that when it freezes, they were trying to get to the bathroom and most of them never made it and peed on these long corridors that some of you have seen. The wine would freeze on the way from the kitchen through--it is sad--to the big dining hall.
But he has 10,000 of these dudes and dudesses there that he's going to watch over. They can conspire against each other, and they can hit on each other's wives and mistresses. He could give one damn. But he can control them there. He only goes back to Paris three times ever. All the time he's expanding his own personal power vis-à-vis his own population, conquering Alsace and parts of Lorraine and going to these inevitable natural frontiers. Napoleon thought the natural frontier was the Pacific Ocean. That would be another story. So, this is what, in a nutshell, kind of what absolutism was. But let me say two things now, after having said that. There were doctrines. You can read about this stuff--geez, it's obvious. But there were doctrines of absolutism that originated with jurists early. This was out there. There was a theoretical conceptual framework for having a king or queen having absolute powers. Even the development of this theory of absolute rule is in response to the rise of these territorial states like Spain, and France, and Russia later. France is a good example. I quote in here a guy who croaks before this course starts, Jean Bodin, B-O-D-I-N. He says, "Seeing that nothing upon earth is greater or higher next unto God than the majesty of kings and sovereign princes," he wrote in Six Books of the Republic, "the principal point of sovereign majesty and absolute power was to consist principally in giving laws, dictating laws, onto the subjects in general without their consent." So, for absolute rulers, the link to religion you can read about, but there's always the sense that he or she is doing God's will by exploiting ordinary peasants, ordinary people and conquering other territories. But there's a theoretical framework, and it will catch up with the French monarchs, among others, later--that the ruler must be a father, a benevolent figure. 
As I said in some context last time, how many Russian peasants died in the 1890s thinking, "Oh my god, if the czar only knew that we're starving, how angry he would be with his officials." Well, he could have given one damn how many millions of them died. But this was the image, that the big person is there to protect you, and that his glory is your glory. Along with this conceptual framework, provided by none other than Thomas Hobbes in England, who had lived through the English Civil War and thought that you shouldn't mess around with this rights business, that you need some sort of big powerful monarch there--there was a sense inherent in all of this. This will be important to try to understand the French Revolution, La Révolution française: that there's a difference between absolutism and despotism. And even conceptually, theoretically, if the monarch goes too far against the weight of the past, there is inherent in this the idea that he or she might well go. Of course, you can imagine the thoughts of Louis XVI as they were cutting back his hair to await the fall of the guillotine on the 21st of January, 1793. In the cabarets and the estaminets, the bars of Paris, of which there are many, many, many--happily so--in 1789, ordinary people are drinking to the Third Estate, and talking about despotism, and finding examples in what they saw around them of despotic behavior. That line had clearly been crossed, and it helps explain why, in a country in which there weren't ten people who wanted a republic in 1789, it was possible to imagine life without a king. Imagine that. So, that's there as well. Now, let's characterize--oh, geez, we've got to move here. Let's characterize absolute rule. Now, you did have, in many of these countries, diets, or parliaments, or some representative bodies. Again, the king doesn't have to call them.
In the case of France again, since we're talking so much about Louis XIV, they call the Estates General, which is to represent all the provinces, after the assassination of Henry IV in 1610. Appropriately enough, he was stabbed to death in a traffic jam in Paris when his carriage gets blocked in the center of Paris, and this mad monk sticks a big knife into him. So, they call the Estates General then, but the king never calls it again until 1789. So, you have these diets and you have these parliaments, but one of the characteristics of absolute rule is that you don't have to call these bodies, because the king is the big person. Now, in the case of England, one of the causes of the English Civil War is the refusal of the kings to pay any attention to, to recognize, the rights of parliament--rights that, in the British imaginaire, in the British collective memory, started, I believe, on June 15th, which is my birthday, in 1215, although I wasn't born yet in 1215. And, so, the idea of the freeborn Englishperson--Englishman is what they would have said in those days--meant that the rights of parliament had to be respected. When it looks like those kings are going to restore Catholicism, or at least have lots of paintings of swooning cherubs, and cupids, and Baroque Italian art in Windsor, and London, and these other places, then you've got a revolution. So, absolute rulers didn't really have to pay attention to these assemblies. The best example I can think of offhand--I should let this wait--is Peter the Great, the czar of the Russians, who may or may not have beaten his son to death; at least he ordered him tortured. Peter the Great was a huge sort of power-forward-sized guy at a time when people were very small.
He had this thing called the drunken assembly, which was in a way kind of a mockery of parliamentary representations where his cronies would come and just get wasted and would make all sorts of flamboyant proclamations that seemed to represent what a real parliament would do. But in fact, Peter the Great listened to whom he wanted to and ignored the others. And sometimes had them killed if he had to, if he thought that's what he should do, because there wasn't any sort of challenge to his authority. That, my friends, is part of what it meant. So, I already mentioned about how nobles become junior partners in absolutism. That's not a bad phrase, junior partners in absolutism. So, what happens? Two ways of measuring how this happened and what difference it made is to realize, to return to what I said earlier, that big state structures involve bureaucracies. So, the king's representatives go out in the name of the king. They give out justice, or the lack of justice, or they send armies in, or taxes, or this stuff. Now, the Renaissance city-states of Italy had relatively efficient administrations, to be sure. But these are royal bureaucracies that expand dramatically in size. Even though decentralized England expands its bureaucracy and collected taxes much more efficiently than across the channel in France, state-making involved more officials there. So, in order to raise money, you have to enforce taxes. So, you may farm taxes out to someone. They'll keep as much of the cut as they can possibly steal. Or to make money you'll sell noble titles. This gets the French kings into trouble. Or you sell monopolies. Peter the Great had a monopoly on dice, because people gambled a lot. The nobles gambled all the time. You could gamble serfs, real people. You could gamble them. You could lose them with a bad hand. This was Russia. So, the monopoly on dice he sells. He sells the monopoly on salt. Salt was a big commodity, obviously, for storing meat. 
That monopoly is sold in various places. So, nobles get these kinds of offices, and really, they could rake it in; they get these titles and they are representing the king. They're governors, or intendants as you call them in France. And it expands the number of officials dramatically. Then there's warfare. There is nothing more symptomatic of the growth of absolute rule than the growth of powerful armies. Again, when you're traveling around Europe, if you're lucky enough to do that, you'll see these big fortified towns. In the case of France again, they are the work of a brilliant military engineer called Vauban, V-A-U-B-A-N. You go to a place like Perpignan or Lille or Montmédy, they're all over the place. And these are fortress-like defenses in an age of essentially defensive warfare. But if you're going to have a big old fort, and you're going to have lots of cannon that you hope to use against your craven, reptile enemies that would want to get in your way, you've got to have people to try out the cannon. You have to have people who live in these fortifications. So, the size of the armies grows for these megalomaniac wars, these dynastic wars between Austria and France--and then they changed partners in 1756, and all of this business. You can read about that. But the big story is huge, huge, huge numbers of troops. During the sixteenth century, the peacetime armies of the Continental Powers were about 10,000 to 20,000 soldiers--very, very little. By the 1690s, 150,000 soldiers. The French army in the 1690s was 180,000 people. That's twice the Michigan football stadium. Can you imagine a stadium packed with soldiers and all that? How boring. But, anyway, it rose to 350,000 soldiers, the largest in Europe. I think I have in this edition a table of the sizes of European armies. Habsburg empire: 1690, 50,000; 1756, 200,000. A polyglot army, too, because of all the different nationalities.
Prussia, identified with the Junkers--the nobles who were army officers, with the dueling scars that they had, that Bismarck would have in a unified Germany--a mere 30,000 people in 1690; 195,000 people during the Seven Years' War; in 1789, 190,000; in 1812, as they're fighting Napoleon, 270,000 people. This is in a state that barely extends beyond Brandenburg and Pomerania, in what now is western Poland, and still Prussia in the unified Germany. Even Sweden. At the time of the Battle of Poltava--forget it; well, don't forget it, but read about it--in 1709, that's when Sweden loses to Russia, the Swedish army was 110,000 soldiers. That's an awful lot. So, that's one of the things that happens. The modern state in action, the absolute state in action, is the army. Even in peacetime, military expenditures take up almost half of the budget of any European state, and in times of war, eighty percent. Having said all that, let me just--oops, try to turn this baby on. Did that go on? Why didn't that go on? Oh, I've got to put this thing down. That's it. Again, these just illustrate my point, which is: Why did nobles and even other people agree to all of this? They're being exploited, and there are big armies that can crush them like grapes if they get in the way. But one argument that can be made is that things were so terrible and so out of control in the earlier period that the strengthening of the state is something that people saw as beneficial. Again, Hobbes is over the top. Hobbes wants this sort of dictatorship to keep people from brawling in the state of nature. Again, the elite in Britain were scared, because you've got all these Ranter groups and Levellers and people who believe that everybody ought to have the right to vote, whether they have property or not, and people who believe in the rights of women. This is pretty scary. So, people like Hobbes thought, "Well, we need a really strong state." But that's not the outcome of the English Civil War.
But how did this work in other places? Theodore Rabb's argument is basically that the terrible wars of religion that had ripped central Europe apart in the middle of the seventeenth century led people to look for the kinds of safety provided by a strong ruler. What had begun--and we'll see this in a minute--as a war between Protestants and Catholics, a war that began in Prague when somebody gets defenestrated, which is a fancy word for throwing somebody out of a window, ended up being a war fought by just vicious mercenaries who slaughtered the populations of central Europe. It didn't matter if they were Protestants or Catholics or anything else. They simply killed them. And this terrified elites in much of Europe--the equivalent of what the Fronde did in scaring elites in France. One of the arguments that he makes, and I can't make it as strongly because I don't know enough about it, concerns the scientific revolution. What I know about it is what you're kind enough to read. It was hard to piece all of this stuff together. But there is this sort of sense of uncertainty that you see in someone like Descartes, who finally just goes back to basics and says, "I think, therefore I am." Here I am. They go from there to a methodology of science, a methodology of trying to study things in a rational way, to get rid of the kinds of blind faith that seem to have led to this utter catastrophe of mass slaughter in Europe. There are signs all over the place that this has happened. "I think, therefore I am." There is a return to these kinds of theoretical defenses of absolutism that even preceded the growth of the absolute state as I've described it. Absolutism did not simply just emerge out of this turmoil. As I already suggested, and I would insist upon this again, the consolidation of territorial rulers had already given the basis to an expanding, more formalized state structure, even in England. This is for sure.
It all just doesn't start like that. Louis XIV was preceded in number by Louis XIII. Louis XIII helped expand the coercive structures of the French state. But yet when you look at all of this, you can see that the kind of chaos, the political upheavals, find a response in the growth of central government authority and the growth of bureaucracies. It wasn't only in Sweden, Austria, Russia, France, etc. where you found this. Even in smaller states like Württemberg, a sort of middle-sized state in Germany, you see the same phenomenon on a much smaller level, at least in terms of the size of the state, where people are giving up, willing to compromise on their privileges in exchange for the protection of the ruler of Württemberg, who would never be confused with Louis XIV or Peter the Great. So, this really becomes a sort of European-wide phenomenon. You can apply this to the Glorious Revolution in England as well. People are happy to have a monarch back who is going to reassert control. In the case of England, they're very happy to have a monarch back who was not threatening to turn England again into a Catholic state. So, this is the sort of argument that you can make, even in a state that had a constitutional monarch such as England. Let me just give you a couple examples of what one can mean here. Again, these are painters that you may have come across. It doesn't matter if you've never heard of them or if you never think of them again--but, Titian. The famous Titian. This is his picture of Charles V at a battle in Germany in 1547. This is a pretty dramatic representation of war. This is like Clint Eastwood, The Good, the Bad, and the Ugly. This is a guy, he's armored up; he's ready to go. He's somebody to be emulated from the point of view of the viewer. But at the same time, this is slightly earlier, this is a painting--thanks, Dan--this is a painting of Bruegel the Elder.
The first is the Triumph of Death, where you see what happens in real battle, when people are just sort of slaughtered and the commanders are off at a safe distance. Here again, the massacre of the innocents, where villagers are just being executed because they are there. With the Triumph of Death, the dialogue of the mathematician Pascal is quoted by Rabb. "Why are you killing me for your own benefit? I am unarmed." "Why? You do not live on the other side of the water, my friend. If you lived on this side, I should be a murderer. But since you live on the other side, I am a brave man and it is right that I kill you." When the Swedes get into the act, Gustavus Adolphus brings this huge old Swedish army down and they do a lot of damage, too, and people are absolutely being devastated. Here's Rubens' The Horrors of War. There's a reason why the first attempt to even write about international law comes in this period. Again, this is before the course, but why not? Hugo Grotius writes the Law of War and Peace. He publishes it in 1625. The goal was to stop stuff like this, to try to create a legal framework in which states could resolve their differences without butchering each other. So, there we go. But somehow, here, the glory of battle is slowly being relegated to the background. Let me give you another example. Here's the famous Spanish painter. Again, don't worry about it. Velazquez, who died in 1610, I think. No, it's 1660, sorry. This is his portrait of Mars. Mars is the god of war. Now, how different that is from the portrait of Charles that you saw by Titian. Here, this guy looks like kind of an overweight NFL player who hasn't really gotten ready for the drill. He's very human. There's nothing admirable about him. War is being dissed by those people who are just so tired of the killing. And Mars has this sort of human, flabby torso--it's sympathetic, but it's a different portrayal of war.
People are getting tired of the whole damn thing. He's dull. He's uncouth and he's extremely human. Now, it's unthinkable for someone like me, or for probably most of you, to imagine giving up your rights to a kind of absolute rule, though we seem to be in a situation like that, where that's happened quite a lot recently, even in this country. But these are just illustrations that come out of the Thirty Years' War, which people are trying to put behind them. This is a French painter, draftsman, and etcher called Jacques Callot. These are just some of the many ways that people died during the Thirty Years' War. This is simply The Execution. You don't even need the formal titles of these. But these get around. Peddlers with big, big leather bags would go around Europe and sell things like pins and miraculous images of the Virgin Mary and the stories of saints and all this kind of stuff--and Joan of Arc, or Robin Hood in the case of England, become part of the collective memory. These kinds of images of the horrors of war do get around--the miseries and misfortunes of war, which is basically what he calls this entire series. Here are the people sort of standing around watching this execution. This is somebody being tortured at the stake for merely existing, for having not confessed to being a Protestant or a Catholic or whatever. I'll tell you, in the south of France near where we live, where there was a lot of resistance in World War II against the Germans, there were some Protestant villages that were noteworthy for their resistance. A lot of Catholics resisted, too. But one of the interesting things about some of the villages that I know down there is that there were big mission crosses that were put out after the wars of religion that were sort of symbols of conquest by the all-Catholic king.
Is it in the collective memory--do people remember three centuries later--that the Catholic Church, at least as a hierarchy, was identified with the Vichy Regime in World War II? That's interesting, a fascinating subject. But, anyway, this poor guy's not doing very well up there, and it becomes this sort of big spectacle. These are dying soldiers along the side of the road. It's sort of a sympathetic look--that's the name of this--at these expiring dudes there. Here's the attack on a stagecoach. The point of this is that it didn't matter who you were. If you were in the wrong place at the wrong time, you were history. That was all. There were new ways to be killed. Certainly in Europe there was nothing like this again until the massacres of the Armenians--and arguably some Napoleonic atrocities, Napoleon's armies' atrocities in Palestine, or in the south of Italy, or in Spain as well. There was nothing like this really, including World War I. There were some atrocities at the beginning of World War I, but there was nothing like this again until World War II and, of course, Bosnia. The point is this is why lots of people thought, "I don't like this guy sending people around and taking my taxes, but I don't want to get offed by some marauders. Just hang 'em high, hang 'em all high." These were real ways that people were executed--stakes, massacres, and this sort of business. There's a convent, a church, that's going to go. It's a Catholic church. You can tell from the top. So, maybe these are Protestant mercenaries. It didn't matter, because the Protestant armies had Catholic mercenaries and the Catholic armies had Protestant mercenaries. Everybody had Dalmatians, people from the Dalmatian coast, and Swiss. You have to imagine a time when Switzerland wasn't extremely wealthy. The Swiss were great, famous mercenaries fighting in these armies. Again, the "Swedish cocktail" was sort of suffocating people by stuffing manure down their throats until they died. This was a nasty time.
I guess this is what Hobbes meant by "nasty, short, and brutish," or whatever the fourth was. I don't remember, but that was what life was like in the Thirty Years' War. Now, out of all of this, again to repeat, we are not making the argument that the Thirty Years' War itself led to absolute rule; the growth of state structures can be seen beginning in the late medieval period with the consolidation of these territorial monarchies. There were already bureaucrats representing the royal will. There were already armies. But many of these features, particularly two--bureaucracies and powerful standing military forces--are characteristics of modern states. And to try to explain why it was that absolute rule came to Europe at the time it did, one has to look not only at the particular structures of states, but at the overview and the sheer horror of it all. The boy king, Louis XIV, hearing the crowd shouting outside of his room. He goes out to Versailles and creates this noble theme park, sort of a Euro Disney for nobles, where he can watch these nobles. They agree to be junior partners of absolute rule, and they weren't the only ones. The great power struggles of the eighteenth century would be very different from this bloodletting of civilians that had preceded them. There were professional kinds of armies and all of that. But those are more themes for future lectures. Wednesday I'm going to talk about exceptions to absolutism, what the Dutch and what the English had in common that gave them very different political outcomes. That's important, too, in the emergence of the country in which many of you live. See you.
European Civilization, 1648-1945, with John Merriman
Lecture 11: Why No Revolution in 1848 in Britain?
Prof: Today, what I want to do is talk about the revolutions of 1848. Read the chapter. It's not very long. I want to talk principally about why there was no revolution in Britain in 1848, since there were revolutions in France, in the Hapsburg domains of the Austrian empire, what would become in 1867 Austria-Hungary, in Prussia, in the Rhineland, and in Northern and Central Italy, but not in Britain. I want to talk about that. But before I do that, let's just think about revolution in general and how revolutions work. I'll mention this again when we get to the Russian revolution. If you think about what you know about the French Revolution in this context, hopefully it will make some sense. In the 1960s many social scientists, not all, believed that revolutions came when pressure builds up and you've got intellectuals bailing out, leaving the regime. Then you've got all this tension building up and then boom, you've got your basic revolution. But revolutions don't work like that. If you think about revolutions you know anything about, the Iranian revolution in 1979 would be a good example, or the revolutions we're going to talk about today a little bit, that you're reading about in 1848, or the French Revolution. What happens is that in the case of 1848, or in the case of the other ones that I've mentioned, is that there is a seizure of power by a group who have come together because they oppose the policies of the government in power. But it's at that point that you've got an increase in social and political tension. It's at that point that tension really increases and all sorts of interesting things begin to happen. In the case of 1917 in St. Petersburg, in February the bread lines are very long. There are not a lot of troops around. They're at the front. There are not a lot of police around. Suddenly, the czarist regime is just sort of swept away, like that. It's at that point that things heat up. 
In 1917 things weren't any tenser than they were in 1916, and there are a lot of things happening vis-à-vis the war that helped people mobilize and try to imagine a post-czarist world or a reformed czarist autocracy. In the case of 1848 you've got demonstrations in Paris in February--it rains all the time in Paris in February; it's gray. People want electoral reform. Troops open up and shoot a bunch of people. The same thing happens in Berlin not long after that. At that point you've got the regime that's swept away. What's interesting about the revolutionary process is what happens after you've got this kind of basic provisional government--what the constitutional monarchy becomes in 1789 and 1790, Kerensky's provisional government in 1917, this kind of moderate republic in France; in Germany or in Austria you've still got the monarchy. You have all these contenders for power who are saying at that point, "We want to take advantage of the situation so that we will have a republic," or, in the case of France, "that women will have more rights," or "that workers will have more rights." In the case of the German states, people who want a unified Germany put forth their claims at that point. In the case of France, some people want the monarchy, the Bourbons, who were chased out in 1830--that is, the legitimists. They want them back. They put in their claims. You've got your basic moderate republicans who are saying, "We want a moderate republic." In the case of Austria you've got all these Viennese students who want reforms. They want a progressive regime. It's at that point that this sort of social tension and political tension increases. What the revolutionary process does, and what's important about 1848, is that it brings, for the first time, lots more people into the political process.
In the case of France, my friend Maurice Agulhon has called this the "apprenticeship of the republic," that is, 1848 to December 2, 1851, and really 1852. Because now you've got universal manhood suffrage. All men can vote. Lots of women want to vote, too. It's not a dominant current, but it's still important in Paris. So, you have a politicization of ordinary people. You have this in the German states. You have it in the Italian states as well, and you have it in Austria-Hungary. In the case of Austria-Hungary, you've got Hungarians putting forth special claims within the Hapsburg domains, and the national question surfaces in central Europe and in Italy, where people can imagine a unified Italy, which would mean you have to get rid of the Austrians, basically. You can read about what happens subsequently. You have these people putting forth their claims. You have this remarkable politicization. In the case of Paris, the barricades go up, which happened in February. Then the June Days follow, which are basically a sort of class war in which lots of people get killed and put in prison--massive unrest for three days in June 1848. That's a much more violent and telling confrontation than the initial revolutionary seizure of power by groups who don't necessarily agree on what's going to happen. In the case of France, which is the best documented, you've got all these newspapers that begin publishing, because now suddenly you can publish things. You've got all these political clubs, just as in 1789. You've got neo-Jacobin clubs. You've got clubs of women. You've got the club of the two sexes, as it's called. This becomes a way of politicizing ordinary people. When men go to the polls in April 1848 for the first time and elect a relatively conservative, indeed monarchist-leaning national assembly, they do so with considerable knowledge about what they want. They want schoolteachers. They want credit available. They want the right to vote.
This politicization, or the apprenticeship of a republic that would finally be permanent starting in the 1870s, is one of the most important aspects of the Revolution. The same thing happens in the Russian Revolution, which I'll come back and talk about. You've got Mensheviks. You've got Bolsheviks. You've got Socialist Revolutionaries. These are the three big radical groups. You've got Constitutional Democrats. You've got monarchists. You've got people who want the czar to have all the power that he had before, and to lop off the heads of those people who are against him, and that kind of thing. In the case of the Russian Revolution, how you get from the February revolution to October, when the Bolsheviks seize power, is really very fascinating. That's just a way of thinking about revolution. You can think about other revolutions that you know. The point is that the revolution as a process brings into play the aspirations of a lot more people. The French case, which is fascinating: what begins as this kind of urban, middle-class revolution, fought by artisans, as usual, who want the right to vote, ends up in December 1851, after Napoleon's nephew Louis Napoleon Bonaparte--who had become the good old Napoleon III, not so good old--destroys the republic, completing a process of repression that he had already initiated as president of France, elected on the 10th of December 1848. But then very ordinary people, peasants for the most part, but also rural artisans, particularly in the south and not necessarily speaking French at all, rise up to try to defend the republic, or what they call la Belle, the beautiful one, against the rape, as they called it, by the repressive apparatus centered in Paris.
This urban revolution ends up with over 100,000 people taking up arms in the largest national insurrection in France in the nineteenth century, trying to defend the republic against its abolition by Louis Napoleon Bonaparte, who would not have been able to stand for a second term as president of the republic. That's really fascinating. A long time ago I used to read all these interrogations of people, including that of the great-great-great uncle of one of our neighbors in Ardèche. It's being translated from Occitan, which is the language of that part of the south, into French by some translator as he's being interrogated. "When did you join the secret society?" "Who did you initiate into the secret society?"--the secret societies to defend the republic, the democratic and social republic that's going to provide more things to more people, etc., etc. Fascinating, the politicization of this. Now, in the case of Germany and of Italy, it's a different kind of revolution. It still had its kind of liberal and democratic component, but you had this scene, for example, in Frankfurt, in the Frankfurt parliament, which was basically a lot of professors and lawyers debating long into the night in St. Paul's church. They imagine the unification of Germany in what they called then the "springtime of the peoples"--that the German states, of which Prussia, Saxony, Bavaria, and Württemberg were the most important, and all these other little states, too, are going to be unified under liberal auspices. They were just dreaming. In the end they were dismissed, like a servant who was no longer wanted at the big house. But they debate far into the night.
The significance of all of this--when you read the chapter on the Second Republic, please do--is what happens in the case of the German states. When Germany would be unified, and that unification is proclaimed in January 1871 in Versailles, in the Château of Versailles after the Franco-Prussian War, it would not be a liberal unification. It would not be a Germany united by lawyers and professors meeting in a Frankfurt church with what would eventually become the German flag, its colors hanging all around the rafters. It would be unified, as Bismarck accurately put it, "by blood and iron," in the context of the Prussian aristocracy, the nobles, the Junkers, a term you already know, J-U-N-K-E-R-S, or Prussian nobles. And basically unified Germany would be, as some wag once put it, "an army with a state trailing behind it." Germany would not be a republic; it would be an empire. Because one of the things that happens with empires is that emperors can do whatever they damn well please, just as czars can. You might have had an assembly called the Reichstag, in which socialists became numerically the most dominant party by 1914, but power rests in the hands of a thoroughly irresponsible, intellectually lazy, sort of madcap, ridiculous guy who happens to be the Kaiser, William II. Over the long run, given what happens in the twentieth century--which is the one that follows the nineteenth--the costs to Europe of that fact would, in retrospect, lead historians to go back and say very obvious things, sort of clichés like, "Well, in 1848 history in Germany reached its turning point and failed to turn."
German unification would not come because of professors, and liberals, and merchants in Hamburg, and this sort of thing; the German middle classes would, in a way, abdicate their political responsibility, not having much political power, and the state would be run by a bunch of Junkers, military officers with dueling scars, and veterans of the fraternities of the Prussian universities. Prussia wasn't just Pomerania and Brandenburg and the marshes of northeastern Europe. Prussia also controlled the Rhineland. But the kind of magnates of industrializing Prussia were not going to be the ones who were running the show. In the case of Italy--you can read about that, too--there were a lot of people running around saying, "Long live united Italy," and all of this business, but that was to be very hard, too. In order to unify Italy under any auspices, and most people wanted it under liberal auspices, you have to get rid of whom? You have to get rid of the Austrians. The Austrians control almost all of northern Italy and much of central Italy, too. There are lots of impediments to Italian unification, not the least of which was the fact that the vast majority of people in Italy did not speak Italian. That itself was not a major impediment--but I guess it was, too. Only about four or five percent of the people in Italy spoke what now is considered to be Italian, which I guess is--I don't speak Italian--essentially the language of Tuscany, the area around Florence. They spoke all sorts of other dialects. The Tuscan language was virtually unknown in the south of Italy or in Sicily, and was identified with money-grubbing tax collectors coming down from the north. After the tide of the springtime of the peoples--or, as somebody once called it, "the great illusion" of 1848--had passed, what you had was still lots of fervent hopes and dreams that Italy was going to be unified under liberal auspices.
That is what happens, even though it's a monarchy, and unification comes because of basically the expansion of the state of Piedmont-Sardinia, which was the part of Italy most influenced by the French wave in the times of the French Revolution and of Napoleon, and also the wealthiest part of Italy. In the case of Austria-Hungary--this is a long story, and you can read about it--the springtime of the peoples meant the dreams first of a number of nationalities, and I'll talk about nationalism in a week or two, who suddenly think that now they, too, will have their time. A bunch of Czech nationalists were sitting in a room rather like this. Somebody looked up and said, "Geez, if this ceiling collapsed, that's the end of the Czech national movement." There was something to that. The springtime of the peoples would not bring an independent Czech state. It wouldn't bring Czechoslovakia, which only lasts until 1993, despite what John McCain thinks. It doesn't come until 1918, after World War I. But in the Austrian-Hungarian empire everybody says, "What if we have an independent Galicia?" "What if Poland is independent, and the Polish parts of Russia and Austria will be independent?" But these are pipe dreams. National awareness and great power politics will mean that this isn't going to happen until later. Poland becomes independent in 1918, for reasons that you already know. The chill of reaction--revolution, reaction, repression--is what really happens. That's the theme running through the whole thing, besides the big hopes of the spring of 1848. The Austrian imperial system, and I'll come back and talk about Austria-Hungary quite a bit, run in German in the polyglot Austrian-Hungarian empire, where there are at least fifteen major languages, will be the story of Austria-Hungary in the 1850s. Of course, the Hungarians get separate status, equal status in principle, as of 1867.
One of the interesting things about the Austrian-Hungarian period is that it's in 1848 that Franz Joseph becomes emperor of Austria-Hungary. He is in power until 1916. He is around even longer than Queen Victoria. The world changes in such dramatic ways between 1848 and 1916. When I was younger than you, I was in Vienna for the first time and I was sitting in a coffee shop, as one does there, in a café. This very old man started talking to me. He had actually, when he was young, seen Franz Joseph. That was the most amazing thing for me, to actually meet somebody who had seen, when he was a boy, Franz Joseph. That's just extraordinary continuity. Anyway, Austria-Hungary is another story and it's an interesting one. That we're going to come back to, but I can't do that now. We've got to go back to the question of England. Another of the legacies of this is that after this tide of reaction what it does is it sends waves of political refugees to places like this. Not New Haven so much, but yeah, there were some Italians who ended up in New Haven who were Italian political refugees in that period. But lots of Germans who were thrown out of the German states and had better not come back; they end up where? They end up in Philadelphia or in New York. Lots of Irish, whom I'm going to talk about in a minute, end up in obvious places--Boston, New York, those two above all, but also Philadelphia. The glacial wave of repression sends these people, a lot of them, to the United States. That itself is an interesting story. Although the revolutions of 1848 failed--and you should read about that, please do; I love this stuff, talking about this--the political legacies that they left are extremely important. These demands for political rights would be something that would last for a very long time.
Again, to repeat and to end this little part of what we're doing--oh my goodness--German unification would come under very different auspices than those of the revolutionaries of 1848, than what they wanted. The King of Prussia rejects this crown offered from the gutter, as he called it, to unify Germany under liberal monarchical auspices. That ain't gonna happen and it doesn't. Okay. There's revolution in all these places in 1848. The big wave. Why not in Britain? Why not? You probably already know some of the answers. There are really two major contexts in all of this. First is that the Reform Act of 1832 puts down the drawbridge and opens it to more voters. More people can vote now. Again, voting was based on property qualification. Feargus O'Connor, who is an Irish Chartist whom I'll talk about in a minute, he didn't even have the right to--he is not disbarred, but he's thrown out of parliament because he doesn't make enough money in order to actually qualify to vote himself. You could vote if you paid X number of pounds and shillings in taxes. What happens is 1832 opens up the drawbridge and more people can vote. The political arena expands a little bit and the same thing happens in France in 1830, as you know. In France the revolution of 1830 doubled the number of people that could vote. But it still leaves people on the outside looking in. In 1867 they would pass a second reform bill that lets more people in. In 1884 they pass another one that lets almost everybody in except for, I think, domestic servants and maybe rural proletarians. I can't remember exactly. The political arena is expanding. The point of this is it's expanding through reform. Britain reforms. The self-image, the self-identity of the freeborn Englishman traces, more or less, at least in the imaginary, its antecedents back to June 15, 1215 at Runnymede near London. The idea is that the freeborn Englishman has rights and that we British citizens, our identity is we reform. We don't rebel.
Clearly, as I will demonstrate in a minute drawing on the work of John Belchem and other people, too, what happens in 1848 is when there might have been a revolutionary moment in Britain, "France has sneezed and Europe is catching a cold," as they like to say over and over again. It doesn't come to Britain. British national identity, like all national identities, has to be systematically reinvented and reconstructed. This happens in 1848 and subsequent years as well, the sense that we are respectable. I've written "respectability" up on the board. Respectability means reform and not revolution. The aristocracy of labor, who were craftsmen and artisans who could be seen walking through Hyde Park with their ladies on their arms wearing suits, of all things, on Sundays--this bourgeois respectability had a political aspect to it, too. In 1848 Britain does not rebel. More about this in a minute. The other context is Chartism, which you should read about. Chartist campaigns were campaigns to get ordinary people to sign an enormous charter with millions and millions of names. There are two big waves, in the 1830s and 1840s. What they do is they sign and they say, "We the humble poor, we ordinary people, we entreat you, big time lords, property owners of great distinction who are representing property in parliament, we entreat you to give us political rights." They bring these huge petitions in on wagons signed by zillions of people, many of whom can only mark an X instead of writing their names. They bring them to parliament in the rain, as always. Parliament says, "Gee, thanks a lot. We don't want to see that." Then they say, "Oh, we'll do it again. O great lords, give us political rights." They sign. They don't pick up their blunderbusses, they sign. In 1830 the French middle class was more than happy to turn their artisans loose on the street to fight their battles for them for political reform.
In Delacroix's famous painting, Liberty Leading the People, the bourgeois with his top hat, in this romanticized view of revolution, does not have any place there, because the bourgeoisie does not fight, unless you consider master artisans petty bourgeois. But in England that's never going to happen. The Chartist campaign remains respectable. It is class-based to the extent that most people who signed the Chartist petitions are ordinary people. But really they saw themselves as moral reformers. They see themselves as trying to do--and it cuts across class lines--they're trying to get the government to do the right thing, to pass more reforms. The Reform Act of 1832 was passed by a conservative government because they knew that inevitably it was going to have to be passed. Who knows? It would create more lords in order to--and you'd have these not real lords who are there so the bill gets passed. It's passed by a conservative government. Then everybody says, "We British, we reform. We've opened up the drawbridge. More people can vote." There was a component of Chartists who were called "physical force" Chartists. They're not so sure that reform without revolution is possible. They are a minority within the Chartist movement. They are a very small minority, the physical force Chartists. What you've got in 1848 is you've got two things that are going on. First of all, you don't have a revolution. There's this big date in April, I think it's April 10^(th), where there's going to be this huge march in London. What the government does is it deputizes 25,000 men of property. They become sheriffs. They become--I don't know what you call them, sheriffs, I guess. Louis Napoleon Bonaparte, who had not yet returned to France, he was one of those who was actually deputized as a sheriff. The business people in the City, which is what the financial capital of the British Empire was increasingly called, they come with their hunting rifles to work.
They get those file cabinets ready to barricade the door. And with their rifles they're going to blow apart anybody who rises up and would try to bring revolution to Britain. There are 25,000 of these people. The number of marchers was far, far smaller than that. It's a very peaceful march. There is no revolution in Britain in 1848. But if there had been a revolution, where would it come from? From where would the revolutionary ranks have come? That is the interesting question. That's by far the most interesting question in all of this. Because of what I said before, 1848 helps the British re-invent or reconfigure, reconstruct their identity. There has to be an unwanted Other there who's frightening them, who makes them convinced even more that they're doing it the right way. I alluded before, when we talked about British identity in the eighteenth century, I said what the British weren't in the seventeenth century helped determine who they thought they were. What they weren't were absolutists. They reformed. What they weren't was Catholic. The biggest riots in the eighteenth century were the Gordon riots, which were anti-Catholic riots. They are not the French, not at all. France has a centralized state and France is full of Catholics. Many of the Protestants who had left France after the revocation of the Edict of Nantes in 1685 come not just to the Netherlands, but to Britain as well. So, this may have already tipped you off as to who is the unwanted, dangerous Other in the British--particularly upper-class, but not just upper-class--imaginary. They are the Irish. What happens in 1848 and the subsequent years is that British nationalism is redefined again or re-infused with a sense of "what we are not," and we are not Catholic and we are not Irish.
If there was going to be a revolution in April or any other month in 1848, the components would be, from the point of view of the upper classes and from the police, the Irish and groups of physical force Chartists or revolutionaries who might join forces. Chartists looked to the Irish to get them to sign the big petitions. They see them as allies. Remember, because of the Irish potato famine in the 1840s, tens of thousands of Irishmen, hundreds of thousands of Irishmen--I have it in the book somewhere, but the number of people who leave Ireland in the 1840s is in the millions, along with all those who just simply die in the fields. They go to where? They go to the United States and they go to England, particularly Liverpool. That's why the Liverpool--this has nothing to do with anything, ça n'a rien à voir--soccer team is very much perceived as a Catholic team, in the way that in Glasgow Celtic is the Catholic team, because so many Irish immigrants went to Glasgow and to Scotland. The Rangers are the Protestant team, very anti-Catholic. That's why these people were brawling in 1900. They played before 100,000 people in 1900--one hundred thousand people at a soccer game in about 1900. They hate each other's guts and they still do. All of these Irish are going to London, also. They live in the Irish neighborhoods. From the point of view of the ruling classes, and from the point of view of British nationalism, and from the point of view of the police, the possibility was there that the Irish confederation, who are extremely militant--one of the important Irish leaders, Daniel O'Connell, who is in the book, he dies in 1847. But you've got these people who are far more militant. Many of them believe that the only way Ireland is ever going to be independent is by rising up and rebelling. That's what happens, isn't it? That's what eventually happens. They were right about that. What if they start rebelling in 1848?
What if you had, for example, your basic Peterloo massacre, as they called it, playing on the word "Waterloo," where the British troops shoot down well-dressed demonstrators in Manchester. What is it, 1817? Either 1816 or 1817, I don't remember. What if people said, "We're never going to get anywhere if we don't do what our French colleagues have done, and that is take arms against these people"? So, there is a potential for an alliance between militants in the physical force Chartist movement, other radicals, and members of the Irish confederation. That it never happens--there are a few marches and a few skirmishes, but basically the only news is no news--does not mean that this wouldn't have a big effect on the reinvention of British nationalism, British self-identity. What do I mean by that? Here, one of the interesting things is that John Belchem told me a long time ago, on a train in Germany coming back from a conference in Würzburg, that there were more boxes of documents in the public record office about the surveillance of ships coming into the port of Liverpool than there were documents about any other aspect of 1848. Why? What are they doing? Where is the potential insurgency or infusion of Irish militants coming from? It's coming from the United States. One of the interesting things about this, and you can see it also in the time of the troubles in the 1970s--I was in Ireland, ironically, when it all started up again in 1969. At the time they were really worried about the IRA, and there was a lot to worry about then. The IRA was getting all sorts of money from Irish pubs in New York and Boston, just tons of money, big bucks all the time to buy weapons. In 1848, one of the interesting things that Belchem and other people have discovered is that the real Irish militants, the most committed, were Irish immigrants, emigrants from Ireland living in Boston and New York.
What they had done is taken the notion of "American liberty" and said, "All right. Our role will be that of the French Revolution, to carry liberty in principle across the borders and free Europe from nobles, from priests, etc., etc. What we will do, as first generation Irish living in New York, and Philadelphia, and Boston, and maybe Connecticut, too, is we will raise money and we will make Irish independence happen. We will achieve this with violence, with guns." Every ship that came into any port that was coming from the United States was thoroughly searched for weapons and for money. These Irish immigrants to the United States were the most militant, arguably, within the Irish political movement for independence. For one thing, they had more means. Some of them had come from Ireland to the United States, maybe going through England and maybe not. They had jobs, and the people in Ireland themselves were just starving. They were dying in the fields. They were ending up in London living with other Irishmen, which is not surprising, maintaining these kinds of patterns by county, Cork, and all this. The Catholic Church was terribly, terribly important in their lives. It was terribly important as a means of charity and all this. But what it meant for the upper classes is that the unwanted Other is Catholic--remember, in 1798 they tried to ally with the French. What happens in World War I? Roger Casement, who is an Irish militant who is absolutely against the exploitation of workers in Peru, and in Africa, and everywhere else, Roger Casement ends up being sent off in a little boat off of a submarine off the coast of Kerry. He tries to stop the Easter Insurrection. Of course, he's arrested within about twenty-five minutes and hanged later. What he tried to do was organize, in prisoner of war camps in Germany, Irish militants to fight the good fight and to free Ireland.
This is looking later, but there was always this potential fear among the British upper classes that they--these Catholics--are no longer just across the channel. They're not over in Ireland across a very choppy, gray, freezing sea. They are living in Liverpool in huge numbers. They are living in London in even bigger numbers. They are there. They are dangerous. If they ally with these dissatisfied workers from the Chartist movement, all hell is going to break loose. During the next time, and I don't have much time at all--and I didn't even use this, but you see the point. This potential alliance never occurs, but one of the interesting things about the re-invention or the reconstruction of British identity, self-identity--and it's one shared not just by nobles and big time gentry but by ordinary workers, "Tory workers," we tended to call them dismissively, those of us who couldn't stand Margaret Thatcher and always were amazed to go through working-class parts of Britain and see these miserable council houses with these big pictures of Margaret Thatcher or the royal family. Anyway, it's just amazing. But these Tory workers, and not just Tory workers, they see themselves as respectable. They see themselves as British and they increasingly see this unwanted enemy within, the Catholic enemy within, as the Irish. The newspapers are full of cartoons and caricatures of the Irish, who are portrayed invariably as drunken, as stupid, and as lazy. "Paddy" becomes, for the caricaturists in British newspapers in 1848 and subsequent years--Paddy is portrayed as the unwanted Other who is a threat to Britain and has no place being there, except to do menial jobs or work as a factory operative, if they maintain respectability and don't trot out their Irishness too much. So, if anyone wants to know why these issues become so extraordinarily bitter at almost any time you can think of, Ulster in the 1970s and 1980s or even into the 1990s, or Ireland in 1916, 1848 plays a major role in that.
Now, this is not to say that all people who saw themselves as British were necessarily nasty, aggressive people. But it does simply remind us--and here again I recommend Linda Colley's book called Britons, as a very eloquent summary of all of this--that part of the construction of any kind of identity, and I'll talk more about this when I talk about Eastern and Central Europe, is what you're not. And what you're not, for the British in 1848, afraid of catching this cold coming from the continent, was, again: you are not French. You are not Catholic. This, if anything, was going to further sour the relations between the Irish and the British authorities, particularly given the fact that British Protestants owned the land in Ireland. I was in New York the other day, and I heard a talk and I saw some pictures of these marvelous things called "mass slabs," or "mass rocks," that were in rural Ireland, where priests in the sixteenth, seventeenth, and eighteenth centuries said mass in secret using these slab rocks, rocks that they happened to find, as altars. Practicing your religion was illegal, and the Protestants had the law on their side and they owned the land anyway. So, in 1848 there was no revolution in Britain. You know clearly why this is not the case. It's almost surprising to think that they could have imagined an alliance between the Irish confederation and other Irish groups and the physical force Chartists. But 1848 has another role to play in British identity. I've tried to convey that to you today. Have a wonderful weekend. See you on Monday. Bye.
European Civilization, 1648-1945, with John Merriman
Lecture 14: Radicals
Prof: So, what I want to do today is resist the temptation to talk about anarchism the entire time. I sent around the terms for today of which I only forgot one or two. What I think I'll do at the beginning is I'm going to talk very quickly about socialism, and the difference between revolutionary socialism and reform socialism. Add syndicalism to the mix, and these are all terms that I sent to you, so I'm not going to write them on the board, because I need the board. Then for most of the lecture I'm going to talk about anarchism. Anarchists didn't want to reform the state. They didn't want to seize control of the state either by revolution or by electoral process. They wanted to destroy the state. So, I'm going to talk about those guys for a while. Most anarchists were not terrorists, but at the end I'm going to talk about a guy that I followed around for four or five years who was a terrorist, and arguably--it's a book I just finished--one can find the origins of modern terrorism in this guy, who is called Émile Henry. This will fit into Paris. It's obviously sort of a sub-theme in this course, and so is the state and capitalism. This particular person, Émile Henry, set out to bomb and to kill. His targets changed the name of the game for terrorists. That's obviously what I can't wait to talk about. But first I'm going to just review briefly for you, it's getting briefer every second that I think about this, the socialist stuff, which you can read about. With the rise of mass politics in Europe in the 1880s and 1890s, there was the rise of mass socialist parties. Basically just to review: there are two kinds of socialism. 
There were the revolutionary socialists, of which Marx was an obvious example, and his son-in-law Paul Lafargue, a name you don't have to retain, who brought Marxist theory to France, and who believed that revolution would come when the proletariat was class-conscious and after a bourgeois revolution, of which he basically thought 1848 had been a good example following 1789. And the proletariat would rise up and break their chains and bring this brave new world. And, so, there were revolutionary socialists in Italy, in Spain, and in France, and even a few in Germany. Reform socialists said, "Look, states are becoming stronger and stronger, and can break revolutions very easily. Look what happened to the Paris Commune of 1871, when about 25,000 people are massacred, men, women and children are gunned down. The way to bring social reform and abolish the abuses of capitalism is through reform." This is the reformist tradition in Germany. It's identified with somebody who's in the book, Eduard Bernstein. And you can see this in the growth of the SPD, the German Social Democratic Party, the reformist socialist party, which was the largest party in the Reichstag in 1914 when war breaks out. What you do is you organize: if you have enough socialists and enough well-meaning other people in the legislatures, in the Reichstag, or in the Chambre des Députés, or the other parliaments in places that had parliaments, then you can vote in laws. You can vote in mine safety regulations, because mining accidents killed so many people. There's one in the Pas de Calais where hundreds and hundreds of people, a thousand people, get killed in one accident in 1906. You can pass an eight-hour day or a ten-hour day. You can pass laws making it harder and harder to employ children, particularly in dangerous tasks. You can do things for women. You could do things for families.
If you elect the right kind of people, you can have a revolution through reform, through the ballot. And the great socialist leaders such as Bernstein, whom I just mentioned, or the great Jean Jaurès, whose death on the 31^(st) of July in 1914 was really the end of an era and the beginning of another era, a scary era of the war--people had that sense. He was the one who in France unified the reform socialists and the revolutionary socialists, though there are still fissures in their approaches. Of course, the old revolutionary socialists would become the Communists after 1920, when the French Communist Party is begun in the wake of the Russian Revolution, which seemed to say--even though it was a revolution in a very complex situation--that you can have a revolution. But anyway, reform socialists dominate in Germany. They dominate in France. They dominate in Belgium. The Socialist Party is terribly important in Italy; they become more important in Spain as well. Those are the two big traditions. Some of the tensions between revolutionary socialists and reform socialists can be seen in the fact that revolutionary socialists said, "Look, if you are working in the Reichstag and you're trying to get better insurance plans"--ironically it was Bismarck's Germany that gives really the first substantial insurance program for workers--"what you're doing is you're propping up the bourgeois state. You're buying into it. You're supporting indirectly their armies that crush workers and strikes," and they did in the heroic age of syndicalism, 1895 to about 1907 in France, but more about that in a minute. But in other countries it's about the same thing. "You're propping up the bourgeois state by participating in electoral politics." But lots of revolutionary socialists--and this is all in the book, so don't worry about this--a lot of them say, "Look, if we don't run candidates in elections, how are they going to know about us?"
So, they, too, would run candidates in elections. So, they're really kind of coincé. They're really sort of stuck between a rock and a hard place, because they're running candidates in elections in which they do not believe. I'm not here talking about Russia, because that is more complicated with the Bolsheviks, and the Mensheviks, and the socialist revolutionaries, and we will come back to them. I'm talking basically about Western Europe. You have to imagine that Lenin and the other Russian socialists are sort of walking around the lakes of Geneva and of Zurich in exile and trying to imagine this future. But that's the big difference between socialists, reform and revolutionary. Now, to make things even more complicated, you have a group called the syndicalists. The word "syndicalist" is an English word which I sent along on your friendly email from yours truly. Syndicalist, the word, comes from the French word for union, which is syndicat. What they said is, "Hang on." They kind of agreed with the revolutionary socialists, saying, "If you get involved in the electoral process, you are propping up this corrupt bourgeois state. You are propping up this dynamic duo of the state and capitalism." Syndicalists say, "Look, we will organize from the ground up, beginning on the shop floor, in the factory. That will be not only a means to obtain a revolution, it's a way of seeing what the future world will be like, when everybody tutoies everybody or duzes everybody. The kind of relations, the friendly equal relations of the shop floor become, after the revolution, the way the world will be organized." In the South of Europe, in Italy and in Spain, these folks are called, and I sent this around to you, anarcho-syndicalists. You see a transition here. Anarcho-syndicalists. Because they're rejecting the state and they're looking to the future, the decentralized organization is part of it. Anarcho-syndicalists have considerable influence in Spain and Italy.
They believe in direct action, and sabotage, and strikes, but through union organization. One of the most interesting--I mention him in the book--is Fernand Pelloutier, who wrote a book called The Dying Society, one of the theoreticians of anarcho-syndicalism, of syndicalism. He was dying. He was dying of TB. Though he was not a worker, tuberculosis was a working-class disease. Go to Pennsylvania. Go to West Virginia. Tuberculosis, the ravages of the mines in the United States, were just incredible. In porcelain factories and all sorts of places, glass factories all over the place. Pelloutier creates these things called labor exchanges, bourses du travail, as you call them in French, which were in towns where you had municipal socialists in power at the municipal level--and you did, in some cities--who are giving municipal funds to start these labor exchanges, which are buildings. You can still see them. When I was invited to Limoges to give a big talk by the Confédération Générale du Travail, we started the apéro, the first round of drinks, about noon in the labor exchange, in the maison du peuple, the house of the people. They were places where workers coming from other places could come and get a meal, get some money and, above all, find out about jobs. So, syndicalists--the way they imagine the future, preparing for this brave new world of post-revolutionary relationships, has an important privileged place in the way they view the world. There was an engineer called Georges Sorel, whose name I should have sent around, S-O-R-E-L. He comes up with the notion of the general strike. One day all workers will simply put down their tools and say, "To hell with you, capitalism. To hell with you, the State," and they will bring capitalism to its knees. It doesn't really ever work out that way, does it? The capitalists and the State win the day.
So, having rushed through all of that, let me talk about what I want to talk about. That is anarchism, of which there are only a couple of short, and I hope sprightly, paragraphs in what you're reading. I am not an anarchist. Sometimes when I give talks at various places--I was at St. Louis recently, and other places--people at the end will think--hopefully, they'll never think I'm a terrorist, because I'm certainly not, and when I talk about this guy I do not do so with affection or admiration. But I know him because I followed him around. I followed him around. Most anarchists were not terrorists. One wants to make that clear. It's not surprising that the great strengths of anarchism are in Spain, in Catalonia and in Andalusia in the south of Spain, and in southern Italy. Why? Because that's where the Italian and the Spanish states have very limited success in convincing people that they're Spanish or Italian. Why should they believe they're Spanish or Italian? Southern Italians thought that the state--it was a monarchy, the progressive monarchy, so called--was a plot launched by tax collectors and industrial capitalists in the North of Italy. In Andalusia and in Catalonia the Civil Guard tended to be from Galicia, a conservative part of Spain where the odious Franco was from, or from Castile, a huge area around Madrid. It was easy to see how they associated the state with something that they didn't want. In writing about anarchism, I tried to put myself in their place and tried to think of how anarchists viewed the world. I want to tell you a story that's a true story. If you're trying to imagine how anarchists viewed the world, this story is not a bad one. It's about a cork worker making corks for bottles of sherry in the south of Spain. He's dying. He'd been an anarchist his entire life. He hated the state. He hated capitalism. He hated the church. He's dying. He's on his deathbed. He had married a woman from a religiously practicing Catholic family.
In the scene in this room in which he's dying, in one part of the room is his family, who hated organized religion, who viewed it as a prop for capitalism and the state. On the other side of the room are people who are not so sure. They went to church sometimes. They knew the priest. When he's lying there, the end is near. His wife's family says, "Pedro, don't you want me to bring a lawyer in? A lawyer who will take your last will." Anarchists don't have wills, and they don't have very much property. The other side of the room is just utter terror, horror. How can they suggest such a thing, that Pedro is going to make a will? That's a bourgeois thing to do, to make a will. Then somebody else from his wife's family says, "Pedro, the end is near. Don't you want us to get a priest for the last rites?" He'd never set foot in a church, and proudly so. Consternation on the other side of the room. How will it all end? How will Pedro end his life? With a lawyer and a priest? So, Pedro looks up and he says, "Go and get me a lawyer. Bring me a lawyer." Then he says, "Go and tell father," the priest, "to come to see me." Joy--utter consternation. Pedro's lying in a bed in the middle. So, pretty soon the lawyer comes dressed in his little suit. He doesn't yet have a calculator to tote up the bill, but he's got his legal pad. He's never been in that house before. He comes down by the bed and he says, "Pedro, you have a few possessions, a fork, a knife, a couple of plates. Don't you want to give me your will now?" Pedro says, "Wait a minute. Wait a minute, señor. Wait a minute." Then the priest comes. He has his purple--I remember this from Jesuit high school days--thing that they wear on Easter. He has his little case also, in which he has the holy oil to bless Pedro and give him the last rites. He comes close and he says, "Pedro, the end is near. You've led a good life, but I haven't seen you in church very much, or ever, for that matter. Your children are not baptized.
Don't you want to make a confession? You're going to meet your maker soon. Don't you want to make a confession to me right now? Nobody can hear you. Don't you have something you want to tell me? Isn't there something you can tell me?" Consternation on one side, silent joy on the other. Pedro says to the lawyer, "Come here, señor. I want you to stand on the left side of my bed." He says to the priest, "Father, come here please. I want you to stand on the right side of my bed." Then he smiles a smile of utter contempt. He says, "Now you can all see both sides of the family. Like Christ, I am dying between two thieves." And he died. To imagine the kind of hate that anarchists had of soldiers, and priests, and of officials, and of the Castilian Guardia Civil--that was how anarchism was born. When was anarchism born? There are a couple of antecedents in the eighteenth century, but they're terribly irrelevant people that hardly anyone read, including a British one. It really starts with Proudhon, a name I sent around. Pierre-Joseph Proudhon. He was from the east of France, from Besançon, in the mountainous Franche-Comté. He believed that if you didn't have the state, people could live pretty much as they had, with the little bit of prosperity that they had--a few chickens, a little piece of land. People tended to live like that there. It was a place where nobody had very much but everybody had enough to get along. He writes the following. It's in my lecture notes. To be governed is to be watched, inspected, spied upon, directed, law-driven, numbered, regulated, enrolled, indoctrinated, preached at, controlled, checked, estimated, valued, censored, commanded by creatures who have neither the right nor the wisdom nor the virtue to do so.
To be governed is, at every operation, at every transaction, to be noted, registered, counted, taxed, stamped, measured, numbered, assessed, licensed, authorized, admonished, prevented, forbidden, reformed, corrected, punished, under the pretext of public utility, in the name of the general interest, to be placed under taxes, drilled, fleeced, exploited, monopolized, extorted from, squeezed, poached, and robbed. At the slightest resistance or the first word of complaint, to be repressed, fined, vilified, harassed, hunted down, abused, clubbed, disarmed, bound, choked, imprisoned, judged, condemned, shot, deported, sacrificed, sold, betrayed, and to crown all, mocked, ridiculed, derided, outraged, dishonored. That is government. That is its justice. That is its morality. He wrote a pamphlet in 1840 called What Is Property? His answer is, "Property is theft." He didn't mean that all property was theft. What he meant was that too much property was theft, or unearned property was theft. Proudhon had lots of influence among some peasants, but mostly artisans--in the world, by the way, that the great painter Gustave Courbet painted a lot, in the area around Besançon and Ornans: lots of really marvelous paintings of people at work, the stone breakers and people just working their guts out for very little. Proudhon, in 1848, it's said, went down in Paris and put a brick on a barricade and threw up, nauseated by the thought of violence, of revolution. His successors, sort of the leaders of the anarchist movements, were Mikhail Bakunin, an enormously tall, bearded, heavy-drinking, heavy-eating, heavy-sweating Russian noble, a prince who was an anarchist, who said, "The revolution will come. It will come with a single spark, and all the hundreds of millions of toilers, the serfs, will rise up, and they will slay their social betters, and create this brave new world based upon their village harmonies." He said, "Destruction is a creative passion," entre guillemets.
Destruction is a creative passion. In 1848 he led police on a merry chase. He spent time in the Russian slammer. He escapes through Japan, goes through the United States, and ends up back in London. He terrified people. His image--this is before photographs, or when photographs are just starting out, and there are photographs of him--terrified people. He met Marx, whom he hated, and Marx hated him. He thought that Marx was ruining class struggle, was ruining revolution by preaching over and over again about waiting for revolution by class-conscious workers. What you need is peasants to rise up as they had in Pugachev's rebellion, and all the other rebellions in the seventeenth and eighteenth centuries. He dies in the mid-1870s, a phenomenal character. The other Russian, a gentle man, a geographer, Peter Kropotkin, K-R-O-P-O-T-K-I-N, is in the book. He wrote a pamphlet called Anarchist Morality. When you get rid of the state, people are basically good. Anarchists believe that people are good. There's a tradition from Rousseau, by the way, there too. Rousseau, who lived in the east of France, or what would become France, around Chambéry, believed in the primitive. Anarchists believe that primitive social relations and even primitive ways of producing things, just associations you enter into because you want to, were the future. Kropotkin, who at the end of his life was once toasted by the king of England, and who had turned against the Russian Revolution--he died in, I think, 1921 or 1922. He was terrified, just disgusted by the Russian Revolution, which was already creating a centralized state. That's what he hated. He hated states. "You must destroy the state." But he was a gentle man. Yet he and a guy called Paul Brousse--you don't have to remember him--who became a socialist leader and then went nuts later--that's not a very clinical term, but he had big problems later--create the term "propaganda by the deed." What is a deed? A deed is a bomb. A deed is an attack.
It's the murder of an official. The anarchists weren't the only ones doing this stuff. There was a Russian group, briefly mentioned, called Narodnaya Volya, who believed in kind of a hierarchical post-revolutionary system. What they wanted to do was set out and kill officials. But so did the anarchist terrorists. They killed officials, one of whom you've heard of already. You've probably heard of some of the others. President McKinley in 1901, in Buffalo, New York, is killed by somebody who received funds from an anarchist organization in Paterson, New Jersey, I think--or is that Bresci? Anyway, Bresci kills King Umberto I of Italy, another anarchist assassination. They kill five or six leaders during this period of the anarchist heyday, really, the late nineteenth century. King Umberto I of Italy said that he considered assassination a professional risk. There were two attempts on his life and the third one nails him. Alexander II, who liberated the serfs, is killed when he gets out of a sled to look at a bomb that doesn't work. Elizabeth, the empress of Austria-Hungary, who couldn't stand Franz Joseph and lived apart from him, is assassinated as well. But most anarchists were not killers. Now, if you think of American history--those of you who had Glenda's course, or David Blight's, other people here--you might know about Haymarket, the Haymarket affair in the 1880s. There they hanged four anarchists, who were called les pendus in France. They had enormous influence. Les pendus were "the hanged," swinging in the breeze in a Chicago prison yard. They were anarchists, and they inspire someone like Emma Goldman, who was a Russian-Jewish immigrant to the United States, who becomes an important American anarchist. Anarchists set off to kill policemen, or to kill heads of state, or to kill generals; propaganda by the deed, the spark that will ignite this revolution. But the man you see before you--and I have become today a very modern man.
I want to tell you, because this is my first try at PowerPoint. The person that you see before you is the guy I've been following around. I got interested in him for two reasons. One is that he is the first, really--along with a bombing at an opera in Barcelona, the Liceo--to target ordinary people. To say that people of a social class were guilty because they were who they were. The spark does not necessarily have to be lit by killing a head of state. Sadi Carnot, president of France, would get his in 1894 on the Rue de la République in Lyon. Henry takes the decision to kill ordinary people. Lots of terrorists since then have taken that decision. A classic example is insurgents in Punjab in India in the 1920s, bombing officers' clubs. There's a terrifying scene in that fantastic, but very difficult movie--because of the torture scene, I can't even use it in the French History course--The Battle of Algiers. You are with this woman who's planting a bomb in a café and she sees the people who are going to die. She sees babies with their mothers. She sees people having a drink in the French community there. She takes the decision to place the bomb. I believe that it kind of started with this guy. This is, again, not somebody I admire. I know him. Everywhere he's lived I've been. I guess if you're going to write a book about somebody as a prism on something or other, it's good to pick somebody who only lived to be twenty-one, because it's a shorter book. He was guillotined in May 1894. Again, I do not admire him, but I'm going to tell you about him anyway. With the help of PowerPoint! How in the hell do I do this? I've got to remember how to do this. All right. That's Émile Henry. His father was a communard who was condemned to death, somebody who'd fought in the Commune. So, this guy is born in Spain. His father had contracted mercury poisoning in Spain, and he comes back to France after the amnesty and dies. He has an older brother and a younger brother.
What he does is, on a day in 1894, he sets out with a bomb and he walks in the fancy boulevards that, for him, represent all of the class differences between wealthy and poor people, center versus periphery and all of that. He goes down to the Café Terminus, which is on your right. It's those awnings there near the Gare Saint-Lazare. I've had the rather odd experience of twice, once with friends and once with my son, having eaten in the restaurant that my book subject blew up, because it's still there, and sitting at the same table. He goes to four or five cafés, but there are not enough innocent people in them. So, he goes to the Terminus, where there is a gypsy orchestra playing, and pays for two beers. That seems a little odd, an anarchist paying for a beer, because the right to theft was something that they still believed in, many of them, not all. Most were not terrorists, remember that. He goes into the Café Terminus. He gets a cigar. He lights the fuse. He throws it toward a chandelier and it blows up, killing one and wounding about eighteen people. That's not him. This anarchist is called Malatesta, who doesn't look particularly scary there. Émile Henry had gone back to the Paris region. His mother had a very pitiful auberge. This is the era of bomb attacks in the early 1890s. That's a marmite, as bombs were sometimes called. He became an anarchist. His brother had already become an anarchist. To put yourself into the early 1890s in Paris: ordinary people, though not so much as the elites, were terrified of anarchist bombers. Ravachol, a name I sent around, was a pathetic in many ways, but extremely poor guy who'd been sent out to beg by his mother. They had almost no money. They're from a place called Saint-Chamond, near Saint-Étienne. Ravachol was a counterfeiter and finally a murderer. He suffocates to death a hermit who had a mini fortune hidden in his bizarre cottage near Saint-Étienne. He escapes from the police and he goes to Paris.
In 1891 police beat the hell out of three anarchists in a march, a rather one-sided brawl. Two of them are condemned to big-time sentences. Ravachol decides to go and kill the prosecuting attorney. He places a bomb, not knowing exactly where the apartment was, and the bomb blows up. Then he set other bombs, too. After having done this, he goes to a restaurant, and he eats rather well, and he engages a waiter in conversation. He tries to convince the waiter to become an anarchist. The waiter sees he has a scar on his left hand, Ravachol did. Then he stupidly goes back later and eats in the same restaurant. Instead of bringing the second course, the waiter brings the police. After a tremendous struggle, Ravachol is captured, put on trial for his life, and he memorably holds up--how do you work this thing?--anyway, he is guillotined. He's really kind of a bastard, more than that, and he killed other people too, probably. But he finally is guillotined. Two things happened, not only in Paris but in other places. The people of means living in the fancy quarters are extremely frightened. There are all sorts of death threats that get sent around. And Ravachol, to those anarchist terrorists, becomes a martyr. Look, here his head is framed by the guillotine. Ravachol, who had been betrayed by an anarchist friend, dies at age thirty-three. Christ died at age thirty-three, betrayed by a friend. So, for radical anarchists, he becomes a vision of how life should be. There's a song called "La Ravachole." There's another one called "The Dynamite Polka," being sung. Dynamite, from the point of view of anarchists, leveled the playing field. They viewed dynamite rather the way in which muskets helped end the domain of feudalism. It levels the playing field. Dynamite, after all, was invented by whom? By Nobel, as in the Nobel Prize. So the people who support dynamite are called in French the dynamitards, the dynamiters. The Dynamite Polka.
In Montmartre, you have a lot of anarchist writers and artists. Pissarro is an anarchist. The literary and art critic, above all art, Félix Fénéon, who was a friend of Émile Henry, is an anarchist as well. So, it's into this world that the young Émile Henry comes. Here you go. Here's this one. I've read hundreds of these things that said, "You've always been hard with your domestics. You're going to be blown up. Death to the rich." There are hundreds of these things. They're sent all over Paris, not just the fancy neighborhoods. Hundreds and hundreds of times--at the airports you sometimes hear these explosions as they blow up suitcases that haven't been claimed. That's the first known machine that blows up suspect items, including lots of bad jokes--sardine cans with a little bit of powder left in them, and that kind of thing. It's in this world that Émile Henry learns to hate, and that he certainly does. That's where his mom had her auberge. Ironically, it's near Euro Disney now, but then it was a village. That's not the one. It was up the street a little bit. I did follow him around, you can see. There's his mother there. Up on the right is one of the places he lived, always around, except for one occasion, Montmartre. That's his girlfriend. He had unrequited love. She had the disadvantage of being married to another anarchist. He writes her clumsy poems. He falls in love with her. She blows him off. But in the end, she wanted to take full credit before the press for having been the lover of Émile Henry after his deeds, his bombs, but she wasn't. That's one of the places he worked in Paris. I love that. That's a beautiful sign, and that company hasn't existed since World War I. But you can follow him around. In 1892, two years earlier, Émile Henry had killed before. There was a strike in the south of France, glassworkers in Carmaux. The third block on the left is number eleven and that's where the company is.
They found a bomb there, placed about 11:00 in the morning. Right there. That's not the way it looked then. I've gotten in there twice to see where he placed the bomb. My son gets a little tired. "Dad, do we have to look at another one of these places?" They find the bomb and they carry it down to the police station, which is still there. It was a reversible bomb, which means when the chemicals run together, boom! It kills five people terribly, policemen and a secretary among them. Body parts all over the place. Émile Henry--that's one of the places he lived. He is eliminated from the list of suspects because they said he could not have gone on the two errands his boss sent him on that day, from near the Gare du Nord down toward the center of Paris, then up toward the Arc de Triomphe, gone back to Montmartre and gotten the bomb, placed the bomb, and gotten back in two hours and fifteen minutes. When he went on trial for his life in 1894, a detective said, "Yes, he could have done that in two hours and fifteen minutes." So, being a bit of an empiricist, I did it. I replaced the tramway and the omnibus with a bus and with the Metro. I never take cabs, but instead of a carriage--of course I didn't take that--I took a cab, and I subtracted eleven minutes when my cab couldn't turn left on the Avenue de l'Opéra. He did it. There's no question about it. In fact, when they did the reconstitution in this building, he knew every single part of this building. He did it. There's no question about it. Why did he hate so? Part of it again--this is the theme we've talked about before--was the social geography of Paris. Everywhere he lived, with one exception that you just saw, was in people's Paris. All of the facades are still there. That's where he lived on the Rue Véron, on the very top floor. That's where the poor lived. He gets the bomb there in 1892. They hated Sacré-Coeur, for reasons you already know.
It was a symbol of penance for the Franco-Prussian War and penance for the Commune. His father was one of the people condemned to death, who was lucky enough to get out and not be executed. In a Zola novel that's very underappreciated called Paris, published in 1898, it's about a priest who, to borrow an R.E.M. song, is "losing his religion." His brother Guillaume is an anarchist. He has fantasies about blowing this place up. I think it's ugly as hell. I once went with my wife to see where they cast the huge bell that would drive people nuts and still does, called la Savoyarde. But that bell wasn't there. But as a symbol, you can't walk around Montmartre and not see it from various places. He becomes a terrorist because the people that he sees around him are very, very poor, and he convinces himself, even though he's an intellectual, a bourgeois. He could have gotten into the École Polytechnique, which is a super Grande École, a big engineering school. He's a great student. He's an intellectual. That's the other thing, besides picking "innocent people." All people are innocent, but you know what I mean. The other thing is he is not a sad sack or a dangerous one like Ravachol. He is not a guy called Vaillant, who places a little, teeny tack bomb and throws it in the Chamber of Deputies to call attention to the plight of the poor, and is guillotined--the first person in the nineteenth century guillotined who had not killed somebody. This guy goes out to kill. There's a scene in an old Balzac novel--all Balzac novels by definition are old, obviously--called Old Goriot, Père Goriot, in which Rastignac, sort of a down-and-out noble, wants to make the big time in Paris by sleeping with all the right people. After Goriot dies, he's up in the northeast quadrant of Paris at Père Lachaise cemetery. He waves his hand down toward the fancy quarters, down ironically near where the Café Terminus would be.
The fancy quarters, even before there are boulevards, and he says the equivalent of, "It's war between you and me now, baby." That's a rough translation, but that's what he said. Émile Henry, walking around on the hills and seeing people walk down to be domestics, because they couldn't afford to take the tramway or the omnibus, horse-drawn carriages--he waves his hand and says, "It's war between you and me, baby, and I will ignite the spark that kills you MFs right away." That's what he does. When he walked out of his apartment, he looked down and, happily, he didn't have to see the Tour Montparnasse, which wasn't built until after huge payoffs in the early 1970s. Disgusting! But what he could see were these symbols of capitalism, the state, and the church. What did he see? He saw the Eiffel Tower, which was five years old, a symbol of the republic and the bourgeois revolution, as he saw it. He sees the Pantheon, where they buried all these Napoleonic marshals who basically got a lot of people killed if they didn't get themselves killed. And he sees Notre Dame. He says, "It's war between you and me." This is the façade. I love this stuff. The inside building where he lived isn't there anymore. Now it's kind of an area that's a little bit sketchy. There's a lot of drug dealing. When I got myself in there, I had to kind of--I didn't want to look like a plainclothes policeman. Do I risk looking like a plainclothes policeman? No. I didn't want to look like a tourist sort of slumming. I don't look like that much, either. When I went by these guys who were sort of hanging out there, I said, "Salut les gars," or "Hi guys, what's up?" I got myself in there to see where he once had lived. The point of that is that you can see what he saw. That gate is exactly the same as that day when he walked out to kill for the second time. His bomb that he threw into the Terminus--this is back in 1894, this is where we started--hit a chandelier and exploded.
He said at his trial that he threw it too low. He should have thrown it higher; it would have killed more people. Only one died. He'd already killed five before. People are terrified. They run all over the place. It was speculated that his was sort of an indirect suicide, because of his unrequited love, this lady who lived on the Boulevard Voltaire, whose name was Elisa. But no, because he tries to escape in order to kill again. They chase him and they catch him. A barber helps catch him. A controller on the tramway hits him with the control mechanism that punches tickets, just like in the old days on those things--not that I ever was in a horse-drawn carriage. They get him and they take him in. He's arrested for murder, Émile Henry, and is put on trial. That's where he's writing to his mother. We know a lot about him, because they kept all these documents. It was so fun doing this. I love stuff like that. His mother was devastated, as you can well imagine. She cannot believe that her Émile could have done this. He was her pride and joy after the death of the father. Those of you who have been to Paris will recognize this. This is the Conciergerie. This is where Louis XVI, Danton, Robespierre, Marie Antoinette, and others awaited the guillotine; they put their head through the little window, as they used to say, la petite fenêtre, to be guillotined. It's also got one of the three most magnificent gothic halls anywhere in France. His guards took notes every single day about what he said and the anarchist songs he sang. They wanted to prove that somebody else was involved in this. There probably was somebody else in the 1892 attack who helped him close the bomb. So, Émile Henry goes on trial in April of 1894. He makes the famous declaration in which he says, "You have hanged us in Chicago," Haymarket; "You have garroted us," that's slowly strangling us, "in Barcelona." When Franco was croaking in 1975, they were garroting an anarchist even at that time, 1975 in Barcelona. "You have shot us in Germany."
I don't know how they killed them in Italy, but, "You've killed us in Italy." He says, "You, the bourgeois, who are in this café, you are not innocent. It's because of you, the petty bourgeois. You support les gros," the big ones, "on every possible occasion. You forget about us when your factory owners throw us out when we can no longer work, or about women workers happy not to have had to prostitute themselves in order to pay their rent by the end of the month. But what you can never do is destroy anarchism. Its roots are too deep." On the 22nd of May 1894--that's what I said about writing a book about somebody that doesn't live very long--he is executed at the Place de la Roquette in Paris, which was, by intent, the place where the state meted out justice, right in the heart of working-class Paris. That was not an idle selection of a space. We have his notes up to the very end of his life. This guy, Deibler, is the executioner, Monsieur de Paris. This is the executioner meting out, in quotes, "justice." The last public execution in France, by the way--maybe I said this already--was in 1936 at Versailles; the last execution was in 1974. I'm constantly asked by people of all sorts of political opinions in France, where capital punishment is repudiated by most everybody, how it is that we still have it here. This is not a political diatribe, so I won't say anything about that. Anyway, Deibler's son was the last public executioner; he did the last public execution back when. There he is. There's somebody else putting their head through the little window. That's what I mean by that. That's the equivalent of his mug shot. So, he was wrong about the roots of anarchism being too deep. What clearly happens is that there was a trial during that same summer where they put a lot of intellectuals on trial who did nothing except to say that they were anarchists or to criticize the state. My book--it's really a book also about state terrorists.
It's about the overreaction of states. Our state is a good example of that, and of the tendency to try to denigrate anyone who doesn't agree with us as being terrorists, whether they were or not. As for the intellectuals, the jury sees through it in 1894, and only a couple of thieves are condemned, and they not to death. There are anarchists in 1968 in Paris. There's a band of anarchists in about 1909, 1910, 1911 that hold up stores with the most modern tools, shotguns and things like that. But the anarchist attacks were over. There was no question about that. That doesn't make Émile Henry less interesting--as an intellectual, the cross-class kind of thing--you see this in Middle-Eastern terrorism, too. My book is also about state terrorists. What stops anarchist attacks in Spain is the fact that the public becomes aware that the police were hideously torturing people who did not agree with the politics of the State. They were torturing them. You see where one could go in a political diatribe, which this isn't. In Italy, when the state overreacts, anarchist attacks virtually end. Now, anarchism does not end in Spain. It also does not end, for reasons that are perfectly clear, in Buenos Aires. There's still a huge anarchist community of exiles from these Western European countries, from all over the place, living in London, ironically in one of the more chi-chi parts of London which was then very poor, around Charlotte Street, where you can't afford to have a pint of beer anymore. That's where they live. So, he was wrong about that. The roots of anarchism were too great. But the connection that I want to make, obviously--especially since this is being filmed, it's not the place to do it--is that when states, including our own, overreact, what they tend to do is to lash out, and imprison unjustly, and torture, and not give legal rights. What they do is tend to increase the number of those people who despise us.
If you look back--and again, the Commune is not a bad way of thinking about this--somebody figured out that of all the victims of terrorist attacks, no matter how you define them, the ratio between victims of overreaction by states and victims of anarchist terror--and I'm not apologizing for anarchist terror. I hate it. I'm not apologizing for terror of any kind. I hate it. But the ratio was 260:1. I suppose there's a lesson to be learned there somewhere. But it was fun to follow Émile Henry around, even though I don't admire him, and to try to give you a sense of how people felt when they hated in the 1890s. Their answer was not the same answer as the socialists', which was to take power mostly through electoral processes, but to smash the state by blowing it up. See you on Monday.
European Civilization, 1648-1945, with John Merriman
Lecture 3: Dutch and British Exceptionalism
Prof: We talked about different political outcomes. Over the long run, Great Britain remains a constitutional monarchy; even in the nineteenth century, when Victoria had great prestige, she did not have great power. The Netherlands also resisted absolutism, and the Dutch Republic remained the Dutch Republic, although, for reasons that we'll see later, the Dutch Republic ceases to be a great power in the eighteenth century. Given the very different route that Prussia, Austria, Russia, Sweden, and France took, with the centralization of absolute rule, why did it work out so differently for England/Britain and the Netherlands? Again, this is the second and last of these sort of holding-pattern lectures. This parallels exactly what you are reading. Again, until we get our class set and all that--then there will be a very different kind of lecture starting next Monday. But let's just think out loud about what these places had in common, and what this tells you about social structure and political outcomes in early modern Europe. Of course, the consequences are enormous for other kinds of outcomes. Let me give you an example. Germany is not unified until 1871--ironically, unification was proclaimed in the Hall of Mirrors at the Château of Versailles, which we'll visit for a few seconds later on. The fact that German unification was achieved by Prussia, and that Prussia was dominated by nobles who were called Junkers--you'll come to them later--and by an army--the state basically was an appendage of the army--had rather enormous consequences for Europe in the late nineteenth and above all in the twentieth century. In the 1960s and 1970s people paid a lot more attention to social structure and class analysis. But when you look at the experience of Britain and the Dutch Republic, they do share things that, in a way, determine the kind of political economy that they would have. What are some of these things? I've written them on the board.
Let's just start in that order and think aloud. Then what I'm going to do for the last twenty or twenty-five minutes is talk about the Dutch Republic. You can skip that part in the reading, which isn't very long, and I'll illustrate with some paintings, for which you are not responsible, but just to make the points I want to make about the nature of the Dutch Republic, and in which you'll see ways in which it was very similar to England/Great Britain and very different from France. First of all, it's not a coincidence that in both England and in the Dutch Republic you had, along with the city-states of Northern Italy, the largest percentage of middle-class population that you could find in Europe. The middle class in Russia, which I'll talk about on Monday, was just absolutely minuscule. The middle class was extremely small in Prussia. Prussia did not include the Hanseatic League cities, such as Bremen and Hamburg and the others. You have in the Netherlands and in England an astonishingly large middle class. Moreover, in the case of England, there was tremendous fluidity between elites. The percentage of the population who were noble, who had noble titles, was extremely small. Privilege came from wealth and wealth stemmed from the land. Yet, because of the rapid and dramatic expansion of the English role in the global economy, you had lots of very wealthy landlords, property owners investing in commerce, whereas in Spain and in France, and Prussia in particular, it was seen to be sort of slumming for nobles to participate in commerce. Marxist analysis has given us this all too rigid picture of the nobility sort of letting their nails grow long, "they are nobles because they do nothing." That was part of it. Certainly there were nobles in France who bought up vineyards around Bordeaux. There are nobles around Toulouse who have invested in commercial agriculture.
Yet the fact remains that it's really in England that you have this tremendous fluidity within the elite, and that basically commercial money talks as much as propertied money talks. London, already by the late sixteenth century--I think it was E.A. Wrigley who pointed this out a long time ago--one-sixth of all the people in England went to London frequently, because London was absolutely gigantic as a city. The only cities in Europe that were comparable--and they were smaller--were Naples, an extraordinarily poor city, and Constantinople, Istanbul, and, of course, in Japan, Edo, which would become known as Tokyo. The percentage of the English population that would have considered themselves to be middle class is extraordinarily large. The same is even more true in the Netherlands. There were, to be sure, nobles in the Netherlands. They tended to live in the eastern part, in rural Netherlands, and in the south. But their lives and interests were far, far away from that large economic machine, which was Amsterdam. Amsterdam is dominated by the middle classes. Now, the middle class want political rights. They want prerogatives. They want their privileges for themselves. It is fair to argue that non-titled people in England were at the forefront of the victorious role that parliament played in the civil war. In city-states like Venice, which was a major trading city already on the decline, and in Florence, and in Milan, and in Turin, and in places like that you find something very comparable, but Italy is not united until the 1860s. Northern Italy has a large percentage of the population who are middle class. But in talking about the political outcomes of states, that doesn't really fit into our analysis here. Part of that is that along with Northern Italy, the Netherlands and England/Great Britain have, by far, the most urbanized population in Europe.
If you go into what now is Serbia, there basically was Belgrade, which was a small place. Poland had very lively, important cities, Warsaw and Krakow, and Gdańsk as well. You can't just say, "In Eastern Europe there weren't cities," but there isn't any place, including France, that had remotely as high a percentage of the population living in cities as England and the Netherlands. One of the great shifts in English/British history that you will become aware of is the shift of economic dynamism in England away from the south to the north. In the time we're starting this course, in the seventeenth century, besides London, which is this gigantic place, the biggest cities in England were Norwich and Exeter, and York in the north. Of course, with large-scale industrialization, which begins in the middle of the eighteenth century, you'll see this dramatic shift up to the north. Manchester, which was a small town, becomes this enormous city, and Liverpool becomes ever more important. Cities are where the middle class lives. Bourgeois and burghers, as I said last time, are urban residents who are losing their privileges on the continent to big-time absolute states. They will defend, quite vociferously, their privileges as townspeople against absolutist pretensions of nobles, in the case of the Netherlands and also, to an extent, in England as well. They share those things in common, which is not to say that a country like France wasn't urbanized. Paris is already enormous. There are about 500,000 people at the time of the French Revolution. There are so many people you can't count, because they own nothing. Also, we don't have accurate censuses until the nineteenth century. The first accurate census, I think, is in Copenhagen at the end of the eighteenth century. Most censuses were taken, by the way, as a way of counting heads, the number of people who had to be fed at the time of a siege. We're kind of guessing on these population figures.
The fact remains that the Netherlands and England/Britain share this. This is important in terms of political outcomes, and also important in the case of England/Britain in what we've come to call the Industrial Revolution, which I will talk about at another time. Secondly, as I tried to suggest the other day, these places resist absolutism. The English Civil War, it's kind of a generalization to underline that too much, but nonetheless, people living in England in the 1640s saw that there was a real threat to the idea of the freeborn Englishman that was coming from the trampling of long-assumed rights, since at least the thirteenth century, at least in the imagination of people by kings who wanted to dispense with the rights of parliament and run things as they wanted to. In the case of the Netherlands, it's the same thing. There isn't anything as dramatic as the English Civil War, but the important outcome is that in the end this decentralized federalist structure, which I describe in the book and we'll talk a little bit about in a while, is victorious over the pretensions of a potential dynastic ruling house, that is the Orange House, the House of Orange, who wanted to make the chief Dutch official, who was called the Stadtholder-- you can read that in the book--and wanted to turn that person into kind of a thundering, semi-absolutist monarch. That doesn't work as well. When you think of the origins of the Netherlands, it comes from a civil war, or a war of independence against the Spanish absolutist state, that begins in 1572 and goes on and off all the time until Dutch independence is recognized--it was a fait accompli for a long time, but until the Dutch independence was recognized in 1648 at the Treaty of Westphalia. 
For the Dutch, when they imagine scary things, a scary thing is an army sent by the king of Spain to extract more taxes from the wealthiest of all the Spanish provinces--that is, the Netherlands--rich because of commerce and, as we'll see in a minute, to try to force people to remain Catholics at a time when the vast majority of the Dutch population had converted to Calvinism. Those people who believed in the Dutch Republic, which was the vast majority of the people, just as the majority of the population of England held to the rights of parliament, they have this scary scenario of their rights being violated, trampled upon, destroyed, eliminated, eradicated by big-time absolute rulers. The other scary thing for the Dutch is, of course, the big guy down south. Louis XIV would love to control all of the Netherlands. His invasions at one time are turned back when they literally open the dykes and flood the French armies back. The mental construction of both the Dutch and the English involves one thing they don't want to be. That is, to lose their prerogatives, their rights, to an absolute state. In both cases, this becomes part of their self-identity. That's an essential part, as my good friend, Linda Colley, who used to teach here and sadly is not here anymore--she's at Princeton--argued in her very successful book called Britons, about the construction of British identity. I will argue later in the course that in 1848 it has to get reinvented again by imagining an other, who is perceived as sneaky and dangerous, and of course in that case it's the French, but also, from the point of view of the British, the Irish, who are conceived of as being capable--because of their quest for "I don't want to be trampled by the English, especially by English Protestants"--of hooking up with France, which they tried to do in 1798, or in World War I with Germany, because there were some attempts by the Germans to stoke up Irish independence movements.
Again, the only point here is that they see themselves as anti-absolutists. This helps them create this sense of identity, which helps determine their political origins. You'll find nothing comparable in Russia, obviously, which I'll come back and talk about, or in Prussia, or in France. You can talk about the origins of French nationalism in the middle of the eighteenth century, but it's very closely tied to this dynasty, at least until they lop off the guy's head in 1793. So, that's that point. Third is decentralization. Both of these states are decentralized states. The British don't have a police force until 1827 or 1829, I can't remember which, when Robert Peel creates a London police force which they call the Bobbies, after, like, Robert, Bob, Bobbies. People didn't want that. They didn't want a large standing army. What have they identified large standing armies with? They always had to have a large standing navy for obvious reasons. But they identified large standing armies with France or with the Spain of Philip II or with Prussia or with Russia. So, it didn't mean that the English state wasn't efficient in collecting taxes, because they were more efficient than the French were in collecting taxes. But it does mean that this decentralization is an essential part of who they thought they were. The local sheriff will call out the guys and restore order when there's trouble. There is this real fear that large standing armies could ultimately compromise the rights of freeborn Englishmen. That's the way they would have put it.
In the case of the Netherlands, which I'll come back to in a while, you have these provinces--although Holland, which is the province of Amsterdam, is by far the most important and most prosperous of the Dutch provinces, so much so that we often miscall the Netherlands Holland, in fact Holland is just one of the provinces, as if you called the United States New York or California, because those are the two most powerful states in the United States. But this decentralized federalist structure is part of who they thought they were and who they continue to think they are. This is very different than these absolute kings who can send out their armies and their minions to squish whomever they want like grapes whenever there's trouble. We can exaggerate the power of Peter the Great in this vast empire that's expanding south and already expanding toward Siberia and such distant places. It took a long time to get the guys there. But when they got there, there was hell to pay. Very, very different than this federalist decentralized structure of both of these countries. The political outcome is different. You can also make that argument, this isn't the course to do that, but you can make that argument about the United States and the evolution of the United States, because of the prestige of local leaders and the decentralized nature of the colonies already at the time of the War of Independence--which is going to have a strong role in the political outcome, for better or for worse--in this country where you have this sort of wacko political system that still exists because of people screaming, "states' rights," and all that. But that's another subject. Fourth, anti-Catholicism in both cases. Why? Because these are major countries in the Reformation.
The English Reformation, which begins with Henry VIII wanting to divorce and kill his various wives along the way, still had an awful lot to do with the resistance to the power of Rome and the power of the Catholic Church as an institution. In the case of the Netherlands, anti-Catholicism is endemic. Why? Because it's identified with the Spanish empire, with Spain, which not only wanted to extract taxes and other revenue from its most prosperous province, but wanted to force people to remain Catholic. When they send this guy called the Duke of Alba up to the Netherlands, he burns people at the stake and all this kind of stuff. The association of Catholicism as the dominant religion in both of the enemy countries, France and Spain, is extremely important. This is not to say that the Dutch don't fight the English, too, because they do. There are various wars over control of the seas. But nonetheless, in the imagination, in the imaginaire, in the mental construction of these two countries, what we are not, that is Catholic, helps define their identity. Of course, the particular problem of Ireland, the challenge of Ireland as I suggested earlier, has an awful lot to do with that. And the reinvention in the nineteenth century of British identity will also have a lot to do with fear of the Irish, "the enemy within," as they were perceived. But more about that. I'll talk about that a lot and try to explain why there was no revolution in England in 1848. In the case of Britain, it's even clearer. The French are "the sneaky French." From the French point of view, it's the perfidious Albion already there. You can go all the way up to the origins of World War I to see. When the British get into World War I, it's because of the violation of Belgian neutrality by the Germans, because of the idea of having another enemy…we've already got the French across the channel and it's not that big a channel. You can swim across it. I couldn't and you couldn't either but lots of people have.
They do it all the time. But if you've got the Germans in Ostende eating moules frites, eating mussels with French fries, and you've already got the French there, this is unthinkable. So, they go to war. I don't want to exaggerate this too much, but the largest riots in Britain in the eighteenth century are not the riots for political reform at all. They are the anti-Catholic riots called the Gordon Riots, which take place in London. Anti-Catholicism is very much strongly entrenched in the British sense of who they were. Anti-French--there we go. Those two are already linked, along with anti-absolutism and anti-Catholicism. Last, and all these things are linked. You could do one of these little boxes they do in sociology or political science, and have these arrows running all over the place. You could make it there. Who are the biggest trading powers in Europe? We forget about the enormous trading vitality of Asia, even sea vitality and land vitality at the same time, but they are without any question by this point--with the decline of the Spanish empire, which begins before this course--the Dutch and the English. What this does is it increases the role of this commercial middle class. It increases the role of cities, particularly port cities, which Amsterdam is. And it increases the role of these economic elites or their concern with maintaining their privileges against threats to their privileges and to their prosperity no matter where they come from. Just to amuse yourself, not for any kind of punitive think-about-the-exam exercise, but it would be fun to take these categories and think about these other countries, particularly those who were absolute states, other large important states in Europe and see to what extent you have these factors there. Prussia, I already said, you've got your big nobles. You've got all these guys with dueling scars, and for them to be indulging in commerce is just crass, and not terribly manly, and all this business. 
You've got your flute-playing king, Frederick the Great, who could be awful. He could lash out. Voltaire went and hung out with Frederick the Great, and after a while he said, "Let me out of here." But you've got Berlin, which was a very important town, but it's a very important city because it's got this huge garrison and it's got factories turning out military uniforms. It's got Potsdam Palace and all of this. It's not at all the same thing as Amsterdam, or London, or any of the other trading cities around. In the case of Russia, it's even easier. You've got a practically nonexistent middle class. You've got all sorts of nobles. They are involved in commerce, some of them, but mostly what they do is they serve the state. They're called service nobility. They're not serving the cities. They're not serving commerce. What they're doing is serving the state. They're serving this huge, lumbering, strange guy, Peter the Great. Then you could take other places, like Italy and smaller cities. But you don't yet have these big state structures. So, if you're looking back, say, from the end of the nineteenth century, it's not easy to see, but you can see these--don't ever think that history runs on railroad tracks, and all you need is the timetable to show when modernization shows up. That's a most ludicrous word, really, in contemporary social science or orthodox Marxism, where you just had to say, "Well, eventually the proletariat will rise up, because the bourgeoisie did this before." Yet when you look back from the nineteenth century, these factors do count in explaining how countries turn out to be the way they are.
When you try and look at the origins of World War I, it mattered that Germany is run by this kind of madcap dufus, Wilhelm II, who was intellectually lazy and liked to break bottles of Riesling over bright, shiny battleships and didn't concentrate on things very long, and sends off provocative telegrams here and there to make everybody mad. That has a long-run outcome, which cost the lives of millions of people. Anyway, here we go. It's just kind of fun to think about that, so that's what we are doing. We're thinking about that. Now, let's dim the lights. Here we go. How do we dim the lights? I can't remember. Is that good? We've got to get further down than that. So, the lecture… Okay, now paralleling what you've been reading, let's look a little bit at the Dutch Republic, because people talk about England and Britain all the time, so let me talk about the Dutch Republic. This will kind of bring some of these factors together, along with the idea of what people thought they were. What is their identity? Here again, we'll look at some paintings. You're not responsible for these paintings, but we'll illustrate ways in which the Dutch Republic, and their social structure, and what they emphasized, and who they thought they were was very different than, for example, la belle France. So, here we have Amsterdam. It grows dramatically because of this global trade in the seventeenth century. That was 1613. I made this. It's all a bunch of jumble. But this is 1640, or something like that--later. But what you have are these canals. Many of you, or some of you have had the good fortune to go to one of Europe's most wonderful cities. The canals were used to transport goods. Thus, the city structure itself, the way the city was built with houses along the canals reflects the economic primacy of global trade. 
At this time the Dutch are sending herring ships, these long flat boats, all the way to Newfoundland in the seventeenth century, and to Iceland, freezing off the coast of Iceland. They control and dominate the Baltic trade, and herring is an important part of that, because herring will keep once it's salted and all that. The city of Amsterdam grows up not only as part of this victorious struggle against the Spanish armies. There's a wonderful book by my former colleague, Geoffrey Parker, called The Army of Flanders and the Spanish Road, which talked about how difficult it was for the Spanish to get troops all the way to the Netherlands. They had to go from Italy, because much of Italy was controlled by Spain, through the Alps all the way up along the Rhine and finally get into the Dutch Republic. It was a losing battle. But Amsterdam reflects this kind of primacy of the global economy, because it's such an important trading power, but also this federalist decentralized aspect that I've tried to describe. This is the shipyard behind. In fact, this building behind is still there. I go to Amsterdam--not frequently, but I've been there ten or twelve times, or something. I did a Yale trip there. I remember we took all these alumni around to look at all this stuff. That was mildly fun. What the Dutch did--the Netherlands is an extraordinarily small country, and it was then the most densely populated country in Europe per square kilometer, and still is now. What they have to do in order to feed the population is get more land. How are you going to get more land? One of the incredible things if you're driving, say, from Groningen, and you're going to go all the way down to Amsterdam, when you drive along the coast, you're driving along this sort of road that's out in the sea. All the land between the water on the left side and a long, long way has been reclaimed from the sea. This is the seventeenth century.
This isn't scuba diving now off the Great Barrier Reef, or something like that. What they're doing is they're reclaiming the land from the sea. What this has to do with global economy is that you have to be able to feed the population. They have, along with the English--and these two facts are related--an agricultural revolution. They have an agricultural revolution, investment in commercialized agriculture, and increase in the production in rural areas. In the case of the Netherlands, it's because of this. I'll talk about why it happened in Britain another time. It's because they reclaimed land. How much land do they reclaim from the sea? Well, 36,000 acres just between 1590 and 1615. That's a phenomenal amount, and they keep going over and over again. The population of the Dutch Republic increases between 1550 and 1650 to almost two million people. This is in a pretty small--it's bigger than Belgium, but this is a pretty small territory. Amsterdam, by mid-seventeenth century, by 1650, increases to 150,000 people. They build these three large canals and this expands the area of the city by four times. What this means is that boats can dock outside these kind of big warehouses and can unload or, depending on the case, load goods. You have 500 miles of canals dug just in the middle decades of the seventeenth century. It becomes this economic dynamo because of that, and thus traders are to be found everywhere. In the 1630s there are 2,500 trading ships. They become the principal supplier of grain and fish in Europe. The Dutch dominate the Baltic trade. Cities like Gdansk, which we tend to forget about, unfortunately, which is a very important port then and still now. It's where Solidarity began, too, as many of you know, in 1980. It's an important port in all of this. They reach the East Indies in the 1620s and the 1630s. They bring back cinnamon, nutmeg, and all sorts of valuables. 
It's this kind of wealth that allows them to fight this long, hard war of independence, which they finally win. Now, why is this in here? This is Rembrandt, as most of you know. This is called The Night Watch. The importance of this painting is who is being painted and, more than that, who is getting Rembrandt to paint this. If you go down into France, if you go to la belle France, painting is dominated by nobles who want pictures of themselves, or the tiresome Sun King and all his sort of miserable hangers-on, very rich, miserable hangers-on. What the Dutch painters painted reflects what was important to the Dutch, in the same way that Renaissance art reflected what was important to Renaissance Italy. Who did the commissioning of painting? I care because my mother was a painter; she was a portrait painter. That's how we survived in Portland, Oregon. Who commissioned these paintings and what they painted tell you who these people thought they were. That's pretty interesting. Who are these? This is The Night Watch. These are the guys who run Amsterdam. This is essentially the town hall of Amsterdam. In fact, that building itself, of which I don't have a slide, is extremely modest. It looks so terribly different than anything like the Spanish palace outside of Madrid or anything that ever had anything to do with the Prussian kings and all that. Well, that's pretty obvious. This is the weighing house. Here, this is very classic. I'm not a professor of architecture, but it's obvious this is northern European architecture that you can see in northern France, cities like Arras and other places, or Charleville-Mézières in the Ardennes. It's one of the most fabulous plazas anywhere. Or in the Place des Vosges, which is by far the most beautiful plaza in Paris, you have this kind of architecture. But this is the weighing house there. Here's another one. The buildings are the most important. Buildings in the cities are not huge, over-the-top Baroque churches, such as the Gesù in Rome, for example.
They are weighing houses. The town hall was of very modest proportions because it's Calvinist. Calvinists weren't exactly what the French call rigolo, weren't exactly wild, fun-loving types. Even the churches are completely denuded of the kinds of Baroque, swooning cherubs and clutter that you found in--beautiful, I'm not knocking the Baroque--beautiful churches; Vienna is a good example of that, or anywhere is a good example of that. Here's another weighing house. This is in Gouda, as in the cheese, but the town of Gouda. Amsterdam wasn't alone. Now, here, these are houses that are built along the canals. You've got these warehouses along the canals and here's where the bankers--the Dutch had, along with the English, the most sophisticated banking system in the world. Lloyd's of London, which now does things like insure quarterbacks' knees and things--but it begins in the eighteenth century when people go into the docks. Because a lot of these ships go blub on the way back, or are taken by pirates and stuff like that, they say, "We want to insure this ship. Will you sign up for ten percent of the value of this insurance?" That's how Lloyd's of London starts. But you had the equivalent in Amsterdam as well. You have access to capital by those guys, these guys who are no longer there. The middle class guys behind the screen who are going to invest in these long treks. You send off a ship to Newfoundland, or to Iceland, or even to the Mediterranean. They start getting into the Mediterranean and that scares the hell out of their commercial rivals. So, you also build these houses for people to live in. Because there's not a lot of room between the canals, that's why they're so steep when you walk up these things. It's almost like that. It's an incline. They seem to be reaching toward the sky there, but not reaching toward the sky as in the cupola of a Baroque church where you're supposed to see God at the top.
Here, they look up and they see money at the top, or whatever. They were religious as well, but it was a different kind of religion. Here, this is a more modern example with a little hash café next to it or something. This is Rembrandt's house. He had to live somewhere, and that's where he lived because he paints these people. Rembrandt did have one time where he started painting kind of Catholic themes, but basically he's like these other guys. They're painting--I'll tell you in a minute. But they're painting middle-class life in the Netherlands. They don't do big battle scenes. You have to go to the southern Netherlands or Belgium for that, or into France. That's what they do and that's what they look like. That's pretty obvious. This is an orphanage. They had, without question, the most sophisticated charitable institutions anywhere. In fact, we know what they ate. It was the most prosperous country for ordinary people anywhere. The diet here, we know what they ate in their meals. They ate much better than poor people did almost anywhere else. Indeed, some ordinary workers bought paintings by Steen and all sorts of these other people. Here is a workhouse. This is a prison, basically. They were organized for that, too. It was the place of toleration. There's no doubt about that. During the Enlightenment, the works of the philosophes that could not be published in France were published in Switzerland, more about that another time, and in the Netherlands. But they could lash out. They lashed out at gays sometimes. They lashed out at Catholics sometimes. There was an edge to them, as if the whole thing could collapse on their heads. Simon Schama is not the only person who made that point. Others have as well, perhaps because of the big floods. If the dyke goes--here's the image of the Dutch boy with his finger in the dyke. If the dyke goes, you are drowned. 
There's this whole sense that the thing is precarious and you'd better kind of mind your Ps and Qs, or whatever the expression is, and be a good person or this whole thing could kind of be literally flooded away. How different that is than this modest estate of Versailles. I worked in the archives in Versailles in the small stables. This is one of my least favorite palaces. The way the Dutch thought about themselves is a little different than the way the French nobility or the Spanish nobles, at least at the higher ranks, thought about themselves as well. I show these. These are obvious, but just to put them in comparison with what you'll see in the middle. A little modest bedroom there in Versailles. This is the war room, it's called, the salon de guerre. I don't like Versailles. What the hell. This is Vaux-le-Vicomte, which is much more interesting. I just put this in because I like it. It shows you there were châteaux in the Netherlands, but they were mostly in the east. It was nobles who had the châteaux, and they didn't dominate; they didn't rule. Vaux-le-Vicomte was fabulous. Louis XIV was invited by his treasurer, a man called Fouquet, to go and eat there. He was insanely jealous. They served him on gold plates with gold silverware, and Fouquet had huge ponds stocked with not only freshwater fish but saltwater fish. Louis was so jealous that he threw him in the slammer, threw him in jail and confiscated it. But the image is just that this is very different. The paintings you found were very different. Here's Rembrandt himself. That was Rembrandt. That was quick. Narcissism--he did something like seventy self-portraits. He was his own favorite subject. Anyway, my mother tried to paint me, but I'd never hold still long enough. There's only sort of two half-finished portraits of me. Anyway, what did people paint? Ruysdael, don't write this down. Well, you can if you want. Go to the great museum in Amsterdam, the Rijksmuseum, and see it.
Ruysdael painted ordinary people living and at work. These are windmills, obviously. Here are windmills with people. This is different. Generally, you wouldn't find these kinds of paintings in other places. This is a painter called Frans Hals, H-A-L-S. It's a family scene. These are middle-class people commissioning paintings of themselves. It's the equivalent of fancy patricians in Florence having paintings of themselves. But they're from a very different social class, the patricians of Florence or Venice. This is to set the theme. I love still life, especially if they have food and wine. There's some wine up there. This is Pieter Claesz, C-L-A-E-S-Z, probably mispronounced. This is still life. They paint food. They paint food, and people eating, and people having fun, not people at war, not the eighteenth-century inevitable paintings of the British nobles or big landed gentry looking over all of the villages they've had knocked down so they could expand their hunting terrain, or fondling the nose of their killer hunting dogs, or something like that. It's just a very different way of imagining oneself. It's very attractive. I must admit it's very attractive. This is the village school. They had the highest literacy rate in the world, point, period, the Dutch did. They were very, very ordinary people. There were poor people in the Netherlands. Nonetheless, they were very ordinary literate poor people. There's something to be said for that. I like cats a lot. I hate dogs, but anyway, this is children playing with a cat. My cat yesterday actually undid my Yale password last night. I saw the thing that said password. The next thing I knew, she had literally typed my password. I had to put in a new one. This has nothing to do with anyone, so you should take this out. Anyway, cats. There we go--boules. This is what we do in the South of France with a little chardonnay on the side. We play boules. It's not quite the same thing. That's like bocce. We have this sort of metal ball. 
That's for another lecture; it has nothing to do with… These are ordinary people having fun. Here they are. Here they're having fun. But they're having too much fun. This is part of the point. Part of this sort of inveterate Calvinism, and part of the fact that, "what if the dams burst?" Or what if the British begin to outdo us in the world trade department? Or what if the French come and squish us like grapes? There's always this sense of vulnerability. Behind the paintings of people eating, the theme of people eating or saying prayers at mealtime, and this sort of thing, or playing boules, pétanque, bocce, there is always this sense of the ribald family. That's what this is called by Jan Steen, S-T-E-E-N. If you have too much fun, things will get away from you. These people are all drinking and leaving these poor little children to their own devices. They may be knocking down one or two themselves there, because nobody's paying any attention. You could go too far and then you end up like this. How does it all end up in the long run? How it ends up in the long run for the Dutch is that the Dutch cease to be a great power. But there's nothing wrong with that. They have gone on to live highly prosperous lives. They eventually end up with a monarchy. They eventually lose Belgium in 1831. They basically didn't care. The Dutch economy, the equivalent would be the decline of the Venetian economic power in the Mediterranean--and trade with the East diminishes. The Netherlands ceases to be a great power, whereas Britain in 1707 becomes the biggest of the world powers. But let us still remember these six or seven factors, or whatever I had up there, and remember what these two places had in common. It has a lot to do with the global trade. It has a lot to do with social structure. It has a lot to do with who they thought they were, the paintings they bought, the paintings they commissioned, the way they viewed themselves. 
Part of this reconstructing of national identity often has as much to do with who you're not, not absolute, not Catholic, not French, as it does with you who imagine yourself to be. In the growth of national awareness, that itself is an important theme. Have a great weekend. See you on Monday.
European Civilization, 1648-1945, with John Merriman
Lecture 7: Napoleon
Prof: Okay, I'm going to talk about Napoleon today. It was about maybe ten years ago, before the French Open, the tennis tournament that BNP puts on every late spring. They took one of the American players, a female player, on a quick limousine tour of Paris for a full day. At the end a French host asked her, "What did you like best about the tour of Paris?" She said, "The best thing was the tomb of the little dead dude." I couldn't make that up. Napoleon continues to fascinate, though not necessarily me. The coverage in what you're reading is straightforward, so today I'm going to talk about a couple themes. First of all, what remained Corsican about Napoleon? Then maybe discuss a question raised by David Bell of late. Was the Revolutionary period, and particularly Napoleon, the first total war, in the sense that twentieth-century folks--and at least you were born in the twentieth century--have come to understand? In the end--not to ramble, but just to talk about what the most important contributions of Napoleon were. Somebody counted up, not me, that by 1980 there had been at least 220,000 books and articles published on Napoleon in a variety of languages. Three recent books, if you're Napoleon buffs or simply want to read about him, that are quite good in English are my old friend Steven Englund's book, Napoleon: A Political History, which came out three or four years ago and was recently translated into French. Phillip Dwyer's book on Napoleon up to 1799--Phillip Dwyer hates Napoleon, but it's a pretty interesting look at the early career. Finally, I suppose most controversially, David Bell's book, The First Total War, some of whose themes I'll discuss in a while. It was only about six or seven years ago, I remember this, they discovered in Lithuania a whole bunch of dead bones. Well, bones are dead, I guess, if they've been there for 200 years, or whatever. 
But it was not a gravesite, because they were never properly buried; it was a place where a good number of soldiers of Napoleon's Grande Armée, the grand army, expired in the snows of 1812; and, so, 1812 still goes on. There is a book, also an interesting book if you're looking for paper topics, that I sometimes assign in the French course called Diary of a Napoleonic Foot Soldier. By the time of 1812, the majority of the Grande Armée were really people who had been conscripted or impounded, if you will, in various allied states. But it's a quite interesting account of what it was like in Napoleon's armies as he invaded further and further into Eastern Europe. By the way, I just did a subject search on Napoleon once. I don't know why. But, of the 220,000 books, you probably will want to not read the tantalizing 1894 classic Napoleon and the Fair Sex, or Napoleon and His Women Friends, which was from 1927, Napoleon in Love, 1959. There are lots of those, and Napoleon Seen by a Canadian, published in 1937. I talk a lot about Napoleon's life in the textbook, but let's look at the theme of Napoleon and Corsica. I once took a whole flock of Yale alumni to see Napoleon's house in Ajaccio, where he was born on the 15th of August. In a letter to the Corsican patriot Paoli, with whom he subsequently broke, he wrote on the 12th of June 1789, "I was born when the French were vomited upon our coasts," that is the coast of Corsica, "drowning the throne of liberty in torrents of blood. Such was the odious spectacle that first met my eyes. The cries of the dying, the groans of the oppressed, tears of despair surrounded my cradle at my birth." Corsica, as I'm sure you know, is an island, a big island. It's north of Sardinia, which belongs to Italy. He at first gloried in his Corsican origins, hating the French who had conquered his island. Of course, the French Revolution would change all that. 
That's why it's a good idea to look at him, as you look at Robespierre and others, and see what difference the French Revolution made. Between 1785--and here I'm drawing on Dwyer--and 1795, that is between the age of 16 and 26, he wrote a number of notes, and sketches, and short stories that reveal much about his attachment to Corsica, but also that suggest the dramatic nature of the change as he embraces the Revolution and France. He spoke Corsican and not French. French was his second language. Corsican is a language. It's a patois that is more closely tied to a patois or dialect of northern Italy. In fact, when you drive around Corsica, most of the radio stations that you can get are Italian and not French. He learned French and he made errors. Even at the end of his life he made errors in French, though he wrote French very well. He was bilingual, but he never lost his accent. One of the things about northern French people, in particular, is that they're less likely to forgive southern accents. Of course, one of the stereotypes of Corsicans is they all become policemen in Paris. Many of them have an "i" at the end of their last names. They have a very strong southern French accent. It's not really a Toulouse accent or our part of France, an Ardèche accent, where you can always tell. Those of you who know French--and again, if you don't know French it doesn't make the slightest bit of difference--but somebody who says "quatre-vaigne" instead of quatre-vingt, or "Cassaigne" instead of the great human rights advocate René Cassin, or "vigne," a glass of wine, moi, je prends un verre de vigne, instead of vin. It's a famous story about Napoleon when he goes off to military school as a very young boy that they made fun of his accent. More about that in a while. But anyway, at the beginning he hated the French and espoused the fact that he was a Corsican. He felt culturally marginal and this was compounded by his personal loneliness. 
When he was assigned to Valence, which could make anyone sort of mildly depressed, Valence on the Rhone River, he contemplated suicide quite seriously. He spent a lot of time reading and sort of hanging out by himself and through much of his early days he lacked friends. In 1768 the Republic of Genoa, that is the port city of Genoa, en face, just across the sea, gave up Corsica to France or, really, sold it. The French state actively worked to try to create a loyal Corsican nobility, and thus, the family of Napoleon, the Bonapartes, B-O-N-A-P-A-R-T-E-S--originally spelled with a "u" in the first syllable, Buonaparte, which he subsequently took out of his name--were ennobled in 1771 by the French. But all nobles aren't rich, as you know. He was sort of what you'd call in French un hobereau, a poor noble. Four of Carlo--that is his dad--Bonaparte's eight children received scholarships to study in France, including Napoleon, who was sent to a place called Brienne, in the north of France. Fifty of the 110 students in this school were called "royal scholars." Here again, here's kind of a comparison to that case of downward mobility--Robespierre, who is also a scholarship guy. There was nothing wrong with that at all, except that there were a lot of fancy noble offspring there, too, who had another reason to mock Napoleon. He wrote in Valence, when he was posted there, again on the Rhone River about an hour now by car south of Lyon. He wrote that life was a burden, "because there's no pleasure. It is nothing but pain. It is a burden because the men and women with whom I live and probably will always have to live have customs that are as far from mine as the light of the moon is different from the light of the sun." But yet there was French influence in his life. He read the philosophes. 
He read Rousseau, Voltaire, Montesquieu, and in 1791, again, I don't want to push this comparison because he was different in many ways from Robespierre, but like Robespierre he enters an essay contest sponsored by an académie, in this case the Académie of Lyon. His writings mostly reflect an obsession with his origins. I haven't read a lot of his early writings, but Phillip Dwyer has. One of his colleagues in school drew a cartoon of Napoleon rushing to Corsica to aid the Corsican rebel Paoli. He must have discussed this with his friend. And he also battled with those he saw as his rivals. A long time ago, in the 1920s I guess it was, the director Abel Gance made this three-and-a-half hour film, which actually is extremely boring, called Napoleon, without sound. But the most famous scene in it arguably is a snowball fight, where Napoleon takes a snowball fight to a more serious dimension, and tries out tactics, and all of this. In a way, he's fighting for his independence and the status as a non-French Corsican, but who has been washed up on the shores of France by a fortune, good or bad. At that point he wasn't really too sure what it was. He began to write a history of Corsica of less than 100 pages, which he took seriously enough to begin to revise in the early 1790s after the French Revolution. In it, according to Phillip Dwyer, he portrays Corsicans as courageous, even heroic in throwing off the rule of Genoa and battling the French. "For over twenty-four centuries," he wrote, "the same scenes have been repeated without interruption, the same vicissitudes, the same misfortunes, but also the same courage, the same resolution, the same audacity." But his letters and his writings reveal the folks that he was reading, that is, the influence of the philosophes portraying Corsica seeking liberty in the shadow of oppression, in opposition to royal authority. So, he links the themes of the philosophes in defense of Corsica's fight for freedom. 
Even in the 1790s, if this interpretation is correct, he did not see his identity as both Corsican and French, but rather as Corsican. But by 1799, when he with the help of the wily Abbé Sieyès comes to power on the 18th of Brumaire, the French identity had overwhelmed his Corsican identity. The question is, did he merely catch the nearest way? Is it opportunism? Or was it his belief that the French Revolution and la belle France offered liberating possibilities for humanity? In a short story that he wrote in the summer of 1789, a rather important summer, the French were portrayed as tyrants, still--in his story called the Nouvelle Corse or the New Corsica. He used violence, and his life would be one characterized by violence, as a way of increasing sympathy for the Corsican people. Also it was a cultural expression of Corsican vengeance. Corsica, because of--this isn't just a stereotype, but because of the sort of flashing knives of clan and family rivalries, there were so many crimes in Corsica in the nineteenth century that the island of Corsica, which became a department, now it's two, but one of the departments of France, had to be excluded when somebody was doing a study of crime. There are so many more crimes in Corsica. In fact, still their tradition of flashing knives--and the Corsican independence movement still places bombs--there are various independence movements--and blows up a lotissement, a housing development being built for Parisian or Marseilles lawyers, or something like that. There are still these kinds of resentments. In the beginning he's still identified with Paoli, but he would break with Paoli. Paoli, the Corsican patriot, was sort of seen as the George Washington of his island. Napoleon was constructing a vision of what he thought he could become--that is, to help liberate Corsica from French rule. How ironic! His father, Carlo, had in his own view, that is Napoleon's view, betrayed the Corsican cause by going over to the French. 
In a way, you could argue that he's rebelling against his father, at least in the early stages. But the Revolution did bring a change, obviously. It transformed the relations between France and Corsica. In 1789 there were four deputies elected to the Estates General, and in 1790 Corsica is recognized as a département, a department. Corsicans demanded a royal decree that would recognize the island as an integral part of France subject to the laws of France, and declared that those who had fought against France ought to be permitted to return to their homeland. On the 27th of December there were celebrations in all Corsican churches. Napoleon had a banner hung in his not inconsiderable house in Ajaccio, in the family house. "Viva la nation, viva Paoli, viva Mirabeau," who had supported the decree. "Long live the nation. Long live Paoli. Long live Mirabeau." He's trying to play it both ways. He wrote, "From now on we," that is Corsica and France, "have the same interests, the same concerns. The sea no longer separates us." Indeed, that's hardly the case. Even today, in Corsica, there have to be subventions to help keep the cost of food down in Corsica because of the enormous cost of transporting things that are not produced locally. You can't just live on goat cheese and things like that, and red wine produced in Corsica. The sea does matter. But the Revolution helped Napoleon reconcile some of the contradictions that had bothered him all the way along. At this point, he'd become a French Corsican. He renounced publishing some of his letters and began to enter these political struggles in the Revolution. Indeed, he was lucky. One of the amazing things about Napoleon was his luck. When he might have well been guillotined as being a Jacobin, he was in Corsica or in the South of France. He always seemed to be in the right place. This was true in his battles, as well. He was a tremendously courageous guy. 
His bodyguards are always trying to get him to move back in the traditional way as an officer, a commander, which he was--the commander from the battles. He, in fact, is only wounded very lightly two or three times. So he's pretty lucky. When bodies are falling and horses are falling all around him, he remained an extremely lucky guy. This also accounts for his success. On the 18th of Brumaire in 1799, clearly it would be a military person who was going to put an end to what has been indelicately called the War of the Chamber Pots that was the Directory, that is the period of the post-Thermidor, the Directory, the battles between left and right. The Revolution made military men extraordinarily important. There wasn't a king anymore, and the War of the Chamber Pots and the sort of sleaziness of the period, though it was important in giving France some sort of parliamentary experience in a meaningful way, meant that some military person was going to be imposing "order." When Abbé Sieyès, the author of What Is the Third Estate?, who had also survived all of the vicissitudes of the Revolution, when he thinks about one general, another military man says, "There's your man. There's Napoleon," who is again in the right place. "He's going to do a better coup d'etat than the other guy could have done." So, Napoleon there happened to be a lucky fellow as well. In 1793 the followers of Paoli broke with the convention during the federalist revolts, which you know about. During the expulsion of the Girondins from the convention, the uprisings come in Lyon, Marseilles, Bordeaux, Toulon, et cetera, et cetera. Those on the outs with Paoli, including Napoleon, now embrace the Jacobin cause. The Corsican assembly in Ajaccio--by the way, it's A-J-A-C-C-I-O--condemned the Bonapartes, who had dropped the "u" in their name, that's in the book, as having been born in the mud of despotism. 
So, Napoleon turned his back on the independence movement to which he had pledged in the privacy of his room in Valence and other places, in Brienne, fidelity. He now hated Paoli, who he blamed for having turned so many Corsicans against France. Again, is this opportunism? Had he merely caught the nearest way? He had embraced the national identity of being French and he did take ideas seriously. It's possible to argue, I would believe this, that the philosophes eventually won out and he saw the Revolution as a liberating experience for France and the construction of a new way of imagining the state. Of course, he turns that into out and out political repression in his own country and the megalomaniac conquest of all of these other places. When he married Josephine, who once somebody said would have drunk gold out of the skull of any of her lovers, he made sure that the French spelling on the marriage certificate was there and that the Corsican "u" had been taken out of his name. On the island of Saint Helena in the middle of nowhere, where he had a lot of time to think, he wrote, "I am more champagnois," that's where the town of Brienne was, his military school, Reims, Épernay Champagne, and all these good things. "I am more champagnois than Corsican, because from the age of nine I was raised in Brienne. It would have displeased the French if I'd surrounded myself with Corsicans. On the contrary, I wanted absolutely to be French. Of the all the insults I have had heaped upon me in so many pamphlets, the one to which I was most sensitive was that of being Corsican." Napoleon was an inveterate liar, particularly when he was trying to craft, it was quite clear he was already ill, his legacy. Much of what he wrote on the island talking about his eternal devotion to the principles of liberty, fraternity, and equality, was trying to plan these 220,000 articles and books that would be written about him until 1980. 
This was a sheer invention of the past, because the record is quite clear in his writings and what he said that he considered himself Corsican. Yet, the Frenchness of the Revolution overwhelmed that in him. In the end, he remained a Frenchman, like very many people with a strong accent, in his case, that of Corsica. There are some other obvious things that are Corsican about him that remained. Again, this is part of the stereotype. In France, like other countries, one has stereotypes about different regions. In France people think, for example, that those from the center of France, from Auvergne, are cheap, radin in French. Or that people in Marseilles exaggerate. You say to somebody in French, "You're from Marseilles, aren't you?" after they just said that they caught a 1,000 pound perch, or something like that, or that Marseilles had just scored the goal of the century. There's a tendency of people from Marseilles to exaggerate. These sorts of regional stereotypes are part of any country. One of the stereotypes, though there's some truth to this, is the idea of family loyalty. Most people are loyal to their families, but Napoleon took the kind of clan identity a bit far. Of course, what he does is he perches his various brothers on the thrones of almost everywhere, this kind of family loyalty. It's not just people from Corsica who might, given that situation, do the same thing. Also there's the settling of scores. Napoleon, and we'll talk about this in a while, if you do believe that the period is--we can see the origins of total war there. I'm a little skeptical about this. Nonetheless, when people turned against Napoleon or against the French armies, his reaction was "We're going to pay them back and we're going to get them." Not with flashing knives, but with execution, burning of villages in Palestine, more about this later, in the south of Italy, and in the Tyrol, in the mountains of Austria. 
Whether vengeance is more of a Corsican thing than a champagnois thing or a lyonnais thing or a Breton thing or a North German thing or a Polish thing or whatever, one can't say. Yet lots of the thinking about Napoleon looks for things that remain Corsican about him. Having said all of that, what shall we do? Let's now turn to this question of whether we think--and it's just a rhetorical question--whether my dear friend, David Bell, is right that you can see the origins of total war in this period. One thing that I'm a little skeptical about is that if you compare this to the Thirty Years' War--and you saw those ghoulish illustrations before of different ways you could perish at the hands of enemies determined for no particular reason, in many cases, to simply destroy you--it's not clear that the Revolutionary period and the Napoleonic wars really were the first. Yet, if we think aloud, and that's what I'm doing, if we see the origins of total war in World War I, where the mobilization of state resources, as much as possible to the war, and again in World War II--and particularly in World War II the breaking down of the differentiation between civilians and non-civilians. That happens a little bit in 1914, but not that much. It happens in the Turkish massacres of the Armenians in 1895 and 1915. That happens, too. But it is possible to argue that the Revolutionary and particularly the Napoleonic period--from that point of view, the mobilization of--melting church bells and transforming almost every available industrial site into war production, and turning out all these cannonballs, and all these rifles, and all these swords, and all these bayonets, with the total resources of the state directed toward war. There is a point there. There are really two sides of that argument. That's one, the mobilization of resources. The levée en masse, a mass military conscription that all male citizens are going to be in the army. This starts with the French Revolution. 
After all, Valmy was the battle near the windmill near Châlons, in Champagne in the east of France, the Sans-Culottes going to war--was the levée en masse where ordinary people are full of enthusiasm in singing patriotic songs or heading off to fight the enemy. But the other side of this total war story is, of course, what happens to the civilian population? Napoleon once said in one of his rare moments of real introspection that he didn't give a damn if a million people died because of him. Part of his great failing--a great weakness, and the source of much human suffering--was his sense that no matter what he did, it was the right thing to do. He has this sort of hallucination moment in about 1796 after one of his battles, I think it's Arcole, where he sees after himself--he sees himself transported in the air and that the whole world seemed to be like you're taking off in an airplane. The whole world is beneath him. At that point, he has this sort of sense that what he would will as a human being would inevitably become reality because he willed it. The other half of this sort of total war aspect is that, to be sure, not only did a great many of the French males born who would have been eligible for military service die during the Revolutionary and Napoleonic wars. But this sort of meting out of a brutal vengeance, more than just in a Corsican sense, to people who crossed his will does anticipate in some ways, and I'm not even sure how much I believe this, the twentieth century. On one hand, the difference between soldiers and civilians is being eliminated with the end of the really just professional army of the eighteenth century. It's possible there were a lot of people killed in the eighteenth century, too, in those professional army wars and all that business. But victims, too, are not just military people. 
Of course, the worst atrocities committed by French troops were in this sort of madcap Egyptian, Middle-Eastern adventure when he goes off with a boat packed with scientists as well as munitions and lots to eat. He goes off to Egypt. Imagine conquering India. He had an idea how far away India was. Of course, when people don't put up with this, then he massacres them in Palestine. They raze villages and that's the end of that. As I said before, the examples before would be in Calabria in the south of Italy when there are persistent rebellions, resistance to French rule--and why not?--then they just start massacring people. Of course, the famous case of Spain where you have forever on these magnificent canvases of Goya where French troops are shooting down Spanish peasants who are resisting in the Peninsular War. These too, I guess by more modern definitions, not necessarily contemporary ones from that period, would be classified as massacres. It's possible, and this isn't too far-fetched, to imagine the sort of total war as being part of that experience. From 1792 to 1815, the experience of ordinary people in much of Europe was war. There is that, too. Napoleon's reaction to all of that was, "je m'en fous." He didn't really care. After every big defeat the next step was to plan the next war. The most famous example, of course, is when you've got hundreds of thousands of people that are picked off by Russian partisans--and why not?--or freeze to death in the Russian winter. When Napoleon, with his ragtag band of survivors, when they get back to France one can see why that French expression, "to lie like a military bulletin," comes into existence. The military bulletin that church people had to read, the priest had to read at mass, said that the emperor's health had never been better. Of course, that was true enough. He immediately begins to start planning another war. 
When Cossacks are camped on Montmartre and start the first Russian restaurant in Paris in 1814 and he's packed off to the island of Elba, not too far from the Italian coast--he makes his 100 days escape and lands at Fréjus in the south of France. Marshal Ney famously throws himself into his arms after having been sent to arrest him. Napoleon is immediately planning the next war and that ends happily for the rest of Europe at Waterloo, when Napoleon typically does not delegate enough authority, and Marshal Grouchy does not come to rescue him, and he's rounded up and sent so far away. It's a little difficult to plan the next European war if the closest port is some 600 miles away and is in Peru or someplace like that. I made this part of what I'm saying today in kind of a rhetorical way. I'm posing a question, because I don't really have a good answer to that. I don't believe that history runs on railroad tracks and all you need is the timetable to see when modern times show up. But if you look at the horrors of the twentieth century and the butchery of the civilians, in 1895 the Armenian massacre or the butchery of civilians after the Paris Commune of 1871, it's not too hard to see all of this. We're not yet talking about the Holocaust. We're not yet talking about World War II. But yet, some of that was out there. One more point is that, and I'm obviously not defending the French soldiers. It's very unusual for me not to be defending la France and all things French, but nonetheless, one of the cases that you might say total war comes before Napoleon, and this is of course the Vendée, which I alluded to the other day, the civil war in the West. There you had cases of them simply razing villages, and lining people up against the wall, and gunning down priests, and drowning nuns, and this extremely asocial, antisocial behavior. 
One of the things about these civil wars, and the case was true in Spain, was that from the point of view of soldiers in a guerilla war, anyone was a potential assailant. Again, and this is not excusing what French troops did in the Vendée, but to have made it a big political issue, which people did in 1989, the 200th anniversary of the French Revolution and say it's the first genocide, which is what the far right was saying--the traditional far right, not Le Pen and those folks who would be happy to massacre almost anybody who they didn't view as French--it's just sheer nonsense. It's simply not the case. There are some contexts that should be provided in thinking about that. But it's an interesting theme and it's worth discussing. When you're doing this reading, which I hope you'll do, that's not a bad idea to think about. Let me just make a few points. We have about ten minutes left. This is just to amplify what you're reading about. Anyone who's ever had to wait in line at a prefecture in France for a driver's license or, in our case, our French identity cards or almost anything else will be cursing Napoleon for having maintained this sort of centralization that emerged out of absolutism and was honed in defense of the republic by folks like the Committee of Public Safety--where Napoleon founded a rational, "enlightened way" of organizing a state. Certainly Napoleon--whether he snatched the crown out of the pope's hand and crowned himself or let the pope crown him is not the issue. Napoleon could have pretty much done whatever he wanted, but in fact what he does is he maintains the departments. They were created in 1790. They send a prefect, who is like the intendant but even more centralized, to each department in 1800. They keep the same kind of top down centralized organization. Somebody once said that Gaul was divided up into three parts. 
When thinking about France at the time of Napoleon or anytime afterward one could think the same thing, that France was divided up into the Ministry of the Interior, the Ministry of Justice, and the Ministry of War. Napoleon, who ruthlessly censored newspapers, and forced them out of business, and made the costs of their continuation so extremely difficult, while organizing or orchestrating the cult of Napoleon, whether it be through paid art, some of them extremely great artists, or lesser versions--he maintains the kind of centralization that became important in France and in places where the waves of French troops, "liberty, fraternity, equality," and all of that ended up, that is maintained. He liked to think that the Napoleonic code was his greatest contribution. He wanted to be the modern Justinian. In fact, he does oversee lots of the meetings of lawyers, and jurists, and specialists. It's classic looking back from our view. It's patently ridiculous that there were many times more articles dealing with the sale of cattle than there were dealing with the rights of women. This isn't too surprising, because Napoleon--as many dictators, including much more egregious ones like Mussolini and Hitler in the twentieth century--viewed women as nothing more than machines for producing babies. He said this. He said this exactly like that. Yet, and that's a big yet, the Napoleonic code survives and remains in many cases the basis for the French legal system. Again, this is an Enlightenment enterprise in many ways gone right. It is there. Among the other contributions--we don't really have time to talk about it, and it's obvious--is this sort of nationalism, the idea that one's value comes from service to the state as opposed to royal blood, though he creates this new nobility based upon service to the state. Service to the state was above all through the army.
A lot of these people who become marshals and all of this, if they were lucky enough to survive all these ridiculous wars, are military types. The Napoleonic code and this new sort of service nobility are important things. The concordat--he does a very important thing. He makes peace with the Catholic Church. He realized that as long as you had this potential contrast between juring priests and nonjuring priests, that you would still have lots of militant Catholics who wanted some sort of royalist restoration. Indeed, remember the king was dead and his son had died also in prison in Paris. But you've still got the king's brother out there. It's a very shrewd move. Of course, he uses the church for his own propaganda devices, and the church continues the tradition of really the civil constitution of the French clergy, the relationship between the church and the Napoleonic regime. This is a very important, clever step that basically ends the turmoil within France, at least to that extent. The old revolutionary calendar of Germinal, and Ventôse, and Thermidor, that all disappears and is replaced by the regular calendar. People in 1795 and 1796 in rural France are still not thinking of ten-day units called décades, something like that. They're thinking of weeks and they still are having mass said secretly, which was the case in our village, even in 1794, until finally the priest has to go away. The concordat, this peace with the church, is obviously a very important thing. So is, really, the establishment of the basis of the French educational system that's remained, for better or for worse, the same until today. I'm a big believer in the French educational system. My kids were in French schools for three or four years. There's no higher good result of humanity's collective good deeds than a French kindergarten or first or second grade. It begins to fall apart by the time you get to lycée.
He created the lycée, the high schools--and the university system is now in total chaos, and Sarkozy will probably make it even worse if he gets his way about creating an American-like hierarchy of institutions, which would be at the expense not of the lower level, but of the more modest universities in the French system. But be that as it may, Napoleon--it may or may not be true that he once said he could look at his watch and see what everybody is studying at any given moment. And there are lots of problems with the French system, but the division of France into académies, again this has nothing to do with the academies I've been talking about before, but into a geographic way of organizing all education, from the universities down to kindergarten or even to crèches, nursery schools, organized by region. It has lasted through all this time. It really is an extraordinary accomplishment. An académie, for example, now would be the académie of Limoges, or the académie of Grenoble, or the académie of Marseilles, or the académie of Strasbourg. It covers two, three, or four departments, depending on the region. It's almost impossible to get a schoolteacher fired, by the way. That's another thing. I shouldn't go into--it would be very indiscreet to go into this too much, but if you try to get a schoolteacher in a village fired, it has to go through the head of the whole académie, who is called the recteur or rectrice, monsieur le recteur or madame la rectrice. It's virtually impossible.
There are problems with that, but nonetheless, the reason--and here this sounds like a very pro-French thing to say--the reason French children, like Finnish children and children in most European countries, test at a much higher level than those in the United States at any level you can imagine is because they have a centralized education system which does not believe that wealthy communes, wealthy parts of France, should have all of the advantages while schools with very limited financial resources are denied the same possibilities for advancement. France has the grandes écoles, the big-time, high-powered elite schools, elite universities. They've got their equivalents of the fancy places of which you're all in one now. But nonetheless, Napoleon does create a system which is long lasting and which allowed, over time, the educational structure of France to advance in very, very meaningful ways over the whole course of the period. So, no matter what you think about the fact that in the end he was a megalomaniac and that lots of people got killed because of him--there's no doubt about that--the wave of the French Revolution and the Napoleonic period has long-lasting results almost everywhere. Take, for example, the unification of Italy. Italy will become unified in the 1860s and early 1870s, "unified." Metternich said it was a geographic expression only, and to an extent he may have been correct. The unification comes through Piedmont-Sardinia, which was the most prosperous part of Italy. It's in the north. They had the benefits of this French bureaucracy, of this administration that was centralized, that allowed them to be more prosperous than other parts of Italy. It contributes to that. They had other advantages, too. So, the Napoleonic wave did make a difference.
Though when you go to Paris--and if you go to the Louvre--it's hard not to think of the fact that many of the treasures that are there were simply looted from Italy, loaded not onto trains, as Goering, and Goebbels, and those folks looted art treasures during World War II, but packed very carefully onto military wagons and returned to Paris. So, we can debate about Napoleon and all of that. My view is already probably fairly clear, but one has to admit that besides just the romance of his life, and a career open to talent, and all of that, he made a huge difference and thus was worth spending some time on. Have a good weekend. I'm going to St. Louis.
European Civilization, 1648-1945, with John Merriman
Lecture 6: Maximilien Robespierre and the French Revolution
Prof: I'm going to talk about the French Revolution. It's hard to do. I'll leave myself about forty-five minutes after I screw around at the beginning. I want to do two things. I want to see the Revolution through the eyes of Maximilien de Robespierre, a member of the Committee of Public Safety--arguably, with Saint-Just, its most important member. In a way, Jacobin--he incarnated the French Revolution. In doing so I want to talk about the terror and, above all, why it was that people supported or opposed the Revolution. It comes down a great deal to religion, as we'll see. But first, because I promised that we had the only live, bootlegged album of the trial of the king, and of his execution, I thought I'd play those and also the death of Citizen Marat in his bathtub. To do that, I decided to bring a prop. I'm not making light of instant death. I just finished a book about a guy who ends up putting his head in a little window. You knew people were smaller back in those days, but you really didn't imagine that they were this small. Do you know what this is, what this comes from, this guillotine? Do you know what this is for? What? No. This is real. That hurt, actually, when I did that. Don't write your parents, especially you freshmen, and say, "he was running here waving a guillotine, running around the place. There he goes again." No, it's for cigars. Student: That was my idea. Prof: That was your idea? Student: Cigars, awesome! Prof: No, don't smoke. Anyway, can you put on the first one? This is the trial of the king. This is the king. I'll just translate part of it. I'm not going to translate the whole thing. It doesn't matter if you don't know French. This is just for ambience. This is ambience. They're putting him on trial. He did bad things. I'll translate part in a minute. This takes probably too much time, but it's cool. This is from a rock opera about the French Revolution. Keith Richards live here. Keith Richards? This is too long to get to it.
I apologize. Louis XVI, this was his finest moment. This is not Louis XVI. They're going to ask him to respond to the charges, and the old boy will. "Answer the accusation, the indictment, what you've done against the nation." It's nice. Listen. "Among you, I'm looking for judges and all I see is accusers." (I won't do the whole thing.) "I never did this horrible thing. I never betrayed my country" (which is patently false). "Life has given me some misfortunes and death doesn't frighten me at all. Maybe you can do France better than me, take care of France better than I could. Keep it from its own excesses. Take care of my family." (They didn't come out very well either.) "Take care of my children. It's the only favor I'll ask you to carry out. Je n'ai plus rien à vous dire. I have nothing more to say to you." (They're going to vote now. Got live, got dead.) La mort, death. Saint-Just, "death." Marat--he'll get his in a few minutes. He will get his, too. Now, they execute the old guy. This is the death of Citizen Marat. He meets Charlotte Corday, who is from Normandy and a royalist. He's in his bathtub. I won't translate everything. "Citizen, you're coming without knocking. You're seeing Citizen Marat nude in his bath. What's your name? Charlotte. You have very nice eyes. Come here a little closer. What can I do for you?" (I won't translate that.) "Do you like them, Citizen? Why are you looking so mean all of a sudden? To make you afraid, you bastard. What's the knife for? Argh!" You heard it live. That's it. Okay. Can we get the lights, please? Maximilien Robespierre was born on May 6, 1758 in Arras, a beautiful town destroyed in World War I in the north of France. His father was a lawyer. He was the son and grandson of lawyers. His father married the daughter of a well-to-do brewer and they were married a few months before the birth of Maximilien. Two daughters, one who died, this is fairly normal, and Augustin, his brother, followed.
His mother died giving birth to a fifth child who barely survived her. The father was unstable, always leaving home at the time of the birth of all of his children. He finally died in Germany. So, Robespierre never had a family. Psychohistorians have done a lot with this. The family of four was left in the care of a maternal grandmother and aunts. Essentially, he was an orphan at the age of eight. He felt his father's guilt about causing the death of his wife. His sister remembered after his father left, disappeared, "A total change came about in him. Formerly, like all other children of his age, he was thoughtless and turbulent and flighty. But since he became the family head, so to speak, by virtue of being the eldest, he's become settled, responsible and laborious. He spoke to us with a kind of gravity which impressed us. If he was to take part in our games, it was in order to direct them. He loved us tenderly and there were no attentions and caresses that he did not lavish upon us." Henceforth, if you buy a psychohistorian's interpretation, "He could only be a man of order. He desperately tried to assimilate himself to the social order. He both loved and hated his father as he adored his dead mother. His whole life was marked by the feeling of his father's guilt, which also represented the death in a real way of his own childhood." In his last hours, his death wish, his inability to act when he might have saved himself, can be seen, if you will, in that context. He was forced into a seriousness and responsibility. He always had a passion for solitude, isolation. He knew what it was to be poor. He was an example of sort of downward mobility. He went to school and he was really smart, supported by charitable foundations, first in Arras and then in Paris. At age eleven he went to the collège, or middle school, of Louis-le-Grand, where he became a star classics scholar.
He was selected among all the other pupils to read a poem that he had composed to none less than the king and the queen as they passed by Reims, also in the north of France, in champagne country. As it was raining, the king and the queen ordered the driver to go on, not stopping to listen to Robespierre's little poem and, indeed, splashing his only good suit of clothes with mud as they drove off. He became a lawyer, getting his degree in 1780 in Paris, a lawyer at the Parlement of Paris. His scholarship passed, as things did in the old regime, to his younger brother. He entered literary contests that were run by the Académie. It was once said that he even caught sight of the great Rousseau, but that seems a little unlikely. But in Metz, the Académie awarded him 400 pounds, which was a lot of money. He was elected to the Académie of Arras. In law cases he championed the poor, the humble. He took the side of a man in an abbey who had been accused by the monks of a theft, when in fact one of the monks had done the ripping off. He once said when somebody was condemned to death, "I know very well that he is guilty, but I can't imagine to send someone to his own death." At the beginning, but only in the beginning, he did not believe in capital punishment; although, arguably those who agree with him would say that he saved the Revolution by meting out capital punishment. Because of his reputation--this was a classic case of a young lawyer on the make--he's elected to the Estates General from Artois. He's unknown. When he goes to Paris he's called in the minutes sometimes "Robes-Pierre," sometimes Robespierre. Sometimes Robert, like the name Robert, or sometimes simply Robert, as if that was his first name, and Pierre, his second name. But he began to make a mark, speaking always very softly. Sixty-eight times he spoke in 1789 and he gradually gets his reputation there. He opposes all restrictions on the freedom of the press, and of course that would change later as well. 
He invokes Rousseau's concept of the general will to support the view that the king should have no right to oppose or delay legislative measures proposed by the assembly. You know this from your reading. He sided on the left of the assembly with those who went to Varennes to bring back Louis XVI when he and Marie Antoinette tried to hightail it to the southern Netherlands or, that is, to Belgium. That was the king, by the way. He should have been there first. That's the king. There's Marat fully clothed. There is Maximilien Robespierre. Although he spoke often, he lacked presence and color. This would be the case until the very end. The English writer, Carlyle, saw him as "anxious, slight, an ineffectual looking man in spectacles, his eyes troubled, careful, with an upturned face. Dimly trying to understand the uncertain future times, but he spoke with an intense passion and conviction, a belief in all that he said." Mirabeau, who died of syphilis, one of the king's main advisors, said of him, "That man will go far, because he believes every single thing that he says." He seemed rigid in his principles, plain, unaffected in his manners. "Nothing," said an Englishman, "of the volatility of a Frenchman in his character." He supported the idea that all male citizens should have the right to vote and thus, he opposed the idea of having active citizens, who pay taxes, and passive citizens, who did not have enough money to pay taxes and thus could not vote. He calls for, among others, the deposition of--the king's being deposed, both in the legal sense and being deposed from the monarchy. He was already known as "the incorruptible." He received letters of admiration. Once leaving the assembly a crowd put oak leaves around him and carried him around the city in triumph. He always wore impeccably white clothes. He wore a powdered wig, which is very much an old regime thing and not a revolutionary thing. He was not somebody who was going to go out and tutoie easily. 
The revolutionaries tutoient--to tutoyer is to use the familiar form, like du in German as opposed to Sie. All were equal; therefore, you didn't say vous to people who were above you on the social ladder. He didn't like people touching him. Indeed, probably he was chaste. He had only a few flimsy, and only by mail, flirtations with women. When they picked him up--you have to imagine a sort of crassly American analogy, where a football coach is sort of swept off his feet after a big upset or something--he doesn't like people touching him. He doesn't like being carried away by them. He was ascetic, always preferred being alone. He ate very modestly. One letter to him said, "As incorruptible as you are courageous," and he was that, "you have always openly displayed your feelings. It has never been self interest that has made you act or speak, only the general interest." He identified with ordinary people and he ends up living in western Paris, a more prosperous part of Paris, but in the home of a carpenter on a street called the Faubourg Saint-Honoré. Those were really his happiest moments. It provided him with a family. It had young children in it, something that he really hadn't had--a very normal circumstance. People who came to see him saw him sort of stretched out on the couch with his family trying to guess from the way he looked what he might want. Would he want more grapes? Would he want more milk, et cetera? He read a lot. He wrote his speeches, which were written out by hand. He was always well combed and powdered, the cleanest of dressing gowns, et cetera. He began to be a frequenter of the Jacobin Club. These clubs, like the Feuillants, and the Cordeliers, and the Jacobins, were called that not because they had anything to do with the religious orders--the Jacobins were a religious order--but because the biggest places you could meet were churches and abbeys. Those were always the biggest buildings.
The Jacobins, who were on the left of these clubs, begin meeting even before the Bastille falls on the 14th of July, 1789. He begins to go to the Jacobin Club. The Jacobins become the great leftwing centralizers of the revolution. They trumpet the authority of the Parisian Sans-Culottes. That's another French term so important it worked its way into English dictionaries. The Sans-Culottes were those who supported the Revolution. Technically, if you said sans culottes it meant somebody who was not wearing pants. That's not what it meant. What it meant was not wearing fancy kinds of aristocratic breeches, and it became identified with a form of political behavior. You could be an aristocrat, and there were liberal aristocrats, meeting in a club called the Club of the Thirty, who helped push the Revolution really toward constitutional monarchy, at least in the beginning. If you were against the Revolution, you were an aristo. You were an aristocrat. If you were for the Revolution, you were part of the people. You chose the color red, because red becomes the color of the leftwing interpretation of the Revolution. You gave people kisses on both sides of the cheeks, or three times, depending on where you were in France, recognizing the solidarity you had as a citoyen, that is, a male citizen, or a citoyenne, female citizen. The whole idea of kissing, by the way, is terribly important in France, but that's mostly a late-nineteenth and early-twentieth-century thing. People really kept their distance, whereas now if you live in Paris you kiss twice, or in the Sixteenth Arrondissement not at all; you merely shake hands. If you live in the Parisian suburbs, often you kiss four times. If you live in the south of France, you kiss three times. In the Department of the Hérault, which is Bas-Languedoc, if you live in Béziers, you kiss three times.
If you live in Montpellier, which is a more aristocratic city traditionally, a more formal city--it was a big university town, it really rocks--you kiss only twice. But this idea of kissing people on the cheeks was a sign of revolutionary solidarity. Symbols were very important. Pikes were carried around--at the Battle of Valmy, which you can read about, it is the pikes of the Sans-Culottes that stopped the highly-professional armies of the enemies of the Revolution. So, he believes in the necessity of a single will. Again, this comes out of Rousseau's idea of the general will to save the revolution against its enemies. He is one of the people that helps push the French Revolution to the left. His principles were totally unshakable. He doesn't budge on them at all. Of course, it's just insane to look back and see in Robespierre the origins of totalitarianism, despite the Committee of Public Safety. Robespierre also was a man of his times. He was not against all property. He was against les gros, people having too much unearned property. He thought everybody should have enough to get along, but that even people who didn't have any property and thus didn't pay any taxes, as I said before, ought to have the right to vote. He also, like the Jacobins, believed that they ought to have enough to eat. One of the tensions that one found in French Revolutionary political clubs, and political societies, and in the neighborhood sections that began planning how you would defend your neighborhood against foreign invaders or insurgents from within--the price of bread, of course, counted enormously. In a couple of weeks I'm going to talk about what difference bread made in terms of popular protest. People who believed in the kind of free trade that Turgot had in the 1770s believed that the market ought to determine the price of bread.
But there was always a tradition that the price of bread ought to be kept at a reasonable amount, so that everybody ought to have enough to eat. So, the Jacobins, most of them, believed in the maximum, "the maximum," which was a maximum on the price of bread. Now, their enemies on the revolutionary left, or left central, were called the Girondins, which I wrote on the board and a name I sent around on the class server. The Girondins, G-I-R-O-N-D-I-N-S--which is also the name of the Bordeaux soccer team--were from Bordeaux and the department of the Gironde, many of them were. They are merchants. They are free trade people and they also were extremely interested in launching foreign wars to carry liberty, fraternity, and equality abroad. I'm scrambling for this great quote. The Girondins were in love with war. There was this great rhetoric about conquest and carrying "freedom" to other countries. Although one had to be, and many people were, cynical about this, when the French troops poured into the Rhineland, the prostitutes of the Rhineland cities dressed up in red, white, and blue flags to welcome their new clients. Robespierre made a series of speeches against this Brissot, who was the former Grub Street writer that I mentioned before, arguing that France should not go to war. He said that--and more about this in a minute--the danger to the Revolution did not come from a handful of émigrés in Germany, but from within France, from the counter-revolution. That's important. That's worth underlining. Secondly, he argued that launching wars all over the place would merely play into the hands of the king, who at this point was still the king, and the counter-revolutionaries, perhaps paving the way for some sort of military dictatorship. How forward-looking was that? Because that's exactly what they ended up with, of course, with Napoleon. Moreover, he argued that war would separate soldiers from the rest of the people.
Indeed, the levée en masse--my guillotine almost fell down--would compromise that, because all citizens become soldiers, et cetera, et cetera. But he said something. This is an amazing little speech that he gave. I'm going to read just a few lines of it. If you think about current politics in this country in the last five years, it may also ring true. Robespierre said, "The most extravagant idea that can arise in a politician's head is to believe that it is enough for a people to invade a foreign country to make it adopt their laws and their constitution. No one loves armed missionaries. The declaration of the rights of man is not a beam of sunlight that shines on all men, and it is not a lightning bolt which strikes every throne at the same time. I am far from claiming that a revolution will not eventually influence the fate of the world, but I say that it will not be today." Amazing! He lost this debate in 1792 and, in fact, his denunciation of plots against the Revolution may have contributed to revolutionary paranoia, which would be acted out in the terror. On the 20th of April, 1792 the Girondins and the king got their wish and war was declared on Austria and French troops crossed into Belgium, that is, the Austrian Netherlands, and the wars went on and on. I want to make a couple points that are pretty important. I'm not going to use these papers to do this. The threat to the Revolution did not come just from Austria, from Great Britain, from Prussia, from Russia, from the big allies. Robespierre got this right. There are two--and the timing of this you can read about in the chapter and please do--main counter-revolutionary threats to the Revolution. The first, which was not the most important but still is worth mentioning, was what has been called the federalist revolt. This was based in cities in which merchants, free traders, played a big role. Here is a map of la belle France. Bordeaux we've already talked about.
You've got all these wine merchants and all this fancy land, and they trade in other things as well. You've got Marseilles. Toulon is not yet the huge port it had become in the nineteenth century. We've got Marseilles. You've got Lyon, then France's second city, first in gastronomy, one could still argue today. Varennes was merely where the king is caught. That's up there. By the way, in 1790, in order to undercut local elites, that is clergy and nobles, and also to impart a more rational organization of the country--it comes right out of the Enlightenment, out of the philosophes--they create départements. They name most of them after rivers, though some are named after mountains. They create a capital in each of them. None of this matters, but it's just to tell you what's going on. That's Haute-Vienne, the capital is Limoges. That's the Corrèze, the capital is Tulle. Up there is the Creuse, the capital is Guéret. This is Rouen in the Seine-Maritime. This is the Atlantic Pyrenees, the capital is Pau, et cetera, et cetera. The federalist revolt also comes in Caen; that's where the aforementioned Charlotte Corday came from, from that part. That's the department of the Calvados, which is also the name of a wonderful apple brandy. The federalist revolt comes, above all, in Lyon, Caen, Marseilles, and Toulon. Of course, the revolutionary armies, the armies of ordinary citizens in the republic, what has become the republic, crush them. They crush them like grapes. In Lyon, one of the members of the Committee of Public Safety, who was confined to a wheelchair, a man named Couthon, who you can read about, C-O-U-T-H-O-N, says he's going to plow Lyon under like Carthage; and they do execute people on the Place Bellecour. So, this federalist revolt is against the Jacobins, a Parisian-centered, far-left interpretation of the French Revolution. By the way, there are groups even further to the left.
The Jacobins really weren't that far to the left, but there are groups like The Enraged, les Enragés. There's even a guy called Gracchus Babeuf, who believed that property should be abolished. These are just very small groups. Babeuf is guillotined in Vendôme, which is south of Paris near the Loire. His trial is a wonderful source, the trial of Gracchus Babeuf, an original source. Anyway, that's the federalist revolt. But the most important threat to the revolution comes from peasants. It comes from peasants in the west of France, also down where we live, in this part of France, too. That comes later. Basically, what I'm going to talk about for a few minutes is the revolt in the west. It was often said that peasants would never march more than a day away from their fields; but certainly, as Mao, or Ho Chi Minh, and lots of other people have shown, that's not the case at all. The war was fought with a savagery, with a brutality that was simply staggering. There were massacres on both sides. The Vendée, which is in dark there, became a department, good old number eighty-five. But it became such a major blood bath that the entire counter-revolution in the west is often simply called the Vendée. I am in history literally because I read a long time ago a book written by my late and much-missed friend, Charles Tilly, called The Vendée, which sought to explain, and did explain, and really hasn't been nuanced very much over the decades, why some people opposed the revolution, taking big-time chances. Other people who didn't live that far away supported the revolution. What he found--he studied an area, it doesn't matter where, but sort of the north of that dark area, actually in the Maine-et-Loire, a different department--but here's another French word that's also English, in the bocage country, B-O-C-A-G-E. It's the hedgerow country.
He found that the people who rose up against the revolution took big-time chances--the republicans didn't screw around, and there was a lot of horrible brutality. They drowned thousands of clergy in the swirling waters of the Loire River, which is a really dangerous river, by putting them out into the river with holes drilled in the bottom of the boat, which didn't give them a big hope. They killed lots of people on the Île de Ré, which is up here off La Rochelle. Of course, the forces representing the monarchy, representing the nobles, were, if anything, more brutal. They crucified people, literally. They made them kiss the cross before they beat them to death. It was really a nasty time. Looking at people who rebelled and ones who didn't, one of the things we can say, at least about that particular area, but it really rings true, is that these were areas in which that traditional elite, noble and priest, had not been broken down by the economic and social changes of the eighteenth century. They were physically isolated. These hedges, you see them also in the Manche in Normandy. These are huge hedgerows that you literally can't fight your way through and can't really climb over. People tended to marry within their village or within a nearby village. The priests still walked tall. The noble was somebody who they still respected, even though many of the nobles had left. Many of them they still had to pay dues to the nobles. Their only contact with this kind of bourgeois world of economic change was with people who were farming taxes, for example, whom they hated, people who were putting out work into the countryside who cheated them and said, "I told you I'd give you five sous last time, but there are a lot of women doing that same work. I'm going to give you three and if you don't like that, too bad. I'll walk away and I won't take the cloth work that you've done for me. Too bad for you." Or people that collected the taxes for the nobles.
If you look up further along the Loire River where you had this sort of economic change in the eighteenth century, people accepted this new elite. They were willing to ditch the idea of the monarch. It wasn't that they all read Rousseau instead of the Bible before they went to bed, but these were big-time changes that reflected the way things had evolved. Let me give you some more examples. One of the most important moments of the French Revolution--and this is also worth remembering, and you can read about it--is the Civil Constitution of the Clergy. The revolutionaries get the very good idea that you're broke. We already know that. The monarchy is just flat broke, so where are you going to get the money? Who has money? Well, nobles who leave France have money, because they have a lot of property, particularly in areas like Brittany and in Burgundy and in Ile-de-France, around Paris. But the church has enormous amounts of money, enormous amounts of land. What they do is they essentially nationalize the church, the details you can get, and they force people to take an oath to the French Revolution, to the nation. In certain parts of France, particularly those that rose up against the Revolution, the priests don't take the oath. They refuse to. They are called non-juring, that is, non-swearing, priests, J-U-R-I-N-G. In other parts of France, the priests were more willing to take the oath and they were called juring clergy. This is important. If you just look at this map, you see this is by cantons, cantons within departments. You can see all that white up in Brittany and those revolutionary areas. It corresponds exactly to the area of people who fought against the French Revolution. Because the clergy still has enormous influence, and so do the nobles, even if they're living in England at the moment, or living in Britain, or living in the Austrian Southern Netherlands, or in the Rhineland or somewhere else in the German states. But this isn't enough.
That's not enough. We have to see really what--here we go. This is by district. This is an even better map of it. You see that in the central part of France here, priests refused to swear allegiance, as in our village, to the French Revolution. But in Brittany and in Normandy they did, massively. And in Alsace and Lorraine they did, massively. And in the north of France they did, massively. So, okay, that's fine. That's interesting. So, what's going on here? What's going on? Why does it go like that? It's not just that people all start getting together and say, "Let's not swear to the Revolution. Let's go have a drink of Calvados instead." More important changes are going on. The word "dechristianization," which I also sent around, has two meanings. One is the campaign against the church by the revolutionaries to melt down church bells, et cetera, et cetera. Dechristianization, to change the calendar so that it's no longer January, February, March, but it's Germinal, Thermidor, Ventôse, names that have to do with winds and plants and the agricultural calendar. That's part of dechristianization, but that's not the big issue. The issue is that in these areas in which the Revolution was accepted, that old time religion was on the ropes. A friend of mine, who was a great historian, called Michel Vovelle, a long time ago did a book on dechristianization. He looked at part of Provence. He looked at what people did with their money in wills. He looked at the number of people that became priests or nuns. He looked at all sorts of things--how many people baptized their children within the three days you were supposed to in the Catholic Church. What he found is that it wasn't the Revolution that destroyed the role of the church or that reduced its role in ordinary people's lives. That had already happened. It began after the counter-reformation, that is, the Catholic reformation. It was already well underway by the 1730s and the 1740s.
So, you can see political behavior here reflecting these big-time, important trends. Another way, you could look at bishops' sermons. You look at how people named their children. In the nineteenth century when they stop naming their children after local saints, for example. That's another good indication. Or, you don't have many people named Mary Magdeleine anymore in France, or that kind of thing, or in the Limousin named Marcel or Léonard. These are names of local saints. Here again, the areas that were counter-revolutionary, particularly up in Brittany and Poitou and those regions there, the big story was these major kinds of change. Ultimately, what I'm saying is that religion was very likely to be--arguably the most important cause for people supporting or opposing the Revolution, particularly the radical revolution as the Committee of Public Safety sat around this big table and made big decisions. What about the terror itself? The interpretations of the terror basically have gone like this. One, it was sort of a bloodletting by the very poor of their social betters. Well, that's pretty much nonsense. Secondly, that it's a reflex action to save the republic. That makes more sense, basically, and somebody in the 1930s, before there was really quantitative history, went to look. A guy called Donald Greer said, "Let's look at the victims of the terror." What he finds fits into where we started, which is Maximilien Robespierre and his attempt to save the Revolution. Most people in the terror, there was a higher percentage of clergy and nobles executed, because there were small percentages of clergy and particularly nobles in the French population. But the vast majority of people that either were given a prison sentence, or put their heads through the little window, or were shot down, executed as in Lyon, were peasants and were artisans. Why? Because there were more peasants and artisans, above all peasants, in France than any other social group. 
That's good to know, but even more important, the incidents of the terror in the French Revolution come in areas that are battle zones. They reflect the war, the professional soldiers of monarchies fighting in the north of France or fighting in the east of France. They reflect the civil war. A lot of people were put to death or executed because they were fighting or participating in supplying troops in battle zones. It's possible, as my friend David Bell has argued, to say that parts of this represent the first total war. I'll talk more about that when we get to Napoleon. That's an interesting subject. Anyway, the terror was no sort of organized bloodletting or spontaneous--it was that, too, despite the horrible incidents, despite Nantes, and despite the massacres in the prisons, and the September massacres in Paris. There was a logic to this and it had to do with trying to save the Revolution. Back to Robespierre--fumbling through his papers here to get to the appropriate point--they tried to kill Robespierre, of course, as the terror gets more organized. He had a tendency, as did Saint-Just and the others, to say, "If we just have one more terror, one more round of terror, we'll finally save the Revolution and all will be well." Lots of people in the assembly looked around, reasonably enough, and said, "We might be the next victims." He became increasingly tired, fatigued. He was never a dramatic speaker, but it was as if he no longer wanted to be heard. There were two attempts on his life. It finally got him to come to the Jacobin Club and to go to the convention. It's possible to argue that death and revolutionary immortality was something that he chose because of his own childhood guilt, primal guilt, about his own father and the death of his mother. For someone who was ascetic and withdrawn and preferred to be alone, he became increasingly that way, unable to act.
In 1794 in the month of Thermidor, he became increasingly obsessed with his own end, his own demise. He says, "If providence has seen fit to snatch me from the hands of the assassins, it was to ensure that I would profitably employ the moments that still remain to me." Yet he could only, as people howled at him from every conceivable angle in the convention, murmur almost inarticulately. When he mounts the rostrum on the 8th of Thermidor, he gives an incredibly clumsy speech for Robespierre. "I need to unburden my heart. Everyone is in a league against me and against those who hold the same principles that I hold. What friend of the nation would wish to serve the nation when he no longer is allowed to serve it? Why remain in an order of things in which intrigue eternally triumphs over truth? How may one bear the torture of seeing this horrible succession of traitors?" Perhaps he thought that his own death would rouse the patriots, that is the Sans-Culottes in the sections, in the neighborhoods of particularly central and eastern Paris. Eleven times he is shouted down, people shouting, "Down with the tyrant." He says only, "I ask for death." He leaves with his brother and with Couthon and they go to the town hall; the same building is not there, but it's in the same spot. They wait upstairs. Occasionally somebody would say to him, "Why don't we go out and rouse up the sections? We are at great risk here." And they just sit there. They sit in this room. They sit there all night. Finally, inevitably the troops of the conspiracy run up the stairs. Couthon tries to escape in his wheelchair and the wheelchair bounces along with him down the stairs. Robespierre either takes a pistol and shoots himself in the jaw or is shot in the jaw. He spends time trying to rub the blood away, because he had this thing about clean, white clothes.
Until they know what to do with him, at one point he's lying on a table and he points, with Saint-Just, they point to the Declaration of the Rights of Man and of the Citizen. They said, "We did that." And indeed, they did. They find themselves taken to the Conciergerie, that is the prison of the Conciergerie, which is still there. It's no longer a prison. It's one of the three great gothic halls in France, along with Avignon and Mont Saint-Michel. Of course, the trial is, as he would have had it for the enemies of the Revolution, quick, with no defense really permitted. The next morning they clip his hair back, as one did, so that the long hair would not in any way slow the blade of the guillotine. They take him, put him in a wagon, and it takes a long time, because there were no major thoroughfares through Paris. They take him to the Place de la Revolution, which is now called the Place de la Concorde, and they pass by his house, the house of the carpenters where he stayed, where he had his happiest moments. As he gets closer to the Place de la Revolution, where he's going to meet Sanson--the executioners are all--it's a blood trade, after all. Rather like butchers, they all intermarry. They lived outside the walls of cities. He's going to meet Sanson. As he gets there, he surely noticed that the women were more and more well-dressed and so were the men. As he gets there, they're shrieking at him obscene terms, "Down with the maximum," the maximum on the price of bread. It's a very different crowd. Thousands of people came to hear, to lean forward to catch the last words, to see the head held above, as they did with Louis XVI and with Marie Antoinette. "Hold my head up. It's a good one," said Danton--or something like that, when they'd executed him on Saint-Just and Robespierre's orders before. He meets Sanson with blood pouring out of the bandage holding his jaw, it having come loose.
Even at the end, even as they shove his head into the little window, blood is pouring from him. One doesn't know--we can't imagine as he looked up as his head is down. He looks up and he sees the throngs who have paid a fine price to sit on the roofs, like living across from Wrigley Field or something like that, to see the death of the tyrant--the tyrant, or the person who saved the Revolution, or one of them. It's very hard to say. It depends on your view. But certainly, one can imagine that Robespierre breathed a breath of relaxation, of leaving an existence that had tortured him and of gaining for the Revolution--he hoped--revolutionary immortality, but also perhaps, one can argue, paying a debt, a debt to his family that stemmed from the death of his own mother long before in Arras. See you on Wednesday.
European Civilization, 1648-1945, with John Merriman
17. War in the Trenches
Prof: We're going to talk about the war today. Let's do that. I assume that you guys all saw Paths of Glory, so I'm going to talk about the mutinies in a while. Jay Winter is going to talk about essentially the Great War in modern memory. To make a nice transition to his lecture, I'm going to end with something that he wrote about how reality and art came together in a terrifying way in 1918. Okay. Now--comment faire ça? Qu'est-ce qu'on va faire? (How to do this? What are we going to do?)--so, just a few things at the beginning that are obvious. They're in the book. It didn't work out the way Schlieffen wanted it to. The point about the invasion of Belgium was that it brought Britain into the war. The Germans were counting on the fact that it would take Britain a very, very long time to raise an army, not a navy but an army of any size. What they called the British Expeditionary Force does arrive and takes its place next to the French. But it's very small and they don't have conscription until late in the war. Unlike the French, they did not have military conscription. Basically, to make a long story short, in part because Germany, as France, as everybody was worried about the home front, basically what happens is they hurt their chances of pulling this off by moving some divisions to Alsace to try to blunt the force there. Also, some more are headed off to the eastern front, because they start to realize that the Russians are mobilizing more rapidly than they thought they could. Basically, it's possible to argue that the Battle of the Marne saves Paris and saves France. Schlieffen would have gone crazy about this. Remember, the last thing he said supposedly in his life was, "Let the last soldier touch the English Channel and then come down and hit Paris." But they turned down before that, and the first airplanes are used as reconnaissance planes. The pilots literally had to carry pistols with them at the very beginning.
They had not figured out a way to put machine guns on so that the bullets wouldn't hit the propeller and then come back and kill the pilot. So, all this took some doing. But the first planes were reconnaissance planes. At one point in this huge engagement featuring enormous armies, in the German case supplied by trains, many every hour, going across the Rhine, the French planes see that there's a big gap in the German lines. So, they counterattack in the famous story everybody knows. Again, what I want to insist on is--look at where it says Battle of the Marne. There's a town called Lagny there. Now it's practically a suburb of Paris, L-A-G-N-Y. You could hear the battle in Paris. You could hear the roll of thunder of the guns. When you ask how the French home front holds together so long, it's that the Germans are so close. In 1918, they will be close again. In 1918, they're firing this huge gun which the British soldiers called "Big Bertha." It's lobbing from way, way the hell up in the north. It's lobbing shells from behind the German lines all the way to Paris on Easter Sunday 1918. One shell hit the Church of Saint-Gervais, another hit an apartment house on the Boulevard Port-Royal, one on the Rue de Rivoli not too far from our place. The Germans are so close. But in 1914 what happens is that literally the commander of Paris, whose name was Gallieni--he has a métro stop named after him; a lot of these guys do--commandeers the Paris taxis. They are literally carrying soldiers out to the front at the Battle of the Marne. What happens is the Battle of the Marne stops the German advance and then the race to the sea begins. They try to outflank each other. Again, to borrow a ridiculous football analogy, but it's not so ridiculous. It doesn't matter if you don't follow football. If you're trying to get around the outside before the outside linebacker can get there and you're trying to turn the corner. Basically, that's what they're trying to do.
Both sides are trying to turn the corner and they end up at the sea. At that point, the trenches are dug literally from the sea all the way to Switzerland. The war, to repeat what I said the other day--only a couple people who had seen what was going on in the Russo-Japanese War of 1904-1905 could have imagined this war, in which the offense was supposed to have, as in 1870-1871, every advantage. Remember the French commander said élan vital, "We need the frenetic patriotic energy. That's all we need. We need to attack and keep on attacking." It doesn't work out that way. The reason that you have all of these millions of people killed, the flower of British youth, the flower of every youth in that period, is because this offensive war becomes a defensive struggle in which breaking through is almost literally impossible. Thus, backdrop to what you have seen in Paths of Glory. The weapons of the war included the shelling: more people were killed by shells in World War I than died in any other way. There are new and horrible ways of dying, flame throwers, for example, poison gas, which is first used by the Germans at Ypres, one of the many battles of Ypres. There are twelve battles of the Isonzo, the same river in northern Italy. There are several battles of the Somme. These battles keep on happening, because large chunks of real estate are virtually impossible to conquer. So, trenches are defensive weapons. One of the reasons that the breakthrough is impossible is that when you're going to try to break through these trenches, what they have is what they call creeping barrages.
They start trying to--and lots of people died with what the Americans call friendly fire--they try to coordinate the shelling to go in advance of the people going over the top and then trying to carry sixty pounds--pick up sixty pounds some time--worth of stuff on your back, and go down into these horrible craters full of all sorts of crap, and dead floating rats, and dead floating bodies of human beings, and to try to break through. Then you run into machine guns. Machine guns which can fire what? I just saw this morning or last night. I think it's 600 rounds a minute. The Gatling guns had first been used in, I think, the American Civil War. But these are much more rapid firing. They aim basically at your knees. They just sort of go back and forth, back and forth. Then barbed wire. One of the things that soldiers had to carry with them were wire cutters. Sometimes the wire cutters weren't equal to the task of cutting the wire. It's hard to cut wire if people are firing machine guns at you as well. That's why the trenches, which you'll see some real ones in a minute, are fairly elaborate defensive weapons. Everybody has seen the footage of real battle. Sometimes--they've wrecked it, but the Imperial War Museum, which used to be much better than it is now in London, but it's really worth seeing. They used to have this amazing small clip. You see these three guys and they're about ready to go. One guy blows his whistle to say, "Follow me." The first guy goes up and he gets his head over. Then he's dead. He falls back. The second guy goes up and he gets a little further. Then you see his body hit. The third guy, when the clip ends, is just about to get out. You don't know what happens to him, but his chances weren't very good. Breaking through. There are debates on how ridiculous these people like Nivelle were, or Foch, and Joffre and the whole gang, because they keep ordering these attacks. "The breakthrough is going to come next. We've really got them. 
We're going to break through." But they don't break through. And they don't break through. They can't break through. That is background for the mutinies. The first real breakthrough doesn't come until March, 1918, in the Ludendorff offensive, 1918. Then they overrun their supplies, and it kind of snaps back like a rubber band and pushes them back. The Germans, at that point, for reasons I'll explain in a minute, know that they're not going to win the war. They can't win the war. What's going to happen is that when the war ends, and more about this when we talk about the post-war, is that the war ends with German troops far inside France. How do you explain that back to the home front? The Berlin home front has started to collapse. There's great deprivation, great problems getting enough to eat. And that situation, that will make it easier, later, for Hitler and many other little would-be Hitlers to argue that you were winning, but you were stabbed in the back by the Jews, and the Communists, and the Socialists, and the peaceniks, and all of these people, from their point of view. When you do these creeping barrages, you're indicating where the attack is going to come. Behind the trenches the Germans, as do the French, have railroad lines that are used to bring in reinforcements, to bring in supplies. What you do is you bring in supplies. You bring in reinforcements. If you read a great book by Paul Fussell called The Great War and Modern Memory, it's about the war poets. It's about Siegfried Sassoon, and Wilfred Owen, and folks like that. Rosenberg, I quote him. I think one of his poems is in the book. That is an amazing look at the whole thing. That's a ready-made paper topic, to take a couple of those poems and talk about the war. Breaking through is very, very difficult. It's almost impossible. That's why you have the carnage.
That's why you have, as I said the very first day if you were here, that there were more British soldiers killed or seriously wounded in the first three days of the Battle of the Somme, like the river, than there were Americans killed in World War I, Korea, and Vietnam. In three days. The first three days. You're talking about horrific losses. You're talking about an expectation. Try to put yourself in the same thing. I think I have that quote in there. Somebody said, "You discuss your own death as if you were discussing a lunch that you were planning tomorrow." Someone else said, "I didn't want to die, at least until I'd finished reading The Return of the Native." That, again, is background for the mutinies. What is amazing is that--and again, the French situation because of the precariousness--it's difficult to explain how people could have continued to fight in many ways. Again, looking at the Austro-Hungarian Empire where they had huge losses. The armies hold together, really, until 1917 and even beyond in the Austro-Hungarian Empire. The Russian case, too, is remarkable. The Battle of Tannenberg is just an amazing battle in 1914. There are so many casualties they couldn't even count them. There's so many people dead. There had never been a war like this. No one had ever seen, couldn't have imagined a war like this. The proximity, also--the English had the advantage of having the channel there. But it's one of these, if you've been to Victoria Station, it's one of these things they always say about the war, but it's true. You go to the officer's club in Victoria Station, have a decent lunch, knock down a couple pints of beer, and you're on the front and can be dead by early evening. The miners, these Welsh miners in Belgium, they tunnel under this sort of promontory that's sticking up, that's a defensive position for the Germans. They bring in all these munitions and they blow the thing up. They blow this huge thing up.
Supposedly, people in Kent, on and near the coast of the English Channel, could actually hear the explosion. The war is that close. Of course, it's close in other ways. Imagine that you lived in a village in France or anywhere. The facteur, or, in our case, the factrice, the mail carrier comes. What you don't want to see is you don't want the mail carrier to come to your house. You don't want mail. He would be carrying a telegram saying, "Be proud of X, who has just died for" you fill in the country--Turkey, Bulgaria, Romania, France, Germany, Russia, Britain, anywhere. So, it became a war like no other war, with the only possible exception the Spanish Civil War. It has given birth to really the greatest writing about arguably any war, certainly, in history, and arguably any events outside of maybe the rise of Hitler and National Socialism in Germany. It was like that. It really couldn't have been any other way. They're still arguing over these battles. Passchendaele, we once drove up--Passchendaele was one of these places where they first used poison gas. Now it's a lot of lotissements, in Belgium, a lot of housing developments. I just wanted to go and see there. You can't even see where the hell Passchendaele actually was. If you're going to go see these battlefields, the one to go to is Verdun, which I'll talk about in a minute. There you can go through these forts, Douaumont and Vaux, to imagine what it's like. You can see some places where they've left, in the winds, and the mists, and the terrible--of that part of France. The one road going from Bar-le-Duc, the sacred road, supplying Verdun. You still see there's one place where they've left the guns with their bayonets. There was a lot of hand-to-hand fighting there. That's the place where Falkenhayn said, "We can afford to lose more children, more young people, more young men. We will simply outbleed them."
He hurls one attack after another over most of 1916 against Verdun. That's where so many people die. Of course, that is the background also for these mutinies. Okay. Here's the western front in 1915-1917. You can see that it really doesn't move at all. Again, there is Paris and there is fighting. By the way, these places like Reims, with the beautiful cathedral which was rebuilt thanks to the Americans after the war. The Carnegie family gave a lot of money, and ordinary people rebuilt the cathedrals. One of the great cathedrals anywhere in Europe. Reims was right on the line. Of course, Reims just got pounded. The whole place was just totally devastated. Arras, there's another example up there, right on the line. I can't remember if on the first day I related a story of people that we knew who, about eight years ago now, were actually killed because of World War I. They were killed. There's a family we knew who were cousins of really good friends of ours that would come down there. There was a boy the age of my daughter; I guess he was twelve then. We met them and we had a good time talking to them. Then I asked how they were when I went over at Thanksgiving. In France we don't celebrate Thanksgiving, but I had ten days, so why not? They were dead. Not the father, but the son and the mother had been killed by World War I. Their house was right in Arras and in the basement they had a fire. This was just a few years ago--you were ten years old when this happened. This was World War I. Still killing. They were killed because there was a fire in their basement. They didn't know that on the other side of the wall were all these munitions stocked right near the front in World War I. The fire caught and it blew up the house. The father wasn't there and this little guy and his mother were killed, were blown up, killed by World War I. In the 1920s there were people killed all the time.
Every couple of weeks--you still see in the paper now that they found a bomb in Berlin from World War II from all the bombing, or in Dresden and in all these other places. In World War I, there were constantly farmers who were blown up as they were plowing, constantly, as they were plowing on these battlefields around the Chemin des Dames, for example. You can see the Somme there. That's a good one to have there. But the Chemin des Dames is up near there. It's north of Soissons. Anyway, if you go to any of those départements, if you go to the Marne, which is where the Somme basically was, or in the Pas de Calais, which is Picardie there, there are just fields and fields of these cemeteries there with hundreds of thousands of crosses. One can go on and on about this, but there had never been anything like it. So, war became the dominant experience in the lives of Europeans, period. No matter how old you were, you knew somebody who died. You had a relative who died, period. There are in France, where much of the fighting was--the western front fighting was there, and in Belgium, there are 36,000 communes, which is an administrative unit, 36,000. Twelve out of 36,000 had nobody killed in World War I. There are places you can go, particularly if you're in the south of France, where you can go. They were all taken. People who were skilled workers, who could work in munitions factories, could get out. There were a lot of tensions between rural and urban people, because urban people who had rationing problems said, "Oh, the rural people are hoarding their products" and stuff like that. But there are places you can go where you see these, and I'm a counter. I count things all the time. It's maddening. You'll find there's one town where seventy-four people died. A very small town in the south of France, in the Aveyron. There are hardly seventy-four houses. There's a village that's quite beautiful. 
It's a twelfth century church way up in the Cévennes mountains where we take tourists. When you walk there, the monument to the dead is inside the church. When you show people this beautiful Renaissance entryway, the portail, there are twelve people listed as killed in the war. You cannot count. There aren't twelve houses. You can't count twelve houses. People don't live there anymore. Hardly anyone's there. We know more about the western front and now there's some good books appearing on the eastern front, but it's the same thing in every country that you're talking about. The numbers of people killed will make clear what the countries were that really suffered the most. They were Germany and France, followed by Russia, but also Britain. Don't forget Britain. Remember I said four empires disappear? The fifth empire arguably disappears in the end, because of dynamics caused by the war. People in the so-called colonies fighting for the British Empire, they began to think, "Why shouldn't we have independence? Why shouldn't we have freedom, too?" Of course, at the Battle of Gallipoli, which is one of the great tragedies of the war, Churchill, who had ten ideas a day and nine of them were bad, as one of his critics said, declared, "We'll take the pressure off. We'll knock the Turks out of the war." They're going to have this impossible assault on Turkish fortified positions. They said, "We'll knock them out of the war with the Australians, and the Indians, and the New Zealanders. We can afford to lose them more easily. They're not really ours." Of course, that still resonates in places like New Zealand, and Australia, and India, as well it should. Anyway, that's another complicated story, and we have other stuff to do. Read things on this. It is a phenomenal thing. Mutinies. Just a little bit to the mutinies. The Somme you can read about, and all of that.
When I used to work at Vincennes, in the military archives there, because I was writing about 1830 and 1848 and all that stuff, I was reading day by day the correspondence from various regions in France. I was trying to find these documents that I knew were there. This was when I was just starting out. I wasn't much older than you guys. I'd like to think that. Younger than I once was, but anyway, whatever the song is. And I knew the stuff was there. The person that ran it was out having sort of a torrid affair with this guy all the time. So, she was never there at lunch. And she didn't know what she was doing anyway. I bribed one of the guards to let me back in the stacks where you're not supposed to go in French archives. But the guy was a stamp collector and I knew that. So, I kept leaving all these jazzy stamps on my table. Finally he said, "Oh, those are beautiful stamps. Would you like them?" The next thing I knew I'm in the back. I remember what I saw was this huge thing of boxes. This is in the mid-1970s. This huge number of boxes that were literally chained up. They were in this cage and they were chained up, really chained up with big locks and all that stuff, big security. I said, "What's all that?" He said, "Those are the mutiny documents. Those are the documents from the mutinies in 1917." Now, finally, a guy was able to get in, because in France there's a fifty-year rule--fifty years after the fact, you should be able to consult documents. This guy was finally able to get an exception to go work on these documents. So, the thesis that was published is very good, by a guy called Guy Pedroncini, whom I don't know, and it's on the mutinies. Now we know about the mutinies. What do we know about the mutinies that confirms what you saw in the film? Several things. The mutinies spread rapidly.
They did, indeed, begin with soldiers who were being sent to the front baa-ing like sheep, as if they were being sent to a slaughterhouse, because that's what they're being sent to. What's the difference between a soldier carrying sixty pounds of equipment going to some attack that's going to go nowhere, where his chances of being killed are enormous, and sheep being led to a slaughterhouse? What is the difference? Really not much, except you're dealing with a human being and not a sheep. That was a bad sign for these officers. When the mutinies started, there were only really four reliable divisions, they figured at one point, between Paris and the German lines. The incredible thing was--is because soldiers never talk about the battle when they go back. They don't talk about the battle. It was impossible to communicate what was going on. The mutinies were one of the well-kept secrets. Nobody knew. The Germans didn't know at the time. Hardly anybody knew. Nobody is probably too strong. The mutinies involved thousands, and thousands, and thousands of soldiers. In some cases they elected people to represent them. In a few cases where the officers maintained the upper hand, they summarily shot mutineers. Do you say mutineers? I don't know, people who mutiny. I confuse these things. They were massive. But they had nothing to do with socialist, or anarchist, or pacifist propaganda at all. There were attempts. There were congresses. There was a congress in Sweden. There was another one in Switzerland. The French government would not let representatives go to those congresses. The first reaction in the high command was that, "Well, the socialists are now showing their true stripes. Anarchist propaganda is working." Look at the Bolshevik Revolution. It had not yet happened. That was in October, but the Russian Revolution in February had already occurred. It has nothing to do with it. What they objected to, they were not defeatists at all.
They did not want the Germans to win the war. But they realized that they weren't going to win the war either and that this strategy was completely futile. There were cases of fraternization. There are very famous cases. Christmas 1914, on the front way up near Belgium, on the British side particularly. They start yelling back and forth, the Germans and the British. They say basically, "Screw this stuff. Why don't we take the day off?" So, the Welsh were singing Christmas carols to the Germans and the Germans were getting their best singers and singing back. They actually did get together and play a soccer game. They found a place that wasn't totally chopped up and played. In 1915 on Christmas a British soldier said, "Why don't we do the same thing?" They put him up against the wall and shot him. There were these rumors that were very persistent during the whole fighting on the western front that underneath, underneath Reims--where, after all, were all these champagne caves--or underneath Albert. That was the town where the statue of the Virgin Mary on the top of a church hung like this. The Germans said if it falls one way we're going to win. If it falls the other way, the French are going to win. That somewhere the people who are lucky enough to be alive were down there. They would come out and take food, and they would take wine rations, and stuff like that. They would take them back from the dead, and they were all partying underground. They were the lucky ones. They were all fraternizing. It wasn't the case. Still, you hear all these stories. The great war poets sort of saying, "Yeah, this German guy and a British guy find themselves in a crater, both on the verge of death, and they're discussing Nietzsche until somebody finally comes and rescues them." A lot of this may be apocryphal. But the mutinies had to do not with defeatism; it had to do with the sheer madness of it all. It was mad.
And there are still historians who are saying, "Well, the creeping barrages, if they had made them a little bit more organized then maybe the breakthroughs would have come." They're still defending the impossible after all of these years--Something happened. I want to show you these, please. I think I'm turning it off. Simon, can you--? I am nul. It was very dark at the Battle of Verdun. What happens? Could you do this? Okay.--These are real ones from Verdun. Verdun was 1916. It begins in February. It rains all the time in that part of France. To explain the mutinies is also to understand Verdun. This is a reconnaissance plane. Those are craters there. Those are some more craters over there. That's Fort Douaumont or Vaux. When you go to them, and you really should go to them, it's a long way from Verdun. Verdun is the town that's near there. The one I'll never forget is when you go in and you see--after the war, like as people do in churches, people would come and put plaques--the most moving one is "To my son, since his eyes closed. Mine have not ceased to cry." Next one, please. So, you'd be going in there. Those are where the plaques are, right there. In fact, that plaque that I just said is right next to that. Now, you're here and they say, "Over the top, men." You're trying to get on the other side there. How are you going to do that? That's all barbed wire around there. How are you going to do that? You can't. That's inside Vaux or Douaumont. Night patrol. Again, there's the trench. They're attacking. But you've got to climb over your own barbed wire, too. That's barbed wire that's protecting you from if they attack. The casualty rates are just absolutely phenomenal at all these. The casualty rates here are not the same as the Somme, because it wasn't a massive attack. You're defending it against the Germans. They're taking care of some people that have been hurt, carrying somebody back. The poor guy looks a little peaked there. Telephones. 
The Russian phone system was so bad the Germans could hear every single word that they said on the eastern front. Next, please. Well, you get the point. There's the machine guns aimed low--medical. That's fantastic to walk in there. But you have to remember that a lot of the fighting is on the outside in the mists, and the snow, and the crap. It's an amazing thing. But they held. They held. Marshal Pétain became the hero of France. He would have a later incarnation in World War II. We'll get back to him. They hold. How are you going to run up that hill carrying sixty pounds? There's a commune, by the way, called Douaumont, which is the only commune out of the 36,000 that no longer exists because it was so battered that there's a difference in height in these hills of fifty and 100 feet. It could never be rebuilt. Next, please. We get the scene. It's amazing how little people knew. What they knew on the home front, and there's a very good book edited by Jay Winter and his friend, Jean-Louis Robert, about capital cities at war, about London, Paris, and Berlin, comparing the home front. It's really good stuff, how little people knew about this. It's like an Italian said, "Do people really imagine that we just jump up and down screaming ‘Long Live Italy'?" One of the most amazing things is that people in that hell, actually that more of them didn't mutiny. That's one of the most incredible things about the whole bloody mess. They died in hell, they called it Passchendaele. That was a place where the British gained four miles, that's about seven kilometers, in exchange for 300,000 dead or wounded, 300,000. Take a football stadium like University of Michigan or UT Austin and fill it up three times, and imagine that you know those people. That's what it was like. 1917 changes everything. 1917 changes everything because two key events happen, and they're obvious. One is the Russian Revolution in 1917 in February. That's A.
Then B, and this is still point number one, is the Russian Revolution in October. It's clear that the--we'll talk about this or you can read about this. The Kerensky provisional government is under tremendous pressure from the allies to stay in the war. But it is clear that when the Bolsheviks seize power in October 1917, the Russians are going to get out of the war. "Peace, land, and bread" is a powerful, powerful slogan for the Russian soldiers. It's amazing that the Russian soldiers didn't all go back to Vladivostok, or to Kazakhstan, or to wherever, that they were able to hold on as long as they did. That's going to change things. It's at that time that the second event happens. That is the Americans come into the war. The Americans--outside of places like Chicago, and Milwaukee, and Philadelphia maybe, they had lots of Germans--most people in the United States, the tendency was to want the allies to win, to fight another day, and the Americans were angered by the submarine warfare campaign. In 1915, a boat called the Lusitania was sunk. There had been warnings posted by the German government, saying, "If you're a passenger, don't go on that. You're going into a war zone." The Germans claim when the boat was sunk that it was full of munitions. The Americans and the British said, "No, it wasn't." In fact, it was. That was proved about twenty years ago by divers. It sunk near Ireland and lots of people died. The Germans know that the only way they can win the war is the unrestricted campaign of submarine warfare to try to keep Britain from being supplied by American supplies. Woodrow Wilson, Princeton, who won the election, kept us out of war. He takes the country into war, and eventually he can't get the Treaty of Versailles passed even by the isolationist American Senate. So, the Americans go to war in 1917.
I took Yale alumni, besides taking them to Épernay to drink champagne, a lot of them wanted to go to this a long time ago, to Château-Thierry, which is the first place that American soldiers fought in 1917. Now, it wasn't the American troops that made the difference. In the imaginary, the imaginaire, in the perceptions of the French, it was the arrival of General Pershing, who had made his career slaughtering Mexicans in Mexico. The image was that the far west was coming and these sort of gun-toting Dodge City types were going to turn the tide. That's not what happens. What turns the tide is that once the Americans are in the war, the tremendous industrial strength of the U.S. means the curves are going to cross. By that I simply mean the Germans know they aren't going to win the war. The British, and the French, and the American high command know they're going to win the war. They think they're going to win the war in 1920 or 1921, maybe 1919, if all goes well. There was a quote in there after they just had--at the cost of thousands of lives--I think it's still in there. They had gotten a couple of kilometers of territory back from the Germans. Somebody says, "At this rate, we'll get to the Rhine in the year 2006," I think is what they figured. The long duration, of being in until the end, was going to be a long time, if you were able to survive. Those are the two big events, the curves cross. 1917 is also an important year because tanks begin to make a difference. Tanks can't do anything in these craters. They get stuck. Their treads just sort of spin like a car stuck in the snow in North Haven or something. They don't make any difference until actually they can break into the open. At that point, then they can be a way of protecting infantry behind them. So, 1917 really turns it around. To make a long story short yet again, in 1918, by this time, Hindenburg and Ludendorff basically have taken over the government.
Basically, the Second Reich is now controlled by the military. Of course, Hindenburg has a rather pernicious role in the long run to play. He was determined to destroy the Weimar Republic, even though he was president. As he in 1932 says, "We'll bring in the Adolf. We'll bring him in as chancellor." So, Ludendorff said, "Look, we've got to do it now. If we don't do it now, it's never going to happen." So, they throw every conceivable resource into this offensive. They do break through. They do break through. You can look at the maps in the book. They get a long way. But then it snaps back like a rubber band. They overrun their supplies, as they had in the big offensive in 1914. They begin to overrun their supplies. They get tired and then they're pushed back. At that point, the worst days of the bombardment of Paris have ended. The allies are sure they're going to win the war, and that the Germans and the Austro-Hungarian Empire, which is almost on the verge of collapse, despite the sheer inefficiency of the Italian military, they know that it's going to collapse and that Russia's coming out of the war in the long run did not make that much of a difference. The Italians are able to stabilize the front in Austria-Hungary, and the whole thing is going to collapse. And in the Austro-Hungarian Empire, the nationalities are putting forth their claims. Franz Joseph dies in 1916, and it's not going to go on that long. Finally, on the 11th of November 1918, in a railroad car near Compiègne in the forest, not very far north of Paris, they sign on the dotted line and the armistice is declared. In 1940, Hitler would accept France's surrender. It wasn't actually the same railroad car, but they told him it was, also in the forest near Compiègne in 1940. The war ended. More about this later. Basically, France in victory is not as strong as Germany in defeat. Germany is industrially a much more prosperous country.
This will hang over the negotiations at Versailles, because the French demand that somebody pay for the war, which France suffered more than any other country in terms of its agricultural land being chewed up, the finest land in France, etc., etc. That's going to hang over the peace proceedings. I want to make just a few comments before I end with Jay Winter. We have five minutes left, so I'm going to do that. The highest percentage of losses was France, with 16.8 percent of those mobilized killed. In Germany, 15.4 percent killed. But if you take those in combat, it's twenty-two percent officers and eighteen percent soldiers. Remember, officers weren't all fancy generals who were sitting drinking champagne, plotting the deaths of all these people. The junior officers, and this is also the case in the British sense, the flower of British youth from Oxford, Cambridge, etc., etc., they're the ones that blew the whistle and said, "Follow me, men." And they jump over, armed with only a pistol. They're toast. They get killed in even greater percentages. Anyway, Serbia loses thirty-seven percent of all its combatants. They don't have as many. Turkey, twenty-seven percent; Romania, twenty-five percent; and Bulgaria, twenty-two percent. Now, think of this. The war starts in early August, 1914, and it ends on the 11th of November, 1918. Every day of those years, every day. Think four years back in your own lives, and then every day, 900 Frenchmen were killed every day, every day. That's a lot of telegrams. "Be proud of X." 1,300 Germans were killed every day. The death rate was higher in World War II. Of course, in World War II, the Soviet Union has an unbelievable death rate, twenty-five million people die, some of them in Stalin's Gulag, but most of them because of the war. The death rate is higher. July 1, 1916, the first day of the Battle of the Somme, 20,000 British soldiers were killed. Not just killed and wounded, dead in one day.
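Those daily figures can be sanity-checked with a little arithmetic: from early August 1914 to the armistice on November 11, 1918 is a bit over 1,500 days, so 900 French deaths a day comes to roughly 1.4 million, and 1,300 German deaths a day to roughly 2 million, both consistent with commonly cited totals. A minimal sketch of that check (taking August 1, 1914 as an assumed stand-in for "early August"):

```python
from datetime import date

# Assumed start date standing in for "early August 1914".
war_start = date(1914, 8, 1)
armistice = date(1918, 11, 11)

days = (armistice - war_start).days + 1  # count both endpoints

french_dead = 900 * days    # "900 Frenchmen were killed every day"
german_dead = 1300 * days   # "1,300 Germans were killed every day"

print(days)         # 1564 days of war
print(french_dead)  # 1,407,600 -- roughly 1.4 million
print(german_dead)  # 2,033,200 -- roughly 2 million
```

The point of the exercise is only that the per-day rates the lecture cites and the aggregate losses historians cite are the same staggering numbers viewed two ways.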
They were there to go over the top, and they're dead at the end. Unlike previous wars, disease didn't play a major part. Unlike, for example, the Crimean War. Though the Blue Flu, sometimes called the Spanish Flu, as you know will kill more people in 1918, 1919, and 1920, than the war. That's the pandemic. As I said, most people die of shells, followed by machine guns and flames, despite progress in medicine. Also, things like shell shock were first identified at this time after the war. Freud was very interested in that, among other people. The psychological--I wasn't--you went into the Paris Metro or the London Tube and you saw people begging with one arm, or one leg, or no legs. You saw people who had also choked out their lungs on gas or who were blind. They were all over the place. Europe was a country of widows, especially in countries like Italy where widows still wore black all the time. Europe was a country of widows. If you had a demographic curve, a triangle, it was like a shark had eaten a huge bite out of the male population between eighteen and, say, fifty-five. The length was simply staggering. The Battle of the Somme lasted five months. Gallipoli lasted more than eight months. Verdun, ten months. Ypres, in 1917, four months. On the Battle of the Somme, you talk about how war influenced people's lives, four million men participated in the Battle of the Somme, four million. That's a phenomenal statistic. More than a quarter were killed, captured, or porté disparu, classified as disappeared, nothing left. Battlefields were no longer called the field of glory. That went. The language went. I make an allusion to that, which is an obvious one, at the end of what you read. Also, there's a brutalization of the sense of humanity that you lost because you were dealing with so many people dead all around. You were fighting for your life.
The attitude that people had toward other people changes, and the demons of the twentieth century--fascism above all--would be built on that dehumanization. Difficult to imagine, though not impossible, the Holocaust without World War I; but given the Turks and what they did to the Armenians, it's hard to say. Also, atrocities. There were atrocities. Now there are a couple of good books on atrocities. Most of the atrocities were committed by the Germans in Belgium. They executed 5,500 Belgian civilians. Edith Cavell was the most famous, the nurse. In part because German soldiers believed that they were being picked off by civilians--as had happened in France in 1870-1871. But the Russians committed atrocities in east Prussia and in Galicia. The Austrians, who had been told that the Serbs were subhuman, committed atrocities there. There were rapes. Rape had not yet become an arm of combat as it would with the Russians after World War II, but people were treated like animals. Hitler said in 1939, "After all, who will remember the Armenians?" That's an incredible, chilling thing. So, I want to end simply with Jay Winter, whom you're going to meet soon, assuming I can find this. It's about a haunting film done by Abel Gance. It's called J'Accuse, I Accuse. It's not the same thing as Zola's I Accuse; it's another one. Made in 1918-1919. The hero, Jean Diaz, is a wounded soldier poet. He begins to lose his mind. He escapes from the hospital and he reaches his village. There he summons the villagers and he tells them of a dream. It starts on a battlefield graveyard with wooden crosses all here, and there, and everywhere. A huge black cloud rises above it, and magically, ghost-like figures emerge from the ground. They're wrapped in tattered bandages, some limping, some blind, walking with upraised arms stumbling blindly like Frankenstein's monster. They leave the battlefield and they go home. They go from the grave to their villages.
And they want to see if their sacrifices have been in vain. And they get back to their villages and what they find is that their wives have cheated on them. They find that people are still ripping people off by false weights at the market. The petty ways have continued despite their horrific losses. They say, "You must mend your ways. We didn't go through all of this hell so that you would continue to behave like you do. The world, after all, must be a better place. Isn't it a better place now? Won't it be?" That's the big illusion, by the way, about 1920s and 1930s. The world wasn't going to be a better place. It wasn't at all. They believed their mission is fulfilled. They go back to their graves. After recounting this dream, the poet, now totally mad, accuses the sun above of standing idly by and watching the war go on. Then he dies. The oddest thing about this, about how art and reality merge, is that this film was made before the end of the war. I'll bet Gance, the producer, got permission from the army to have real soldiers be extras in his movie. You can see real people, who are not going back to the front, with their arms ripped off. Stumps. They had stumps. Some of the people who were in that movie went back to the front and were killed. They didn't survive the war. The war had taken a terrible vengeance both in art, the joys of great artistic production, but on reality, too. It's an incredible scene. Of course, things couldn't go back again. You couldn't go back to your village. You couldn't get off a bus at the end, and go back and fall into the arms of your family, and stand there with tears on your cheeks as you were counting off the names of the dead, people that you knew. Things were going to get better, but they don't. One way of looking at the entire period of 1914 to 1945, and Jay will talk about this, is to view it as an entire, more horrible Thirty Years' War, because things don't get better; they get worse, if that's even possible. 
On that light note, I wish you a happy election.
European Civilization, 1648-1945, with John Merriman
Lecture 22: Fascists
Prof: I assume you saw Triumph of the Will. I think I mentioned the other day Leni Riefenstahl only died about four years ago, at age 102. She did interviews, and just looked back on that regime saying she was a professional and she did a good job. Her employers, in this case Adolf Hitler, were pleased with her work. What's interesting about the film, among the many things, and some of the themes I'll touch on and you're reading about, is that it's a combination of the kind of medieval and the very modern. Hitler, like Mussolini, used modern technology. Germans who could barely afford to eat had radios, and listened to speeches of the Fuhrer, and it was the same thing in Italy with Mussolini. While you saw the images of kind of medieval Nuremberg, which no longer exists, medieval Nuremberg, or not much of it, and the kind of modern technology and the whole thing. Hitler liked airplanes. He liked to fly around, and for all of the kind of images of the German warriors, kind of a medieval person diving in frozen Pomeranian ponds and things like that, the modern is apparent, too. If you want, the most chilling example of the modern would be the assembly line, the transformation of the assembly line into mass murder. The assembly line in the death camps. Has anyone here been to Auschwitz? I've been to Auschwitz-Birkenau. I've been to Dachau, also, a long, long time ago, but Auschwitz fairly recently. One of the most chilling things about Auschwitz, actually, the sheer--it's just beyond anything, but it's the commandant's house. The commandant's house has little swings out behind it. That's where the commandant lived. His wife said this was the happiest time of their lives. The little children were playing in the garden on the swings, and there's a big wall there, but not a huge wall. The crematoriums are on the other side in that part, at Auschwitz. Birkenau is a couple of kilometers away.
Life went on in that way as this sort of assembly line--mass murders of millions and millions of people. Hungarian Jews outnumbered Polish Jews exterminated at Auschwitz just barely. That's because at the end of the war the Hungarians were sending these huge trainloads of people to be exterminated there. Anyway, I want today to talk about Adolf Hitler. I will bring into this some of the themes that you're reading about. Just two things at the beginning. Obviously, National Socialism was one variant, certainly the most horrible variant of fascism. You can put Franco into that mix. There was rightwing authoritarian rule everywhere. Secondly, like World War I, there's no other period of history that has such great literature, at least in English, about it. There's a wonderful trilogy by Richard Evans on Hitler and the Nazis to 1933, and the second volume is 1933 to the war, September 1, 1939. The third is 1939 to the end, to the bunker. There are many biographies of Hitler. I've read about three of them. But the best by far is Ian Kershaw's two volumes. It's very long, and I'll be drawing on that in part. Let's get going on that. There's a photo that's not in the book, but there's a photo of Hitler reviewing his guys. That particular photo, which was taken about 1927, was on a huge field. You see Hitler reviewing his guys there. What people don't realize is that picture was taken from a huge field--there are lots of other people out there, little groups like the Nazis. It might have been a little earlier. They, too, have their leader, their Fuhrer. Hitler ends up, the National Socialists end up winning, but they weren't the only group. I'm not a believer in the "great person" view of history. Hitler did not make the Nazis. World War I created the Nazis. A lot of the racism, a lot of the idea of hygienics, racial purification, and all of that was out there, as you know and I've tried to make clear. But if it hadn't been Hitler, there would have been somebody else.
In 1933, when Hitler becomes chancellor, when the other rights, there are many rights, but when Von Papen says, "We've got them boxed in now. We can use Hitler for our own goals." How incredibly naïve that was. The Nazis must be seen in the context of World War I. They must be seen in the context of the poisoning of the political atmosphere between the wars. In 1876, Alois Schicklgruber--I didn't write it on the board; I sent this stuff around to you today, a lot of it, but Schicklgruber is not on the list--changed his name to Alois Hitler. It was a peasant family in lower Australia--lower Australia? Lower Austria!--bordering on Bohemia. I've been to lower Australia. I've been to lower Austria, too. But anyway, bordering on Bohemia. Thus, the family's dislike of Czechs, and Hitler's particular dislike of Czechs. But he disliked everybody outside of Germans. His father was "illegitimate," and ended up with the name of his mother's long-deceased husband, Georg Hiedler, which in 1876 became Hitler, as I said. There was a rumor even during the 1920s that Hitler's grandfather was Jewish, and these rumors circulated in Munich in the 1920s. Hitler was born in Braunau-am-Inn on the border of Germany, that is the Austrian-German border. This was important in his obsession with uniting the two countries. His father was a customs official, comfortable kind of lower-class existence. But it was not a happy family at all. His father was strict, pompous, proud of his minimal status, extremely pedantic, and had a violent temper. He took care of bees with more loving attention than he took care of his family. He managed the family with efficiency, but without love. Hitler's mother is described by Ian Kershaw as a simple, modest, kindly woman, who went to church and was devoted to her two surviving children, Adolf and Paula. She smothered them with protectiveness.
Adolf Hitler feared but did not love his father, but this does not explain the murderous results of the whole thing. Civil servants get moved around, customs people. The family moved to Linz, L-I-N-Z, in Austria, which was a hotbed of anti-Semitism, in 1895. Hitler began his schooling at age six. He viewed Linz as his hometown, and, in not a terribly too happy early life, looked back almost nostalgically upon living in Linz. He did not pick up his anti-Semitism in Linz. He started secondary schooling in 1900, but he was unsatisfactory in math and in natural history. He didn't like his teachers. He was, in principle, respectful, but he thought himself above many of them. He was badly adjusted. His father wanted him to be a civil servant. He wanted him to follow and be the next in line of the Hitler civil servants. But Adolf, as you know, resisted. He wanted to be an artist. His father said, "You will not be an artist as long as I am living." Linz was--besides being a hotbed of anti-Semitism, it was a hotbed of German nationalism. Not just Austrian German-speaking nationalism, but German in general nationalism. His father died in 1903 and then Hitler hit the academic skids. He failed in math. He moved to another school fifty miles away in a place called Steyr, but it wasn't any better. Then he took up this sort of idle existence. He painted. He read poetry. He attended the theater. That was one of his great loves in Linz, 1905-1907. He had one friend, August Kubizek, who was the son of an upholsterer. Hitler dominated. He needed somebody to listen to him. Kubizek was exposed, and I suppose willingly, to Hitler's diatribes, his pontification, his monologues about virtually everything. He was the classic kind of know-it-all. He was pale, thin. He had that little mustache that would become bigger. He wore a black coat and a dark hat. He carried a black cane with a pretentious ivory handle.
His great passion was Wagner--Those of you who know about music know that Wagner was a raving anti-Semite--as well as art and architecture, about which he claimed to know a great deal. He wanted to begin his artistic career at the academy in Vienna, and his mother had fallen ill with cancer and soon died. She died in 1907. This struck him as a "bolt out of the blue," he remembered. He applied for the academy in Vienna, and, to his horror, he was turned down. He went to Vienna anyway in February 1908, hoping to become an architect. He said later, "I owe it to that period that I grew hardened." He lived in Vienna from February 1908 until May 1913. He said later, after the war, during his political ascent, that it was during that period that "my eyes were opened to the two menaces of which I had previously scarcely known the names"--Marxism and Jewry, the Jews. This appears in Mein Kampf, My Struggle, which he wrote when he was in Landsberg prison not far from Munich--I even visited the cell once--after the ill-fated Beer Hall Putsch in Munich in 1923. This was out of retrospect. There is really no evidence that he had become a raging anti-Semite before 1914. Yet, anti-Semitism was so prevalent in Austria. Karl Lueger, who was the mayor of Vienna, whom I mentioned before, was one of the worst in that period. I've given this chilling quote before, but I'll say it again. He's the one who said, "I decide who is a Jew." The liberalism that had been in Vienna in the earlier period was hardened, like Hitler, became hardened into just a vast intolerance. But at the time he said that these two menaces were known to him, he was struggling. He wanted to be the man in leadership of the German Reich. In saying this, if you believe Kershaw, and I do on this and on much more, this was a fabrication. The anti-Marxist, the anti-socialist and subsequent anti-communist after 1917, that was there. 
His long diatribes in this sort of shabby rooming house, where they would sit around, and finally you can imagine one by one people just getting tired of listening to Adolf, and going up to their miserable little rooms to get some sleep, were against the socialists. The Austrian socialists, like their German SPD counterparts, had long marches through the streets of Vienna on behalf of workers' rights, etc., etc. Hitler would stand on the porch of this rooming house and simply hate them as they went by. Yet, Vienna was a huge melting pot of this enormous empire. There are all sorts of people besides German speakers who lived in Vienna. Many of the German speakers were Jews, Freud among them. I've been to Freud's almost bizarrely recreated office there in Vienna. The Jewish population was about two percent of the population. In 1910 it was 175,000 people in Vienna. Then it grew to 8.6 percent of the population. Later, in Hitler's thundering speeches, over-the-top speeches, he saw Jews as capitalist exploiters of true Germans, etc., etc. This came later. Lueger, by the way, anticipated Hitler and lots of other people by saying in 1890 that "the Jewish problem" would be solved if all the Jews were placed on a large ship and sunk at sea. When Adolf Hitler lived with Kubizek in this rooming house, and went to the theatre with him, he was not yet thinking of politics. What he wanted to do was become this famous artist. It is true that he painted postcards for tourists, which he sold to kind of keep himself afloat. Kubizek was a piano player, so in the room were two beds and a piano and that was about it. Sometimes you could imagine Kubizek playing the piano just to try to tune out Adolf. But he was rather loyal to him. Hitler began to write a play. He went to the theatre, as I said, and he got a little bit of inherited money after his parents died. He had little interest in women.
Of course, one of the sort of prevalent rumors is that he was impotent, though as you all know surely, he would marry Eva Braun in the bunker, before they took cyanide pills and killed themselves, as the Russian tanks could almost be heard rumbling above. We know of no sexual experience that he had. He described the ideal woman as a "cute, cuddly, naïve little thing, tender, sweet, and stupid." Of course, like Mussolini, who was a notorious philanderer and used to brag tirelessly about his sexual exploits, both Hitler and Mussolini believed that a woman's place was in the home turning out baby boy soldiers and not in the factories. Of course, one of the ironies is that during the Second World War women are increasingly doing jobs that Hitler and Mussolini thought were inappropriate, simply because the men were dead. Anyway, he was prudish, seemingly repelled by sex, although fascinated by it. One of the points that Kershaw makes is that Kubizek's recollections, along with those of Hitler's sister, Paula, give us a sense of some of the things that would remain characteristic of Hitler until his much-deserved end. Basically, he was lazy. He was manic at times. There would be these bursts of wild enthusiasm for something. During the war he would demand that the generals place maps in front of him, and he would make the decisions as the generals secretly moaned. He considered himself an expert in military affairs, as well. There was a pathological lack of a sense of reality and a sense of proportion, and a vindictiveness that, as most of you that have followed this at all would know, stalled the Russian invasion as he punished the Yugoslavs, poured troops into Yugoslavia to slaughter people, and then delayed the famous invasion of the Soviet Union until June 22nd. I was in Kiev once and the bells were all ringing. I realized that was the same time that the German planes had first arrived.
His intolerance, his flashes of anger, his tediousness, his sense of predestined greatness, it was all there in the shabby little rooming house, the sense of frustration that his genius wasn't recognized. But there is no evidence of tirades against Jews. That would come later. Another friend of his, a guy called Hanisch, about whom I know nothing, said after Kubizek had disappeared from Vienna, "In those days Hitler was by no means a Jew hater. He became one afterwards." In the words of Kershaw, the First World War made Hitler possible. In 1920 he said, for the first time in print, "Jews are to be exterminated." This is after the foundation of the German Workers' Party, early in 1919. Of course, it's that party that would become the Nazis. There is a picture that may be doctored--and that apparently is no longer in the second edition. It should be. It was in the first edition--of the war starting in Munich. I think I have mentioned this before. It's a crowd scene. The war has been announced. The war is not in Munich, but all these people are around the town hall, and they are just exuberant. You can see the smiling Hitler beaming, happy, fulfilled. He's going to fight for German nationalism. He did fight in the war. He was one of the guys. He was a comrade. He was wounded twice. He was a runner in the war. He carried messages from officers to the trenches, and then he--not literally ran but carried them. They called them runners. After the war he emerges, as do troops demobilized in every country, facing the challenges of an uncertain future. Nowhere was that future arguably more uncertain than in Germany. Not all veterans of the fight of the German war cause in World War I turned to far-right politics. The SPD, the Socialist Veterans Organization, was the largest of them all. Yet, there are just enormous continuities between those German soldiers who returned from the war with their weapons in their houses joining the Free Corps, the Freikorps. 
They kept on marching. They kept on training in their basements. They would come back and therefore be exposed to all of these currents, the sense of betrayal again, as I've said before. This is the third time. How do you explain to the folks back home that you've lost the war when your troops are way inside of France? They're not perched on the frontier. They're way inside. So, it's got to be somebody's fault. Whose fault is it? It's the Jews. It's the socialists. And it's the Weimar Republic. These themes come together. That's a constant theme. Hitler believed if you told people the same lies over, and over, and over again that they would believe them. This happens in our country, too. In Hitler's case the lies were even more pernicious. The revisionism becomes an official policy of all of these rightwing groups, of all of them. The thing that's really just incredible is that people had memories of Hitler--when you see pictures of him, this kind of pauvre type, as you would say in French, this kind of sad sack wearing ill-fitting clothes, who did not have friends. Kubizek had disappeared. I have no idea what happened to him. He had big hopes for himself that could never possibly be fulfilled. The idea of this--those of you who have partied in Munich on the tour, or something like that. I partied in Munich when I was twenty years old. We went to these places. But when you go into these big places like the Hofbräuhaus, which was one of the worst, and these other places, it is hard to imagine. This is where the rightwing groups met. All of a sudden, this kind of sad sack guy would jump or be lifted on a table. He wasn't terribly athletic. Suddenly, he has people listening to every word that he said for hours, for hours. Those speeches, if you ever heard speeches, if your German is really good--mine is terrible. It barely exists. People would listen on the radio.
He would build up with this crescendo announcing the will to power, my struggle, our struggle, the German people's struggle, those who have destroyed us, those who signed on the dotted line of the war guilt clause that said that Germany started it all. "We will get them back," he says in 1925, when Mein Kampf was published. Isn't it 1925? I think it's 1925. He says, "We will kill the Jews." He says "We will expand elbow room, living space." We will expand to the east. He says this. You could buy copies of Mein Kampf in Manhattan. You could buy copies of it in Melbourne. You could buy copies of it anywhere. It was translated into a variety of languages. It was all there from the beginning. The consistency in what Hitler was saying was there all the way through. It was there all the way through. The concrete plans for the extermination of the Jews, as well as the gypsies, and of gay people as well, these concrete plans will come later. Dachau in 1933 was built with Himmler in charge, primarily to put communists in Dachau, and many Jews were communists, and later other people. I went to Dachau when I was your age. I remember seeing an old guy working in the fields right outside the wall. He was old enough that he would have owned that farm back during the war. People knew. I'll come back to that in a minute. They knew. You try to think, "What did he think when he saw the people come in? What did he think when the smoke rose? What did he think?" They knew. They knew and they didn't care, period. If Hitler's themes barely changed, it raises some very important questions. Who first supported Hitler? Hitler's support--and I do write about this a little bit--the role of the economic crisis cannot be overstated. The inflation statistics you will not want to commit to memory, but those are unbelievable. The only case that I know that is vaguely like that is Zimbabwe in the Mugabe period. This is even worse, if that is possible.
Middle-class people who had to pawn armoires, chests, drawers, silver that had been in their family for years, in order to have enough to eat. They wouldn't forget, and they blamed, and they hated. "It's the fault of the allies. It's the fault of the Jews. It's the fault of the Socialists. It's the fault of the Communists. It's the fault of Weimar." They first flock to Hitler, the middle classes do. If this sounds like an orthodox Marxist interpretation, that's what the orthodox Marxists say and they're right. Big business did not flock to Hitler. Big business wanted the destruction of Weimar. They helped make Hitler possible. Only one big businessman gave Hitler a lot of money. He got a lot of small donations. But pretty soon he gets introduced to the right people, the right cocktail parties. They thought he was vulgar. Quick story. I had a colleague who died decades ago. He was very nice to me when I came here. He was a German diplomatic historian called Hans Gatzke. He wasn't Jewish and he wasn't a communist, that's for sure. He left Germany in the mid-1930s because he didn't like what was going on. He didn't like what was going on. He got a job translating for the Canadian Olympic team. I said to him once, "Did you ever see Hitler?" He said, "Yes." He was under a stadium in Berlin. Like any big stadium, you've got space underneath. A lot of places have batting cages. Sometimes there's a baseball stadium or something like that. He was down there. He was supposed to meet the Canadian Olympic team. All of a sudden he heard this enormous roar of machinery, as machine gun carrying vehicles are coming in. By incredible coincidence, he had a couple pillars here, and just about where Leslie is, there was Adolf Hitler. He was scared, because there were machine guns. He stood there frozen. Would they gun him down? No. He just was standing there. I said, "What did you think? You are fifteen yards away from Adolf Hitler, less than that." He said, "I had a weird reaction. 
He was vulgar. He was an Austrian corporal. He sneezed and he blew his nose on his sleeve." That's what Gatzke remembered. Big business--Gatzke was a political moderate. He believed in the Weimar Republic. He was a very good guy, a very kind of aristocratic guy. He was a Rhinelander. His reaction was the same as big business, except that big business wanted to destroy Weimar. The reaction was that Hitler was a commoner. He's vulgar. "We've got him locked in," they said in 1933. "We've got him boxed in. We can use him to our advantage and then have a military dictatorship." When von Stauffenberg tries to kill Hitler, and puts a bomb under the table that blows up but doesn't kill Hitler because of a big, old German oak support that the table stood on, he wasn't trying to bring a parliamentary regime back to Germany. He wanted a military dictatorship. Hitler was supported by the middle classes disproportionately at the beginning. But in all classes people supported him, workers less so. But they break in 1933. They destroy the unions. They destroy the Communist Party. They use the Reichstag fire, which we now think was probably actually set by the Dutch guy, whom I write about in there. They destroy the unions. They destroy the possibility of resistance. But lots of workers were there, sieg heil, too, but less so than the other classes. What about religion? Hitler was a southerner. He never liked Berlin at all. He wanted to raze it and then build this sort of art deco monument of his own planning. He was a southern guy. One of the places where he first does very well is Schleswig-Holstein--part of it used to be Danish--and it is totally Protestant. The Catholic Church rings the bell and reads what Hitler wants read from the sermons. They were happy to have Hitler there, as are the Protestants. There's no doubt about that. Fascism is in the air all over the place.
The main elements of fascism that I list in that book, if you think about them, they all apply to Hitler and to the people who followed him: anti-communist, anti-socialist, anti-Weimar, the role of the economic crisis with long, long memories, and hating the allies, and hating the Jews, and hating the Socialists. The Nazis and other fascist groups are better at saying whom they were against than what they wanted. What they want is ultra-nationalism. What they want is a totalitarian state and the destruction of parliamentary rule. What they want is a dictator. They want a caudillo, as Franco was. They want a duce, as Mussolini called himself. They want a führer. They want a leader who incarnates in that mystical body, as they would view it, the aspirations of the German people. Part of who you were is who you were excluding. You have a völkisch community, in the perverse biological racism of these people, and other people who aren't in it, too bad for them. If they are "work shy," Germans who don't want to work, then they're not really part of the völkisch community. "I decide who's a Jew and who isn't." That's what Lueger said. Hitler says, and this is the horror of it all, "We decide who will live and who will die." They're using euthanasia as a tool to kill people who are mentally handicapped, and even some people who are physically handicapped. Pretty soon, in the late 1930s, the Germans say, "Wait, these are Germans." If they're Jews who may be Germans, we don't consider them German. That's okay. Get rid of them. They pull back on that. But that's there from the beginning, ultra nationalistic, ultra antiparliamentarianism. You want the guy. He's going to represent you and he's going to tell you what to do. The terror is there. The violence is there. The Gestapo. There are hundreds of thousands of denunciations. If you denounce somebody, you could be sending them to torture and their death. There's no question about that. There are denunciations all the time. 
"Hey, my neighbor, I think he's Jewish. I know my neighbor down the hall. I know he was a big guy in the German Socialist Party, the SPD. I know that the butcher around the street, I might want his store, because I'm a butcher, too. I know he was a communist activist until 1933." You see denunciations. They've got them all the time. They've got them all the time. Here's a quote, somebody describing one of the Gestapo offices and the bureaucratization of terror: "Grimy corridors, offices furnished with Spartan simplicity, threats, kicks, troops chasing chained men up and down the reaches of the building, shouting, rows of girls and women standing with their noses and toes against the wall, overflowing ashtrays, portraits of Hitler and his aides, the smell of coffee, smartly-dressed girls working at a high speed behind typewriters, girls seemingly indifferent to the squalor and agony about them, stacks of confiscated publications, printing machines, books and pictures, and Gestapo agents asleep on the tables." Nobody had any illusion about what was going on. They didn't just rule through terror. The SS, by the way--everybody knows about the SS. They destroy the SA. Ernst Röhm challenges Hitler, and in the Night of the Long Knives, they wipe them all out. The SS was a form of sort of social mobility for people. These young guys come back after the war. There was no work. Pretty soon in the 1920s--the SS, you've got a uniform. You can go beat the hell out of communists, Jews, or anyone else, and the judges are all Nazi sympathizers or rightwing sympathizers. They were all trained in the Empire. You can kill somebody and you'll be out of jail in a very short matter of time. You're working with impunity, especially in Prussia where Göring is the minister of the interior. It is all routinized. It is all there. They don't rule just through terror. That's what I did not emphasize enough in what you've read. It's going to be in the next edition.
Hitler promises order. Order is zero tolerance on petty crime, for example. They have police who are called the Kripo (pronounced, appropriately enough to English ears, "creep-o"). They are sort of your basic police. They are not the Gestapo. They go out, and people who are lounging about, who are "work shy," that's a dangerous thing to be, "work shy." Petty criminals, people who are hungry, who are stealing apples off of fruit stands and things like that, they go out and make war on them. The German population nods enthusiastically, overall, as a whole. The war on crime is something they like. Also, there's the economy. Hitler got credit from many German people for having revived the German economy. How does he do that? He does it by violating the statutes of the Treaty of Versailles. They're preparing for war. He's preparing for war all the way through. If, at the Rhineland occupation, the French and the Belgians had put up a fuss, it's possible that the whole thing could have been stopped there. It's possible. The generals are saying, "Mein Fuhrer, we're not really ready yet for war," while he is facing down his opponents, and they capitulate one time after another, and the famous story of Neville Chamberlain, who returned bringing "peace for our time" after having sold out Czechoslovakia. But the German economy does revive. There are still huge gaps between the wealthy and people who aren't wealthy, enormous gaps. But the German economy does revive because of the same thing that happens in the United States in World War II--you're transforming to a war economy. That's exactly what happens. He takes credit for this. There are a lot of flashy gestures. The VW--I went around Europe in a VW with a couple of my friends, sleeping on beaches, the little VW, the Volkswagen. But only one of them was ever produced. He promises the German people a Volkswagen, but only one model ever comes off, for the press and all that. The Autobahn.
They're going to have routes, autobahns all over. Now there are autobahns all over Germany, with people driving 500 miles an hour with impunity. There are only 500 or so miles of autobahn done by the time he's finished. Strength Through Joy. He announces a program that the Germans who have never been on vacation, ordinary working-class Germans, can go on vacation. Some people did go on vacation. They all get drunk on cruise boats all over the place, but hardly anyone gets to go. But he gets credit for it. He seemed to be producing. He seemed to be producing. And, in a country in which anti-Semitism, despite the fact that Jews were thoroughly assimilated, was endemic, they liked the fact that the Jews are disappearing. They like it and they know it. It's sheer nonsense to think that people didn't know what was going on. These trials are put in the papers all the time. "So-and-so has been condemned, being sent away to Dachau because of anti-state behavior, anti-German behavior," you name it. People know. They have no doubts about it. Where do they think the trains are going? They can kind of imagine. Where are these people coming from? When all the Polish workers are coming in, being brought in as sort of slave labor, the ones who haven't been destroyed, when they're coming to work in the factories, where do these people think their families were? They're all dead. That's why Ordinary Men is such a chilling book. How these people, this police battalion from Hamburg, how these people can put bullets in the backs of heads of old ladies and little children in the killing fields around Lodz, or anywhere else in Poland, is just an extraordinary story. People knew. Not everybody knew, but they knew. They knew. For people who wanted order, this was their idea. This was the racial idea of order. The universities. What happens to the universities? Certain fields do real well. Racial hygiene. They establish chairs in racial hygiene. German folklore. They establish chairs in German folklore.
Physics does very well, for obvious reasons. Physics equals rockets. Military history, chairs in military history, chairs in German history, chairs particularly in German medieval history. But anything else, your basic history, German literature, for example, doesn't do very well. There was a famous headline that is in the book saluting the fact that there were fewer visits to libraries, and people were checking out books in far fewer numbers than they did before. How do they pull this off? They pull it off through the atomization of society. There's a really wonderful book called The Nazi Seizure of Power, written by William Sheridan Allen about a town near Hanover. He changes the name of the town. People were so proud of that book in the 1960s. In the 1970s they put stacks of them and said, "That's us. That's us who were beating up the Jews. That's us who were beating up the communists. We are so proud." It's a very good book. There's another good book by Rudy Koshar, a friend of mine at Wisconsin on the town of Marburg. What they do is they get Nazis into every voluntary association, basically, and they take them over. What you have is the atomization of society, what Ian Kershaw calls "going to the Fuhrer." The only thing left is the family. You protect yourself and the family, or you thrive in the family, but you're in the family. Your children are in Hitler youth. There is no possible organized way of opposition. Soccer clubs, football clubs, everything is part of the atomization because it's been taken over by the Nazis. There is almost no resistance in Germany. I'll talk a little bit about this next time. This was a regime that is capable, as they did in Dusseldorf, of hanging sixteen-year-old boys because they listened to Benny Goodman, or were considered to be slackers, or "work shy." There's that phrase again. This atomization of German society makes all of this possible. 
When Stauffenberg places his bomb, and the thing blows up and it doesn't kill Hitler, Hitler amused himself and his friends with the fates of all of the people involved and all of their families: they filmed them being slowly strangled with wire. They laughed as they watched the film. The most chilling thing, even more than that, is that Germans pour into the street in thirty or forty different cities, as bombs have been raining down all the time. They thank God. Mein Gott, you saved our Fuhrer! That's extraordinarily difficult to explain. By 1944, the armies are full of old men and boys, because basically everybody else is dead. They keep fighting. They fight with astonishing, foolish courage, until the bitter end. They believed--not everybody believed--there will be a revival. Even the German Federal Republic was just replete with very proud former Nazis who take hugely important positions in power after that. Of course, the good old Americans help a lot of these Nazi war criminals escape to Paraguay and places like that, in exchange for information about communist movements and that sort of thing. They believed. They believed. Not everybody believed, but that's one of the scariest things about the whole thing, that it was sieg heil until the end. Again, not for everybody, not for everybody, but for some social classes and others. You find this in other countries, and I'll talk a little bit about that when--I guess I'll talk mostly about France next time. Hitler gives the German people what they want. His prestige grows every time he stands down the British and the French, every time that he pulls this off--the occupation of the Rhineland, the absorption of the Sudeten part of Bohemia, and then they just take over the whole country, and the Anschluss--where he's greeted enthusiastically by the crowds. You can see these photos of the adoring Viennese crowds. Where was the Vienna of calm concerts? It became the Vienna of Wagner.
It became the Vienna of saluting Hitler and then going out and beating up and killing Jews. Something like 100 Jews are murdered in Vienna when Hitler arrives, to celebrate. They, too, believed. One of the dark secrets was the Nazi past of the former Secretary General of the United Nations, Kurt Waldheim. All this came out before most of you were reading newspapers. Some of you were reading them back then. It was about fifteen years ago, or something like that. The people knew. Those are really the big points that I wanted to make. When you were reading--German women or German men waiting to get their hair done--when you read a popular newspaper or popular magazine, all of which had articles about Hitler and this sort of entourage and all that, and you read a cheery headline such as "Gas Masks for Children Now Readied," you sort of nodded and said, "We'll be ready for the struggle." What happened was that Hitler's book, Mein Kampf, became accepted and adopted by the majority of people in Germany. Tragically enough, they remain with Adolf Hitler and the Nazis until the very, very bitter end. Of course, it's important to see the context: fascism was everywhere, whether you're talking about Brussels, whether you're talking about Amsterdam, whether you're talking about Prague, anyplace you're talking about in Europe, or Oswald Mosley strutting through Hyde Park with his little Naziling followers. Hitler was just the most violent, the most egregious, the most horrible, the most tragic example of what was a general phenomenon throughout the entire period, at different degrees of success during the 1920s and 1930s. The war that began in 1914 basically does not end, at least in Europe, until the defeat finally of Germany, and the death of Adolf Hitler, still at a relatively young age, in the bunker in Berlin.
European Civilization, 1648-1945, with John Merriman
Lecture 8: Industrial Revolutions
Prof: Today I want to talk about the Industrial Revolution from a variety of aspects. Everything on the board I put on our website, so don't worry about copying it down. It's all pretty obvious. Doing the Industrial Revolution across the century is no easy task, but we will do it and do the reading. Let me just say that the way people look at what used to be called the Industrial Revolution, and I guess some people still call it that, has changed dramatically. Through the 1950s and into the 1960s, the idea of the Industrial Revolution was that it was the work of some genius inventors who created machines used primarily in the textile industry--but also in mining--that eliminated blocks to assembly line production. Then everybody was crowded into factories and the brave new world opened up. In fact, one of the most interesting books and great classics that is still in print was written by an economic historian at Harvard who's still around called David Landes. It's a good book called The Unbound Prometheus, which was basically that. Some of the inventions that I briefly describe in your reading, the spinning jenny, etc., refer to that. That kind of analysis led one to concentrate on England, where the Industrial Revolution began, and to view industrialization as being a situation of winners and losers (the losers being those not going as fast). In your reading I give you some pretty obvious examples of reasons for the Industrial Revolution first coming to England: the location of resources, particularly coal; a country in which nowhere is more than seventy-five miles away from the sea; precocious canals and roads; banking systems; fluidity between classes and a very large and increasingly larger proletariat; agricultural revolution, etc. With that kind of analysis, those places that didn't industrialize as fast, for example, France, one thought they were "retarded," a word that was used, unfortunately, at that time. Then one tried to see why not.
That analysis has been rejected greatly over the past years, because the Industrial Revolution is measured by more than simply large factories with industrial workers and the number of machines. This is the point of the beginning of this. The more that we look at the Industrial Revolution, the more we see that the Industrial Revolution was first and foremost an intensification of forms of production, of kinds of production that were already there. Thus, we spend more time looking at the intensification of artisanal production, craft production, domestic industry--which we've already mentioned, that is, people, mostly women but also men and children, too, working in the countryside. The rapid rise of industrial production was very much tied to traditional forms of production. In Paris, for example, in 1870, the average unit of production had only slightly more than seven people in it. So, if you only look for big factories and lots of machines, you'll be missing the boat on the Industrial Revolution. To be sure, when we think of the Industrial Revolution we think of Manchester, which grew from a very small town into this enormous city full of what Engels called "the satanic mills" of industrial production. Or you think of smoky Sheffield, also in Northern England. Or you think of Birmingham in the midlands. If you think of France you'll think of Lille and its two burgeoning towns around it, Tourcoing and Roubaix. Or you think of Saint Etienne, which was kind of France's Manchester. In Germany you think of the Rhineland and the Ruhr. In Italy you think of Turin and Milan. In Russia, you think of the Moscow and St. Petersburg region. In Spain, Barcelona. Indeed, those are classic cases of industrial concentration, where you do have really significant mechanization over a very long period of time. You do have large towns with smoky factories full of workers. 
But again, and we've underestimated--in fact, the second edition has more about this than the first, which you're reading--the degree of industrial production in the late Russian empire. Yet, to be sure, when I say that the Industrial Revolution is first and foremost an intensification of forms of industries that already existed, if you were a parachutist and you're somehow floating down over Europe from, say, the middle of the eighteenth century through the middle of the nineteenth century, what you would see is that there were still all sorts of industry, a rapid increase of industrial production that is out in the countryside, that's not in factories. It's done in a very traditional way. Or rural handicrafts, people producing all sorts of things still at home. There's a marvelous book written by a scholar called Maxine Berg, who teaches at Warwick in England. The book is called The Age of Manufacture. She reexamined the Industrial Revolution and discovered that, for example, the town of Birmingham, which produced all sorts of toys, big toy manufacturers, that even though you had a lot of factories, you still had a lot of the toys being finished or even produced by women working in the hinterland, that is, the arrière pays, or the environs of Birmingham. If you take smoky Sheffield, a grim kind of place in the nineteenth century, where they produced knives and cutlery. You still had a lot of these products being finished by people out in the countryside. If you take the North of France, if you think of a town like Reims, famous for champagne, it was a big industrial center but it wasn't the center of mechanized production until after about 1850. What you had is you had all these people out in the countryside, mostly women, who are doing spinning and weaving and carding and that kind of thing. Or around Nancy in the east of France. By 1875 you still had something like 75,000 women who were embroiderers working in the countryside. Rural industry intensifies. 
Finally, at the end--not at the end, but it depends on where you are--you have this implosion of work into factories. So, by the end of the century the kind of traditional view that one would have of the Industrial Revolution has really arrived, with factory production dominant, above all in the textile industry. The textile industry is the leading edge of the Industrial Revolution. You have women who used to work at home that are now working in factories as what the British call textile operatives. Or Switzerland--you think of Switzerland as being the famous mercenaries in the early modern period or the very wealthy bankers in our own day. But if you think of a town like Zurich, on the lake, there were all sorts of industry in the uplands of Zurich, up into the hills and even into the mountains around Zurich, of handicraft production. Or Austria--in the Austro-Hungarian empire, there are hundreds of thousands of people working in the textile industry. The details aren't as important as the fact that, to be sure, the mines that you read about in Germinal, which is a great, great read, and the factories that I will describe in a while--Engels described them, and I couldn't do better than that--are a reality, and they become the industrial experience. When you think of Detroit, Michigan, in the 1930s, or Flint, Michigan, in the 1930s, or you think about now the rust belt of Connecticut--Torrington and these places that were once booming industrial towns. That's the kind of classic model. The American model really is closer to what people used to think the Industrial Revolution meant in the case of Europe. But that's not a subject for now. A couple points--by the way, I don't think I'll ever get to my notes, but it doesn't really matter. First of all, and this is another reason why the Industrial Revolution starts in England: you can't have an industrial revolution without an agricultural revolution.
What the Agricultural Revolution does is increase the amount of food produced that's going to feed your burgeoning proletariat, your labor force. All of Europe increases in population. The French population is unique; it stops growing in 1846 and 1847. It simply stops, skids to a halt. But everywhere else, the population grows. There are regional differences in France, as there are regional differences everywhere. But the Industrial Revolution depends on the Agricultural Revolution for an increase in food supply. This makes possible the increase in urban population, thus also increasing the demand for food. Also, the Agricultural Revolution particularly, but not just in the case of England, increases capital formation. You've got this sort of surplus of money, bucks, pounds, fric, cash that can be invested in industry. This is precisely what happens. That's why the Agricultural Revolution is absolutely important. These three things, Industrial Revolution, Agricultural Revolution, and the growth of cities, are very much tied together. Let me give you an example, which you certainly don't have to remember. Think of Manchester. I describe the statistics in there, that the growth of Manchester is a prodigious, scary thing. I'll talk more about how rural and urban elites are frightened by the growth of cities, particularly in Germany, but in France, England, and in the United States, later. What the growth of Manchester does is it really changes the countryside around and helps bring the Agricultural Revolution. What do I mean by that? You find the same thing around Paris, around Berlin, or around Warsaw, almost any big city that I can think of. In response to this urban growth, this big octopus of people and money, of rich people and poor people, I'll talk about some of the rich people next time on Wednesday. You've got an expanded demand for food.
In that ring immediately around a city like Manchester, you've got a dramatic expansion of people doing what they call truck farming. They're specializing in crops for the urban market--fruit, vegetables, things like that. They specialize because there are people there that are going to pay for and eat what they produce. Take the example of Paris, which I'll come back and talk about with great relish someday. The suburbs of Paris, a place called Montreuil, which is kind of a grim part of eastern Paris. It used to be famous for its cherries, and fruits, and that kind of thing that they were producing for the urban market. Or wine, if you can imagine wine being produced, what a horrible idea, in the region of Paris. It's Asnières, on the Seine. They used to produce wine for Paris's vast market. Then the next big ring around Manchester, you've got the big fish eating the little fish. They are more productive. As this commercial agriculture develops and more productive production--that's a terrible sentence, there's a greater productivity in response to this urban demand. On the far, distant places you have people specializing in the production of cattle, that is, milk and meat for the market. Of course, the other thing which goes without saying is that in the course of the nineteenth century you've got this amazing development in shipping. Pretty soon with steel, and with refrigeration--and just like now you've got lamb arriving from New Zealand and things like that. This is largely in response to the increase of these large urban conurbations. We use the term "conurbation" to describe cities that grow up so much that they actually merge together. The American Northeast became sort of a conurbation. It's very hard when going to New Jersey to ever see where there aren't cities. One ends and then the other starts. That becomes the case in parts of Northern England as well. 
The term "protoindustrialization" there is what we mean by the expansion of industrial production along very traditional lines. What I put in parentheses there, domestic or rural industry, we've already talked about. So, first you've got this expansion of industry in the countryside. I'll give you one example. Again, I hate to keep taking examples occasionally from France, but I know that best. The city of Lyons, which is a big silk-producing city, what you have in the first half of the nineteenth century is you've got an implosion of work into Lyons, into this working class suburb called the Croix-Rousse--it doesn't matter, although it's a neat place. It's a really neat place. Then in the 1850s the people that owned the silk begin to put work back out into the countryside. Why would they do that? Because the women working there or the men working there worked for less than people living in the city. Again, if you're parachuting down starting about 1750, you have to imagine hundreds of thousands of little dots out in the countryside. And even more of them as the Industrial Revolution gets kicking along before you finally have this implosion or movement in and around cities. More about that when I talk about cities. I'll help explain why European cities are so different than American cities, with the poor living on the outside and the rich living within. Large-scale industrialization has a lot to do with that. Having said all of that, let me talk a little bit about--I'll never get to my notes, but this is fun anyway--women's work. Did the Industrial Revolution change women's work? There are continuities in women's work which are extremely important, and ultimately there will be changes as factory production comes to dominate in many places in industrial Europe. Yet, there are certain things that don't change about women's work and women's roles in the household. Women remain the head of the family economy.
Women, whether they're married or simply living with people that they've been living with for a short or long time, run the family economy and it's true whether they are in rural Switzerland in the uplands of Zurich, working in the textile industry, or whether they are textile operatives in Manchester or someplace like that. The Industrial Revolution does not change other aspects of women's work: at least well into the nineteenth century in most parts of industrial Europe, women are still working in the countryside, and the major employers of women don't change at all with industrialization. The classic case here too is England, and that is domestic service. If you were going to take England in, say, 1850, the three largest categories of workers--not in this order, but just about all the same number--would be domestic service (some men worked as domestics, too), textile operatives, and an important category that I'm going to talk about later in my theme of "it's bitter hard to write the history of remainders," rural agricultural laborers, rural proletarians. Another category of women's work, again which one hesitates to evoke, is of course prostitution. The Industrial Revolution doesn't change that sad aspect of women's work. With urban growth, it increases the number of people working as prostitutes in even very small towns. The number of prostitutes in Paris or London is simply incalculable. The estimates in Paris go from 20,000 to 100,000. Lots of women who are married become prostitutes pour faire sa fin de mois, to pay off the bills at the end of the month. This sad aspect of women's work, people forced into prostitution by want, doesn't really change with industrialization. The numbers simply get bigger and bigger. Of course, one of the results of this, this isn't the time to discuss this, but there's a sort of panic at the end of the nineteenth century about syphilis and about venereal disease and all that.
This ironically helps further condemn ordinary people in elite minds, since many of the patrons or clients of prostitutes were middle-class males no matter what country you're talking about. More about women's work in a while in the context of factories. Here again, history has its history, too. When I grew up, to the extent I ever did, as a student when I was thinking about doing a dissertation, and becoming an historian, and all that stuff, what people studied was--the reason I put it in quotes--"working class consciousness." We were sort of children of the very late 1960s or 1970s and everybody wanted to follow the great English historian E.P. Thompson, who wrote a monumental book called The Making of the English Working Class. Everybody wanted to find the making of class-conscious workers in various places. Everybody wanted to study the crowd, as in The Crowd in the French Revolution, my late friend George Rudé's famous book, the crowd here or the crowd there. The first article I ever published was called "The Crowd in the Affair du Limoges, April 27, 1848." Now I look back sometimes and I think, "Who cares?" But anyway, in the 1980s the field kind of turned away from that and more people started studying the middle class. More about the middle-class folk in the nineteenth century and what my friend, Peter Gay, called the bourgeois century next time around. Nonetheless, you can't throw out the baby with the bathwater, and class remains a fundamental concept. If you're going to understand nineteenth and twentieth century Europe, you have to understand social class, because there's a reality. We live in a country now where people like to think there are no classes. Well, don't get me going on the current economic crisis.
I can remember people going down to the Ford plant in Ypsilanti and Detroit and trying to get people who work in those places interested in the war, against the war in Vietnam, and getting absolutely nowhere and hearing arguments that in America we don't have classes. That simply isn't true. Anyway, in the nineteenth century social class was a real thing. Nobody had a stronger class identity than the middle class. That's what I'm going to talk about next time. I can hardly wait. There was a working class, but not everybody saw themselves as workers, as a form of identity as opposed to something else. People can have multiple identities. When we talk about nationalism, that's an obvious point to make. If you ask people who they are, they might say they're Protestant or they're Jewish or they're Catholic or they're Muslim or they might say they're from this extended family or they're from this region. They're Bavarian or whatever. In the nineteenth century, class identity, the sense of being workers as a class apart, was a reality. That's just the way it is. That was worth studying and people did some very good work on it. It's kind of come back, too. It's kind of come back. Anyone who's been in Britain, where class identity is so revealed by language, knows there isn't anyplace, including France or any place else that I know, where a difference in accent is so revealing of not only where you are from, but who you are in terms of social class. It's really just amazing. It remains true in France and some other places. There was a strangler. There are always these stranglers around in Britain. There was one guy who was this hardcore killer, a bad guy who killed a bunch of people about fifteen years ago. Finally, they get all these experts on language and he called up I guess a radio station and sort of "Here I am. Come and get me" kind of thing.
They had him pegged where he was within something like ten miles of where they ended up arresting him, which is in Bradford in the north of England. Language is one of the ways that people reveal their class. In the nineteenth century we're talking about workers and how some workers, but not all, began to see themselves as proletarians. That seems like one of those trendy words, but it meant something to people. A proletarian is somebody dependent on their own labor, usually unskilled or semi-skilled, in order to survive. There are two aspects to the term "proletarianization." One is kind of the objective sense that you are a laborer. You may be a harvester. You may pick grapes for the wine harvest. You may be carrying around large boxes, which is what I did at Alice Love's Jams and Jellies in Portland, Oregon, or at Kellogg's of Battle Creek, where I also worked, totally unskilled, but again that was not going to be my lifelong identity, because I was able to go on to do something else. But in Europe you were born into the proletariat in most cases. If you grew up as a Catholic, in this part of France, you still would have been a practicing Catholic, a Catholic guy, a young boy or young woman in and around Saint Etienne or in Lille, the chances were overwhelming that you were going to follow your parents into the mines. You were going to go in the mines. As a matter of fact, again I hate giving these French examples, but there's an expression that's really only used there that I've ever heard when a kid screws up, does something he's not supposed to. What they say is deux semaines dessous une benne, which is if you spend two weeks ducking down like this and having to help guide this cart full of coal up and down the railroad tracks, you won't screw up like that again, little boy. The sense of you were born into the world of work. 
In America there were all these kinds of literature, the equivalent of Boys' Life, about remarkable ascents into the social stratosphere, that America was the land of opportunity. Well, America was the land of opportunity, to be sure, with availability of land. But cases of social mobility were actually fairly limited. This was certainly the case in almost all of Europe. You were essentially born into, for most people, this status. The other thing that happened, and this explains the rise of class consciousness, is that people who--suddenly the bottom drops out of their economic life--that's a fairly appropriate analogy for today--who were artisans, who were craftsmen, become really the first, depending on where we're talking about. It begins really about the turn of the century, that is, 1800 or slightly before, but mostly afterward, by 1830 in England and then follows in other countries in many, many places. Artisans and craftsmen are really the first to see themselves as a class apart. Not unskilled workers. Why? This is pretty obvious. Artisans and craftsmen are educated up to a point. They have a sense of dignity about their trades. They have organizations. They have mutual aid societies, for example. There's a craft guild organization in France called the compagnonnage. This came from the medieval times when they built the big cathedrals and all of that. They have organization. They have a sense of pride in craft dignity. Karl Marx, who was a pretty smart guy, he got a lot of things wrong, but he got a lot of things right. Karl Marx wrote in the 1830s and 1840s about how workers' wages were declining. He was right for artisans. There's no question about it. Artisans are at the forefront of every single social and political movement that you can think of in the French Revolution. There we go. Who stormed the Bastille? It was artisans, 1830 in France, 1848 in Austria, in Berlin, in Paris. It was artisans. Why all of a sudden do they get mad?
There's really two reasons, two things that happened to artisans that caused their economic situation to go downhill. First of all, the French Revolution or the effects of the French Revolution destroy the guilds. Anybody can be a tailor or a shoemaker or whatever, a glassmaker. If you learn the skills, there's no one who's going to say no, you can't get in this union. You might be able to get in this mutual aid society or friendly society, but you can't do the work because the guilds are gone. The French Revolution banishes the guilds, laissez faire, Adam Smith, et cetera, et cetera. There are laws against unions. Strikes are not legal in France until 1864. The Combination Acts are reinforced by the fear in Britain of the French Revolution. But what happens is you've got what Bill Sewell, a friend of mine in Chicago, has called the crisis of expansion. You've got all these people now who say, "Hey, I'm going to be a tailor, too." If you lived in Berlin or someplace in the 1840s, you would hear tailors walking along the street pushing carts full of clothes that they had made from the beginning to the end, of suits being sold for practically nothing. Why? Because there's so many other people making suits as well. Also, mechanized production means that you can buy suits off the rack, and they're getting very little for their suits. They didn't wake up and think, "Gee, I can't remember how to make a suit." They got these suits. They can make them and they can't sell them. Their wages are declining. Are they mad? They're furious. Who do they blame? They blame the state and they blame the bourgeoisie, the middle class, the middlemen. For example, in the case of tailors, there are a lot of middlemen who've got capital. What do you do? You say, "Look, I'm going to get a bunch of suits made. Here are all these tailors, they don't have enough to work." I'm not giving you a very good example, because I don't know a damn thing about being a tailor.
But they say, "Okay, you guys do the sleeves. You guys do the pants, because you can do them all one after another and you don't have to worry about doing the rest of it. Then I'll pick up everything that you make." This is a continuation of rural industry. "Then I will sell it in the markets." Into World War I you still had single women in Paris chained to their sewing machine, not literally, but they've got to pay off their sewing machine. The sewing machine starts before electricity, but carries on after electricity comes along. They're working by themselves. Their day isn't cut short anymore by the end of daylight. It's cut short by sheer fatigue and producing these goods for this market. These tailors, and shoemakers, and all of that, they're in every single movement. They are the ones who first say, "Hey, you know what? All we workers, we've got some stuff in common." This is amplified by residential patterns, people living in and around where they work, et cetera, et cetera. Mechanization also--I'll give you an example that I do know something about, which is porcelain. Porcelain is one of these products that's a luxury good. Renoir, the great painter, started out--he was born in Limoges, France in 1841--Renoir starts out decorating plates. He painted plates. Along comes this new technological innovation. If you did, and I only did very briefly, make those model airplanes and stuff like that, there were little decals that you'd stick on the plane to represent the Spitfire, or whatever American fighters or boats. I'm not such a war guy, so I stopped that pretty quickly. So, somebody invents one of these decals that can be baked on to high quality plates between the first and second baking. Porcelain remains a luxury good. The people that used to paint them are sent to the warehouse where they work for about a third of what they would make as skilled painters. They didn't wake up one day and say, "Geez, I can't remember how you paint a plate anymore."
No one's going to pay them to paint plates except for very special orders. Glassmaking is the same thing. People that formed bottles used to be very well paid. Then a machine is invented that comes along and does the same thing. It turns out bottles by the zillions to be filled with wine and whatever. They're out of luck. Are they mad? They're furious. Pretty soon they start thinking, "You know these unskilled people, we have some of the same grievances." They begin seeing themselves as a class apart. Class consciousness isn't sort of an invention of lefties from the 1970s like yours truly. It's not at all. It was a reality. It wasn't for everybody, but if you read a lot of literature, especially from London or from anywhere, you see the kinds of solidarities that people had because of their social class, and the sense that they formed a class apart and were relegated to sort of a permanent proletarian status by forces that they couldn't control--the state and big money and big capital. People would be a little better off if they were thinking about that now. Anyway, that's that. Having said that, I want to turn in the last ten minutes to--did I get all that in? Yes! I want to turn to something that complements that. That is a discussion of industrial discipline. One thing, as workers learn to strike, going out on strike for better working conditions, for more money, for better hours, shorter hours, et cetera, et cetera--one has to imagine what the world looked like for them. What did they think about things that were happening to them? One of the things that had happened to them was this sort of nineteenth-century end stage of the Industrial Revolution, that is, factory production. If you were an artisan, if you were a tailor--I keep using these examples, but they're so stuck in my mind--or shoemaker, you basically worked when you wanted. You worked in response to demand for your product. Many of these people were on the move, going from one place to the next.
But you worked kind of when you wanted to, or when there was a demand for your product. If you were in domestic industry, and you were a woman working in the hinterland of Zurich, you worked when there was demand for your work. Then you took time off to nurse your child or to take account of the family, to see how they were doing, if there was enough to tide them through until the next week. You more or less worked on your own. A pottery baron called Josiah Wedgwood, you've probably heard of Wedgwood pottery, just before 1800 he's trying to think about how you make all these workers that he had--how do you make them respond in the very same way, so they don't just kind of get up and wander off or spend time talking or enjoying themselves? How do you get them all to work at his single command? His dream, his fantasy was that he wanted a set of workers that responded as fingers on two hands in response to his command. That's what he wants. He and his successors create strategies of doing just that. In doing so they launch this sort of protracted struggle, which is very revealing about the bigger processes at play in the nineteenth century. Factories have a lot to do with that. Here too, I say that with such intensity because of my bad experiences working in factories. I once was working in Alice Love's Jams and Jellies. I was supposed to be to work about 6:00 in the morning after a night I probably shouldn't have had. The last thing I remember was the guy. He didn't like me because I was a college guy. I always had my mighty maize and blue Michigan shirt on. He said, "Listen, idiot"--I was on jams and jelly duty. There was a huge machine. You have to imagine an enormous accordion. They'd put all these berries in there. Then the press would squeeze them into jelly, which we would drink or eat and make ourselves sick. It would build up a lot of pressure and the last thing I remember him saying was, "Listen, idiot," that was me, "don't leave your finger on that button very long."
As I was trying to figure out who had beaten whom the night before in the American League, the thing blew up. This enormous tidal wave of boysenberry juice engulfed me and I was burned. Actually, I was out on sick leave for two weeks or I was just down playing basketball and getting paid to do that. This tidal wave of boysenberries, a forklift with about something like 2,000 jars of apple butter spun out of control. It was a terrible, terrible mess. But the point of this is that I hated the foreman. As I left, I said, "Too bad for you, foreman." I take that back. I didn't say that. Anyway, the point of this is that factories become first of all a way of maintaining industrial discipline. In the first factories in Britain, it wasn't that you had these machines that were there immediately. James Watt's steam engine was not used really for about fifteen years after it was made, because there weren't many things it could do. The first factories were there putting together artisans, semi-skilled workers, and unskilled workers as a form of industrial discipline. When you think, if you see postcards--at the end of the nineteenth century, really about 1900, the craze for postcards begins in Europe and in the United States, too. Now, these postcards are extremely expensive if they have people in them, particularly people at work. They're really, really--and I have all sorts of them from Limoges and the porcelain industry and from the strikes. But if you see these pictures, when workers had their pictures taken together, they're always in front of the door. Why? You had to enter the door or leave the door. The signal was given by the clock and by the bell that called you to work. If you were late, too bad for you. You could be docked or fired, and there are lots more people out there who would like to have those jobs.
What happens in the nineteenth century is that the factory, before really its role as a houser of big thundering machines, in many places the factory was first and foremost a way of putting discipline on workers. There's a terrible case in Brooklyn, I think in 1912 or something, where 150 or 200 women were burned to death because the bosses or the foreman had locked the doors, so they couldn't go out and "chatter." What they begin to do in the middle of the nineteenth century is have rules, regulations for work, what you can do, what you can't do, and what you must do. You can't talk. If you were a porcelain worker and something blew up in the oven, that was docked from your salary. In order to watch over these workers, they bring in the foreman, fore-people. There was a strike in Limoges because the fore-person, a woman, was very religious and she made the workers kneel down on the ground and pray with their knees on the stone before work started. No separation of church and work there. They bring in foremen who are going to enforce these--to see if you're a good worker or if you're a bad worker. Now, workers resent this very much. How did workers view the bosses, for example? In the 1820s and 1830s, you're still working in smaller units of production in most places in Europe if you're in a factory. You've got an issue with the boss. The boss is somebody who might give you a little extra on Christmas, or something like that. The boss is somebody you knew. There was a sense of, "Well, you're not doing me right now. This isn't right and I'm going to leave until you get it together and do better by me." The boss is a presence. He's there all the time, as my boss at Dennis Uniform Manufacturing Company, also in Portland, Oregon. He was there all the time. By the time the foremen start coming in, the foreman is representing the boss. The foreman is somebody who's brought in from the outside or promoted, often unjustly, from within. 
The foreman replaces the boss as the one who's hitting on the young female workers. They call it in French the droit de cuissage--that's rather crude--the right of hitting on and scoring, putting oneself in a power relationship with a female employee. The foreman begins to represent the boss. Now, during strikes, the language of workers is, "The boss, he's a letch. He's a drunk. He eats too much." He doesn't care whether you live or die. He's still somebody you see kind of walking through and all that. You don't like him that much, but he's still a presence. The strikes at the end of the nineteenth century are very, very different. The boss often is a very distant person. He's sending telegrams from London or sending telegrams from Frankfurt to his foreman demanding this or that. In the case of a strike in Limoges, France, the owner of the factory was an American called David Haviland, as in Haviland porcelain, who at one point actually demands that the U.S. Embassy send in the U.S. Marines, as if that was possible, in order to put an end to this disturbance, this disorder in his factory. He or she has become a symbol of capitalism, protected by the state and protected by the army. This is how workers, not all workers, but in many cases, view the boss. Industrial discipline has been imposed by these rules, these regulations, and these foremen. If you don't like it, too bad. Women workers are no longer allowed to nurse their children, to bring their children or to go out and nurse them. They are forced to eat inside the factory. That's why tuberculosis rates are enormous, particularly in mining and in factories. You'll see this in Germinal. It's really kind of an amazing book. This view of workers of their bosses tells you something about this long process--it's very uneven and not everywhere, but still there--that explains these massive movements of strikes that you found in all sorts of countries--Northern Italy, Barcelona, Moscow in 1917.
Huge strikes would be terribly important in 1917 in Moscow. Then something else happens, and I'm going to end with this, because it's in a minor way an amazing tale about a colleague, a brilliant woman I know called Michelle Perrot, something she wrote in the late 1970s. It tells you so much about our time. In the beginning of the twentieth century an American engineer called Taylor comes along. Remember, this is the time when the Olympics have started up again. You measure how fast people can run the 100 or how far they can throw the shot-put. You're measuring things. Car races have begun. Bicycle races, which were a bloody spectacle with bikes careening out of control all the time. You see working class heroes getting just mangled, in part through each other's manipulations of trying to knock them off. But you're measuring things. Taylor comes up with this way of measuring units of production, the ultimate in industrial discipline. You were on the assembly line. They will count the number of units you can do, of jars of apple butter that you can turn out. If you're not turning out enough, "See ya', we'll get somebody else. A lot of people want that job. We'll see ya'." That's why the wages stay down, because there are a lot of people who want those jobs because of the growth. He becomes the darling of the French car manufacturers. He becomes the darling. He is a hot item. He would be on People magazine, if they had had one, because he comes and tells them how they can get more effort out of their tired, fatigued workers by counting them. It's just like that all the way. Michelle Perrot wrote a brilliant article in a book I edited a long time ago; her article was called "The Three Ages of Industrial Discipline." She had an amazing phrase for the late 1970s. This was before, thank god, cell phones. It was before personal computers and all of that. There were computers, but they weren't personal computers.
She said that in this post-industrial age, where you've got the rust belt, and you've got factories being torn down outside Detroit, and in Flint, and in Torrington, and Waterbury, and places like that, in Pittsburgh, and almost anywhere you can name that was the heart of the American industrial experience. She predicted in 1978 that what would replace Taylorism would be the computer. The computer will measure your performance in your cubicles. She said in the end, the foreman would be replaced by "the quiet violence of the computer." Kind of amazing. See you on Wednesday.
European Civilization, 1648-1945, with John Merriman
Lecture 9: Middle Classes
Prof: You know why I am dressed up? When I do this course and when I do the first half of the French course I do a lecture on the bourgeoisie, the middle classes. Middle class was a form of self-identity that was constructed in the way being a worker was constructed, or being a noble. One day I was about to go out among you all and talk about Daumier, and show you some Daumier slides about the bourgeoisie, and my wife said to me, "You can't go talk about the bourgeoisie looking like you usually do. You've got to look like you mean it, like you have a vague sense of knowing it." So, as a result, look at this. I wear this about once a year. Unfortunately, I wear it to funerals. The last time I wore it was to something Bill Clinton had, with some mutual friends. I only have one tie that I share with my son. We had to find him another tie underneath his soccer shoes. Then we got into New York and went to this party, and we're all dolled up and all that. Then we went out to a restaurant and I lost my one tie. The last time I bought a tie, ties cost fifteen dollars. In Ann Arbor I bought a tie. This is a seventy-five dollar tie. This is my only tie. That's a long way of answering your question about why I look like this today. But I hope to make some sense of that in the lecture. So, thank you very much. I didn't set that question up, did I? I didn't ask you, "Please ask that question." When you're looking at me dressed, it's not Halloween. That's the first thing I thought. When you look at me dressed like this, please try to think, knowing me a little bit as you do, why it was that it meant a lot to dress like this in the nineteenth century. The middle classes started dressing like this in the nineteenth century, dark with a little bit of color. When you see Daumier or you see Delacroix's famous Liberty Leading the People--I forgot the slide--and you see the bourgeois there with his top hat, he's dressed in a bourgeois uniform like this.
That emerges out of the bourgeois century. While last time we talked about the construction of class identity for ordinary people, for working people, the bourgeoisie had as strong a sense of self-identity as any social class you could imagine. It was, as I'll make the point in a minute, difficult to get into that class if you weren't born into it. The fear of falling out of it was something that helps motivate lots of political things in the nineteenth century. The nineteenth century, in terms of being the bourgeois century--one of the things you see in countries, particularly in Western Europe and Great Britain, in France and in Germany, and in Italy, is you see the middle classes wanting the political power commensurate with their economic status. If, in the eighteenth century--this is one of those truisms that happens to be true and can be exaggerated--the aristocracy, you were born into the aristocracy. If you hit the big time and you get lucky, you can buy your way in, thanks to the broke French monarchy. But the ideal aristocrat, and this is how an aristocrat would have talked about him or herself, was born into the aristocracy through blood, through family. It was an ascribed status. In the nineteenth century one of the things that happens with the French Revolution and with Napoleon is that the middle-class person and middle-class values seem to be something to be emulated. Once we've got an increase in the wealth of the middle classes and the diversity and complexity that I'll talk about in a minute, then you wanted the political power. You wanted the right to vote. You wanted access to information through the press and print culture. All of these things are closely tied to the middle classes. That's what I'm going to talk about today. Most of it is about bourgeois culture. That's why I'm dressed like this. 
I assure you that the minute this lecture is over, I will go back and like--I could never compare myself to Clark Kent--but I will find my phone booth and change back into normal duds. Let's talk a little bit about the middle class in the bourgeois century. The middle classes or the bourgeoisie are terms that we conveniently use. Marx talked about the bourgeoisie as being this extremely homogenous class. In fact, the word "bourgeois" has really more cultural connotations, maybe, than objective or social categorization, living in a bourgeois manner. We'll see some aspects of that in terms of access to private space, middle-class concepts of childhood, and that sort of thing. Middle classes is probably a better term. Bourgeois is the equivalent of burgher, but middle classes is probably, for our point of view, a better term. It seems rather odd to be talking about the English bourgeoisie of Leeds, about which there is an excellent book, because bourgeois, after all, started out as a French word. In using and indeed insisting on the term "middle classes," what I'm suggesting is the enormous complexity of the middle class. There wasn't just one middle class. Yet the middle classes shared some cultural values and symbols in common and when challenged by ordinary people could snap back in an extremely cohesive class-based manner. Marx had some of that quite correctly. In a Parisian newspaper called the Journal des Débats in 1847, someone actually did a pretty damn good job of describing the bourgeoisie. "The bourgeoisie is not a class," the person argued. "It is a position. One acquires that position and one loses it. Work, thrift, and ability confer it," he argued, referring to himself, of course. "Vice, dissipation, and idleness mean that it can be lost." And, so, that old kind of aristocratic ethos of not working, of being idle, although it can be exaggerated, as we've seen in the seventeenth and eighteenth centuries, nonetheless there was something to it.
An eighteenth century noble let his fingernails grow long, sort of just hung out showing his good taste by living in an idle, aristocratic manner. The bourgeoisie did anything but that. Work was part of how they believed to get ahead, and getting ahead is what they wanted to do. The French Revolution, and here's an important point, I guess, opened the way by removing legal blocks in very many places to the career open to talents. Napoleon used to say tediously that in each soldier's backpack there was a marshal's baton, or staff, that you could get promoted with good work, hard work, if you didn't get your head blown off in one of these battles. But certainly one of the things that comes out of his insistence on service to the state is creating a whole series of rewards that recompensed virtuous action and hard work. That's what the Légion d'honneur, the Legion of Honor, was all about. Making money was part of it. Of course, it was always in the nineteenth century sort of classic to poke fun at bourgeois culture, and in some cases the lack of it, and to ascribe to the middle classes philistine habits in which making money was really the only thing that counted. Certainly, Friedrich Engels, Marx's socialist partner--obsessed, as well he should have been, with the slums of the satanic mills of Manchester--once wrote the following. He says, "One day I walked with one of these middle class gentlemen into Manchester. I spoke to him about the disgraceful, unhealthy slums and drew his attention to the disgusting condition of that part of town in which the factory workers lived. I declared that I'd never seen so badly built a town in my life. He listened patiently and at the corner of the street at which we parted, he remarked, 'And yet, there is a good deal of money to be made here. Good morning, sir.' And he walked away."
One employer wrote in the 1830s--relative to his workers, and I couldn't invent this--that the worker "should be constantly harassed by need, for then he will not set his children a bad example and his poverty will be the guarantee of good behavior." Of course, this is a caricature of middle class self-absorption, of narcissism, of this inveterate cruelty to the classes below them. On the other hand, the more we study the middle classes--and in the 1960s people really didn't study the middle classes because they didn't like them very much. They studied workers. But there's been an awful lot of good work done on the middle classes. Among them my dear friend Peter Gay, whose five volumes on the bourgeois century take on the idea that the middle class lived without passion, and were philistines, and that sort of thing. The more we look at the middle class now, we see certainly that no matter where you look one of the things the middle class people did was form voluntary associations. Aristocrats didn't form voluntary associations. They didn't need to. The middle class formed voluntary associations, and many of these were for extremely charitable purposes, particularly in Britain. Again, the study that I referred to by somebody called Morris--I think it's Morris--on Leeds shows the kind of richness and depth to these voluntary associations in which people try to do an awful lot for ordinary people. It has a sense of moralizing. There's always this sort of top-down look about moralizing them, and trying to get the workers to drink less, trying to get them to go to church, trying to get them, when it was possible, for their children to become educated and stay in school. There's always this tension between families who needed children's income, however small that was. Across the nineteenth century, over a very long period, laws finally by the end of the century in most places made at least primary education obligatory, and in most cases free.
Here's a ridiculous example. It's not a ridiculous example if you love animals. I'm a cat person, as I already said. The Society for the Prevention of Cruelty to Animals, these sorts of organizations really are one of the classic examples of bourgeois voluntary associations doing good things. They also get together to hang out with each other and sort of try to gauge who has more money than the other, and they get together for social reasons in the coffeehouses of England, and in the clubs, circles you call them in France, and their equivalents in Germany, and Italy, and Spain. One of the more ludicrous kinds of mottos, we call it a devise, a motto, of the Society for the Prevention of Cruelty to Animals was in one of the organizations in France, which said, "One must love animals, but not fraternize with animals." I don't know what that means, but the main thing is that they wanted to save animals from being beaten, almost beaten to death in many cases of horses. You can see how, in places in which bullfighting persisted over the long run in the nineteenth and twentieth century, such as the very south of France and in Spain--there were always movements to try to protect the bulls, which seems like a reasonable thing to do. For all the bad press that the middle class has had, and you can read some of this bad press in what you're reading, there is also this good side that should be evoked as well. That's a period. Certainly, in terms of organized religion, the middle class goes to church more than ordinary people, than workers, for sure. In the case of peasants it depends on where. As I said before, in many parts of France, the example that's well studied, you still had this de-Christianization. But certainly religion was a fundamental part of the British middle class's view of itself. The percentage of people who went to church could be exaggerated. There was a study in all of England.
I don't think it was in Wales and Scotland, but at least it was in England, maybe in Wales, too, probably in Wales as well. I think it was in 1851 where they decided to look at every single church in England and Wales, let's say, and to see how many people went to church. They found to their horror that it was less than they thought. They also discovered that if everybody who had wanted to go to church had gone to various churches, Methodist for more ordinary people, Anglican, Catholic for the Irish and for a certain minority of British citizens, or Jews going to synagogue in the east end of London, that they couldn't have accommodated all these people. So there's a massive kind of church building campaign that has its counterpart in almost every country as well. Certainly in France after the Paris Commune of 1871 they start building churches in the working class districts perched on the edge of cities. More about that in another lecture. One could go on and on about this. Religion for the middle classes has a greater role in their lives than in working class cities. In the case of the peasants, there weren't any peasants left in England. I'll talk about that and it will be fun to talk about in one of these lectures. Anyway, there we go. How many people would have considered themselves middle-class? Again, self-identity, how people thought of themselves is one of those aspects that we want to discuss. How do we know? How would you know who is middle-class? When they first started doing censuses--and censuses are really a nineteenth-century phenomenon, and subsequent centuries, as I said before. The first census was in Copenhagen, I think, in the eighteenth century. The first real censuses do not come until the nineteenth century almost everywhere. They didn't ask people--they asked you your name and where you lived. In some cases they asked you your profession. But they did not say, "Are you middle class?" or "Are you not middle class?" 
There was a whole lot of work done in the 1970s on what they used to call the new urban history, which is counting people up and deciding who might well have considered themselves middle-class. There are a lot of dissertations written on that kind of thing. There was one in the case of Paris. Inevitably I have to talk some about Paris because the work is so rich there. A woman called Adeline Daumard wrote a dissertation that was subsequently published called Les Bourgeois de Paris, or The Bourgeois of Paris in the First Half of the Nineteenth Century. What she did is she looked at wills. The middle-class people had enough money to leave wills, therefore, their inventories after death. That's what you call them. That's one of the reasons we know about the explosion of print culture, because they inventoried the books that people read. I mentioned this in the context of the Enlightenment, too, because you do have that, too. Taking the kinds of ways that she looked at social class, she determined that somewhere between seventeen and nineteen percent of the Parisian population in the first half of the nineteenth century would have been considered bourgeois, and would have considered themselves bourgeois, that is, in the middle classes. In Britain the percentage is higher. It probably approaches twenty-five percent. I can't remember the exact figures. That percent will continue to increase in the nineteenth century. You can already very well anticipate, from what you already know, which other parts of Europe had large, important middle classes. The old Hanseatic port cities of Germany, the German free cities that would become part of unified Germany in 1871--northern German cities in general, like Bremen, and Lübeck, and Hamburg above all. Hamburg's a huge port city. It's got an enormous bourgeoisie. If you went to Madrid, you'd find a sizeable middle class, but it would be nothing that you would have if you compared Madrid to Barcelona.
Barcelona has a really natural economy based upon important economic relations between its hinterland and Barcelona, and between Barcelona and the world, because it's a major port. So, you've got this big teeming middle class there as well. In the case of France, obviously places that have lots of industry and small businesses have middle-class people in large numbers, though not as large numbers as workers. Lyon would be a good example. Lyon has the most tightly closed middle class that you can imagine and still is. Lyon is very Lyon. What can one say? Again, in northern Italy you find a huge vibrant middle class, but not in southern Italy. Naples is one of the biggest cities in Europe right through the early-modern period. You've got a large middle class, but most of Italy is extremely rural and what you had in Rome is you had clergy. It's a city, so you've got an important middle class. The further east you get, the smaller the middle class gets. In Russia, the estimates are about two percent of the population were middle class. Two percent, which isn't very much at all. And, of course, they are clustered in Moscow and in St. Petersburg, and in Kiev, now Ukraine, always Ukraine but then part of Russia, in the large cities. In Poland, Warsaw had a large one--I was just at a history museum, a fascinating one, the Warsaw Museum, a couple months ago. Warsaw, like Krakow, had a big middle class. Gdańsk, obviously, because it's a port city--but much of Poland was rural and wouldn't have that kind of middle class. Belgrade would have been the only city in the Balkans--outside of Istanbul, but Istanbul isn't in the Balkans--with an important middle class. This is all perfectly obvious. Anyway, who are these folks and what do they want? They're not all--how am I going to do this? I'm going to do it like this. You have to imagine the middle class like this, that it's a pyramid. It's a pyramid with a small top and a big bottom.
I'll show you a lithograph that really represents, two of these, in very interesting ways, I think compelling ways at the beginning. At the very top--think of Zurich. Think of any city you want. Zurich has a big middle class. So does Geneva for obvious reasons. But at the very top there are the great bourgeoisie, the big bourgeoisie. These are people who are big financiers. The nineteenth century bankers will become much more important for perfectly obvious reasons. These are big wholesale merchants who are making bundles shipping things from here to there. You won't yet find lawyers and people like that. What also makes them the high bourgeoisie or the big bourgeoisie, a small percentage, it doesn't really matter where this line goes, is that they have access to political power. Even if they're in Prussia, a place that's dominated by the nobles who are called the Junkers, as most of you know, they will still have access by virtue of their wealth to political power, which is exactly the way they want it. There's a revolution in France in 1830, yet another one that you can read about. Arguably--Marx says this and in a way it's sort of true--what it does is it brings to power in France the big bourgeoisie, and they have the ear of the king, Louis-Philippe, who calls himself the Citizen King. He would rule from 1830 to 1848. In the portraits of him, the paintings that he had done to represent who he was are very different than those of the Bourbon kings. The Bourbon kings are all looking like, even the pathetic successors of Louis XIV, they're looking like big people in chateaus who are kings of all that they see, which of course was more or less the case. Louis-Philippe's view of himself was that he was the Citizen King. That's what he calls himself. He's still the king. He was noble. He was not any bourgeois. But in the official paintings of him you see people dressed like me who are coming into the throne room. They're dressed like me in dark suits. 
They have power. He wants them in the painting with him. That's terribly revealing. It's terribly interesting. So, these are people, these are big bankers, high financiers at the top. Then you've got other layers of bourgeoisie. You can kind of fill in the gap. Here we have smaller bankers, not in size but in money, industrialists, merchants, these kinds of people, and Daumier's, the great caricaturist's, least favorite people--lawyers. Lawyers rise up rapidly in popular esteem and usefulness. The middle class likes to see themselves as useful. You find lawyers reaching in there and, slowly, doctors. Remember doctors had very low social status. They were sort of a cut above--the bad pun I make in what you read--ordinary field surgeons during Napoleonic battles, some of whom were butchers or people that knew how to wield a knife. Doctors develop a self-identity and become more important in the nineteenth century. You also find notaries no matter what country you're in. Notaries have a much bigger role in Europe than they do here. Notaries know where all the goodies are. When you buy property in France, by the way, if you have a mortgage, twelve percent right off the top goes to the notary just for holding your deed in his office. If you don't have a mortgage, you pay seven percent right off the top. So, notaries know all of the secrets of people with money. Notaries are important in all these countries, et cetera, et cetera. You can kind of fill in the occupation, but they share things together. Then at the bottom you have the petty bourgeoisie, and everybody's making fun of the petty bourgeoisie, but they too had a self-identity. I found one day in the stacks of the library at the University of Michigan a pamphlet that was actually the report on what surely must have been the last, but in any case was the first, World Congress of the Petty Bourgeoisie. They met, appropriately enough, in Brussels.
Can you imagine going to a professional history conference where they all had their little nametags? All they do is they start up your body and look at your nametags, and see if it's worth looking at your face. It's really pathetic. Can you imagine going to a conference like the World Congress of the Petty Bourgeoisie? "Hi, my name is Albert." But they had a self-identity. Who are in the petty bourgeoisie? Lots of these classically new nineteenth-century professions--schoolteachers. Schoolteachers were a way of social mobility for peasant families, whether they were in Italy, Germany, Austria, Hungary, Switzerland, no matter where they were. Out of the working class or out of the peasantry female schoolteachers become increasingly more important. They always were in Catholic schools because they were nuns with the big hats and all that, and doing a very good job, even though often they were undereducated and it was kind of hard for them to do that. But schoolteachers you'd find here, and also café or tavern owners, weinstube owners. I'm just giving you a couple examples. These are the petty bourgeoisie. Also, very importantly, what do you do with artisans and craftsmen? Master artisans own the tools that their journeymen work with. They rent or own their shops. When things are going pretty well they do pretty well themselves. But when things aren't going well, they don't do well. That's why they're on the barricades all these times, as you know, in the French Revolution--the French revolutions--and in the revolutions of 1848, as you shall discover in Vienna and Berlin, and other places. They're always there. These folks are here, too. This is your basic petty bourgeoisie. People are always dumping all over them needlessly. I will give you some example. If you've ever read the great French novelist--he was paid by the word, as you can see when he has descriptions of single sofas that go on for about two pages--but Balzac.
Balzac is really the novelist of the bourgeoisie. When he describes Paris and the seventeen to nineteen percent of the population who are increasingly living in the western part of Paris, more about that another time, he describes it as a jungle. You count your money in the morning and then you count your money when you come home. By the way, your wife would in the census be listed as not working. If you were a shopkeeper, your wife was the one who took care of the accounts. Your wife was the one who stood behind the counter when you were working, when you were an artisan. He describes this as a jungle. In order to really give an image of what it was like, I've got to find this thing someplace, but Daumier's got this one magnificent print called the "Street of the Four Winds." That's a street in Paris, rue des quatre vents, near the Odéon. It doesn't matter. But here's a guy dressed like me. There's a theme in this. He's dressed like me and he's wearing his bourgeois hat. I don't have one of those. My only hat has an M for Michigan on it. His one suit isn't going to blow off his body. But the wind is taking his hat, which is a symbol of who he is. The wind is carrying it away from his hand. In several hundred brushstrokes, Daumier captures the look of panic on his face because he's going to go home without his hat, and his wife's going to say, "Where's your hat?" He'll say, "The wind blew it away," and he's got to buy one, and they've got to put the money together so he is not going to fall off the ladder in this jungle. Then you have to imagine this as a ladder, like this. Social mobility is the goal. You want to have enough money to leave to your 2.2 children. Then to really make this go you'd have to have vines up here like the jungle. Then you'd have to grease this pole through bad economic times.
Let's say in Europe 1816-17--don't write this down, if you do, you're compulsive--I'm compulsive--but 1826-27, 1840-41, a really bad one, 1846-47, 1855, those are the really bad years. At that point, if you don't get credit, that's what's going on now, here. If you can't get credit because people withdraw the credit, same thing, then here you go. Look out below. You slide down this pole. What happens down below here? Holy cow! That's the big sea. I saw this wretched movie called the Poseidon Adventure once. It had an image where the water is kind of coming up below and it's going to finally get to the top and there's no more room to breathe. This is how the people on the bottom part of this ladder viewed the demands of the working class. They want to vote, too. What if they vote and somebody wants to raise your taxes or something like that? Boy, that's scary. But what's down here? This is ordinary people. This is the other, what would it be in the case of Paris, eighty-three percent of the population. You're going to fall into the ranks of the proletariat if you're on the bottom rungs here. This is your jungle and you're trying to make it up there to the big time. The chances are that in these bad years you're going to fall down. But yet lots of people get up and the ranks of the middle class increase everywhere in the nineteenth century, in Russia, too, everywhere. That's simply the case. Now, if I could just bring this down and show you how this works, and talk about some accoutrements of middle class culture that you will recognize, many of you. This is the guy at the top. This is Daumier. Daumier is the greatest caricaturist in the nineteenth century and arguably ever, to make an extreme assertion, but it really is pretty true. This is what he captures, the prevailing mood in much of Europe: that money, more than blood--if you were going to exclude places like Hungary, Poland, Spain, and Prussia--money talks more than blood. What is the man doing?
He's counting his money. Remember I said that you counted your money in the morning and then you came home at night, and counted your money again to see how you've done. This is great. Remember I said the variations within the bourgeoisie? You can see this. Some of these images, this is really not very interesting art, but that's not the point. Look what this shows. The guy at the left here is a clerk. That's a very nineteenth-century profession, as it is for every subject. By the way, this is before the 1860s, because that's when real fountain pens are invented. He's got your basic quill pen there. Now look at the coats. They both have coats like mine, but there's a huge difference in them. This guy, if you have extraordinary eyes and can read upside down, you will be able to see that he is reading a newspaper on the price of colonial goods, imports. He is a wholesale merchant. He's one of these people that's at the very top of my triangle there. Look, this guy's got his coat, too. This is early in the century. You can tell. This is either the son-in-law or the would-be son-in-law. The bourgeoisie didn't kiss and hug a lot. But he's got his hand draped rather daintily on the old guy's arm here. He's not about to embrace him and give him a big kiss on each cheek. One day all of this stuff will be his, if he plays his cards right. They still had arranged marriages. Love could count for something, but marriages were still essentially--less so for the middle classes than for ordinary people--economic relationships. That's what they were. They were economic relationships, wrangling over the dowry and that kind of thing. Look at our guy on the left. He's working very hard there. This pole that is put up there has a real sense of dividing these. It's like the barriers on my quite arbitrary, and not terribly well designed, triangle there. Do these people have something in common? Yes, they will in 1830 and they will in 1848, but the rest of the time they don't.
He's dreaming about being this guy. He'll work very hard and he's educated. He probably had no secondary education. Most people didn't go to high school, secondary, lycée in France or gymnasium in Germany, et cetera, et cetera. It represents this world. By the way, we also know that this takes place in the center of Paris, right behind a big department store, subsequently the Hotel de Ville, but right near the town hall. Anyway, there we go. I've got to get my watch so I can keep track of things here. This is very common. You see this in the book you're reading, I think. These things can be represented spatially very easily. One of the themes of the long run is the emergence of increased development of prosperous western Paris, prosperous western London, prosperous center Vienna and other places, and increasingly impoverished east and the periphery. That's another theme. Still, through much of the period, and to a lesser extent still today, where you live in a building reflected how much money you had. The ground floor, in French the rez-de-chaussée--this is the concierge there. The concierge will be somebody of very modest means. You'd probably place them in the petty bourgeoisie there. Then the big apartment on the first floor, high ceilings, big party, lots of people dressed like me there, a piano, more about pianos in a minute. So, this is my triangle upside down, isn't it? The more you go up there, you're still within the middle class. The guy above has these little Napoleonic beds there. I hope he's not closing his ears against his own baby there--but no, obviously this is a different house. He's a musician. This is all rather banal but nonetheless telling. You've got an artist up here with not much money, but he still has a little bit of furniture, not much, his nosey neighbor looking at his painting. Then on the top you've got the poorest of them all, besides the cat who's on the roof up there, you've got a seamstress.
Anyone looking at this very popular lithograph would immediately see that she has some dignity left. Why? Because she has not yet pawned her mattress. In Zola's great novel, L'Assommoir, Gervaise dies like a dog on a bed of straw, because there was no more mattress. She must be at the very top. Now these rooms then became in the twentieth century student rooms and then were transformed into enormously expensive lofts. But this is a way of visualizing the spatial concomitance of what I'm talking about. People were aware of what these symbols meant. This is your classic Hamburg financier's apartment. We don't need to go on and on about the kind of material culture of wealth, but there it is. Let's go on and on about it in another one that's easier to pick up for your eyes there. Here again, we know we're on one of the lower floors. Why? Because you see the trees outside the window. You've got a domesticated animal. Ordinary people didn't have as many domesticated--dogs had a real purpose. They bring the sheep down the mountain. My wife just came down the mountain two weeks ago bringing the sheep down from friends of ours in the village. All these dogs are useful things to keep the sheep in line and all of that. This is all obvious stuff. You've got slippers. Ordinary people did not wear slippers. You've got a domestic servant. Domestic servants cost almost nothing. It was considered to be a way of moving up the ladder to say that you had four domestic servants instead of three. You've got brass or copper here on the heater. That's a good sign. You've got very fancy chairs. Look, these are very good chairs, sort of Louis-Philippe chairs. You've got print culture, a big old porcelain plate above there, and you've got that bourgeois accoutrement, the piano. The piano replaces the harpsichord. Leon Plantinga, who is in J.E.
College, who's a retired professor of music, has got great stuff on this, the role of the piano and the emergence, along with William Weber, who taught at Long Beach, of the public concert, as opposed to the chateau concert or the church concert, the public concert. Along with that comes the piano. Pianos were expensive, but the middle class has pianos. Working people don't have pianos. Middle-class people have pianos. You also see something else that's important here. There's more than one room. You'll see in a minute there's even more than two rooms. There are lots of rooms. What the middle class wants, all those people in that triangle, they want privacy. They want privacy. They want their own rooms. She's playing the piano. It's all obvious stuff. There's the kitchen. This is not the wife. This is the domestic with her children, who are part of the team who has been hired to help run this household. There again you see the trees. We're in the same apartment there. You have real, real copper pots. Back before the Bush dollar, people would buy and bring back from Paris and from Europe these enormously heavy copper pots. I've carried so many of them back. It's just incredible. Ordinary people did not cook with things like that. There we go. These are the kinds of symbols of all of this. The middle class wants privacy and they also developed something else. This is almost trite to say, because so many people have said it and it can be exaggerated, but the middle class arguably helps create the notion of childhood. In many early-modern paintings children are portrayed as sort of little squished up adults and that sort of thing. Children come into their own in the nineteenth century. Ordinary peasants' children, everybody slept with the animals often along with the adults. Most ordinary people--and some of the worst tenements in Europe were in Edinburgh, and in Glasgow, and in Lille in France, but also in Berlin and lots of places--lived like that. There were no secrets.
Everybody slept in the same room. There were no secrets at all. What the middle class wants, besides social mobility and access to political power, is they want space. The notion of childhood, childhood didn't exist for ordinary people. You started working, helping out when you were five or six years old. You started tending the animals in the little courtyard as they would call it, taking care of chickens and rabbits, and things like that. Working people, their children went to work right away, as soon as they could make anything. If they were poor and didn't have jobs, then they were sent out to beg. Childhood became a middle class phenomenon. To be sure, nobles had children, but it was a different way of bringing up your children. Nobles did not send their children to public schools or even to private schools. They were educated, to some extent at least, by private tutors. Even the notion of the children's hour, the children's room, the idea of a children's room, of having your own room or a room shared with a sibling, was something that was just inconceivable for the majority of Europeans, the vast, vast majority of Europeans. The children's hour--I can even remember the horror show of being summoned for the children's hour, when you're supposed to come out when there were guests and run through your extraordinarily modest bag of tricks for the guests. Then you would be sent sort of packing. Since I couldn't play a note on the piano--I had been expelled from piano after two weeks and sent back to the playing fields by a nun in Portland, Oregon--I didn't have many tricks to show. But the children's hour, all of this stuff comes out of the middle class. How about birth control? How about not having ten or eleven children? We have friends, one of whom unfortunately just died, much older friends who were born in the early 1930s in the south of France. One had thirteen brothers and sisters, and the other eleven. They grew up in absolute misery. 
They were a very, very Catholic family in the center of France. The middle class, particularly the French middle class, start reducing their number of children. France is a particular case because they get rid of primogeniture--elsewhere you could get around the division of land through primogeniture. The plot of land has to be divided up into two, or three, or four, or five, or twelve. What if you own no land? Not so good. So, they begin having 2.2 children or something like that. Birth control--in some parts of Europe people think that birth control really started with peasants and then moves up to the upper classes, but basically, particularly in the case of France where it's been, like most things, studied to death, birth control really begins with the middle classes. They are limiting their children so that their children can be the son of, and inherit the business and hopefully be left with enough money to make it go. A print culture. That was just an example. The whole salon, the idea of going to see art shows. It really starts in the eighteenth century. The middle class wants to be seen rather like the Dutch middle class that we talked about in the seventeenth century. They want to be seen having paintings. They wait in line to go to theatres. This is all Daumier. This is the morceau, the piece that you're obliged to swallow after dinner. Here's the little girl being trotted out to play a few notes for the quite bored people who are sitting there and waiting. Even the idea of "It's your birthday, papa." You didn't take time out to celebrate a birthday if you were an ordinary person having to get to the fields at 4:00 in the morning in the summer, or going to work during the day. The culture of childhood is really all there. Also, there's a whole notion, and here again this would probably fit rather awkwardly into the birth control description, but there's this whole sense of being prepared that emerges with the middle class. 
One of those sort of accoutrements--I once, when I gave the equivalent of this lecture, I had an old battered umbrella. I was trying to explain how people on the top rung were trying to beat down people at the bottom. I ended up smashing this umbrella, sort of the imaginary of somebody smashing their guitar onstage. But the point is that the umbrellas come with the middle class. They are black umbrellas. They're not these big colored things you have now. It was the idea of protecting that one suit. I'm from Oregon. We didn't carry umbrellas, because it rained all the time anyway and I'd just lose it. Umbrellas are middle-class accoutrements along with the piano and along with the children's room, and along with the children's hour, and along with the idea of not having too many children, and along with the top hat, and with the idea of wanting access to information through the newspapers, wanting the right to vote, probably not wanting those people down below you on the ladder to vote, but demanding that you have the right to vote. They all shared these things in common. Lastly--gazing at his watch--in the last one minute thirty-five seconds that remains to me, the bourgeoisie, the middle classes, and this is particularly true of Germany and France, and of England, too, and of other places--they want the right to bear arms. They want to be in the national guard. The national guard might hypothetically be there in case there was an invasion of France or Germany by, I don't know, some distant place, the Finns or something most unlikely. But the main reason they wanted to join the national guard--and you had to own property to be in the national guard. You had to be defined as a property-owning citizen to have the right to vote. In all of these countries the right to vote was defined, until you have universal male suffrage, by how much taxes you paid and how much property you own. You can measure where you are on this ladder by how much taxes you paid. 
They didn't want to pay a lot of taxes, but property reflects one's belief in one's own social worth. That's the way they looked at it. No longer was it the worth of blood. So, they formed these national guards, particularly after revolutions and after 1848, or after 1830. For a while they go march around. But these are mainly there to protect them against the workers. Should one day all of these people try to rise up, climb up this ladder, you'll be down there to stomp on their fingers or to shoot them down. It doesn't last very long. Pretty soon, this guy's tired. This isn't Daumier. I don't know who it is. It doesn't matter. It's not very good. He's had it. He's freezing. His wife is kind of looking at him like, "I don't know why you're doing this stuff, marching around in the middle of the night. No one's going to rise up anyway." This won't last. His old blunderbuss there on the left will be put back in the closet, or taken out to slay deer, or some damn thing. That will be the end of it and they'll turn it over to more professional repressive forces such as armies. Daumier's light lines, and this is the last one, disappear in this painting, which is called the Rue Transnonain, April 15, 1834, in Paris. It's a street that no longer exists. It disappeared when Haussmann built the boulevards in the 1850s and 1860s. It was selected to disappear because it recalled an event in the early 1830s when these bourgeois panicked and started going into a house full of very ordinary people and simply shooting them all. The light lines disappear with Daumier. He did another one of these after a massacre in 1848 in Rouen and it's been lost. We don't have it. Rue Transnonain. H.D. Daumier at the bottom left. The middle classes, for all of their insistence that they have access to information, at least in the case of France they cheered on a press law in 1835 that kept Daumier from touching political scenes such as this which were deemed too sensitive. 
The rue Transnonain, where this happened in the center of Paris, simply disappeared. It didn't quite disappear from the collective memory of people thinking about Parisian things. In conclusion, the middle classes are extremely varied. They share much. They have a common material culture. They share a belief in achieved status, as measured by the amount of property that you had. They want to vote. They want a collective voice in decisions. For all the variety within the middle classes, so beautifully depicted by Daumier and other people, they still, when push came to shove, shared an awful lot in the bourgeois century, that of the nineteenth century. Have a good weekend. See you on Monday.
European Civilization, 1648-1945, with John Merriman
Lecture 15: Imperialists and Boy Scouts
Prof: All right. I'm going to talk about imperialism today. This complements the chapter in the book. The main topic is the New Imperialism, and the lecture is very much about the culture of imperialism. Part of the age of mass politics in Europe in the 1880s and 1890s, before World War I, involved massive support for the New Imperialism. What was new about the New Imperialism? What period do we talk about as having had the New Imperialism? It's really from the mid-1880s, just say the 1880s, to 1914. It's at that point, as you can see from the maps in the book and you can see from the discussion, that the European powers really conquer the world. There's no other way to put it. There's a frenetic, wild chase even to the South Pole as part of that. The African continent, of which there were huge blanks in the maps of Africa, by 1914 virtually the entire continent was not only charted but had been conquered. Europeans really control the globe. The Americans, in a smaller way, are part of the New Imperialism. Let me just start out by posing the question, and I sent all this stuff around to you, so I don't have to scribble on the board and you don't have to try to figure out what it is that's written on the board, because it's hard to see from here. If you were going to point out or to claim that there was a central reason for the New Imperialism--why even Bismarck, who described colonies as an albatross around the neck of Germany, gets into the kind of feeding frenzy himself--it's been put rather cleverly by a guy called Baumgart a long time ago that it comes down to God, gold, and glory. There were those who interpreted the mad quest for colonies as being the missionary impulse. A sort of subset of this would be the French idea that there was a civilizing mission going on and trying to give indigenous peoples access to French culture. 
Basically it argues that Dutch Calvinist ministers, and Lutheran ministers, and Catholic priests, and other denominations encouraged states and their own church people to bring to their religion indigenous peoples all over the place. Well, we can dispense with that one. That was part of it, of course. You can't distinguish any of these three and say that any of them are null. But that is a rather small part of the quest for yet more colonies in the New Imperialism, and indeed for all of the well-meant, however condescending in many cases, quest for religious conversion. Most of the Lutheran ministers and Dutch Calvinist ministers in Southeast Asia, and Catholic priests all over the place such as Vietnam--my friend Charles Keith just finished a dissertation on Catholic Vietnam in the 1920s--most of those priests in areas such as Africa were there to tend to the religious needs of the European communities. It was particularly true of, for example, Lutheran ministers in German Southwest Africa and in other places. The drive to convert peoples to organized European religions was probably greatest, and the Vietnam case is a very good one, and the role of the Catholic Church is extremely interesting in Vietnam and the origins of Vietnamese nationalism. But that is another story. The second one was gold. Gee, I put a "d" for the "o" in gold, but it's spelled G-O-L-D usually. I said in what you're reading that if you get Karl Marx, if he ever sat together with Hobson, a very major economic thinker whom I describe in there, if they were having dinner, there would be a lot that was uncomfortable about the dinner. But they would really agree. They would say that the New Imperialism, of which obviously Hobson was a great critic, emerged out of the quest for riches, for resources. Part of Marxism and part of Leninism, an important part was that imperialism is sort of the final stage of the development of capitalism, and that states need new markets. They need new resources. 
Therefore, at a time of economic crisis--nothing like now, but there is a depression that lasts from 1874 to the mid-1890s--they set out to find new riches. The people going up the Niger River, for example, where I've been in Mali, they expected to find gold around the next bend, or more peanut oil, or diamonds, because of the diamonds in South Africa, which was the equivalent of the gold rush in the U.S. in about 1848 in California. Hobson was no Marxist at all. And he was a critic of the brutality of the New Imperialism, which I'll talk about in a minute. But he said, "If you want to find out where this all began, you look at high finance in the City," the City being the financial district of London, where the high rollers, and the bankers, and the big capitalists are. There are the origins of the New Imperialism. Now, there were critics of the New Imperialism. Most of them, but not all, were in Britain. Many of them opposed the New Imperialism because of the brutality exerted on indigenous peoples by the imperial power. There was a real wave of opposition, for example, to imperialism that swept through Britain and London in 1900 in what they called the Khaki Election, khaki because it was the color of the uniforms of many of the British soldiers in hot climates. Some of the opposition in the Liberal Party ran on a campaign of anti-imperialism. They were just wiped away. They were just absolutely swept away in the elections of 1900. Ordinary people in Britain thrilling to the accounts of colonial exploits voted overwhelmingly for the conservatives who just blow the liberals out of the water, and the labor party exists in 1900, but is not yet a major force. Imperialism carries the day. 
The big parades in London of returning soldiers from the Boer War in South Africa and from other wars, from all the wars, they are greeted as conquering heroes nowhere more frenetically, enthusiastically, exuberantly than the City, because there is a link between big finance, big capital and imperialism. Besides that, we have a category we call social imperialism. The imperialist power saw imperialism as part of the overall strategy of conquests. They said, "Look, if you've got economic problems at home and you've got a lot of unemployed workers--also in France--if you've got a lot of unemployed workers who happen to be socialists, or in Italy, that you could kind of export your problems, because you can point people in the direction and say, ‘Hey, times are tough here. But if you go to Algeria, we'll rip off some Arab land for you and you'll be just fine.' Or ‘You can go make it rich in Vietnam.' Or ‘You can go to Kenya or to Ghana,' (or what would become Kenya or Ghana). ‘You can export your social problems.'" This is sort of what New Imperialism meant. A classic case would be the insurrection of 1851. This is backing up before the New Imperialism. What do they do with the people who are arrested after the insurrection of 1851? A lot of them are sent to Algeria. You export your "social and political problems." The irony there, amazing delicious irony, is their great, great, great, great, great grandchildren end up being right-wing supporters of the National Front, and before that of various right-wing groups that believe in French Algeria and who try to keep the French from leaving Algeria in the early 1960s, after the Algerian war of independence. So, social imperialism is seen by sort of the economic canon, that is, the way of thinking about the political economy of these countries, as a way of keeping things calm at home. They say, "Give people opportunities. Send them to these foreign places." 
Geez, in the case of France I remember reading these gripping, just pathetic stories of these people who just can't make it in the area in which we live in the south of France. They pack up all their stuff and they walk. They walk or they get little push carts, try to get to Avignon, try to get to Marseilles, try to get a boat to get to Morocco, or Tunisia, or Algeria, to try to make a living there. This, too, is part of social imperialism and is part of the idea that somehow social imperialism is economically determined. That it's the final stage of capitalism. Is that the biggest reason? No. But it's damn important. The biggest reason has to do with the entangling alliances and great power rivalries. It's represented best by Fashoda, at the end of the 1890s, where a British force stumbles into a French force in the middle of Sudan and they say nasty things to each other, finally toast each other with what drinks they had brought along and their countries almost go to war, because the flag would be tarnished by losing out to the craven reptiles that you just stumbled into in the Sudan. The New Imperialism is one of the fundamental causes of World War I, period. That is the biggest reason. Now, don't get rid of the gold interpretation completely, because obviously as Britain and Germany become huge economic rivals, big economic rivals, as the Germans are not only nipping at the heels of the city, British industrial production and British naval production, but passing them in things like chemistry, and production of steel, and the production of big battleships. All this stuff runs together. Your victory is your craven reptile opponent's loss. That's the way they viewed it. Most people, I'll talk about this on Wednesday. It's fun to talk about, sad but also fun. Most people in the 1890s thought that the next war would involve France and Britain. They'll be fighting again and their rivals here and there. 
Or they thought that maybe the British and the Russians would fight because they're rivals in what was called the "Great Game" for north of India, and Afghanistan, and all of that. Basically, glory and the great power rivalries is the biggest reason that Germany gets into the imperial game, for example. Bismarck--it's the famous Bismarck story--a really awful man. But then an imperial lobby comes racing along and says, "Look, Herr Chancellor, we really need to have the troops go and protect our merchants." People like the sort of freelance guy, Karl Peters. He said at one point--he slams down a map of Europe on the table and he says--"That's my map of Africa. Here we are and we're surrounded by Russia and France." But toward the end of his career it was completely different. He's backing up German merchants with expeditionary forces. Plant the flag and then you'd better defend it. The big issue there is rivalry with France and with Russia. Bismarck says, "Geez, if we can get the French interested in all these colonies in Africa, then they won't be dreaming of re-conquering Alsace and much of Lorraine." At the end he says, "Well, we'd better be out there, too." And they're all out there. As some wag once puts it, Italy gets into the game, too, with Libya and Ethiopia, with "a huge appetite and bad teeth," as someone once put it. Of course, they get defeated in the battle in 1896. Then they will pay them back with poison gas and cascades of bombs in the 1930s, and just destroy everybody and kill them all, if they can, to pay them back for their defeat in 1896. I am eventually going to talk about the culture of imperialism and give you the example, which I find telling, of Robert Baden-Powell and the origins of the Boy Scouts. You didn't associate the Boy Scouts with imperialism, but you will in a minute. 
First, let me just say that this is not some sort of '70s radical guy saying--there he goes again--"it's really nasty to be slaughtering hundreds of thousands of people." But it is nasty, and that's what they did. That cannot be forgotten. It doesn't just start with the famous case of the Germans in Southwest Africa. More about that in a minute. Bugeaud, the name is quite forgettable, but he's a general from Limoges. The French conquer Algiers anyway in 1830 as a political diversion. Gradually they expand their control over Algeria. Algeria becomes a colony. It becomes, from the point of view of the French, an integral part of France, in a different way than Tunisia, Vietnam, Morocco, and other places, even though it's not part of metropolitan France. Bugeaud and his successors kill about 850,000 people during the campaign, very unequal battles. Bugeaud comes up with the idea of simply putting men, women, and children into these huge caves and caverns, and then simply throwing bombs in and so they all die. He did that over and over again. It's easy to say, "Well, the demons of the twentieth century, they come in the twentieth century, don't they?" But, as I suggested before in terms of the Commune, this stuff is out there in the nineteenth century as well, and so racist ideology is out there in the nineteenth century. There's no doubt about it. It wasn't that way in every place, but the French experience was pretty terrible. In the very well-documented case of what happened in what now is Congo and Zaire, which were sort of the private colony of the king of Belgium, the atrocities there are well-known. One could go on all day talking about these atrocities. The most well-known, certainly, and most well-documented, and, in a way because of what comes later in the twentieth century, is that of the conquest and indeed genocide. 
Here I'm borrowing an appropriate term, I think, in this case--that's not a term you throw around very loosely--of my friend and colleague, Ben Kiernan, whom some of you know, in his big book on genocide, which Yale Press published recently. They begin conquering Southwest Africa in 1885. So, Bismarck still has a few years to go. In their way, as they would see it, among other people were the Herero, H-E-R-E-R-O, a Bantu group of about 75,000 cattle herders who were in the center of what would become the German colonial territory. Again, European powers are putting things like borders there, boundaries, and that has nothing to do with the way that, particularly nomadic people--they don't have any sense of borders. Mali, where I've been because my daughter was just studying the Touareg in northern Mali, north of Timbuktu. The Touareg are a people who had no sense of borders. There were Touareg across other borders, too. Borders are something that were artificially constructed by these powers to say, "Here. Our empire goes there and yours doesn't start until there." And, so, as these people rise up to defend their own territory, they are systematically massacred. They basically first decide to crush the uprising at all costs. There is in 1904 an extermination order. That's literally the translation from the German. The proclamation of the local military commander is that, "The Herero people must leave this land. If they don't I will force them to do so by using the great gun," that is artillery. "Within the German border," that is defined as now German, "every male Herero armed or unarmed, with or without cattle, will be shot to death. I shall no longer receive women or children," that is spare them, "but will drive them back to their people or have them shot. These are my words to the Herero people." Now, I couldn't make this up. It's easy to say how terrible this is, but it is terrible. 
It was part of the enterprise and has remained part of the imperialist enterprise. It wasn't the goal of every imperialist to exterminate the people who were there, but if they got in the way, they were killed in very unequal fighting. In India there were various cases of soldiers complaining it was too easy shooting down the rebels because it was just like hunting. It was a very British, upper-class analogy. It was just like hunting. Basically what they do if they don't shoot them they chase them out into the desert and then they cement over the wells in the oases so they die. Basically they exterminate about two-thirds of the people. There's a very excellent book on this written by a former graduate student here many moons ago called Isabel Hull that was published four or five years ago. The origins of this, and again there are people now writing and saying, "Well, it wasn't that bad. They brought trains to India, ended the huge disparities in prices." Certainly lots of good things did come. But looming in the background were these massacres. The edition that I'm working on now, that I'm just finishing of the book that you're kindly reading, there's a whole recent spate of interesting literature on the end of the British empire in Kenya in the 1950s. History of the Hanged is one. There's another one by a woman called Caroline Elkins at Harvard. The title escapes me at the moment, but these are just fantastic, just gripping, just chilling accounts of essentially the mass murder, incarceration, and murder, and shooting, under the guise of "trying to escape" and all of this of hundreds of thousands of people. This was hidden from the British public, just systematically by the government. It's a long story and it's one that we have to wrestle with. Having said that, I want now to talk about the culture of imperialism--this is sort of shifting gears rather rapidly--and talk about Robert Baden-Powell and the origins of the Boy Scouts. 
Again, because I was once asked to leave the Boy Scouts in Portland, Oregon because I was of no use and never accumulated a single badge, this is not the origins of this lecture. There's lots of stuff written on Baden-Powell. He's an easy person to mock. He's an easy person, I suppose, to have some sort of respect for, too, in a way, depending on your point of view. I'm not dissing the Boy Scouts. Once I had people running up. There was a woman who came up who was a Girl Scout. She says, "Oh, this is so cruel what you're saying about scouting. It's not like that." I know it's not like that now. But having had some relative who had the very strange idea of giving me, of all people, Boys' Life as a birthday present, and remembering reading that and all those kinds of over-the-top Americana publications, I suppose I'm reacting a little bit against that, too. But there is a point to all of this, so the rest of this is about Baden-Powell and the Boy Scouts. Robert Baden-Powell was a soldier. He came up with the idea of scouting as a way of preparing British youth for imperialism and for the next war. The origins of the Scouts, in terms of its timing, that is the first decade of the twentieth century, has to be seen in terms of these international conflicts, these international great power rivalries with which we began. It comes at the time of the Moroccan Affairs, the first Moroccan Affair and the second Moroccan Affair in 1905 and 1911, when it seems like the French and the Germans will go to war against each other and they will bring in the other great powers. More about that. Robert Baden-Powell was a professional soldier. When he went back to England he thought that British youth were cigarette-smoking, heavy-drinking, flabby weaklings, whether they were the upper classes or, even worse in his view, the lower classes, because the latter were underfed and therefore smaller. 
He hated the Oxbridge common rooms; he said, "With its town life, buses, hot and cold water laid on, everything is done for you." The British working classes, like the upper classes, tended to drink a lot. He was sure that there'd be a war fought in the lifetime of these same people, and he came to the idea of scouting. Now, America has a role in all of this. This country has always believed in the frontier. Those of you who had Glenda's course in American history and other people know about the Turner Thesis, about always you can expand to the west. You can diffuse your social tensions in the east by giving people access to land further on and get rid of the Indians in the way, etc., etc. Now, we have friends in France who still read The Last of the Mohicans. There's just a fascination with the American frontier. This is extremely important in the end of the nineteenth century in Europe. Baden-Powell borrows the uniform of the Boy Scouts from the frontier uniform as he imagined it in America--the cowboy hat, the flannel shirt, the neckerchief, the short pants. He said, "The shape of a face gives a good guide to a man's character," this sort of firm face. He loved that. Square jaw, compared to working-class "loafers" and "shirkers," as he called them. It's this cult of masculinity. This comes at a time, one must say, when you've got a very aggressive movement for female suffrage by the suffragettes who want the right of women to vote in Britain, one of whom throws herself in front of a horse at a horse race, and sacrifices her life to make a point. It comes at the time of the famous Oscar Wilde trial. Oscar Wilde, of course, was gay. There was a sense that the virility of English manhood was being tested by women--Baden-Powell did not like women, he referred to women as "silly women," "silly girls"--and by gays, whom he saw as effeminate and therefore not really British, and who wouldn't be there. What good could they do in the next war? 
Also it's a time where in Germany particularly, but not only in Germany, men were dueling. There's sort of that test of masculinity. If you're lucky you'll end up with a dueling scar and not actually get killed. Most of them don't get killed. But they're dueling all over the place. They're dueling in the woods outside of Paris. They're dueling almost everywhere in Germany. They're dueling still in Britain. That sort of reaffirmation--Bob Nye and lots of other people, all sorts of people, have written on this. Ute Frevert, my colleague, is now gone from Yale, unfortunately. This is part of the reaffirmation of virility. The tendency is to say, looking back, "Well, they're taking it out on animals, blowing the hell out of them and indigenous people, etc., etc." So, scouting for boys takes off. It spreads from Britain to Australia to Canada to New Zealand to India to Chile to Argentina to Brazil. In 1910 it starts in the United States. In 1910, Baden-Powell resigns from the command of a division of the Territorial Army to spend the rest of his life involved in scouting. Again, what I'm saying is that this involves grafting on the idea of the American frontier. You're going to create your new frontier. Your new frontier is going to be in Africa. Your new frontier is going to be in Afghanistan. You create your frontiers, and then you hold the frontiers and you train these boys, these young men to hold the colonial frontier. He finds sponsorship in the Daily Telegraph, which was a big conservative newspaper. All of the big newspapers are conservative. I think I sent this around--by 1909 there are 60,000 scouts in Britain. In 1910 there are 107,000. In 1913, 152,000, and in 1917, 194,000. Why was there such a short gap? Not that much of a leap between 1913 and 1917? Because they're dead. They get killed in the war. They're going off to fight. Scouting is finished rather early. 
You've got these big rallies, enormous in London, and scouts coming from all over the empire. Girl Scouts are created in 1914, but Baden-Powell didn't care much about that. Now, there had been groups of frontier-inspired youth organizations that existed in Scotland, particularly. They're called things like The Sons of Daniel Boone, The Woodcraft Indians, The Boys' Brigade in Glasgow in 1883. Some were church sponsored. Again, this is the sort of moralization of the working classes. You get them into groups. They won't smoke cigarettes, which is a good thing not to do. They won't drink. They won't hang out with the wrong people. They will go to work and become cogs in Britain's industrial empire. They, too, can look at maps of Africa being increasingly painted red, which was the color of the empire. So, nature remains a part of this. Again, to repeat, the cult of the American frontiersmen, let me say a little bit more about that, is part of this. The idea of the frontiersmen, the buckskin man. Rudyard Kipling is not my kind of poet, but anyway, he expresses often this idea:

There's something hidden; go and find it--

(what's happened? I must have pushed something. I pushed something. It doesn't matter. I'm not easily alarmed)

Go and find it. Go and look behind the ranges,
Something behind the ranges is lost and waiting for you. Go!

Baden-Powell described the frontiersman whose manhood is strong and rich, of a pure life. Now, his own predilection is that for him a life would not involve "silly women," as he put it. The other idea, and this is not at all, I'm not saying anything about his sexuality, but the reality of the situation is that he preferred the company of young men to anyone else. This is involved in the way he lived his life. The idea is that the free man must earn independence with his gun. This is, again, part of this old American western idea, but you apply it to indigenous people. Now, you have aggressive models coming from the American West. 
William "Wild Bill" Cody, from my wife's state of Nebraska, had killed thousands of buffalo. He had dueled. The duels that they do with the German dueling fraternities, you've got the equivalent in Dodge City, and all of this, where you're dueling, and the classic kind of Clint Eastwood western. He'd killed thousands of buffalo, dueled, and he's a killer and scalper of Indians. He was his own publicist and he had enormous influence in Britain. At the Battle of Little Big Horn in 1876, he kills Indian Chief Yellow Hand. In 1887 he crosses the Atlantic. He goes to London, Paris, and Berlin. Queen Victoria came out of her extended period of decades of mourning for her dear husband, Albert, to attend the Wild Bill Cody Show. She wants to go. And she's there with all the others. She hadn't been to an event like that in twenty-six years. The irony is that Wild Bill Cody runs these fake combats between the Indians and the cowboys in the equivalent of stadiums in Britain. One of the ironies of this about art and reality merging is that some of the people he brought across the Atlantic were Indians who'd actually fought in a battle against him in the Dakotas, and he hires them as extras and he takes them to Paris, to London, and to Berlin. They are a big, huge success. It's the Wild West program. At the same time in Canada, those of you who are Canadian know about the Mounted Police and all that business. The Mounted Police become a powerful, though somewhat tamer, more acceptable, more vanilla equivalent of that, of keeping order in Saskatoon and all of these places like that. I've actually been to Saskatoon. It's a pretty nice place. The idea of these mountain men--now the mountain men get uniforms. The mountain men are no longer sort of taking pot shots at people in Kentucky on the frontier or scalping Indians in the Dakotas. They're no longer sort of freelance scalpers. They're wearing the uniform of these countries and they're big-time imperialists.
That's really the point. Here's a verse, I can't remember where I got that. Our mission is to plant the right of British freedom here. Restrain the lawless savages and protect the pioneer. (It rhymes.) And ‘tis a proud and daring trust to hold these vast domains, But with 300 mountain man You've got to kind of make it rhyme a little bit--mountain man, pronounce it as if you were a mountain man. But anyway, and that's a little harder to do if you have an Oxbridge accent, which I clearly don't. Also, this is part of the whole-- I don't have time to do it now. I spent a fair amount of time in Australia, but it's also part of the idea of being Australian, too. Anyway, that's another thing. Kipling's Lost Legion is really just awful, but here we go: There is a legion that was never listed That carries no colors or crest But split in a thousand detachments Is breaking the road to the rest (I'm supposed to be more respectful when I do this, but anyway…) Our fathers, they left us their blessing They taught us and groomed us and crammed But we're shaking the clubs and the messes To go and find out and be damned, dear boys To go and get shot and be damned, dear boys. Virility, adventure, loyalty--loyalty to boys, loyalty to young men, and brotherhood, and so it starts like that. Can I bear to go on? Out from the woods of the Great Northwest Under the austral sky From the south and the north, they'll come forth At the sound of the mother's cry And each at his post where the danger is most Will stand as a sentry then Britishers all to stand or to fall The Empire's frontiersmen. Now, Baden-Powell is his own best publicist, even better than Wild Bill Cody had been. He helps plant newspaper articles about him. Here's one from 1900. "It has been suggested that Major-General Baden-Powell's unrivaled skill as a cavalry scout forms a quite remarkable inheritance of heredity that he's descended from Pocahontas, the American Indian princess," which he was clearly not.
But how does he become so popular? How do these God-awful poems that I've just read, how do they become popular? They become popular because they become the stuff of boys' literature of the culture of imperialism. They were the British equivalents of Boys' Life. I'm not knocking Boys' Life. I don't know if that existed. I strongly preferred Sports Illustrated and The Sporting News to that. They become the stuff that people are reading as they're looking at these maps of Africa gradually becoming painted British. Now, how did he become well known? Well, because he's an imperialist. He's fighting. In 1896 he fought in the Matabele War, which I sent around, not the war but the name, a skirmish against about 1,000 indigenous fighters. It's at that point where he starts coming up with his own freelance uniform that would become that of the Boy Scouts. In military units there were people that were scouts--again, the idea of tracking. You're tracking, you're seeing where the Indians have been. The Indians can see where you've been now. You learn how they do it. How the blades of grass turn and all of that. I couldn't scout anything. You see how they do it. They become known as scouts, which is sort of an Americanization of a term. This is what he likes to do. Teddy Roosevelt, there's a good example of that. Talk about that kind of narcissism of the colonial imagination and the imperial imagination, "Rough, rough, we're the stuff. We want to fight and we can't get enough." Whoopie! That's the song of the Rough Riders from the Cuban-American War of Teddy Roosevelt, so it's part of the hysteria of the U.S. Spanish War. But again, it's the frontier spirit. Baden-Powell helps create his own myth, which I've said. He drew pictures of the people that he had allegedly shot. These pictures end up being in the tabloid newspapers. Again, the role of the tabloids in spreading all this stuff is terribly important. I said before there's twenty-one daily newspapers in Paris at the time.
I don't remember how many there are in Britain, but there are an awful lot of them. He sketched a last stand of eight people, supposedly until they get rescued, against the indigenous people. He claimed that the Zulus, against whom the British warred, called him--this sounds unlikely--"the man, he who likes to lie down to shoot." The Ashanti called him, in awe, this was his term for himself, "he of the big hat." And in this war in 1896, his opponents, in awe again, called him "the wolf," "the beast that does not sleep but sneaks around at night." So, he became "the wolf who never sleeps." There's a slight problem with this invention of a term to describe himself as "the wolf who sneaks around": there aren't any wolves in Africa. There are not any wolves at all. He made it up and made it up rather badly, having taken it out of some book somewhere else. But that doesn't stop the tabloids from referring to him as "the wolf who never sleeps." The Boers understand that in the Boer War, that is the Dutch Afrikaner opponents, who by the way--the British created the term "concentration camp." Again, I'm not looking back from history. They're separating children and women from the men, and trying to keep the men from receiving provisioning out in the bush. They create the term "concentration camp" in the Boer War. The Boers actually lived there and had for a long time, though they're not an indigenous people. They know there aren't any wolves there. So, they start mocking Baden-Powell. But "he of the big hat" did not slow down at all. So, in 1899, he has the good luck to be at the siege of Mafeking, where they are surrounded by a force, but not a terribly aggressive force. Again, he draws pictures of people on duty and all of that, night duty. And the town, situated on the railway line that runs between the Cape and Rhodesia, had resisted for 217 days. This was a big takeoff for his reputation. Just the name Baden-Powell, the initials B.P.
become identified with British imperialism. B.P.: "He loves the night and after his return from the hollows of the veldt, where he has kept so many anxious vigils, he lies awake hour after hour upon his camp mattress in the veranda tracing out in his mind the various means and agencies by which he can forestall the Boer move, which unknown to them he has personally already watched. He is the wolf who never sleeps." Now, B.P., those initials also become British Pluck, the idea that the British are mudders. This is kind of the image that would come out of the very heroic Battle of Britain under the bombs of the German Luftwaffe in World War II. British Pluck; also B.P., British Peerage, the British Peers, the upper classes. He becomes identified with all of this, the wolf who never sleeps. His advice to his own garrison is to "sit tight and shoot straight. All is well here," he writes. They were able to get messages out to the newspapers that were covering this. Now, again, the British newspapers covered another siege which ends rather badly, which is at Khartoum, with the death of Charles Chinese Gordon. He was called Chinese Gordon because he slaughtered the Chinese, and he gets his at Khartoum. Of course, among schoolchildren there's an enormous, enormous outpouring of tears over the death of this man. The newspapers, because of these modern techniques, can follow all of this stuff, pretty much how the siege is going, etc., etc. So, B.P., the prince of good fellows, the prince of scouts, here we go. They emphasize his youth. He's forty-three but he's youthful. He's cheerful. He's always whistling and telling stories, even when things are going bad. He loves pranks, childish pranks. This is from some of the newspapers. "Life was a game, but you have to play it honorably." It was a game that silly women, as he called them, could not play. He becomes known again through sports; mass sports are starting just at this time.
The Olympics are starting just at this time. Again, there's a reassertion of virility in these Olympics. He's called "the gallant goalkeeper," "the goaltender of Mafeking." So, a sports analogy becomes part again of this imperial thrust. They print patriotic letters to him, which can be signed and can be sent. You can send a postcard. You could send a postcard home. Your parents have left after parents' weekend, if they came. You can send them the following postcard: Dear Parents, Dear Mom and Dad, We have shouted "Rule Britannia!" We have sung God Save the Queen. We have toasted gallant Baden a half a score. We have sent our best respects to Plucky Mafeking and we have hoisted flags and bunting galore. With a wild and frenzied madness born of joy the empire cheers, while we Britishers rejoice through the land. In this hour of jubilation I am sending you a line with the wish that I could warmly shake your hand. Yours exultantly. Then you sign your own name to it. So, scouting, as someone said, I can't remember whom, was an attempt to make these "values" of Mafeking permanent and to trace them on the map of these countries of these peoples all over the world. A 1909 newspaper said: It may be that he is not a great soldier of the sort of which the Napoleons, or the Moltkes, or the Kitcheners are made. He is the frontiersman, the born leader of irregulars, a maverick, and the empire has need of such. Furthermore, he has the knack of seizing the imagination of boys and a deep sympathy with them. He is doing his day's work for the empire by training a number of manly little fellows to keep their wits about them and their eyes skinned. We shall profit another day in a much greater affair than Mafeking. That, of course, is preparing for the war against those other peoples who might contest British domination, not the indigenous peoples, but the other powers in Africa. So, be prepared, B.P., the same thing, the same initials. Anybody here a scout? I had to memorize that stuff.
I didn't get a single badge, but I was a scout. Be prepared. You're supposed to do that. The jamborees. He creates these jamborees. Also, at the same time, and I don't have time to talk about this, but this is the same time when Arthur Conan Doyle, the idea of sleuthing, but it was sort of an urban sleuthing for evildoers in London. It kind of merges with all of that. Of boys who risked their lives, he says, "I said to one of these boys on one occasion when he came through a rather heavy fire, ‘You will get hit one of these days riding about like that when shells are flying.' And he replied, ‘Sir, I pedal so quickly, they'll never catch me.' Those boys don't seem to mind the bullets one bit." Of course, millions of them would catch bullets that ultimately they minded. "I will do my best to God and the king. I will do my best to help others. Whatever it costs me. I know the scout law and I will obey it." Again, I am not knocking doing good things for people. Please do understand. But I'm just trying to place the origins of whatever you think of the Boy Scouts in the context of the culture of imperialism, because that's where it belongs and that's where it started. In August 1912 a boat capsized off the coast of Devon, I think. Nine boys from eleven to fourteen drowned. They were scouts. There was an enormous, enormous national funeral service in London; millions of people saw at least parts of it. This helped. Their deaths, and many more deaths would follow, helped tie together the idea of scouting with service to the nation. A magazine called The Captain--again, this is part of the culture of imperialism and of aggressive nationalism--had a troop of mobile scouts on bikes fitted with a rifle bucket and a clip to carry a carbine, a rifle. So, it shifts. The image of all of this shifts from Africa, where much of the fighting was already over, and indigenous people destroyed or pacified, to the European enemies against whom the next war would be fought.
There's a famous cartoon in the British magazine Punch which showed a Boy Scout complete in uniform being prepared, taking Mrs. Britannia, that is the image of Victoria who was dead, but the female image of the empire, by the arm. It says, "Fear not, grandma. No danger can befall you. I, after all, remember I am with you now." Boy Scouts played an enormous role in 1914 and in the subsequent years. "Goodbye, I'm off to war." There was a caricature in the newspaper as Boy Scouts joined up along with lots of other people who weren't scouts in the war. As you well know, they don't come back, or a lot of them don't come back. It's part of the mood of nationalism and of imperialism, of the New Imperialism. Those two things are tied together and the expectation, indeed in many cases, as in the case of Baden-Powell, joyous expectation. You could test your virility in a more meaningful combat than simply slaughtering indigenous people, or picking off Boers with greater numerical superiority. By the way, Robert Baden-Powell died in Kenya, in 1941, from which he had just sent his last patriotic message to the Boy Scouts, in what was a very different war. Thank you. I'll see you on Wednesday.
European Civilization, 1648-1945, with John Merriman
Lecture 23: Collaboration and Resistance in World War II
Prof: Okay, I want today to talk about collaboration, but above all, resistance in Europe during World War II. I'll talk mostly about France, because that's where so much has been written, and also because France coming to grips with the Vichy past was not an evident thing. It was something that took a long time. There was a process of sort of collective and official repression about what had happened. I want to talk about that. Again, histories have their histories. I've been around here long enough that I can remember all this happening. Not the war, obviously, thank you, but France coming to grips with its past. I want to talk about that. We haven't talked about France in a long time. I'm going to talk about that. But first let me just say a couple things. Other countries had their resistors as well. It was obviously--the most successful case of resistance was that of ex-Yugoslavia. Well before the end of the war, Marshal Tito and his partisans, taking advantage of the mountains of ex-Yugoslavia, were able to pin down entire German divisions, and with weapons parachuted in by the allies, and with entire moving hospitals, were able to launch the most effective resistance, arguably, in Europe. Of course, the case of the Soviet Union, twenty-five million people died. Twenty-five million people died in World War II, most of them in the war, but lots also in Stalin's camps. A lot of partisans lost their lives picking off German soldiers, in the case of Poland. In the third edition there will be more on this. They got scarcely a mention. The Poles had a home army, as they called it, of about 300,000 people by the end of the war. The Warsaw ghetto rose up, and was crushed with 12,000 deaths and with thousands of other people sent away to the camps in 1943; then the Warsaw uprising.
One of the reasons that Warsaw, where I'll be on Friday, and where I go fairly often--there was nothing left, because the uprising was crushed, and thousands and thousands of people lost their lives. I just reviewed a book actually for the Boston Globe called Ghettostadt, which is an interesting book by a man called Gordon Horwitz, who teaches in Illinois. It's about the Lodz ghetto. It's a tragic, all-too-familiar tale. It doesn't have anything to do with resistance, because it was impossible, but it was about the German ideas of creating this Aryan city in Lodz, which was a big industrial town, and still is, in Poland. Of course, what they did is they put all the Jews into the ghetto, which was several kilometers square, and put them to work making uniforms, and ear muffs, and all sorts of things for the German troops. In the story, the most horrific aspect of it is that the people in the ghetto, they don't really know. There's all these rumors about what's happening outside. Of course, what's happening is the killing fields, and three million Jews disappear in Poland in World War II, three million, three million. Some people, before they are killed by the Nazis, are forced to write cheery postcards saying, "All is well here in these camps. Everything is just delightful." Then they're executed. Gradually, it's about the mounting horror of the people who live there. They see clothes stacked up outside the ghetto that they could recognize as having been on people they knew, who had been shipped away to the camps. The whole thing is so horrendous. It lives with us today. Obviously, it was easier to resist in places in which you could hide. When I talk about France, the reason--and I sent this term around--you called the French resistors the maquisards, or even just les maquis, is because they were able to hide behind brush called maquis. More about this later.
So, resistance in Belgium, which is flat country except for the Ardennes, was very, very difficult. There's hardly a hill that's more than a hump in Denmark, but yet it was the Danes in Copenhagen who saved the Jews, who got them out, with the help of a German officer, and were able to get them just across the very narrow straits to Malmo in Sweden. Other countries had their resistances as well. All those can't be covered now in this short amount of time--why am I supposed to have this glass here, actually? It has a label on it. I'm not supposed to have this glass here at all--I guess what I'll do is I'm going to talk about France and about the resistance there. Now, until about 1969, a year that I can remember, Altamont, the Mets win the series, but more important, protests against the war in the United States and mounting dissatisfaction with United States foreign policy. I can remember that very, very well. But until 1969, in France the official line was virtually everybody resisted, a few elites, a few notables, rural elites collaborated, period. The official line was one that was very closely tied to Gaullism. Because Charles de Gaulle, the big guy, his voice crackles over the airwaves on June 18, 1940. He calls on France to resist. Part of the myth that everybody resisted, or almost everybody, and few people collaborated, had to do with the official Gaullist policy, which is that Gaullists resisted. Charles de Gaulle, this mystical body of Charles de Gaulle, the body being greater than the sum of all its parts, led France, which essentially liberated itself. Of course, that's simply not true. Also, what that forgot about was the fact that the communists were enormously important in the resistance. More about that in a while. There was a film made, a documentary, I think in about 1953. I've never actually seen it. It had to do with the Jews. It had to do with what happened to the Jews in France.
It was conveniently forgotten that the Jews in Paris who were arrested, in the Marais, in the Jewish section of Paris, and in other places, too, were arrested by the French police. The Germans would have been happy to do it, but they didn't need to, because the French police were so eager to do that. In this film, Jews and other people, communists and other people who were sent away, were packed off to a place called Drancy, which is, if you've ever taken the RER in from the airport or to the Roissy airport in Paris, you've gone through Drancy. That was a transit camp. These transit camps, rather like Malines or Mechelen in Belgium, or Westerbork in the Netherlands quite near the German border, were run by French, Belgians, and Dutch. They were not run by the Nazis. The Nazis would have been happy to do it, but the local populations, the local collaborators were doing that. In this film made in 1953, in the original footage, you see a French gendarme who's guarding the Jews at Drancy. In the documentary that was finally released, somebody has reached in and plucked him from the film. He simply disappears. It's doctored. The French gendarme, with his French gendarme hat, isn't in the film, because the myth was that the Jews were taken away by the Germans, and that communist resistors were shot by the Germans, and the gypsies and gay people were taken away by the Germans, arrested by the Germans, and that France resisted and didn't collaborate. Now, two events--let me also tell you two stories. I hope I didn't say this the first day when I was trying to get you interested in learning about World War II. I worked in a place called Tulle when I was doing my research for my dissertation, long ago, and all that. I didn't have any money, and I'd go down and buy an ice cream cone for lunch every day. I started talking to this guy and I didn't speak French very well then. But I knew that there were a lot of people hung there.
Ninety-nine men were hung. The Germans left. The maquis, the resistors, were very active there. André Malraux, the great writer, was active in a place called Argentat near there. One day the Germans all left, and then everybody came out and started partying, and the Germans came back. They hung ninety-nine men from poles in Tulle. One day I was there and this guy was telling me this story about how he had hidden. He had gone up--it's a real windy town in a valley--he'd gone up and hidden. You've got a house here and you've got room under the house. He was able to hide and escape. Because he was sixteen, he would have been hung. This woman came up and I was eating my ice cream cone. She ordered an ice cream cone. The guy suddenly said, "Madame Dupont, you remember that day, don't you?" She said, "I sure do. They hung my husband from that pole." How every day you could live with that and talk about that as if you were discussing where you had bought something at a sale. But the next step to thinking about that is who in France made all those things possible? Who was helping the Germans do that? The answer is that lots of people collaborated. Lots of people got what they wanted on a platter because of the Nazi victory. The same people who were shrieking "Better Hitler than Blum!" in 1936 got exactly what they wanted. Marshal Pétain, who was a rabid anti-Semite, his national revolution was essentially aimed to do in France what Hitler had done in Germany, and what other petty despots had done in other places, some not so petty, like Hitler. They got what they wanted. So, how did the official line get shaken by reality? How did this happen? Second story. I have a friend who is still a lawyer in Paris. I've known him for a long, long, long time. He was too young to remember, but his older brother, who's dead now, remembered when the Germans came to his house in the suburbs, a place called Le Perreux-sur-Marne, took away the father, who was a Greek Jew. 
Of course, he was taken away and was killed. He ended up in one of the camps. They don't know what happened to him. Now, the Germans didn't just come to that house by chance. The guy was denounced as a Jew by the policeman in that town. After the war, every Saturday when this lady, the widow, went to the market, she walked by and saw this policeman directing traffic, the same guy. Nothing ever happened to him. Nothing ever happened to him. So, how did the official version get eliminated by historical reality? Histories have their histories. How did that happen? There are two events that are kind of key. They're both in what I sent around. One is the movie, The Sorrow and the Pity, which I mentioned in here before, which was described as a two six-pack movie in the old days when I used to show it here, because it lasts four and a half hours. It was a documentary made for French television by Marcel Ophüls. It was never shown on French television until 1981. Why? Because it was a documentary in which collaborators--there's sort of a local notable called Christian de la Mazière, who describes, in his smoking jacket--his fancy jacket--in the château, why he fought alongside the Nazis on the Eastern Front in the Waffen SS. It's about collaboration and resistance, tales of true heroism but also of repressed memory. There's a great scene in which they're walking through the school. They ask about the teacher, a teacher who disappeared. They don't even remember it, the guys that are being interviewed. They've conveniently forgotten. So, The Sorrow and the Pity was never shown on French TV until 1981. It's a fantastic thing. It's too long, and I should have never shown it. I started showing it twice in sections. Also, it's kind of dubbed and it's very hard to understand either in French or in English. It's a monument. It's a monument not just because it's a driving, forceful documentary, but it helped France rediscover its past. Fabulous.
It talks about the role of the Communist Party. Again, I'm not a communist, but I'm telling you, the Communist Party had an enormous role in the resistance. Most of it's about Clermont-Ferrand, the area. It's based on the Auvergne town of Clermont-Ferrand. There's this great scene where these two peasants out in the countryside say, "Nous sommes rouges, comme le vin," "We're red like the wine we're drinking." It's a fabulous, fabulous, fabulous thing. Of course, there's the inevitable scene at the end where women who were called, indelicately, "horizontal collaborators," had their heads shaved and were being paraded through the town. That happened all over the place. Les tondues is what you called them in French. It doesn't matter what you call it in French. In the end, there's Maurice Chevalier. Your grandparents will know who Maurice Chevalier was, because he kind of represented, in the American imagination, what France was. He was a crooner. He was a singer who was born in Ménilmontant, which is on a proletarian edge of Paris, right near where Edith Piaf, the singer, was, whom your grandparents would have heard of also, people way before my time. But at the end of the movie they have him and he's wearing his little crooning suit and he says in English, "Well, you know there are zees rumors that I was singing for zee Germans. But I just want to tell you that I was only singing for zee boys," that is, for the prisoners of war. He was dealing with his own past as well. François Mitterrand, president of France for fourteen years beginning in 1981, when he was inaugurated, the cameras follow him through the Panthéon. They follow him by where the heart or some part of Jean Jaurès is left. But François Mitterrand, when he was dying, he came to grips with his own past. When he was dying, he too, like France, said, "There was a moment when I was not a resistor"--though he did become a resistor.
But there was a moment that he had celebrated Vichy, and somebody had found a picture of him at a right-wing rally in 1936 or 1937, of which there were many in Paris. He, too, came to grips with his past. This all started, the history of history started in the 1970s. The second event was a book published by my good friend Robert Paxton. He's about ten years older than me, probably more than that. He wrote a book called Vichy France, published in 1972. For Vichy France he could not use French archives, because they weren't available. There's a fifty-year rule in French archives. But there's also a side point--talking about the mutinies, those archives weren't available even well after fifty years had passed, after the mutinies in World War I. So, he used captured German documents, not French documents because they weren't available to him. What he did in this book was to show what Vichy and Pétain's national revolution thought they were doing, and why many, many people collaborated. There's a more recent book by a guy called Philippe Burrin that I use in the seminar on Vichy that I do from time to time, a junior seminar, which explores more deeply, using these archives that are now available, the whole question of collaboration. But the point that Paxton made is that he demolished the shield argument, the argument that Pétain and the national revolution had saved the French State, and that they were a shield. If it wasn't for Vichy, worse things would have happened. When Maurice Papon, P-A-P-O-N, went on trial at over eighty years of age for having signed away the lives of many Jews in Bordeaux, where he worked in the prefecture, he made the same argument. He said, "I was a good bureaucrat. My superiors liked me. If it hadn't been for me, more Jews would have been shipped away to Drancy" or, more directly, to the camps. He was condemned. He died a couple years ago. He was under house arrest. The most amazing part of the whole trial was he managed to escape at age eighty.
People drove him to the Swiss border and they found him in a fancy Swiss restaurant and brought him back. But Papon had gone on to a very distinguished career as a bureaucrat in the Fourth and Fifth republics, as did a lot of other salauds, a lot of other bastards, such as René Bousquet, who was a prefect of police. The argument was the shield argument. "If it wasn't for us, things would have been worse." But as Paxton wrote very, very memorably, Pétain might have provided continuity for the French state, but not for the French nation. The French nation was and is, I hope and I'm proud to say, based on liberty, fraternity, equality. They take those off the coins and it becomes "family, country, work." It used to be when I was there when I was a kid, you could still see these little coins from Vichy that they transformed into centimes. Paxton's book--I saw him once when I was in Brussels. I saw him on a TV show, my wife and I did. It was one of those typical French shows, where it will be about World War II and they'll have somebody who remembered the war, somebody who was in the war, somebody who didn't even know what was going on, and all this stuff, and they interview them. Some guy got up, this sort of right-wing guy, and there were protests against Paxton's presence by skinheads. They got up and said, "Mr. Paxton, what could you possibly know about the war? You were only twelve years old during the war?" But Paxton's book became--this was an important part of the history of history. When he was introduced at the Sorbonne, he was introduced by a historian called Jean-Pierre Azéma. When he introduced him, he said, "Monsieur Paxton, dans un certain sens, vous êtes la conscience de la France," "In a certain sense you, Paxton, are the conscience of France." These two events are important in the emergence of what the historian Henry Rousso calls the "Vichy Syndrome."
Vichy was conveniently forgotten, because of Gaullism or because of not wanting to remember the bad things that had happened, the collaborators, the eager anti-Semites. Now, since the early 1970s, people are obsessed with Vichy. There's all sorts of good work that's been done on Vichy, and the whole period of resistance and collaboration. Paxton estimated in that book that two percent of the French population resisted. My friend John Sweets, who did a book called Choices in Vichy France, a great title, looked at Clermont-Ferrand, because that was where the movie The Sorrow and the Pity was focused. He estimates, depending on how you define resistance--people that refused to get off the sidewalk when a German officer passed, or people that whistled at the documentaries, the German newsreels shown before the movie in the theater--that something like sixteen or eighteen percent of the population resisted. It's a more charitable definition of resistance. The fact is, and I won't talk too much more about this, but the collaboration was widespread. It was not simply an elite. The elites were more apt to collaborate earlier in the war. Later in the war the kinds of people who joined the militia, which formed in January 1943, which was the French equivalent of the Gestapo, tended to be sort of down and out. They were the kinds of people who in Germany joined the SS, many of them in the 1920s, who saw it as a form of social mobility. There's a really good film called Lacombe Lucien, that I haven't seen in years, about somebody who--between his ears there wasn't very much. The resistance doesn't want him because he's just kind of an idiot who doesn't believe in anything. But the militia's very happy to have him, and it's about what happens to him in the southwest of France. During the Papon trial, which was maybe about eight years ago, or something like that, there was one time they interviewed a German officer who was still alive.
They said, "Look, what are your memories of Papon and the militia?" He said, "If we got a gar, a guy, if we arrested a French guy and we rather liked him, we wouldn't turn him over to the militia, because they would torture him so hideously." Of course, the Germans were capable of and did all over the place torture people hideously, no doubt about that. But the militia were generally bad, bad, bad guys. You saw this in Lacombe Lucien a little bit. That restaurant scene is so crucial in Lacombe Lucien; it is really the essence of that film. Collaborators were everywhere. At the end of the war probably about 25,000 people were executed after very short trials or simply gunned down. Near where we live in Ardèche, there was a priest in a village not too far away from us. He had Déat--I think it was him--who was a real fascist, to lunch. After the war, they put him up against his own church and gunned him down. I have an acquaintance from a long time ago who worked in the archives in Limoges, where I spent a lot of time. He was a young man then, and was a refugee from Lorraine. After the war everybody was celebrating. He lived in a place called Saint-Léonard-de-Noblat, which is near Limoges. They were all partying in this little town that's twelve kilometers away from Limoges. Somebody said, "Where's the gendarme who sold people down the river?" Someone said, "He's got an aunt in Limoges." So they left all the casks of wine that were left. They marched into Limoges, went to the aunt's house, got the guy, hauled him out, put him at the beginning of this procession, joyous but also a deadly serious procession, sort of an enraged charivari, and they got him back to where he had done great damage. They put him against the wall and prrrt. Then they went back to partying. There was lots of settling of scores. Sometimes not everybody who had their score settled deserved it.
There were cases of people who were misidentified, or simply there were rivalries, but lots of people got theirs. As for Marshal Pétain, what happened to Pétain, he was put on trial. He was an old, old man. They said, "You can't execute an old, old man. He's senile." He wasn't at all. But you can't execute an old man who was the hero of Verdun, can you? So, they put him in house confinement on an island. There were still people trying to get to the island, which is off the coast of Brittany, and bring back his bones to Verdun. That happened only about ten or twelve years ago. So, France--it took a lot longer than the kind of gunning people down and the trials that went on after the war for France to come to grips with its past. Now, resistance. What do we know about resistance? First of all, obviously it was easier to resist in the south than the north, because of the topography. One of the reasons why the Germans occupied so-called free France in November of 1942 was the fact that resistance had already started. The first active case of resistance with important consequences in Paris was at the Metro stop called Barbès-Rochechouart, which is now one of those places where the police, especially since Sarkozy was elected, carry out these rafles, roundups, where anyone of color is immediately asked for their ID and made to stand there and be humiliated by the police. Anyway, back then somebody gunned down a German officer, and gradually acts of resistance started. To repeat what I said before, the word maquis comes from a very thick brush that's in Corsica and in what they call in French the garrigue, also. It's a rocky part of the south. We have it around where we live, too. But it was just sort of a metaphor for places that you could hide. You had to be out there hiding. By 1944, by certainly the spring of 1944, and in many places earlier than that, the maquis ruled, at least at night. During the day they didn't.
Only twice in France did they foolishly try to take on German militarized units, big units. One was near Clermont-Ferrand and the other near the Vercors, which is near Grenoble, near the Alps. They were just wasted. They were just destroyed. In a village near us, somebody denounced people who were up in the hills, up in the Cévennes mountains. One day the motorized units come, and the parachutes come, and they're toast. That's the end of it. There were a bunch of slaughters down around where we live. People don't like to talk about what happened. I wanted to interview somebody who was a resistor in our village, even though our village isn't where there was a lot of resistance going on. I wanted to talk to him because I was writing a book about our village called Mémoires de pierres. He agreed to come over and talk about it, then he simply never showed up. People didn't like to talk about things like that. He never did want to discuss it. Obviously, more resistance was in the south than in the north, though it's forgotten often that there was a lot of resistance in Paris, that there was Jewish resistance in Paris, too. I met a guy in Australia eight years ago who made a lot of money making cakes, and then went back and got his Ph.D. in history working with a friend of mine, Peter McPhee. He wrote a book, published by Oxford, on the Jewish resistance in Paris--a guy called Jacques Adler, who is happily still around. But the most famous cases that you all know about are these resistors who were living off the land in Auvergne, or in the Cévennes mountains, or anywhere that you could hide. Often in French cities you can see plaques saying, "Resistors met here to organize resistance." That's what they did. They took big, big chances. When they, for example, blew up railroad tracks--there were so many communist resistors, and the Communist Party had a big hold on cheminots, the railroad workers.
When you go to railroad stations, Rouen, Lille, anywhere you go you see huge lists. Any railroad station you go to in France, huge lists of people who were killed during the war, either fighting in the resistance or shot because they were involved with sabotage. It doesn't take much to blow up a track. They did it all the time, down in the Rhone Valley constantly. There was this woman who was a big-time collaborator in the northern part of the Ardèche, where the awful Xavier Vallat came from, too. He was minister of Jewish affairs, totally unrepentant. That meant that he was shipping Jews away to be killed. That's what he was doing. She was a collaborator. One day she walked across the bridge to go shopping on the other side of the Rhone, and they blew her head away. But when you did that, you knew that they were going to pay you back so much. When Heydrich was assassinated by Czech resistors in 1942--I went to see where Heydrich was assassinated, near Prague--they took an entire village and killed everybody in the village, a place called Lidice; hundreds and hundreds of people were massacred. They were capable of doing anything. But the point is that in all these countries there were people who were very, very happy to see that happen. If you go to Budapest, when you see the shoes of all the people that were pushed, shot, or just thrown into the swirling water of the Danube, it was Hungarians pushing the Jews there. It was the Hungarians shipping the Jews off to Auschwitz. There were people in every place who were happy to see these things happen. The big lie in Germany is people didn't know. Of course, people knew. They knew. And they knew in France, too. They knew, absolutely. It fit into the xenophobia. It fit into Vichy's vision of what France would be, a vision in which the Catholic Church would have a much greater role.
There were two people executed for abortion during the time; a corporatist ethic, where like Mussolini's corporatism, you'd eliminate class struggle by having everybody in vertical organizations. Everybody's happy to be French, or happy to be Italian, or happy to be German, and you forget the fact that your employer makes ten times more than what you do. The kind of embrace of "peasantism," the resurgence of Joan of Arc. Joan of Arc became identified with Pétain, as saving France and all of that. It's all very familiar stuff. They had a plan and the national revolution was something they wanted to do. My good friend, Eric Jennings, who teaches in Toronto, wrote a fantastic book called Vichy in the Tropics. He looked at Guadeloupe, Indochina, and Madagascar. In those places, you couldn't say, "The Nazis made us do it," because there weren't any Nazis there. There were no German troops in those places. In Vietnam there were twenty-seven Jews, and they were desperately trying to find these twenty-seven Jews to send them to the death camps so far away, or to kill them themselves. The shield argument doesn't work. They collaborated. In the end, a lot of them got what they wanted. As far as the resistance goes, we've always focused on males, because the idea is you've got all these Spanish refugees from the civil war, from Franco, and you've got all these working-class people and you've got peasants, and there are all these males. Yes, they were there, but somebody had to darn their socks. Somebody had to provide them with food. Somebody had to carry messages. It's more than just one of these old movie things where the very young, attractive woman is carrying a message, and charming the guards so they don't frisk her or stop her at all. But that happened. You had to never be so stupid as to have a written message, but you were carrying verbal messages. In places you could hide food, such as where we live, or out near where we live. Somebody has to take these people food.
Another thing is the Catholic Church. This business about the pope helping Jews is just sheer nonsense, and nobody should ever be tricked by that. But the role of the Catholic Church in France was complicated. There was the archbishop of Toulouse, who was a very courageous guy who said, "Don't hurt anybody," who was really encouraging resistance implicitly. The archbishop of Albi, which is only an hour's drive, if that, from Toulouse, seemed an outright collaborator. In many places, Catholic clergy, who were opinion leaders in their villages along with the schoolteachers, were very, very important in helping give a moral kind of stamp to acts of resistance. There's a good book on the resistance by a guy called H.R. Kedward. He's got two books about the resistance, one about resistance in urban areas, particularly Lyons and Montpellier, and how people kind of got together. You had to be careful about who you talked to. You're waiting for a train, the train is late because it's the war, you're kind of feeling each other out. But you'd better be damn careful you're not talking to some denouncer. You're toast if you talk to the wrong person. But it's about how you can make resistance happen. It's about, for example, printing out just little typescript leaflets that say, "Do not come and hear the Berlin Philharmonic Orchestra when they play in Lyons." All you do is you get on a bus with those things, and you're in the back seat, and the bus turns the corner and you just let them go. The wind takes them. His other book that's really good is called In Search of the Maquis, which is about precisely what I'm talking about. It's about the resistance in the south, and looking at people who resisted. He has an interesting story. There were a lot of villages that were Protestant villages that suffered greatly during all the wars of religion.
A couple were noteworthy in that after the wars of religion, the king had these huge mission crosses, huge crosses of conquest, put up over villages which were essentially Protestant and remained Protestant villages. Did those signs help identify the Catholic Church, which had been the enemy of those Protestants in the old days, with Vichy? Quite possibly. But it's what John Sweets called "choices in Vichy France." Things happened that made you take a choice. What was one of those things? The most important was the STO, the service du travail obligatoire, which I wrote in the notes, the "obligatory work service." The deal was basically that if you agreed to work in a German factory, they would let prisoners of war go and all that. It didn't work out like that. These people were fools. Two people from our village went. One was dead drunk. Someone told him he was going to a party. So, he got on the bus. The next stop is the Rhineland. Of course, those people are wasted by the bombing, because the Allies are the masters of the air for the last couple years of the war. They systematically devastate those factories. A lot of those people in the STO that went were killed, left the earth. What the STO did is it made people take a choice. If you didn't show up on the 9th of February, or pick your date, 1944, you didn't go. If you're sitting in your village, they're going to come and take you. At that point--choices in Vichy France--"I'm going to go in the resistance," big choice. You go in the resistance. You live off the land. Sometimes just a couple people, sometimes lots of people, an international mix. Lots of Poles were there, lots of Spaniards, but most of the people were French. One of the interesting things is that the resistance itself, unlike almost every big political event in France from the Revolution to 1981, did not follow traditional lines between right and left. Leftwing regions did not have a monopoly on the resistance.
There was tons of resistance in Brittany. There was tons of resistance in Normandy. Eisenhower after the war said the French resistance was worth an entire division, or two divisions, I can't remember exactly what he said. Of course, they helped prepare the way in Normandy for the invasion of June 6, 1944. That old left-right dichotomy does not work in terms of regions. It does work in terms of which people were more likely to resist. Working people were more likely to resist, because their unions had been broken by Vichy, because they were more apt to have supported the Popular Front. "No to the France of the aperitif" was the cry of the right in 1936. "No to the France of drinking before lunch," and all of this. "No to the France of the Jew Blum." "Better Hitler than Blum," over and over again. Working people and peasants, like the ones who said, "Nous sommes rouges, comme le vin," that I mentioned before, were more apt to resist. Now, why does the Communist Party have such a privileged role in the resistance? After the war, they called themselves the party of 75,000 martyrs. That may be an exaggeration, but not by much. Whenever there was a shooting, whenever there was a Nazi gunned down, whenever there was a railroad track blown up--carrying munitions, carrying soldiers, carrying whatever--whenever they couldn't get through, and the Germans went to the mayor and said, "Who do you want shot?", the communists would be the first to go, always. At the forts around Paris and these other places, there were communists put up against the wall all the time. They were the most likely to resist, along with the Gaullists. Jean Moulin, the prefect of the Eure-et-Loir who was hideously tortured without revealing any secrets, was one who was sent out to try to unify the resistance. Why were the communists so effective? Because the Communist Party was organized into cells.
We still get little notices in our mailbox saying that the Communist Party, the cell of Balazuc where we live, all four people in the Communist Party, are going to meet together and drink illegal wine, a wine called Clinton. I was once asked to describe the fall of capitalism. I had to say, "It's really not falling yet." The point is that they were already organized. These networks were not destroyed by the war. They existed, the comradeship. If you were a communist, you'd been a communist since the 1930s, you trusted those people. You were apt to fall in with them. There were two people, one of whom is still alive. He spent a lot of time in prison in Paris, a painter. He's now ninety-five. He's a friend of mine. He and his wife, on their first vacations, took a tandem bike. They pedaled all the way from Paris down to our village, which they subsequently made their home. They joined the Communist Party in 1933 and 1935. He was a big-time resistor. He was damn lucky to escape with his life. He was scheduled to be executed and he wasn't. He painted people in the prison. I've seen his paintings. The socialists weren't organized in that way. Sometimes after the war the communists said, "Aha! The socialists weren't the big resistors." Well, many did resist, individually. Léon Blum was lucky not to have been executed. He survived the war in prison. He was put on trial at a place called Riom, right near Clermont-Ferrand. He survives the war. But there was a Catholic resistance. I have very old friends, much older than me, who went from the leftwing Catholic resistance into the Communist Party, into the Socialist Party, kind of the normal trajectory of those things among militants. They were resistors also. Protestants are more known for having resisted because of some very famous events. But remember, only five percent of the French population is Protestant.
There's a village called Le Chambon-sur-Lignon, which is in Haute-Loire, but near Ardèche. They had a cottage industry of making fake IDs for Jewish children from Lyons and Saint-Étienne, who were kept in this small village and who were saved, saved because of these people. Whenever the Germans would come through, which wasn't that often, they would hide the children, or the Germans would go through and say, "My god, there are a lot of children. Well, these are practicing Catholics, aren't they?" They weren't. They were practicing Protestants. Those are the more famous cases, but lots of people resisted. Lots of people resisted, but lots of people collaborated, and many other people were indifferent. That's the way it is. I want to close with a story of Oradour-sur-Glane, because the person who wrote this book called Martyred Village, both in French, chez Gallimard, and in English with Cal Press, was somebody who took this course with me a long time ago, and was in Ezra Stiles College, Sarah Farmer. There was a village near Limoges where the Germans, when they were leaving, getting the hell out, going north after this massacre in Tulle that I alluded to, suddenly show up and shoot all the men, and they put the women and the children in a church and they kill them. They blow the church up. One woman, a very thin lady, escaped through the little window behind it. They destroyed the entire village. People who had taken the tram to the market in Limoges came back and there was nothing. Everybody was dead, dead. They left this village standing the way it always was--it's still there. Now there's a center of memory. One of my friends is the director of it. Sarah Farmer wrote a book about it. But what's important about it is that this was the site chosen, the site chosen to commemorate the war. Why? Because it was virgin, no collaborators supposedly, no resistors supposedly.
Martyred Village. It turned out more complicated than that. It's a wonderful book, Martyred Village, Sarah Farmer. But what shows the complexity of it is what happened afterwards. The people in this village were gunned down. The women and children were killed by some Germans, but lots of them were Alsatians who had been brought directly into the German army. So, they went on trial in 1953. There were riots in Colmar, in Strasbourg, that they should ever be put on trial. They called themselves the malgré nous, the "in spite of ourselves." There were riots in Limoges that the penalties were so mild. Some of them were let go if they had not joined voluntarily. The others went to jail. The man who apparently ordered the massacre, a guy called Heinz Lammerding--there were various attempts to kidnap him from Germany and bring him back to France, but he died a natural death in the 1970s or 1980s. This was the enormous, ironic complexity of the whole thing, of getting into the history of history, of trying to understand what happened during those years: that some of the murderers in this case were Alsatian, and therefore French, until Hitler invaded in 1940. So, collaboration and resistance. Great subjects for study, but heartbreaking, just absolutely tragic. The Nazis would have been happy to do all of this stuff on their own, but the xenophobia and the anti-Semitism meant that in those cases the guys going up the stairs in Paris, and in other cities, and all the patrons signing lives away, were French. So France, as in other countries--it's happening in Belgium, too--is coming to grips with its past. So, it's been a sad pleasure to talk about that.
European Civilization, 1648-1945, with John Merriman
Lecture 19: The Romanovs and the Russian Revolution
Prof: Today I want to talk about the Russian Revolution. I want to do just a couple things at the beginning. Then I'm going to--I hope you weren't in Jay's class "The Age of Total War" last year, because I gave almost the same lecture in it. In fact, I might have done it this year, too. As you know, I have him come in, then he has me go into theirs. But what I want to do is see the Revolution through the eyes of Nicholas and Alexandra, for the last part. But first, just a couple things at the beginning. Picking up on something that I said when we talked about 1848, the Russian Revolution is a perfect way to see revolution as process at work. You know, read the chapter. The revolution in February, as I said before: people wake up and there are not a lot of troops around, and people are hungry, and the--and I'll talk more about this in a minute--autocracy falls rather quickly and rather easily. It's at that point that you've got the provisional government of Kerensky. It's at that point that, as in 1848, and as in 1789 and the following years, people who want to shape the future of the country put in their claims. That's when social and political conflicts increase dramatically. The context of the war is, of course, mind-boggling, with the front not all that far away from Petrograd--because St. Petersburg was renamed Petrograd at the beginning of the war, because it was a more Russian name. Those groups all put forward their claims: the Mensheviks, whom you read about; the Bolsheviks--Lenin comes back on the sealed train; the Kadets, liberals; those people who wanted czarist restoration; and the Socialist Revolutionaries, of which Kerensky was one, who had the most influence in Russia of any dissident party by far. The Socialist Revolutionaries, especially their left wing, would be allies of the Bolsheviks after the Bolshevik seizure of power. Then they're dismissed and persecuted like the others.
All the kinds of tensions, and the "Kornilov plot," in quotes, which you can read about, and the July Days, and all of that really reflect the revolutionary process. What happens in October is the Bolsheviks, after one attempt that didn't work, are able to seize power. So, Leon Trotsky--who ends up, as you know, with an ice pick planted in his skull in a garden in Mexico City, assassinated on the orders of Stalin--and Lenin and the very young Stalin, who was in Siberia at the time of the February revolution--the Bolsheviks come to power and the Soviet Union is created. Next week I'll talk about Stalin and Stalinism. Today it's enough to talk about the Russian Revolution. Before I go back and tell you about Nicholas and Alexandra, and the crazed Rasputin and those folks: nobody expected that the Marxist revolution, or a version of it, would come to Russia. Populists in the middle decades of the nineteenth century, people like Bakunin, whom I've talked about before, believed that the Russian peasantry was a potentially revolutionary force. They thought that the peasants would rise up one day and sweep away their masters, to whom they were indentured as serfs until 1861. That's not that long before World War I and all of that. But for Marx the revolution had to come where you had a class-conscious proletariat that had been organized by this revolutionary elite, sort of a top-down organized revolutionary elite. That would come in Germany, in Britain, in France, eventually maybe in the United States. After the Bolshevik revolution, Lenin is still convinced that the revolution is going to come in Germany. In fact, the Spartacists do rise. They were a real far-left revolutionary group full of some very good people, incidentally, like Rosa Luxemburg, who ends up being murdered. She was born in Zamosc, in what then was Russian Poland. The revolution had to come where you had an industrial proletariat. But it doesn't. Or does it?
I'm a little ambiguous there, but let me say that because the revolution starts in Petrograd, remember the way Petrograd was in 1917: an administrative, czarist, autocratic capital constructed by Peter the Great. But it was also a huge, enormous industrial center with hundreds of thousands of industrial workers. The historians still debate whether by October--that is, after February, during the provisional government time--the Socialist Revolutionaries or the Bolsheviks had more influence in the soviets. That's where the Soviet Union comes from: the soviets, which were organizations of workers, sailors, and soldiers. Marx wasn't all wrong. The role of the industrial workers in St. Petersburg is very important in this. Lots of them get betrayed. They all get betrayed, ultimately, because what was going to be the workers' paradise, it ain't that. And workers' self-management, it didn't become that. It didn't become that at all. They're shocked when the Red Guards are putting down their strikes. But in the beginning, the role of the workers on the periphery--remember, center and periphery is terribly important. I talk a little bit about this in what you're reading. Along the Nevsky Prospect you have government buildings. You've got the Singer Sewing Machine Company. You've got tramways. I haven't been to St. Petersburg since au temps des camarades, since the fall of communism, but very fancy stores in 1917, very dolled-up people, very rich people. Then the tramway simply stopped in the mud when it reached the periphery, when it reached the working-class suburbs. The glittering lights of the big department stores that would make you think of London, and Paris, and Berlin, and Vienna, and the big fancy hotels, all lit up with doormen clicking their heels as the well-heeled enter and leave, even during the war. There weren't any lights, or very few, when you got into the working-class suburbs.
The one thing to keep in mind is that the Russian Revolution, both that of February and that of October, was a popular revolution. This was no sort of coup d'état carried out by a couple of extremely organized, determined politicos. Lenin was organized and he was determined. Lenin was not what the French would call rigolo. He was not a barrel of laughs. He was sure of himself. He had very little sense of humor. He had biting sarcasm. I quote him once in there, I think: when he would argue with somebody he said, "He who does not understand that understands nothing." He was very, very sure of himself, a very difficult man to get along with. I'll leave it to you to think: was Stalinism inevitable in Leninism? I'm not so sure it was. Anyway, the revolution was a popular revolution. At the fall of the autocracy, the masses did not rise up to save the czar and the czarina. They did not. "Bread, land, and peace." "Bread, land, and peace" is a very, very important slogan when you've got millions of people under arms from all of the nationalities, some of whom didn't know Russian at all, many of whom when they go into the war don't know the difference between a gun and a pitchfork. Until the very end, Nicholas and Alexandra--who are not very loveable people; one can feel sorry for them, and you will feel sorry for them, they end horribly--still held the belief that the Russian people loved their czar, and that they would pour forward to save the czar, and the czarina, and the autocracy. And they didn't. The Treaty of Brest-Litovsk, March 1918, pulls Russia out of the war and all that. That's just a couple of things to say at the beginning. It's all in the book. Still, it's a very interesting revolution. There's a lot of great literature on the Russian Revolution in English--I don't read Russian at all--and in all sorts of languages. Now, having just sort of set that up--reserved seating, VIP? What is this? I don't know.
Anyway, there's nobody there--Let's talk about--did I do all that? Yeah--Let's talk about Nicholas and Alexandra. The czar. Nicholas was a family guy. He enjoyed his family. They played tennis. They were modern people. They had bicycles. They pedaled around. The bicycle was a relatively recent invention, as you know. The first big bicycle races in Europe are already in the 1890s. Like his cousin, Nicholas II had some general education in political economy, in math and geography, and in foreign languages, which he spoke very well, and in military science. But he had very little intellectual interest at all. Built into the way he looked at the world was this inherent suspicion of rationality, of the Enlightenment. He was somebody who, and his wife also, would still blame, if he discussed it, Peter the Great for having really incorporated, in some ways, rational organization and the Enlightenment, at least the works of the philosophes, into Russia long before that. He believed that waging war was a matter of honor. In that he shared lots with his wacko cousin, Wilhelm II. He was hard-working in the sense that he read or listened to reports on all that was going on about the war. Mark Steinberg has published some of the letters. Mark Steinberg is a friend of mine who teaches at Illinois. If I remember correctly, the czar believed that the nobles had compromised the fate of the autocracy in some ways, or threatened it, by being indolent and not working hard enough. His view was always that he was the father of his people, that he was the holy czar. Again, everybody has seen pictures of him sitting on a horse, blessing the kneeling soldiers as they're going off to fight in 1914. He constantly referred to his ancestors, the Romanovs. "Only the state which preserves the heritage of the past is strong and firm," he wrote. "We ourselves have sinned against this and God is punishing us with the war."
He took command of the Russian army in 1915 against the advice of his wife. He didn't usually go against the advice of Alexandra, the advice of his ministers, and the advice of the mad monk, Rasputin. Obviously, one of the reasons that people argued against this was (a) he really wasn't a military guy, and (b) if it doesn't go well, will people blame the czar? Will the role of the czar be diminished? He had a strong sense of what he considered moral, and it was shaped by his Orthodox religion. His wife, about whom I'll have more to say, was a convert to Russian Orthodoxy. Like many converts from one religion to another, she was absolutely fanatical in her attachment. He was more relaxed about religion than his wife was, but he often spoke about this sort of religious ecstasy that he felt when he went to church. Historians now say, "It's too easy to shape discourse about the Russian Revolution around the influence of Rasputin." But in fact, Rasputin in 1914 had warned that war would bring God's punishment upon Russia, and great destruction, and grief without end. Rasputin did have great influence with the family. In one letter--again, Mark put this stuff together--the czar wrote, "When I am worried or doubtful or vexed, I only talk to Gregory for a few minutes to feel myself immediately soothed and strengthened." One of the reasons that Rasputin had so much influence on the royal family was, of course, tragic illness. Alexi, the son, was a hemophiliac. Hemophilia, I guess, can now be treated more easily than before, but when hemophiliacs get a scratch they can bleed. The blood does not coagulate and they can die. He was not in good health at all. Rasputin, on a couple of occasions, got lucky and predicted the end of a spell, as they used to call them, or an episode of hemophilia, and everything worked out okay. This increased the belief of these parents in the power, if you will, of Rasputin.
When there was a mutiny in the navy he wrote, "If you find me so little troubled, it is because I have the firm and absolute faith that the destiny of Russia, of my own fate, and that of my family are in the hands of Almighty God, who has placed me where I am. Whatever may happen I shall bow to His will." This kind of fatalism you would see to the end, when after the revolution he's on a train and he finally has to turn back. This sort of fatalism was part of it. Rasputin by that point was already dead. He'd been assassinated. Even the story of his assassination--it was almost impossible to kill him. The people who wanted to get him out of the way kept hammering him with huge rocks and pumping one bullet after another into him. Finally, after sort of beating the hell out of him and pumping one bullet after another into all parts of his body, they threw him into a lake weighted down with rocks. When they brought the body up and did an autopsy, they found out that he died of drowning. Anyway, his influence over the czar in a way helped accentuate this sort of fatalism that was almost predetermined, you could say, by his religious orthodoxy. But if you're going to be the autocratic czar, father of all the people, you don't want any political institution that's going to limit your will. Now, the Duma, the assembly, had been created in 1905 after the revolution--which I trust that you have read about, along with the role of the Russo-Japanese War in facilitating that. He believed that even the existence of the Duma would compromise the virtues of autocracy. As you know, the Duma loses most of what authority it had been given. The Duma seemed to be a rational organization, and this didn't fit terribly well in a worldview that believes in faith and feeling as opposed to reason, and has a particular, and sometimes peculiar, idea of morality shaped by the traditions of Mother Russia.
He and his wife look back to this sort of imaginary time before Peter the Great, when the true Russia did not look westward at all, and was not tempted by these foreign imports. He idealized that time of piety, the unity between the czar and his people, the narod. In 1902, he wrote a letter to Alexandra when he was on tour. He said, "We passed through large villages where the good peasants presented simple bread and salt," which are very important in Russia. At his coronation they spilled the salt. It was part of the ceremony. That was a bad omen. He was very superstitious, by the way. Seventeen was his unlucky number. He was terrified of seventeen. There was a huge throng at his coronation, and a stampede, and lots of people were killed. That was a bad omen, too. Anyway, he said, "All the peasants presented simple bread and salt and all went down immediately on their knees showing such a touching childish joy." He had the image that the Russian people were childish, that they--and his wife insisted on this--loved being whipped. They loved being punished. Since the abolition of serfdom in 1861, there was lots of mistreatment of peasants by lords, but you could no longer literally torture serfs, as one could before, so long as the serf didn't die. Before 1861, if you tortured a serf and he did die, or if you just ordered him killed or killed him yourself, you would receive only a small fine. But still, there was this idea that the narod, the people, "good, virtuous, and kindly," will come to their senses and that they will not disobey during the war. They will do what he told them. Until the end, possibly--we don't know this--there was the idea that they would rise up and take him away from his captors in those final days. They did try. There were attempts, but it wasn't ordinary people. He had these views of orthodoxy, autocracy, aristocracy, etc. It's a romantic view. He preferred Russian foods.
Peter the Great liked Russian food, but he also, you'll remember, ripped roasts off tables in London, and drank tons of wine and things like that when he was in Western Europe. He spoke Russian, obviously, very well, but he spoke English with his wife, because English was her language, along with some German. Again, speaking English was part of this kind of aristocratic tradition of the aristocracy speaking other languages, not Russian. They did speak Russian, but French and German were sort of privileged languages. He had this feeling that he didn't like big cities. He had his retreat on the sea near St. Petersburg or Petrograd. He said that Moscow and St. Petersburg were "two needle dots on the map of our country." Well, in terms of percentages he was certainly correct. It was his idea to rename St. Petersburg to Petrograd, because it was more Russian. But he believed, and I've already spoken about this a little bit, that the heart of the empire was Moscow, because it was the religious capital of the empire, and that its skyline was dotted with churches and not by government buildings. Part of having a modern army and a modern navy was that you had to have a bureaucracy. Petrograd was a bureaucratized city and "not truly Russian in its heart and in its spirit." He didn't spend much time in the famous Winter Palace, that of the siege in the Russian Revolution. When he went to his provincial resort on the sea, where I've been, but a long time ago, he had a new church built there, but in the original Moscow style. Nicholas and Alexandra were raving anti-Semites. That's why it's amazing, this business--didn't they canonize him as a saint or something? I really don't know, but that's horrific. He loved the Black Hundreds, who had sparked the pogroms, particularly in Crimea in 1905, and had beaten Jews to death. He thought that they represented the true heart of Russia.
His interpretation was that the pogroms were the "pious rage." I think that's Mark's phrase, not his. Here's his--unfortunately, I quote: "The Poles and the Yids," that is a slang, horrible, racist, ethnic denunciation of people who happen to be Jewish, "who had agitated and brought about the concessions of 1905." He believed that the revolution of 1905--and, until he went to his grave, so to speak, the Russian Revolution itself--was the work of Jews. Incidentally, because a fair number of the Bolsheviks happened to be Jewish, this played into the Russian Civil War: the sheer brutality of the "white forces" against the "reds," or the Bolsheviks, was often just sort of an extension of anti-Semitism run wild. Anyway, Nicholas wrote that the deaths of these people, of the Jews in 1905, were justified. "Harm befell not only the Yids, but also Russian agitators, engineers, lawyers, and all other bad people." Anyway, it's very sad. There was lots of complexity built into his being. On one hand he's supposed to be the czar of all the people. He's supposed to be ruthless. He's supposed to be tough, hard, etc., etc. On the other hand, he's dominated by his wife. His wife is constantly urging him in her letters to be harsh, to demonstrate "the power of your will and your decisiveness." "Show you're the complete autocrat, without whom Russia cannot exist. Ah, my love, when at last will you thump with your hand upon the table and scream at those who act wrongly. They do not fear you enough, but indeed they must, oh my boy. Make them tremble before you. To love you is not enough. They must obey you. Show to all that you are the master and that your will will be obeyed." He signed one letter to his wife "ever your poor little hussy,"--that's an odd choice of words--"with a tiny will." With a tiny will. She was born in Germany, a princess. These royals, as I've stressed, were all intermarried.
She's the granddaughter of Queen Victoria of England. She identified with Hesse-Darmstadt, the part of Germany in which she was born. And, as I said, although she spoke English at home and with her children, she had been a convert to Russian Orthodoxy. She also feared or resented idleness. She thought it was important to work. She was a nurse to her children; she cared for her son, above all. She worked very, very hard. She wanted to keep her daughters "from foolish gossip" and away from being "idle and listless." She was fanatically religious. She went to church every day, and she was intolerant of those people who did not. Again, this turns them towards Rasputin. Again, Rasputin did not make, by being who he was, the Russian Revolution. It's a popular revolution. But still, he's there lurking in the shadows. She said, "God has given Rasputin more insight, wisdom, and enlightenment than all of the czar's advisors." At home, she reinforces the idea that any kind of constitutional compromise was dangerous. She believed until the end that St. Petersburg, Petrograd, was a rotten town, not Russian at all. As for the Black Hundreds, who had murdered all those Jews, they represented "the healthy, right-thinking Russians." "The Russian people loved to be whipped," she said. She believed it was in the Slavic nature. They use over and over the word "childish" in describing other Russian people. As for the Progressive Bloc, which I sent around on the website--she believed that its very existence was dangerous. It wouldn't really have been possible had it not been for World War I. World War I gives opportunity to Russian dissidents to get together in ways that they couldn't have otherwise. It makes possible the creation and operation of voluntary associations that are bringing people together to try to send food and letters to the front, to get news from the front that was not a military secret. Those involved inevitably began to imagine a world without the czar.
There were people in 1905 who could imagine a world without the czar. Again, there are lots of people who are thinking about the post-war world and who imagine or are beginning to plan for a reformed czarism. It's very hard to say how many people could comprehend the idea that Russia would not have a czar. Obviously, the Mensheviks, the Socialist Revolutionaries, and the Bolsheviks did feel that way. All of these dissident groups want change, but only the liberals, and particularly the Kadets, whom you can read about, want the czardom, the autocracy, to continue. But until the very end, Nicholas is determined to defend the autocracy, trying, when the Revolution comes, to transfer power to his brother Mikhail, who would be the regent for Alexi. Then, when told by his son's physician--and imagine this--that his son would not recover, he tried to leave the autocracy forever in the hands of his brother, Mikhail. In fact, he abdicated the next day. He really wrote only, "All around me is treachery, cowardice, and deceit." What were they going to do with him? What do you do with the czar of all the Russians, and Alexandra, and their children? Predictably enough, the liberals and the Kadets want to have them protected. The Socialists basically want him to go on trial--that is, the Mensheviks, the Socialist Revolutionaries, and the Bolsheviks. But for a couple of days they don't do anything at all. They've got other things going on. They've got the war. I won't discuss the attitude of the Provisional Government to the war, and the kinds of pressures from the Allies, of course, for them to keep the war going. That is handled in the textbook. There are more demands from the public, and from the soviets in particular, to arrest him. So, the provisional government wants to protect them. They finally order them confined to the resort, which is called Tsarskoe Selo, but the name doesn't matter. Nicholas himself wants to go to Britain.
One day he wanted to live out the rest of his life with his family in Crimea. Does the British government want the czar of all the Russian people to arrive in London? Not exactly. It might complicate the war effort. Labor will have no part of it at all. You're dealing with a coalition in the war. You can't have the czar. You're not going to be coming in a 747 or something, but you can't have him coming up the river in the Thames. How are you going to get him there in the first place? That simply is not practical. The liberals didn't want the czar there either. There were constantly rumors that the czar was going to be allowed to leave. There's lots of protests about that. Kerensky says that the revolution should show its moral worth by seeing that no harm came to the royal family. By the way, just as an aside, Kerensky lived a very, very long life. At the end of his life, he taught this course at Stanford University. There's a story that's probably apocryphal. This is in very contentious times in American politics in the late 1960s. I can vaguely remember those days. A student, not realizing it was Kerensky, asked a question saying, "How could the provisional government be so stupid in their conduct of those operations?" And this clueless person had no idea that this was Kerensky, who was an historian and was trained as an historian. He died shortly thereafter. But that is really amazing to think. Of course, Lenin dies in 1924 or 1925, but Kerensky went on and on. So, what they do, they're in the resort, their little mini palace. They're allowed to take walks. They could talk on the telephone, but only in Russian and only in the presence of a guard who spoke Russian. They could not speak German and they could not speak English. They separated the family for a while, fearful of the influence of Alexandra on Nicholas. Again, this is rather like the attitude that people had toward Marie Antoinette and Louis XVI. 
People thought that the influence of Marie Antoinette on Louis XVI was prenant, overwhelming. Anyway, then they were put back together and they didn't have much to do. They gardened. They taught the children, which they always had done. They wrote letters. They went to church. They complained that the soldiers were more and more disrespectful. They were noisy. They were slovenly in their dress; their crushed caps were set awry on huge mops of unkempt hair. Their coats were half-buttoned, and their nonchalant manner of performing their military duties was a constant irritation. They mocked them. They knocked on the door. "Who's there?" The answer would come, "The czar of all the Russians." A Latvian guard, "a mere commoner," outside would be laughing uproariously. Once the czar was pedaling his bike and he goes by a guard who has a bayonet, and the guy sticks his bayonet into the spokes of the wheel. The czar of all the Russians went tumbling down and skinned his knees. But yet there were rumors of how well they were eating when nobody else was eating. When the July Days mini-attempt at revolution comes, for their own safety Kerensky says they have to be moved. So, they decided on Siberia, almost inevitably, on a town called Tobolsk, where they would presumably be safe, and they were moved on August 1, 1917, with their windows covered up for most of the trip. It was like Lenin who, when he comes back--brought back by Germany to encourage the Russians to get out of the war--goes on the famous sealed train, so people can't see that it is Lenin, because people knew what he looked like. The same thing: the czar of all the people is not going to be seen by the masses, because what if they try to stop the train and pull them off? Ironically, they passed by Rasputin's home village, and indeed his house, as a steamboat took them via two rivers to this town of Tobolsk, where I have never been. This frightened them, especially Nicholas, because he is so superstitious.
The salt falling, the stampede, the number seventeen, and all of this. When they're pulled by the house, on the steamboat, of the dead Rasputin, his trusted advisor, it is a bad omen. So, they could go to church when they arrived. They played cards. They performed plays, en famille, which they did a lot. But the counterrevolution was a very real threat. The Americans, and the British, and the French, after the Bolshevik revolution--one reason Stalin was so paranoid, he was clinically paranoid, just a complete dangerous crackpot, but they had a lot to be paranoid about, because the Americans, and the British, and the French kept trying to undo the Russian Revolution. Anyway, they were photographed and they had ID cards. Can you imagine the czar and the czarina having ID cards, like your Yale ID cards? They start seeing obscene graffiti written, new guards come that had even less respect for them than the other people. There were serious attempts to kidnap them and to get rid of them. There was one in which czarists were supposed to be hidden under the altar of the church when they were in the service. This, again, is a throwback to the revolution. It was like in the French Revolution, where there is a massacre that starts at the Festival of the Federation when people are hidden under the church. So, there are articles in the newspapers calling for the surveillance of Nicholas "The Bloody Romanov," and calls for him to be put on trial. The arrival of a certain Vasili Yakovlev, a name you don't have to remember, obviously, sent by the Central Executive Committee of the Soviet. He was a longtime revolutionary, rumored subsequently to have been an agent of the Germans, which is preposterous. Lenin was rumored to have been an agent of the Germans, by his enemies, because of the way he got back to Russia after the initial revolutions. He was the son of a peasant. He had that kind of curriculum vitae that lots of folks, including Stalin, had. 
He had participated in armed holdups to raise money for revolution and became a Bolshevik. He transferred the czar and family to a much smaller place. The czar and his wife believed, naively, that they were going to be taken to Moscow because the provisional government--originally, before the provisional government is ended--wants Nicholas to sign the eventual peace treaty with the Germans. So, this Yakovlev was ordered to take his "baggage," as they called it, that is the royal family, to this small town in the Ural mountains. But the Ural mountain Bolsheviks were harder to control. Remember, the Russian Revolution, both the first one and the second, have been aptly described as a revolution by telegraph. The vast reaches of the Russian Empire are so absolutely enormous that in many cases it was weeks, and in a few places months, before any revolutionary commissar arrived to sort of inform people what's going on. So, lots of these Bolsheviks were sort of freelancers, and, for a party that was extremely hierarchically controlled from the top down, there was very little control over the Ural Bolsheviks. Nicholas had some trepidation about that, because he wrote that "there was a mood that was rather harsh" against him. Conditions were worse. They used to like to photograph birds and things like that in bushes. Their equipment was taken away from them. They could no longer control their own money. The guards couldn't talk to them at all, so they could only talk to each other. Some of the guards were just awful to them. There were plans afoot to put them on trial, a kind of show trial, but there was also this big possibility discussed that they might simply be killed. Trotsky asked that they be put on trial, so that the corruption and abuses of the autocracy could be revealed. The context is that there are large, massive armies being organized, the White Armies. 
Because foreign intervention was already underway, one could imagine why there was a current within the Bolsheviks--national, but local in particular, and that's what would count--that they should be executed. It's possible that a telegram came from Moscow ordering that they be executed, or simply that it was the Ural Bolsheviks--in the Ural mountain region--acting on their own. The archives have been opened up only in the last ten or fifteen years, and the people who have looked think that is mostly the case. In any case, the order came in July 1918, that there be a trial, and if that was not deemed possible, they should be shot. A bloody execution came on the night, early morning really, of July 16-17--that number again. They actually died on the 17th of July, 1918, in a horrific massacre with machine guns and pistols, a bloodbath in the basement of a house. Almost immediately there were stories that Alexandra and her daughters had been seen taking a train away from there. Way into the 1980s, the Russian community in Paris tended to settle around the Boulevard Montparnasse, where there is still a very good Russian restaurant. There is a particular café called the Coupole on the boulevard Montparnasse where sort of the wealthy Russian émigrés went. There were periodically women turning up who claimed to be the daughter, and then later, as time passed, the granddaughter of the czar. It was only after what was left of the bodies, or the bones, them dry bones, were discovered in 1976, and forensic experts in 1991 were able to work with the DNA, that the victims were all accounted for. None of them escaped. The others were in that long Russian tradition of false czars and false czarinas, the kind loyalty to whom generated so many uprisings in the eighteenth century. What can one say? He's been canonized, this vicious, murderous anti-Semite, by the Russian Orthodox Church. But that's not my church.
It's not for me to say that. I don't know. Yes I do. Anyway, tragic martyrdom? Were they heroic people, or simply human beings who were mowed down in a revolution that didn't start out as a bloody revolution but became a very, very bloody civil war? Was it the first signposts of Soviet totalitarianism? No. It wasn't that. Was it bloody vengeance for past misdeeds in the pursuit of justice? It depends on your viewpoint. I happen to believe the latter. See you on Wednesday. Thank you.