CS_285_Deep_RL_2023
CS_285_Lecture_14_Part_1.txt
All right, so on Monday we had a longer lecture about core topics in exploration. In today's lecture we're going to do something a little different: I'm going to discuss a different perspective on exploration, one that is quite distinct from Monday's and a little bit unusual. This is not how most people think about exploration problems, but I think it offers a perspective that might get us thinking about what exploration really is. This lecture is much more of a state-of-the-art, research-focused lecture, partly to get you thinking about final project topics and partly to get you thinking about how else we could consider the exploration problem, differently from how it is considered conventionally. It will probably also be a little shorter and quicker to get through.

So what's the exploration problem? Just to recap from Monday: the exploration problem can be summarized with these two animations. In homework 3 you used your Q-learning algorithm to learn a variety of Atari games, and probably some of them worked pretty well. But some games are easy to learn, whereas others are seemingly impossible, and we learned on Monday that this is due in part to the fact that some of these games have highly delayed rewards, where intermediate reward signals don't really correlate with what you're supposed to be doing. That's what Monday's lecture was about. Conventionally, we think about exploration as a problem where you have to trade off exploration and exploitation, and figure out some way to incentivize your RL agent to visit novel, unusual states, or states where it has a lot of information gain. But here is a different way to think about the exploration problem: what if we don't just consider delayed or sparse rewards, but a setting where rewards are absent altogether? What if we want to recover diverse behaviors without any reward signal at all?
You could imagine this from a more scientific or AI perspective. Human children, for example, seem to spend copious amounts of time playing with things in their environment. Presumably they're not acting randomly, and presumably they're getting something out of it: it's an energy-intensive activity, both for them and for their parents, so there must be a reason why this is something people do. Probably something is learned through play, through undirected exploration, and it's not just random: there's some notion of goals being set, some notion of goals being accomplished, and presumably a very useful body of knowledge that is distilled in the brain through this activity.

So why might we want to learn without any reward function at all? Well, perhaps we could acquire a variety of different skills without any explicit reward supervision, create a repertoire of those skills, and then use them to accomplish new goals when those goals are given to us. Maybe we can extract sub-skills to use in a hierarchical reinforcement learning scheme, and perhaps we can explore the space of possible behaviors to build up a large dataset, a large buffer, that can then be used to acquire other tasks. This is a pretty different way of thinking about exploration: conventional exploration is framed as the problem of seeking out the states that have reward, whereas here we're thinking of it as the problem of acquiring skills that can be repurposed later. If you want a more practically minded example, imagine you have a robot in your home: you buy the robot, put it in your kitchen, and turn it on, and the robot's job is to figure out what it can do in this environment that could potentially be useful. That way, when you come home in the evening and say "now I need you to do my dishes," whatever the robot practiced during this unsupervised phase can be repurposed to very efficiently figure out how to clean your dishes. If you can prepare for an unknown future goal, then when that goal is given to you, you can accomplish it quite quickly.

All right, so in today's lecture we're going to cover a few concepts that might help us start thinking about this problem. This is a big open area of research; there are no fixed, known, perfect solutions, but perhaps some of the concepts I'll discuss might help you start thinking about how formal mathematical tools and reinforcement learning algorithms could be brought to bear on this kind of problem. First we'll discuss some definitions and concepts from information theory, which many of you might already be familiar with, but a refresher will be important for everyone to be on the same page as we talk about the more sophisticated algorithms that come next. Then we'll discuss how we can learn, without a reward function, strategies for reaching goals: an algorithm that proposes goals, attempts to reach them, and through that process acquires a deeper understanding of the world. Then we'll talk about a state distribution matching formulation of reinforcement learning, where we can match desired state distributions and in the process perform unsupervised exploration. We'll discuss whether coverage of valid states, basically breadth and novelty, is a good exploration objective by itself and why it might be, and then we'll talk about how we can go beyond just covering states to covering the space of skills, and what the distinction between those is.

But let's start with some definitions and concepts from information theory before we dive into the main technical portion of today's lecture. First, some useful identities. As all of you probably know, we can use p(x) to denote a distribution; we'll see that a lot in today's lecture, and we saw that a lot already. You can think of a distribution as something that you fit to a bunch of points, giving you a density in continuous spaces or a distribution in discrete spaces. H(p(x)) denotes entropy, which we've seen before: entropy is defined as the negative of the expected value of the log probability of x, and intuitively it quantifies how broad a distribution is. If you have a discrete variable x, the uniform distribution has the largest entropy, whereas a distribution that is peaked on exactly one value and zero everywhere else has the lowest entropy. So intuitively, the entropy is a kind of width of the distribution. That's stuff that hopefully all of you are already familiar with.

Another concept, which maybe not everyone is familiar with but which will come up a lot in today's lecture, is mutual information. The mutual information between two variables x and y is written I(x; y), with a semicolon, because we could also have mutual information between groups of variables: you can have mutual information between x together with z and y, in which case you would write I(x, z; y). It is defined as a KL divergence, and remember, the KL divergence is a measure of how different two distributions are. The mutual information is the KL divergence between the joint distribution over x and y and the product of their marginals. Intuitively, if x and y are independent of each other, their joint is just the product of their marginals and the KL divergence is zero; as x and y depend on each other more and more, their joint distribution becomes more and more different from the product of marginals, and the KL divergence goes up. Using the definition of KL divergence, we can write the mutual information as the expected value, under the joint distribution over x and y, of the log of the ratio between the joint and the product of marginals.

Intuitively, you can think of it like this: if these green dots represent samples from our distribution, then in the top picture there's a clear trend. The y values clearly depend on the x values; they're not fully determined by the x values, but there's definitely a relationship. In the bottom picture, the y values seemingly don't depend on the x values. So in the top picture you have high mutual information: if I tell you x, you can do a decent job of guessing y. In the bottom picture you have low mutual information: if I tell you x, you will do no better at guessing y than if I hadn't told you anything.

One important fact about mutual information is that it can also be written as a difference of two entropies: the mutual information is the entropy of y minus the conditional entropy of y given x, I(x; y) = H(y) - H(y|x). This follows from a little bit of algebra; you can start with the definition at the top, manipulate the equation a little, and end up with the equation at the bottom. But this way of writing mutual information also has a very appealing intuitive interpretation: you can think of mutual information as the reduction in the entropy of y that you get from observing x. This is essentially the information gain calculation that we saw in the previous lecture on Monday. Mutual information tells you how informative x is about y, and because it's symmetric, it also tells you how informative y is about x.

All right, let's tie this into RL a little. The information-theoretic quantities that will be useful in today's discussion are the following. I'll use pi(s) to denote the state marginal distribution of a policy pi; in previous lectures I also sometimes referred to this as p_theta(s), the same exact thing. When I write H(pi(s)), this refers to the state marginal entropy of the policy pi. This is an interesting quantity because it quantifies the coverage that our policy gets: if you have a very random policy that visits all possible states, you would expect H(pi(s)) to be large.

Here's an example of how mutual information can crop up in reinforcement learning. I won't go into this in too much detail, but it's a fairly intuitive concept worth bringing up. One very classic quantity that has been defined in reinforcement learning in terms of mutual information is something called empowerment; a lot of this comes from work by Daniel Polani and colleagues. Empowerment is defined as the mutual information between the next state and the current action. There are a lot of variants: empowerment has also been defined as the mutual information between the next state and the current action given the current state, as well as other variants, but let's think about the simple version, the mutual information between the next state and the current action. If we substitute in the entropy identity from the previous slide, we can also write this as the entropy of the next state minus the conditional entropy of the next state given the current action.

Why is this called empowerment? Take a moment to consider that. Empowerment in English refers to how much power you have, how capable you are of achieving your desired goals. What does this equation tell us about empowerment? Take a moment to think about it; maybe write a comment in the comments section. The way to read the equation is that you would like the entropy of the next state to be large, meaning there are many possible next states, but you would like that entropy to be small conditioned on your current action, meaning that if you know which action you took, it's easy to predict which state you landed in. That means you have a lot of control authority over the environment, a lot of ability to deterministically influence which state you'll be in. On the other hand, if you don't know the action, you want the state entropy to be large, meaning you have a variety of actions available to you, and different actions lead to very different states. If you have both of these things, what you get is an agent that places itself in situations where it has many actions available, all leading to very different states, but in a reliable and controlled manner. If you're in a room, maybe you want to stand in the middle of it, because from there you can access all parts of the room deterministically.

If you had just one of these terms, that wouldn't do the job. If you just had the entropy over the next state, that's not really providing you with empowerment, because then you'd want to put yourself in situations where the future is very random, maybe out of your control. If you just had the negative conditional entropy of the next state given the current action, that doesn't capture the notion of wanting many options available to you: you might put yourself in a very deterministic situation, maybe sitting at the bottom of a well, where the next state is extremely predictable whether you know the action or not. That would minimize the second term but wouldn't maximize the first. With both terms, the only way to satisfy the objective is to put yourself in situations where you have many actions available that lead to many different future states, but where you have a lot of control over which state you get by choosing your action. That's why this quantity is referred to as empowerment. The main reason I wanted to illustrate this (we're not going to go into detail about empowerment in today's lecture) is to give you a taste for how mutual information concepts can quantify useful notions in reinforcement learning. Empowerment can be viewed as quantifying a notion of control authority in an information-theoretic way.
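To make the identities above concrete, here is a small numerical check (my own illustration, not code from the course): for a discrete joint distribution p(x, y), the mutual information computed as KL(p(x,y) || p(x)p(y)) should equal H(Y) - H(Y|X), and it should be zero when x and y are independent.

```python
import numpy as np

def entropy(p):
    """Entropy H(p) = -sum_i p_i log p_i, skipping zero-probability entries."""
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def mutual_information(p_xy):
    """I(X; Y) = KL(p(x,y) || p(x) p(y)) for a joint distribution table."""
    p_x = p_xy.sum(axis=1, keepdims=True)   # marginal over x, shape (n, 1)
    p_y = p_xy.sum(axis=0, keepdims=True)   # marginal over y, shape (1, m)
    mask = p_xy > 0
    return np.sum(p_xy[mask] * np.log(p_xy[mask] / (p_x * p_y)[mask]))

# A joint where y depends strongly on x (high mutual information).
p_xy = np.array([[0.4, 0.1],
                 [0.1, 0.4]])

# Compute the same quantity as H(Y) - H(Y|X),
# with H(Y|X) = sum_x p(x) H(Y | X = x).
p_x = p_xy.sum(axis=1)
p_y = p_xy.sum(axis=0)
h_y_given_x = sum(p_x[i] * entropy(p_xy[i] / p_x[i]) for i in range(len(p_x)))
mi_from_entropies = entropy(p_y) - h_y_given_x

assert np.isclose(mutual_information(p_xy), mi_from_entropies)

# If x and y are independent, the joint equals the product of marginals,
# so the KL divergence (and hence the mutual information) is zero.
p_indep = np.outer([0.5, 0.5], [0.5, 0.5])
assert np.isclose(mutual_information(p_indep), 0.0)
```

The same two routes to I(X; Y) agreeing numerically is exactly the "little bit of algebra" mentioned above.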
CS_285_Deep_RL_2023
CS_285_Lecture_12_Part_2_ModelBased_RL_with_Policies.txt
All right, let's talk about model-free reinforcement learning with a learned model, essentially model-based RL that uses model-free RL methods. Before we get into algorithms, let's make a little more precise some of the things we discussed in the previous portion of the lecture about what these backpropagation gradients actually look like, and why they might not be as good as using model-free methods.

This is the familiar policy gradient expression that we had before, taken directly from previous lectures. We introduced the policy gradient as a model-free reinforcement learning algorithm, but you could just as well think of it as a gradient estimator that can be used to estimate the gradient of the reward with respect to the policy parameters. Used this way, it's sometimes referred to as a likelihood ratio gradient estimator, or colloquially as the REINFORCE gradient estimator. It's important to remember that as a gradient estimator, it doesn't necessarily have anything to do with reinforcement learning: any time you have these kinds of stochastic computation graphs, you can use this type of estimator. The really convenient thing for us about this gradient estimator is that it doesn't contain the transition probabilities. That's a little bit of a lie, of course, because in reality you do need the transition probabilities to calculate the policy gradient: you need samples, and those samples come from the policy and from the transition probabilities. But the transition probabilities themselves do not show up in the expression, except insofar as they generate the samples, and in particular we never need their derivatives. So there's nothing stopping you from using this gradient estimator with a learned model: just the same way you would sample from the real MDP before, you can now sample from the learned model.

The alternative, the backpropagation gradient, also sometimes called a pathwise gradient, can be written out like this. This might seem like a very daunting mathematical expression, but all I did was apply the chain rule of calculus to compute the derivatives with respect to the policy parameters for the computation graphs I showed in the previous section. There's an outer sum over all time steps; at every time step there's the derivative of the action at that time step with respect to the policy parameters, times the derivative of the next state with respect to the action, and the expression in parentheses is the derivative of the reward at all future states with respect to the next state. The particularly problematic part is, of course, that giant product in the second set of parentheses: a product of all the Jacobians between time step t' and t+1. In there you have these ds/da and ds/ds terms, the derivative of the next state with respect to the previous action and with respect to the previous state, and they all get multiplied together. If your states are n-dimensional, each of those ds/ds terms (the very last term in the expression) is an n-by-n matrix, and there are a lot of those matrices getting multiplied together. If those matrices have eigenvalues larger than one, then multiplying enough of them together makes the product explode; if they have eigenvalues less than one, multiplying enough of them together makes it vanish. That's what makes this pathwise gradient so difficult to deal with.

Just as a detail, I do want to note that the likelihood ratio gradient at the top is technically only valid for stochastic policies and stochastic transitions, while the pathwise gradient at the bottom is technically only valid for deterministic policies and transitions. But this is a solvable problem: you can extend the pathwise gradient to some types of stochastic transitions using something called the reparameterization trick, which we will learn about in a later lecture, and you can even make the policy gradient nearly deterministic by taking the limit as the variance of, say, a Gaussian policy or transition probability goes to zero. You can still get an expression for it, although it will look a little different. So it's easier to write the policy gradient for stochastic systems and the pathwise gradient for deterministic systems, but that's not the fundamental limitation. The fundamental difference is that the pathwise gradient involves that product of all those Jacobians, whereas the policy gradient does not.

Now, some of you might be wondering at this point: it seems like there's kind of a free lunch going on here. How is it that you can just get rid of a giant product of Jacobians? There is a trade-off, of course, which is that the policy gradient requires sampling. That's where the difference comes in, and in fact, if we really dig down into the optimization details of these procedures, it turns out the policy gradient has some fairly deep connections with things like finite differencing methods. So there is no free lunch: the policy gradient does pay a price for getting rid of the product of Jacobians. But if you're multiplying enough Jacobians together, paying the price of switching over to sampling can be worth it. Policy gradients might in fact be more stable if you generate enough samples, because they don't require multiplying many Jacobians. Before, generating lots of samples was a problem: when we were doing model-free RL, those samples required actually running a real physical system. But in model-based RL, generating those samples just means running your model, which costs compute but doesn't cost any physical interaction with your MDP. So now that trade-off might be well worth it for us, because generating more samples is just a matter of sticking more GPUs in the data center. If you want to learn more about the numerical stability issues specifically in regard to policy gradients, you can check out this 2018 paper that discusses some of them, but the short version is that the model-free gradient can actually be better.

From this, we can write down what I'll again make up a name for: model-based RL version 2.5. Model-based RL version 2.5 is very similar to version 2.0, except that instead of using backpropagation, it uses the policy gradient. Step one: run some policy to collect a dataset. Step two: learn a dynamics model. Step three: use that dynamics model to sample a whole lot of trajectories with your current policy. Step four: use those trajectories to improve the policy via policy gradient, and you can of course use all the actor-critic tricks and so on here. Then repeat step three a few more times, so you can take many policy gradient steps, resampling trajectories each time, but generally without generating any more real data or retraining your model. Once you've improved your policy enough that you're happy with it, you run it to collect more data, append that to your dataset, and use the larger dataset to train a better model.

This algorithm gets rid of the backpropagation issue we discussed before, but it still has some problems, and in the end it's not actually the model-based RL method most people would want to use. What might be the problem with this procedure? Take a moment to think about it; you can pause the video and ponder it on your own, and when you're ready to continue, I'll tell you what's wrong with it.

OK, so the issue really has to do with making long model-based rollouts. To understand this issue, let's think back to something we discussed earlier in the course: imitation learning. When we talked about imitation learning, we learned that if you train a policy with supervised learning and then run that policy, it might make a small mistake, because every learned model makes at least small mistakes. The problem is that when your learned policy makes a small mistake, it deviates a little from what was seen in the data, and in that slightly unfamiliar situation it makes a bigger mistake, and these mistakes compound. We learned that this issue comes down to something called distributional shift: the distribution over states under which your policy was trained with supervised learning differs from the distribution of states it receives as input when it's actually executed in the environment.

The same exact challenge applies to learned models. If the black curve represents running pi_theta with the true dynamics and the red curve represents running it with a learned model, then when you run with the learned model, the model makes some mistakes, puts itself into slightly different states, and in those slightly different states makes bigger mistakes. So if your model-based rollout is long enough, it eventually differs significantly from the real world, because the mistakes get bigger and bigger. And that's all for the case where you're running the same policy that collected the data. In model-based RL version 2.5, of course, you're going to change the policy, to make it better with respect to your model, which exacerbates the issue even further: now you're running with a learned dynamics model that differs from the real dynamics and with a modified policy that differs from the policy that collected the data, so the distributional shift is even worse.

How quickly does the error accumulate? Even in the best case, where you run the same exact policy, just as in the behavioral cloning discussion, it accumulates as epsilon T squared, and I'll leave it as an exercise for you to show that this bound holds and is tight; the logic is extremely similar to what we had for behavioral cloning. The takeaway for now is that errors build up very quickly as the horizon of your model-based rollout increases, which means that making long model-based rollouts is very costly in terms of accumulated error. Another way to say this: the longer your rollout is, the more likely it is that the conclusion you draw from it, meaning its reward, will differ from what you would actually get if you were to roll out in the real world. So perhaps what we want to do is avoid long rollouts; perhaps we want to devise model-based RL methods that get away with only ever using short rollouts. Can we do something like this? Long rollouts are bad because they have huge accumulating error, so what if we just reduce the horizon? Say our task has a horizon of 1000, and we limit our rollouts to only 50 steps. This has much lower error, but the problem is, of course, that an MDP with a horizon of 1000 doesn't look the same as the same MDP with a horizon of 50.
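To make the two gradient estimators from earlier in this section concrete, here is a tiny NumPy sketch of my own (not code from the course), comparing them on a one-step toy problem: maximize E over x ~ N(theta, 1) of r(x) = x squared. The true gradient is 2*theta; both estimators recover it from samples, but the pathwise one needs dr/dx while the likelihood ratio one only needs log-probabilities.

```python
import numpy as np

rng = np.random.default_rng(0)
theta = 1.0
n = 100_000

# Objective: J(theta) = E_{x ~ N(theta, 1)}[x^2].  True gradient: dJ/dtheta = 2*theta.

# Likelihood ratio (REINFORCE) estimator: E[r(x) * d/dtheta log p(x | theta)].
# For a unit-variance Gaussian, d/dtheta log p(x | theta) = x - theta.
x = rng.normal(theta, 1.0, size=n)
grad_score = np.mean(x**2 * (x - theta))

# Pathwise (reparameterized) estimator: write x = theta + eps with eps ~ N(0, 1),
# so dx/dtheta = 1 and the chain rule gives E[dr/dx] = E[2 * x].
eps = rng.normal(0.0, 1.0, size=n)
grad_pathwise = np.mean(2.0 * (theta + eps))

true_grad = 2.0 * theta
assert abs(grad_score - true_grad) < 0.1      # higher-variance estimator
assert abs(grad_pathwise - true_grad) < 0.05  # lower-variance estimator
```

In the multi-step case, the pathwise route is where the product of Jacobians appears, while the likelihood ratio estimator only ever needs log-probabilities of the sampled actions, which is exactly why it can be used with a learned model that merely generates samples.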
There may be something that happens in those later time steps that you would never see in the earlier ones. If you're controlling a robot that's supposed to cook a meal in the kitchen, maybe it takes 30 minutes to cook the meal, and if you only make model-based rollouts that are five minutes long, that's hardly enough time for the robot to put the pot on the stove. So this isn't really good enough: you're essentially just changing the problem.

Here's a little trick we can use: what if we only ever make short model-based rollouts, but we still use a smaller number of long real-world rollouts? Say these black trajectories represent real-world rollouts that run to the full length of the MDP, collected relatively infrequently. When we make our model-based rollouts, we won't start them at the beginning; instead, we'll sample some states from these real-world rollouts, maybe uniformly at random over the whole trajectory, and from each one make a little short model-based rollout. This has some interesting trade-offs. We get much lower error, because our model-based rollouts are now very short, and we get to see all the time steps: you'll sample some states from very late in the trajectory and make your model-based rollout from there, so you see later time steps as well. But here's the problem: what kind of policy does the state distribution of these model-based rollouts correspond to? The answer is, it's complicated. If your policy is changing as you make these short model-based rollouts branched from the real-world rollouts, you use a different policy to roll into a state than you use to roll out of it: you got to those orange dots using the policy that collected the data, and when you run your model from there, you switch to the new policy that you're improving. That's actually a little bit problematic, because the state distribution you get is neither the state distribution of your latest policy nor the state distribution of the policy that collected the data; it's a mix of the two. That's not necessarily fatal, and certainly if you make small changes to your policy, all the logic we discussed in the advanced policy gradients lecture still applies. But usually the point of using model-based RL is to improve your policy more between data collection episodes, so that you can be more data efficient, and in that case this gets to be a problem: if the whole point is to change the policy a lot, that state distribution mismatch is going to hurt us if we use on-policy methods like policy gradient algorithms. We can do this, but it turns out that to make it work really well, it's typically better to use off-policy algorithms like Q-learning or Q-function actor-critic methods. It is possible, and people have devised policy-gradient-based strategies that employ this kind of idea, but then you just can't change the policy as much between data collection rounds.

OK, so model-based RL with short rollouts is something we could call model-based RL version 3.0, and this is actually getting much closer to the kinds of methods people use in practice. The way these methods work is: just like before, they collect some data and use it to train a model; then they pick some states from the dataset they collected in the real world and use the model to make short rollouts from those states. These can be very short, as short as one time step in practical algorithms, and even when they're longer, they're on the order of 10 time steps, much shorter than the full horizon of the problem. Then, typically, these methods use both the real data and the model-based data to improve the policy with some kind of off-policy RL method, which might involve Q-learning or actor-critic methods. I've written here that they improve the policy, but in reality they typically have both a policy and a Q-function, and they typically generate a lot more data from the model than they have from the real MDP. They do this a few times, then run the policy in the real MDP to collect more data to append to the dataset, retrain the model, and repeat the process. There are often a lot of delicate design decisions that go into these methods: how much to improve the policy between data collections, how much data to collect, how much data to generate from the model, and so on. In the next portion of the lecture we'll talk about specific designs for these algorithms and get a sense of the overall system architecture for these kinds of methods.
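The branched short-rollout scheme can be sketched in a few lines. This is a toy illustration under my own assumptions: a 1-D random walk stands in for the real environment, `model_step` is a hypothetical stand-in for a learned dynamics model, and `policy` is a trivial placeholder; none of this is the course's actual code.

```python
import numpy as np

rng = np.random.default_rng(0)

def real_rollout(horizon):
    """Stand-in for one full-length real-world rollout (a 1-D random walk)."""
    states = [0.0]
    for _ in range(horizon):
        states.append(states[-1] + rng.normal())
    return np.array(states)

def model_step(state, action):
    """Stand-in for one step of a learned dynamics model (with small error)."""
    return state + action + 0.01 * rng.normal()

def policy(state):
    """Stand-in for the current (improved) policy being trained."""
    return -0.1 * state

def branched_rollouts(real_states, n_branches, k):
    """Sample branch points uniformly from real states, then roll the model
    forward for only k steps from each, never from the initial state."""
    transitions = []
    starts = rng.choice(real_states, size=n_branches)
    for s in starts:
        for _ in range(k):
            a = policy(s)
            s_next = model_step(s, a)
            transitions.append((s, a, s_next))
            s = s_next
    return transitions

real_states = real_rollout(horizon=1000)   # long, collected infrequently
model_data = branched_rollouts(real_states, n_branches=200, k=10)

# Short branches keep compounding model error small, yet the model still
# generates far more transitions than the single real rollout provided.
assert len(model_data) == 200 * 10
```

Note the mixed distribution the lecture warns about: branch points come from the data-collection policy's trajectory, while the k model steps follow the new policy.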
CS_285_Deep_RL_2023
CS_285_Lecture_16_Part_4_Offline_Reinforcement_Learning_2.txt
in the last part of today's lecture i'm going to conclude my discussion of offline rl with a brief summary some discussion of applications and some discussion of open questions so the first question that i'll talk about here as part of the summary is maybe something that some of you already have on your mind which is well i talked about lots of algorithms which offline are algorithms should you actually use um here is a rough back of the envelope kind of rule of thumb of course this is not the final word on anything and your mileage may vary but you know if i were to try to approach some new offline rl problem here is the decision tree that i would use if you want to train only offline meaning that you're not going to do online fine tuning conservative queue learning is a good choice because it has just one hyper parameter and it's well understood and widely tested and there has been extensive uh verification in many different papers showing that conservative q-learning does work decently well in pure offline mode implicit queue learning is also a good choice it's a bit more flexible because it also works well for both offline and online but it has more hyper parameters if you want to only train offline and then fine-tune online then advantage weighted actual critic is a good choice it's widely used and well tested in exactly this regime conservative queue learning is actually not a good choice because conservative queue learning while it works very well offline it doesn't fine-tune very well because it tends to be too conservative implicit queue learning is a good choice for offline training followed by online fine-tuning empirically that seems to work pretty well and actually seems to perform better than advantage weighted actual critic although it hasn't been around for as long and is quite kind of not as widely validated if you have a good way to train models in your domain then you can opt for a model based offline rl method now this is rather domain dependent 
so basically depending on the particular dynamics that you have it may be easy to train a good model or it may be very hard but if you're pretty confident that you can train a good model combo is a good choice it's one of the best performing current offline model based rl methods it has similar properties to cql but it benefits from models so you can think of combo as basically cql but with models but it's not always easy to train a good model in your domain so you need to first check that you can actually get good models trajectory transformer can be a good choice because it has very powerful and effective models the downsides are that it's extremely computationally expensive to train and evaluate and because it's not learning a policy there are still some limitations on horizon so if you have a very long horizon a method that is more dyna-like that benefits from dynamic programming may still be better so this is the rule of thumb that i would suggest now offline rl is a very rapidly evolving field and it could be that by next year some of these recommendations will change maybe new methods will come out or something better will be understood about current methods but this is roughly what it looks like as of when i recorded this lecture which is in late 2021 now next what i want to talk about is a little discussion of applications and a little discussion of why offline rl can be a very powerful framework for getting reinforcement learning to really work in the real world now oftentimes you'll do reinforcement learning with simulation in which case you basically don't have to worry about this if you're blessed enough to have a good simulator doing online rl is perfectly fine but if you want to actually do reinforcement learning directly in the real world if you want to use online rl this is what your process might look like step one is you might instrument the task so that you can run rl so you probably need some safety mechanism you know whether you're
doing robotics or algorithmic trading you need something to make sure that your exploration policy doesn't do crazy stuff you might need to put some work into autonomous collection so especially in robotics maybe you try a task then you need to try it again so you need to reset between trials you need to take care to design rewards for offline rl you can just label the rewards in the data set you can for example crowdsource it but for online rl you really need an automated reward function which means you need to write some code or train some model to do this then you would wait a long time for rl to run and this can be a rather manual process because you might need some kind of safety monitoring and then you would change something about your algorithm in some small way to improve it and then do this all over again so the iteration is very slow because each time you change something you have to rerun the whole process and when you're done you throw it all in the garbage to start over for the next task so if you trained a robot to make a cup of coffee and now you want it to make a cup of tea typically you would throw this all out and start all over with offline rl you would collect your initial data set which could come from a wide range of different sources it could be human data scripted controllers it could come from some baseline policy or even a combination of all of the above you might still need to design a reward function but you could also have humans just label the reward because you only need the reward on your training data then you would train your policy with offline rl and then you might change the algorithm in some small way but if you change the algorithm you don't need to recollect your data so this process becomes a lot more lightweight you might choose to collect more data and add it to a growing data set but again you don't need to recollect the data from scratch so anything you collect you add to your data so you
append it you aggregate it and then just keep reusing it now for full disclosure you will periodically need to run your policy online mostly to see how well it's doing but that's a lot less onerous than doing training online and then if you have another project that you want to do in the future in a similar domain you can keep your data set around and reuse it again so if you really need to do real world rl training if you don't have a simulator the offline process can be a lot more practical and i'll illustrate this with a few examples from some of my own research with colleagues at google and also some folks at uc berkeley so this is kind of a fun video research portion of the lecture this is not really key material that you have to know it's more just some examples and some fun videos to hopefully keep you entertained so as i mentioned in the lecture on monday in 2018 we had this large project on real world reinforcement learning with q learning for robotic grasping and more recently in 2021 we extended this system this is also some work that was done at google to handle multiple tasks and the multitask part is not that important but just to give you an idea of what was involved there are 12 different tasks several thousand different objects and months of data collection so this is a really big manual effort to get lots of data collected with lots of robots but once we did that we had a hypothesis the particular hypothesis is not that important for this lecture but just to give you a sense for the process the hypothesis we had was could we learn these 12 tasks without actually using rewards at all just by using goal conditioned reinforcement learning so the idea here is instead of giving the robot ground truth reward functions for which task it's doing we just give it a goal image and we assign rewards automatically based on how similar the final state it reaches is to that goal image okay that's just a
hypothesis we had it's a robotics-centric hypothesis it's not really about offline rl per se but then what we did is instead of collecting all new data we just reused the same data that we already had for these 12 different tasks but trained a policy with goals instead of ground truth reward functions and we could actually evaluate our hypothesis without any new data collection whatsoever so this is the goal conditioned policy the goal is shown in the lower right hand corner and you can see that the robot does kind of a decent job these grasping tasks are fairly simple so here the goal image just has it holding an object and it figures out that means it has to go and pick it up but we can also do some rearrangement tasks so that's going to come next so in these rearrangement tasks the goal image has the carrot lying in the plate and then the robot figures out that means it needs to pick up the carrot and move it to the plate so here there's no hand-designed reward function at all the task is defined entirely using a goal image there's just an automated reward function for reaching the goal the method is very similar to conservative q-learning just adapted to goal reaching and one fun thing you can do with this is you can actually use it as an unsupervised pre-training objective so kind of in the same way that you might pre-train a language model in nlp and then fine-tune it to a task you can pre-train this goal conditioned thing on a large data set and then fine-tune it with a task reward and that leads to some pretty substantial improvements so that's kind of nice you can verify a new hypothesis in this case about goal-conditioned rl without collecting any new data and you get to test it directly in the real world here's another robotics example so in 2020 gregory kahn who was a phd student here at berkeley at the time collected a data set of about 40 hours of off-road navigation using a small ground
robot in early 2020. in late 2020 dhruv shah another phd student used the same data to build a goal conditioned navigation system that could do things like deliver mail or deliver a pizza and he didn't need to collect any new data to do this he could just reuse the same data with offline rl and in early 2021 dhruv could use the same data set to train a policy that would learn to search for particular goals in an environment the techniques used in this work were a little different than the algorithms that i covered in this lecture but the basic principle that offline rl lets you test out these hypotheses very quickly in the real world without additional real world data collection in my opinion illustrates one of the benefits of using it for rapidly testing out new algorithmic ideas while sticking to real data without having to rely entirely on simulation all right now let me talk about some takeaways some conclusions and also maybe some future directions so the dream in offline rl is you could collect a data set using any policy or mixture of policies and then you could run offline rl on this dataset to learn a policy and then just deploy it directly in the real world for medical diagnosis for algorithmic trading for logistics driving what have you and then there are current rl algorithms and there's still a gap there so here are a few things and this is partly for you guys to think about project ideas and also to think about open problems one of the open problems is workflows so if you're doing supervised learning you have a training validation test split so you have pretty good confidence that if you train your policy on a training set and it does well according to a validation set that it will probably do well in the real world so in supervised learning you typically don't even need to deploy your policy in the real environment you can get a pretty good sense for how well you expect it to do
just from your validation set or test set what's the equivalent of that in offline rl these days in offline rl if you want to understand how well your policy is doing in the real world you would actually deploy it and run it so the training is offline but the evaluation is still online and that can be costly or even dangerous there is some work on this some of my students have a paper on this called a workflow for offline model-free robotic reinforcement learning but there's still a big gap in understanding there's still a lot of theory that's missing there's still a lot of basic understanding of how we should structure our offline rl workflows without requiring online evaluation that needs a lot of work classic techniques like off policy evaluation ope also get at this point but ope methods themselves require hyperparameter tuning which in turn also often requires online evaluation so it's a big open problem statistical guarantees are a big problem in offline rl so there are a lot of bounds and results involving distributional shift but they tend to be pretty loose and incomplete and then of course scalable methods in large scale applications so in principle offline rl can be applied to a wide range of settings in practice it still hasn't been applied that widely and i think better understanding the real limitations and constraints of real-world applications is really important to push us in the right direction so i've talked about some examples in robotics but there are a lot of things outside of robotics these things could be applied to and a lot of open questions as to what goes wrong when we do that
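as a toy illustration of the goal-image reward labeling idea used in the robotics examples above — where rewards are assigned by how similar the reached state is to a goal image — here is a minimal sketch; the flat feature vectors, the euclidean distance, and the threshold are all my own assumptions, not the actual system:

```python
# toy sketch of goal-conditioned reward relabeling: label each state in a
# logged trajectory by whether it is close enough to a goal -- the distance
# metric and threshold here are invented for illustration
def relabel_with_goal(trajectory, goal, threshold=0.1):
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return [1.0 if dist(state, goal) < threshold else 0.0 for state in trajectory]

traj = [[0.0, 0.0], [0.5, 0.5], [1.0, 1.0]]      # stand-ins for image features
print(relabel_with_goal(traj, goal=[1.0, 1.0]))  # [0.0, 0.0, 1.0]
```

the appeal is that the same logged data can be relabeled for any goal after the fact, so no new data collection is needed to define a new task.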
CS_285_Deep_RL_2023
CS_285_Lecture_7_Part_3.txt
all right in the next portion of today's lecture we're going to discuss how this generic form of fitted q iteration that we covered can be instantiated as different kinds of practical deep reinforcement learning algorithms so first let's talk a little bit more about what it means for fitted q iteration to be an off policy algorithm so just to remind everybody off policy means that you do not need samples from the latest policy in order to keep running your rl algorithm typically what that means is that you can take many gradient steps on the same set of samples or reuse samples from previous iterations so you don't have to throw out your old samples you can keep using them which in practice gives you more data to train on so intuitively the main reason that fitted q iteration allows us to get away with using off policy data is that the one place where the policy is actually used is in evaluating the q function rather than stepping through the simulator so as our policy changes what really changes is this max remember the way that we got this max was by taking the argmax which is our policy the policy is an argmax policy and then plugging it back into the q value to get the actual value for the policy so inside of that max you can kind of unpack it and pretend that it's actually q phi of s i prime comma argmax of q phi and that argmax is basically our policy so this is the only place where the policy shows up and conveniently enough it shows up as an argument to the q function which means that as our policy changes as our action a i prime changes we do not need to generate new rollouts you can almost think of this as a kind of model where the q function allows you to simulate what kind of values you would get if you were to take different actions and then of course you take the best action if you want to most improve your behavior so this max approximates the value of pi prime our greedy policy at s i prime and that's why we don't need new samples we're basically
using our q function to simulate the value of new actions so given a state and an action the transition is actually independent of pi right if s i and a i are fixed no matter how much we change pi s i prime is not going to change because pi only influences a i and here a i is fixed so one way that you can think of fitted q iteration kind of structurally is that you have this big bucket of different transitions and what you'll do is you'll back up the values along each of those transitions and each of those backups will improve your q value but you don't actually really care so much about which specific transitions they are so long as they cover the space of all possible transitions quite well so you could imagine you have this data set of transitions and you're just plugging away on this data set running fitted q iteration improving your q function each time you go around the loop now what exactly is it that fitted q iteration is optimizing well this step the step where you take the max improves your policy right so in the tabular case this would literally be your policy improvement and your step three is minimizing the error of fit so if you have a tabular update you would just directly write those yi's into your table but since you have a neural network you have to actually perform some optimization to minimize an error against those yi's and you might not drive the error perfectly to zero so you could think of fitted q iteration as optimizing an error the error being the bellman error the difference between q phi s a and those target values y and that is kind of the closest thing to an actual optimization objective but of course that error itself doesn't really reflect the goodness of your policy it's just the accuracy with which you're able to copy your target values if the error is zero then you know that q phi s a is equal to r s a plus gamma max a prime q phi s prime a prime and this is an optimal q function corresponding to the optimal policy pi prime where the
policy is recovered by the argmax rule so this you can show maximizes reward but if the error is not zero then you can't really say much about the performance of this policy so what we know about fitted q iteration is that in the tabular case your error will be zero which means it'll recover q star if your error is not zero then most guarantees are lost when we leave the tabular case all right now let's discuss a few special cases of fitted q iteration which correspond to very popular algorithms in the literature so far the generic form of fitted q iteration that we talked about has these three steps collect a data set evaluate your target values train your neural network parameters to fit those target values and then alternate these last two steps k times and then after k times go out and collect more data you can instantiate a special case of this algorithm with particular choices for those hyperparameters which actually corresponds to an online algorithm so in the online algorithm in step one you take exactly one action a i and observe one transition s i a i s i prime r i then in step two you compute one target value for the transition that you just took very much analogous to how you calculate the advantage value in online actor critic for the one transition that you just took and then in step three you take one gradient descent step on the error between your q values and the target value that you just computed so the equation that i have here looks a little complicated but i basically just applied the chain rule to that objective in step three so applying the chain rule you get dq/dphi at s i a i times the error q phi s i a i minus y i and the error in those parentheses that q phi s i a i minus y i is sometimes referred to as the temporal difference error so this is the basic online q learning algorithm also sometimes called watkins q learning this is kind of the classic q learning algorithm that we
learn about in textbooks and it is an off policy algorithm so you do not have to take the action a i using your latest greedy policy so what policy should you use your final policy will be the greedy policy if q learning converges and has error zero then we know the greedy policy is the optimal policy but while learning is progressing using the greedy policy may not be such a good idea here's a question for you to think about why might we not want to use the greedy policy the argmax policy in step one while running online q learning or online q iteration take a moment to think about this question part of why we might not want to do this is that this argmax policy is deterministic and if our initial q function is quite bad it's not going to be random but it's going to be arbitrary then it will essentially commit our argmax policy to take the same action every time it enters a particular state and if that action is not a very good action we might be stuck taking that bad action essentially in perpetuity and we might never discover that better actions exist so in practice when we run fitted q iteration or q learning algorithms it's highly desirable to modify the policy that we use in step one to not just be the argmax policy but to inject some additional randomness to produce better exploration and there are a number of choices that we can make in practice to facilitate this so one common choice is called epsilon greedy this is one of the simplest exploration rules that we can use with discrete actions and that's something that you will all implement in homework 3.
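the contrast between batch fitted q iteration and the online watkins-style update can be sketched on a tiny invented tabular mdp — here a q-table stands in for the neural network, so the fitting step in the batch version is exact:

```python
gamma = 0.9
# tiny invented 2-state, 2-action mdp given as a fixed bucket of transitions (s, a, r, s')
dataset = [(0, 0, 0.0, 0), (0, 1, 0.0, 1), (1, 0, 1.0, 1), (1, 1, 0.0, 0)]

def max_q(Q, s):
    return max(Q[(s, 0)], Q[(s, 1)])

# batch fitted q iteration: repeatedly compute targets y = r + gamma * max_a' Q(s', a')
# for every transition in the dataset, then fit Q to them (exact in the tabular case)
Q = {(s, a): 0.0 for s in (0, 1) for a in (0, 1)}
for _ in range(200):
    targets = [(s, a, r + gamma * max_q(Q, sn)) for (s, a, r, sn) in dataset]
    for s, a, y in targets:
        Q[(s, a)] = y
print(round(Q[(1, 0)], 3))  # converges to 1 / (1 - gamma) = 10.0

# online (watkins) q learning: one transition, one target, one gradient step
# phi <- phi - alpha * dQ/dphi * (Q(s,a) - y); tabular dQ/dphi is just an indicator
alpha = 0.5
Q2 = {(s, a): 0.0 for s in (0, 1) for a in (0, 1)}
s, a, r, sn = 1, 0, 1.0, 1
y = r + gamma * max_q(Q2, sn)   # step 2: compute one target value
td_error = Q2[(s, a)] - y       # the temporal difference error
Q2[(s, a)] -= alpha * td_error  # step 3: one gradient step
print(Q2[(1, 0)])  # moved halfway toward the target: 0.5
```

note how both variants back up values along logged transitions without ever querying the mdp for new rollouts, which is exactly the off policy property discussed above.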
epsilon greedy simply says that with probability 1 minus epsilon you will take the greedy action and then with probability epsilon you will take one of the other actions uniformly at random so the probability of every action is one minus epsilon if it is the argmax and epsilon divided by the number of actions minus one otherwise this is called epsilon greedy why might this be a good idea well if we choose epsilon to be some small number it means that most of the time we take the action that we think is best and that's usually a good idea because if we've got it right then we'll go to some good region and collect some good data but we always have a small but non-zero probability of taking some other action which will ensure that if our q function is bad eventually we'll just randomly do something better it's a very simple exploration rule and it's very commonly used in practice a very common practical choice is to actually vary the value of epsilon over the course of training and that makes a lot of sense because you expect your q function to be pretty bad initially and at that point you might want to use a larger epsilon and then as learning progresses your q function gets better and then you can reduce epsilon another exploration rule that you could use is to select your actions in proportion to some positive transformation of your q values and a particularly popular positive transformation is exponentiation so if you take actions in proportion to the exponential of your q values what will happen is that the best actions will be the most frequent and actions that are almost as good as the best action will also be taken quite frequently because they'll have similarly high probabilities but if some action has an extremely low q value then it will almost never be taken in some cases this kind of exploration rule can be preferred over epsilon greedy because with epsilon greedy the action that happens to be the max gets much higher
probability and if there are two actions that are about equally good the second best one has a much lower probability whereas with this exponentiation rule if you really have two equally good actions you'll take them about an equal number of times the second reason it might be better is if we have a really bad action and you've already learned that it's just a really bad action you probably don't want to waste your time exploring it whereas epsilon greedy won't make use of that so this is sometimes also called the boltzmann exploration rule or the softmax exploration rule we'll discuss more sophisticated ways to do exploration in much more detail in another lecture in the second half of the course but these simple rules are hopefully going to be enough to implement basic versions of fitted q iteration and q learning algorithms all right so to review what we've covered so far we've discussed value-based methods which don't learn a policy explicitly but just learn a value function or q function we've discussed how if you have a value function you can recover a policy by using the argmax and how we can devise this fitted q iteration method which does not require knowing the transition dynamics so it's a true model free method and we can instantiate it in various ways as a batch mode off policy method or an online q learning method depending on the choice of those hyperparameters the number of steps we take to gather data the number of gradient updates and so on
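the two exploration rules described above can be sketched directly from their verbal definitions — a minimal illustration, not a tuned implementation:

```python
import math
import random

def epsilon_greedy(q_values, eps=0.1):
    """with probability 1 - eps take the argmax action, otherwise pick one of
    the remaining actions uniformly at random"""
    best = max(range(len(q_values)), key=lambda a: q_values[a])
    if random.random() < 1 - eps:
        return best
    return random.choice([a for a in range(len(q_values)) if a != best])

def boltzmann_probs(q_values, temperature=1.0):
    """boltzmann (softmax) exploration: action probabilities proportional to exp(Q)"""
    m = max(q_values)  # subtract the max before exponentiating, for numerical stability
    exps = [math.exp((q - m) / temperature) for q in q_values]
    z = sum(exps)
    return [e / z for e in exps]

random.seed(0)
counts = [0, 0, 0]
for _ in range(10000):
    counts[epsilon_greedy([1.0, 2.0, 0.5], eps=0.1)] += 1
print(counts[1] / 10000)  # roughly 0.9: the greedy action dominates

# two equally good actions get equal probability; a terrible one is almost never taken
probs = boltzmann_probs([2.0, 2.0, -8.0])
print(round(probs[0], 3), probs[2] < 1e-4)
```

this makes the contrast in the lecture concrete: epsilon greedy splits the leftover epsilon evenly regardless of how bad the other actions are, while the boltzmann rule spends almost no probability on actions it already knows are very bad.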
CS_285_Deep_RL_2023
CS_285_Lecture_18_Variational_Inference_Part_2.txt
all right so now let's get into the main technical part of today's lecture which is to discuss the variational inference framework so this framework is basically concerned with this question how do we calculate p of z given x i but in the process of deriving this we'll also see why the expected log likelihood is actually a reasonable objective so let's think about making some crude approximations so p of z given x i is in general a pretty complex distribution right because a single point x might come from many different places in the space of z's but let's make a really simplistic approximation let's say that we're going to approximate p of z given x i with some distribution q i of z which is a gaussian or in general some very simple tractable parametrized distribution class and notice that i'm calling it q subscript i of z so it's a distribution over z and it is going to be specific to this point x i all right so instead of having this complicated thing we're going to try to approximate it with just a single peak and i chose this picture intentionally just to make it clear that this approximation is not necessarily going to be a good one but we'll try to find the best possible fit within this simple distribution class the gaussian distribution class it turns out that if you approximate p of z given x i with any q i of z you can actually construct a lower bound on the log probability of x i and this is going to be a very powerful idea because if you can construct a lower bound on the log probability of x i then maximizing that bound will push up on the log probability of x i now in general maximizing lower bounds does not increase the quantity you care about but if the bound is sufficiently tight then it does and we'll see later that under some conditions the bound is in fact tight but for now let's not worry about tightness let's just see how we can get a bound by using q i of z so we can write out the log probability of x i as the log
of the integral over all values of z of p of x i given z times p of z and the usual trick if you want to bring in some quantity that is not currently in the equation is to multiply by that quantity divided by itself so q i of z over q i of z is equal to one so we can multiply that in whenever we want so now we can notice that we have some quantity multiplied by q i of z so we can write that quantity as an expected value under q i of z so we basically take the numerator in that ratio that turns into an expectation and then everything else is left behind so we have log of the expected value under q i of z of p of x i given z times p of z divided by q i of z all right so far we haven't made any approximation this is just a little bit of algebraic manipulation next what we're going to do is use jensen's inequality jensen's inequality is a way to relate convex or concave functions applied to linear combinations so what i have written out on the slide is a special case of jensen's inequality for the logarithm which is a concave function but in general this inequality would hold true for any concave function if you have a convex function then it holds true but the inequality goes the other way so for the case of logarithms jensen's inequality says that the logarithm of an expected value over some variable y is greater than or equal to the expected value of the logarithm of that variable if this seems a little counterintuitive to you something you could consider is trying to draw a picture so the logarithm is a concave function so it kind of goes like that and if you imagine the logarithm of a sum of functions because the rate at which the logarithm increases always decreases the logarithm of that sum of functions will be greater than or equal to the sum of the logarithms because of that rate of decrease so if this is a little unclear to you try drawing out a picture of this you
know of multiple different logarithm functions getting summed together okay so we can directly apply jensen's inequality to the result from the previous slide and the way that we do that is by noting that we have the log of the expected value of some quantity so applying jensen's inequality simply pushes the expected value outside of the log and replaces the equality with a greater than or equal to sign so that means that our previous result is lower bounded by the expected value under q i of z of the logarithm of the ratio p of x i given z times p of z divided by q i of z but now of course we know that logarithms of products can be written out as sums of logarithms so we can equivalently write this out as the expected value under q i of z of log p of x i given z plus log p of z minus the expected value under q i of z of log q i of z and the reason i wrote it out like this is because i want to collect all the terms that depend on p in the first part and all the terms that depend on q in the second part now the nice thing about this equation here is that everything is tractable and this is true for any q i of z so we could just pick some random q i of z and we have a lower bound although not all q i's will of course lead to the best lower bounds but we can pick some q i of z sample from it to evaluate the first expectation and then the second expectation you'll notice is actually the equation for the entropy of q i of z which for many simple distributions like gaussians has a closed form solution okay so we can replace that second term with just the entropy of q i so maximizing this could maximize log p of x i although as i mentioned before you need to show that the bound is not too loose now let me make a brief aside just to recap some of the information theoretic quantities that we're encountering here much of this we already saw we already talked about entropy for example in the exploration lectures but i just want to briefly recap it because this
stuff is really important for getting a good intuition for what variational inference is actually doing so the entropy of some distribution is the negative expected value of the log probability of that distribution and here is an equation for the entropy of a bernoulli distribution so the probability of a binary event and you can see that the entropy goes up as the probability of that event approaches 0.5 and it goes down to zero if the event is guaranteed to happen so probability equals one or guaranteed not to happen probability equals zero so one intuition for the entropy is how random is the random variable so this makes a lot of sense in the case of the bernoulli variable here when it's 0.5 the variable is in some sense the most random the most unpredictable and it has the highest entropy and the second intuition is how large is the log probability in expectation under itself so if you mostly see low log probabilities in expectation under yourself that means that there are many many places to which you assign roughly equal probabilities if you mostly see very high log probabilities that means that you really concentrate around a few points that you assign high probability to so the top example has high entropy because log probabilities are generally lower everywhere and the bottom one has lower entropy because the log probability is very high in just a few places and that's a low entropy distribution all right so then we could ask the question for the variational lower bound that we saw on the previous slide what do we expect it to actually do so it's the expected value of some quantity plus the entropy of q i so if this graph is showing p of x i comma z so the thing inside the first expectation you could imagine that the expected value of this function would be maximized just by putting a lot of probability mass on the tallest peak right so this is what we would get if we just maximize the first part
we just want to find a distribution over z inside of which we have the largest values of p of x i comma z but we're also trying to maximize the entropy of this distribution so we don't want to make it too skinny if we're also trying to maximize the entropy then we want to spread it out as much as possible while still remaining in regions where p of x i comma z is large so because we have that second term we get something that kind of spreads out and the intuition is that because of this the q i of z that maximizes this quantity will kind of cover the p of x i comma z distribution now the other concept i want to recap here is kl divergence the kl divergence between two distributions q and p is given by the expected value under the first distribution of the log of the ratio of the probability of the first distribution divided by the second and again by exploiting the fact that the logarithm of a product is a sum of logarithms we can write this out as the expected value under q of log q of x minus the expected value under q of log p of x which we could rewrite in a manner that looks a lot more like the equation on the previous slide if we just trade places and recognize that the expected value under q of log q is just the negative entropy so the kl divergence is the negative of the expected value under q of log p of x minus the entropy of q one intuition for what the kl divergence measures is how different two distributions are you'll notice that when q and p are equal the kl divergence is zero it's easy to see why it's zero because you have q over p equals one log of one is zero and the second intuition is how small is the expected log probability of one distribution under the other minus entropy now why entropy well for the same reason that we saw before because if you don't have the entropy term then q will just want to sit at the most likely point under p but if we have the entropy then it wants to cover it so the variational approximation says that log p of x i is greater
than or equal to the expected value under q i of z of log p of x i given z plus log p of z plus the entropy of q and we call this the evidence lower bound or variational lower bound which i'm going to denote as l i of p comma q i and as we saw in the previous slide it's also the negative kl divergence so what makes for a good qi of z well the intuition is that a good qi of z should approximate p of z given x i because then you get the tightest bound approximate in what sense well you can compare them in terms of kl divergence so you can say well the kl divergence measures the difference between two distributions and when the kl divergence is zero then the two distributions are exactly equal so let's pick qi to minimize the kl divergence between qi of z and the posterior p of z given x i why well because if we write out the kl divergence using the definition from before we'll see that it is equal to the expected value under qi of z of log q i of z divided by p of z given x i now p of z given x i can be written as p of x i comma z divided by p of x i and since we're dividing by that we flip the ratio and we get this equation here and again applying the property that the log of a product is the sum of logs we get this equation on the side so we have the first term the negative expected value under q i of z of log p of x i given z plus log p of z then we have the entropy term and then we have this log p of x i term so substituting in the equation for entropy we get this so that means that the kl divergence between these two quantities is equal to the negative variational lower bound plus the log probability of x i notice however that the log probability of x i doesn't actually depend on q i so we can rearrange the terms a little bit and we can express log p of x i as being equal to the kl divergence between q i and p plus the evidence lower bound and this is not an inequality this is all exact now we know that kl divergences are always positive so this is
actually another way to derive the evidence lower bound right because you know that log p of x i is equal to some positive quantity plus l which means that l is a lower bound on log p of x i but furthermore this equation shows that if we drive the kl divergence to zero then the evidence lower bound is actually equal to log p of x i which means that minimizing that kl divergence is an effective way to tighten the bound so this justifies why we want to choose qi of z to approximate p of z given x i and it also justifies why we want to use the expected log likelihood because when we use the expected log likelihood that's like taking the expectation under p of z given x which is the point at which the bound is tightest okay so here's the equation we had before we used it to derive this bound and the kl divergence we can write out like this this is what we saw in the previous slide so that means that the kl divergence is given by the negative variational lower bound plus this log p of x i term now log p of x i doesn't depend on q i so if we want to optimize q i of z to minimize the kl divergence we can equivalently maximize the same evidence lower bound so that's pretty appealing maximizing the same evidence lower bound with respect to q i minimizes the kl divergence and tightens our bound maximizing it with respect to p increases the log likelihood so now this immediately suggests a practical learning algorithm take your variational lower bound your evidence lower bound maximize it with respect to q i to get the tightest bound and then maximize it with respect to p to improve your model to improve your log likelihood and then alternate these two steps okay so just to recap this our goal is to maximize log p theta of x i but that's intractable so instead we're going to maximize the evidence lower bound so for each x i we'll calculate the gradient with respect to the model parameters by sampling z's from our q i of z and then using those samples to estimate the
gradient so this is a single sample version so you sample one z from q i of z and then assuming your prior p of z doesn't depend on theta then the gradient is just the gradient of log p theta of x i given that z and then you improve your theta and then you update qi to maximize the same evidence lower bound so this is the stochastic gradient descent version of variational inference just to state this again so that we're all on the same page in order to estimate the gradient grad theta of l i of p comma q i sample a z from the approximate posterior q i calculate the gradient grad theta of l i of p comma q i as grad theta of log p theta of x i given z and then take that gradient step and then update your q i to maximize l i of p comma q i all right so everything here is straightforward except for the last line how do you actually improve your qi well let's say that qi is given by a gaussian distribution with mean mu i and variance sigma i well you could actually calculate the gradient of the evidence lower bound with respect to the mean and variance and then do gradient ascent on mu i and sigma i so what's the problem with this well how many parameters do we have remember that we have a separate qi for every data point x i and they each have a different mu and a different sigma this is not a big deal if you have a few thousand data points but it becomes a big problem if you have millions of data points and in the deep learning setting typically we would have a very large number of data points so the total number of parameters if we have this gaussian distribution is the number of parameters in the model theta plus the dimensionality of the mean plus the dimensionality of the variance times the number of data points n okay so that's maybe a little too large n might be a pretty large number and this might be intractable but remember our intuition is that qi of z needs to somehow approximate the posterior p of z given x i so what if instead of learning a separate q i of z a separate mean
and variance for every data point what if you train a single neural network model that approximates all of the q i of z so instead of having a separate q i of z for every data point x i we have one network q of z given x which aims to approximate the posterior so then in our generative model we would have one neural network that maps from z to x and another neural network that maps from x to z and that second neural network gives us a posterior estimate with a mean and variance that are given by neural network functions of x so that's the idea behind amortized variational inference and that's what i'll talk about in the next part of the lecture
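to make these identities concrete here is a small numeric sketch (my own toy example, not something from the lecture) using a model where everything is gaussian so the marginal the posterior the elbo and the kl divergence are all available in closed form the specific prior p of z equals n of 0 1 and likelihood p of x given z equals n of z 1 are assumptions chosen purely for illustration

```python
import numpy as np

x = 1.0  # a single observed datapoint

# toy model chosen so everything is tractable:
# prior p(z) = N(0, 1), likelihood p(x|z) = N(z, 1)
# hence marginal p(x) = N(0, 2) and true posterior p(z|x) = N(x/2, 1/2)
log_px = -0.5 * np.log(2 * np.pi * 2.0) - x ** 2 / 4.0

def elbo(m, s):
    # E_q[log p(x|z)] + E_q[log p(z)] + H(q), in closed form for q = N(m, s^2)
    likelihood = -0.5 * np.log(2 * np.pi) - 0.5 * ((x - m) ** 2 + s ** 2)
    prior = -0.5 * np.log(2 * np.pi) - 0.5 * (m ** 2 + s ** 2)
    entropy = 0.5 * np.log(2 * np.pi * np.e * s ** 2)
    return likelihood + prior + entropy

def kl_q_posterior(m, s):
    # KL(N(m, s^2) || N(x/2, 1/2)): the gap between the bound and log p(x)
    mp, vp = x / 2.0, 0.5
    return 0.5 * np.log(vp / s ** 2) + (s ** 2 + (m - mp) ** 2) / (2 * vp) - 0.5

# the exact identity from the lecture: log p(x) = KL(q || p(z|x)) + ELBO
m, s = 0.3, 0.8
gap = log_px - (kl_q_posterior(m, s) + elbo(m, s))  # ~0 for any m, s

# maximizing the elbo over q drives the kl to zero and tightens the bound
for _ in range(500):
    m += 0.1 * (x - 2 * m)      # analytic elbo gradient w.r.t. m
    s += 0.1 * (1 / s - 2 * s)  # analytic elbo gradient w.r.t. s
```

after the updates q matches the true posterior (mean x over 2 variance one half) and the elbo equals log p of x which is exactly the tightness argument made above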
CS_285_Deep_RL_2023
CS_285_Lecture_10_Part_2.txt
okay so in the next part of the lecture i'm going to discuss some algorithms for open loop planning that make kind of minimal assumptions about the dynamics model so they require you to know the dynamics model but otherwise they don't make any assumption about it being continuous or discrete stochastic or deterministic or about whether it's even differentiable so for now we're going to concentrate on the open loop planning problem where you are given a state and you have to produce a sequence of actions that will maximize your reward when you start from that state so this won't be a very good idea for taking that math test but it can be a pretty good idea for many other practical problems okay so we'll start with a class of methods that can broadly be considered stochastic optimization methods these are sometimes called black box optimization methods so what we're going to do is we're going to first abstract away the temporal structure in optimal control or planning so these methods are black box meaning that to them the optimization problem you're solving is a black box so they don't care about the fact that you have different time steps or that you have a trajectory distribution over time all they care about is that you have some maximization or minimization problem so the way we'll abstract this away is we'll say that we're just solving some problem over some variables a1 through a capital t with an objective that i'm going to denote as j so j quantifies the expected reward but these algorithms don't really care about that and in fact they don't even care that the actions are a sequence so we're going to represent the entire sequence of actions as capital a so you can think of capital a as basically just the concatenation of a1 through a capital t so this is just an arbitrary unconstrained optimization problem a very simple way to address this problem which might at first seem really silly is to basically just guess and check so you just
pick a set of action sequences from some distribution maybe even uniformly at random so just pick a 1 through a capital n and then you choose your action sequence by picking the a i that is the arg max with respect to the index of j of a i basically choose the best action sequence instead of maximizing over let's say a large or continuous valued sequence of actions you just maximize over a discrete index from one to n that's very very easy to do just check each of those action sequences and take the one that gets the largest reward hence guess and check this is sometimes referred to as the random shooting method shooting because you can think of this procedure where you pick this action sequence as sort of randomly shooting into the environment you say well if i just try this open loop sequence of actions what will i get this might seem like a really bad way to do control but in practice for low dimensional systems and small horizons this can actually work very well and it has some pretty appealing advantages take a moment to think about what these advantages might be so one major advantage of this approach is that it is very very simple to implement coding this up takes just a few minutes it's also often quite efficient on modern hardware because you know later on when we talk about learned models when your model is represented by something like a neural network it can actually be quite nice to be able to evaluate the value of multiple different action sequences in parallel you can essentially treat a 1 through a n as a kind of mini batch and evaluate the returns through your neural network model all simultaneously and then the arg max is a max reduction so there are typically very very fast ways to implement these methods on modern gpus with modern deep learning frameworks what's the disadvantage of this approach well you might not pick very good actions because you're essentially relying on getting lucky you're relying on one of those randomly sampled
action sequences being very good so one way that we can dramatically improve this random shooting method while still retaining many of its benefits is something called the cross entropy method or cem the cross-entropy method is quite a good choice if you want a black box optimization algorithm for these kinds of control problems in low to moderate dimensions with low to moderate time horizons so our original recipe with random shooting was to choose a sequence of actions from some distribution like the uniform distribution and then pick the arg max so what we're going to do in the cross-entropy method is we're going to be a bit smarter about selecting this distribution instead of sampling the actions completely at random from let's say the uniform distribution over all valid actions we'll instead select this distribution to focus in on the regions where we think the good actions might lie and this will be an iterative process so the way that we're going to do better intuitively will be like this let's say that we generated four samples and here's what those samples look like so the horizontal axis is a the vertical axis is j of a what we're going to do is we're going to fit a new distribution to the region where the better samples seem to be located and then we'll generate more samples from that new distribution refit the distribution again and repeat and in doing this repeatedly we're going to hopefully arrive at a much better solution because each time we generate more samples we're focusing in on the region where the good samples seem to lie so one way that we can instantiate this idea for example with continuous valued actions is we can iteratively repeat the following steps sample your actions from some distribution p of a where initially p of a might just be the uniform distribution then evaluate the return of each of those action sequences and then pick something called the elites so the elites are a subset of your n samples so you pick m of those
samples where m is less than n that have the highest value one common choice is to pick the ten percent of your samples with the best value and then we're going to refit the distribution p of a just to the elites so for example if you choose your distribution to be a gaussian distribution you would simply fit the gaussian find the max likelihood fit to the best m samples among the n samples that you generated and then you repeat the process then you generate n more samples from that fitted distribution evaluate their return take the elites and find a new distribution that's hopefully better the cross-entropy method has a number of very appealing guarantees if you choose a large enough initial distribution and you generate enough samples the cross-entropy method will in general actually find the global optimum of course for complex problems that number of samples and number of iterations might be prohibitively large but in practice cem can work pretty well and it has a number of advantages so because you're evaluating the return of all of your action sequences in parallel this is very friendly to modern deep learning frameworks that can accommodate a mini batch the method does not require your model to be differentiable with respect to the actions and it can actually be extended to things like discrete actions by using other distribution classes typically you would use a gaussian distribution for continuous actions although other classes can also be used and you can make cem quite a bit fancier so if you want to check out more sophisticated methods in this category check out cma-es which stands for covariance matrix adaptation evolution strategy cma-es is a kind of extension to cem which includes kind of momentum style terms where if you're going to take many iterations then cma-es can produce better solutions with smaller population sizes okay so what are the benefits of these methods to summarize well they're very fast if they're parallelized they
are generally extremely simple to implement what's the problem the problem is that they typically have a pretty harsh dimensionality limit you're really relying on this random sampling procedure to get you good coverage over potential actions and while refitting your distribution like we did in cem can help the situation it still poses a major challenge and these methods only produce open-loop plans so the dimensionality limit if you want kind of a rule of thumb obviously depends on the details of your problem but typically if you have more than about 30 to 60 dimensions chances are these methods are going to struggle you can sometimes get away with longer sequences so if you have let's say a 10 dimensional problem and you have 15 time steps you technically have 150 dimensions but the successive time steps are strongly correlated with each other so that might still work but generally 30 to 60 dimensions works really well if you're doing planning rule of thumb 10 dimensions 15 time steps is about what you're going to be able to do much more than that and you'll start running into problems okay next we're going to talk about another way that we can do planning which actually does consider closed loop feedback which is monte carlo tree search monte carlo tree search can accommodate both discrete and continuous states although it's a little bit more commonly used for discrete states and it's particularly popular for board games so things like alphago actually used a variant of monte carlo tree search in general monte carlo tree search is a very good choice for kind of games of chance so poker for example is a common application for monte carlo tree search so here's how we can think about monte carlo tree search let's say that you want to play an atari game let's say you want to play this game called seaquest where you have to shoot these torpedoes at some fish i don't know why you want to shoot torpedoes at fish that seems
ecologically irresponsible but that's what the game requires you to do and the game requires selecting from a discrete set of actions to control your little submarine what you could imagine if you have access to a model is you could take your starting state and see what happens if you take action a1 equals zero and see what happens if you take action a1 equals one maybe you have just two actions and each of those actions will put you in a different state it might put you in a different state each time you take that action so the true dynamics might be stochastic but that's okay we would just take the action multiple times and see what happens and then maybe for each of the possible states you land in you can try every possible value for action a2 and so on and so on now if you can actually do this you will eventually find the optimal action to take but this is unfortunately an exponentially expensive process so without some additional tricks this general unrestricted unconstrained tree search requires an exponential number of expansions at every layer which means that if you want to control your system for capital t time steps you need a number of steps that's exponential in capital t and that's no good we don't want that so how can we approximate the value of some state without expanding the full tree well one thing you could imagine doing is when you land at some node let's say you pick a depth let's say you say my depth is going to be three i'll expand the tree to depth three and after three steps what i'll do is i'll just run some baseline policy maybe it's even just a random policy now the value that i get when i run that baseline policy is not really going to be exactly the true value of having taken those actions but if i've expanded enough actions and especially if i have something like a discount factor that rollout with the random policy might still give me a reasonable idea of how good those states are essentially if i landed
in a really bad state the random policy would probably do badly if i land in a really good state let's say i landed in a state where i'm about to win the game pretty much any move will probably give me a decent value so it's not an optimal strategy it's not going to give you exactly the right value but it might be pretty good if you expand the tree enough and use a sensible rollout policy in fact in practice monte carlo tree search is actually a very good algorithm for these kinds of discrete stochastic settings where you really want to account for the closed loop case okay so this might at first seem a little weird a little contradictory but it turns out the basic idea actually works very well okay now we can't of course search all the paths so the question we usually have to answer with monte carlo tree search is which path do we search first so we start at the root we have action a1 equals 0 and a1 equals 1 which one do we start with well let's say that we picked a1 equals 0. we don't know anything about these actions initially so we have to make that choice arbitrarily or randomly we picked a1 equals 0 and we got a value of plus 10. now plus 10 here refers to the full value of that rollout so it refers to what we got from taking action a1 equals 0 and then running our baseline policy now at this point we don't know whether plus 10 is good or bad it's just a number so we have to take the other action we don't know anything about the other actions so we can't really trade off which of these two paths is more promising to explore let's say we take the other action and we get a return of plus 15.
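this depth-limited random-rollout evaluation can be sketched like this using a toy one-dimensional chain of my own invention (the states rewards and rollout parameters are all illustrative assumptions, not from the lecture) just to show that even a completely random baseline policy ranks states sensibly on average

```python
import numpy as np

rng = np.random.default_rng(0)
GOAL = 5  # toy 1d chain: states 0..10, reward is higher near the goal

def step(s, a):
    s2 = int(np.clip(s + a, 0, 10))
    return s2, -abs(s2 - GOAL)

def rollout_value(s, depth=10, gamma=0.9):
    # run the baseline (random) policy for a fixed depth and sum discounted rewards
    total, discount = 0.0, 1.0
    for _ in range(depth):
        s, r = step(s, rng.choice([-1, 1]))
        total += discount * r
        discount *= gamma
    return total

def estimate_value(s, n_rollouts=2000):
    # average many noisy rollouts to get a sample-based value estimate
    return np.mean([rollout_value(s) for _ in range(n_rollouts)])

v_near, v_far = estimate_value(GOAL), estimate_value(0)
```

the estimate for the state at the goal comes out clearly higher than for the state far away even though the rollout policy is completely random which is the whole point of using rollouts as crude value estimates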
and remember this plus 15 refers to the total reward you get from taking the action a1 equals one and then running your baseline policy now we have to remember something very important here we are planning in a stochastic system which means that if we were to take a1 equals 0 again and run that random policy again we might not get plus 10 again we might get something else we might get something else because our policy is random and because the outcome of a1 equals zero is also random so these values should be treated as sample based estimates for the real value of taking that action okay so at this point if we look at these two outcomes a reasonable conclusion i might draw is that action one is a bit better than action zero we don't know that for sure we might be wrong but we took both actions once and one of them ended up being better so if you really had to choose which direction to explore maybe we should explore the one that produced the better return so the intuition is you choose the nodes with the best return but you prefer rarely visited nodes so if some node was not visited before at all you really need to try it because you have no way of knowing whether its return is better or worse but at this point we probably want to explore the right subtree okay so let's try to formalize this into an actual algorithm here's a generic sketch of an mcts method first we're going to take our current tree and we're going to find a leaf sl using some tree policy the term tree policy doesn't refer to an actual policy that you run in the world it refers to a strategy for looking at your tree and selecting which leaf node to expand step two you expand that leaf node using your default policy now default policy here is actually referring to a real policy like that random policy that i had before how do you expand the leaf well remember that the nodes in the tree correspond to action sequences the same action sequence actually executed multiple times might actually
lead to different states so the way that you evaluate a leaf is you start in the initial state s1 and then you take all the actions on the path from the root to that leaf and then follow the default policy so you don't just teleport to some arbitrary state you could do the teleporting thing too and that would also give you a well-defined algorithm but typically you would actually execute the same sequence of actions again to actually give them the chance to lead to a different random outcome because remember you want the actions that are best in expectation and then step three update all the values in the tree between s1 and sl and then repeat this process and then once you're done you take the best action from the root s1 and typically in mcts you would actually rerun the whole planning process at each time step you would take the best action from the root and then the world would randomly present you with a different state and then you would do all the planning all over again okay so our tree policy initially can't do anything smart we haven't expanded any of the actions so you just have to try action zero and then you evaluate it using the default policy and then you update all the values in the tree between s1 and sl where sl here is s2 and notice here that we collected a return which is 10 and we also record how many times we've visited that node which is one now we have to expand the other action we can't really say anything meaningful about it without expanding it so we go and expand action one and there we get a return of 12 and n equals 1 because we visited it only once so a very common choice for the tree policy is the uct tree policy which basically follows the following recipe if some state has not been fully expanded choose a new action in that state otherwise choose a child of that state with the best score and the score will be defined shortly and then you apply this recursively so essentially the tree
policy starts at the root s1 if some action at the root is not expanded then you expand it otherwise you choose a child with the best score and then recurse so here you know for any reasonable value of the score we would have chosen s2 because they both have been visited the same number of times but the value at s2 is larger so we would go and expand a new action for s2 and maybe we get a return of 10. now the n value at that leaf is one but remember step three in mcts is to propagate all the values back up to the root so we also update s2 to give it n equals 2 and q equals 22. so essentially every time we update a node we add the new value to its old value and we add 1 to its count that way we can always recover the average value at some node by dividing q by n by the way one thing i might mention is when you see these indices s1 s2 s3 these numbers are just referring to the time step they are at so remember these nodes do not uniquely index states if you take the same action sequence two times you might get a different state but i'm still referring to it as s2 or s3 because it's the state at time step 2 or 3.
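the bookkeeping just described (add the rollout value to q add one to n all the way up to the root and recover averages as q over n) can be sketched as follows this is my own minimal sketch not code from any particular mcts library and the two backed-up values 12 and 10 are chosen to reproduce the arithmetic of the running example

```python
class Node:
    """one node of the search tree, holding total return q and visit count n."""

    def __init__(self, parent=None):
        self.parent = parent
        self.children = {}  # maps an action to a child Node
        self.q = 0.0        # sum of all rollout values that passed through here
        self.n = 0          # number of those rollouts

    def backup(self, value):
        # step three of mcts: propagate a rollout value from this leaf to the root
        node = self
        while node is not None:
            node.q += value
            node.n += 1
            node = node.parent

    def average(self):
        # the average value is always recoverable as q over n
        return self.q / self.n

root = Node()
child = root.children[1] = Node(parent=root)
child.backup(12.0)  # first rollout through this node
child.backup(10.0)  # second rollout through the same node
```

after the two backups the node (and the root above it) holds q equals 22 and n equals 2 so its average value is 11 matching the numbers in the walkthrough above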
so the actual states are stochastic all right so now we have a choice to make we have two possible choices from the root one leads to a node with q equals 10 and n equals one the other leads to q equals 22 and n equals two so action one still leads to a higher average value which is 11 but action zero leads to a node that has been visited less often so here the choice of score is actually highly non-trivial there are many possible choices for the score in mcts but one very common choice which is this uct rule is to basically choose a node based on its average value so q over n plus some bonus for rarely visited nodes so one commonly used bonus is written here it's two times some constant times the square root of two times the natural log of the count at the current node divided by the count at the target node so the denominator basically refers to how many times each child of the current node has been visited the intuition is that the less often a child has been visited the more you want to take the corresponding action so here the node for action zero has a denominator of one the node for action one has a denominator of two so a1 equals zero has a bigger bonus the numerator is two times the natural log of the number of times the current node has been visited and that's meant to basically account for the fact that if you visited some node a very small number of times then you want to prioritize novelty more if you visited a node a very large number of times then you probably have a more confident estimate of the values okay so in this case we would probably actually choose to visit the node a1 equals zero because even though its average value is lower its n value is also lower so it will get a larger bonus which might exceed the difference in value if the constant c is large enough and then when we visit that node we have to just expand an arbitrary new action because we don't know the value of anything else and maybe here we
record q equals 12 and n equals one we again propagate it back up to the root so add the n to the n at the parent node add the q to the q at the parent node and now we have two nodes with equal values they're both 11. so we have to break the tie somehow arbitrarily and we go over here we get a q equals eight and n equals one and now the value at this node becomes 30 and the denominator is three so now take a moment to think about which way mcts would go yep it has to go to the right because the node for the right the one corresponding to action a1 equals one has both a larger value and a lower visitation count so that's what we're going to do and so on so then this process will recurse for some number of steps you have to choose your step count based on your computational budget and once your computational budget is exhausted then you take the action leading to the node with the best average return okay if you want to learn more about mcts i would highly recommend this paper a survey of monte carlo tree search methods which provides kind of a high level overview in general mcts methods are very difficult to analyze theoretically and they actually have surprisingly few guarantees but they do work very well in practice and if you have some kind of game of chance where there's stochasticity these kinds of algorithms tend to be a very good choice and of course there are many ways you can make mcts more clever for instance by actually learning your default policy using the best policy you've got you could use value functions to evaluate terminal nodes and so on if you take this to the extreme you get something similar to for example what alphago actually did which was a combination of mcts and reinforcement learning of value functions and default policies
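the uct score just described can be written down in a few lines here i plug in the two children of the root from the earlier step of the example (q equals 10 with n equals 1 versus q equals 22 with n equals 2 and the root visited 3 times) the constant values tried below are arbitrary assumptions just to show the tradeoff

```python
import math

def uct_score(q, n, n_parent, c):
    # average value plus an exploration bonus that shrinks with the child's count
    return q / n + 2 * c * math.sqrt(2 * math.log(n_parent) / n)

# children of the root from the example: action 0 -> (q=10, n=1), action 1 -> (q=22, n=2)
children = {0: (10.0, 1), 1: (22.0, 2)}
n_parent = 3

def best_action(c):
    return max(children, key=lambda a: uct_score(*children[a], n_parent, c))
```

with a large constant (say c equals 2) the rarely visited action 0 wins because of its bigger bonus while with a small constant (say c equals 0.5) the higher-average action 1 wins which is exactly the tradeoff described above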
CS_285_Deep_RL_2023
CS_285_Lecture_10_Part_5.txt
all right in the last portion of today's lecture i'm going to go over a little case study that demonstrates the power of optimal control algorithms in the case where we know the true dynamics and the point that i want to make with this is partly to motivate why we want to study model based rl and partly to show that these things really do work and they really do things that are pretty impressive compared to even the best model free rl methods so the case study that i'm going to talk about is this paper called synthesis and stabilization of complex behaviors through online trajectory optimization by yuval tassa tom erez and emanuel todorov what this paper describes is a fairly simple almost textbook algorithm but implemented quite well that uses iterative lqr as an inner loop in something called model predictive control so model predictive control is a way to use a model based planner in settings where your state might be unpredictable and the main idea in model predictive control is very simple every time step you observe your current state xt then you use your favorite planning or control method to figure out a plan a sequence of actions ut ut plus one etc all the way out to u capital t and then you execute only the first action of that plan discard the other actions observe the next state that occurs and re-plan all over again so essentially model predictive control is a fancy way of saying replan on every single time step and that's what this paper does most of the contributions in this paper are actually in the particular implementation of iterative lqr and if you want to know all the bells and whistles all the tips and tricks for implementing iterative lqr effectively i would encourage you to check it out what i want to show you today is the video of the result from that paper so i'm going to play the video and narrate a little bit so here what they're showing is a simple acrobot system using iterative lqr they're going to show a
swimmer and a little hopper as well as a more complex humanoid so here's the acrobot two degrees of freedom and only one control dimension a very simple cost and first they just run it passively so no controls at all and then they turn on the controller and you can see that in real time the controller actually discovers how to swing up the acrobot so there's no learning at all although you do have to know the dynamics but the impressive thing is that this behavior is actually discovered completely automatically and completely in real time and because they're using model predictive control when they apply perturbations to the system the robot successfully recovers from those perturbations here they have a little swimming snake and its goal is to get to the green dot while avoiding the red dot and again kind of the interesting thing here is that this undulating swimming gait is actually discovered by the controller automatically just through optimization without needing to know or learn anything in advance except of course for the system dynamics here's the hopper system so here what they're going to do is they're going to first apply some perturbation forces to it just to show off their physics engine and then having applied those perturbation forces they'll show what happens when you actually ask the hopper to stand up so it figures out on its own how to jump up and stand and then when they apply perturbations to it it reacts to those perturbations and manages to stay upright and here they show that it can react to even very extreme perturbations here they're showing what happens if they give it the wrong dynamics so because they're planning every step they can actually get somewhat sensible results even when the dynamics are misspecified so here the true robot has half the mass that the controller thinks it does and here it has double the mass the controller thinks it does so you can see with double the mass it kind of struggles a little
bit but still does something seemingly reasonable here they're going to be controlling a 3d humanoid the cost function here is i should say pretty heavily engineered so it's still a fairly short horizon controller it's not planning far into the future and the cost function therefore needs to be quite detailed so here they turn it on and it figures out how to stand up it's a little bit slower than real time so they had to speed up this video to play it back but it manages to do some rudimentary stepping balancing and pretty intelligent reactions even in the face of fairly extreme perturbations okay if you're interested in additional readings on these topics here's what i would recommend this monograph by jacobson and mayne called differential dynamic programming is the original description of the ddp algorithm from which ilqr is inspired this is the mpc paper for which i just presented the video this is a paper that provides a probabilistic formulation and trust region alternatives to the deterministic line search for lqr so if you want to know how to handle these kind of lqr things in stochastic settings this could be something to check out and in next week's lectures we will extend this to the case where the dynamics are perhaps not known so what's wrong with known dynamics well known dynamics are great if you're controlling some system that is easy to model like the kinematics of a car but if you're trying to get a robot to fold a towel or sort some objects in a factory maybe modeling all those exactly is very difficult or even impossible and in those cases maybe we can learn our models so that's what we'll talk about next
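the replan-every-step loop described above can be sketched in a few lines of code note that this is just an illustrative sketch where `plan` is a hypothetical stand-in a simple proportional-derivative feedback rule rather than the iterative lqr inner loop used in the actual paper and `dynamics` is a toy 1-d double integrator

```python
def mpc_rollout(dynamics, plan, x0, n_steps, horizon):
    """Model predictive control: replan at every time step, execute
    only the first action of each plan, observe the resulting state,
    and replan all over again."""
    x = x0
    trajectory = [x0]
    for _ in range(n_steps):
        actions = plan(x, horizon)  # full plan u_t, ..., u_{t+H}
        u = actions[0]              # execute only the first action
        x = dynamics(x, u)          # observe the next state
        trajectory.append(x)
    return trajectory

# toy 1-d double integrator with time step 0.1
def dynamics(state, u):
    pos, vel = state
    return (pos + 0.1 * vel, vel + 0.1 * u)

# hypothetical "planner": a proportional-derivative rule standing in
# for a real trajectory optimizer such as iterative lqr
def plan(state, horizon):
    pos, vel = state
    return [-2.0 * pos - 2.0 * vel] * horizon

traj = mpc_rollout(dynamics, plan, (1.0, 0.0), n_steps=200, horizon=10)
final_pos, final_vel = traj[-1]
```

because only the first action of each plan is executed, the controller keeps reacting to whatever state actually occurs which is why mpc tolerates perturbations and even somewhat misspecified dynamics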
CS_285_Deep_RL_2023
CS_285_Lecture_14_Part_2.txt
all right so the first algorithm that i'm going to kind of dive more deeply into is going to tackle this question how do we learn without a reward function by proposing and reaching goals and as i mentioned at the beginning this lecture was really intended more to discuss sort of cutting edge research topics and maybe provide a slightly different perspective for thinking about exploration so i won't actually discuss the algorithm in enough detail to implement it but hopefully enough detail for you to kind of understand the main ideas but i will have references to papers at the bottom and if you want to get all the details then i would encourage you to read those papers but you know think of this more as a way to get a perspective on how we can approach this unsupervised exploration problem mathematically less as a specific tutorial about a particular method that you actually should be using all right so the example scenario again that we're dealing with is this setting where you have a robot you put it in your kitchen it's supposed to spend the day practicing various skills and then in the evening when you come home you're going to give it a task and perhaps you will ask it to do the dishes and it should somehow utilize the experience that it acquired to perform that task now one fairly mundane thing that we have to figure out for this before we can even get started is how we're actually going to command goals to the robot once the learning is finished so if you want sort of a real world analogy maybe you can think of it like this maybe you're going to show the agent an image of the situation that you would like it to reach in rl parlance this would amount to giving it the observation or the state that constitutes the goal for the task so what you would like is you would like to somehow have the agent learn something that enables it to accomplish whatever goal you give it and the goal will be
specified by a state if we're talking about images maybe it's an image of the desired outcome this is not necessarily the best way to communicate with autonomous agents but it just allows us to nail down something very concrete the problem will be given a state the agent should reach that state and then the unsupervised learning phase should train up a policy that would allow the agent to reach whatever state you would care to command it now as a technical detail we need some mechanism for comparing different states if those states are very complex like images just like we saw in the exploration lecture on monday we need some notion of similarity between those states because in general in high dimensional or continuous spaces every state will be unique so there are many ways to deal with this problem but the way that we'll deal with it for now is we'll say well let's just train some kind of generative model the particular generative model i'll use as a working example is something called a variational autoencoder which we'll cover a few weeks from now but there are many other choices and we'll just assume that this generative model has some latent variable representation of your image so if your image is x which you can also think of as a state s then your latent variable model will construct some latent variable representation of that state which i'm going to denote as z so z would be sort of a compact vector that describes what's going on in the scene and we'll assume that that vector is at least somewhat well behaved meaning that functionally similar states will lie close together in that latent space but there are many ways to get this effect all right and then of course the main thing that we're concerned with is we would like our agent to basically have this unsupervised training phase where before we even specify any goals that it should accomplish it can sort of imagine its own goals propose those goals to itself attempt to reach
them and as a result acquire a goal reaching policy without any manual supervision without any reward supervision so intuitively what it's going to be doing is it's going to be using this latent space to propose potential z vectors that it could treat as goals attempt to reach those goals and as a result improve its policy okay so let's try to sketch out what such an algorithm might look like we're going to have our variational autoencoder as our generative model so that has a distribution p theta of x given z which is a distribution over images given latent codes you can also think of it as s given z so i'm going to use x here but s means the same thing and then we have our latent variable distribution p of z and when you train a variational autoencoder as we'll learn a few weeks from now you also need an inference network that maps states back to z's so if you have a generative model like this one of the ways you could propose a goal is you could just sample it from the model so you could sample your latent variable from the latent variable prior so sample zg from p of z and then reconstruct the corresponding image by sampling xg from p theta of x given zg so that will give you an imagined image and again you don't have to do this with vaes any kind of generative model would work something that can propose a goal and then you could attempt to reach that goal using a policy so your policy now would be a conditional policy so it'd be a distribution over actions given the current image x and given the goal xg and when you attempt to reach the goal using this policy the policy may or may not succeed so let's say that it reaches some state and we'll call that state x bar ideally we would like x bar to be equal to xg but in general it might not be in fact xg might not even be a real image it may be impossible to reach so you'll get some other image x bar and in the process of running that policy you'll of course collect data which you can use to update your policy
maybe using something like a q-learning algorithm like what you're doing for homework 3. and you can also use that data to update your generative model so if in the course of attempting to reach that goal you saw some other images that you hadn't seen before incorporating that data to update your generative model might give you a better generative model that can propose more interesting goals and then you can repeat this process so this is a basic sketch of an algorithm that utilizes a goal proposal mechanism an unsupervised goal proposal mechanism and a goal conditioned policy and the interaction of these two things leads it to propose goals and then attempt to reach them okay but there's a little bit of a problem with this recipe because the generative model is being trained on the data that you've seen so it's going to generate data that looks very much like the data that you've seen which means that if your agent figures out how to do one very specific thing maybe it figures out how to pick up a mug now it has lots of data of picking up that mug and when it generates additional images additional goals it'll generate lots more data of picking up that same mug and might not bother with other things so this is where we can bring in some ideas related to what we covered in the lecture on monday some of these exploration ideas let's imagine that we have this 2d navigation scenario so the little circles represent states that you visited intuitively what you would like to do is you would like to take this data set and modify it skew it in some way to upweight the rarely seen states very much like the novelty seeking exploration that we discussed on monday and if you can do this if you can upweight the rarely seen states before fitting your generative model then when you fit your generative model it should assign higher probability to the tails of this distribution so that when you propose new goals it will sort of broaden it out and visit more states there on the
fringes and hopefully expand its repertoire of states that it can reach so this is the intuition behind what we want to make such an algorithm really work so how do we do this well the idea is that we're going to modify step four instead of blindly using all the data we've collected to fit our generative model we're going to actually weight that data so that's basically what this step will be so the standard way to fit our generative model is basically maximum likelihood estimation find the generative model that maximizes the expected log probability of the states that you actually reached which i'm denoting here with x bar instead you could imagine having a weighted maximum likelihood learning procedure where you train your generative model to assign high likelihood to the states that you've seen x bar but weighted by some weighting function w of x bar and intuitively you would like that weighting function to upweight those states or those images that have been seen rarely what do we have at our disposal that can tell us how rarely something has been seen well we're using a generative model to propose these goals and a generative model should be able to give us a density score just like when we learned about counts and pseudo counts so what we can do is we can assign a weight based on the probability density that our current model p theta assigns to that state x so we'll set the weight to be p theta of x bar raised to some power alpha where alpha is a negative number so this will essentially be 1 over p theta of x bar to some positive power or equivalently p theta of x bar to some negative power and one of the things we can prove i'm not going to go through the proof for this but the proof is in these papers it's possible to prove that if you use a negative exponent then the entropy of p theta of x will increase meaning that each time you go around this loop you will be proposing broader and broader goals and if your entropy always increases that means that you
eventually converge to the maximum entropy distribution which would be a uniform distribution over possible valid states now a uniform distribution over valid states is not the same as a uniform distribution over x so x might represent an image and totally random images just kind of static might not actually be valid states so what you should be looking for is a uniform distribution over valid states a uniform distribution over valid images okay now looking at this equation one thing that might jump out at you is that this looks an awful lot like what we saw when we had pseudo counts and count based exploration so if you remember count based exploration our bonuses had this form like 1 over n of s or square root of 1 over n of s in general they were of the form n of s raised to some negative power negative one half if you have one over square root or negative one if you have one over n so this looks an awful lot like that by raising the p theta of x bar to some negative power we're actually doing something that greatly resembles this count based exploration except instead of using it as a reward we're using it to train our goal proposal mechanism to propose diverse goals all right so the main change we're going to make is we're going to fit our generative model with this weighting scheme where the weight is the previous density for that state raised to some negative exponent now one question we could ask is well what's the overall objective of this entire procedure it seems like we laid out a recipe but in machine learning we like to think of algorithms as optimizing objectives so what is the objective for this algorithm well i mentioned that the entropy of the goal distribution will increase every step around this loop which means that one of the things we're doing is we're maximizing the entropy of the goal distribution that's good because we want good coverage we want to cover many many different goals so the goals get higher entropy due to this skew fit
procedure what does the rl part do well your policy which you can also write as pi of a given s comma g so it's the probability of action given current state and given goal your policy is trained to reach the goal g which means that as the policy gets better the final state which i'll denote as s here is going to get closer and closer to g so that means that the probability of g given your final state becomes more and more deterministic essentially if your policy is very good you could pose this question given the final state s that the policy reached what is the goal g that it was trying to reach if the policy is very good you could just say well the goal was probably the thing that it actually reached because it's a good policy it's going to reach its goal so that means that the better the policy is the easier it is to predict g from s which means that the entropy of p of g given s is lower so that means that you're also minimizing the conditional entropy of g given s and now when we look at this equation something should jump out at us that if we are maximizing h of g minus h of g given s that means that we are maximizing the mutual information between s and g and maximizing the mutual information between s and g leads to good exploration because we're maximizing the entropy of our goals so we have coverage of all possible goals and effective goal reaching because we're minimizing the entropy of the goal given the state so that's another way that this concept of mutual information leads to an elegant and very simple objective that quantifies exploration performance essentially in this case the mutual information between states and goals quantifies how effectively we can reach the most diverse possible set of goals all right now for a quick robot video this was an actual research paper that we did a few years back and what we did with this kind of objective is we put the robot in front of a door so that hook shaped thing is the
gripper for the robot but we didn't tell it that it needs to open the door it was just supposed to figure that out on its own and in the top row you can see the goals that it's suggesting to itself the actual images that it's generating and in the bottom row you can see the behavior and at zero hours it's not really doing very much it's kind of wiggling around in front of the door 10 hours in it tends to touch the door handle and occasionally gets the door open and after 25 hours it pretty reliably messes with the door and opens it to all different angles and when the system is fully trained then you can give it an image of the door open to a different angle and it will successfully open it to that angle
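the skewed weighting idea from this section setting each sample's weight to p theta of x raised to a negative power alpha can be illustrated with a small numerical sketch note that this is a toy discrete example of my own construction not the actual implementation from the papers so that the skewing effect on the state distribution can be checked directly

```python
import math

def skew_weights(densities, alpha=-0.5):
    """Weighted MLE weights w(x) = p_theta(x)^alpha with alpha < 0,
    so that rarely seen states receive large weight, analogous to
    count-based exploration bonuses of the form n(s)^(-k)."""
    w = [p ** alpha for p in densities]
    total = sum(w)
    return [wi / total for wi in w]

def entropy(probs):
    return -sum(p * math.log(p) for p in probs if p > 0)

# toy "replay buffer": 4 discrete states visited with skewed frequency
p_visit = [0.7, 0.2, 0.08, 0.02]
counts = [700, 200, 80, 20]
data = [s for s, c in enumerate(counts) for _ in range(c)]

# weight every sample by the density the current model assigns to it
weights = skew_weights([p_visit[s] for s in data], alpha=-0.5)

# the reweighted state distribution, proportional to p^(1 + alpha),
# is broader than the raw visitation distribution
skewed = [0.0] * len(counts)
for s, w in zip(data, weights):
    skewed[s] += w
```

with alpha between -1 and 0 the effective training distribution is a tempered version of the visitation distribution so its entropy is higher each round of reweighting which matches the claim that the goal distribution's entropy increases around the loop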
CS_285_Lecture_2_Imitation_Learning_Part_5.txt
all right the last topic we're going to talk about is the dagger algorithm and the dagger algorithm is actually something that you're going to be implementing in your homework and the dagger algorithm aims to provide a more principled solution to the imitation learning distributional shift problem so as a reminder the problem with distributional shift intuitively is that your policy makes at least small mistakes even close to the training data and when it makes small mistakes it finds itself in states that are more unfamiliar and there it makes bigger mistakes and the mistakes compound more precisely the problem can be described as a problem of distributional shift meaning the distribution of states under which the policy is trained p data is systematically different from the distribution of states under which it's tested which is p pi theta and so far a lot of what we talked about are methods that try to change the policy so that p pi theta will stay closer to p data by making fewer mistakes but can we go the other way around can we instead change p data so that p data better covers the states that the policy actually visits okay how can we make p data be equal to p pi theta well of course if we're changing our data set we're introducing some additional assumptions so we're going to be actually collecting more data than just the initial demonstrations and the question then is which data to collect and that's what dagger tries to answer so instead of being clever about p pi theta or about how we train our policy let's be clever about our data collection strategy so the idea in dagger is to actually run the policy in the real world see which states it visits and ask humans to label those states so the goal is to collect data in such a way that the training data comes from p pi theta instead of p data and we're going to do that by actually running our policy so here's the algorithm now we're going to need labels for all those states we're going
to train our policy first on our training data just on our demonstrations to get it started and then we'll run our policy and we'll record the observations that the policy sees and then we'll ask a person to go through all of those observations and label them with the action that they would have taken okay and now we have a labeled version of the policy data set and then we're going to aggregate we're going to take the union of the original data set and this additional labeled data set that we just got and then go back to step one retrain the policy and repeat so every time through this loop we run our policy we collect observations we ask humans to label them with the correct actions for those observations and then we aggregate and it can actually be shown that eventually this algorithm will converge such that eventually the distribution of observations in this data set will approach the distribution of observations that the policy actually sees when it runs the intuition for why that's true of course is that each time the policy runs you collect its observations but then you might label them with actions that are different from the actions it took but that distribution is closer than the initial one so as long as you get closer each step eventually you'll get to a distribution that the policy can actually learn and then you'll stay there forever so then as you collect from it more and more eventually your data set becomes dominated by samples from the correct p pi theta distribution so that's the algorithm it's a very simple algorithm to implement if you can get those labels here's a video of this algorithm in action this is from the original dagger paper this was about 12 years ago where they actually used it to fly a drone through a forest and dagger was used where they actually flew the drone collected the data and then asked humans to label it offline by actually looking at the images and using a little mouse interface to
specify what the action should have been and with a few iterations of dagger they can actually get it to fly pretty reliably through a forest dodging trees now there is of course a problem with this method and that has to do with step three it's sometimes not very natural to ask a human to examine images after the fact and output the correct action when you're driving a car you're not just instantaneously making a decision every time step about which action to choose you are situated in a temporal process you have reaction times all that stuff so sometimes the human labels that you can get offline in this sort of a counterfactual way can be not as natural as what a human might do when they were actually operating the system so step three can be a bit of a problem for dagger and many improvements on dagger seek to alleviate that challenge but the basic version of dagger works like this and that's the version that you will all be implementing in your homework there's really not much more to say about dagger it alleviates the distributional shift problem it actually provably addresses it so you can derive a bound for dagger and that bound is linear in t rather than quadratic but of course that comes at the cost of introducing this much stronger assumption that you can collect the additional data okay so that's basically the list of methods i wanted to cover for how to address the challenges of behavior cloning we can be smart about how we collect and augment our data we can use powerful models that make very few mistakes we can use multitask learning or we can change the data collection procedure and use dagger the last thing i want to mention which is a little bit of a preview of what's going to come next is why is imitation learning not enough by itself why do we even need the rest of the course well humans need to provide data for imitation learning which is sometimes fine but deep learning works best when the data is very plentiful so asking humans to provide
huge amounts of data can be a huge limitation if the algorithm can collect data autonomously then we can be in that regime where deep nets really thrive and data is very plentiful without exorbitant amounts of human effort the other thing is that humans are not good at providing some kinds of actions so humans might be pretty good at specifying whether you should go left or right on a hiking trail or controlling a quadcopter through a remote control but they might not be so good at for example controlling the low-level commands to quadcopter rotors to make it do some really complex aerobatic trick if you want humans to control all the joints in a complex humanoid robot that might be even harder maybe you need to rig up some really complicated harness for them to wear if you want to control a giant robotic spider well good luck finding a human who can operate that and humans can learn things autonomously and just intellectually it seems very appealing to try to develop methods that can allow our machines to do the same as i mentioned in lecture one one of the most exciting things we can get out of learning based control is emergent behaviors behaviors that are better than what humans would have done and in that case it's very desirable to learn autonomously when learning autonomously in principle machines can get unlimited data from their own experience and they can continuously self-improve and get better and better in principle exceeding the performance of humans now in order to start thinking about that we have to introduce some terminology and notation we have to actually define what it is that we want if our goal is no longer just to imitate but we want to do something else well what is it that we want and maybe instead of matching the actions in the expert data set we want to bring about some desired outcome maybe in the tiger example we want to minimize the probability of being eaten by the tiger so we want to minimize the
probability that we will land in a state s prime which is an eaten by tiger state and we can write that down mathematically and in general we can write it as the expected value of some cost in this case the cost is being eaten by a tiger now we already saw costs before when we talked about counting the number of mistakes but in general we can have arbitrary costs on states and actions and those can define arbitrary control tasks like not being eaten by tigers or reaching a desired destination so the new thing that we're going to introduce and that we're going to use in lectures next week is the cost function or sometimes the reward function now the cost function and the reward function are really the same thing they're just negatives of one another and the reason that we see both sometimes is the same kind of a cultural distinction that i alluded to before remember i mentioned that we have the s a notation which comes from the study of dynamic programming and that's where the reward comes from in optimal control it's a bit more common to deal with costs i don't know if there's a cultural commentary here well you know optimal control originated in russia maybe it's a little more common to think about costs in america we are all very optimistic and we think about life as bringing rewards maybe there's something to that but for the purpose of this class don't worry about it c is just the negative of r and to bring this all the way back around to imitation well the cost function that we saw before for imitation can be framed in exactly the same framework we have rewards which are log probabilities we have costs and those are interchangeable you can have the cost be the negative of the reward and you can define a cost for imitation but you can define a more expressive cost for the thing you actually want like reaching your destination or avoiding a car accident and then use those with the more powerful reinforcement learning algorithms that we'll cover in future
weeks
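the cost reward relationship and the imitation cost just described can be written out concretely here's a minimal sketch where the policy's action probabilities are made-up numbers for a single hypothetical state

```python
import math

def cost_to_reward(c):
    """c(s, a) = -r(s, a): cost and reward are just negatives of
    one another and fully interchangeable."""
    return -c

def imitation_cost(policy_probs, expert_action):
    """Imitation framed as a cost: the negative log-probability the
    policy assigns to the expert's action, so the corresponding
    reward is log pi(a_expert | s)."""
    return -math.log(policy_probs[expert_action])

# hypothetical policy over 3 discrete actions in some state
pi = [0.1, 0.7, 0.2]
c = imitation_cost(pi, expert_action=1)
r = cost_to_reward(c)
```

the same expected-cost framework accepts any task-defining cost for example an indicator for landing in an eaten-by-tiger state which is what lets reinforcement learning go beyond matching expert actions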
CS_285_Lecture_19_Control_as_Inference_Part_3.txt
all right so so far we saw how we could frame control as inference in a particular graphical model and then we talked about how we could do exact inference in that graphical model and understand three possible inference problems computing backward messages computing policies which uses those backward messages and computing forward messages which as i've alluded to will be useful later on when we talk about inverse reinforcement learning now all of the inference procedures we've discussed so far have been exact inference but of course in complex high dimensional or continuous state spaces or settings where the dynamics are not known where the transition probabilities are not available to us and we can only sample from them by performing rollouts we need to do approximate inference and that's what i'm going to talk about in the next section i'll actually use the tools that we learned about from last week the tools of variational inference to show how model-free reinforcement learning procedures can be derived from this control as inference framework now in the course of designing these approximate algorithms we're also going to see how we can devise a solution to a particular problem that i raised previously and that's the optimism problem that i mentioned so if you recall from the previous part of the lecture we talked about how the state backward message and the state action backward message their logarithms can be interpreted as being very similar to value functions and q functions and when we write out these equations in log space we derive an algorithm that looks very similar to value iteration except the max over the actions is replaced with a soft max and the bellman backup has a log expected value exponential form now the softmax is not really that much of a problem that's actually where we get this notion of soft optimality so we actually want that but this kind of backup is a little bit problematic the trouble with this backup is that the log of the
expected value of the exponentiated next state values is going to be dominated by the luckiest state the easiest way to see this is to imagine that the action corresponds to buying a lottery ticket so you have a one in a thousand chance of getting an extremely large payoff and a 999 in a thousand chance of getting nothing now the effect of this will be that the expected value is you know 0 times 0.999 plus 1 million times 0.001 so that means that it's just 1 million times 0.001 when you take the exponential of that and then the logarithm the zeros their effect will essentially disappear and the final value will be dominated by that positive outcome and that's really bad news because of course buying the lottery ticket is not a good idea and its expected value is not high but its log expected exponentiated value is high so essentially this kind of backup results in a kind of optimism bias now why does this happen well the inference problem that we're solving is to infer the most likely trajectory given optimality and then marginalizing and conditioning this we get the policy p of a t given s t comma o 1 through capital t the question intuitively this inference problem is asking is given that you obtained high reward what was your action probability now think back to the lottery example if you know that you got a million dollars that makes it more likely that you played the lottery that doesn't mean that playing the lottery is a good idea so fundamentally the tension here is that the inference question we're asking is not quite the question to which we really want the answer what we want to know is what would you have done if you were trying to be optimal not what do i think you did given that you got a million bucks the issue that this really stems from is that the posterior probability of s t plus 1 given s t a t and o 1 through capital t is not the same as its prior probability so when we perform this inference process we're actually altering the dynamics to
agree with our evidence again the intuition here follows very nicely from the lottery example if you know that you got a million bucks and you bought the lottery ticket there's a higher probability that you won the lottery because the evidence that you got a million bucks increases the belief that you actually won the lottery but of course the dynamics are not allowed to change in reality in reality we'd like to figure out what is an approximately optimal thing to do in the actual original dynamics so this question is given that you obtained high reward what was your transition probability but in a sense we don't care about this question your transition probability should remain fixed so let's think about how we can address this optimism problem so what we want is we want the policy but we don't want our process of inferring the policy to allow us to change the dynamics so intuitively what we want is given that you obtained high reward what was your action probability given that your transition probabilities did not change so one of the ways that we could approach this is we could say can we find another distribution q over states and actions that is close to the posterior over states and actions given o 1 through capital t but has the same original dynamics so in this approximate posterior q we want the dynamics to be the same as they were originally unaffected by your knowledge of the reward but we want the action probabilities to change so where have we seen this before where have we seen the notion of approximating one probability distribution with another one that has some constraints so if for a minute we say that x is o 1 through capital t and z is s 1 through t and a 1 through t then this problem is equivalent to saying find q of z to approximate p of z given x basically find an approximate distribution that accurately approximates the posterior over unobserved variables and that is basically the problem that variational inference solves so can we shoehorn this
problem can we find another distribution q s 1 through t a 1 through t that is close to the posterior p but has the dynamics p of s t plus 1 given s t a t can we shoehorn this into the framework of variational inference take a few moments to think about this think about how you could use variational inference to address this maybe pause the video and think about it and then check your answer against what i'm going to tell you on the next slide all right so what we're going to do in order to perform control using variational inference is we'll define a somewhat peculiar distribution class for q we'll define q of s 1 through t and a 1 through t as the product of p of s 1 the product of the transition probabilities p of s t plus 1 given s t a t at every time step and an action distribution q of a t given s t now this definition for the variational distribution is quite peculiar because typically when we use variational inference we learn the entire variational distribution but here we're actually fixing some parts of the variational distribution to be the same as p and only learning the action conditional so we're going to have the same dynamics and the same initial state as p and that's going to be important for combating this optimism bias so the only thing that we learn for this approximate posterior is q of a t given s t we can represent this graphically as follows the real graphical model in which we are trying to do inference is shown here so we have the observed variables o 1 through capital t and the unobserved variables the s's and the a's so we have the initial state the transition probabilities and the optimality variable probabilities the approximation corresponds to this graphical model so remember in variational inference the variational distribution does not contain the observed variables so it makes sense that the o's are removed only the s's and a's remain and we have the same initial state distribution
the same transition probabilities we no longer have the o's but instead we have q of a t given s t and that's the only part that we're going to learn by the way as an aside i should mention that all of these derivations are presented for the case where s 1 is unobserved oftentimes you might actually know s 1 in which case p of s 1 goes away the s 1 node will be shaded everywhere and it will not actually be represented as part of your variational distribution it's very straightforward to derive that it just adds a little bit more clutter to the notation which is why i omit that on these slides and treat s 1 as a latent variable but keep in mind that if you are in a situation where you know the current state and just want to figure out future states and actions then s 1 will be observed but it's pretty easy to extend this to that setting and i would encourage you to do that as an exercise on your own time okay so now to tie this back to the variational inference discussion from last week again we're going to say x our observed variables is just o 1 through t z our latent variables correspond to s 1 through t and a 1 through t so if the first graphical model is p of z given x the second one is q of z and then we're going to write out our variational lower bound in terms of these things and then we will optimize that variational lower bound and we'll see that it actually corresponds very closely to a lot of rl algorithms that we've already learned about okay so here's the variational lower bound that we saw in the lecture last week the log probability of x is greater than or equal to the expected value under q of z of log p of x comma z minus log q of z and this is actually true for any q of z but of course as we learned last week the closer q of z is to the posterior p of z given x the tighter this bound becomes and this last term is just the entropy of q so substituting in our definitions for x and z from the previous slide we can say let q be equal to this
thing and then we can write out log p of o 1 through t the log probability of our evidence as being greater than or equal to the expected value under s 1 through t and a 1 through t distributed according to q of all of the probabilities in our graphical model log p of s 1 plus the sum of the log probabilities of transitions plus the sum of the log probabilities of the optimality variables minus the entropy which is going to be minus log p of s 1 so this s 1 comes from our definition for q minus the log probabilities of the transitions again this comes from our definition for q and then minus the log q of a t given s t so now we can see why we made this particular choice for q we chose q so the initial state probabilities and the transition probabilities very conveniently cancel out which means that our bound now just corresponds to the sum of the log probabilities of the optimality variables minus the log probabilities of the actions under q substituting in the definition for p of o t given s t a t we get this expression the lower bound on our likelihood is just the expected value of the total reward minus log q of a t given s t at every time step and i can move the sum outside the expectation by linearity of expectation and replace the log q term with an entropy and now we can see that this lower bound is exactly equivalent to maximizing the reward and maximizing the action entropy and remember the q has the same initial state distribution and transition probabilities as the original problem which means that this is precisely the expected reward our original reinforcement learning objective plus these additional entropy terms and the additional entropy terms serve to justify why you don't want just the single optimal solution but why you might want some stochastic behavior that also models things that are slightly sub-optimal thinking back to the sub-optimal monkey optimizing this objective will basically give us the sub-optimal monkey so the cool
thing about this is just by applying the variational lower bound we recovered an objective that looks very much like the original reinforcement learning objective but with the addition of these additional entropy terms okay so how can we optimize this variational lower bound so there's our q there's our bound from the last slide take a moment to think about how we could optimize it can we for example employ some of the algorithms that we already learned about from the previous lectures so one of the things we could do is we could employ a dynamic programming approach so similarly to the value iteration style methods we learned about we could solve for the last time step which just has the single reward function and when solving for the last time step we can group the terms so we have the expected value under q of s capital t of the expected value over a capital t of the reward plus the entropy and you could actually show that any time you have a maximization objective which has the form of the expected value under a distribution of some quantity minus the log probability of that distribution the solution always has the form of the exponential of that quantity it's pretty easy to show this by just taking the derivative setting the derivative to zero and solving for q of a capital t given s capital t but in general it's a good rule of thumb that if your objective is the expected value of something minus the log probability of the thing under which you're taking the expected value the solution is always the exponentiation of that quantity so the last time step is always optimized when q of a capital t given s capital t is proportional to the exponential of the last time step reward and in particular if we write out the normalization you can see that the denominator is just the integral over all actions of the exponentiated reward which of course is exactly the exponentiation of the q function minus the value
function of course on the last time step the q function is kind of trivial the q function is just the reward and the value function is just the log of the integral of the exponentiation of the q function which is the normalizing constant right so that's the value function now if i were to then substitute in this expression for q then i know that the difference between r and log q is just the value right because log little q is going to be big q minus v and big q here is r so r minus r plus v and i end up with the expression on the right side so this is somewhat analogous to what we did in lqr we're starting at the back solving for the optimal policy and then substituting the corresponding expression so what this tells us is that for the last time step the contribution of this last time step to the overall objective is v of s capital t where little q of a t given s t is given by this expression and then we can proceed with the recursive case we can say that at any given time step the q of a little t given s t at that time step is the arg max of the expected value under q of s t of the expected value under q of a t given s t of the reward at that time step plus the expected value of the value function at the next time step plus the entropy of q of a t given s t and of course if we do that we can always say that we have this quantity q of s t a t which is r plus the expected next v that's just the regular bellman backup which is not optimistic anymore and we substitute that into this equation and again we get an expression that looks like the expected value under q of a t given s t of some quantity minus log q so we know that again the solution is the exponentiated q value and the normalizer is again the value function so again we have the same expression for q of a t given s t and we can repeat this recursion backwards in time so this gives us a dynamic programming solution and of course we can formalize this as a backward pass and here's a summary of that backward pass from the
last time step until the beginning set your q function to be r plus the expected value of next v so this is the regular bellman backup set your v to be the soft max so this is the soft maximum and just like in the regular value iteration algorithm we would repeat these backups now we have a soft value iteration algorithm where everything is exactly the same except that v is a soft max rather than a hard max and the final policy is the exponential of q minus v okay so to summarize this we have our original model we made a variational approximation our value functions at every step are the log integral of the exponentiated q values our q values are backed up normally like in the regular bellman backup and you can read more about this in this tutorial article from 2018 called reinforcement learning and control as probabilistic inference: tutorial and review but this basically gets us a dynamic programming algorithm that is a soft analog to value iteration now there are many variants of this you could for example construct a discounted variant where you put a gamma in front of the expected value of the next value function and that just corresponds to changing your dynamics to have a one minus gamma probability of death you could also add an explicit temperature so when you perform this value function computation you can add an alpha where you multiply your q value by one over alpha and then multiply the result by alpha at the end and as alpha goes to zero this will approach a hard max of course you can also construct an infinite horizon formulation of this where instead of literally doing dynamic programming from the end of the trajectory to the beginning you actually run an infinite horizon soft value iteration procedure and that's also a perfectly reasonable perfectly correct thing to do for the infinite horizon case it basically works exactly as you would expect exactly according to the procedure outlined on the slide okay so that's the dynamic programming way of doing
control as variational inference in the next part i'm going to talk about how to instantiate this idea as well as some other ideas to design some practical rl algorithms that utilize this variational inference formulation
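the optimism bias from the lottery example above is easy to check numerically. here is a minimal sketch, using the hypothetical payoff numbers from the lecture, comparing the ordinary expected value with the log of the expected exponentiated value that the naive backup computes:

```python
import math

# hypothetical lottery from the lecture: 1-in-1000 chance of $1,000,000
probs = [0.001, 0.999]
payoffs = [1_000_000.0, 0.0]

# ordinary expected value: the quantity we actually care about
expected = sum(p * r for p, r in zip(probs, payoffs))

# naive soft backup: log E[exp(r)], with rewards rescaled so exp() does not overflow
scale = 1e-5
soft = math.log(sum(p * math.exp(r * scale) for p, r in zip(probs, payoffs))) / scale

print(expected)         # 1000.0
print(soft > expected)  # True: the soft value is wildly optimistic
```

the log-expected-exp value is dominated by the single lucky outcome, which is exactly the bias that fixing the dynamics inside the variational distribution q is meant to remove.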
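the soft value iteration procedure summarized above (regular bellman backup for q, soft max for v, policy exp(q minus v)) can be sketched in a few lines for a tabular mdp. the mdp here is a made-up random one just to exercise the code:

```python
import numpy as np

def soft_value_iteration(P, r, gamma=0.95, iters=1000):
    # P[a, s, s2]: probability of landing in s2 from state s under action a
    # r[s, a]: reward; returns soft values V and policy pi(a|s) = exp(Q - V)
    S, A = r.shape
    V = np.zeros(S)
    for _ in range(iters):
        # regular (non-optimistic) Bellman backup for Q
        Q = r + gamma * np.einsum('asn,n->sa', P, V)
        # soft max: V(s) = log sum_a exp(Q(s, a)), computed stably
        m = Q.max(axis=1)
        V = m + np.log(np.exp(Q - m[:, None]).sum(axis=1))
    pi = np.exp(Q - V[:, None])
    return V, pi

# made-up 3-state, 2-action MDP
rng = np.random.default_rng(0)
P = rng.random((2, 3, 3))
P /= P.sum(axis=2, keepdims=True)        # normalize rows into distributions
r = rng.random((3, 2))
V, pi = soft_value_iteration(P, r)
print(np.allclose(pi.sum(axis=1), 1.0))  # True: each row is a distribution
```

dividing q by a temperature alpha before the log-sum-exp (and multiplying the result by alpha) recovers ordinary value iteration as alpha goes to zero, matching the temperature variant mentioned in the lecture.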
CS_285_Deep_RL_2023
CS_285_Lecture_20_Inverse_Reinforcement_Learning_Part_1.txt
all right welcome to lecture 20 of cs285 today we're going to talk about inverse reinforcement learning so so far every time that we've had to take on a reinforcement learning problem we also always assumed that a reward function was provided for us and typically if you were to use these reinforcement learning algorithms you would program a reward function by hand manually what if instead you have a task where the reward function is difficult to specify manually but you have access to data of humans or in general some kind of expert performing that task successfully could you back out their reward function from observing their behavior and then re-optimize that reward function with reinforcement learning algorithms what we're going to learn about today is how we can apply this approximate model of optimality formalized as an inference problem from last time to learn a reward function rather than just directly learning a policy from a known reward and this is called the inverse reinforcement learning problem so the goals for today will be to understand the inverse reinforcement learning problem definition understand how probabilistic models of behavior can be used to derive inverse reinforcement learning algorithms and understand a few practical inverse reinforcement learning methods that we can actually use in high dimensional problems of the sort that we encounter in deep reinforcement learning all right so one of the things that i mentioned in the previous lecture is that optimal control and reinforcement learning could serve as a model of human behavior and there's actually a very long history going back over 100 years of scientists trying to study human motion human decision making and human behavior through the lens of optimal decision making and rationality in fact one of the definitions of rational behavior is that rational behavior can be framed as maximizing a well-defined utility function it turns out that any rational decision-making strategy for 
instance one where if you prefer a over b and you prefer b over c then you prefer a over c any strategy that is rational in a sense can be explained with a well-defined set of scalar valued utilities whereas an irrational strategy such as for instance if you prefer apples over bananas and you prefer bananas over oranges but then you prefer oranges over apples that is irrational and that in fact cannot be explained with a well-defined set of scalar valued utilities so if we want to explain human motion human decision making and so on through the lens of optimality what we could do is we could write down the equations that describe optimal decision making either in the deterministic case as we learned about in the optimal control lecture or in the stochastic case and then we could ask assuming that the human is solving this optimization problem what can we plug in in place of r so that the solution to this optimization problem matches the behavior the human actually exhibited and in fact studies in neuroscience motor control psychology and so on have applied this basic model and although as we discussed last week the classic model of optimality is sometimes a poor fit for human decision making because humans are often not deterministic and not perfectly optimal a soft model of optimality can explain human behavior quite well in many cases and in fact the notion that optimality is a good framework for thinking about human decision making and human motor control has been extremely influential in studies of human behavior and neuroscience all right that's maybe the kind of more intellectual motivation but we could also ask a practical question why should we worry about learning reward functions well one perspective we can take on is the imitation learning perspective so a standard way to go about imitation learning problems as we discussed in the beginning of the course would be to demonstrate a behavior that you want to a robot or whatever your agent is your
autonomous car your e-commerce agent whatever it is and have it simply imitate that behavior through behavioral cloning however when humans imitate other humans we don't actually do it this way if you imagine teaching a robot through imitation learning you would maybe actually teleoperate the robot and move its arms through the motions that you want to perform but when you think of a person imitating somebody it's not like you need someone to hold you and move your body in exactly the way that is needed for you to accomplish the task no that's not what you do you watch somebody and you figure out what is it that they're trying to do and then you attempt to emulate not their direct motions but rather their intentions so standard imitation learning deals with copying the actions performed by the expert without reasoning about the purpose of those actions without reasoning about their outcomes human imitation learning is very different when humans imitate we copy the intent of the expert we might do what they did but differently because we understand why they took the actions they did and what outcome they were seeking and this might result in actually very different actions from the ones the expert took but the same outcome here is a nice video that i think illustrates this point this is a psychology experiment the subject of the psychology experiment is the child in the lower right corner of the frame now put yourself in the place of that child imagine what you would do upon seeing this well you would infer the intentions of the experimenter here and you would not perform the action that the experimenter is performing you would instead perform the action that leads to the desired outcome the outcome that you inferred is the outcome that they are going for so can we figure out how to enable reinforcement learning agents to do things like this there's another perspective we can take to think about why inverse reinforcement
learning is important and that's the more reinforcement learning-centric perspective in many of the reinforcement learning tasks that we want to tackle such as the games that you guys have to work with for homework 3 the objective is fairly natural so if you want to play a game as well as possible it makes sense that your reward function will be the score in the game the score is printed right there on the image so it's not a huge stretch to say that's my reward function but in many other scenarios the reward function is much less obvious imagine for instance an autonomous car navigating down the freeway now this autonomous car has to balance a number of different competing factors it has to reach the destination go at a particular speed it needs to not violate the laws of traffic it needs to not annoy other drivers and all these different factors have to be balanced against each other to drive appropriately safely and in a way that is comfortable to the passengers and writing down a single equation that describes that might be very hard but asking a professional driver to demonstrate it is comparatively much much easier so it's very appealing to think about learning reward functions in these kinds of scenarios okay so inverse reinforcement learning refers to the problem of inferring a reward function from demonstrations such as for instance in this driving scenario where you have a professional driver demonstrate a good driving policy and then you want to figure out what's a good reward function to extract from this to give to your reinforcement learning agent now by itself inverse reinforcement learning as i've stated it is unfortunately a very underspecified problem and the reason for this is that for any given pattern of behavior there are actually infinitely many different reward functions that explain that behavior this is perhaps most obvious if i give you an example let's consider this really simple grid world with 16 states if i have this demonstration and i ask
you what was the reward function of the agent who performed this demonstration what will be your answer now you might object at this point you might say well what the heck is going on here there's just some arrows drawn on a grid in the autonomous driving scenario the semantics of that task are much richer there are other cars stop signs traffic lights but remember the algorithm doesn't have all of those semantics that you have just like in the exploration lecture when we talked about montezuma's revenge exploration is hard because while we have the semantics that allow us to make sense of the world the algorithm lacks those semantics similarly in the case of inverse reinforcement learning to the algorithm these are all just states and actions it has no way to understand that meaningful reward functions have something to do with the laws of traffic and not with the particular gps coordinates at which they are located so i want to show you this example because i want to construct a setting where we intentionally divorce the problem of recovering the reward from any of our own prior semantic knowledge okay so for this grid world think about what the reward function might be take a guess so one very reasonable guess is that the agent gets a big reward for reaching this particular square and a bad reward everywhere else so that would definitely explain why they did what they did but there's another explanation what if they get a big reward for reaching this square if you only observe a trajectory consisting of four steps both of these rewards explain the expert's behavior equally well what if instead they have this reward function a big reward for anything in the lower half and a big penalty for crossing those darker squares that would also explain their behavior indeed their behavior could even be explained by the degenerate reward function that just says you have a reward of negative infinity for taking any action other than the ones in the demonstration so
there are in general infinitely many rewards for which the observed behavior would be optimal in the traditional sense okay so before we talk about how to clear up this ambiguity let's define the inverse reinforcement learning problem more formally to define inverse reinforcement learning more formally we can do it like this on the left side of the slide i'm going to present the formalism for regular forward reinforcement learning on the right side the formalism for inverse reinforcement learning so that you can see them side by side and compare so first what are we given in forward reinforcement learning and inverse reinforcement learning in both cases we are given a state space and an action space sometimes we are given the transition probabilities and sometimes not sometimes we have to infer them from experience in forward reinforcement learning we are given a reward function and our goal is to learn pi star the optimal policy for that reward function in inverse reinforcement learning we are given trajectories tau sampled by running the optimal policy we don't necessarily know what the optimal policy is but we do assume that our sample trajectories came from that policy or some approximation thereof and our goal is to learn the reward function r psi that pi star optimized to produce those taus where psi is a parameter vector that parametrizes the reward now there are many different choices we could make for the reward parametrization in the kind of more classical inverse reinforcement learning literature a very common choice is to use a linear reward function a reward function that is a weighted combination of features you can equivalently write it as an inner product psi transpose times bold f where bold f is a vector of features you could intuitively think of these features as a bunch of things that the agent wants or does not want and then what you're trying to determine is precisely how much do they want or not want each of those things
now these days in the world of deep reinforcement learning we might also want to deal with neural network reward functions reward functions that map states and actions via a deep neural network a non-linear function to a scalar valued reward and that are parametrized by some parameter vector psi which denotes the parameters of that neural network and then once we've recovered the reward function in inverse reinforcement learning typically what we would want to do is use that reward function to learn the corresponding optimal policy pi star all right so first before i talk about the main topic of today's lecture which is going to be inverse reinforcement learning algorithms based on the probabilistic model that we saw in the previous lecture i want to provide a little bit of historical background to discuss some ways that people have thought about solving the inverse reinforcement learning problem prior to the modern age of deep learning so many of the previous algorithms for inverse reinforcement learning were focused around something called feature matching the main algorithms that i'll discuss today are based around the maximum entropy principle and draw on the graphical model that i presented last lecture and this is different from feature matching however i will first describe the feature matching algorithms just to provide some context and just to give you guys a broader overview of the literature so classically when people started thinking about the inverse reinforcement learning problem they approached it like this they said well let's say that we have some features and we're going to learn a linear reward function in those features if the features f are important what if the way that we disambiguate the inverse reinforcement learning problem is by saying let's learn a reward function for which the optimal policy has the same expected values for those features so the features are functions of states and actions and you could say let's let
pi r psi be the optimal policy for our learned reward r psi and then we're going to select psi such that the expected value under pi r psi of our feature vector is equal to its expected value under pi star now that's very reasonable that's just saying that if you saw that the optimal driver driving the car rarely experienced a crash rarely ran red lights and frequently overtook on the left rather than on the right then matching the expected values of those features will probably give you somewhat similar behavior if you were given the right features now unfortunately this formulation is still ambiguous so you can do this fairly easily because you have trajectories sampled from the optimal policy so while you don't know the optimal policy itself you can approximate the right hand side by averaging the feature vectors in your demonstrated trajectories but it's still ambiguous because multiple different psi vectors could still result in the same feature expectations so think back to the example with the grid world that i gave before all of those different reward functions result in the same exact policy which means they would all have the same exact expected values so one way that people thought about disambiguating this further is by using a maximum margin principle so the maximum margin principle for inverse rl is very similar to the maximum margin principle for support vector machines and it states that you should choose psi so as to maximize the margin between the observed policy pi star and all other policies so if the reward is psi transpose f then the expected reward is psi transpose times the expected value of f and you would like to pick psi so psi transpose times the expected value of f meaning the expected reward under pi star is greater than or equal to the expected reward under any other policy plus the largest possible margin and you would choose the margin and the psi so as to maximize this so this is basically saying find
me a weight vector psi so that the expert's policy is better than all other policies by the largest possible margin now this is a little bit of a heuristic because this doesn't necessarily mean that you will recover the expert's weight vector the expert's true reward function but it's a reasonable heuristic it's saying you know if you have two different rewards that have the same feature expectations as the expert pick the one that makes the expert look better than all the other policies so don't pick a reward for which the expert is just a little bit better than the alternatives now the trouble with this formulation still is that if the space of policies is very large and continuous there are likely other policies that are very very similar to the expert's policy in fact there are likely other policies that are almost identical so just maximizing the margin against all of the policies is maybe not such a good idea by itself and perhaps you want to weight this by some similarity between pi star and pi maybe what you want is to maximize the margin more against policies that are more distinct from the expert whereas the margin against other policies that are very similar to the expert could be pretty small now fortunately this is again very similar to the kind of problems that we encounter in support vector machines and much of the literature on this feature matching irl actually borrowed techniques from support vector machines to solve this problem so for those of you that are familiar with svms you'll probably recognize this if you're not familiar with svms don't worry about it too much you don't really need to know this but it's a good side note for you to be aware of the literature so the svm trick basically takes a maximum margin problem like this which is generally difficult to solve and reformulates it as the problem of minimizing the length of the weight vector where the margin is always one it's a little subtle why you can do this
it requires a little bit of lagrangian duality but you can take me at my word that these two problems are equivalent and then it turns out that if you want to incorporate the similarity between policies into the second problem all you do is you replace that one by some measure of divergence between policies so that means that if you have another policy pi that is identical to pi star then it's okay for the left hand side and right hand side to be equal because d will be zero but as the policies become more and more different then you want to increase the margin to those policies so one good choice for d could be the difference in their feature expectations another good choice could be their expected kl divergence now there are still a few issues with this formulation it does lead to some practical inverse reinforcement learning algorithms that we could actually implement and try to use but these inverse reinforcement learning algorithms will have a number of shortcomings one major shortcoming is that maximizing the margin is a bit arbitrary what it basically says is you should find a reward function for which the expert's policy is not just better than the alternatives by a small amount you don't just want to find a reward function for which the expert is tied with some very different policy you want to find the reward function for which the expert's behavior is very clearly the better choice but this doesn't say why you want to do that presumably the reason you'd want to do that is because you're making some assumption about the expert maybe one assumption that you're implicitly making is that the expert intentionally demonstrated the things that make it easy to figure out their reward but the notion of maximizing the margin is a heuristic response to that and the assumption about the expert's behavior is not actually made explicit here the other problem is that this formulation doesn't really give us a clear model of expert sub-optimality it doesn't
explain why the expert might sometimes do things that are not actually optimal now those of you that are familiar with support vector machines might remember that in the case where the classes are not perfectly separable you can do things like adding slack variables to account for some degree of sub-optimality but adding such slack variables in this setting is still largely a heuristic it's not really a clear model of the expert's behavior it's just heuristically modifying the problem to make it possible to accommodate sub-optimal experts and lastly this results in kind of a messy constrained optimization problem which is not a big deal if you have linearly parametrized reward functions but it does become a really big problem for deep learning if you want to have reward functions represented by neural networks however if you want to learn more about these kinds of methods there are a few readings i might suggest you can check out for example a classic paper by abbeel and ng called apprenticeship learning via inverse reinforcement learning as well as a paper by ratliff et al called maximum margin planning which are very representative of this class of feature matching and margin maximizing inverse rl methods however the main topic for today's discussion will actually build on probabilistic models of expert behavior so from the previous lecture we saw that we could actually model sub-optimal behavior as inference in a particular graphical model that has states actions and these additional optimality variables so the probability distributions in this model are the initial state distribution p of s1 the transition probabilities p of st plus 1 given st comma at and the optimality probabilities which we chose to be equal to the exponential of the reward now before we were concerned with the question what is the probability of a trajectory given that the expert was acting optimally so this is what we saw before we said that well if you don't assume
optimality then you know any physically consistent trajectory is equally likely but if you make an assumption of optimal behavior then you could say what's the probability of a trajectory given that the expert was optimal and we saw that that had this nice interpretation that the most optimal trajectory was the most likely and then sub-optimal trajectories became exponentially less likely and we talked about how that might be a good model of sub-optimal expert or monkey behavior but now what we're going to be doing is we're actually going to be using this model to learn reward functions so instead of asking what is the probability of a trajectory given a reward which is inference in this model we'll instead do learning in this model where we say given the trajectories can we learn the parameters of r so that the likelihood of those trajectories under this graphical model is maximized and that's what i'll discuss in the next part of the lecture
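as a side note, here is a tiny numpy sketch (not from the lecture: the total rewards are made up, and the dynamics terms p of tau are assumed equal across trajectories) illustrating how this optimality model makes the best trajectory the most likely and makes sub-optimal trajectories exponentially less likely:

```python
import numpy as np

# hypothetical total rewards for three trajectories; with equal dynamics
# terms, p(tau | optimality) is proportional to exp(sum of rewards)
total_rewards = np.array([4.0, 3.0, 1.0])
unnormalized = np.exp(total_rewards)
p_tau = unnormalized / unnormalized.sum()

# the highest-reward trajectory is the most likely, and each unit of
# lost reward makes a trajectory exp(1) times less likely
```

this is exactly the exponential fall-off described above: trajectory 1 loses one unit of reward relative to trajectory 0, so it is e times less likely.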
CS_285_Deep_RL_2023
CS_285_Lecture_13_Part_4.txt
all right in the next portion of the lecture i'll go through a few other novelty seeking exploration methods so for these i won't go through them in quite as much detail but i just want to give you a sense for other techniques that have been put forward in the literature that also exploit the notion of optimism to improve exploration so the first one i'm going to talk about you can think of it as a kind of count based method with a more sophisticated density model this is from a paper by tang et al called hash exploration and the idea here is counting with hashes so here's the notion instead of doing the pseudo count thing what if you still do regular counts but under a different representation so perhaps what you could do is you could take your states and compute a kind of a hash that compresses the state so it's a lossy hash in such a way that states that are very different get very different hashes but states that are very similar might map to the same hash so the idea is that we're going to compress s into a k bit code via some encoder phi of s and if k is chosen to be small enough such that the number of states is larger than 2 to the k then we'll have to compress some states into the same code and then we'll do counting but we'll count with respect to these codes we'll actually count how many times we've seen the same code instead of the same state and the shorter your code is the more hash collisions you get which means the broader your notion of similarity for the purpose of determining whether two states are similar will be so will similar states get the same hash well maybe it depends a little bit on the model you choose so the way that you can improve the odds is instead of using some standard hash function that typically aims to minimize hash collisions you could instead use an autoencoder that is trained so that it gets the maximum reconstruction accuracy and if you train the autoencoder to maximize reconstruction accuracy
then if it's forced to have hash collisions it'll produce hash collisions for those settings where the collision results in small reconstruction error so basically if it mistakes one state for another but they still look pretty similar then that mistake costs the autoencoder a lot less than if the states look very different so learning the hash basically provides hash collisions that are a little more similarity driven and then this algorithm will take the bottleneck from the autoencoder essentially treating the encoder of the autoencoder as phi of s clamp it to be 0 or 1 perform a down sampling step and that's the code the k-bit code that they're going to use and then they just do regular counting on these k-bit codes and the resulting algorithm actually turns out to work decently well with a variety of different coding schemes so that's kind of a nice way that you could adapt regular counts if you don't want to deal with pseudo counts another thing you could do is you could avoid density modeling altogether by actually exploiting classifiers to give you density scores so remember that p theta of s needs to be able to output densities but it doesn't necessarily need to produce great samples and we can exploit this by devising a class of models that are particularly easy to train that can't produce samples at all but can give reasonable densities so this is from a paper called ex2 by fu et al so here's the idea we're going to try to explicitly compare the new state to past states and the intuition is that if a classifier can easily distinguish whether the state it's looking at is the new state or a past state then the new state is very novel and therefore should have low density if it's very hard to distinguish that means that the new state looks indistinguishable from past states and therefore has high density and while this notion is somewhat intuitive and informal it can actually be made mathematically precise so the
state is novel if it is easy to distinguish from all previously seen states by a classifier so for each observed state s what we're going to do is we'll fit a separate classifier to classify that state against all past states in the buffer and then we'll use the classifier likelihood or the classifier error to obtain a density so it turns out that if the probability that your classifier assigns to the state is given by d of s and i have the subscript s because this is the classifier that's trying to classify the state s against all past states so d subscript s of s is the probability this classifier assigns to this state being a new state the density of the state it turns out can be written as p theta of s is equal to 1 minus d s of s divided by d s of s and the way that you obtain this equation is you write down the formula for the optimal classifier which can be expressed in terms of the density ratio and then do a bit of algebra so this is the probability that the classifier assigns that s is a positive meaning that s is a new state and the classifier is trained where the only positive is s and the negatives are all the states in d now at this point you might be wondering what the heck is going on here you have a classifier that just tries to classify whether s is equal to itself shouldn't that always output true well remember what counts are doing what counts are doing is they're counting how many times you've seen that exact same state multiple times so if you're actually in that regime with counts and s has a large count then s will also occur in d so you'll have one copy of s in the positives but you might have multiple copies of s in the negatives which means that the true answer the true d s of s is not one because if the state s occurs in the data it could be a positive but it could also be a negative for example if the state s occurs in the set of negatives 50 percent of the time if literally half your negative states are also s then d s of s is not 100 percent it's
actually 75 percent because 50 percent of that is positive and 25 percent is the other half of the negatives so the larger the count the lower d s of s will be and of course in larger continuous spaces where the counts are always one this model will still produce non-trivial conclusions because the classifier is not going to overfit the classifier is actually going to generalize a little bit which means that if it sees very similar states in the negatives it will assign a lower probability to the positive so that's why you can use a classifier to make densities like this if you want to go through the algebra for how to derive the probability from the classifier check out the paper it's actually a fairly simple bit of algebra the intuition is that you first write down the equation for a bayes optimal classifier which is an expression in terms of p theta of s and then you solve that expression to find an equation for p theta of s now as i mentioned before aren't we just checking if s is equal to s well if there are copies of s present in the data set then the optimal d s of s is not one as i mentioned before and in fact the optimal classifier is given by one over one plus p of s and again this is a bit of algebra that you can check so if you rearrange this to solve it for p of s you get the equation on the right now in reality of course each state is unique and your classifier can overfit so you have to regularize the classifier to ensure that it doesn't overfit and doesn't just assign a probability of one all the time so you would use something like weight decay to regularize your classifier now the other problem with this is that as i've described this procedure so far we're training a totally separate classifier for every single state we see now isn't that a bit much are we going to go kind of crazy with all those classifiers well one solution we could have is we could instead train an amortized model so instead of training one classifier for every
single state we can train just a single classifier that is conditioned on the state that it's classifying so it's an amortized model that takes the exemplar as input that's x star and it takes the state that it's classifying as input that's x and now we just train one network and we update it with every state that we see so this is an amortized model and this basic scheme actually works pretty well it compares very favorably to some other exploration methods including the hash based exploration that i described before and provides maybe an interesting perspective on how the type of density model we use for exploration doesn't necessarily need to be able to produce samples and could even be obtained from a classifier and then in the paper there are some experiments with using this for some visual navigation tasks in doom where you have to traverse many different rooms before you find the treasure and a good exploration algorithm should figure out when it's in a novel room and then seek out more of the rooms that it hasn't seen too much all right now there are also more heuristic methods that we could use to estimate quantities that are not really counts but that kind of serve a similar role as counts in practice and can work pretty well so remember that p theta of s needs to be able to output densities but it doesn't necessarily need to produce great samples in fact it doesn't even necessarily need to produce great densities you could just think of it as a score and you just want that score to be larger for novel states and smaller for non-novel states or the other way around so you basically just need some number that is very predictive of whether a state is novel or not it doesn't even have to be a proper density so you just need to be able to tell if a state is novel or not and if that's all you want there are other ways to get this that are a little more heuristic but can work well so for example let's say that we have some target function f star of s comma a don't
worry about what this function is for now let's just say it's some scalar valued function on states and actions so maybe it's this function and we take our buffer of states and actions that we've seen and we fit an estimate to f star so we fit some function f hat theta so f hat theta is trying to match f star on the data so maybe our data set contains these points and f hat theta might look like this so it's going to be similar to those points close to the data but far from the data it's going to make mistakes because it hasn't been trained in those regions so now we can use the error between f hat and f star as our bonus because we expect this error to be large when we're very far away from states and actions that we've seen so close to the data the two functions should match far from the data f hat theta might make really big mistakes so then we would say the novelty is low when the error is low and the novelty is high when the error is high so then we could ask well what kind of function should we use for f star and there are a number of different choices that have been explored in the literature so one common choice is to set f star to be the dynamics so basically f star of s a is s prime that's very convenient because it's a quantity that clearly has something to do with the dynamics of the mdp and of course you've observed s prime in your data so you could essentially train a model and then measure the error of that model as a notion of novelty this is also related to information gain which we'll discuss in the next part of the lecture an even simpler way to do this is to just set f star to be a neural network with parameters phi where phi is chosen randomly so this network is not actually trained it's actually just initialized randomly to obtain an arbitrary but structured function the point here is that you don't actually need f star to be all that meaningful you just need it to be something that can serve as a target that varies over the state and action space
in ways that are not trivial to model so that's why just using a random network actually can work pretty well and this is actually part of the material that will be on homework five so it's a good idea to kind of understand why this works
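as a side note, here is a small numpy sketch of the random-network idea (not from the lecture or the homework: the target and the predictor are just a random tanh feature map and a least-squares linear fit, and all the dimensions and numbers are made up) where the prediction error on a state serves as the novelty bonus:

```python
import numpy as np

rng = np.random.default_rng(0)
state_dim, feat_dim = 4, 16

# fixed random target f*(s): never trained, just a random but structured function
W_target = 0.1 * rng.standard_normal((state_dim, feat_dim))
def f_star(s):
    return np.tanh(s @ W_target)

# fit f_hat by least squares, but only on the states we have actually visited
visited = rng.standard_normal((256, state_dim))
W_hat, *_ = np.linalg.lstsq(visited, f_star(visited), rcond=None)

def bonus(s):
    # squared prediction error: small near the training data, large far from it
    err = f_star(s) - s @ W_hat
    return float((err ** 2).sum())
```

close to the training distribution the predictor matches the target, so the bonus is small, while a far out-of-distribution state such as np.full(state_dim, 50.0) gets a much larger bonus, which is exactly the behavior we want from a novelty score.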
CS_285_Deep_RL_2023
CS_285_Lecture_21_RL_with_Sequence_Models_Language_Models_Part_3.txt
all right in the third part of today's lecture we're going to talk about multi-step reinforcement learning with language models where we'll combine some of the ideas from the POMDP discussion as well as the language model discussion from before so here's an example of a multi-turn RL problem with language models this is an example of a task called visual dialogue which is a benchmark introduced in a paper from 2017 the idea here is that there's a questioner who's the bot and the answerer which is considered part of the environment and the answerer has a particular picture in mind and the questioner has to ask questions to try to guess which picture it is so this is purely a language task for the questioner and the questioner needs to select appropriate questions to gather information so that at the end they can figure out what image the answerer was thinking of now you could imagine structuring this as a POMDP where the observations are the things that are said by the answerer and the actions are the questions that the questioner selects and this is now a sequential process there are multiple time steps and at the end there's a reward so the action is what the bot says it's a sentence like any people in the shot the observation is what the answerer or simulated human says like there aren't and the state now would be a history state just like in our discussion in the first part so that would be the sequence of past observations and actions and the reward is the outcome of the dialogue did the questioner guess the correct answer at the end so the multi-step nature of this task is very important now we're back in the full RL setting because the questioner isn't just going to ask questions that greedily get them the answer they're going to ask questions to gather information so they can guess the right answer at the very end obviously they shouldn't ask the same question multiple times they should think about what information they've
already gathered what information remains open and proceed accordingly now these kinds of multi-turn problems show up in a number of places they of course show up in dialogue systems where you might be interacting with a human to achieve some final delayed goal assistant chat bots where you might have multiple turns of interaction to arrive at a solution tool use settings where instead of talking to a person you might be outputting text that goes into some tool like a database a Linux terminal a calculator something that uses that tool to then produce an answer to a given query playing text games maybe you produce actions that go into a text adventure game which then responds with programmed observations so these are all examples of multi-turn RL problems now this is not the same as RLHF from before RL from human feedback RL from human preferences that we saw in the previous section learns from human preferences here we're learning about the outcome of the entire multi-step interaction the reward only appears at the end after multiple turns the episode in the previous section was a single answer so it was a one turn bandit problem with a state and an action here we have multiple turns multiple observations and actions the partial observability now matters because we need to pay attention not just to the latest response from the human but perhaps all the previous responses and all the questions we asked before so this is now putting us into a different regime how can we train policies to handle this well we could use policy gradients just like before policy gradients are a viable way to train multi-turn policies that's what we introduced them for and we also learned in the first section that policy gradients can actually handle partial observability we can give the policy a history of observations so we can also use those history states that is quite feasible one issue that we run into however with policy gradients is if we
are training a dialogue agent that talks to a human then we need to get samples from the human for every rollout this is different from the human preferences setting that we were in before because we had that reward model we could optimize against the reward model with multiple iterations consisting of sampling and optimization and only occasionally get more preferences but if we're using policy gradients for a dialogue task where every single episode requires talking to a human now we need to interact with a human a lot more so even though with preferences we still need human input we would need a lot more of it if we want to optimize a dialogue agent with policy gradients so it could work but it's expensive of course it's a lot easier if you're not interacting with a human but instead are interacting with a tool such as a database value based methods however are a very appealing option because with value based methods you could use offline RL techniques like the ones that we learned about before in the course and actually train your dialogue agent directly with data of for example humans talking to other humans or past deployments of a bot so value based methods are actually a very appealing option for dialogue so in this part of the lecture I'll actually focus on discussing value based methods though I will say that policy gradient methods could be used directly there's not much more to say about that however because they would work exactly the same way as they did before so let's talk about value based methods and for value based methods we have to make a choice which is what constitutes a time step so in the very beginning of the previous section I discussed how there are design choices to be made about how to turn the language problem into an MDP and here there is a particularly delicate choice that we can make which could go either way so the first choice is to have every utterance be a time step meaning that the first thing that the
human says like two zebras are walking around their pen in the zoo that's observation one the first sentence that the questioner says like any people in the shot that's action one so actions and observations are entire sentences this is perhaps most directly analogous to the setting that we had in the previous section this is a natural choice because we go in an alternating fashion action observation action observation the observation is always outside of the agent's control the action is always entirely under its control the horizons are typically going to be relatively short so if the dialogue involves 10 back and forth questions and answers then we're going to have 10 time steps the problem is that the action space is huge the action space is the entire space of utterances that the bot could say an alternative choice is to consider each token to be a time step so in this case for an entire utterance from the bot for example any people in the shot every single token in this utterance is a separate action time step and this is a little bit peculiar because of course each of those actions is under its control so after action one it immediately gets to choose action two there's no additional observation we would still concatenate action one to our state history and the next action will be selected given the entire history and then every single token in the response is a separate observation now this has a very big advantage which is now at every time step we have a simple discrete action space so the action space at any time step is just the set of possible tokens it's a large set but it's quite easy to enumerate whereas the set of actions in the per utterance setting is the set of all possible sequences which is exponentially large exponential in the horizon the problem when we use per token time steps is that our horizon now is much much longer so whereas before our horizon might be on the order of 10 steps now it's going to be
possibly thousands of steps even for a relatively short dialogue both options have been explored in the literature there's no single established standard as to which one is better so I'll discuss both of them and maybe tell you a little bit about their pros and cons so let's start with value based RL with per utterance time steps here is an example slice of our dialogue and let's say that we're at this step let's say that we're at the stage where the bot is saying are they facing each other what we're going to do is we're going to take the history of the conversation up until this point which constitutes the state that's the entire dialogue history st and we're going to pass it through some kind of sequence model so it could be a pre-trained language model it could be something like BERT there are a variety of choices and the sequence model is going to output some sort of embedding and then we're also going to take our candidate action are they facing each other and we're going to also pass it through a sequence model and this could be a separate sequence model or it could be the same one and we're going to get embeddings of both these things that are going to be fed into some learned function that outputs the Q value it's perhaps most straightforward to have two separate encoders for the state and the action but they could also be encoded with the same encoder and at the end we have to predict a single number for them which is the Q value so this is the critic now typically in this design we could use either an actor critic architecture where we would have a separate actor network that is trained to maximize this critic and that could be trained with for example one of the algorithms in the previous section treating this Q in place of the reward as the one-step objective or we could directly decode from the Q function to find the action that has the highest Q value and it's a little tricky how to do that we could
do that with something like beam search we could also sample from a supervised trained model and take the sample with the highest Q value and then we would train this Q function using our estimate of the maximum for the next time step so that maximum for the next time step could come from doing beam search it could come from using an actor it could also come from sampling from a supervised trained model and then taking the sample with the largest Q value as an approximation to the max so all those are valid options and different methods in the literature have explored different choices for that so I'll summarize a few previous papers at the end of the section and tell you what the concrete papers actually did so there's no one way of doing this there's a variety of choices now for per token time steps things are perhaps a little bit simpler so let's say that we're at this point in the decoding process we're generating the token corresponding to facing and remember of course in reality words aren't tokens tokens actually correspond to multiple characters but not entire words but let's pretend that tokens are words and let's pretend that we're at the word facing so we're going to want to do this backup the Bellman backup over individual tokens now things work much more like supervised language models so we have these tokens and we output a number for every possible token at this time step except instead of that number corresponding to the probability of that token being the next token the number is actually its Q value so the number associated with the token for facing is the Q value you would get if your history is the entire previous history of the conversation and then you select the token facing as the next action so your loss would take in the token facing at the next step maximize over the possible tokens at the next time step if the agent chooses that token or simply take the data set token if it's chosen by the
environment add the reward to that and then use that as the target value in the loss so this essentially implements per token Q learning so to explain that again at the token for they the output is the Q value of every possible token being chosen at the next time step and to compute the target for that Q value we would actually input that token at the next time step see all of our possible next token values and take a max over them if the agent gets to choose the next one or take the value of the data set token if it's chosen by the environment add the reward and then treat that as our target so in some ways this is simpler but remember that our horizon gets to be a lot longer so we have simple discrete actions and the method is arguably less complex than it is for per utterance time steps because we don't have to deal with actors we don't have to deal with all that other stuff but our horizon is very long so putting it all together the usual value based details apply so we would typically need a target network for either the per utterance or the per token version we would typically use a replay buffer we would typically do things like use the double Q trick and so on so all the same considerations apply as they did for regular value based methods and we could use this with either online or offline RL to my knowledge these methods have primarily been studied for offline RL in which case you would use something like CQL or IQL to make it work properly and the details basically require handling distributional shift in some way so you could use policy constraints if you have an actor then you would use a KL divergence on the actor if you are just using value based methods you could use a CQL style penalty on the Q values which conveniently for the per token version amounts to basically adding the standard supervised cross entropy loss if it's not clear why that's the case you can work that out just write down the CQL objective and with discrete actions you'll see that
it actually works out to be the same as a cross entropy loss you could also do an IQL style backup and that's also a decent option but there's no single best answer yet as to which of these is the better choice so this is very much kind of at the bleeding edge of current research as of 2023 okay so this was a little bit abstract to see concrete algorithms you could actually implement to make this work let's go through some examples so one example which is a somewhat older paper by Natasha Jaques called human-centric dialogue training via offline reinforcement learning uses an actor critic plus policy constraint architecture so there is an actor network which has a KL divergence penalty to stay close to the data distribution the rewards for this come from human user sentiment analysis so the chatbot is actually trying to optimize the sentiment elicited from humans and the reward is automatically computed using a sentiment analyzer applied to the human responses and this uses the per utterance time step formulation another example CHAI a chatbot AI for task-oriented dialogue with offline reinforcement learning by Siddharth Verma uses a Q function with a CQL-like penalty and it uses rewards from the task in this case the craigslist negotiation task so the reward just comes from the total revenue made by selling an item and the time step here is one utterance so the way that the maximization is done over the next time step is actually by sampling multiple possible responses from a supervised trained language model in this case a GPT-2 style model and then taking the max over the Q values of these sampled utterances so this is not an exact max it's an approximate max using samples from a pre-trained language model another more recent example is offline RL for natural language generation with implicit language Q learning by Snell et al 2022 this one uses a Q function trained with a combination of both IQL and CQL so it uses an IQL backup with a CQL
penalty and then the policy is actually extracted by again taking a supervised trained model sampling from that supervised trained model and then taking the sample with the largest Q value and the rewards again come from the task this one is evaluated on the visual dialogue task from before where the reward corresponds to whether the agent gets the correct answer or not so if you want to learn more about specific value based algorithms I would encourage you to check out these papers and see the particular details they chose my description of the methods was a little bit abstract and generic the particular instantiation is covered in these papers and the Snell formulation uses each token as a time step okay so to recap multi-step language interactions like dialogue are a POMDP which means that we need to do something like using history states as our state representation time steps can be defined as either per utterance or per token and they have their pros and cons in principle any RL method could be used once we switch to using history states but in practice especially if we have dialogue agents that need to talk to humans we might really prefer an offline RL formulation because otherwise we would have to interact with humans every time we generate more samples of course that's not necessarily the case because if we're doing something like text games or tool use then online methods are actually quite feasible value based methods either treat utterances or tokens as actions and they build Q functions with history states and we have to apply the same details and tricks as regular offline value based methods so that includes things like target networks it includes tricks like double Q learning it includes the various offline regularization methods like policy constraints CQL or IQL there's no single established standard for what is the best method of this sort and there are a variety of
different choices with different pros and cons
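as a rough illustration here is a minimal python sketch of that best-of-n extraction step: sample candidate utterances from the supervised model and keep the one with the largest Q value. the candidate list, generator, and Q function below are toy stand-ins I made up for illustration, not the actual trained networks from these papers

```python
# toy stand-ins (hypothetical): a supervised behavior-cloned model that
# proposes candidate utterances, and a learned Q function that scores
# (history, utterance) pairs -- neither is a real trained network
CANDIDATES = ["yes", "no", "maybe", "it is red"]

def sample_utterances(history, n):
    # stands in for drawing n samples from the supervised model
    return [CANDIDATES[i % len(CANDIDATES)] for i in range(n)]

def q_value(history, utterance):
    # stands in for a trained Q network over history states
    return len(utterance) / 10.0  # toy scoring rule

def extract_action(history, n_samples=8):
    # best-of-n extraction: sample from the supervised model and
    # keep the sample with the largest Q value
    samples = sample_utterances(history, n_samples)
    return max(samples, key=lambda u: q_value(history, u))

print(extract_action(["what color is the ball?"]))  # prints "it is red" under the toy scoring rule
```

the same pattern applies whether actions are whole utterances or single tokens, the only difference being what the Q function scores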
CS_285_Deep_RL_2023
CS_285_Lecture_11_Part_3.txt
all right let's talk about how we can train uncertainty aware neural network models to serve as our uncertainty aware dynamics models for model based rl so how can we have uncertainty aware models well one very simple idea is to use the entropy of the output distribution i'll tell you right now this is a bad idea this does not work but i'm going to explain it just to make it clear why it doesn't work so let's say that you have your neural network dynamics model it takes in s and a as input and it produces p of s t plus one given s t comma a t which could be represented by a softmax distribution if you're in the discrete state setting or it can be represented by a multivariate gaussian distribution in the continuous setting so in the multivariate gaussian case you output a mean and a variance in the softmax case you just output the logits for every possible next state why is this not enough well we talked about how the problem we're having is this erroneous extrapolation so in the setting where we have limited data we might overfit and make erroneous predictions and the particular kind of errors that we're especially concerned with are ones where the optimizer can exploit those errors by optimizing against our model when the optimizer optimizes against our model what it's really going to be doing is finding out of distribution actions that lead to out of distribution states that then lead to more out of distribution states which means that our model is going to be asked to make predictions for states and actions that it was not trained on the problem is that if the model is outputting the uncertainty and it's trained with regular maximum likelihood the uncertainty itself will also not be accurate for out of distribution inputs so out of distribution inputs will result in erroneous predictions like an erroneous mean but they'll also result in an erroneous variance for the same exact reason and this is because the uncertainty of the neural net output
is the wrong kind of uncertainty so if you imagine this highly overfitted model you could say well what variance is this model going to predict let's say that the blue curve represents the predictions from the model the model outputs a mean and a variance over y at every point well if it looks at the training points the training means are basically exactly the same as the actual values so the optimal variance for it to output is actually zero this model will be extremely confident but of course it's completely wrong and we'll see exactly the same thing from deep nets we'll see very confident predictions that are very good on the training points but are both incorrect and overconfident on the test points and this is not something special about neural nets it's not about neural nets being bad at estimating uncertainty it's just because this is the wrong kind of uncertainty to be predicting this measure of entropy is not trying to predict the uncertainty about the model it's trying to predict how noisy the dynamics are see there are two types of uncertainty and there are a variety of names that people have used for them but we can call them aleatoric or statistical uncertainty which is essentially the case where you have a function that is itself noisy and then we have epistemic or model uncertainty which happens not because the true function itself is noisy but because you don't know what the right function is and these are fundamentally different kinds of uncertainty aleatoric uncertainty doesn't necessarily go down as you collect more data if the true function is noisy no matter how much data you collect you will have high entropy outputs just because the true function has high entropy like for example if you're learning the dynamics model for a game of chance a game where you roll two dice the correct prediction for the model that models the numerical value of the sum of those two dice is always going to be random it's never going to become deterministic
as you collect more data seeing the dice rolled more and more doesn't allow you to turn that stochastic system into a deterministic one that's aleatoric uncertainty that's when the world itself is actually random epistemic uncertainty comes from the fact that you don't know what the model is so epistemic uncertainty would be like the setting we had when approaching the cliff or walking around on the top of the mountain once you collect enough data that uncertainty goes away but in the limited data regime you have to maintain that uncertainty because you don't know what the model actually is this is essentially a setting where the model is certain about the data but we are not certain about the model and that's what we want maximum likelihood training doesn't give you this so just outputting a distribution over the next state such as a gaussian distribution with a mean and variance will not get you this capability so how can we get it well we can try to estimate model uncertainty and there are a number of different techniques for doing this so this is basically the setting where the model is certain about the data but we are not certain about the model in order to not be certain about the model we need to represent a distribution over models so before we had one neural net that outputs a distribution over s t plus one and it has some parameters theta so being uncertain about the model really means being uncertain about theta so usually we would estimate theta as the argmax of the log probability of theta given our data set which when we're doing maximum likelihood estimation we take to also be the argmax of the log probability of the data given theta and that presumes having a uniform prior but can we instead estimate the full distribution p of theta given d so instead of just finding the most likely theta what if we actually try to estimate the full distribution over theta given d and then use that to get our uncertainty that is the right kind of
uncertainty to get in this situation so the entropy of this distribution will tell us the model uncertainty and we can average out the parameters and get a posterior distribution over the next state so when we then have to predict we would actually integrate out our parameters so instead of taking the most likely theta and outputting the probability of s t plus one given s t comma a t and that most likely theta we'll produce our prediction by integrating out theta by taking the integral of p of s t plus one given s t comma a t comma theta times p of theta given d d theta now of course for large high dimensional parameter spaces of the sort that we would have with neural nets performing this operation exactly is completely intractable so we have to resort to a variety of different approximations and that's what we're going to talk about in this lecture so intuitively you could imagine this is producing some distribution over next states which is going to integrate out all the uncertainty in your model so one choice that we could use is something called a bayesian neural network i'm not going to go into great detail about bayesian neural networks in this lecture because it requires a little bit of variational inference machinery which we're actually going to cover next week but i do want to explain the high level idea behind bayesian neural nets so in a standard neural net of the sort shown on the left you have inputs x and outputs y and every weight every connection between the hidden units the inputs and the outputs is just a number so all the neural nets that you've trained so far in this class basically work on this principle in bayesian neural networks there's a distribution over every weight in the most general case there's actually a joint distribution over all the weights if you want to make a prediction what you can do is sample from this distribution essentially sample a neural net from the distribution over neural nets and ask it for its
prediction and if you want to get a posterior distribution over predictions if you want to sample from the posterior distribution you would sample a neural net and then sample a y given that neural net and you could repeat this process multiple times if you want to get many samples to get a general impression of the true posterior distribution over y given x with theta having been integrated out now modeling full joint distributions over the parameters is very difficult because the parameters are very high dimensional so there are a number of common approximations that could be made one approximation is to estimate the parameter posterior this p of theta given d as a product of independent marginals this basically means that every weight is distributed randomly but independently of all the other weights this is of course not a very good approximation because in reality the weights have very tightly interacting effects so you know if you vary one weight and you vary the other one in the opposite direction maybe your function doesn't change very much but if you vary them in tandem it could change quite a lot so using a product of independent marginals to estimate the parameter posterior is a very crude approximation but it's a very simple and tractable one and for that reason it is used quite often a common choice for the independent marginals is to represent each marginal with a gaussian distribution and that means that for every weight instead of learning its numerical value you learn its mean value and its variance so for each weight you have not one number but two numbers now you have the expected weight value and the uncertainty about the weight and that is a very nice intuitive interpretation because you've gone from learning just a single weight vector to learning a mean weight vector and for every dimension you have a variance for more details about these kinds of methods here are a few relatively simple papers on this topic weight uncertainty
in neural networks by blundell et al and concrete dropout by gal et al although there are many more recent substantially better methods that you could actually use if you want to do this in practice so bayesian neural networks are actually a reasonable choice to get an uncertainty aware model to learn more about how to train them check out these papers or hang on until we cover the variational inference material next week today we're instead going to talk about a simpler method that from my experience actually works a little bit better in model based reinforcement learning and that's to use bootstrap ensembles here is the basic idea behind bootstrap ensembles i'll present it first intuitively and then discuss a little bit more mathematically what it's doing what if instead of training one neural network to give us the distribution over the next state given the current state and action we instead trained many different neural networks and we somehow diversified them so that each of those neural networks learns a slightly different function ideally they would all do similar and accurate things on the training data but they would all make different mistakes outside of the training data if we can train this kind of ensemble of models then we can get them to essentially vote on what they think the next state will be and the dispersion in their votes will give us an estimate of uncertainty mathematically this amounts to estimating your parameter posterior p of theta given d as a mixture of dirac delta distributions so you've probably learned about mixtures of gaussians a mixture of dirac deltas is like a mixture of gaussians only instead of gaussians you have very narrow spikes so each element has no variance it's just a mixture of delta functions where each delta function is centered at the parameter vector for the corresponding network in the ensemble so intuitively you can train multiple models and see if they agree as your measure of
uncertainty formally you get a parameter posterior p of theta given d represented as this mixture of dirac deltas which means that if you want to integrate out your parameters you simply average over your models so you construct a mixture distribution where each mixture element is the prediction of the corresponding model now very importantly in continuous state spaces this doesn't mean that we average together the actual mean state we're averaging together the probabilities which means that if each of these models is gaussian and their means are in different places our output is not one gaussian with the average of those means it's actually multiple different gaussians it's actually a mixture of gaussians so we're mixing the probabilities not the means so when you implement this in homework four don't just average together the next states that your models predict actually treat it as a mixture of gaussians okay how can we train this bootstrap ensemble to actually get it to represent this parameter posterior well one mathematical tool we can use is something called the bootstrap the main idea in the bootstrap is that we take our single training set and we generate multiple independent data sets from the single training set to get independent models so each model needs to be trained on a data set that is independent from the data set for every other model but still comes from the same distribution now if we had a very large amount of data one very simple way we could do this is we could take our training set and just chop it up into n non-overlapping pieces and train a different model on each piece but that's very wasteful because we're essentially decimating our data set and therefore we can't have too many bootstraps we can't have too many models so in the bootstrap ensemble each of these models is called a bootstrap and there's a cool trick that we can do which is going to maintain our data efficiency and give us as many models as we want the
idea is to train each theta i on a data set d i which is sampled with replacement from d so if d contains n data points d i will also contain n data points but they will be resampled from d with replacement which means that for every entry in d i you select an integer uniformly at random between 1 and n and pick the corresponding element from d so you select a random integer from 1 to n pick that element from d write it into the first entry of d i for the second entry in d i pick another random integer between 1 and n grab that element from d put it in entry two for entry three another random integer from one to n take that element from d put it in entry three and so on and so on in expectation you get a data set that comes from the same distribution as d but every individual d i is going to look a little bit different intuitively you can think of this as putting integer counts on every data point and those counts can range from 0 to n although n is very unlikely so every model trained on every d i is going to see a slightly different data set although statistically the data sets will be similar and it turns out this is enough to give you a parameter posterior so that's the theory now in the world of deep learning it turns out that training a bootstrap ensemble is actually even easier so the basic recipe i outlined on the previous slide essentially works it's a fairly crude approximation because the number of models we would have is usually small right so if the cost of training one neural net is three hours and we have to train 10 of them that will take 30 hours of compute now you can parallelize it but it's still expensive so usually we'd use a smaller number of models typically less than 10.
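the resampling with replacement procedure just described can be sketched in a few lines of python. the transition tuples here are toy placeholders, just to show the index trick

```python
import random

def bootstrap_datasets(dataset, n_models, seed=0):
    # build n_models data sets, each the same size as `dataset`,
    # by drawing indices uniformly at random with replacement
    rng = random.Random(seed)
    n = len(dataset)
    return [[dataset[rng.randrange(n)] for _ in range(n)]
            for _ in range(n_models)]

# toy transitions (s, a, s')
data = [(i, "a", i + 1) for i in range(5)]
boots = bootstrap_datasets(data, n_models=3)
for d_i in boots:
    assert len(d_i) == len(data)          # same size as the original
    assert all(x in data for x in d_i)    # same distribution, repeats allowed
```

each d_i will typically contain some points of d multiple times and omit others, which is exactly the integer-counts picture described above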
so our uncertainty will be a fairly crude approximation to the true parameter posterior conveniently though it appears experimentally that if you're training deep neural network models resampling with replacement is actually usually unnecessary because just the fact that you train your models with stochastic gradient descent and random initialization usually makes the models sufficiently independent even if they are trained on exactly the same data set so when implementing this in practice we can usually forego the resampling with replacement and that makes things a little easier it's important for theoretical results but practically you can skip it
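and as a reminder of how the trained ensemble is then used: to sample from the resulting mixture of gaussians, pick a model uniformly at random and then sample from that model's gaussian, rather than averaging the means into one gaussian. the per-model (mean, std) numbers below are made up for illustration

```python
import random

# hypothetical per-model gaussian predictions for the same (s, a) input:
# each ensemble member outputs a (mean, std) over the next state (1-d here)
ensemble_predictions = [(0.0, 0.05), (1.0, 0.05), (1.1, 0.05)]

def sample_next_state(preds, rng):
    # sampling from the mixture of gaussians: first pick a model
    # uniformly at random, then sample from that model's gaussian --
    # do NOT collapse the means into a single averaged gaussian
    mean, std = preds[rng.randrange(len(preds))]
    return rng.gauss(mean, std)

rng = random.Random(0)
samples = [sample_next_state(ensemble_predictions, rng) for _ in range(3000)]
# the samples cluster around each model's mean (near 0.0 and near 1.0-1.1),
# not around the single averaged mean of 0.7
```

the dispersion across those clusters is exactly the disagreement-based uncertainty estimate discussed above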
CS_285_Deep_RL_2023
CS_285_Lecture_11_Part_1.txt
welcome to lecture 11 for cs285 in today's lecture we're going to talk about model-based reinforcement learning algorithms so first we'll discuss the basics of model-based rl where we learn a model and then use this model for control we'll talk about a naive way of approaching this problem discuss a few candidate algorithms and then talk about some problems with these algorithms we'll talk about the effect of distributional shift in model based rl and then we'll discuss uncertainty in model-based rl and how being aware of uncertainty can make a really big difference in the performance of algorithms then i will conclude with a discussion of model based rl with complex observations and then next time we'll discuss how we can use model based rl to learn policies so all the algorithms we'll discuss in today's lecture learn only a model and then they use algorithms such as the ones in the previous lecture to plan through that model whereas next time we'll talk about how we could also use models to learn policies so the goals for today's lecture will be to understand how to build model-based rl algorithms understand the important considerations for model based rl and understand the trade-offs between different model classes so why should we learn the model well if we had some estimate of the dynamics for example a function f of s t comma a t that returns s t plus one then we could simply use all those tools from last week to control our system instead of having to deal with model free rl algorithms in the stochastic case we would learn a stochastic model of the form p of s t plus one given s t comma a t for most of the algorithms that i'll discuss in today's lecture i'll present them as deterministic model algorithms of the form f of s t comma a t equals s t plus one but almost all of those ideas can just as well be used with probabilistic models that learn a distribution over the next state and when this distinction is salient i'll make it explicit so let's think about how we can
learn f of s t comma a t from data and then plan through it to select our actions we could imagine a very simple model based rl algorithm prototype i'm going to call it version 0.5 it's not quite version 1.0 it's not quite the thing you want to use but it's perhaps the most obvious thing we can start thinking about in this model-based rl version 0.5 algorithm step one would be to run some basic exploration policy to collect some data and this exploration policy could just be a completely random policy so it's not a neural network or anything like that it just selects actions uniformly at random and collects a data set of transitions so here our transition is a tuple s comma a comma s prime we saw state s we took action a and we arrived at state s prime and that gives us our transition and we'll have a data set of these transitions and we can use this data set to then train our model with supervised learning so we'll learn a dynamics model f of s comma a that minimizes the average over all the points in our data set of the difference between f of s i comma a i and s i prime and if your states are discrete then maybe you'd use some sort of cross-entropy loss if your states are continuous you could use something like a squared error loss most generally you would use a negative log likelihood loss of which squared error is a particular special case for a gaussian likelihood and then once you've trained your dynamics model you would use your model to go and select actions using for example any of the algorithms that we covered last week so does this basic recipe work well in a sense yes so in some cases this basic recipe will work very well in fact there are many previously proposed methods that have utilized this recipe this is essentially how system identification works in classical robotics so if you have a robotics background if you've heard the term system identification system identification basically refers to the problem of taking some data and using that data to identify the
unknown parameters in a dynamics model now typically the parameters that are being identified in this way are not something like the weights in a neural net they might be the unknown parameters of a known physics model so maybe you have the equations of motion of your robot but you don't know the masses or the friction coefficients of different parts and you would identify those this is why this is referred to as system identification instead of system learning so you really know a lot already about your system and you're just identifying a few unknowns when you use this kind of procedure you do need to take some care in designing a good base policy because that good base policy needs to explore the different modes of your system so if your system can react in different ways in different states you need to see representative examples of all the states that elicit different reactions so if there's some large region that you haven't visited perhaps you will get an erroneous identification of your parameters that will not model that region well but this kind of approach can be particularly effective if you can hand engineer a dynamics representation maybe using your knowledge of physics using your knowledge of the system and then fit a relatively modest number of parameters in general however this approach doesn't really work with large high-capacity models like deep neural networks and to understand why that's the case let's consider a simple example let's say that i'm trying to walk around on this mountain and i'd like to get to the top of the mountain so my procedure will be to first run a base policy pi zero like a random policy to collect my data set so maybe i do essentially a random walk on the mountain and i'm going to use the result of this random walk to learn my dynamics model so this is my policy pi zero maybe just a random policy that produced some data and i'm going to use it to learn f and i want to get to the highest point on the mountain so i'm going
to ask f to predict how high i will be if i take certain actions and then i'll plan through that now for this part of the mountain that i walked on it seems like going further to the right gets me higher up so from this random data that i got from this pi zero policy my model will probably figure out that the more right you go the higher your altitude will be which is a very reasonable inference based on that data so when i then plan through that model to choose my actions well what do you think is going to happen i'm going to be in for a bad time for a reason that we've actually seen before so the data that i use to train my model comes from the distribution over states induced by the policy pi zero we can call this p pi zero of s t take a moment to think about why using a model trained on p pi zero of s t can lead to some really bad outcomes when we then use that model to plan as a hint the answer to this is something that we've seen before in several of our previous lectures so the issue is basically the following when we plan through our model you can think of that planning as executing another policy we can call it pi f because f is the model and pi f is the policy induced by that model pi f is not a neural net pi f is just a planning algorithm run on top of the model f so pi f is fully determined by f and it has its own state distribution its state distribution is p pi f of s t and in this case that distribution involves going very far to the right and falling off the mountain the reason that the problem happens is because p pi f of s t is not equal to p pi zero of s t so we are experiencing distributional shift and the way the problems of this distribution shift manifest themselves is that our model is valid for estimating the outcomes of actions in the region that was visited during data collection meaning for states with high probability under p pi zero of s t but when we plan under that model when we select
the actions for which the model produces states of the highest reward the ones that go up the most we will end up going to states during our planning process virtually that have very low probability under p pi zero of s t and our model will make erroneous predictions at those states and when it makes erroneous predictions at those states then it will choose the best action in that erroneously predicted state feed that back into itself and then make an even more erroneous prediction about the following state this is basically exactly the phenomenon that we saw before when we talked about imitation learning and in fact the two have a very close duality because you can think of the full trajectory distribution as just a product of policy times dynamics times policy times dynamics times policy et cetera et cetera so if you can experience distributional shift by messing with the policy you can of course also experience distributional shift by messing with the model so distribution mismatch becomes exacerbated as we use more expressive model classes because more expressive model classes will fit more tightly to the particular distribution seen in the training data in the system identification example on the previous slide if we have let's say an airplane and we're fitting you know three numbers like some drag coefficient a lift coefficient and so on yeah we can technically overfit to a narrow region of training data but there's only three numbers and there's only so many ways that those numbers can be chosen to fit the training data so if the true model is in the class of our learned models then we'll probably get those three parameters right and that's why system identification basically works in robotics but when we use high capacity model classes like deep neural networks then this distributional shift becomes a really major problem and we have to do something to address it otherwise we fall off the mountain so can we do better well take a moment to imagine how we
could modify the model based rl algorithm version 0.5 to mitigate this distributional shift problem so one way we can do this is we can borrow a very similar idea to what we had before when we talked about dagger in dagger we also posed this question can we make the state distribution of one policy equal to the state distribution of another policy and the way we answered that question for dagger was by collecting additional data and requesting ground truth labels for that data now with dagger this was quite difficult because getting ground truth labels required asking a human expert to tell us what the optimal action was in model based rl this turns out to actually be a lot easier in model based rl you don't need to ask any human expert what the right next state is you can simply take a particular action in a particular state and observe the next state in nature which means collect more data so from this we can get kind of model based rl version 1.0 the reason i call this 1.0 is because arguably this is the simplest model based rl method which does work in general at least conceptually although there are a lot of asterisks attached to that statement in terms of how to implement it so that it works well so the procedure goes like this number one run your base policy to collect data just like before number two learn your dynamics model from data just like before number three plan through your dynamics model to choose actions and number four execute those actions and add the resulting data to your data set and then go to step two again so the main loop consists of training the dynamics model on all the data so far planning through it to collect more data appending that data to your data set and retraining it's essentially just like dagger only for models although that statement is a little bit of an anachronism because this procedure actually existed in the literature long before dagger did but we presented it in the opposite order in this
class so you can think of it as dagger for model-based rl so this recipe does in principle work it actually does mitigate distributional shift and in principle you should get a model that works well so where have we seen that before well this is just dagger for models okay so at this point you have version 1.0 which is a viable algorithm and you can actually use that algorithm but we can still do better so first let's ask this question what if we made a mistake falling off a cliff is rather sudden so if you fall off a cliff you realize you've made a mistake but at that point it's too late and there's nothing more that you could do but many real problems are not like that let's say that you're driving a car and your model is a little bit erroneous so your model predicts that you'll go straight not when your steering wheel is pointed straight but when the steering wheel is a little bit to the left so the model is just a tiny bit off it says well if you steer like two degrees to the left then you'll go straight a fairly innocent mistake in a complex dynamical system so if you do that then when you actually execute your plan you'll go a little bit to the left instead of going straight and then you'll go a little bit to the left again and again and again now as you collect more data that iterative data collection should fix this problem so asymptotically this method will still do the right thing however we can do better and we can get it to learn faster by fixing the mistakes immediately as they happen instead of waiting for the whole model to get updated so what we can do is re-plan our actions at exactly the moment when we've made the mistake and perhaps correct it so the way that you can do better is to look at the state that actually resulted from taking that action and then ask your model what action you should actually take in this new state instead of continuing to execute your plan and this is what we're going to call model-based reinforcement learning version 1.5
which is also in the literature often called model predictive control or mpc which i alluded to briefly at the end of the previous lecture so the idea here is just like before run your base policy train your dynamics model plan through your dynamics model to choose a sequence of actions but then execute only the first planned action observe the actual real state that results from taking that action and then plan again so you append that transition to your data set and then instead of waiting for that whole sequence of actions to finish you immediately re-plan now this is much more computationally expensive because you have to repeat the planning every single time step but with this procedure you can do much better with a much worse model because your model maybe doesn't realize that steering a little to the left won't cause it to go straight but once it's actually gone to the left at that point the mistake is so big that it can probably figure out that okay now i really need to steer to the right to get out of here and this kind of model predictive control procedure can be much more robust to mistakes in the model than the naive 1.0 procedure that i showed on the previous slide and then of course every n outer loop steps you repeat this whole process and retrain your model and n here might be some multiple of the length of your trajectory so in practice this version 1.5 procedure pretty much always works better with the main consideration being that it's considerably more computationally expensive so replanning basically helps avoid model errors and your homework 4 assignment will essentially involve implementing an algorithm that basically does this so if this procedure is not clear to you please do make sure to ask a question in the comments and come to the class discussion and ask about clarifications because it's very important to get this particular procedure correct okay so now one question we can ask is well how should we re-plan so
So how do we plan through f(s,a) to choose actions? The intuition is that the more you re-plan, the less perfect each individual plan needs to be. So while the computational cost of re-planning might be very high, in practice, since you're going to be re-planning and fixing your mistakes, you can kind of afford to make more mistakes during this planning process. So a very common thing that people do when they actually implement this procedure is they use much shorter horizons for step 3 than they would if they were making a standard open-loop plan, and they just rely on the re-planning to fix those mistakes. So even things like random sampling, random shooting, can often work well here, whereas they might not work well for constructing a long open-loop plan. And if you remember the demonstration that was shown at the end of class last week, this was illustrating MPC with re-planning using actually a fairly short horizon, and really relying almost entirely on that re-planning to fix up the mistakes.
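A short-horizon random-shooting planner of the kind mentioned above can be sketched as follows. Here `model(s, a)` and `reward_fn(s, a)` are assumed, hypothetical interfaces for the learned dynamics and the reward; the horizon and sample counts are illustrative.

```python
import numpy as np

def random_shooting(model, reward_fn, s0, horizon=5, n_samples=1000,
                    action_dim=2, rng=None):
    """Random shooting: sample candidate action sequences, evaluate each by
    rolling it through the learned dynamics model, and return the first
    action of the highest-reward sequence (MPC executes it, then re-plans)."""
    rng = np.random.default_rng() if rng is None else rng
    best_return, best_first_action = -np.inf, None
    for _ in range(n_samples):
        seq = rng.uniform(-1.0, 1.0, size=(horizon, action_dim))
        s, total = s0, 0.0
        for a in seq:
            total += reward_fn(s, a)
            s = model(s, a)          # predicted next state under the model
        if total > best_return:
            best_return, best_first_action = total, seq[0]
    return best_first_action
```

Because MPC only ever executes the first action before re-planning, even this crude sampler with a short horizon can behave reasonably, which is the point made above.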
CS_285_Deep_RL_2023
CS_285_Lecture_8_Part_6.txt
All right, in the last portion of today's lecture I'll go through some tips and tricks for implementing Q-learning algorithms, which might be useful for homework 3, and then I'll give a few examples of papers that have used variants of the methods that I described in this lecture. So first, a few practical tips. Q-learning methods are generally quite a bit more finicky to use than policy gradient methods, so they tend to require a little bit more care to use correctly. It takes some care to stabilize Q-learning algorithms, and what I would recommend is to start off by testing your algorithms on some easy, reliable problems where you know that your algorithm should work, just to make sure your implementation is correct. Because essentially you have to go through several different phases of troubleshooting: you first have to make sure that you have no bugs, then you have to tune your hyperparameters, and then get it to work on your real problems. So you want to do the debugging before the hyperparameter tuning, which means that you want to do it on really easy problems where basically any correct implementation should work. Q-learning performs very differently on different problems. These are some plots of DQN-type experiments on a variety of different Atari games, and something you might notice is that there's a huge difference in the stability of these methods. So for Pong your reward basically steadily goes up and then flatlines; for Breakout it kind of goes up and then wiggles a whole bunch; and for some of the harder games like Video Pinball and Venture it's just completely all over the place. And the different colored lines here simply represent different runs of the same exact algorithm with different random seeds. You can see the different random seeds for Pong are basically identical; for Breakout they're kind of qualitatively the same but have different noise; whereas for something like Venture some of the runs work and some fail completely.
Large replay buffers do tend to help improve stability quite a lot, so using a replay buffer with a size of about 1 million can be a pretty good choice, and at that point the algorithm really starts looking a lot more like fitted Q-iteration, which is perhaps part of the explanation for its improved stability. And lastly, Q-learning takes a lot of time, so be patient. It might be no better than random for a long time while that random exploration finds the good transitions, and then it might take off once those good transitions are found; many of you will probably experience this in homework 3 when you train on the Pong video game. Start with high exploration, start with large values of epsilon, and then gradually reduce exploration as you go, because initially your Q-function is garbage anyway, so it's mostly the random exploration that will be doing most of the heavy lifting, and then later on, once your Q-function gets better, you can decrease epsilon. So it often helps to put epsilon on a schedule. A few more advanced tips for Q-learning: the gradients of the Bellman error can be very big. It's kind of a least-squares regression, so these squared-error quantities can be large, which means their gradients can be very large. And something that's a little troublesome is that if you have a really bad action, you don't really care about the value of that action, but your squared-error objective really cares about figuring out exactly how bad it is. So if you have some good actions that are like plus 10, plus 9, plus 8, and you have some bad actions that are minus 1 million, that minus 1 million will create a huge gradient, even though you don't really care that it's minus 1 million. Like, if you were to guess minus 900,000, it would result in the same policy, but your Q-function objective really cares about that, and that will result in big gradients. So what you can do is you can either clip your
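A simple way to put epsilon on a schedule, as suggested above, is a linear anneal followed by a constant floor. The particular values here (start at 1.0, decay to 0.02 over 100k steps) are illustrative defaults, not prescriptive.

```python
import numpy as np

def epsilon_schedule(step, eps_start=1.0, eps_end=0.02, decay_steps=100_000):
    """Linearly anneal epsilon from eps_start to eps_end over decay_steps,
    then hold it at eps_end. All values here are illustrative."""
    frac = min(step / decay_steps, 1.0)
    return eps_start + frac * (eps_end - eps_start)

def epsilon_greedy(q_values, eps, rng):
    """With probability eps take a uniformly random action; otherwise argmax."""
    if rng.random() < eps:
        return int(rng.integers(len(q_values)))
    return int(np.argmax(q_values))
```

Early in training the large epsilon makes the random exploration do the heavy lifting; as the Q-function improves, the schedule shifts the behavior toward the greedy policy.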
gradients or you can use what's called a Huber loss. A Huber loss you can think of as kind of interpolating between a squared-error loss and an absolute-value loss: far away from the minimum the Huber loss looks like absolute value, and close to the minimum, because absolute value has a non-differentiable cusp, the Huber loss actually flattens it out with a quadratic. So the green curve here on the right shows a Huber loss, whereas the blue curve shows a quadratic loss. The Huber loss actually mechanically behaves very similarly to clipping gradients, but it can be a little easier to implement. Double Q-learning helps a lot in practice; it's very simple to implement and it basically has no downsides, so it's probably a good idea to use double Q-learning. N-step returns can help a lot, especially in the early stages of training, but they do have some downsides, because n-step returns will systematically bias your objective, especially for larger values of n. So be careful with n-step returns, but do keep in mind that they can improve things in the early stages of training. Schedule exploration and schedule learning rates; adaptive optimization rules like Adam can also help a lot. Some of the older work used things like RMSProp, which doesn't work quite as well as more recent adaptive optimizers like Adam, so it's a good idea to use Adam. And also, when debugging your algorithm, make sure to run multiple random seeds, because you'll see a lot of variation between random seeds; you'll see that the algorithm is very inconsistent between runs. So you should run a few different random seeds to make sure that things are really working the way you expect and that you didn't get a fluke, and the fluke can either be unusually bad or unusually good, so keep that in mind. Okay, in the last portion of the lecture what I'm going to do is go through a few examples of previous papers that have used algorithms that relate to the ones I covered in this lecture. The
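Here is a minimal sketch of the Huber loss just described: quadratic for small errors and linear (like absolute value) for large ones, with the threshold `delta` playing essentially the same role as a gradient clip.

```python
import numpy as np

def huber_loss(error, delta=1.0):
    """Huber loss: quadratic for |error| <= delta, linear beyond.
    For a squared-error regression, this behaves much like clipping
    the gradient magnitude at delta."""
    error = np.asarray(error, dtype=float)
    quadratic = 0.5 * error ** 2
    linear = delta * (np.abs(error) - 0.5 * delta)
    return np.where(np.abs(error) <= delta, quadratic, linear)
```

With `delta=1.0`, an error of -1,000,000 contributes a gradient of magnitude 1 rather than 1,000,000, which is exactly the bad-action problem described above.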
first paper that I want to briefly talk about is this paper called Autonomous Reinforcement Learning from Raw Visual Data, by Lange and Riedmiller. This is a paper from 2012; it's quite an old paper, and it's actually one of the earliest papers that used deep learning with fitted Q-iteration methods. The particular procedure this paper used, though, is a little different from the methods that we covered in this lecture; it's actually more similar to some of the model-based algorithms that I'll discuss later. So in this paper what the authors did is they actually learn a kind of latent-space representation of images by using an autoencoder, and then they run fitted Q-iteration on the latent space of this autoencoder, on the feature space. But the particular fitted Q-iteration they use actually doesn't use neural networks; it uses something called extra random trees. So they use a non-deep but still fitted Q-iteration procedure on the representation learned by a deep neural network. So it's Q-learning on top of a latent space learned with an autoencoder, using fitted Q-iteration and something called extremely randomized trees for function approximation; you can think of extra random trees as basically very similar to random forests. And the demonstration that they had in this paper, which is pretty cool, is to use an overhead camera to look at this little slot-car racetrack and then learn to control the slot car to drive around the racetrack. Here is a paper that uses convolutional neural networks with Q-learning; this is deep Q-learning. This is a paper called Human-Level Control through Deep Reinforcement Learning, and this paper uses Q-learning with convnets, with replay buffers and target networks, and this kind of simple one-step backup that I mentioned, with one gradient step, to play Atari games. And this can be improved a lot: it can be improved a lot with double Q-learning, and the original method in this paper can actually also be improved a lot just by using Adam,
so that alone actually gets you much, much better performance. But for homework 3 you'll be implementing something fairly similar to this paper. Here is a paper on Q-learning with continuous actions for a robotic control application, or kind of a simulated control application. This is the DDPG paper, called Continuous Control with Deep Reinforcement Learning, using continuous actions with a maximizer network, and it uses a replay buffer and target networks with Polyak averaging, with a one-step backup and one gradient step per simulation step, and they evaluated on some kind of simple, low-dimensional toy robotics tasks. Here's a paper that actually uses a deep Q-learning algorithm with continuous actions for real-world robotic control, and this actually kind of exploits some of the parallelism ideas that I discussed before. So here you have multiple robots learning in parallel to open doors. It's a paper called Robotic Manipulation with Deep Reinforcement Learning and Asynchronous Off-Policy Updates, and this uses that NAF representation, so this is a Q-function that is quadratic in the actions, making maximization easier. It uses a replay buffer and target network, a one-step backup, and this one actually uses four gradient steps per simulation step to improve efficiency, because collecting data from the robots, and I guess it's not even simulation, it's actually the real world, collecting data from the robots is expensive, so you'd like to do as much computation with as little data as possible. And it's further parallelized across multiple robots for better efficiency. This method, which I showed actually in lecture 1, is also a deep Q-learning algorithm that takes this parallelized interpretation of fitted Q-iteration to the extreme. So here there are multiple robots that are learning grasping all in parallel, and there are actually multiple workers that are all computing target values, multiple workers that are all performing regression, and a separate worker that is managing
the replay buffer. So this is literally instantiating that system that I showed before, with process one, process two, and process three, and in this case each of those processes is itself forked off into multiple different workers on a large server farm. All right, if you want to learn more about Q-learning, some suggested readings. Classical papers: this is the Watkins Q-learning paper that introduced the Q-learning algorithm in 1989; this paper, called Neural Fitted Q-Iteration, introduces batch-mode Q-learning with neural networks. Some deep RL papers for Q-learning: the Lange and Riedmiller paper that I mentioned before; the DQN paper; this is the paper that introduced double Q-learning; this is the paper that introduced that approximate maximization with mu theta; this is the paper that introduced NAF; and this is a paper that introduces something called dueling network architectures, which is very, very similar to the NAF architecture but adapted for discrete action spaces. I didn't cover this in the lecture, but it's also a pretty useful trick for making Q-learning work better. All right, so these are the suggested readings, and you can find them in the slides; if you want to learn more, I highly encourage you to check them out. And just to recap what we've covered in today's lecture: we talked about Q-learning in practice, how we can use replay buffers and target networks to stabilize it; we talked about a generalized view of fitted Q-iteration in terms of three processes; we talked about how double Q-learning can make Q-learning algorithms work a lot better; how we can do multi-step Q-learning; and how we can do Q-learning with continuous actions, including with random sampling, analytic optimization, and a second actor network. And that's it.
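The random-sampling option for continuous-action Q-learning mentioned in the recap can be sketched very simply: approximate the max over actions by evaluating the Q-function at a batch of uniformly sampled actions. Here `q_fn` is an assumed placeholder for the learned Q-function.

```python
import numpy as np

def approx_max_q(q_fn, s, action_low, action_high, n_samples=1000, rng=None):
    """Approximate max_a Q(s, a) for continuous actions by evaluating Q at
    uniformly sampled actions and taking the best one. q_fn(s, a) is a
    hypothetical interface for this sketch."""
    rng = np.random.default_rng() if rng is None else rng
    actions = rng.uniform(action_low, action_high,
                          size=(n_samples, len(action_low)))
    values = np.array([q_fn(s, a) for a in actions])
    i = int(np.argmax(values))
    return actions[i], float(values[i])     # approximate argmax and max
```

This is the crudest of the three options from the lecture (the others being analytic maximization as in NAF, and a second actor network as in DDPG), but it's trivial to implement and often a reasonable baseline in low-dimensional action spaces.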
CS_285_Deep_RL_2023
CS_285_Lecture_22_Part_1_Transfer_Learning_MetaLearning.txt
Today we're going to talk about transfer learning and meta-learning. This deals with the question of how you can use experience from some source domains, from some source RL problems, to get into a position where you can more efficiently or more effectively solve new downstream tasks. Let's start with a bit of motivation. Let's think back to this example that we discussed in the first exploration lecture. Some of the MDPs that we work with, some of the Atari games for example, are fairly straightforward to solve even with relatively simple methods like the ones that you implemented for homework 3. But some other games, which might not look all that different, seem to be a lot harder to solve. So if you try to learn a policy for Montezuma's Revenge with your homework 3 Q-learning algorithm, you'll find that it really struggles to get past even the first level. So why is that? Well, the trouble has to do with the fact that these games look deceptively straightforward to us but are very difficult for the computer, because the computer doesn't come equipped with the prior knowledge that we have. So in this game the reward structure doesn't provide very good guidance on how to solve the task: like, you get a little bit of a reward for picking up the key, you get a little bit of a reward for opening the door; getting killed by a skull is bad, but you don't actually get any negative reward for it, so the only reason that it's bad is that if you get killed by the skull enough times then you lose the game, and then you can't keep picking up keys and opening doors. So this reward structure doesn't really provide you with a great deal of guidance about how to make progress in the game. And besides that, picking up the key and opening doors isn't necessarily the right thing to do to win the game, because in later levels you can go to different places and so on, and maybe you don't need to do all the steps that are rewarding to get to the finish. But for us, all these things don't pose a huge
challenge. In fact, when we're playing the game we might not even be paying attention to the score; we might just be using our prior knowledge, which informs us about how we should be making progress in a game like this. There's a visual metaphor at play here: this is some kind of explorer exploring an ancient cursed temple, and we kind of have this belief, both from our knowledge of video games and from our general physical common sense, that, you know, skulls denote bad stuff, keys are good for opening doors, ladders are things that can be climbed. And we also know, from basic knowledge of video games, that progressing through different levels, entering new rooms, and so forth is a good way to make progress. So our prior understanding of the problem structure can help us solve complex tasks very quickly when something about these tasks accords with our expectations, which is to say that it accords with our prior knowledge. What we're doing when we learn to play Montezuma's Revenge is we're essentially doing a kind of transfer learning: we're transferring what we know about the world into this new MDP. So while before we talked about Montezuma's Revenge as an example of a domain in which to study exploration, now we're going to talk about these things as transfer learning problems, as problems where maybe the right thing to do is not to devise a good de novo exploration algorithm, but rather an algorithm that can transfer what you know from other tasks that you've solved to solve this new task more effectively. Okay, so can reinforcement learning use this prior knowledge the way that we do? Perhaps. So could we, for example, build an algorithm that can watch lots of Indiana Jones movies and use that to figure out how to solve Montezuma's Revenge? Well, that's perhaps a very lofty goal, so we're realistically not quite there yet, but we can start thinking about the transfer learning problem, and we can actually devise some pretty clever algorithms to address it. So
the idea is that if we've solved prior tasks, prior MDPs, we might acquire useful knowledge for solving a new task. How can this knowledge be encapsulated? How can it be represented? Well, there are actually quite a few choices to be made here. We could say that, well, the Q-function is good to transfer, because the Q-function tells us which actions or states are good. So if you somehow had Indiana Jones's Q-function for, you know, stealing the treasure, maybe that would be a good Q-function with which to initialize Montezuma's Revenge. But maybe not: like, in the video game you don't move around by moving your legs and arms, you move around by pushing buttons, so perhaps the Q-function doesn't transfer that well in this case. The policy tells us which actions are potentially useful; that can transfer really well. Some actions are never useful, so as long as you can rule those out, then maybe you can make more progress. Or maybe you could transfer models: maybe the laws of physics, the laws that govern how the world works, are the same in both domains even though some other aspects of the task are different, and transferring models can also be a very effective strategy. It could also be that in a more abstract setting, transferring some kind of features or some hidden state might provide you with a good representation. So perhaps you don't have good actions for playing Montezuma's Revenge, you don't even have good models because things don't quite line up, but the visual features might help: maybe from watching some videos you figured out that skulls and ladders and keys are important things in the world, and as long as you start off the game with a good skull, ladder, or key detector, that might already get you some mileage. So for this last one, don't underestimate it; it may actually be that a little bit of visual pre-training can go a long way. But before we go into the particular techniques, and I won't cover techniques for doing all of these things, I'll cover just kind of a
sampling of influential ideas in this area. Before we get into that, let's nail down some definitions. Formally, transfer learning deals with the problem of using experience from one set of tasks for faster learning and better performance on a new task. Now, when we talk about transfer learning in the context of RL, a task is of course an MDP, so you can also read this as saying: use experience from one MDP, or from a set of MDPs, for faster learning and better performance on a new MDP. So the MDP that you train on, the one where you're getting the initial experience, that's called the source domain, or source domains, because that's the source of your knowledge; and the MDP that you want to solve, the new task, that's the target domain. So that's the one where you want to get good results. Classically, transfer learning is concerned with the question of how well you can do in the target domain, although it's closely related to things like lifelong learning or continual learning, where the aim is to do well on both the source and target domains. Of course, typically it's pretty unusual to do well on a target domain without also doing decently well in the source domains, but there's a little bit of terminology where people will refer to things like backward transfer, as in the question of whether, after training on the target domain, you are still good on the source domain. But for now we won't concern ourselves with that; we'll just say our goal is to do well in the target domain regardless of what happens in the source domains. People will use a little bit of terminology to refer to how quickly you learn in the target domain, and the term "shot" is sometimes used to refer to, like, how many shots you take at trying the target domain. So you could say, well, maybe my algorithm transfers in zero shot. Zero shot means that without even trying anything in the target domain, right away, right after training on the source domains, you immediately get good
performance in the target domain. So zero shot would mean that if you interacted with some MDPs that involve, you know, Indiana Jones stealing the treasure, right away you'd get a policy that you could simply deploy on the Montezuma's Revenge game and it would immediately play the game well; that would be called zero-shot transfer. One-shot transfer means you tried the task once. What that means is somewhat domain-dependent: maybe for Montezuma's Revenge it would mean that you play it for one episode, basically until you run out of lives, or maybe, if it's a robot interacting with the world, the robot will make one attempt. Few-shot means you try the task a few times, and many-shot means you try the task many times. So these are not necessarily very precise terms, but they can provide a good indication of what's going on if you read a paper and see one-shot or zero-shot. All right, so how can we frame these transfer learning problems? Well, I'll start off by saying that there isn't really any one single problem statement here. A lot of what we discussed so far in the course has been, you know, fairly foundational material with fairly well-understood theory and guidelines on how to do things. A lot of transfer learning is a little bit ad hoc, because it's so dependent on the nature of the domain that you're transferring from and the one that you're transferring to. So if you want to pre-train on Indiana Jones videos and play Montezuma's Revenge, you might use a very different algorithm than if, for example, you wanted to train a robot to grasp lots of objects and then deploy it to grasp a new object. But there are still a few common ideas that people use, and that's what I'll try to cover in today's lecture. So keep in mind that I'll only cover a smattering of the ideas, not everything, and I'll try to cover the main central-hub ideas that can be used in a variety of settings. So I'll talk about forward transfer:
forward transfer deals with learning policies that transfer effectively, where you might train on a source task and then maybe you just run something on a target task, or maybe you fine-tune on a target task. Conventionally this relies on the tasks being quite similar, and a lot of work on forward transfer deals with finding ways to either make the source domain look like the target domain, or make it so the policy recovered from the source domain is more likely to transfer to the target domain. The other big area is multitask transfer, where you train on many different tasks and then transfer to a new task, and that could work really well, because now, instead of relying on a single source domain being close to the target domain, you could get many source domains, so the target domain is sort of within their convex hull, intuitively. So if you want a robot to grasp a new object and you've only ever trained on one other object before, it might be very hard for that to transfer; but if you train on many different objects, then this new object might look kind of similar to the range of things you've seen. So multitask transfer tends to be easier. There are a variety of ways to do it: sharing representations and layers across tasks, or maybe simply just training a policy that is conditioned on some representation of the task and generalizes to the new one immediately. It does typically require the new tasks to be similar to the distribution of training tasks, but that's often easier to achieve than ensuring that your single source task is similar to your single target task. And then, for a large chunk of today's lecture, we're going to actually talk about something called meta-learning. Meta-learning you can think of as the logical extension of transfer learning, where instead of trying to simply train on some source domain or domains and succeed in the target domain, either in zero shot or with naive fine-tuning, in meta-learning we're actually going to train in a special way in our
source domains, in a way that is aware of the fact that we're going to be adapting to a new target domain later. So meta-learning is often framed as a problem of learning to learn: essentially, you're going to try to solve those source domains not necessarily in a way that gets you a really great solution on all of them, but in a way that prepares you to solve new domains. So it accounts for the fact that we'll be adapting to a new task during training. We'll talk about each of these things, and I'll actually spend most of the time today on meta-learning, but I will briefly go over forward transfer and multitask transfer. Part of the reason why I like to spend more time on meta-learning in these lectures is that in some ways that's the area where there's a little bit more in the way of principles and common themes, things that we can learn that we can use in many different settings. A lot of the non-meta-learning transfer work, it's very deep, there's a lot of interesting work there, but it tends to be a little scattered, so it's a little hard to nail down a small set of principles. But I'll do my best to nail down those principles that appear to be broadly applicable, and hopefully they'll give you some idea for where to go, and I'll also have lots of references that you could read if you want to dive deeper into this. But keep in mind that these things really are at the frontier of research, and there isn't sort of one set of algorithms that you could just take and use for whatever transfer learning problem you might encounter. Okay, so let's talk a little bit about pre-training and fine-tuning in RL. You know, outside of reinforcement learning, if you were to think about addressing just kind of general transfer learning problems, let's say in computer vision, a very popular approach is to train some kind of representation on a large data set: maybe you train a convnet on a large data set of images, or you train a language model on a large data set of text,
like something like a BERT model, and you use that to basically extract representations. These could be representations of images, representations of text, something like that. And then you would train a few additional layers on top of those, maybe just a few fully connected layers, or maybe you would fine-tune the entire network, to solve a particular task that you want, for which you have a comparatively more limited amount of data. And this is a pretty standard workflow across a range of supervised learning domains, and you could imagine employing something like this in reinforcement learning too. In some cases this will actually work out of the box: you could learn representations with reinforcement learning and then take those representations and use a reinforcement learning algorithm to fine-tune a few additional layers on top of those for solving your task. You can also learn representations with supervised learning and then fine-tune a few layers on top of those with reinforcement learning to solve your reinforcement learning task. So these things are all, you know, fairly straightforward. I won't go into them in too much detail, because these are kind of the standard techniques that we would imagine inheriting from supervised learning, and so you can learn about that in, you know, your favorite deep learning class, and it's not really all that different. But I will talk about a few peculiarities and a few tools that, to me at least, have proven to be especially useful when applying this kind of pre-training and fine-tuning schema in RL. So before we do that, let's talk about what issues we're likely to face when we do this, and these are not necessarily issues that are exclusive to reinforcement learning, but they tend to come up in reinforcement learning pretty often in my experience. One issue is domain shift, and this is a fairly obvious one: basically, representations learned in the source domain might not work well in the target domain. This happens very often if you
imagine, for example, learning a task in some kind of simulator that simulates visual observations, or other kinds of high-dimensional observations like sounds and so on, and then transferring the resulting policy to some real-world environment where the observations are structurally similar but not exactly the same. So if you trained a driving policy to drive a car in a video game and then you want to drive a real car, things are not that different: things basically line up, the physics are similar, the mechanics of the environment are similar, but things don't look exactly the same, so there's a little bit of a gap. Now, it could be that the gap was even larger. It could be that the gap was not merely perceptual; it could be that there are actually some things that you can do in the source domain that are not possible in the target domain at all. And this is actually a big difference: the first category just deals with things being different visually but not necessarily different mechanically; the second category deals with things actually being different in a physical sense, but still structurally similar enough that you feel like there is something from the source domain that you can inherit. This is much harder, but there are also tools that we can use to deal with it. And then there are also some issues that we get just from applying the notion of fine-tuning in general to RL. For example, the fine-tuning process may need to explore in the new target domain, but remember that the optimal policy in any fully observed MDP can be deterministic, so you might end up with a deterministic policy after running, let's say, policy gradient in your source domain. You deploy it in the target domain and it doesn't explore anymore, because it has become fully deterministic, because that was the optimal solution in the source domain. So there are a few of these kind of low-level technical issues that you might need to deal with. Let's talk about the first issue, the domain shift problem. There are a
number of tools that have been developed, principally in the computer vision community, for dealing with these kinds of problems, and I should say that I'm going to discuss these issues as they relate to visual perception, but these are not things that are exclusive to visual perception; they're kind of general issues with high-dimensional observations. Of course, those high-dimensional observations often do tend to be visual, because that's where we often want to use a simulator, for example, and where we encounter the most challenges. So let's think about this example of learning to drive in simulation. We want to train on the images in the simulator, and we want to do well when the policy is presented with images in the real world. So we're going to, of course, be training our network in the simulator, and then we'll use that network in the real world. And let's imagine that we have some small number of real-world images. We might not even have real-world experience; we might simply have a few images that kind of anchor us to the real world, so they might not even tell us anything about the actions, they're just examples of real-world photographs. We can supervise the simulated experience with the correct answer; this could be supervised learning or it could be reinforcement learning, so the correct answer here could mean a target Q-value, that kind of doesn't matter, it's just some kind of loss that you put on top of your network. But of course, when we then evaluate that model on a real-world image, we might get an incorrect answer, right, because the real-world image looks different. So one assumption that allows us to deal with this is something called the invariance assumption. The invariance assumption says that everything that is different between the two domains is irrelevant. Let's pause for a minute to think about what this means: everything that is different between the domains is irrelevant. So maybe the simulator doesn't simulate rain, but the real-world images might
have rain in them; the invariance assumption would imply that whether or not it's raining is irrelevant for how you should drive. On the other hand, the positions of the cars on the road would match between the simulation and the real world (well, statistically of course), so that is not a problematic difference. Now, is this assumption reasonable? Well, sort of. In reality the rain might actually affect how you drive, but it's not a bad assumption to make in many cases if you believe that most of the discrepancies don't have to do with the physics or the dynamics, but really with how things look. If you subscribe to the invariance assumption, you can write it out formally. Let's say that the images are denoted with x. Formally, what this means is that p(x) is different, so you have a different distribution of images in the source domain and in the target domain, but there exists some representation, let's call it z = f(x), basically some way to featurize x with a featurizer f, so that the probability of the output y given z is the same as the probability of the output given x. That means that if you featurize x using f, you retain all the information needed to predict the label, or to predict the Q value, or to predict the action, so the representation is not lossy, but p(z) is the same in the source and in the target domain. So just to unpack the statement again: p(x), the distribution of inputs, is different in the two domains, but there exists some featurization z = f(x) so that p(z) is the same in the two domains, and p(y given z) is equal to p(y given x), meaning that z contains everything you need to predict y. If you can find such a representation, and the invariance assumption holds, then you will be able to transfer perfectly. We can pause for a minute and think about the implications of this. For example, if this assumption doesn't hold perfectly but it holds mostly, you could imagine that you
would be in a situation where you find a z where p(z) is the same in the source and target domain, but perhaps p(y given z) is not exactly the same as p(y given x); perhaps something was lost. That might happen in the example with rain: if you neglect the rain, you can still solve the task mostly, but maybe your solution will not be as good as what you would have gotten if you had held on to that information. But of course you can't hold on to it, because there's no rain in the simulator. There are a number of ways to acquire these kinds of invariant representations. One of the most commonly used techniques goes under a variety of different names, domain confusion, domain adversarial neural networks, and so on, but the idea is basically the following. We're going to take some middle layer in these neural networks (people often take the layer right after the convolutions) and then add an additional loss to force that layer to be invariant. Invariant means that, if the activations in that layer are denoted with z, then p(z) is the same in the source and target domain, and that's where we need a few of those target-domain images. So what we're going to do is train a binary classifier that gets to look at the activations at that layer, a classifier d(z), and it's going to predict true if the input is from the target domain and false if it's from the source domain. Then we'll compute the gradient of that classifier with respect to z, reverse the gradient, and backpropagate that into the network. So we'll basically teach the network to produce a z such that the classifier can't tell whether it came from the source domain or the target domain. Now of course there are a variety of details on how to do this right, whether you reverse the gradient, or train the classifier to output the opposite label, or train the classifier to output a probability of 0.5; there are a bunch of these details that do actually matter in practice. But this is the
high-level idea. And that's why, in order to do this, you do need some examples from the target domain, but you don't necessarily need to run RL in the target domain; you just need some example images. You do want to be a little careful with this idea, and there are some ways in which you could go wrong. For example, if you have bad data from the target domain, data of, let's say, very bad human drivers, and then you run reinforcement learning in the source domain and end up with very good behavior, very good driving, then your representation will not only be invariant to whether you're in the simulator or in the real world, it'll also try to be invariant to whether you're good or bad, and you really don't want that, because that will really mess up your Q-learning algorithm. So you have to be a little careful about this, and the nature of the data that you get in the target domain really does affect how well this trick works, but it can be a very effective trick in practice. Now, I mentioned that sometimes you could also get into a situation where it's not just the images that differ; it's actually the dynamics themselves that are different, in which case simply forcing invariance like this might not be a good idea. So can we do some kind of domain adaptation if the dynamics change? Invariance is not good enough if the dynamics don't match, because you don't actually want to ignore the functionally relevant differences. But what you could do is actually change your reward function to punish the agent for doing things in the source domain that would not be possible in the target domain. Here's a little illustration of this. Let's say that in the target domain, in the real world, you want to get from the start to the goal and there's a wall in between, so you have to go around the wall, but in your source domain, in your simulator, this wall is not present. So if you train in the source domain, you'll get this behavior that goes straight to the goal,
which of course doesn't work in the target domain. So what we can do, if we have a little bit of experience from the target domain, is change our reward function to provide a really big negative reward for doing things in the source domain that would not be possible in the target domain. This is a very similar idea to domain adaptation, except that instead of changing your representation of the input to make it seem invariant, you're actually changing your behavior to make your behavior seem invariant. Intuitively, what such a technique would do is punish the agent for doing those things that violate the illusion, that make it apparent to the agent that it's in the source domain and not in the target domain. This is a clip from a film that some of you might recognize: this man thinks that he's sailing on a boat, but then he gets to the edge of the green screen, goes through it, and realizes that it's not actually a clear sky, it's actually the wall of a film studio. So you don't want to violate the illusion, and what this additional reward term will do is prevent you from violating the illusion. For example, if you want to train the ant to run on an infinitely large flat plane, and it has a limited arena in which to practice, then when it gets to the edge of the arena it will violate the illusion and incur some negative reward. It turns out that concepts very similar to that invariance technique from before can actually be used to compute this reward function. Essentially, the reward function that optimally leads to the desired behavior here is going to be the difference between the log probability of a transition happening in the target domain minus the log probability in the source domain. This is very intuitive: it's just saying, favor those transitions that are no less likely in the target domain than in the source domain. There are a variety of ways to approximate this
quantity without training a dynamics model. One of the ways to do it is to train a discriminator for it. Now, since you're estimating a conditional probability, you actually need two discriminators: a discriminator for the joint (s, a, s') and a discriminator for the joint (s, a), and you take the difference of the two. I won't go into the details of why this works; for the details of the method I would encourage you to read the paper. The high-level idea I want you to take away from this slide is just that you can use invariance-based ideas to handle changes in dynamics. What that leads to is agents that will try to behave in the source domain in such a way that they don't do anything that would be impossible in the target domain, and it can be instantiated by adding a term to the reward function based on a discriminator. But it's a little more complex than the standard image-based setting from before, and you actually need two discriminators and take the difference of them, so to learn more about the technical approach I would encourage you to read the paper. Now, you could also consider: when might this not work? A technique like this would prevent you from doing things in the source domain that would not be possible in the target domain, but it might also be that the source domain doesn't permit you to do certain things that are needed in the target domain, and this technique wouldn't do anything to fix that. So in a sense you would be learning, intuitively, the intersection of the two domains, and if the intersection is big and you have good behavior there, then you're in good shape, but maybe it isn't. Now, if you further fine-tune in the target domain, basically by running more RL, that can also make the transfer learning process work a lot better, but there are a few issues that make fine-tuning in RL a little bit harder than fine-tuning with supervised learning. First, RL tasks are generally less diverse, so pre-training
and fine-tuning, for example in computer vision or natural language processing, typically relies on scenarios where your pre-training is done in extremely broad settings: maybe you pre-train in a setting where you have millions of ImageNet images, or you pre-train on billions of lines of text, as for BERT, and then you fine-tune on a much more narrow domain. In RL, that's usually not how it works; in RL you might have much more narrow tasks to pre-train on, although of course if you're lucky enough to have a very broad task distribution, then things will go better. But if you pre-train on more narrow tasks, then the features are generally less general, and the policies and value functions can become overly specialized. Second, the optimal policies in fully observed MDPs tend to be deterministic, so the longer you train, the more deterministic the policy will become, and combined with problem number one this can be a big issue, because as your policy becomes more deterministic, it explores less and less. So you have loss of exploration at convergence, and low-entropy policies will adapt very slowly to new settings because they're not exploring enough. This, combined with having features that are less general and overly specialized policies and value functions, can make fine-tuning extremely slow in reinforcement learning. So for that reason, simply fine-tuning naively often is not very effective, and we often have to do a variety of things to ensure that our pre-training process results in solutions that are more diverse than what we ordinarily would have gotten with regular RL pre-training. Now, there isn't any one technique for doing this, but we actually discussed techniques that provide some degree of this in the Exploration 2 lecture, when we talked about unsupervised skill discovery, and in the control as inference lecture, when we talked about maximum entropy reinforcement learning methods. Both of those classes of techniques can be effective as pre-training methods
because they can get you more diverse solutions. So in Exploration 2 we talked about how you could run, for example, the Diversity Is All You Need algorithm or other information-theoretic methods that would discover a variety of different behaviors, a variety of different skills. You could either use those techniques directly, or use them in combination with a conventional reward function to learn to solve a task in a variety of ways. You can also use the maximum entropy RL methods that we talked about in the control as inference lecture to avoid this loss of exploration at convergence, and both classes of techniques can be more effective for transfer than just naively pre-training and fine-tuning. I won't go into this in more detail, though, because unfortunately to date there aren't really very general and broadly applicable principles here. There's a lot of research, but so far the best I can tell you is: well, maybe borrow some of the ideas from Exploration 2 and control as inference. Another thing that is often a very useful tool in transfer learning is to manipulate the source domain a little bit. Now, you can't always do this; maybe the source domain is just a particular simulator that you're training in that you don't get to change. But if you do have some control over the source domain, there are a few things you can do that will maximize the probability of successful transfer. The basic intuition behind a lot of these things is that the more varied the training domain is, the more likely we are to generalize zero-shot to a slightly different domain. Basically, if you train in a video game where you're driving a car and it's always sunny and bright out and it's always in a city, that is less likely to transfer to the real world than if the game simulates different times of day and maybe different urban environments. The more variety you have in the source domain, the more things will transfer. The way that this often shows up in practice is
through randomization, where people will intentionally add more randomness to a source domain, perhaps a lot more than they would expect in the real world, to increase the robustness of the policy to variability in various physical parameters. So for example, if you'd like to train the hopper to hop, it has some physical parameters, maybe friction and mass, so these are two parameters, and the real-world system is over here, so it has these values of parameter one and two. If you train on a narrow range of parameters, you might not generalize to the kind of parameters you will see in the target domain, but if you train on a broader range of parameters, then it's more likely that the target domain will be within the set of parameters that you've trained on. So that's the basic idea, and this idea has shown up in the literature in various ways, both for randomizing physical behavior and for randomizing visual appearance. One of the earliest papers to apply this in deep RL was this paper by Rajeswaran et al., which studied randomization of physical parameters. The idea is that maybe you want to train on one kind of hopper and test on a different hopper, maybe a hopper with different mass parameters, but if you do that with a single setting, that might not work, so you train on a variety of different parameter settings, and maybe then transfer will be more effective. In this paper there is some analysis of how this works or doesn't work. This is some very basic analysis: here the mass of the torso of the hopper is being varied, with the mass at test time on the x-axis and performance on the y-axis, and the three different plots correspond to three different training masses. The left one is trained with a mass of three, the middle one with a mass of six, and the right one with a mass of nine. You can see that training with a mass of three of course performs best with a mass of three and then rapidly falls off as the mass gets bigger, and training with a mass of
six performs best with a mass of six. If the policy is trained on a range of masses, then this is the performance: you can see the performance is very high across all masses, and for your reference, the distribution of training masses looks roughly like this, kind of a normal distribution centered at a mass of six. It's not too surprising that training on a variety of physical parameters results in a policy that is more robust. What was perhaps a little bit surprising about this paper is that it worked very well with comparatively little degradation in overall performance. You would think that there would be a trade-off between robustness and optimality, meaning that if you want to be more robust, you'll get lower reward, but that doesn't seem to be the case in many instances. Of course, that's not a universal conclusion, but part of the reason why you might expect there to be a little bit of a free lunch is that deep neural networks are actually very powerful, so it's not unreasonable to imagine that you can train one network that is just as good across a variety of masses as any single network for one particular mass. Another interesting observation, and this is very important when we talk about transfer learning, is that randomizing enough parameters actually creates robustness to other parameters that were not randomized. The particular experiment that Rajeswaran et al. did here was to take four different physical parameters that were being varied in the simulator and exclude one of them from the randomization: the mass was always the same, but friction, joint damping, and armature were varied. They found that this kind of solution, that's the blue line here, was actually decently robust to mass, and intuitively you can kind of see why that would be the case, because many of these physical parameters have kind of redundant effects. If you decrease mass, that's a lot like decreasing friction, because your ground reaction forces
won't be as large, and therefore your friction force will not be as large. So while changing friction is not exactly the same as changing mass, if you randomize friction, you will be a little bit more robust to mass. And that's actually very important, because in reality, when we're doing transfer learning, we almost always have unmodeled effects; we almost always can't actually vary all the parameters that distinguish the source domain from the target domain. So it's very good to know that if you vary enough parameters, you might still be a little bit robust even to those parameters that you didn't vary. And then, of course, the other thing you could do is, if you get a little bit of experience in the target domain, you can adapt a little bit: you can change your distribution of parameters to be closer to the target domain, and then things would work better. Now, the idea of randomization has been used very widely in transfer learning for RL. This was the first paper that applied it in deep RL with visual observations, for flying a drone. It's been used with recurrent neural networks that can be robust to physical variation, and also actually adaptive. More recently, this technique has been extremely popular for learning locomotion policies; this is a paper from ETH Zurich that shows a fairly extreme degree of robustness for a policy trained in a physical simulator with a high degree of randomization. So this has been a very influential idea across the board for transfer, especially in robotics. Now, the idea is not exclusive to robotics; you could imagine randomizing simulators for all sorts of other domains, but robotics especially seems to be one where this has really taken off. If you want to read more about the techniques that I covered, here are some references for domain adaptation, fine-tuning, and randomization. In the last part of this section I'm going to also talk a little bit about multitask transfer. This discussion will be quite a bit shorter, because
multitask transfer is a very powerful tool, but we're going to talk about it a lot more when we discuss meta-learning later, so this will just be a quick primer on the multitask idea. The basic idea is that you can learn faster, and perhaps transfer better, by learning multiple tasks. If you have a variety of tasks, maybe you have this robot that needs to do a whole range of household chores, you could learn each of these tasks individually, but it's very likely that the tasks share some common structure. Not all of their structure will be shared, but perhaps they share some structure in the sense that the way the robot moves its arms is kind of similar, the way it touches objects might be similar. So perhaps if you don't train them individually, but instead train them all together, the fact that you can share those representations will make all the tasks learn quicker. If one of the tasks figures out how to pick up an object, that capability can sort of immediately be used by the other tasks, and maybe they'll explore better or learn faster. Furthermore, if you learn multiple different tasks and then are provided with a new task at test time, it seems more likely that you will have something to transfer to that target task if you have more source tasks, kind of a similar intuition as with randomization: if you have a greater variety of training situations, then it's more likely the test situation will look somewhat familiar to you. So multitask learning can accelerate learning of all the tasks that are learned together, and it can provide better pre-training for downstream tasks. Now, there's not a heck of a lot more to say about this. The basic version really hinges on having a sufficiently effective reinforcement learning algorithm that you can do this multitask training, and that kind of amounts to scaling things up appropriately, choosing the right hyperparameters, and so on. There are various techniques that
people have proposed that are specifically designed to make multitask training better, but there isn't really one killer technique that works across the board. So what I would say there is: if you're really interested in this, you can do a literature search and read up on multitask learning in RL, but the basic starting point is just to take the same kinds of algorithms that you would use in single-task RL and try to scale them up. I do want to say a few words about the way that this kind of stuff can be framed as an MDP. One somewhat straightforward but very important idea is that multitask RL really does correspond to single-task RL in a joint MDP. So if you're ever wondering how to represent multitask reinforcement learning as an RL problem, keep in mind that it's really the same RL problem. In a standard RL setting, you first sample the initial state from p(s0) and then you roll out your policy. If you want to embed a multitask problem into this setting, all you have to do is change the initial state distribution. You could think about it like this: if you're learning to play multiple different Atari games with the same policy, in a regular Atari game the initial state distribution is just the start of that game; in this multitask Atari MDP, the initial state distribution is the distribution over games. So on the first time step you sample a game, and then you play that game thereafter. That is still a single MDP, so in principle the algorithm doesn't actually need to change at all in order to do this; you just pick the MDP randomly in the first state, and that's part of the MDP definition. Now, there can be a little bit of a nuance, because maybe the policy can do multiple things in the same environment. If this were an Atari game, this wouldn't be an issue, because you could tell which Atari game you're playing by looking at the screen, but maybe you have a robot in your home, and the robot can, in the same initial state, go
and do the laundry or go and do the dishes. In that case, if you want to instantiate multitask learning as a standard RL problem, you need to do a little bit more to indicate to the agent which task it's supposed to be doing. The way that we do this is by assigning some sort of context to each task. The context can be a variety of different things: it could be as simple as a one-hot vector indicating whether we're doing the dishes or doing the laundry, or it could be some kind of descriptor of a task, maybe a goal image or even a textual description, and all of those are reasonable choices. We call these contextual policies. A standard policy is just pi of a given s; a contextual policy is pi of a given s and the context omega, and if you learn it with an actor-critic method or a Q-learning method, then your Q function would also take omega as an input. So this could be a one-hot vector indicating whether you're doing the dishes or the laundry, it could be a textual description, or it could be something else. This is very simple: all you have to do is augment the state space to add the context to the state, and now the problem of training this multitask policy to do all these tasks basically amounts to the same multitask MDP from before, where omega is chosen randomly at the initial time step and then doesn't change for the rest of the episode. People have trained contextual policies for all sorts of settings: maybe you have a robot stacking Lego blocks and omega is the location where it stacks them, or you have a virtual character walking and omega is the desired walking direction, or maybe you have a hockey robot and omega is where it should hit the puck. This is all pretty straightforward to do; you don't actually have to change your algorithm for the most basic version, you just change your MDP definition to add this additional variable to the state. A particularly common choice of contextual policy is what's called a goal-conditioned policy. In
a goal-conditioned policy, the context is just another state, and your reward rewards you for reaching that state: either just one if you reach the state exactly, or a little ball around the state, so that being close to the desired state is considered a positive reward. Goal-conditioned policies can be especially convenient because you don't need to manually define the reward for each task; you can just sample a bunch of random states to be your goals, and you can transfer zero-shot to a new task if it's another goal. So if you're lucky enough that your new task is defined by a goal, then zero-shot transfer is entirely possible. This has some disadvantages, though, because training such goal-conditioned policies is often actually difficult in practice; it just represents a fairly difficult reinforcement learning problem. And not all tasks are goal-reaching tasks: this example from the Exploration 2 lecture shows an instance of a task that cannot be represented as a goal, where you have to reach the green location while avoiding the red circle, so there's no single goal that explains this. If you want to learn more about goal-conditioned policies, I encourage you to read a few of the related papers in this area, because while setting up goal-conditioned RL in the most straightforward way is actually very simple, you just define a particular MDP, if you want to make these methods work really well, there are a variety of tricks that can be really useful. These tricks include clever ways of selecting which goals to train for, clever ways of representing the value functions or Q functions, and clever ways of formulating rewards and reinforcement learning loss functions that are especially effective when you're trying to reach goals. So the basic version of this is straightforward; however, making it work really well is more complex and outside the scope of this lecture, but if you're interested in this, I would encourage you to read these papers.
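To make the contextual and goal-conditioned ideas above concrete, here is a minimal sketch: tabular Q-learning on a toy 1-D chain, where the goal is sampled at the first time step and then simply appended to the state, exactly as in the joint-MDP view. The environment, the reward (1 for reaching the goal), and all names here are illustrative assumptions, not from any particular paper.

```python
import random

random.seed(0)
N = 6                        # states 0..5 on a 1-D chain
ACTIONS = (-1, +1)           # move left / move right
Q = {}                       # Q[(state, goal, action)] -> value

def step(s, a):
    return min(max(s + a, 0), N - 1)

def greedy(s, g):
    # The goal g is just part of the (augmented) state the policy conditions on.
    return max(ACTIONS, key=lambda a: Q.get((s, g, a), 0.0))

def train(episodes=3000, eps=0.3, alpha=0.5, gamma=0.9):
    for _ in range(episodes):
        g = random.randrange(N)          # context (goal) sampled at t = 0 ...
        s = random.randrange(N)          # ... then fixed for the whole episode
        for _ in range(2 * N):
            if s == g:
                break
            a = random.choice(ACTIONS) if random.random() < eps else greedy(s, g)
            s2 = step(s, a)
            r = 1.0 if s2 == g else 0.0  # goal-reaching reward
            target = r + (0.0 if s2 == g
                          else gamma * max(Q.get((s2, g, b), 0.0) for b in ACTIONS))
            Q[(s, g, a)] = Q.get((s, g, a), 0.0) + alpha * (target - Q.get((s, g, a), 0.0))
            s = s2

def rollout(s, g, limit=N):
    path = [s]
    while s != g and len(path) <= limit:
        s = step(s, greedy(s, g))
        path.append(s)
    return path

train()
print(rollout(0, 4))   # greedy policy conditioned on goal 4
print(rollout(5, 1))   # same Q-table, different goal: zero-shot reuse
```

The point of the sketch is that nothing about the Q-learning update changed; the goal entered only through the augmented state key (s, g, a) and the goal-reaching reward.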
CS_285_Deep_RL_2023 / CS_285_Lecture_10_Part_4.txt
All right, so in the next part of today's lecture we're going to extend the discussion of LQR that we had in the previous part to stochastic dynamics and also to nonlinear systems. Let's start with stochastic dynamics, because that's actually a pretty easy one. Before, we had linear dynamics that are deterministic. If we have stochastic dynamics, in the special case where the stochastic dynamics are gaussian, meaning that p of x t plus one given x t, u t is a multivariate normal distribution with a mean given by the linear dynamics and a constant covariance, then it turns out that exactly the same control law that we had before is still optimal. I'm not going to derive this, although you can derive it as a homework exercise on your own, but the intuition for why that's true is that a gaussian is symmetric, which means that if your mean is at a particular value of x t, and you go a little to the left and a little to the right, those differences will actually cancel out in your quadratic cost and you'll end up with the same value. So adding gaussian noise turns out to not change the solution for u t. However, there is a little bit of a nuance to this, which is that adding gaussian noise does change the states that you end up visiting. Remember, in LQR we have this backward recursion, and the forward recursion computes the states that you visit. Now the states that you visit are actually stochastic, which means that you can't produce a single open-loop sequence of actions, but you can treat that expression for the optimal action, capital K times x plus little k, as a controller, as a policy, and it turns out that if you use that policy, it is the optimal closed-loop policy in the linear quadratic gaussian case. That's quite an interesting result. So there's no change to the algorithm; you can ignore the sigma due to symmetry, and if you want to check this on your own, the hint is that the expectation of a quadratic function under a gaussian actually has an analytic solution. You can
look this up, and once you can express the expected value of your quadratic functions under these gaussians, you can calculate their derivative, set the derivative to zero, and you will find that the control law is the same. But the important difference here is that now you are not getting a single sequence of states and actions; you're really getting a closed-form control law out of LQR. That's kind of interesting, because it turns out that LQR actually does produce closed-loop plans, not just open-loop plans. So x t is now sampled from some distribution and it's no longer deterministic; it turns out to actually still be gaussian, which is very convenient. So this is basically the stochastic closed-loop case, and the particular form of pi that we ended up with after using LQR is a time-varying linear controller: our action is now linear in the state, capital K times the state plus little k, but it's potentially a different capital K and lowercase k at every time step. So this maybe gives us some idea of an alternative to global neural net policies. All right, so that's kind of the easy part. Now, the main thing I'm going to talk about in this portion of the lecture is actually what happens in the nonlinear case, and in the nonlinear case we can extend LQR to get something that is sometimes called differential dynamic programming, or DDP, and also sometimes called iterative LQR, or iLQR, and sometimes called iLQG if you have the linear gaussian setting. Before, we had these linear quadratic assumptions, which were the assumption that our dynamics is a linear function of x and u and our cost is a quadratic function of x and u. So can we approximate some nonlinear system as a linear quadratic system locally? Take a moment to think about this question: what are some mathematical tools that you know of that can allow us to do this? Well, one thing we can do is employ a Taylor expansion. If you have some nonlinear function and you want to get, let's say, a first-order function
or a second-order function that approximates it in some neighborhood, what you can do is compute its first and second derivatives and then employ the Taylor expansion. So if we have some current sequence of states and actions, maybe the best states and actions found so far, which I'm denoting with x hat and u hat, so you have x hat 1, x hat 2, x hat 3, u hat 1, u hat 2, u hat 3, then you can express the dynamics approximately as f evaluated at x hat and u hat, which is just x hat t plus one, plus the gradient of the dynamics with respect to the state and action, and similarly you can express the cost with a linear term depending on the gradient and a quadratic term depending on the Hessian. So now we've approximated our dynamics and our cost as linear and quadratic in the neighborhood of some sequence of states and actions denoted by x hat and u hat, and if you do this, then you can express this linear quadratic system, I'm going to call it f bar and c bar, in terms of the deviations of x from x hat. So this delta x and delta u represent x minus x hat and u minus u hat; these are deviations, differences from x hat and u hat, and now we're back in the linear quadratic regime, which means that we can simply plug this into the regular LQR algorithm and solve for the optimal delta x and delta u. So delta x and delta u are the deviations from x hat and u hat; you can use LQR to solve for the optimal delta x and delta u, and then add them to the old x hats and u hats to find new x's and u's. So here is the iterative LQR algorithm based on this idea. We're going to repeat the following process until convergence: for all the time steps, we're going to calculate the dynamics matrix as the gradient of the dynamics around x hat and u hat, and we're going to calculate linear and quadratic terms for the cost; then we're going to run the LQR backward pass using delta x and delta u as our state and action, and then we're going to run the forward pass. But for
the forward pass we're actually going to use the original nonlinear dynamics. So we're not going to use the linearized dynamics to get x t plus 1, we're going to use the original nonlinear dynamics, and the reason we do this is because we want to get the x t plus 1 that will actually result from taking that actual u t, not just some approximation to it. So we'll do the forward pass with the real nonlinear dynamics and u given by capital K times delta x plus lowercase k plus u hat, which we get from substituting in the delta x and delta u equations, and then we will update x hat and u hat by simply setting them to be the x's and u's that we got from our forward pass, and then repeat this process. So essentially the backward pass computes a controller, expressed in terms of delta x and delta u, that will give you better cost than you had so far; the forward pass checks how good that controller actually is and which states you'll get from running it, and then updates x hat and u hat to be those new states and actions. All right, so why does this work? Well, let's compare this procedure to a well-known optimization algorithm, which is Newton's method. Newton's method is a procedure that you might use for minimizing some function g of x. In Newton's method you repeat the following process until convergence: compute the gradient at some initial x hat, compute the Hessian, and then set x hat to be the argmin of the quadratic approximation of the function formed by that gradient and Hessian, and then repeat. This is very much like what iterative LQR does. So iterative LQR is basically the same idea: you locally approximate a complex nonlinear function via Taylor expansion, which leads to a very simple optimization problem, in the case of LQR a linear-quadratic problem, and then you repeat this process multiple times until you arrive at a local optimum. In fact, iLQR can be viewed as an approximation of Newton's method for solving the original optimization problem I posed.
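As a concrete illustration of this loop, here is a minimal sketch of iLQR for a scalar state and action. Everything here is made up for illustration: the dynamics and cost functions at the bottom are toy examples, and derivatives are taken by finite differences rather than analytically. Note that, as in iLQR, only first derivatives of the dynamics are used in the backward pass, while the forward pass rolls out the true nonlinear dynamics (with a simple backtracking on alpha, discussed later in the lecture).

```python
import numpy as np

def ilqr_scalar(f, c, x0, U, n_iter=20, eps=1e-4):
    """Sketch of iterative LQR for scalar state/action (illustrative only).
    f(x, u): nonlinear dynamics, c(x, u): per-step cost, U: initial actions."""
    def rollout(U):
        X = [x0]
        for u in U:
            X.append(f(X[-1], u))
        return X

    def total_cost(X, U):
        return sum(c(x, u) for x, u in zip(X[:-1], U))

    def derivs(fun, x, u):
        # central finite differences: first and second derivatives around (x, u)
        fx = (fun(x + eps, u) - fun(x - eps, u)) / (2 * eps)
        fu = (fun(x, u + eps) - fun(x, u - eps)) / (2 * eps)
        fxx = (fun(x + eps, u) - 2 * fun(x, u) + fun(x - eps, u)) / eps**2
        fuu = (fun(x, u + eps) - 2 * fun(x, u) + fun(x, u - eps)) / eps**2
        fxu = (fun(x + eps, u + eps) - fun(x + eps, u - eps)
               - fun(x - eps, u + eps) + fun(x - eps, u - eps)) / (4 * eps**2)
        return fx, fu, fxx, fuu, fxu

    X = rollout(U)
    for _ in range(n_iter):
        # backward pass on the local linear-quadratic approximation
        Vx, Vxx = 0.0, 0.0
        ks, Ks = [0.0] * len(U), [0.0] * len(U)
        for t in reversed(range(len(U))):
            cx, cu, cxx, cuu, cxu = derivs(c, X[t], U[t])
            fx, fu, _, _, _ = derivs(f, X[t], U[t])   # iLQR: dynamics linearized only
            Qx, Qu = cx + fx * Vx, cu + fu * Vx
            Qxx = cxx + fx * Vxx * fx
            Quu = cuu + fu * Vxx * fu
            Qux = cxu + fu * Vxx * fx
            ks[t], Ks[t] = -Qu / Quu, -Qux / Quu
            Vx = Qx - Qux * Qu / Quu
            Vxx = Qxx - Qux * Qux / Quu
        # forward pass with the TRUE nonlinear dynamics, backtracking on alpha
        for alpha in (1.0, 0.5, 0.25, 0.1):
            Xn, Un = [x0], []
            for t in range(len(U)):
                u = U[t] + alpha * ks[t] + Ks[t] * (Xn[-1] - X[t])
                Un.append(u)
                Xn.append(f(Xn[-1], u))
            if total_cost(Xn, Un) < total_cost(X, U):
                X, U = Xn, Un
                break
    return X, U

# toy example (made up): mildly nonlinear dynamics, quadratic cost
f = lambda x, u: x + 0.2 * np.sin(x) + u
c = lambda x, u: x ** 2 + 0.1 * u ** 2
X_new, U_new = ilqr_scalar(f, c, x0=1.0, U=[0.0] * 10)
```

Running this drives the state toward zero and substantially reduces the total trajectory cost compared to the initial all-zero action sequence.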
The main way in which iLQR differs from Newton's method is that it doesn't consider the second derivative of the dynamics. It's not too hard to derive a version of LQR that does consider quadratic dynamics, and if you do that you get exactly Newton's method, and you still have an elegant recursive formulation for it; that is what differential dynamic programming, or DDP, is doing. So Newton's method needs to use second-order dynamics approximations, which is reasonable, although it requires a tensor product, because the second derivative of the dynamics is now a 3D tensor, and that's what differential dynamic programming does. So if you really want the full Newton's method, check out DDP, although in practice just linearizing the dynamics tends to be pretty good. Okay, now the connection to Newton's method allows us to derive a little improvement to the iterative LQR procedure I presented so far, one that ends up being very important for good practical performance. So let's go back to regular Newton's method to gain some intuition. Here is the optimization that Newton's method does in the inner loop, and we could ask: why might this be a really bad idea? Think about this for a minute. If you actually do this repeatedly, I would posit that for many real functions you will really struggle to find a local optimum; think about why that might be the case. So here's a picture that illustrates the point. Let's say that the blue line represents your function and you're currently located at this point. Now, Newton's method approximates your complicated function with a quadratic, and let's say that the first and second derivative of your blue function result in this quadratic approximation. If you actually go to the optimum of this quadratic, you'll end up at this point, and this point is actually worse than the point you started at. So what you want is to kind of backtrack and find a point that is close to your starting point, where this quadratic approximation is still trustworthy.
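For reference, the inner loop just described can be sketched in a few lines; this is a minimal scalar version, with a made-up example function whose gradient and Hessian are supplied by hand:

```python
def newton_minimize(g_grad, g_hess, x0, n_iter=20):
    """Newton's method sketch: at each step, jump to the minimizer of the
    local quadratic approximation built from the gradient and Hessian,
    i.e. x <- x - g'(x) / g''(x) in the scalar case."""
    x = x0
    for _ in range(n_iter):
        x = x - g_grad(x) / g_hess(x)
    return x

# Example: g(x) = (x - 3)^2 + 1 has gradient 2(x - 3) and constant Hessian 2,
# so for this quadratic a single Newton step lands on the minimum x = 3.
xmin = newton_minimize(lambda x: 2.0 * (x - 3.0), lambda x: 2.0, x0=0.0)
```

For a genuinely quadratic function the first step is exact, which is precisely why, on non-quadratic functions, the full step can badly overshoot, motivating the backtracking discussed next.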
This notion of trustworthiness is very related to the trust regions that we discussed in the advanced policy gradients lecture. So using this intuition, let's go back to our iterative LQR algorithm: where in the iterative LQR algorithm can we perform this backtracking? Essentially, we want to compute our solution and then check whether that solution is actually better than what we had before, and if it's not better, we want to somehow move closer to where we were before. It turns out the forward pass is a very convenient place to do this, and a very simple way to modify the forward pass to perform this line search is to scale the constant term, the little k, by some constant alpha between 0 and 1. So this is the only change we've made. This constant alpha allows us to control how much we deviate from our starting point. Imagine what would happen if I set alpha to 0. If alpha is 0, then at the first time step the feedback term is capital K times x1 minus x1 hat, but x1 is always the same, so that's just 0, and alpha times little k is also 0, so that means my first action is just u hat 1. And because my first action is u hat 1, my second state is x hat 2, which means that x2 minus x hat 2 is also 0 and alpha is 0, which means my second control is also u hat 2. So if alpha goes to 0, I will execute exactly the same actions that I executed before, and in general, as I reduce alpha, I will come closer and closer to the action sequence that I had before. So you can search over alpha until you get improvement: essentially, you can run the backward pass and then run the forward pass repeatedly with different values of alpha until you find one that you're happy with. Now, in practice you actually want to be a little more ambitious. A very simple way to do it is to just reduce alpha until the new cost is lower than the old cost, but the other thing you could do is calculate how much improvement in cost you would
anticipate from your quadratic approximation, and you could reduce alpha until you get some fraction of that anticipated improvement. You could also do a bracketing line search, where you basically assume that the true function looks roughly quadratic in between the blue circle and the red cross, and a bracketing line search will basically find the optimum of that function. There are many other ways to do line searches, but if you want to implement this in practice, look up something like a bracketing line search; that can be a pretty good choice.
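The simplest variant described above, reducing alpha until the new cost beats the old one, can be sketched as follows. The cost function along the step direction is made up purely for illustration; in iLQR, `cost_of_alpha` would be the total trajectory cost of a forward pass run with that alpha, and alpha = 0 reproduces the previous trajectory:

```python
def backtracking_alpha(cost_of_alpha, alphas=(1.0, 0.5, 0.25, 0.125, 0.0625)):
    """Reduce alpha until the candidate step actually improves on alpha = 0,
    which in iLQR's forward pass corresponds to re-executing the previous
    action sequence."""
    baseline = cost_of_alpha(0.0)
    for a in alphas:
        if cost_of_alpha(a) < baseline:
            return a
    return 0.0  # no improvement found: stay where we were

# Made-up cost along the step direction: the full step (alpha = 1) overshoots
# and is worse than staying put, but a smaller alpha does improve the cost.
g = lambda a: (0.3 - a) ** 2 * (1.0 + 10.0 * a ** 2)
alpha = backtracking_alpha(g)
```

Here g(1.0) and g(0.5) are both worse than g(0.0), so the search settles on alpha = 0.25, mirroring the picture of backtracking toward the trustworthy region of the quadratic approximation.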
CS_285_Deep_RL_2023
CS_285_Lecture_2_Imitation_Learning_Part_3.txt
All right, the remainder of today's lecture will focus on the more practical methods that can make behavioral cloning work, as well as some other algorithms that we could use. So we talked about a little bit of theory, but now we'll talk about how the problem can be addressed in a few ways: by being smart about collecting your data, by using very powerful models that make comparatively fewer mistakes, by using multitask learning, and by changing the algorithm. I'll go through these pretty fast; my aim with this portion of the lecture is not really to go into great detail about how to actually implement some of these methods, but just to give you a sense for the types of methodologies that people employ. The one method that you will implement in homework is DAgger, and I'll go through that somewhat more precisely. Okay, so what makes behavioral cloning easy and what makes it hard? As I mentioned in the previous part of the lecture, if you have very perfect data, then these accumulating errors are a big problem, because as soon as you make even a small mistake, you're outside of the distribution of perfect data. But if you actually already have a bunch of mistakes in your data set, and corrections for those mistakes, then when you make a small mistake, you'll be in a state that is somewhat similar to other mistakes that you've seen in the data set, and the labels in that portion of the data set will tell you how to correct that mistake. So there are a few ways that you could leverage this insight. You could actually intentionally add mistakes and corrections during data collection; that's not an entirely crazy idea. The mistakes will hurt, meaning that they will dilute the training set, but the corrections will help, and often the corrections help more than the mistakes hurt. The reason is that if the mistakes are somewhat random, they tend to average out, meaning that the most optimal action is still the most probable. However, by making mistakes during data
collection, you force the expert to provide examples in more diverse states, and that will teach the policy how to make corrections. The simplest version of this you can think of is: if you force the expert to make a mistake with some probability, independent of which state they're in, then the mistakes will be largely uncorrelated with the state, whereas the optimal action will be correlated with the state. So when your neural network learns the action that correlates most with the state, it will actually tend to learn the optimal action and avoid the mistakes, but it will still benefit from seeing the corrections in those worse states. Another thing that we could do is use some form of data augmentation, and that camera trick from before can be thought of as a kind of data augmentation: essentially a method that adds some fake data that illustrates corrections, like those side-facing cameras. And that can be done by leveraging some sort of domain knowledge about the problem you're solving to create additional fake data. Roughly speaking, the effect of these two tricks is kind of the same: in both cases the aim is to provide examples in states that the expert is unlikely to visit but the policy might end up landing in. Now, there isn't really much more to this methodology than that, so in discussing these tricks I'm going to just show you two examples of previous papers that use tricks like this to good effect. The first one I'll mention is a data augmentation based approach. This paper focused on flying a drone through the forest, so the output action space is discrete: it's just go left, go straight, or turn right. And here's a video from their work. They're going to fly these drones through hiking trails in Switzerland; this is from the University of Zurich, and the idea is pretty straightforward: they have a convnet that looks at the image and predicts one of three discrete labels, left, right, and straight. Now, these are examples
from the training set, so they're labeled. And where do they get the labels? Well, the data collection procedure is actually very straightforward: they didn't actually have humans fly the quadcopter. What they did instead is they got a person to walk the hiking trails, and the person was, let me fast forward here, wearing a funny hat. The hat had three cameras on it: a forward-facing camera, a left-facing camera, and a right-facing camera. And their approach was actually even simpler than the driving example: they simply assumed that the person would always go in the correct direction, and they labeled the left-facing camera with the action to go right, the right-facing camera with the action to go left, and the straight-facing camera with the action to go straight. That's it; that is the entirety of the method. There's no attempt to record the human's actions, and that actually worked pretty well. I think this is a really nice illustration of how that data augmentation approach can enable imitation learning to work well, and it wouldn't surprise me in the least if, had they actually flown the quadcopter through the forest and only used the forward-facing camera, their results would have been somewhat worse. So this is a similar thing with a handheld camera, and here is their drone. Here's another interesting example. This is a robotic manipulation example, and here the authors of this paper are using a very low-cost, relatively inaccurate arm and a very simplistic teleoperation system based on a kind of hand motion detector, and they're teaching the robot various skills, like using a cloth to wipe down a box of screwdrivers, picking up and pushing objects, things like this. They're using a game controller here, and one of the things that they do is they illustrate a lot of mistakes in their demonstrations, kind of inevitably, just because they
have such a low-cost and imperfect teleoperation system. And because they illustrate so many mistakes, they actually end up with a situation where the robot, when it makes mistakes, actually recovers from them. So they have some examples where it picks up objects; sometimes it picks them up incorrectly, sometimes a human actually perturbs it, but the robot is pretty good at recovering from perturbations, including ones that are introduced by the person. So here the person is messing with it, but the robot is undeterred and keeps trying to do the task; here it has to slide the wrench into a particular spot. So sometimes imperfect data collection can actually be better than highly perfect data collection. Okay, now that trick for getting behavioral cloning to work is not very reliable, and it takes a little bit of domain-specific expertise, although it does provide a kind of guidance: any time you're collecting data for imitation learning, keep in mind that having ways to put the system in states where the expert can demonstrate corrections can be a very good thing, and it's also worth thinking about data augmentation tricks. But let's talk about some more technical solutions. Why might you fail to fit the expert behavior? Because if you can make that value epsilon very small, then perhaps even epsilon t squared might still be a small number. So if you can understand why you might fail to fit the expert behavior, maybe you can get a model that's so powerful that its probability of mistakes is so low that even that quadratic cost doesn't worry you too much. So why might you fail to fit the expert? Well, one reason is what I'll refer to as non-Markovian behavior: non-Markovian behavior means that the expert doesn't necessarily choose the action based only on the current state. A second reason is multimodal behavior: the expert takes actions randomly, and their distribution of actions is very complex and might have multiple modes. Okay, let's talk
about non-Markovian behavior first. When we train a policy that is conditioned on the current observation, the policy is Markovian in the sense that it assumes the current observation is the only thing that matters; it's basically assuming that the observation is the state. That's not necessarily a problem if the expert also chose the action based only on the current observation, but humans very rarely do that. Humans can't really make decisions entirely in the moment, completely forgetting everything they saw before. If we were perfectly Markovian agents, then when we saw the same thing twice we would do the same thing twice, regardless of what happened before, and that's pretty unnatural. Oftentimes humans will base their decision on all of the past things they saw. For example, if a human driver notices something in their blind spot and then looks back at the road, they still remember what they saw in their blind spot; or, maybe even more problematic, if someone just cut them off and they got a little flustered, maybe they'll be driving a little differently for the next few seconds. So generally, human behavior is very strongly affected by the temporal context, which means humans are actually very non-Markovian. So if we're training a policy that only looks at the current image, it's unaware of all that context, and it might simply not be able to learn a single distribution that captures human behavior accurately, because human behavior doesn't depend on just the current observation. So how can we use the whole history? Well, it's actually pretty straightforward: we just need a policy representation that can read in the history of observations. We might have a variable number of frames, and if we simply combine all the frames into one giant image with, like, 3,000 channels, that might be difficult, because you might have too many weights. So what we would typically do is use some kind of sequence model. So, let's say, if we're using images, we
would have our convolutional encoder; if you're not using images, you would have some other kind of encoder. You would encode all the past frames and then feed them through a sequence model, such as an LSTM or a Transformer, and then just predict the current action based on the entire sequence. Setting up these models is a little bit involved, but there's actually nothing here that is special to imitation learning: the same way that you might build a sequence model to process, let's say, videos in supervised learning, exactly the same kinds of approaches can be used here, whether it's LSTMs or Transformers or something else entirely, like temporal convolutions. Again, I won't talk about those architectural details, because they're not imitation-learning specific; anything that you learned about before for sequence modeling can just as well be used for imitation learning. There is, however, an important caveat that I want to mention, which is that using histories of observations does not always make things better, and the reason it might sometimes make things worse is that it might exacerbate correlations that occur in your data. This is a little bit of an aside, and I don't necessarily expect all of you to know this in detail, but I do think it's an interesting aside, and it's something that might inspire some ideas for things like final projects. Why might this work poorly? Well, here's a little scenario. Let's say that you have a strange kind of car where there's a dashboard indicator for whether you're pressing the brakes: whenever you press the brakes, there's a light bulb that lights up inside the cabin, and the camera that is recording your data for imitation learning is inside the cabin, so the camera can see out of the window and it can also see the brake indicator. So whenever the person steps on the brake, the light lights up. Now, in this case, there's a person standing in front of the car, and the driver stepped on the brakes because
there was a person there, but what the policy sees in the training data is one frame where the person is visible, the brake indicator is not yet lit, and the brake is pressed, and then many steps after that where the brake indicator is lit and the brake is pressed. So there's a very strong association between the brake indicator and the brake being pressed. If you're reading in histories, the situation is a lot worse, because now you don't even need the brake indicator: when you're reading in histories, just the fact that the brake was pressed in previous time steps is apparent from looking at the sequential images; you see the car slowing down, you know the brake was pressed. So the point is that the action itself correlates with future instantiations of that action. If that information is somehow hidden, then of course the policy is forced to pay attention to the important cue, which is the fact that there's a person; but if these auxiliary cues are present, even though they are not the real cues that led to the action, they serve to confuse the policy as a spurious correlation, as a kind of causal confounder. So the slowing down that you see when you look at history is caused by braking, but the policy might not realize that: it might think that whenever you see the car slowing down, that's an indication that you should brake, in the same way that the brake indicator is the effect, not the cause, of braking; but when you see lots of images with that correlation, you might get confused. You can call this causal confusion; it's discussed a little bit in this paper. There are a few questions we could ask about this. Does including history mitigate causal confusion, or does it make it worse? I'll leave that as an exercise for you at home. Another exercise: at the end of this lecture we'll talk about a method called DAgger, and after we talk about that, I want you to come back to this point and think about whether the method
DAgger will actually address this problem or make it worse; I'll leave that as an exercise for you to think about. All right, so that's non-Markovian behavior: you can address it by using histories; keep in mind that that's not unequivocally always a good thing, but if what you're worried about is non-Markovian behavior, that's the thing to do. Now let's talk about the other issue, multimodal behavior; that's kind of a subtle one. Let's say that you want to navigate around a tree. If you're flying a quadcopter, you can fly around the tree to the left or you can fly around the tree to the right; both are valid solutions. The trouble is that at the point when you're in front of the tree, some expert trajectories might involve going left and some might involve going right, so in general, in your training data, you'll see very different actions for very similar states. Now, this is not a problem if you're doing something like that Zurich paper, where they use a discrete action space, left, right, and straight, because you can easily represent a distribution with high probability for left, high probability for right, and low probability for straight: you're directly outputting three numbers to indicate the probability of each of those actions. However, if you're outputting a continuous action, maybe the mean and variance of a gaussian distribution, now we have a problem, because a gaussian has only one mode. In fact, if you see examples of left and examples of right and you average them together, that's very, very bad. So how can we address this? Well, we have a few choices: we can use more expressive continuous distributions, so instead of outputting the mean and variance of a single gaussian, we can output something more elaborate; or we can actually use discretization, but make it feasible in high-dimensional action spaces. I'll talk about both solutions a little bit next. So first, let's talk about some examples of continuous distributions we can use, and again, I won't go
into great detail about each of these methods, so for details about how to actually implement them, I'll have some pointers and references, and I would encourage you to look those up yourself if you want to actually try this; my aim here is mostly to give you survey-level coverage of the different techniques, so that you know the right keywords and the right ideas. Okay, so what we're going for is some way to set up a neural network so that it can output multiple modes, for example a high probability of left, a high probability of right, and a low probability of going straight. We have a few options. A very simple but maybe less powerful option is to use a mixture of gaussians, and I'll talk about how to set that up with neural nets. A more sophisticated one is to use latent variable models, and then something that has recently become very popular is to use diffusion models, because diffusion models have gotten a lot more effective and a lot easier to train in recent years. But let's start with a mixture of gaussians, because that is probably the simplest thing to implement, although it's not quite as powerful as the others. The idea here is the following: a mixture of gaussians can be described as a set of means, covariances, and weights. So let's say you have N different gaussians that you want to output, maybe N equals 10: you're going to output 10 means, 10 covariances, and a weight on each of those 10 mixture elements to indicate how large each of them is. You probably learned about mixtures of gaussians in the context of something like clustering, where the means and the variances are just vectors and matrices that you learn; here, everything is conditioned on the observation, so your neural network is actually outputting the means and variances. They're not numbers that you store, they're outputs of your neural net. Before, our output was just the mean and maybe one covariance matrix; now it's maybe going to be 10 vectors of means
and 10 covariance matrices, and a scalar weight on each of those 10 to indicate how large they are. In terms of implementing it, it's actually pretty straightforward: all we have to do is code up our neural network so it has all those outputs, write down the equation for a mixture of gaussians, take its logarithm as our training objective, and optimize it the same way that we did before. So the way that you would implement this in, let's say, PyTorch is you would literally implement the equation for a mixture of gaussians, take its log, and use that as your training objective; just don't forget to put a minus sign in front if you're minimizing it. So that's basically the idea: your neural net outputs means, covariances, and weights, and modern autodiff tools like PyTorch make this pretty easy to implement. Of course, the problem with a mixture of gaussians is that you choose a number of mixture elements, and that's how many modes you have. So if you have 10 mixture elements and you want 10 modes, that's fine, but what if you have 100 modes? What if you have extremely high-dimensional action spaces? Perhaps you're not driving a car, but controlling a humanoid robot with many degrees of freedom, and you want a thousand different modes; now you have to do something a little smarter. Latent variable models provide a way to represent a much broader class of distributions; in fact, you can show that latent variable models can represent any distribution, as long as the neural network is big enough. The idea behind a latent variable model is that the output of the neural net is still a gaussian, but in addition to the image, it receives another input, which is sampled from some prior distribution, like a zero-mean unit-variance gaussian. You can think of it as almost like a random seed: the random seed is passed into the network, and for different random seeds it'll output different modes. So for example, if we have a three-dimensional random vector that we put in,
if we put in this random vector we get this mode, and if we put in this other random vector then we get this other mode; we'll train the network so that for different random vectors it outputs different modes. Now, unfortunately, you can't simply naively take the network and feed in random numbers and expect it to do this, because if you give it random numbers, those random numbers aren't actually correlated with anything in the input or output, so if you just do this in the most obvious way, the neural net will actually ignore those numbers. The trick in training latent variable models is to make those numbers useful during training. The most widely used type of method for this is what's called a conditional variational autoencoder. We'll discuss conditional variational autoencoders in great detail later on in the course, in the second half, so I won't describe how to make this work right now, but the high-level intuition is: during training, the values of those random vectors that go into the network are not actually chosen randomly; instead, they're chosen in a smart way to correlate with which mode you're supposed to output. So the idea is that during training you figure out that this particular training example has the left mode, this training example has the right mode, and you assign them different random vectors, so the neural net learns that it can pay attention to those random vectors, because they tell it which mode to output. That's the intuition; the particular technical way of making this work is a little more involved and requires more technical background, so we'll talk about that in the second half of the course. But the high-level idea behind latent variable models is that you have an additional input to the model, and that additional input tells it which mode to output; and then at test time, if you want to actually make this work, you choose that random variable at random, and then you'll randomly choose to go around the tree on the left or on the right. Okay, the third
class of distributions I'll talk about, which has gotten a lot of attention in recent years because these kinds of models have started working really well, is diffusion models. Diffusion models are somewhat similar to latent variable models, but there are some differences. Some of you might have heard about diffusion models as a way of generating images: things like DALL-E and Stable Diffusion are all methods that use diffusion models for image generation. Here's how diffusion models work for image generation; this is a very high-level summary, so if this feels vague to you, it's because it is. I could teach an entire class on diffusion models, but for the purpose of covering it in two slides, I'm going to provide a very high-level overview. Let's say that we have a particular training image; I'm going to denote the training image as x0, and the subscript here doesn't denote time, it actually denotes corruptions of the image: x0 is the least corrupted, xT is the most corrupted. So x0 is the true image. In a diffusion model, you construct a fake training set where you add noise to the image, and then you train the model to remove that noise. So the image x i plus 1 is the image x i plus noise: x1 is x0, which is a true image, plus noise; x2 is x1, which already has some noise, with even more noise added. And if you take an image and add these different amounts of noise, now you have a training set where you can teach a neural network to go backwards: the learned network looks at the image x i and it predicts x i minus 1.
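The forward (noising) process just described can be sketched very compactly; the example data and noise scale below are made up, and a real diffusion model would use a carefully chosen noise schedule rather than a constant standard deviation:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_noised_chain(x0, n_levels, noise_std=0.1):
    """Forward (noising) process: x_{i+1} = x_i + noise, so index 0 is the
    clean sample and higher indices are progressively more corrupted. The
    adjacent pairs of this chain form exactly the fake training set described
    above: the network learns to map each x_{i+1} back to x_i (or, more
    commonly, to predict the noise that was added)."""
    chain = [np.array(x0, dtype=float)]
    for _ in range(n_levels):
        chain.append(chain[-1] + rng.normal(0.0, noise_std, size=chain[0].shape))
    return chain

chain = make_noised_chain([1.0, -2.0], n_levels=5)
# (input, target) pairs for a network trained to go back one step
training_pairs = [(chain[i + 1], chain[i]) for i in range(5)]
```

The same construction applies whether x is an image, a scalar, or, as discussed next, an action vector.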
So it's going to look at the slightly noisy image x1 and predict x0; it's going to look at the slightly more noisy image x2 and predict x1, and so on and so on. In reality, we actually often train it to go all the way back to x0, so there's a choice to be made there, but for simplicity, it helps to think about it as just going back one step. And in reality, what we actually predict is just the noise itself, and that's not actually that different, because if you want to predict x i minus 1, well, you can get that by just predicting the noise and subtracting the noise from x i. So you can either have f of x i directly output x i minus 1, or you can have f of x i output the noise, in which case x i minus 1 is just x i minus f of x i, and that's the much more common choice. Okay, now this is image generation; what we actually want to do is not generate images, but, of course, generate actions. So what we can do is extend this framework to handle actions. Now, actions of course also have a temporal subscript, so I'm going to use the subscript t to denote time in the temporal process for control, and I'm going to use the second subscript to denote the diffusion time step. So a t comma 0 is the true action, in the same way that x0 was the true image; a t comma i plus 1 is a t comma i plus noise; and just like before, we can learn a network that now takes in the current observation or state s t and a t comma i, and it outputs a t comma i minus 1.
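The test-time reverse process for actions can be sketched as below. Everything here is illustrative: the `oracle` stands in for the trained noise-prediction network, which in practice is learned from noised ground-truth actions and also conditions on the state s_t; here it simply predicts half the offset from a known clean action so that the loop has something to converge to.

```python
import numpy as np

rng = np.random.default_rng(1)

def denoise_action(a_start, predict_noise, n_steps):
    """Reverse process at test time: start from random noise and repeatedly
    apply a_{t,i-1} = a_{t,i} - predicted noise."""
    a = np.array(a_start, dtype=float)
    for _ in range(n_steps):
        a = a - predict_noise(a)
    return a

# Stand-in for the trained noise model f(s_t, a_{t,i}): purely illustrative.
clean_action = np.array([0.3, -0.7])
oracle = lambda a: 0.5 * (a - clean_action)

# start from pure noise, denoise repeatedly, and out comes a clean action
sample = denoise_action(rng.normal(size=2), oracle, n_steps=40)
```

With a perfect predictor the iterate converges geometrically to the clean action, which is the behavior a well-trained diffusion policy approximates.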
Or, just like before, we would actually output the noise, so that a t comma i minus 1 is a t comma i minus f of s t and a t comma i. So the training set is produced by taking all the actions, adding noise to them, and teaching the network to predict what that noise was while also looking at the image; and then at test time, if we want to actually figure out the action, we feed in completely random noise and run this model for many steps to get it to denoise. So, turning the network over on its side: it gets its input a t comma i, it outputs a t comma i minus 1, which is a slightly denoised version of that action, and it also gets to look at the image. We start off with noise at test time, we feed that into this box as the first value of a t comma i, and then we repeat this process: we denoise it, put it back in, denoise some more, and repeat many times, and then at the end out comes a clean final action. And during training, we add all this noise to ground-truth actions and we teach the network to undo the noise. So that's the essential idea of a diffusion model; actually implementing it requires a number of additional design decisions which I won't have time to go into here, but I'll reference some papers, and you can look at those papers for details. The last trick I'm going to talk about is discretization. Discretization is in general a very good way to get complex distributions, but it's very difficult to apply discretization naively in high dimensions. Remember, in that Zurich paper where the actions were go left, go right, and go straight, this multimodality problem basically didn't exist, but of course that was for 1D actions. In higher dimensions, if you have, let's say, 10-dimensional actions, discretizing the action space is impractical, because the number of bins you need increases exponentially with the number of dimensions. So the solution is to discretize one dimension at a time, and that's the idea behind autoregressive discretization. Here's how we can do it. Let's say our action is a three-dimensional
vector. I'm going to use a_{t,0} to denote dimension 0, a_{t,1} to denote dimension 1, and a_{t,2} to denote dimension 2. Don't be confused with the notation from the diffusion models before; this has nothing to do with that. The second number is just the dimension, and each of these is just a scalar value. So here's how we're going to set up our network: we take the image and encode it with some kind of encoder, like a ConvNet, and then we put it into a sequence model, which could be a Transformer or an LSTM, whatever your favorite sequence model is. At the first step of the sequence, we output dimension zero, and we can discretize dimension zero just into bins; it's just one dimension, and discretizing a number line is pretty easy. So we have one bin for every possible value: you could have ten bins, you could have 100 bins, and since it's one-dimensional, that's very easy to do. And then at the second time step in the sequence, we feed in the value a_{t,0} and we output a_{t,1}, again with a discretization. And then at the next time step, we input a_{t,1} and we output a_{t,2}.
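As a small aside, here is what discretizing a single action dimension into bins can look like in code. This is a minimal sketch: the action range [-1, 1] and the bin count of 10 are arbitrary illustrative choices, not from the lecture.

```python
import numpy as np

def to_bin(value, low=-1.0, high=1.0, num_bins=10):
    # Map a scalar action value in [low, high] to a bin index in [0, num_bins).
    frac = (value - low) / (high - low)
    return int(np.clip(np.floor(frac * num_bins), 0, num_bins - 1))

def from_bin(index, low=-1.0, high=1.0, num_bins=10):
    # Map a bin index back to the center value of that bin.
    width = (high - low) / num_bins
    return low + (index + 0.5) * width

# A 3-dimensional action becomes three separate bin indices, so the total
# number of bins handled per action is 3 * num_bins rather than num_bins ** 3.
action = [0.0, 0.35, -0.9]
indices = [to_bin(a) for a in action]  # one index per dimension
```

Discretizing per dimension like this is what keeps the output layer size linear, rather than exponential, in the action dimensionality.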
So just like in a sequence model such as a language model, where you would output the next token or the next letter, here you output the next dimension of the action space. Now each dimension is discretized, and the number of bins is no longer exponential in the dimensionality; it's actually linear in the dimensionality. And then at test time, if we want to sample, we do exactly what we do with any other sequence model: instead of feeding in the ground truth value of each dimension, which we don't have, we feed in the prediction from the previous time step. Again, this is exactly the same as any other sequence model, like language models for example, so one way to implement it is actually with a GPT-style decoder-only model. Now, why does this work? The reason that this is a perfectly valid way to represent complex distributions can be seen by looking at what probabilities it actually predicts at each step. The first time step predicts p(a_{t,0} | s_t), because you get s_t as input and your output is a_{t,0}. The second step predicts the probability of a_{t,1} given s_t and a_{t,0}; the dependence on s_t comes from the fact that it's passed in through the sequence model, and a_{t,0} is fed as input. At the third time step, you predict a_{t,2} given s_t, a_{t,0}, and a_{t,1}.
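The per-dimension sampling procedure can be sketched as follows. This is a toy stand-in: `logits_fn` plays the role of the sequence model (it sees the state plus the previously sampled dimensions), and the "peaked" test model at the bottom is a deterministic illustration, not a trained network.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_action_autoregressive(logits_fn, state, num_dims, num_bins):
    # Sample dimensions one at a time, like next-token sampling in a
    # language model: dimension d is drawn from p(a_d | s, a_0..a_{d-1}),
    # where the conditioning on earlier dimensions comes from feeding the
    # previously sampled values back into the sequence model.
    dims = []
    for _ in range(num_dims):
        logits = logits_fn(state, dims)          # one logit per bin
        probs = np.exp(logits - logits.max())    # stable softmax
        probs /= probs.sum()
        dims.append(int(rng.choice(num_bins, p=probs)))
    return dims

# Toy "sequence model": puts (almost) all probability on bin d when asked
# for dimension d, so the sampled action indices are fully determined.
peaked = lambda state, prefix: np.where(np.arange(4) == len(prefix), 0.0, -1e9)
sampled = sample_action_autoregressive(peaked, None, num_dims=3, num_bins=4)
```

With a real model, `logits_fn` would be a forward pass of the decoder over the image encoding and the previously sampled dimension tokens.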
If you multiply all these things together, then by the chain rule of probability their product is exactly the probability of a_{t,0}, a_{t,1}, and a_{t,2} given s_t, which is exactly the probability of the action given the state. So this is the policy, if you multiply together the probabilities at all the time steps, and that means that if you sample the different dimensions in sequence, you will actually get a valid sample from the distribution p(a_t | s_t). So autoregressive discretization is actually a great way to get complex distributions, but it's a little bit more heavyweight, in the sense that you have to use these sequence models for outputting actions. All right, next let me talk about a few case studies of papers that use these kinds of formulations. The first case study is a paper from 2023 from Chi et al. that uses diffusion models to represent policies for robots. It works more or less exactly as you would expect: there are two variants, a ConvNet-based variant and a Transformer-based variant, and they differ just in how they read in the image, but in both cases they read in the image and perform this denoising operation. So this actually visualizes the denoising, and the denoising process yields a short trajectory for the end effector to take over the next few steps; then the robot follows that trajectory and performs the task. They can use this for imitation learning, for learning things like picking up cups or putting some sauce on a pizza. This is maybe with a cooperative person who kind of helps make sure the sauce doesn't go all over the place, but the point is that by using these fairly expressive multimodal diffusion policies, they can actually get pretty good behaviors out of their imitation learning system. Here's an example of a method that uses latent variables. Here the latent variables are actually used in conjunction with a Transformer, but it's not using that action discretization from before;
it's just outputting the continuous values of the actions, so the Transformer is just used to provide a more elaborate representation. The latent variable here is this letter z that you can see at the end of the input sequence, and the paper treats it as a kind of style variable, to account for the fact that human experts might perform the task with different styles on different trials, and that's used to improve the expressivity of the model. Imitation learning here uses this bimanual manipulation rig, which can be used to provide demonstrations to teach the robot to do things like put a shoe on a foot. That's not a real foot, it's a mannequin foot. And here the policy is inferring the latent variable just by sampling randomly from the prior at test time, but during training it's using essentially a conditional VAE method of the sort that we'll describe later in the course. So here's the shoe, and it's of course going to buckle the shoe, because you need to make sure the shoe is buckled. And here's another example, putting some batteries into a remote while an annoying graduate student distracts the robot by throwing objects into the background. I guess this is the kind of thing you want to do to stress test your policies, to make sure they're pretty good with distractors. And here's the last case study that I'll present. This is an autoregressive discretization method, and in this work, called RT-1, the model is actually a Transformer that reads in a history of prior images and the language instruction, and outputs a per-dimension discretization of the arm and base motion commands. So it's a wheeled robot: it can move the base and it can actually move the arm, and all of those dimensions are discretized, and that makes the entire control problem kind of a sequence-to-sequence problem: a sequence of images and text converted into a sequence of per-dimension actions. And because it's language-conditioned, it can actually learn to
perform a wide variety of different tasks when provided with a suitably large dataset. And when it's language-conditioned, you can actually do fun things like connect it up to a large language model that will then parse complex instructions like "bring me the rice chips from the drawer" and say, well, to do that you have to first go to the drawer, and then issue commands like "open the drawer," which are then commanded to this RT-1 model, which selects the per-dimension actions using that action discretization trick. There's of course a lot more going on in this paper than just autoregressive discretization, but I wanted to show this as just one example of that method really working in practice. Okay, so I'll pause there and I'll resume in the next part.
CS_285_Deep_RL_2023
CS_285_Lecture_22_Part_2_Transfer_Learning_MetaLearning.txt
In the next section of today's lecture we're going to talk about meta-learning algorithms. Meta-learning is a kind of logical extension of multi-task learning, where instead of simply learning how to solve a variety of tasks, we're going to use many different tasks to learn how to learn new tasks more quickly. I'll first give a general introduction to meta-learning in a more conventional supervised learning setting, and then I'll discuss how these ideas can be instantiated in RL. So what is meta-learning? If you've learned 100 tasks already, can you figure out how to learn new tasks more effectively? In this case, having multiple tasks becomes a huge advantage, because if you can generalize the learning process itself from multiple tasks, then you can drastically accelerate the acquisition of a new task. So meta-learning essentially amounts to learning to learn, and in practice it's very closely related to multi-task learning. It has many different formulations, although those formulations can be summarized under the same umbrella. The different formulations could involve things like learning an optimizer, learning an RNN that reads in a bunch of experience and then solves a new task, or even just learning a representation in such a way that it can be fine-tuned more quickly to a new task. These might seem like very different things, and this is a cartoon from a blog post by Ke Li that illustrates the learned-optimizer kind of idea. But even though these seem like very different things, they can actually be instantiated in the same framework, and many of the different techniques for solving meta-learning problems, once you drill down into the details, actually end up looking a lot like simply different architectural choices for the same basic algorithmic scaffold. Okay, so why is meta-learning a good idea? Deep reinforcement learning, especially model-free learning, requires a huge number of samples, so if you can meta-learn a faster
reinforcement learner, then you can learn new tasks efficiently. So what might a meta-learner do differently? Well, a meta-learned RL method might explore more intelligently, because something about solving those prior tasks tells it how to structure its exploration to acquire a new task quickly. It might avoid trying actions that it knows are useless: maybe it doesn't know how to solve the new task precisely, but it knows that certain kinds of behaviors are just never good to do. It might also acquire the right features more quickly: maybe it was trained in such a way that the network can change rapidly to modify its feature representations for the new task. Let me describe a very basic recipe to set up meta-learning for a supervised learning problem; this recipe will, I think, demystify a lot of the question marks that surround meta-learning. This is an image from a paper by Ravi and Larochelle from 2017, and it illustrates how meta-learning for image recognition could work. I realize that image recognition is pretty different from RL, but we'll see that very similar principles will actually work in RL as well. So in regular supervised learning you would have a training set and a test set. In meta-learning, what we're going to have is actually a set of training sets and a set of test sets. Meta-training refers to the set of datasets that we'll use for the meta-learning process; meta-testing refers to what we're going to see when we get the new task. So meta-training is the source domain, and meta-testing is the target domain. Each of the training sets during meta-training contains some image classes, and the test set contains test images of those classes. So in this example, let's say that we have five classes in every task, but those classes mean different things. For the first task, class zero is bird, class one is mushroom, class two is dog, class three is person, class four is piano, and then in the test set there's a dog and a piano. And then in the second
task, class zero is gymnast, class one is landscape, class two is tank, class three is barrel, etc. These assignments can either be done by hand or randomized arbitrarily; in this case they are random. And the idea is that you're going to look at those different training sets, and then you're going to use their corresponding test sets to meta-train some sort of model that will be able to take in a new training set, for some new classes you've never seen before, and then do well on its corresponding test set. So here's how we can look at this. Regular supervised learning takes in some input x and produces a prediction y: the input x might be, for example, an image, and the output y might be a label. Supervised meta-learning can be thought of as just a function that takes in an entire training set D_train as well as a test image x, and produces the label for that test image, y. So it's not actually all that different: some kind of function that will read in the training set and a test image and make a prediction on the test image. Of course, you have to resolve a few questions if you want to instantiate this. For example, how do you read in the training set? There are many options: things like recurrent neural networks or Transformers can work pretty well for this. So you could imagine a recurrent neural network that reads in (x_1, y_1), (x_2, y_2), (x_3, y_3), which are the training image-label tuples, then reads in the test image x_test, and predicts the test label y_test. So you have this little few-shot training set, a test input, and a test label. We'll talk more about the specifics of this later, but first let's talk about what is actually being learned. If you're learning to learn, and then you take that and deploy it on your target domain, what is the learning? So meta-learning is training this f; what is the learning part? Well, to understand this, let's imagine the following schematic picture for generic learning. In generic learning, you have some kind
of parameter theta, and you're going to find the theta that minimizes some loss function on the training set. Let's call this process of finding this argmin f_learn: f_learn takes in a training set and outputs the argmin of your loss, your model parameters theta. Generic meta-learning can then be thought of as finding the argmin of the loss over your test set for some parameters phi, where these parameters phi are a function of your training set. Okay, so you have some function f_theta, which is now a learned function; it's no longer a fixed learning algorithm. f_theta takes in a training set and produces some parameters phi, and those parameters phi are everything you need to know about the training set to do well on the test set. And the way you do meta-learning is you train f_theta so that the loss on the test sets of the meta-training tasks is minimized. So it's kind of a second-order thing: you're going to train f_theta, which reads in D_train, so that the output of f_theta works well on D_test. f_theta then becomes the learning procedure. So what is f_theta for the RNN example? Well, f_theta is the part of the RNN that reads in the training set. So you can think of the parameters of f_theta, the theta stars, as being the parameters of this RNN, and it's going to produce some sort of hidden activation. Once it reads in that training set, it has some hidden activation h_i for task i, and that hidden activation is then given to a little classifier that takes in the hidden activation and a test image x and produces y. So this little bit at the end, that's your classifier, and its parameters phi are simply the combination of h_i, the hidden activations, and its own parameters theta_p. It has its own parameters, it's a little neural net, so that's theta_p, and it takes in the hidden activations from the RNN encoder, and that's h_i. So that is what phi is for this RNN meta-learner. Now, there are other kinds of meta-learners you could devise, and they will have different
notions of phi, but this is kind of the simplest: the parameters that you, quote-unquote, "learn" for a new task are simply the hidden state of this RNN and the parameters of that top layer, so the process of learning a new task basically amounts to just running the RNN forward and getting the hidden state. So just to recap precisely how this works and how it's trained: we have an RNN, it reads in a sequence of images and their labels, and it produces a hidden state; that hidden state goes to a little neural network that takes in a test image and produces its label. The meta-training process trains the parameters of all of these networks: it trains the parameters of the RNN encoder, and it trains the parameters of that little thing at the end. When you go to the target task, which we call meta-test time, you get a training set for the target task. That training set is encoded using the RNN encoder, which produces a new h_i. That new h_i is then concatenated with the parameters of the little classifier at the end, which is not adapted to the new task, and that's phi, and your prediction depends only on phi. So in practice, this is a very long way of explaining something very simple, which is that you just run this RNN forward and you get an answer, but this is the explanation of how it relates to meta-learning.
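To make this data flow concrete, here is a minimal sketch of the recipe in Python with NumPy. One important simplification to flag: the "encoder" below is a hand-designed per-class summary rather than a meta-trained RNN, so it only illustrates the interface — read in a few-shot training set, produce phi, then predict on a test input. All names here are illustrative assumptions.

```python
import numpy as np

def encode_train_set(train_xs, train_ys, num_classes):
    # Stand-in for f_theta: read the few-shot training set of (x, y) pairs
    # and produce a "hidden state" h. Here h is just the per-class mean of
    # the inputs; a real meta-learner would produce h with a trained RNN
    # or Transformer encoder whose parameters theta are meta-trained.
    h = np.stack([train_xs[train_ys == c].mean(axis=0)
                  for c in range(num_classes)])
    return h

def classify(h, x_test):
    # Stand-in for the little classifier at the end: its "parameters" phi
    # are (h, theta_p); here it simply predicts the class whose summary
    # in h is closest to the test input.
    return int(np.argmin(np.linalg.norm(h - x_test, axis=1)))

# "Learning" a new task at meta-test time is just a forward pass: encode
# the new task's training set, then predict on its test inputs.
xs = np.array([[0.0, 0.0], [0.2, 0.0], [5.0, 5.0], [5.0, 5.2]])
ys = np.array([0, 0, 1, 1])
h = encode_train_set(xs, ys, num_classes=2)
pred = classify(h, np.array([0.1, 0.1]))
```

The point of the sketch is the shape of the computation: nothing is gradient-updated at meta-test time; adaptation to the new task is just running the encoder forward over the new training set.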
CS_285_Andrea_Zanette_Towards_a_Statistical_Foundation_for_Reinforcement_Learning.txt
Thanks, Kevin. So this is going to be a lecture with a different flavor compared to the ones that you've seen before; in particular, it will be much more focused on understanding the theoretical foundations of some of the reinforcement learning algorithms and protocols that you've seen in class. Now, if we take a big step back and look at all the algorithms that you've seen in class, and think about potential applications to the real world, you will see that there are still some challenges. One challenge is, for example, designing stable reinforcement learning algorithms; in particular, this might require designing certain tricks to ensure stability of the reinforcement learning algorithms, and often it translates into tuning some hyperparameters to achieve a certain performance. Another key issue with applying reinforcement learning paradigms to the real world is data efficiency: in general, reinforcement learning algorithms are extremely data hungry, and they require much more data than, for example, algorithms that we commonly use in supervised learning. Then there is the issue of generalization: often a specific algorithm is tuned on, and learns, a specific task, but it is much more difficult to design general-purpose algorithms that can perform well across completely different tasks. And another issue is computational efficiency: if you've done some of the homeworks in the class, you will have seen that sometimes it takes quite a long time to train. Now, all these issues are kind of specific to reinforcement learning, but they are issues that prevent reinforcement learning from being applied more broadly to real-world problems, where issues like stability, convergence, and sample efficiency become really fundamental. In particular, if you're interacting with the real world, you would like some predictability of what the algorithm is going to do, and also
samples are generally quite expensive, because they amount to interaction with the real world. So in this talk we will take a step back and try to understand some of the foundations of reinforcement learning algorithms. Why do we want to develop a theory for reinforcement learning? Well, perhaps the most basic motivation is that the key basic protocols that people use in reinforcement learning are really algorithms that are motivated by theory: for example, value iteration, policy iteration, upper-confidence-bound exploration, REINFORCE, policy gradients. Those are all algorithms that are at least somewhat inspired by theory, and oftentimes they come with guarantees, at least in their simplest form. There are also some success stories of translating algorithms that were developed from theory into something more practical; one example is randomized least-squares value iteration, which you might know as bootstrapped DQN. Moreover, theory can give you considerations that apply not just to the specific problem that you're trying to solve, but more broadly to a wide class of problems, and I would say to the field of RL as a whole, and it also helps uncover fundamental limits, for example things that you cannot do; we will see an example of that today. Now, what questions would we like to ask from a theoretical point of view? Well, ideally we would like to have some form of guarantee for an algorithm that we are studying. For example, if you're proposing a new algorithm, you would like to understand whether it converges, and that's kind of the primary concern that you might have when you go ahead and want to apply it to your problem. You would like to understand how to choose the hyperparameters, whether there is any formula or trade-off that you need to consider. Another question is how much data the algorithm needs to collect in order
to achieve a certain level of performance, so how many interactions; and you might also be concerned with things like computational complexity, so the running time of the algorithm. This is what you would like to study, but in reality, answering those questions is extremely challenging. For example, for most of the deep RL algorithms it's not even possible to prove convergence, because at the end of the day the basic temporal difference schemes, or TD with experience replay and target networks, are not always guaranteed to converge, and so immediately we have a challenge. It turns out that answering those questions is generally extremely difficult, and so what you will see today is that there is a huge gap right now between the practical algorithms that you've seen in the class and some of the considerations that we will go through today. But at any rate, I will focus mostly on understanding the statistical aspects, so how many samples you need to learn certain problems, and I will look into three different macro topics. One is trying to understand which reinforcement learning problems are easy and which are hard, and whether we can learn faster on easier problems. Then I will focus on understanding the interplay between algorithms and function approximation, so the issue of generalization; I will talk briefly about statistical limits, what you cannot do with reinforcement learning algorithms, and also briefly about offline reinforcement learning. Now let's get to the first part: understanding which problems are easy and which problems are hard. The setting that we consider here is the exploration problem; I think most of the class that you've gone through is about exploration algorithms — think about the standard online setting, DQN, and so on. In this setting, you have a reinforcement learning agent that starts with an empty dataset, and there is an
interaction for H steps, until the end, for example, of a game; this interaction is ongoing and continues for a number of episodes, and you would like to measure how quickly the reinforcement learning agent is learning. Now, intuitively, the reinforcement learning agent starts with a policy that might be suboptimal. If it is playing Atari, the first policy is going to be bad if you start with an empty dataset, but progressively it is going to learn and play policies that are better and better. What we would like to do is measure the performance of the algorithm, and the standard way to do it is to define a quantity called regret, which you might have seen in class; it's really the sum of the suboptimality gaps of the policies played by the agent. Intuitively, at least if the problem is easy, an algorithm that is learning will approach, in terms of performance, the value of the optimal policy, but it starts out not knowing much, so the initial values of the policies that it plays are going to be low. And if we sum all the suboptimality gaps as a function of the episode, that amounts to computing the integral of this curve, so the area shaded in orange, and that's what we call the regret of the algorithm. Our goal will be to try to design an algorithm that minimizes the regret. Now, it's not clear in all cases that you can do that; for example, in deep RL it is not so clear, and so for the first part of today we will focus on problems with small state and action spaces, so problems where we have a tabular representation. And if we go back to maybe 2010, 2011, and subsequent years, in the foundations of RL there was a huge push to design algorithms that could be as efficient as possible on tabular problems in particular, and several algorithms were proposed; they had some form of regret bound that was a function of the state and action
space, in particular their cardinality, the horizon, and the number of episodes. Those regret bounds are useful because they apply broadly to any problem that is a Markov decision process: you don't need to worry about the specifics of the problem; as long as you have a finite state and action space, you have a guarantee on these algorithms. But this is also their limitation, in the sense that it is not clear whether a certain algorithm would perform better or worse if the problem had a certain structure, and this is what we see in practice: the performance of reinforcement learning algorithms varies greatly, even for the same algorithm, on problems that are quite different. So we would like to derive some systematic understanding of which problems are difficult and which problems are easy to explore in reinforcement learning. Now, from a historical point of view, there was a lot of effort into improving those regret bounds, until essentially we got to an algorithm that, in terms of worst-case performance, was unimprovable, meaning that it had a performance guarantee across all problems that was as good as possible given the lower bound that we knew; the performance is not improvable, there is a fundamental limit that you cannot surpass. At the same time, we know that there are classes of problems that are very different from the type of contrived construction that creates the lower bound. One example is a problem that has no dynamics, or weak memory. A problem with weak memory is a problem where the actions that you took in the past have very little impact on your state. Think about a recommender system, which is a type of contextual bandit problem; that is a situation in which this weak memory arises. In a recommender system, think about a customer on Amazon: if you make a bad recommendation, intuitively you might make a certain customer unhappy, but this
won't affect the next customer that you see, and so that's a problem with weak memory. And for bandit problems, we do know that there are specific bandit algorithms that take advantage of the structure, and they are able to learn much faster than on classical Markov decision processes. Likewise, problems that are deterministic are generally much easier; solving them is essentially a search problem. Likewise, problems where you can only move locally in the state and action space are generally easier, because if you make a mistake you can still recover somehow; one example is mountain car. Now, the question that we ask is: if we treat these problems as tabular problems, where we have an explicit representation of the state and action space and the dynamics, what do these easy problems have in common? Can we identify some common characteristics and try to measure how hard they are, and can we learn faster if the actual problem instance that we face belongs to one of these subclasses? Well, we gave a positive answer to this, and we proposed first a problem-dependent complexity measure, and then an algorithm based on it with certain specific properties. First of all, we proposed a problem-dependent complexity measure that characterizes the complexity of different reinforcement learning problems; in particular, it is defined by the interaction of the system dynamics and the value of the optimal policy. It is defined as the variance of the next-state optimal value function. This is not something that the algorithm can compute if you do not know the actual Markov decision process, because the optimal value function is unknown and the dynamics are also unknown; but nonetheless, you can design an algorithm whose performance bound scales with this quantity, which is generally unknown, and the algorithm doesn't need to know it. As a result, the algorithm is able to match the best performance for tabular Markov decision processes, meaning
that it is minimax optimal, i.e., unimprovable; but compared to the state of the art, it can also attain the optimal performance if the problem belongs to a certain class of easier problems. For example, if it is a contextual bandit problem, then the algorithm automatically matches essentially the performance of basic UCB on contextual bandits. And in addition to being analytically small on certain problem subclasses, you can evaluate the quantity numerically, and it appears that, on problems that people have considered before, it takes a value that is much smaller than the worst-case value suggested by prior bounds. So essentially it is a quantity that is both analytically small on problems that we care about and numerically small on problems that have been considered before. Now, I want to pause one second and ask if there is any technical question on this part before I move ahead. [Audience question] Yeah, the intuition: well, it really depends on the type of problem. For example, if a problem has weak memory, if it's a contextual bandit problem, what happens is that a mistake you might make in a certain state doesn't really have long-term consequences, and so the next-state value function wouldn't be too different across different states, and so essentially this quantity has to be small. You may make an error, but you only lose with the current customer, right? You don't mess up the entire long-term plan, and so this quantity ends up being smaller. Think about it as a challenge in estimating the effect of transitions: the transitions can be highly stochastic — again, in bandits they are highly stochastic — but still there is not much variability in the value of the state that you end up in, and in that case the quantity is going to be small. [Audience question] The supremum — you can relax it to an expectation over trajectories; it is a supremum in the actual work, but you can relax it. Okay, I want to give one slide that is perhaps a
bit more technical about how we go about achieving something like this. Well, exploration, at least for provably efficient algorithms, is typically achieved by adding an exploration bonus to the experienced reward. Think about DQN: exploration there is done with epsilon-greedy, at least in the most basic form, but for more sophisticated schemes, think about UCB in bandit algorithms; normally what's done is a bonus is added. Now, the bonus can take different forms. The most basic one, which prior art was using, is something that scales with the inverse square root of the number of samples; it comes from Hoeffding's inequality. But this type of exploration bonus is essentially problem-independent, meaning that it's not tied to any particular feature of the MDP, so the algorithm would explore in the same way regardless of the problem, and this won't give rise to problem-dependent bounds. Now, the ideal choice that one would like to make is to use some form of Bernstein-based concentration inequality, which does indeed contain something very similar to the quantity that we want; it would give rise to problem-dependent bounds. But there is one issue: in general, you don't know the optimal action-value function, and you don't know the transition dynamics either. So although this choice of bonus would be ideal, it's not something that you can do in practice. The way around it is sort of intuitive: try to use the empirical dynamics and some empirical estimate of the optimal value function. But several challenges arise if you try to do that. The main challenge is that generally those quantities are unknown: think about when you start — initially you know very little about the dynamics, so you have essentially no way to guess what these quantities are, and if you take the wrong guess, essentially the algorithm might not be optimistic enough, it
might not explore enough and it would just not find a good policy so what you have to do is rather to introduce some correction terms thankfully those correction terms that try to correct for your wrong estimates decay very quickly so they decay at a faster rate and so it is as if the agent was applying the correct Bernstein based concentration inequality but with one correction term that is decaying very quickly and the challenge here lies in estimating the size of the correction in particular because we have to correct some value function that is different from the optimal one and estimating those errors requires estimating how error propagates through the MDP from states that perhaps we haven't even visited much and this choice essentially gives rise to those problem dependent bounds now this is good because it does give you some initial strong understanding of whether it's possible to adapt to the problem difficulty and whether it's possible to be at the same time minimax optimal but also instance optimal on a variety of problem classes that we are interested in but the big limitation here is that of course this applies only to small state and action spaces in practice we would like to tackle problems that have a very large potentially continuous state and action space and to be clear what you've seen in the class is always in the second category as soon as you start using any form of function approximation you are in this category and so the next question that we will try to understand is what can we say about reinforcement learning with function approximation and the answer will turn out to be a bit more negative than here here we made some sort of positive progress but we will see that when you start to talk about reinforcement learning with function approximation even problems that seem to be easy might be very challenging and so to do a quick
recap practical problems always have a state space that is extremely large most states are never visited what we would like to do is to introduce some form of function approximation that can generalize knowledge from the states that we have seen to states that we have not yet observed and the hope is that we do not need to learn what to do in every state rather we need only a number of samples that is roughly of the same order as the number of parameters in our model now the folklore observation if you want that we have is that reinforcement learning algorithms that use function approximation still need a lot of samples compared to supervised learning and so we would like to ask a very basic question whether reinforcement learning is for example fundamentally more difficult than classical supervised learning and in order to study this question we consider a setting that is very similar to the offline reinforcement learning setting that you have seen in the second part of the class in the offline reinforcement learning setting you have some data set that is available and it consists of states actions rewards and successor states and we try to ask questions about for example policies that might be different from the one that generated the data set you may for example want to try to identify the optimal policy or you may try to do off policy evaluation the specific setting that we consider is one in which we allow some sort of data collection with a static distribution beforehand and the reason to do that to allow for some flexibility is because if the data set is poor intuitively we cannot do much and that's not the algorithm's fault it's just the data set maybe I have just data on a single state so we do consider a case in which you can do some form of data collection with a static policy beforehand and then we try to understand whether we can successfully predict the value of a
different policy for example or extract the value of the optimal policy now our expectation is that if the action value function has a simple representation for example if the action value function has a linear expansion and perhaps we even know the feature extractor then this should be an easy problem why well it's just by analogy with linear regression if you are solving a regression problem and I give you a feature map and I promise that the problem is realizable so the targets do have some linear expansion perhaps with some noise then you can open a textbook in statistics and you will see that standard linear regression can learn this problem very quickly however in reinforcement learning even problems that are linear don't seem to be so easy in particular there have been examples of divergence of classical TD and fitted Q iteration even on problems that are linearly realizable and in fact if you take a look at the analyses that are available for some of the basic algorithms and protocols you will see that they all make some assumptions that seem to be much stronger than just realizability and so as a matter of fact we don't know in 2020 2021 whether even the simplest linear setting is something that we can provide stable algorithms for can we provide an algorithm that for example converges converges yes because you can use LSTD but can we have any guarantee about for example the amount of samples that are required to learn even in this simple setting which is the first step after tabular problems and really to understand what's happening you need to compare supervised learning with reinforcement learning and the key difference is whether you're trying to make predictions for one time step or for many time steps this is because if you're trying to make predictions for one time step and you start with a data set that you might have collected in an intelligent way while if you're trying to just predict the
first reward and you have the promise that the reward function is linear then we know that linear regression solves this problem very quickly so we know an algorithm and we know guarantees as well and this is the most basic machine learning algorithm that you can think of however our question is what happens if we want to predict the value of a policy for multiple time steps with the guarantee that that value is actually realizable meaning that we have a feature extractor that correctly predicts the value of the target policy for some parameter vector it turns out that this problem is in the worst case extremely difficult meaning that as opposed to supervised learning you can find problems where you have this beautiful linear model and yet any algorithm would take a number of samples to make the correct predictions that is exponential in the dimensionality of the feature extractor and when I say predictions you can intend this broadly meaning that the answer would remain the same if you are trying to for example identify an optimal policy I want a policy that does better than random you still need a number of samples that in the worst case might be exponential in the dimensionality of the feature extractor and so we see that there is a strong separation between what is achievable in supervised learning which is concerned with making predictions so if you want a one-step prediction and reinforcement learning which considers sequential processes as the horizon becomes longer the problems can become exponentially harder this doesn't mean that all problems are exponentially harder but it does tell you that even problems that appear to be simple problems that should be linear and so should be easily learnable you will not be able to find an algorithm that has guarantees even on those problems and so for an algorithm to learn there has to be
some additional special structure and indeed this is something that we sort of see that indeed poor sample complexity is a major issue in RL and this issue is also related to divergence but the contribution here is really to identify that those issues are algorithm independent they are information theoretic meaning that there is some fundamental hardness in the reinforcement learning problem that applies broadly to all algorithms that you can come up with you will not be able to find an algorithm that is able to solve all problems even if they are as simple as linear and this issue has been studied more broadly by some other important papers and some have similar results and if you want to reinterpret this second section you can also take a look at it from the point of view of online RL I might have an action value function think about what you have in DQN and instead of having a deep neural network you just have a simple linear map and I promise to you that the problem really does have a linear action value function still you will not be able to find an algorithm that can learn polynomially fast in a problem that is linear and so the main takeaway here is that linear regression is easy in statistics but the equivalent in reinforcement learning from a model free point of view is already out of reach and so we have to be not too optimistic about the type of problems that we can solve and there has been indeed a big effort trying to understand what additional conditions are necessary in order to have polynomial sample complexity as we have for many statistical algorithms in statistics now before we move forward is there any question on this second section yeah I think the thing that is really tricky is that this is really a model free point of view right so we're looking at whether we have enough information on the Q values but somehow you can have
problems that are extremely complex but the action value function ends up being simple it ends up being sort of linear so the actual counterexample has essentially a reward function that is very complex it's like a ReLU neural network that is nonzero only in a very hidden area of the state space which is exponentially large the dynamics are very complex and the dynamics are sort of engineered in a way that they linearize the reward function in the sense that once you do many steps of the Bellman backup you end up with an action value function that looks linear and so it looks like the problem is easy because that thing is really linear but what you're really trying to do is to identify where the reward function is nonzero in a sort of exponentially large sphere I don't know if I can say much more than this but it's really related to the high dimensionality there's a lot of space in high dimensions if you take random vectors in high dimensions they will almost always be orthogonal and so you can sort of hide information in very high dimensions it's not something that is obvious in two or 3D you really have to go high dimensional yeah the policy pi is fixed it could be you know think about predicting the value of the optimal policy it could be fixed or it could be you know the optimal one I do want to spend one slide talking about indeed what happens with more general function approximation in terms of positive results well if you open some book about statistics high dimensional statistics at least for regression you will see that there are performance guarantees that are a function of the very function class you're using so if you're using kernel methods convex functions or other things you will have some performance bound some trade-off between approximation error and statistical complexity and the statistical complexity is normally expressed in notions like Rademacher complexity VC dimension
and other things but the same is not sufficient in RL it looks like the interplay between the Bellman operator and the very same function class that you use to model the action value function for TD methods is really kind of important and so what people have focused on to understand some foundations of RL is not just the complexity of the function class this is not sufficient like we saw before we have a linear map and that is already too hard well there has to be something that makes the problem learnable and that's really the interaction between the Bellman operator and the action value function the reason why that's essential for TD methods is that you're taking an action value function you're creating the Bellman backup out of it and you're fitting it with the same function class and you want that error to be zero and so the interaction becomes critical and so many notions have been proposed to try to understand in what cases you can do this learning in a way that is stable and statistically efficient but I won't go into that instead I'm going to jump to some offline reinforcement learning offline reinforcement learning you've seen it already in the class but just to do a quick recap there's already a lot of data out there so we would like to leverage it how can we do so without collecting further data the setting is the same as the one that you've seen in class we have a historical data set of states actions rewards and successor states and the task is how do we find the policy with the highest value what does it even mean to find the policy with the highest value given a data set well the highest value of course is the policy that is the optimal policy but your data set may contain no information about the optimal policy and so it's like we have to make some best effort in trying to identify a good policy and lower our expectations
and perhaps not find the optimal policy the main challenge I think you have discussed this in class as well is that of distribution shift meaning that the best case scenario which never happens is one in which your data set has uniform samples all over the state and action space in which case you could just try to evaluate policy pi 1 pi 2 and pi 3 and pick the best but normally what you're given is trajectories that might be for example from humans and so they're generally narrowly concentrated and that's what we call the problem of partial coverage and in the example here the data set may have a lot of information about pi 1 no information about pi 2 and some information about pi 3 and somehow you have to choose between the three and figure out which policy is the best and the thing that I want to highlight today is how do we even measure this coverage how much information the data set contains to find a good policy and intuitively the way to solve this problem is precisely what you've seen in class or actually there are two ways if you want one is to try to stay close to the policies that generated the data set some form of behavioral cloning another way is to attempt to estimate the uncertainty about your predictions so generally your data set is generated by certain policies that are narrowly concentrated and they give you data about states actions rewards and transitions and you would try to fit some form of model and try to use the model to make predictions about the value of other policies now the model doesn't actually need to be a model you may do this in a model free way but you're still using some data that has been generated by some policies and making predictions about other policies now of course what you would like to do is to pick the policy that has the highest value but that's not known instead you would like to return a policy
that looks like it has good value but that you're also reasonably confident about and so one way to look at it is as a procedure that tries to find some optimal tradeoff between the value of the policy that is returned and the uncertainty about this policy think about the bias-variance tradeoff in statistics you would like to have an algorithm that has an optimal bias-variance trade off the bias is generally unknown the variance you can try to estimate in offline RL there is a similar notion if you want you would like to balance the value of the policy that you return which is unknown to you with its uncertainty and so guarantees for offline reinforcement learning algorithms generally look something like this the algorithm should return with very high probability the best tradeoff between the value of the policy that it returns and its uncertainty which essentially amounts to finding the point with the highest lower bound in a sense offline reinforcement learning algorithms the ones that you've also seen in the class in some way try to get to this optimal tradeoff now one big question is what is this constant C that depends on the policy well if you have seen concentration inequalities in statistics you might be already familiar with the term one divided by square root of n which is what arises from for example Hoeffding's inequality but here there is an extra coefficient that depends on the policy which should encapsulate the distribution shift now this coefficient depends on the actual algorithm and it depends for example on the function classes that you're using and the interaction with the Bellman operator and as a concrete instantiation you can take for example softmax policies think about those that arise from natural policy gradient and again for simplicity linear action value functions and those are two distinct parameters and you can design algorithms that essentially
try to solve this offline reinforcement learning problem and will have some guarantees that are precisely of this form where this coverage coefficient has a certain analytical expression and the analytical expression highlights really the interplay between the information that is contained in the data set and the target policy that you're trying to estimate in particular the information contained in the data set is reflected in the covariance matrix which is a somewhat familiar object from statistics in linear regression you compute some covariance matrix the covariance contains the amount of information that you know about the problem and this interacts through a certain norm in its inverse with the expected features of the target policy that you are considering for the optimization this quantity you know but this one is generally not computable right so this sort of tells you how the two interact to create confidence intervals for off policy evaluation that you can use to find a good policy what is surprising and perhaps not surprising but important about this is that this coverage which is also called concentrability doesn't have an expression in the state and action space if you open some of the papers that do statistical analysis you would often find a ratio between visitation distributions the discounted visitation distribution of the target policy versus the behavioral policy and that is a ratio in the state and action space this one has none of that it's all projected down to a lower dimensional feature space where coverage can indeed be much larger think about having a covariance matrix that is the identity that would certainly make the coverage coefficient very small I don't think I want to talk about how we achieve this and the technicalities I think the important part is how would for example a guarantee in offline RL look like in terms of the actual statement which is what
you saw in the prior slide but if you want at a very high level we're trying to avoid penalizing actions directly and we want to retain a very low statistical complexity and we operate in the parameter space to compute these confidence intervals and all this is put into a big actor-critic algorithm that uses natural policy gradient and some pessimistic version of TD with target networks where the parameters are moved in a way that computes a pessimistic solution I'm going to skip the algorithm now one limitation of this study that you've seen is that it applies to the linear setting of course the question is what happens if I use a richer set of functions for example offline reinforcement learning with more general function approximation such as the ones that you've seen in class can we give any guarantees for those the answer is unfortunately there is a huge gap in the sense that for the type of algorithms that you've seen in the class it is very difficult to prove guarantees because they may not converge there are variants that you can provide guarantees for but the big problem is that it's not clear how you would implement them and that's kind of an issue of all of RL with general function approximation if you want guarantees it's not clear how you would come up with an algorithm that you can actually implement and so oftentimes what is analyzed is a conceptual version that is not the same as the actual algorithm that is implemented in practice now before I head to the conclusion any question on this third part yeah of course the horizon well the covariance if this is a finite horizon problem the covariances can really change through time steps it is the covariance of the features so a sum of phi phi transpose feature feature transpose same as in linear regression the same object appears yeah what do you mean by epsilon optimal policy this won't be
optimal right because it's offline RL so it really depends on your data set the policy that you find may be very crappy if your data set doesn't have good information suppose your data set is just from a policy that is narrowly concentrated and the behavioral policy is bad and this feature matrix is like rank one it is so concentrated in one direction then that doesn't really tell you much and so you won't be able to find a good policy but somehow this is reflected precisely in that statement right because policies that are very good would have a coverage coefficient that is very large and so you will not know their value and no algorithm would return them yeah no I think if you want this is the epsilon that you're talking about right this is the policy that we will return you can always think about the optimal policy in the supremum I want to evaluate this expression at the optimal policy I can do that but then this guy will become the value the sort of coverage of the optimal policy and so this is your epsilon right this is epsilon suboptimal compared to the optimal policy but epsilon can be huge basically I'm telling you the value of epsilon given the data set that we have this C is really a way to measure how much information is contained in the data set and as a result what performance you can expect at the initial state I would say the value of the policy at the initial state that's the one that you care about and your estimates may be more off in states that for example you don't visit or even in states that you visit but they might compensate but what you really care about is performance at the starting point okay yeah right right so I think you have seen something similar with CQL basically on the x axis let's put different policies to be clear you're going to have uncountably many but let's you know put them on a graph and on the y axis we're going to plot the
value of the policies yeah yeah the expected discounted sum of the rewards now what you wish you knew is the actual value of the policies right the green line is the value of the policies so if you knew the actual MDP you would do value iteration and you would find this guy the optimal policy unfortunately what you have is a data set now how to use the data set is up to you intuitively if you're doing model based you can try to fit some model and try to use that to make predictions your model may be good it may be bad it might generally be good to make predictions about the value of the policies that generated the data set it might be very bad to predict the value of different policies you may not use a model based version and you may do something different for example you might adopt a model free approach like CQL and intuitively if you could what you would do is the following try to come up with an estimator for the value of different policies and try to measure also the uncertainty the uncertainty is really this band here right and this curve may move up and down depending on the data set but you would like to try to estimate the uncertainty about your predictions intuitively the uncertainty will be smaller for on policy evaluation for the very policy that generated the data set you have a bunch of data you just take the average but it will be very bad for a policy that is very different that visits completely different areas of the state and action space right your data is narrowly concentrated you have no idea about a policy that does something in a completely unknown area of the state and action space so even if that policy by doing you know fitted Q looks good you have to take into account that you are extremely uncertain about this value and so somehow you would like to penalize it and so you would like to say oh for this policy I'm too uncertain I'm going to assign it a very low value and if I do this procedure the
optimal tradeoff is to maximize this lower bound a lower bound on the performance of the policies and this gives you abstractly this expression here which will become concrete as soon as you consider a specific algorithm and a specific type of function class and MDP and that will determine the C pi but yeah yeah of course thank you I think what's important here if you want one takeaway is really how would a guarantee of an offline algorithm look like it would look something like this some trade-off between values of course you want the highest value but policies that have high value you might be very uncertain about them and so there's going to be some trade off and just to make it if you want more clear for the policies that are in the data set generally you have a lot of data right and so this C pi is going to be something like one you have a lot of data so n is big but this quantity is small and so what this expression tells you is that you should do better than the policies that generated the data set if you design the algorithm correctly this is sort of the minimum that you would expect you want to do better than behavioral cloning right and this expression tells you exactly that if I put pi as the behavioral policy that generated the data set C pi will be small it's going to be one this expression will be smaller and this tells me I do better than behavioral cloning which is what we would expect okay time to sum up we have seen three things one is most problems in RL are not worst case they belong to a much easier class of problems we have seen that as soon as we move to function approximation RL is much more difficult than standard supervised learning and then we have seen what type of guarantees we can obtain for offline reinforcement learning algorithms and to conclude I think after the presentation this is much more clear there is a huge gap between theory
and practice I think working at the intersection is of course difficult right because you have to please both communities but it might make reinforcement learning more applicable in the sense that there are going to be compromises to be made you won't top any benchmark but you might be able to come up with some algorithm that has some analysis and at least in simple cases some stability guarantees and this will be kind of critical in order to apply reinforcement learning to very different problems you would feel much more confident to apply an algorithm if it is backed by some form of guarantees that apply even in a restricted setting and generally theory will not tell you how to tune hyperparameters and all that so it will not necessarily inform you on the specifics of any given application but it can give you more broad insights and foundations that apply to the field we have seen some fundamental lower bounds before and yeah I think with this I conclude and this is everything I had for today so thank you for your attention I'm going to ask if there is any final question thank you for coming here
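The coverage coefficient discussed in the offline RL portion of this lecture can be sketched numerically to build intuition. The following Python toy is not from the lecture: the one-hot feature map, the behavior policy, and all the probabilities are made-up assumptions, and the quantity computed is just one concrete reading of the concentrability idea, namely the norm of the target policy's expected features in the inverse empirical feature covariance. A policy close to the data gets a small coefficient and a policy supported on rarely seen actions gets a much larger one.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 4, 5000

def features(actions):
    # toy feature map: one-hot over d discrete actions (a made-up choice)
    return np.eye(d)[actions]

# behavior policy concentrated on actions 0 and 1, i.e. narrow data coverage
behavior_probs = np.array([0.6, 0.38, 0.01, 0.01])
data_actions = rng.choice(d, size=n, p=behavior_probs)
Phi = features(data_actions)

# empirical covariance of the features in the data set, with a tiny ridge
Sigma = Phi.T @ Phi / n + 1e-6 * np.eye(d)

def coverage(target_probs):
    # || E_pi[phi] ||_{Sigma^{-1}}: large when the target policy's expected
    # features point in directions the data set barely covers
    mu = features(np.arange(d)).T @ target_probs
    return float(np.sqrt(mu @ np.linalg.solve(Sigma, mu)))

c_near = coverage(behavior_probs)                  # same policy as the data
c_far = coverage(np.array([0.0, 0.0, 0.5, 0.5]))   # mostly unseen actions
print(c_near, c_far)
```

With this toy data c_near comes out close to one while c_far is several times larger, which mirrors the remark above that the behavioral policy has a C pi of about one while policies that live in poorly covered regions have a much larger coefficient and hence a much weaker guarantee.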
CS_285_Deep_RL_2023
CS_285_Lecture_5_Part_3.txt
so in the next portion of today's lecture we're going to talk about how we can modify the policy gradient calculation to reduce its variance and in this way actually obtain a version of the policy gradient that can be used as a practical reinforcement learning algorithm the first trick that we'll start with is going to exploit a property that is always true in our universe which is causality causality says that the policy at time t prime can't affect the reward at another time step t if t is less than t prime this is another way of saying that what you do now is not going to change the reward that you got in the past now it's important to note here that this is not the same as the markov property the markov property says that the state in the future is independent of the state in the past given the present the markov property is sometimes true sometimes not true depending on your particular temporal process causality is always true causality just says that rewards in the past are independent of decisions in the present so this is not really an assumption this is always true for any process where time flows forward the only way this would not be true is if you had time travel and you could travel back into the past and change your action but we're not allowed to do that all right so i'm going to claim that the policy gradient that i've derived so far does not actually make use of this property and that it can be modified to utilize it and thereby reduce variance you can take a moment to think about where this property might be introduced the way that we're going to see this is we're going to rewrite the policy gradient equation i've not changed it in any way i've simply rewritten it and what i've done here is i used the distributive property to distribute the sum over rewards into the sum over grad log pis so you can think of it as taking that first set of parentheses over the sum of grad log pis and taking the outer
parenthesis and wrapping it around the rewards so this gives me the sum over all of my samples from i equals 1 to n times the sum over time steps from 1 to capital t of grad log pi at that time step multiplied by another sum over another variable t prime from one to capital t of the rewards so that means that at every time step i multiply the grad log probability of the action at that time step t by the sum of rewards over all time steps in the past present and future now at this point you might start imagining how causality fits into this we're going to change the log probability of the action at every time step based on whether that action corresponded to larger rewards in the present and in the future but also in the past and yet we know the action at time step t can't affect rewards in the past so that means that those other rewards will necessarily have to cancel out in expectation meaning that if we generate enough samples eventually we should see that all the rewards at time steps t prime less than t will average out to a multiplier of zero and they will not affect the log probability at this time step in fact we can prove that this is true the proof is somewhat involved so i won't go through it here but once we show that this is true then we can simply change the summation of rewards and instead of summing from t prime equals one to capital t simply sum from t prime equals t to capital t basically discard all the rewards in the past because we know the current policy can't affect them now we know they'll all cancel out in expectation but for a finite sample size they wouldn't actually cancel out so for a finite sample size removing all those rewards from the past will actually change your estimator but it will still be unbiased so this is the only change that we made now having made that change we actually end up with an estimator that has lower variance the reason it has lower variance is very simple we've removed some of the terms from the sum which means
that the total sum is a smaller number and expectations of smaller numbers have smaller variances now one aside that i might mention here is that this quantity is sometimes referred to as the reward to go you can kind of guess why that is it's the rewards from now until the end of time which means that it refers to the rewards that you have yet to collect basically all the rewards except for the ones in the past are the reward to go and we sometimes use the symbol q hat i comma t to denote the reward to go now take a moment to think back to the previous lecture where we also used the symbol q the reward to go q hat here actually refers to an estimate of the same quantity as the q function that we saw in the previous lecture we will get much more into this in the next lecture when we talk about actor-critic algorithms but for now we'll just use a similar symbol with a hat on top to denote that it's a single sample estimate all right now the causality trick that i described before you can always use it you'll use it in homework two it reduces your variance there's another slightly more involved trick that we can use that also turns out to be very important to make policy gradients practical and it's something called a baseline so let's think back to this cartoon that we had where we collect some trajectories and we evaluate the rewards and then we try to make the good ones more likely and the bad ones less likely that seemed like a very straightforward elegant way to formalize trial and error learning as a gradient ascent procedure but is this actually what policy gradients do well intuitively policy gradients will do this if the rewards are centered meaning that the good trajectories have positive rewards and the bad trajectories have negative rewards but this might not necessarily be true what if all of your rewards are positive then the green check mark will be increased its probability will be increased the yellow check mark will be increased a little bit and the
red x will also be increased but a tiny bit so intuitively it kind of seems like what we want to do is we want to center our rewards so the things that are better than average get increased and the things that are worse than average get decreased for example maybe we want to subtract a quantity from our reward which is the average reward so instead of multiplying grad log p by r of tau we multiply by r of tau minus b where b is the average reward this would cause policy gradients to align with our intuition this would make policy gradients increase the probability of trajectories that are better than average and decrease the probabilities of trajectories that are worse than average and then this would be true regardless of what the reward function actually is even if the rewards are always positive that seems very intuitive but are we allowed to do that it seems like we just arbitrarily subtract a constant from all of our rewards is this even correct still well it turns out that you can show that subtracting a constant b from your rewards in policy gradient will not actually change the gradient in expectation although it will change its variance meaning that for any b doing this trick will keep your gradient estimator unbiased here's how we can derive this so we're going to use the same convenient identity from before which is that p of tau times grad log p of tau is equal to grad p of tau and now we're going to substitute this identity in the opposite direction so what we're going to do is we're going to analyze grad log p of tau times b so if i take the difference r of tau minus b and i distribute grad log p into it then i get a grad log p times r term which is my original policy gradient minus a grad log p times b term which is the new term that i'm adding so let's analyze just that term the expected value of grad log p times b which means that it's the integral of p of tau times grad log p of tau times b and now i'm going to substitute my identity back in
so using the convenient identity in the blue box over there i know this is equal to the integral of grad p of tau times b now by linearity of the gradient operator i can take both the gradient operator and b outside the integral so this is equal to b times the gradient of the integral over tau of p of tau but p of tau is a probability distribution and we know that probability distributions integrate to one which means that this is equal to b times the gradient with respect to theta of one but the gradient with respect to theta of one is zero because one doesn't depend on theta therefore we know that this expected value comes out equal to zero in expectation but for a finite number of samples it's not equal to zero so what this means is that subtracting b will keep our policy gradient unbiased but it will actually alter its variance so subtracting a baseline is unbiased in expectation the average reward which is what i'm using here turns out to not actually be the best baseline but it's actually pretty good and in many cases when we just need a quick and dirty baseline we'll use average reward however we can actually derive the optimal baseline the optimal baseline is not used very much in practical policy gradient algorithms but it's perhaps instructive to derive it just to understand some of the mathematical tools that go into studying variance so that's what we're going to do in the next portion the next portion will go through a mathematical calculation where we'll actually derive the expression for the optimal baseline to optimally minimize variance so to start with we're going to write down variance so if you have the variance of some random variable x it's equal to the expected value of x squared minus the square of the expected value of x so we can use the same equation to write down the variance of our policy gradient so here's our policy gradient the variance of the policy gradient is equal to the expected value of the quantity inside the bracket
squared minus the whole expected value squared now the second term here is just the policy gradient itself squared right because we know that r of tau minus b in expectation ends up not making a difference so basically the actual expected value of grad log p times r minus b is the same as the expected value of grad log p times r so we can just forget about the second term changing b is not going to change its value in expectation so it's really only the first term that we care about all right i'm going to change my notation a little bit just to declutter it so i'll just use g of tau in place of grad log p of tau so if you see g at the bottom that's just grad log p i just wanted to write a shorter symbol so i know that the second term in the variance doesn't depend on b but the first term does so then in order to find the optimal b i'm going to write down the derivative d var db and solve for the best b so the derivative of the second part is 0 because it doesn't depend on b so that just leaves the first part ddb of the expected value of g squared times r minus b squared now i can expand out the quadratic form and i get ddb of the expected value of g squared r squared minus 2 times the expected value of g squared rb plus b squared times the expected value of g squared so all i've done here is i've just expanded out the quadratic form r minus b squared distributed the g squared into it and then pulled constants out of expectations now looking at this equation we can see the first term doesn't depend on b but the second two terms do so we can eliminate the first term and focus on the second two terms when we take the derivative with respect to b the minus two term is linear in b and the plus term is quadratic in it so we get the derivative is equal to negative 2 times the expected value of g squared r plus 2b times the expected value of g squared now we can push the constant term to the right hand side and solve for b and we get this equation b is equal to the expected value
of g squared r divided by the expected value of g squared right so i've just solved for b when the derivative is equal to zero so this is the optimal value of b now looking at this thing you could try to imagine what is the optimal baseline really intuitively well perhaps one thing that might jump out at you is that the baseline now actually depends on the gradient which means that if the gradient is a vector with multiple dimensions if you have multiple parameters you'd like to have a different baseline for every entry in the gradient so if you have a hundred different policy parameters you'll have one value of the baseline for parameter one a different value of the baseline for parameter two and intuitively looking at this equation the baseline for each parameter value is basically the expected value of the reward weighted by the magnitude of the gradient for that parameter value so it's a kind of re-weighted version of the expected reward it's not the average reward anymore it's a re-weighted version of it it's re-weighted by gradient magnitudes so this is the baseline that minimizes the variance now again in practice we often don't use the optimal baseline we typically just use the expected reward but if you want the optimal baseline this is how you would get it all right so to review what we've covered so far we talked about the high variance of policy gradient algorithms we talked about how we can lower that variance by exploiting the fact that present actions don't affect past rewards and we talked about how we can use baselines which are also unbiased and we can analyze variance to solve for the optimal baseline
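to make the variance-reduction tricks above concrete here is a minimal numpy sketch it is my own illustration rather than the course's reference code and the function names are hypothetical it computes the reward to go q hat subtracts a quick and dirty average-reward baseline and also evaluates the per-parameter optimal baseline b equals the expected value of g squared r divided by the expected value of g squared

```python
import numpy as np

def reward_to_go(rewards):
    # q_hat_t = sum of rewards from t to T; by causality, past rewards are discarded
    return np.cumsum(rewards[::-1])[::-1]

def pg_estimate(grad_log_probs, rewards):
    """Monte Carlo policy gradient from sampled trajectories.

    grad_log_probs: list of (T, n_params) arrays, grad log pi(a_t | s_t) per step
    rewards:        list of (T,) reward arrays, one per trajectory
    """
    # quick-and-dirty constant baseline: average total reward across trajectories
    b = np.mean([r.sum() for r in rewards])
    per_traj = []
    for g, r in zip(grad_log_probs, rewards):
        q_hat = reward_to_go(r)
        # multiply each step's grad log prob by its (baselined) reward to go
        per_traj.append((g * (q_hat - b)[:, None]).sum(axis=0))
    return np.mean(per_traj, axis=0)

def optimal_baseline(traj_grads, traj_returns):
    # b* = E[g^2 r] / E[g^2], computed separately for each parameter dimension
    g2 = traj_grads ** 2  # (N, n_params)
    return (g2 * traj_returns[:, None]).mean(axis=0) / g2.mean(axis=0)
```

note that subtracting a constant b from the reward to go is still unbiased by the same argument as above in practice a state-dependent baseline works even better which is where actor-critic methods pick up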
CS_285_Deep_RL_2023
CS_285_Lecture_13_Part_3.txt
all right next let's talk about some actual exploration algorithms that we could use in deep reinforcement learning so to recap the classes of exploration methods that we have are optimistic exploration methods which basically say that visiting a new state is a good thing this requires estimating some kind of state visitation frequency or novelty just like we had to count the number of times that we took each action in the bandit setting and this is typically realized by means of some kind of exploration bonus we have thompson sampling style algorithms so these are algorithms that learn a distribution over something either a model a q function or a policy just like we learned a distribution over bandit parameters before and then they sample and act according to that sample and then we have information gain style algorithms which reason about the information gained from visiting new states and then actually choose the transitions that lead to large information gain so let's start with the optimistic exploration methods so in the bandit world we saw that one rule that we could use to balance exploitation and exploration is to select an action based on the arg max of its average expected value empirical estimate based on what we've seen before plus the square root of 2 times log t divided by n a and the important thing here is really the denominator so you're basically assigning bonuses based on some function of the inverse of the number of times you've pulled that arm so this is essentially a kind of exploration bonus and the intuition in reinforcement learning is that we're going to construct an exploration bonus that is not just for different actions but actually also for different states and the thing is lots of different functions will work for this exploration bonus as long as they decrease with n of a so don't worry too much about the fact that it's a square root or that it has a 2 times log t in the numerator the important thing is that it's some quantity
that decreases rapidly as n of a increases okay so can we use this idea with mdps one thing we could do is uh essentially extend it to the mdp setting and create what is called count based exploration so instead of counting the number of times you've pulled some arm which is n of a you would count the number of times you've visited some state action tuple s comma a or even just the number of times you visited some state and of s and use it to add an exploration bonus to your reward so the ucb estimate in the bandit case is estimating the reward with an additional exploration bonus in the mdp case we will also estimate reward with an exploration bonus so what that means is that we will define a new reward function r plus which is the original reward plus this bonus function applied to n of s and the bonus function is just some function that decreases with n of s so maybe it's the square root of 1 over n of s and then we would simply use our plus instead of r as our reward function for any rl algorithm that would care to use and of course in this case r plus will change as our policy changes so maybe every episode would update r plus so we need to make sure that our rl algorithm doesn't get too confused by the fact that our rewards are constantly changing but other than that this is a perfectly reasonable way to kind of extend this ucb idea into the mdp setting so it's a simple addition to any rl algorithm it's very modular um but you do need to tune a weight on this bonus because you know if you do uh 1 million divided by n of s that'll work very differently than if you do you know 0.001 divided by n of s so you need to decide how important the bonus is relative to the reward and of course you need to figure out how to actually do the counting so let's talk about the second problem what's the trouble with counts the trouble with counts is that the notion of account while it makes sense in small discrete mdps doesn't necessarily make sense in more complex mdps so 
let's look at this frame for montezuma's revenge so clearly if i find myself seeing the same exact image 10 times then the count for that image should be 10. but what if only one thing in the image varies so what if for example the the guy here just stands in the same spot but the skull moves around every location for the skull is now a totally different state now if the guy is moving and the skull is moving what are the chances that they would ever be in the same exact combination of locations twice so maybe they'll be in very similar states but they might not be in exactly the same spot twice and there in general if you have many different factors of variation you get combinatorially many states which means the probability of visiting the same exact state a second time becomes very low so all these moving elements will cause issues what about continuous spaces there the situation is even more dire so if you imagine that robotic hand example from before now the space is continuous so no two states are going to be the same so the trouble is in these large rl problems we basically never see the same thing twice which makes counting kind of useless so how can we extend counts to this complex setting where you either have a very large number of states or even continuous states well the notion that we want to exploit is that some states are more similar than others so even though we never visit the same exact state twice there's still a lot of similarity between those states and we can take advantage of that by using a generative model or a density estimator so here's the idea we're going to fit some density model to p theta of s or p theta of s comma a depending on whether we want state counts or state action counts so a density model could be something simple like a gaussian or it could be something really complicated we'll talk about the particular choice of density model later but for now we just need it to be something that produces an answer to the question what 
is the density or the likelihood of this state now if you learn a density model as for example some highly expressive model like a neural network then p theta of s might be high even for totally new states that you've never seen before if they're very similar to states that you have seen before so maybe you've never seen the guy and the skull in precisely this position but you've seen the guy in that position and you've seen the skull in that position just not together so that state will probably have a higher density than if something totally weird happened like if for example you suddenly picked up the key you know if in all prior states the key is always present now suddenly it's absent that'll have a very low density so the question that we could ask then is can we somehow use p theta of s as a sort of pseudo count now it's not a count because it's not literally telling you how many times you've visited a particular state but it kind of looks a little bit like a count in that if you take p of s and you multiply it by the total number of states you've seen that will be a kind of count for that state so if you have a small mdp where practical counting is doable then the probability of a state is just the count for that state divided by the total number of states you've seen so it's n of s divided by n so the probability relates the count to the total number of states you've visited which means that after you see the state s your new probability is the old count plus one divided by the old n plus one so here's the question can you get p theta of s and p theta prime of s to obey these equations so instead of keeping track of counts we keep track of p theta of s but we will update theta when we see s to get a new theta prime meaning we'll update our density model we'll change its parameters so can we look at how p theta of s and p theta prime of s have changed and recover something that also obeys these equations that essentially looks like
a count it's not a count but it looks like a count and acts like a count so it could be used as a count so this is based on a paper called unifying count based exploration by bellemare et al the idea is this we're going to fit a model p theta of s to all the states that we've seen so far in our data set d then we will take a step i and observe the new state s i then we'll fit a new model p theta prime of s to d with the new state appended to it and then we'll use p theta of s i and p theta prime of s i to estimate a pseudo count which i'm going to call n hat of s and then we'll set r plus to be r plus some bonus determined by n hat of s so this n hat of s is a pseudo count and then we repeat this process so how do you get the pseudo count well we'll use the equations from the previous slide so the equations in the previous slide describe how counts relate to probabilities so we'll say that we want our pseudo counts to also relate to probabilities in the same way so we know p theta of s and we know p theta prime of s because that's what we get by updating our density model we don't know n hat of s and we don't know little n hat however we have two equations and two unknowns so we could actually solve the system of equations and recover n hat of s and little n hat so if we do a bit of algebra here's what the solution looks like n hat of s is equal to little n hat times p theta of s that's kind of the obvious statement and if you manipulate the algebra you can solve for n hat of s and find that it's equal to one minus p theta prime of s divided by p theta prime of s minus p theta of s and that whole thing multiplied by p theta of s it's a little bit of an opaque expression but it's pretty easy to solve for it you basically take that expression for n hat of s substitute it in place of n hat of s in the top two equations so you get two equations that are both expressed in terms of little n hat and then you solve them for
little n hat and you get the solution at the bottom so now every time step you just use these equations to figure out big n hat of s and use that to calculate your bonus and now your bonus will be aware of similarity between states now there are a few technical issues left we have to resolve what kind of bonus to use and what kind of density model to use now there are lots of bonus functions that are used in the literature and they're all basically inspired by methods that are known to be optimal for bandits or for small mdps so for example the classic ucb bonus would be 2 times log little n divided by big n of s and then you take the square root of that whole thing another bonus in this paper by strehl and littman is to just use the square root of 1 over n of s that's a little simpler another one is to use 1 over n of s they're all pretty good they could all work this is the one used by bellemare et al but you could choose whichever one you prefer does this algorithm work well here's the evaluation that's used in the pseudo-counts paper so in this paper they are comparing different methods the important curves to pay attention to are the green curve and the black curve so the black curve is basically q learning the way that you're implementing it right now and the green curve is their method with a 1 over square root of n bonus and you can see here that on some games it makes very little difference like hero on some games it makes a little bit of a difference and on some games like montezuma's revenge it makes an enormous difference where there's almost no progress without it the pictures at the bottom illustrate the rooms that you visited and as i mentioned before in montezuma's revenge the rooms are arranged in a kind of pyramid and you start at the top of the pyramid and you can see that without the bonus you only visit two rooms with the bonus you actually visit more than half of the pyramid so the method is doing
something pretty sensible what kind of model should you use for p theta of s well there are a few choices to be made about this model and the trade-offs are a little different from the trade-offs we typically consider for density modeling and generative modeling usually when we want to train a generative model like a gan or a vae what we care about is being able to sample from that model but for pseudo-counts all you really want is a model that will produce a density score you don't really need to sample from it and you don't even really need those scores to be normalized so as long as the number goes up as the state has higher density you're happy with it so that means that the trade-offs for these density models are sometimes a little different than what we're used to from the world of generative modeling in fact they're often the opposite of the considerations for many popular generative models in the literature like gans which can produce great samples but don't produce densities the particular model that bellemare et al uses is a little peculiar so it's a cts model it's actually a very simple model it just models the probability of each pixel conditioned on its upper and left neighbors so you can think of it as a directed graphical model where there are edges from the upper and left neighbors of each pixel pointing to that pixel it's a little weird but it's very simple and produces these scores it's not a good density model and there are much better choices but that's the one they use in the paper so other papers have used stochastic neural networks compression length and something called ex2 which i'll cover shortly but in general you could use any density model you want so long as it produces good probability scores without caring about whether it produces good samples or not
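as a concrete illustration of the pseudo-count recipe above here is a small numpy sketch the function names are my own and the density model itself is abstracted away as the pair of probabilities the model assigns to s before and after refitting it solves the two count equations for the pseudo count and turns it into an exploration bonus

```python
import numpy as np

def pseudo_count(p_old, p_new):
    """Recover a pseudo-count from density estimates before/after observing s.

    Solves the two-equation system
        p_old = N(s) / n        (density before refitting)
        p_new = (N(s) + 1) / (n + 1)   (density after appending s and refitting)
    for N(s), as in the Bellemare et al. pseudo-count construction.
    """
    n_hat = (1.0 - p_new) / (p_new - p_old)  # total pseudo-count, "little n hat"
    return n_hat * p_old                     # N_hat(s) = n_hat * p_theta(s)

def count_bonus(p_old, p_new, weight=1.0):
    # square-root-of-1-over-N style bonus; the weight must be tuned per task
    return weight / np.sqrt(pseudo_count(p_old, p_new))
```

as a sanity check if the densities actually come from true counts say n of s equals 3 out of n equals 10 visited states then p old is 3 over 10 and p new is 4 over 11 and the formula recovers exactly a pseudo count of 3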
TedEd_American_Government
어떻게_잘못된_뉴스가_확산될_수_있는가_노아_타블린Noah_Tavlin.txt
There's a quote usually attributed to the writer Mark Twain that goes, "A lie can travel halfway around the world while the truth is putting on its shoes." Funny thing about that. There's reason to doubt that Mark Twain ever said this at all, thus, ironically, proving the point. And today, the quote, whoever said it, is truer than ever before. In previous decades, most media with global reach consisted of several major newspapers and networks which had the resources to gather information directly. Outlets like Reuters and the Associated Press that aggregate or rereport stories were relatively rare compared to today. The speed with which information spreads now has created the ideal conditions for a phenomenon known as circular reporting. This is when publication A publishes misinformation, publication B reprints it, and publication A then cites B as the source for the information. It's also considered a form of circular reporting when multiple publications report on the same initial piece of false information, which then appears to another author as having been verified by multiple sources. For instance, the 1998 publication of a single pseudoscientific paper arguing that routine vaccination of children causes autism inspired an entire antivaccination movement, despite the fact that the original paper has repeatedly been discredited by the scientific community. Deliberately unvaccinated children are now contracting contagious diseases that had been virtually eradicated in the United States, with some infections proving fatal. In a slightly less dire example, satirical articles that are formatted to resemble real ones can also be picked up by outlets not in on the joke. For example, a joke article in the reputable British Medical Journal entitled "Energy Expenditure in Adolescents Playing New Generation Computer Games," has been referenced in serious science publications over 400 times. 
User-generated content, such as wikis, is also a common contributor to circular reporting. As more writers come to rely on such pages for quick information, an unverified fact in a wiki page can make its way into a published article that may later be added as a citation for the very same wiki information, making it much harder to debunk. Recent advances in communication technology have had immeasurable benefits in breaking down the barriers between information and people. But our desire for quick answers may overpower the desire to be certain of their validity. And when this bias can be multiplied by billions of people around the world, nearly instantaneously, more caution is in order. Avoiding sensationalist media, searching for criticisms of suspicious information, and tracing the original source of a report can help slow down a lie, giving the truth more time to put on its shoes.
TedEd_American_Government
미국에서_투표권의_투쟁닉키_비멘_그리핀Nicki_Beaman_Griffin.txt
When the next general election rolls around, who will be eligible to show up at the polls and vote for the President of the United States? It's really pretty simple. If you are at least 18 years old, a citizen of the U.S., and a resident of a state, you can vote, assuming, that is, you are not a felon. Seems about right. After all, the United States prides itself on being a democracy, or a government in which the ultimate authority lies with the citizens of the nation. But it was not always this way. In 1789, George Washington won the electoral college with 100% of the vote, but whose vote was it? Probably not yours. Only 6% of the entire United States population was allowed to vote at all. Voting was a right that only white, male property owners were allowed to exercise. By the 1820s and 1830s, the American population was booming from the east coast into the western frontier. Frontier farmers were resilient, self-reliant, and mostly ineligible to vote because they did not own land. As these new areas of the nation became states, they typically left out the property requirement for voting. Leaders such as Andrew Jackson, the United States' first common-man President, promoted what he called universal suffrage. Of course, by universal suffrage, Jackson really meant universal white, male suffrage. All he emphasized was getting rid of the property requirement for voting, not expanding the vote beyond white men. By the 1850s, about 55% of the adult population was eligible to vote in the U.S., much better than 6%, but far from everybody. Then, in 1861, the American Civil War began largely over the issue of slavery and states' rights in the United States. When it was all over, the U.S. ratified the 15th Amendment, which promised that a person's right to vote could not be denied based on race, color, or previous condition as a slave. This meant that black men, newly affirmed as citizens of the U.S., would now be allowed to vote. Of course, laws are far from reality.
Despite the promise of the 15th Amendment, intimidation kept African-Americans from exercising their voting rights. States passed laws that limited the rights of African-Americans to vote, including things like literacy tests, which were rigged so that not even literate African-Americans were allowed to pass, and poll taxes. So, despite the 15th Amendment, by 1892, only about 6% of black men in Mississippi were registered to vote. By 1960, it was only 1%. And, of course, women were still totally out of the national voting picture. It wasn't until 1920 that the women's suffrage movement won their 30-year battle, and the 19th Amendment finally gave women the vote, well, white women. The restrictions on African-Americans, including African-American women, remained. After World War II, many Americans began to question the state of U.S. democracy. How could a nation that fought for freedom and human rights abroad come home and deny suffrage based on race? The modern civil rights movement began in the 1940s with those questions in mind. After years of sacrifice, bloodshed, and pain, the United States passed the Voting Rights Act of 1965, finally eliminating restrictions such as literacy tests and protecting the voting rights promised under the 15th Amendment to the Constitution. Now, any citizen over the age of 21 could vote. All seemed well until the United States went to war. When the Vietnam War called up all men age 18 and over for the draft, many wondered whether it was fair to send men who couldn't vote to war. In 1971, the 26th Amendment to the Constitution made all citizens 18 and older eligible to vote, the last major expansion of voting rights in the United States. Today, the pool of eligible voters in the U.S. is far broader and more inclusive than ever before in U.S. history. But, of course, it's not perfect. There are still active efforts to suppress some groups from voting, and only about 60% of those who can vote do. 
Now that you know all the hard work that went into securing the right to vote, what do you think? Do enough citizens have the right to vote now? And among those who can vote, why don't more of them do it?
TedEd_American_Government
미국_정부의_권력은_어떻게_나뉘어져_있을까벨린다_스터츠만Belinda_Stutzman.txt
Have you ever wondered who has the authority to make laws or punish people who break them? When we think of power in the United States, we usually think of the President, but he does not act alone. In fact, he is only one piece of the power puzzle and for very good reason. When the American Revolution ended in 1783, the United States government was in a state of change. The founding fathers knew that they did not want to establish another country that was ruled by a king, so the discussions were centered on having a strong and fair national government that protected individual freedoms and did not abuse its power. When the new constitution was adopted in 1787, the structure of the infant government of the United States called for three separate branches, each with their own powers, and a system of checks and balances. This would ensure that no one branch would ever become too powerful because the other branches would always be able to check the power of the other two. These branches work together to run the country and set guidelines for us all to live by. The legislative branch is described in Article 1 of the U.S. Constitution. Many people feel that the founding fathers put this branch in the document first because they thought it was the most important. The legislative branch is comprised of 100 U.S. Senators and 435 members in the U.S. House of Representatives. This is better known as the U.S. Congress. Making laws is the primary function of the legislative branch, but it is also responsible for approving federal judges and justices, passing the national budget, and declaring war. Each state gets two Senators and some number of Representatives, depending on how many people live in that state. The executive branch is described in Article 2 of the Constitution. The leaders of this branch of government are the President and Vice President, who are responsible for enforcing the laws that Congress sets forth.
The President works closely with a group of advisors, known as the Cabinet. These appointed helpers assist the President in making important decisions within their area of expertise, such as defense, the treasury, and homeland security. The executive branch also appoints government officials, commands the armed forces, and meets with leaders of other nations. All that combined is a lot of work for a lot of people. In fact, the executive branch employs over 4 million people to get everything done. The third branch of the U.S. government is the judicial branch and is detailed in Article 3. This branch is comprised of all the courts in the land, from the federal district courts to the U.S. Supreme Court. These courts interpret our nation's laws and punish those who break them. The highest court, the Supreme Court, settles disputes among states, hears appeals from state and federal courts, and determines if federal laws are constitutional. There are nine justices on the Supreme Court, and, unlike any other job in our government, Supreme Court justices are appointed for life, or for as long as they want to stay. Our democracy depends on an informed citizenry, so it is our duty to know how it works and what authority each branch of government has over its citizens. Besides voting, chances are that some time in your life you'll be called upon to participate in your government, whether it is to serve on a jury, testify in court, or petition your Congress person to pass or defeat an idea for a law. By knowing the branches, who runs them, and how they work together, you can be involved, informed, and intelligent.
TedEd_American_Government
미국_대통령직_만들기_케네스_데이비스Kenneth_C_Davis.txt
Transcriber: Andrea McDonough Reviewer: Bedirhan Cinar The Oval Office, Inauguration Day, Rose Garden signings, and secret service agents with dark sunglasses and cool wrist radios. For a moment, forget all of it. Toss out everything you know about the President. Now, start over. What would you do if you had to invent the President? That was the question facing the 55 men who got together in secret to draw up the plans for a new American government in the summer of 1787 in Philadelphia, in the same place where the Declaration of Independence had been written eleven years earlier. Declaring independence had been risky business, demanding ferocious courage that put lives and fortunes in jeopardy. But, inventing a new government was no field day either, especially when it's summer and you're in scratchy suits, and the windows are closed because you don't want anybody to hear what you are saying, and the air conditioning doesn't work because it won't be invented for nearly 200 years. And, when you don't agree on things, it gets even hotter. For the framers, the question they argued over most while writing the Constitution and creating three branches of government had to do with the executive department. One man or three to do the job? How long should he serve? What would he really do? Who would pick him? How to get rid of him if he's doing a bad job or he's a crook? And, of course, they all meant him, and he would be a white man. The idea of a woman or an African American, for instance, holding this high office was not a glimmer in their eyes. But the framers knew they needed someone who could take charge, especially in a crisis, like an invasion or a rebellion, or negotiating treaties. Congress was not very good at making such important decisions without debates and delays. But the framers thought America needed a man who was decisive and could act quickly. They called it energy and dispatch. One thing they were dead-set against: there would be no king. 
They had fought a war against a country with a monarch and were afraid that one man with unchecked powers, in charge of an army, could take over the country. Instead, they settled on a president and laid out his powers in Article 2 of the Constitution. But who would choose him? Not the people, they were too liable to be misled as one framer worried. Not the legislature, that would lead to cabal and factions. Got it: electors, wise, informed men who have time to make a good decision. And if they didn't produce a winner, then the decision would go to one of the other branches of government, the Congress. The House of Representatives would step in and make the choice, which they did in 1801 and 1825. In the long, hot summer of 1787, compromises were made to invent the presidency, like counting slaves as 3/5 of a person, giving the President command of the army but Congress the power to declare war, and unlimited four-year terms. Since then, some of those compromises have been amended and the men in office have sometimes been too strong or too weak. But, if you could start from scratch, how would you redesign the Oval Office?
TedEd_American_Government
Why_is_the_US_Constitution_so_hard_to_amend_Peter_Paccone.txt
When it was ratified in 1789, the U.S. Constitution didn't just institute a government by the people. It provided a way for the people to alter the constitution itself. And yet, of the nearly 11,000 amendments proposed in the centuries since, only 27 have succeeded as of 2016. So what is it that makes the Constitution so hard to change? In short, its creators. The founders of the United States were trying to create a unified country from thirteen different colonies, which needed assurance that their agreements couldn't be easily undone. So here's what they decided. For an amendment to even be proposed, it must receive a two-thirds vote of approval in both houses of Congress, or a request from two-thirds of state legislatures to call a national convention, and that's just the first step. To actually change the Constitution, the amendment must be ratified by three-quarters of all states. To do this, each state can either have its legislature vote on the amendment, or it can hold a separate ratification convention with delegates elected by voters. The result of such high thresholds is that, today, the American Constitution is quite static. Most other democracies pass amendments every couple of years. The U.S., on the other hand, hasn't passed one since 1992. At this point, you may wonder how any amendments managed to pass at all. The first ten, known as the Bill of Rights, includes some of America's most well-known freedoms, such as the freedom of speech, and the right to a fair trial. These were passed all at once to resolve some conflicts from the original Constitutional Convention. Years later, the Thirteenth Amendment, which abolished slavery, as well as the Fourteenth and Fifteenth Amendments, only passed after a bloody civil war. Ratifying amendments has also become harder as the country has grown larger and more diverse. The first ever proposed amendment, a formula to assign congressional representatives, was on the verge of ratification in the 1790s. 
However, as more and more states joined the union, the number needed to reach the three-quarter mark increased as well, leaving it unratified to this day. Today, there are many suggested amendments, including outlawing the burning of the flag, limiting congressional terms, or even repealing the Second Amendment. While many enjoy strong support, their likelihood of passing is slim. Americans today are the most politically polarized since the Civil War, making it nearly impossible to reach a broad consensus. In fact, the late Supreme Court Justice Antonin Scalia once calculated that due to America's representative system of government, it could take as little as 2% of the total population to block an amendment. Of course, the simplest solution would be to make the Constitution easier to amend by lowering the thresholds required for proposal and ratification. That, however, would require its own amendment. Instead, historical progress has mainly come from the U.S. Supreme Court, which has expanded its interpretation of existing constitutional laws to keep up with the times. Considering that Supreme Court justices are unelected and serve for life once appointed, this is far from the most democratic option. Interestingly, the founders themselves may have foreseen this problem early on. In a letter to James Madison, Thomas Jefferson wrote that laws should expire every 19 years rather than having to be changed or repealed since every political process is full of obstacles that distort the will of the people. Although he believed that the basic principles of the Constitution would endure, he stressed that the Earth belongs to the living, and not to the dead.
TedEd_American_Government
History_through_the_eyes_of_the_potato_Leo_BearMcGuinness.txt
Baked or fried, boiled or roasted, as chips or fries. At some point in your life, you've probably eaten a potato. Delicious, for sure, but the fact is potatoes have played a much more significant role in our history than just that of the dietary staple we have come to know and love today. Without the potato, our modern civilization might not exist at all. 8,000 years ago in South America, high atop the Andes, ancient Peruvians were the first to cultivate the potato. Containing high levels of proteins and carbohydrates, as well as essential fats, vitamins and minerals, potatoes were the perfect food source to fuel a large Incan working class as they built and farmed their terraced fields, mined the Andes Mountains, and created the sophisticated civilization of the great Incan Empire. But considering how vital they were to the Incan people, when Spanish sailors returning from the Andes first brought potatoes to Europe, the spuds were duds. Europeans simply didn't want to eat what they considered dull and tasteless oddities from a strange new land, too closely related to the deadly nightshade plant belladonna for comfort. So instead of consuming them, they used potatoes as decorative garden plants. More than 200 years would pass before the potato caught on as a major food source throughout Europe, though even then, it was predominantly eaten by the lower classes. However, beginning around 1750, and thanks at least in part to the wide availability of inexpensive and nutritious potatoes, European peasants with greater food security no longer found themselves at the mercy of the regularly occurring grain famines of the time, and so their populations steadily grew. As a result, the British, Dutch and German Empires rose on the backs of the growing groups of farmers, laborers, and soldiers, thus lifting the West to its place of world dominion. However, not all European countries sprouted empires.
After the Irish adopted the potato, their population dramatically increased, as did their dependence on the tuber as a major food staple. But then disaster struck. From 1845 to 1852, potato blight disease ravaged the majority of Ireland's potato crop, leading to the Irish Potato Famine, one of the deadliest famines in world history. Over a million Irish citizens starved to death, and 2 million more left their homes behind. But of course, this wasn't the end for the potato. The crop eventually recovered, and Europe's population, especially the working classes, continued to increase. Aided by the influx of Irish migrants, Europe now had a large, sustainable, and well-fed population who were capable of manning the emerging factories that would bring about our modern world via the Industrial Revolution. So it's almost impossible to imagine a world without the potato. Would the Industrial Revolution ever have happened? Would World War II have been lost by the Allies without this easy-to-grow crop that fed the Allied troops? Would it even have started? When you think about it like this, many major milestones in world history can all be at least partially attributed to the simple spud from the Peruvian hilltops.
TedEd_American_Government
시위로_강력한_변화를_이끌어내는_방법에릭_리우_Eric_Liu.txt
We live in an age of protest. On campuses and public squares, on streets and social media, protesters around the world are challenging the status quo. Protest can thrust issues onto the national or global agenda, it can force out tyrants, it can activate people who have long been on the sidelines of civic life. While protest is often necessary, is it sufficient? Consider the Arab Spring. All across the Middle East, citizen protesters were able to topple dictators. Afterwards, though, the vacuum was too often filled by the most militant and violent. Protest can generate lasting positive change when it's followed by an equally passionate effort to mobilize voters, to cast ballots, to understand government, and to make it more inclusive. So here are three core strategies for peacefully turning awareness into action and protest into durable political power. First, expand the frame of the possible, second, choose a defining fight, and third, find an early win. Let's start with expanding the frame of the possible. How often have you heard in response to a policy idea, "That's just never going to happen"? When you hear someone say that, they're trying to define the boundaries of your civic imagination. The powerful citizen works to push those boundaries outward, to ask what if - what if it were possible? What if enough forms of power - people power, ideas, money, social norms - were aligned to make it happen? Simply asking that question and not taking as given all the givens of conventional politics is the first step in converting protest to power. But this requires concreteness about what it would look like to have, say, a radically smaller national government, or, by contrast, a big single-payer healthcare system, a way to hold corporations accountable for their misdeeds, or, instead, a way to free them from onerous regulations. This brings us to the second strategy, choosing a defining fight. All politics is about contrasts.
Few of us think about civic life in the abstract. We think about things in relief compared to something else. Powerful citizens set the terms of that contrast. This doesn't mean being uncivil. It simply means thinking about a debate you want to have on your terms over an issue that captures the essence of the change you want. This is what the activists pushing for a $15 minimum wage in the U.S. have done. They don't pretend that $15 by itself can fix inequality, but with this ambitious and contentious goal, which they achieved first in Seattle and then beyond, they have forced a bigger debate about economic justice and prosperity. They've expanded the frame of the possible, strategy one, and created a sharp emblematic contrast, strategy two. The third key strategy, then, is to seek and achieve an early win. An early win, even if it's not as ambitious as the ultimate goal, creates momentum, which changes what people think is possible. The solidarity movement, which organized workers in Cold War Poland emerged just this way, first, with local shipyard strikes in 1980 that forced concessions, then, over the next decade, a nationwide effort that ultimately helped topple Poland's communist government. Getting early wins sets in motion a positive feedback loop, a contagion, a belief, a motivation. It requires pressuring policymakers, using the media to change narrative, making arguments in public, persuading skeptical neighbors one by one by one. None of this is as sexy as a protest, but this is the history of the U.S. Civil Rights Movement, of Indian Independence, of Czech self-determination. Not the single sudden triumph, but the long, slow slog. You don't have to be anyone special to be part of this grind, to expand the frame of the possible, to pick a defining fight, or to secure an early win. You just have to be a participant and to live like a citizen. The spirit of protest is powerful. So is showing up after the protest. 
You can be the co-creator of what comes next.
TedEd_American_Government
왜_미국인들은_화요일에_투표를_할까.txt
I want to tell you all about a piece of American history that is so secret, that nobody has done anything about it for 167 years, until right now. And the way that we're going to uncover this vestigial organ of America past is by asking this question: Why? As we all know -- (Laughter) we are in the middle of another presidential election, hotly contested, as you can see. (Laughter) But what you may not know is that American voter turnout ranks near the bottom of all countries in the entire world, 138th of 172 nations. This is the world's most famous democracy. So ... Why do we vote on Tuesday? Does anybody know? And as a matter of fact, Michigan and Arizona are voting today. Here's the answer: Absolutely no good reason whatsoever. (Laughter) I'm not joking. You will not find the answer in the Declaration of Independence, nor will you find it in the Constitution. It is just a stupid law from 1845. (Laughter) In 1845, Americans traveled by horse and buggy. As did I, evidently. It took a day to get to the county seat to vote, a day to get back, and you couldn't travel on the Sabbath, so, Tuesday it was. I don't often travel by horse and buggy, I would imagine most of you don't, so when I found out about this, I was fascinated. I linked up with a group called, what else -- "Why Tuesday?" to go and ask our nation's most prominent elected leaders if they knew the answer to the question, "Why do we vote on Tuesday?" (Video) Rick Santorum: Anybody knows? OK, I'm going to be stumped on this. Anybody knows why we vote on Tuesdays? Jacob Soboroff: Do you happen to know? Ron Paul: On Tuesdays? JS: The day after the first Monday in November. RP: I don't know how that originated. JS: Do you know why we do vote on Tuesday? Newt Gingrich: No. Dick Lugar: No, I don't. (Laughter) Dianne Feinstein: I don't. Darrell Issa: No. John Kerry: In truth, really, I'm not sure why. JS: OK, thanks very much. 
(Laughter) JS: These are people that live for election day, yet they don't know why we vote on that very day. (Laughter) Chris Rock said, "They don't want you to vote. If they did, we wouldn't vote on a Tuesday in November. Have you ever thrown a party on a Tuesday? (Laughter) No, of course not. Nobody would show up." (Laughter) Here's the cool part. Because we asked this question, "Why Tuesday?" there is now this bill, the Weekend Voting Act in the Congress of the United States of America. It would move election day from Tuesday to the weekend, so that -- duh -- more people can vote. (Applause) It has only taken 167 years, but finally, we are on the verge of changing American history. Thank you very much. (Applause) Thanks a lot. (Applause)
TedEd_American_Government
미국_헌법의_탄생_쥬디_월턴_Judy_Walton.txt
Transcriber: tom carter Reviewer: Bedirhan Cinar It is the spring of 1787. The Revolutionary War has been over for only six years, and the young United States is still struggling in its infancy. Uprisings, boundary disputes and the lack of a common vision all plague the newborn country. In an effort to steady this precarious ship, the Confederation Congress calls on states to send delegates to the grand Convention, to begin on May 14 in Philadelphia. The delegates must draft revisions to the Articles of Confederation, which would then be considered by the Congress and approved by the states. Under the terms of the Articles, all 13 states had to agree to any changes. Since the purpose of the Convention is just to make recommendations, not everyone is excited about attending, and frankly, some think it's a waste of time. As men from different parts of the country begin to travel down dusty, rugged roads on the way to Philadelphia, not all states send delegates. In fact, Rhode Island never even shows up. On May 14th, only 8 delegates -- not states, but individual delegates -- are present, so they wait. Finally, on May 25th, the necessary quorum of seven states is achieved. In all, 55 delegates arrive in Philadelphia over the course of the Convention. They are all white males, property owners, and the average age is about 44. Some are slaveholders, some had signed the Declaration of Independence, [James Madison, Roger Sherman] and almost all are well-educated. [Benjamin Franklin] Picture the delegates, James Madison and George Washington among them, sitting in Independence Hall in hot, humid Philadelphia. They're all wearing the dress of the day: frock coats, high collars and thick pants. They vote to keep their discussions secret to encourage honest debate. But that means the windows are closed, and there is no air conditioning in 1787, not even an electric fan. And they'll sit in that sweltering heat, in those heavy clothes, for three months.
Shockingly, they all keep their vow of secrecy. That could never happen today, not even for an hour-long meeting. Someone would share "James Madison thinks he's so smart. Keyword: articles are dead" via social media, and the whole thing would be a disaster. But in 1787, there are no leaks. Not even a drip that hints at what they are doing. And what they are doing is nothing short of overthrowing the very government that sent them there. Within a few days, with only a seven-state quorum, and only six of those states agreeing, a handful of men change the course of history. They vote to get rid of the Articles of Confederation, and write a new, more nationalistic document that becomes our Constitution. The risk is immense. Everyone on the outside assumes they were working on recommended revisions to the Articles. It's an incredible gamble, and even when the Convention presents the signed Constitution on September 17th, not all delegates endorse it. The country will argue and debate for two more years before the document is adopted by the required nine out of 13 states. But instead of punishing them for their deception, today we celebrate the wisdom and vision of those men in Philadelphia.
TedEd_American_Government
The_oddities_of_the_first_American_election_Kenneth_C_Davis.txt
Transcriber: tom carter Reviewer: Bedirhan Cinar Lawn signs sprouting everywhere. Round-the-clock ads on radio and television. The phone rings. It's a robo-call from the president, or his opponent, asking for your money, and your vote. And while you're at it, watch their YouTube videos and like them on Facebook. Election time. We all know the look and feel of modern campaigns. But what was it like in the early days of the Republic, when, say, George Washington ran for office? Well, in fact, he didn't run. When Washington became the first president in 1789, there were no political parties, no conventions or primaries, no campaign, no election season. Not really any candidates. Even the year was odd. Literally. 1789 was the only presidential election ever held in an odd year. After the framers invented the constitution and the presidency 225 years ago, the country set about the business of choosing its first executive. Agreeing with Ben Franklin, many people thought "The first man at the helm will be a good one," and by that, Franklin meant George Washington. Greatest hero of the Revolution, Washington presided over the convention that created the constitution, rarely speaking. He never discussed the job of president, or of wanting it. And when the first presidential election took place, it was a crazy-quilt affair, with many hands stitching the pattern. Under the new constitution, each state was given a number of electors, who would cast a vote for two names. The man with the most votes would be president, the second-place finisher was vice president. Ah, but who picked the electors? That was left up to the states. Six of them let the people decide, or at least white men over 21 who owned property. In New Jersey, some women voted, a right later taken away. But in other states, the legislature picked the electors. At that time, many people thought democracy was one step away from mob rule and a decision this important should be left to wiser men.
These electors then voted for president. All the states had to do was get their votes in on time. But there were glitches. Only 10 of the 13 states voted. Rhode Island and North Carolina hadn't ratified the constitution and couldn't vote. New York missed the deadline for naming its electors, and also was not counted. When the votes were tallied, it was unanimous. George Washington won easily. John Adams trailed far behind, finishing second, and became the vice president. Told of his victory, George Washington was not surprised. At Mount Vernon, his bags were already packed. He moved to New York City, the nation's temporary capital, and he would have to figure out just what a president was supposed to do. Since that first election, American democracy and elections have come a long way. The constitution has been changed to open up voting to more people: black men, women, Native Americans, and eighteen-year-olds included. Getting that basic right extended to all those people has been a long, hard struggle. So when you think you can't stand any more of those lawn signs, and TV ads, just remember: the right to vote wasn't always for everyone, and that's a piece of history worth knowing.
TedEd_American_Government
3분_권리장전_안내_벨린다_스터츠먼_Belinda_Stutzman.txt
Transcriber: tom carter Reviewer: Bedirhan Cinar The first 10 amendments to the U.S. Constitution -- also known as the Bill of Rights -- were ratified or passed over 200 years ago. But even though they're a bit, well, old, these first 10 amendments are still the most debated and discussed section of our Constitution today. So, can you remember what they are? Let's take a look. The First Amendment is the freedom of speech, press, religion, assembly and petition. This may be the most revered of the amendments. The First Amendment protects our rights to say and write our opinions, worship how we please, assemble together peacefully and petition our government, if we feel the need. The Second Amendment is the right to bear arms. The original intent of the Second Amendment was to protect colonists from the invading British soldiers, but it now guarantees that you have the right to own a gun to defend yourself and your property. The Third Amendment is called the "Quartering" amendment. It was written in response to the British occupation, and as a result of the colonists having to house -- or quarter -- soldiers in their homes during the American Revolution. Because of this amendment, our government can never force us to house soldiers in our home. The Fourth Amendment is the right to search and seizure. The police can't come into our home without a search warrant and take our personal property. Today, many concerns have arisen about our rights to privacy in technology. For example, can the government track your location with your smartphone, or can social media postings such as on Facebook and Twitter be used without a warrant? On to the Fifth: It's all about due process. You've probably heard the phrase "I plead the Fifth" in movies or on TV. They're talking about the Fifth Amendment, which says that you don't have to take the witness stand against yourself if you may end up incriminating yourself. OK, we're halfway done. 
The Sixth and Seventh Amendments are about how the legal system works. If you're accused of a crime, you have the right to a speedy public trial and an impartial jury. You also have the right to a lawyer, and the right to take the stand if you choose. This is important because it will prevent the accused from sitting in prison forever and insists that the prosecution proceed without undue delay. The Seventh says you have the right to a jury trial, where 12 impartial peers decide your innocence or guilt in the courtroom, as opposed to a judge doing it all alone. The Eighth Amendment prohibits cruel and unusual punishment. Is the death penalty cruel? Is it unusual? It's hard for Americans to agree on the definitions of cruel and unusual. The Ninth and Tenth Amendments are called the non-rights amendments. They say that the rights not listed in the Bill of Rights are retained by the people in the states. We have other rights that are not listed in the Constitution, and the states have the right to make their own policies, like instituting state taxes. So now you know all 10 amendments. Can you remember them all? If not, remember this: the Bill of Rights is a crucial piece of American history, and though society has undergone many changes these past 200 and some years, the interpretation and application of these amendments are as vital today as they were when they were written.
Brief_History_of_Things_TED_ED
What_makes_the_Great_Wall_of_China_so_extraordinary_Megan_Campisi_and_PenPen_Chen.txt
A 13,000 mile dragon of earth and stone winds its way through the countryside of China with a history almost as long and serpentine as the structure. The Great Wall began as multiple walls of rammed earth built by individual feudal states during the Chunqiu period to protect against nomadic raiders north of China and each other. When Emperor Qin Shi Huang unified the states in 221 BCE, the Tibetan Plateau and Pacific Ocean became natural barriers, but the mountains in the north remained vulnerable to Mongol, Turkish, and Xiongnu invasions. To defend against them, the Emperor expanded the small walls built by his predecessors, connecting some and fortifying others. As the structures grew from Lintao in the west to Liaodong in the east, they collectively became known as The Long Wall. To accomplish this task, the Emperor enlisted soldiers and commoners, not always voluntarily. Of the hundreds of thousands of builders recorded during the Qin Dynasty, many were forcibly conscripted peasants and others were criminals serving out sentences. Under the Han Dynasty, the wall grew longer still, reaching 3700 miles, and spanning from Dunhuang to the Bohai Sea. Forced labor continued under the Han Emperor Han-Wudi, and the wall's reputation grew into a notorious place of suffering. Poems and legends of the time told of laborers buried in nearby mass graves, or even within the wall itself. And while no human remains have been found inside, grave pits do indicate that many workers died from accidents, hunger and exhaustion. The wall was formidable but not invincible. Both Genghis and his grandson Kublai Khan managed to surmount the wall during the Mongol invasion of the 13th Century. After the Ming dynasty gained control in 1368, they began to refortify and further consolidate the wall using bricks and stones from local kilns. Averaging 23 feet high and 21 feet wide, the wall's 5,500 miles were punctuated by watchtowers.
When raiders were sighted, fire and smoke signals traveled between towers until reinforcements arrived. Small openings along the wall let archers fire on invaders, while larger ones were used to drop stones and more. But even this new and improved wall was not enough. In 1644, northern Manchu clans overthrew the Ming to establish the Qing dynasty, incorporating Mongolia as well. Thus, for the second time, China was ruled by the very people the wall had tried to keep out. With the empire's borders now extending beyond the Great Wall, the fortifications lost their purpose. And without regular reinforcement, the wall fell into disrepair, rammed earth eroded, while brick and stone were plundered for building materials. But its job wasn't finished. During World War II, China used sections for defense against Japanese invasion, and some parts are still rumored to be used for military training. But the Wall's main purpose today is cultural. As one of the largest man-made structures on Earth, it was granted UNESCO World Heritage Status in 1987. Originally built to keep people out of China, the Great Wall now welcomes millions of visitors each year. In fact, the influx of tourists has caused the wall to deteriorate, leading the Chinese government to launch preservation initiatives. It's also often acclaimed as the only man-made structure visible from space. Unfortunately, that's not at all true. In low Earth orbit, all sorts of structures, like bridges, highways and airports are visible, and the Great Wall is only barely discernible. From the moon, it doesn't stand a chance. But regardless, it's the Earth we should be studying it from because new sections are still discovered every few years, branching off from the main body and expanding this remarkable monument to human achievement.
Brief_History_of_Things_TED_ED
History_through_the_eyes_of_a_chicken_Chris_A_Kniesly.txt
The annals of Ancient Egyptian king Thutmose III described a marvelous foreign bird that “gives birth daily.” Zoroastrians viewed them as spirits whose cries told of the cosmic struggle between darkness and light. Romans brought them on their military campaigns to foretell the success of future battles. And today, this bird still occupies an important, though much less honorable position – on our dinner plates. The modern chicken is descended primarily from the Red Junglefowl, and partially from three other closely related species, all native to India and Southeast Asia. The region’s bamboo plants produce massive amounts of fruit just once every few decades. Junglefowls’ ability to lay eggs daily may have evolved to take advantage of these rare feasts, increasing their population when food was abundant. This was something humans could exploit on a consistent basis, and the birds’ weak flight capabilities and limited need for space made them easy to capture and contain. The earliest domesticated chickens, dating at least back to 7,000 years ago, weren’t bred for food, but for something considered less savory today. The aggressiveness of breeding males, armed with natural leg spurs, made cockfighting a popular entertainment. By the second millennium BCE, chickens had spread from the Indus Valley to China and the Middle East to occupy royal menageries and to be used in religious rituals. But it was in Egypt where the next chapter in the bird’s history began. When a hen naturally incubates eggs, she will stop laying new ones and sit on a “clutch” of 6 or more eggs for 21 days. By the middle of the 1st millennium BCE, the Egyptians had learned to artificially incubate chicken eggs by placing them in baskets over hot ashes. That freed up hens to continue laying daily, and what had been a royal delicacy or religious offering became a common meal. 
Around the same time as Egyptians were incubating eggs, Phoenician merchants introduced chickens to Europe, where they quickly became an essential part of European livestock. However, for a long time, the chicken’s revered status continued to exist alongside its culinary one. The Ancient Greeks used fighting roosters as inspirational examples for young soldiers. The Romans consulted chickens as oracles. And as late as the 7th Century, the chicken was considered a symbol for Christianity. Over the next few centuries, chickens accompanied humans wherever they went, spreading throughout the world through trade, conquest, and colonization. After the Opium Wars, Chinese breeds were brought to England and crossed with local chickens. This gave rise to a phenomenon called “Hen Fever” or “The Fancy”, with farmers all over Europe striving to breed new varieties with particular combinations of traits. This trend also caught the attention of a certain Charles Darwin, who wondered if a similar selective breeding process occurred in nature. Darwin would observe hundreds of chickens while finalizing his historic work introducing the theory of Evolution. But the chicken’s greatest contribution to science was yet to come. In the early 20th century, a trio of British scientists conducted extensive crossbreeding of chickens, building on Gregor Mendel’s studies of genetic inheritance. With their high genetic diversity, many distinct traits, and only 7 months between generations, chickens were the perfect subject. This work resulted in the famous Punnett Square, used to show the genotypes that would result from breeding a given pairing. Since then, numerous breeding initiatives have made chickens bigger and meatier, and allowed them to lay more eggs than ever. Meanwhile, chicken production has shifted to an industrial, factory-like model, with birds raised in spaces with a footprint no larger than a sheet of paper. 
And while there’s been a shift towards free-range farming due to animal rights and environmental concerns, most of the world’s more than 22 billion chickens today are factory farmed. From gladiators and gifts to the gods, to traveling companions and research subjects, chickens have played many roles over the centuries. And though they may not have come before the proverbial egg, chickens’ fascinating history tells us a great deal about our own.
Brief_History_of_Things_TED_ED
The_history_of_the_world_according_to_corn_Chris_A_Kniesly.txt
Corn currently accounts for more than one tenth of our global crop production. The United States alone has enough cornfields to cover Germany. But while other crops we grow come in a range of varieties, over 99% of cultivated corn is the exact same type: Yellow Dent #2. This means that humans grow more Yellow Dent #2 than any other plant on the planet. So how did this single variety of this single plant become the biggest success story in agricultural history? Nearly 9,000 years ago, corn, also called maize, was first domesticated from teosinte, a grass native to Mesoamerica. Teosinte’s rock-hard seeds were barely edible, but its fibrous husk could be turned into a versatile material. Over the next 4,700 years, farmers bred the plant into a staple crop, with larger cobs and edible kernels. As maize spread throughout the Americas, it took on an important role, with multiple indigenous societies revering a “Corn Mother” as the goddess who created agriculture. When Europeans first arrived in America, they shunned the strange plant. Many even believed it was the source of physical and cultural differences between them and the Mesoamericans. However, their attempts to cultivate European crops in American soil quickly failed, and the settlers were forced to expand their diet. Finding the crop to their taste, maize soon crossed the Atlantic, where its ability to grow in diverse climates made it a popular grain in many European countries. But the newly established United States was still the corn capital of the world. In the early 1800’s, different regions across the country produced strains of varying size and taste. In the 1850’s, however, these unique varieties proved difficult for train operators to package, and for traders to sell. Trade boards in rail hubs like Chicago encouraged corn farmers to breed one standardized crop. This dream would finally be realized at 1893’s World’s Fair, where James Reid’s yellow dent corn won the Blue Ribbon. 
Over the next 50 years, yellow dent corn swept the nation. Following the technological developments of World War II, mechanized harvesters became widely available. This meant a batch of corn that previously took a full day to harvest by hand could now be collected in just 5 minutes. Another wartime technology, the chemical explosive ammonium nitrate, also found new life on the farm. With this new synthetic fertilizer, farmers could plant dense fields of corn year after year, without the need to rotate their crops and restore nitrogen to the soil. While these advances made corn an attractive crop to American farmers, US agricultural policy limited the amount farmers could grow to ensure high sale prices. But in 1972, President Richard Nixon removed these limitations while negotiating massive grain sales to the Soviet Union. With this new trade deal and WWII technology, corn production exploded into a global phenomenon. These mountains of maize inspired numerous corn concoctions. Cornstarch could be used as a thickening agent for everything from gasoline to glue or processed into a low-cost sweetener known as High-Fructose Corn Syrup. Maize quickly became one of the cheapest animal feeds worldwide. This allowed for inexpensive meat production, which in turn increased the demand for meat and corn feed. Today, humans eat only 40% of all cultivated corn, while the remaining 60% supports consumer good industries worldwide. Yet the spread of this wonder-crop has come at a price. Global water sources are polluted by excess ammonium nitrate from cornfields. Corn accounts for a large portion of agriculture-related carbon emissions, partly due to the increased meat production it enables. The use of high fructose corn syrup may be a contributor to diabetes and obesity. And the rise of monoculture farming has left our food supply dangerously vulnerable to pests and pathogens— a single virus could infect the world’s supply of this ubiquitous crop. 
Corn has gone from a bushy grass to an essential element of the world’s industries. But only time will tell if it has led us into a maze of unsustainability.
Brief_History_of_Things_TED_ED
A_brief_history_of_dogs_David_Ian_Howe.txt
Since their emergence over 200,000 years ago, modern humans have established homes and communities all over the planet. But they didn’t do it alone. Whatever corner of the globe you find homo sapiens in today, you’re likely to find another species nearby: Canis lupus familiaris. Whether they’re herding, hunting, sledding, or slouching the sheer variety of domestic dogs is staggering. But what makes the story of man’s best friend so surprising is that they all evolved from a creature often seen as one of our oldest rivals: Canis lupus, or the gray wolf. When our Paleolithic ancestors first settled Eurasia roughly 100,000 years ago, wolves were one of their main rivals at the top of the food chain. Able to exert over 300 lbs. of pressure in one bone-crushing bite and sniff out prey more than a mile away, these formidable predators didn’t have much competition. Much like human hunter-gatherers, they lived and hunted in complex social groups consisting of a few nuclear families, and used their social skills to cooperatively take down larger creatures. Using these group tactics, they operated as effective persistence hunters, relying not on outrunning their prey, but pursuing it to the point of exhaustion. But when pitted against the similar strengths of their invasive new neighbors, wolves found themselves at a crossroads. For most packs, these bourgeoning bipeds represented a serious threat to their territory. But for some wolves, especially those without a pack, human camps offered new opportunities. Wolves that showed less aggression towards humans could come closer to their encampments, feeding on leftovers. And as these more docile scavengers outlasted their aggressive brethren, their genetic traits were passed on, gradually breeding tamer wolves in areas near human populations. Over time humans found a multitude of uses for these docile wolves. They helped to track and hunt prey, and might have served as sentinels to guard camps and warn of approaching enemies. 
Their similar social structure made it easy to integrate with human families and learn to understand their commands. Eventually they moved from the fringes of our communities into our homes, becoming humanity’s first domesticated animal. The earliest of these Proto-Dogs or Wolf-Dogs, seem to have appeared around 33,000 years ago, and would not have looked all that different from their wild cousins. They were primarily distinguished by their smaller size and a shorter snout full of comparatively smaller teeth. But as human cultures and occupations became more diverse and specialized, so did our friends. Short stocky dogs to herd livestock by nipping their heels; elongated dogs to flush badgers and foxes out of burrows; thin and sleek dogs for racing; and large, muscular dogs for guard duty. With the emergence of kennel clubs and dog shows during England’s Victorian era, these dog types were standardized into breeds, with many new ones bred purely for appearance. Sadly, while all dog breeds are the product of artificial selection, some are healthier than others. Many of these aesthetic characteristics come with congenital health problems, such as difficulty breathing or being prone to spinal injuries. Humanity’s longest experiment in controlled evolution has had other side effects as well. Generations of selection for tameness have favored more juvenile and submissive traits that were pleasing to humans. This phenomenon of selecting traits associated with youth is known as neoteny, and can be seen in many domestic animals. Thousands of years of co-evolution may even have bonded us chemically. Not only can canines understand our emotions and body language, but when dogs and humans interact, both our bodies release oxytocin; a hormone commonly associated with feelings of love and protectiveness. It might be difficult to fathom how every Pomeranian, Chihuahua, and Poodle are descended from fierce wolves. 
But the diversity of breeds today is the result of a relationship that precedes cities, agriculture, and even the disappearance of our Neanderthal cousins. And it’s heartening to know that given enough time, even our most dangerous rivals can become our fiercest friends.
Brief_History_of_Things_TED_ED
차의_역사_슈난_텡_Shunan_Teng.txt
During a long day spent roaming the forest in search of edible grains and herbs, the weary divine farmer Shennong accidentally poisoned himself 72 times. But before the poisons could end his life, a leaf drifted into his mouth. He chewed on it and it revived him, and that is how we discovered tea. Or so an ancient legend goes at least. Tea doesn't actually cure poisonings, but the story of Shennong, the mythical Chinese inventor of agriculture, highlights tea's importance to ancient China. Archaeological evidence suggests tea was first cultivated there as early as 6,000 years ago, or 1,500 years before the pharaohs built the Great Pyramids of Giza. That original Chinese tea plant is the same type that's grown around the world today, yet it was originally consumed very differently. It was eaten as a vegetable or cooked with grain porridge. Tea only shifted from food to drink 1,500 years ago when people realized that a combination of heat and moisture could create a complex and varied taste out of the leafy green. After hundreds of years of variations to the preparation method, the standard became to heat tea, pack it into portable cakes, grind it into powder, mix with hot water, and create a beverage called muo cha, or matcha. Matcha became so popular that a distinct Chinese tea culture emerged. Tea was the subject of books and poetry, the favorite drink of emperors, and a medium for artists. They would draw extravagant pictures in the foam of the tea, very much like the espresso art you might see in coffee shops today. In the 9th century during the Tang Dynasty, a Japanese monk brought the first tea plant to Japan. The Japanese eventually developed their own unique rituals around tea, leading to the creation of the Japanese tea ceremony. And in the 14th century during the Ming Dynasty, the Chinese emperor shifted the standard from tea pressed into cakes to loose leaf tea. 
At that point, China still held a virtual monopoly on the world's tea trees, making tea one of three essential Chinese export goods, along with porcelain and silk. This gave China a great deal of power and economic influence as tea drinking spread around the world. That spread began in earnest around the early 1600s when Dutch traders brought tea to Europe in large quantities. Many credit Queen Catherine of Braganza, a Portuguese noble woman, for making tea popular with the English aristocracy when she married King Charles II in 1661. At the time, Great Britain was in the midst of expanding its colonial influence and becoming the new dominant world power. And as Great Britain grew, interest in tea spread around the world. By 1700, tea in Europe sold for ten times the price of coffee and the plant was still only grown in China. The tea trade was so lucrative that the world's fastest sailboat, the clipper ship, was born out of intense competition between Western trading companies. All were racing to bring their tea back to Europe first to maximize their profits. At first, Britain paid for all this Chinese tea with silver. When that proved too expensive, they suggested trading tea for another substance, opium. This triggered a public health problem within China as people became addicted to the drug. Then in 1839, a Chinese official ordered his men to destroy massive British shipments of opium as a statement against Britain's influence over China. This act triggered the First Opium War between the two nations. Fighting raged up and down the Chinese coast until 1842 when the defeated Qing Dynasty ceded the port of Hong Kong to the British and resumed trading on unfavorable terms. The war weakened China's global standing for over a century. The British East India company also wanted to be able to grow tea themselves and further control the market. So they commissioned botanist Robert Fortune to steal tea from China in a covert operation. 
He disguised himself and took a perilous journey through China's mountainous tea regions, eventually smuggling tea trees and experienced tea workers into Darjeeling, India. From there, the plant spread further still, helping drive tea's rapid growth as an everyday commodity. Today, tea is the second most consumed beverage in the world after water, and from sugary Turkish Rize tea, to salty Tibetan butter tea, there are almost as many ways of preparing the beverage as there are cultures on the globe.
Brief_History_of_Things_TED_ED
How_the_worlds_first_metro_system_was_built_Christian_Wolmar.txt
It was the dawn of 1863, and London’s not-yet-opened subway system, the first of its kind in the world, had the city in an uproar. Digging a hole under the city and putting a railroad in it seemed the stuff of dreams. Pub drinkers scoffed at the idea and a local minister accused the railway company of trying to break into hell. Most people simply thought the project, which cost more than 100 million dollars in today’s money, would never work. But it did. On January 10, 1863, 30,000 people ventured underground to travel on the world’s first subway on a four-mile stretch of line in London. After three years of construction and a few setbacks, the Metropolitan Railway was ready for business. The city’s officials were much relieved. They’d been desperate to find a way to reduce the terrible congestion on the roads. London, at the time the world’s largest and most prosperous city, was in a permanent state of gridlock, with carts, costermongers, cows, and commuters jamming the roads. It’d been a Victorian visionary, Charles Pearson, who first thought of putting railways under the ground. He’d lobbied for underground trains throughout the 1840s, but opponents thought the idea was impractical since the railroads at the time only had short tunnels under hills. How could you get a railway through the center of a city? The answer was a simple system called "cut and cover." Workers had to dig a huge trench, construct a tunnel out of brick archways, and then refill the hole over the newly built tunnel. Because this was disruptive and required the demolition of buildings above the tunnels, most of the line went under existing roads. Of course, there were accidents. On one occasion, a heavy rainstorm flooded the nearby sewers and burst through the excavation, delaying the project by several months. But as soon as the Metropolitan Railway opened, Londoners rushed in to ride the new trains. The Metropolitan quickly became a vital part of London’s transport system. 
Additional lines were soon built, and new suburbs grew around the stations. Big department stores opened next to the railroad, and the railway company even created attractions, like a 30-story Ferris wheel in Earls Court to bring in tourists by train. Within 30 years, London’s subway system covered 80 kilometers, with lines in the center of town running in tunnels, and suburban trains operating on the surface, often on embankments. But London was still growing, and everyone wanted to be connected to the system. By the late 1880s, the city had become too dense with buildings, sewers, and electric cables for the "cut and cover" technique, so a new system had to be devised. Using a machine called the Greathead Shield, a team of just 12 workers could bore through the earth, carving deep underground tunnels through the London clay. These new lines, called tubes, were at varying depths, but usually about 25 meters deeper than the "cut and cover" lines. This meant their construction didn’t disturb the surface, and it was possible to dig under buildings. The first tube line, the City and South London, opened in 1890 and proved so successful that half a dozen more lines were built in the next 20 years. This clever new technology was even used to burrow several lines under London’s river, the Thames. By the early 20th century, Budapest, Berlin, Paris, and New York had all built subways of their own. And today, with more than 160 cities in 55 countries using underground rails to combat congestion, we can thank Charles Pearson and the Metropolitan Railway for getting us started on the right track.
Brief_History_of_Things_TED_ED
Where_did_English_come_from_Claire_Bowern.txt
When we talk about English, we often think of it as a single language but what do the dialects spoken in dozens of countries around the world have in common with each other, or with the writings of Chaucer? And how are any of them related to the strange words in Beowulf? The answer is that like most languages, English has evolved through generations of speakers, undergoing major changes over time. By undoing these changes, we can trace the language from the present day back to its ancient roots. While modern English shares many similar words with Latin-derived romance languages, like French and Spanish, most of those words were not originally part of it. Instead, they started coming into the language with the Norman invasion of England in 1066. When the French-speaking Normans conquered England and became its ruling class, they brought their speech with them, adding a massive amount of French and Latin vocabulary to the English language previously spoken there. Today, we call that language Old English. This is the language of Beowulf. It probably doesn't look very familiar, but it might be more recognizable if you know some German. That's because Old English belongs to the Germanic language family, first brought to the British Isles in the 5th and 6th centuries by the Angles, Saxons, and Jutes. The Germanic dialects they spoke would become known as Anglo-Saxon. Viking invaders in the 8th to 11th centuries added more borrowings from Old Norse into the mix. It may be hard to see the roots of modern English underneath all the words borrowed from French, Latin, Old Norse and other languages. But comparative linguistics can help us by focusing on grammatical structure, patterns of sound changes, and certain core vocabulary. For example, after the 6th century, German words starting with "p," systematically shifted to a "pf" sound while their Old English counterparts kept the "p" unchanged. 
In another split, words that have "sk" sounds in Swedish developed an "sh" sound in English. There are still some English words with "sk," like "skirt," and "skull," but they're direct borrowings from Old Norse that came after the "sk" to "sh" shift. These examples show us that just as the various Romance languages descended from Latin, English, Swedish, German, and many other languages descended from their own common ancestor known as Proto-Germanic spoken around 500 B.C.E. Because this historical language was never written down, we can only reconstruct it by comparing its descendants, which is possible thanks to the consistency of the changes. We can even use the same process to go back one step further, and trace the origins of Proto-Germanic to a language called Proto-Indo-European, spoken about 6000 years ago on the Pontic steppe in modern day Ukraine and Russia. This is the reconstructed ancestor of the Indo-European family that includes nearly all languages historically spoken in Europe, as well as large parts of Southern and Western Asia. And though it requires a bit more work, we can find the same systematic similarities, or correspondences, between related words in different Indo-European branches. Comparing English with Latin, we see that English has "t" where Latin has "d", and "f" where latin has "p" at the start of words. Some of English's more distant relatives include Hindi, Persian and the Celtic languages it displaced in what is now Britain. Proto-Indo-European itself descended from an even more ancient language, but unfortunately, this is as far back as historical and archeological evidence will allow us to go. Many mysteries remain just out of reach, such as whether there might be a link between Indo-European and other major language families, and the nature of the languages spoken in Europe prior to its arrival. 
But the amazing fact remains that nearly 3 billion people around the world, many of whom cannot understand each other, are nevertheless speaking the same words shaped by 6000 years of history.
Brief_History_of_Things_TED_ED
A_brief_history_of_alcohol_Rod_Phillips.txt
This chimpanzee stumbles across a windfall of overripe plums. Many of them have split open, drawing him to their intoxicating fruity odor. He gorges himself and begins to experience some… strange effects. This unwitting ape has stumbled on a process that humans will eventually harness to create beer, wine, and other alcoholic drinks. The sugars in overripe fruit attract microscopic organisms known as yeasts. As the yeasts feed on the fruit sugars they produce a compound called ethanol— the type of alcohol in alcoholic beverages. This process is called fermentation. Nobody knows exactly when humans began to create fermented beverages. The earliest known evidence comes from 7,000 BCE in China, where residue in clay pots has revealed that people were making an alcoholic beverage from fermented rice, millet, grapes, and honey. Within a few thousand years, cultures all over the world were fermenting their own drinks. Ancient Mesopotamians and Egyptians made beer throughout the year from stored cereal grains. This beer was available to all social classes, and workers even received it in their daily rations. They also made wine, but because the climate wasn’t ideal for growing grapes, it was a rare and expensive delicacy. By contrast, in Greece and Rome, where grapes grew more easily, wine was as readily available as beer was in Egypt and Mesopotamia. Because yeasts will ferment basically any plant sugars, ancient peoples made alcohol from whatever crops and plants grew where they lived. In South America, people made chicha from grains, sometimes adding hallucinogenic herbs. In what’s now Mexico, pulque, made from cactus sap, was the drink of choice, while East Africans made banana and palm beer. And in the area that’s now Japan, people made sake from rice. Almost every region of the globe had its own fermented drinks. 
As alcohol consumption became part of everyday life, some authorities latched onto effects they perceived as positive— Greek physicians considered wine to be good for health, and poets testified to its creative qualities. Others were more concerned about alcohol’s potential for abuse. Greek philosophers promoted temperance. Early Jewish and Christian writers in Europe integrated wine into rituals but considered excessive intoxication a sin. And in the middle east, Africa, and Spain, an Islamic rule against praying while drunk gradually solidified into a general ban on alcohol. Ancient fermented beverages had relatively low alcohol content. At about 13% alcohol, the by-products wild yeasts generate during fermentation become toxic and kill them. When the yeasts die, fermentation stops and the alcohol content levels off. So for thousands of years, alcohol content was limited. That changed with the invention of a process called distillation. 9th century Arabic writings describe boiling fermented liquids to vaporize the alcohol in them. Alcohol boils at a lower temperature than water, so it vaporizes first. Capture this vapor, cool it down, and what’s left is liquid alcohol much more concentrated than any fermented beverage. At first, these stronger spirits were used for medicinal purposes. Then, spirits became an important trade commodity because, unlike beer and wine, they didn’t spoil. Rum made from sugar harvested in European colonies in the Caribbean became a staple for sailors and was traded to North America. Europeans brought brandy and gin to Africa and traded it for enslaved people, land, and goods like palm oil and rubber. Spirits became a form of money in these regions. During the Age of Exploration, spirits played a crucial role in long distance sea voyages. Sailing from Europe to east Asia and the Americas could take months, and keeping water fresh for the crews was a challenge. 
Adding a bucket of brandy to a water barrel kept water fresh longer because alcohol is a preservative that kills harmful microbes. So by the 1600s, alcohol had gone from simply giving animals a buzz to fueling global trade and exploration— along with all their consequences. As time went on, its role in human society would only get more complicated.
Brief_History_of_Things_TED_ED
The_history_of_the_world_according_to_cats_EvaMaria_Geigl.txt
On May 27th, 1941, the German battleship Bismarck sank in a fierce firefight, leaving only 118 of her 2,200 crew members alive. But when a British destroyer came to collect the prisoners, they found an unexpected survivor - a black and white cat clinging to a floating plank. For the next several months this cat hunted rats and raised British morale - until a sudden torpedo strike shattered the hull and sank the ship. But, miraculously, not the cat. Nicknamed Unsinkable Sam, he rode to Gibraltar with the rescued crew and served as a ship cat on three more vessels – one of which also sank - before retiring to the Belfast Home for Sailors. Many may not think of cats as serviceable sailors, or cooperative companions of any kind. But cats have been working alongside humans for thousands of years - helping us just as often as we help them. So how did these solitary creatures go from wild predator to naval officer to sofa sidekick? The domestication of the modern house cat can be traced back to more than 10,000 years ago in the Fertile Crescent, at the start of the Neolithic era. People were learning to bend nature to their will, producing much more food than farmers could eat at one time. These Neolithic farmers stored their excess grain in large pits and short, clay silos. But these stores of food attracted hordes of rodents, as well as their predator, Felis silvestris lybica - the wildcat found across North Africa and Southwest Asia. These wildcats were fast, fierce, carnivorous hunters. And they were remarkably similar in size and appearance to today’s domestic cats. The main differences being that ancient wildcats were more muscular, had striped coats, and were less social towards other cats and humans. The abundance of prey in rodent-infested granaries drew in these typically solitary animals. 
And as the wildcats learned to tolerate the presence of humans and other cats during mealtime, we think that farmers likewise tolerated the cats in exchange for free pest control. The relationship was so beneficial that the cats migrated with Neolithic farmers from Anatolia into Europe and the Mediterranean. Vermin were a major scourge of the seven seas. They ate provisions and gnawed at lines of rope, so cats had long since become essential sailing companions. Around the same time these Anatolian globe trotting cats set sail, the Egyptians domesticated their own local cats. Revered for their ability to dispatch venomous snakes, catch birds, and kill rats, domestic cats became important to Egyptian religious culture. They gained immortality in frescos, hieroglyphs, statues, and even tombs, mummified alongside their owners. Egyptian ship cats cruised the Nile, holding poisonous river snakes at bay. And after graduating to larger vessels, they too began to migrate from port to port. During the time of the Roman Empire, ships traveling between India and Egypt carried the lineage of the central Asian wildcat F. s. ornata. Centuries later, in the Middle Ages, Egyptian cats voyaged up to the Baltic Sea on the ships of Viking seafarers. And both the Near Eastern and North African wildcats – probably tamed at this point -- continued to travel across Europe, eventually setting sail for Australia and the Americas. Today, most house cats have descended from either the Near Eastern or the Egyptian lineage of F.s.lybica. But close analysis of the genomes and coat patterns of modern cats tells us that unlike dogs, which have undergone centuries of selective breeding, modern cats are genetically very similar to ancient cats. And apart from making them more social and docile, we’ve done little to alter their natural behaviors. In other words, cats today are more or less as they’ve always been: Wild animals. Fierce hunters. Creatures that don’t see us as their keepers. 
And given our long history together, they might not be wrong.
Brief_History_of_Things_TED_ED
언어가_진화하는_방법_알렉스_젠들러_Alex_Gendler.txt
In the biblical story of the Tower of Babel, all of humanity once spoke a single language until they suddenly split into many groups unable to understand each other. We don't really know if such an original language ever existed, but we do know that the thousands of languages existing today can be traced back to a much smaller number. So how did we end up with so many? In the early days of human migration, the world was much less populated. Groups of people that shared a single language and culture often split into smaller tribes, going separate ways in search of fresh game and fertile land. As they migrated and settled in new places, they became isolated from one another and developed in different ways. Centuries of living in different conditions, eating different food and encountering different neighbors turned similar dialects with varied pronunciation and vocabulary into radically different languages, continuing to divide as populations grew and spread out further. Like genealogists, modern linguists try to map this process by tracing multiple languages back as far as they can to their common ancestor, or protolanguage. A group of all languages related in this way is called a language family, which can contain many branches and sub-families. So how do we determine whether languages are related in the first place? Similar sounding words don't tell us much. They could be false cognates or just directly borrowed terms rather than derived from a common root. Grammar and syntax are a more reliable guide, as well as basic vocabulary, such as pronouns, numbers or kinship terms, that's less likely to be borrowed. By systematically comparing these features and looking for regular patterns of sound changes and correspondences between languages, linguists can determine relationships, trace specific steps in their evolution and even reconstruct earlier languages with no written records. 
Linguistics can even reveal other important historical clues, such as determining the geographic origins and lifestyles of ancient peoples based on which of their words were native, and which were borrowed. There are two main problems linguists face when constructing these language family trees. One is that there is no clear way of deciding where the branches at the bottom should end, that is, which dialects should be considered separate languages or vice versa. Chinese is classified as a single language, but its dialects vary to the point of being mutually unintelligible, while speakers of Spanish and Portuguese can often understand each other. Languages actually spoken by living people do not exist in neatly divided categories, but tend to transition gradually, crossing borders and classifications. Often the difference between languages and dialects is a matter of changing political and national considerations, rather than any linguistic features. This is why the answer to, "How many languages are there?" can be anywhere between 3,000 and 8,000, depending on who's counting. The other problem is that the farther we move back in time towards the top of the tree, the less evidence we have about the languages there. The current division of major language families represents the limit at which relationships can be established with reasonable certainty, meaning that languages of different families are presumed not to be related on any level. But this may change. While many proposals for higher level relationships -- or super families -- are speculative, some have been widely accepted and others are being considered, especially for native languages with small speaker populations that have not been extensively studied. We may never be able to determine how language came about, or whether all human languages did in fact have a common ancestor scattered through the babel of migration. But the next time you hear a foreign language, pay attention. 
It may not be as foreign as you think.
Brief_History_of_Things_TED_ED
Where_does_gold_come_from_David_Lunney.txt
In medieval times, alchemists tried to achieve the seemingly impossible. They wanted to transform lowly lead into gleaming gold. History portrays these people as aged eccentrics, but if only they'd known that their dreams were actually achievable. Indeed, today we can manufacture gold on Earth thanks to modern inventions that those medieval alchemists missed by a few centuries. But to understand how this precious metal became embedded in our planet to start with, we have to gaze upwards at the stars. Gold is extraterrestrial. Instead of arising from the planet's rocky crust, it was actually cooked up in space and is present on Earth because of cataclysmic stellar explosions called supernovae. Stars are mostly made up of hydrogen, the simplest and lightest element. The enormous gravitational pressure of so much material compresses and triggers nuclear fusion in the star's core. This process releases energy from the hydrogen, making the star shine. Over many millions of years, fusion transforms hydrogen into heavier elements: helium, carbon, and oxygen, burning subsequent elements faster and faster to reach iron and nickel. However, at that point nuclear fusion no longer releases enough energy, and the pressure from the core peters out. The outer layers collapse into the center, and bouncing back from this sudden injection of energy, the star explodes forming a supernova. The extreme pressure of a collapsing star is so high, that subatomic protons and electrons are forced together in the core, forming neutrons. Neutrons have no repelling electric charge so they're easily captured by the iron group elements. Multiple neutron captures enable the formation of heavier elements that a star under normal circumstances can't form, from silver to gold, past lead and on to uranium. In extreme contrast to the million year transformation of hydrogen to helium, the creation of the heaviest elements in a supernova takes place in only seconds. 
But what becomes of the gold after the explosion? The expanding supernova shockwave propels its elemental debris through the interstellar medium, triggering a swirling dance of gas and dust that condenses into new stars and planets. Earth's gold was likely delivered this way before being kneaded into veins by geothermal activity. Billions of years later, we now extract this precious product by mining it, an expensive process that's compounded by gold's rarity. In fact, all of the gold that we've mined in history could be piled into just three Olympic-size swimming pools, although this represents a lot of mass because gold is about 20 times denser than water. So, can we produce more of this coveted commodity? Actually, yes. Using particle accelerators, we can mimic the complex nuclear reactions that create gold in stars. But these machines can only construct gold atom by atom. So it would take almost the age of the universe to produce one gram at a cost vastly exceeding the current value of gold. So that's not a very good solution. But if we were to reach a hypothetical point where we'd mined all of the Earth's buried gold, there are other places we could look. The ocean holds an estimated 20 million tons of dissolved gold but at extremely miniscule concentrations making its recovery too costly at present. Perhaps one day, we'll see gold rushes to tap the mineral wealth of the other planets of our solar system. And who knows? Maybe some future supernova will occur close enough to shower us with its treasure and hopefully not eradicate all life on Earth in the process.
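The "three Olympic pools" comparison above can be sanity-checked with rough arithmetic. This is a sketch under assumed figures (an Olympic pool of ~2,500 cubic meters, gold at ~19,300 kg per cubic meter; the constant and function names are mine):

```python
# Rough check on the "all mined gold fits in three Olympic pools" claim.
# Assumed figures: an Olympic pool holds roughly 2,500 cubic meters,
# and gold's density is roughly 19,300 kg/m^3 -- about 19.3 times
# the density of water (~1,000 kg/m^3), matching the narration's "about 20 times".

POOL_VOLUME_M3 = 2_500
GOLD_DENSITY_KG_PER_M3 = 19_300
WATER_DENSITY_KG_PER_M3 = 1_000

def gold_mass_tonnes(pools: int) -> float:
    """Mass of gold filling the given number of Olympic pools, in tonnes."""
    return pools * POOL_VOLUME_M3 * GOLD_DENSITY_KG_PER_M3 / 1_000

print(gold_mass_tonnes(3))  # ~144,750 tonnes: a small volume, but a lot of mass
print(GOLD_DENSITY_KG_PER_M3 / WATER_DENSITY_KG_PER_M3)  # ~19.3
```

So three pools of gold would weigh on the order of 150,000 tonnes, which is why a visually modest volume represents "a lot of mass."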
Brief_History_of_Things_TED_ED
A_brief_history_of_cheese_Paul_Kindstedt.txt
Before empires and royalty, before pottery and writing, before metal tools and weapons – there was cheese. As early as 8000 BCE, the earliest Neolithic farmers living in the Fertile Crescent began a legacy of cheesemaking almost as old as civilization itself. The rise of agriculture led to domesticated sheep and goats, which ancient farmers harvested for milk. But when left in warm conditions for several hours, that fresh milk began to sour. Its lactic acids caused proteins to coagulate, binding into soft clumps. Upon discovering this strange transformation, the farmers drained the remaining liquid – later named whey – and found the yellowish globs could be eaten fresh as a soft, spreadable meal. These clumps, or curds, became the building blocks of cheese, which would eventually be aged, pressed, ripened, and whizzed into a diverse cornucopia of dairy delights. The discovery of cheese gave Neolithic people an enormous survival advantage. Milk was rich with essential proteins, fats, and minerals. But it also contained high quantities of lactose – a sugar which is difficult to process for many ancient and modern stomachs. Cheese, however, could provide all of milk’s advantages with much less lactose. And since it could be preserved and stockpiled, these essential nutrients could be eaten throughout scarce famines and long winters. Some 7th millennium BCE pottery fragments found in Turkey still contain telltale residues of the cheese and butter they held. By the end of the Bronze Age, cheese was a standard commodity in maritime trade throughout the eastern Mediterranean. In the densely populated city-states of Mesopotamia, cheese became a staple of culinary and religious life. Some of the earliest known writing includes administrative records of cheese quotas, listing a variety of cheeses for different rituals and populations across Mesopotamia. Records from nearby civilizations in Turkey also reference rennet. 
This animal byproduct, produced in the stomachs of certain mammals, can accelerate and control coagulation. Eventually this sophisticated cheesemaking tool spread around the globe, giving way to a wide variety of new, harder cheeses. And though some conservative food cultures rejected the dairy delicacy, many more embraced cheese, and quickly added their own local flavors. Nomadic Mongolians used yaks’ milk to create hard, sundried wedges of Byaslag. Egyptians enjoyed goats’ milk cottage cheese, straining the whey with reed mats. In South Asia, milk was coagulated with a variety of food acids, such as lemon juice, vinegar, or yogurt and then hung to dry into loafs of paneer. This soft mild cheese could be added to curries and sauces, or simply fried as a quick vegetarian dish. The Greeks produced bricks of salty brined feta cheese, alongside a harder variety similar to today’s pecorino romano. This grating cheese was produced in Sicily and used in dishes all across the Mediterranean. Under Roman rule, “dry cheese” or “caseus aridus,” became an essential ration for the nearly 500,000 soldiers guarding the vast borders of the Roman Empire. And when the Western Roman Empire collapsed, cheesemaking continued to evolve in the manors that dotted the medieval European countryside. In the hundreds of Benedictine monasteries scattered across Europe, medieval monks experimented endlessly with different types of milk, cheesemaking practices, and aging processes that led to many of today’s popular cheeses. Parmesan, Roquefort, Munster and several Swiss types were all refined and perfected by these cheesemaking clergymen. In the Alps, Swiss cheesemaking was particularly successful – producing a myriad of cow’s milk cheeses. By the end of the 14th century, Alpine cheese from the Gruyere region of Switzerland had become so profitable that a neighboring state invaded the Gruyere highlands to take control of the growing cheese trade. 
Cheese remained popular through the Renaissance, and the Industrial Revolution took production out of the monastery and into machinery. Today, the world produces roughly 22 billion kilograms of cheese a year, shipped and consumed around the globe. But 10,000 years after its invention, local farms are still following in the footsteps of their Neolithic ancestors, hand crafting one of humanity’s oldest and favorite foods.
Brief_History_of_Things_TED_ED
Who_decides_how_long_a_second_is_John_Kitching.txt
In 1967, researchers from around the world gathered to answer a long-running scientific question— just how long is a second? It might seem obvious at first. A second is the tick of a clock, the swing of a pendulum, the time it takes to count to one. But how precise are those measurements? What is that length based on? And how can we scientifically define this fundamental unit of time? For most of human history, ancient civilizations measured time with unique calendars that tracked the steady march of the night sky. In fact, the second as we know it wasn’t introduced until the late 1500’s, when the Gregorian calendar began to spread across the globe alongside British colonialism. The Gregorian calendar defined a day as a single revolution of the Earth about its axis. Each day could be divided into 24 hours, each hour into 60 minutes, and each minute into 60 seconds. However, when it was first defined, the second was more of a mathematical idea than a useful unit of time. Measuring days and hours was sufficient for most tasks in pastoral communities. It wasn’t until society became interconnected through fast-moving railways that cities needed to agree on exact timekeeping. By the 1950’s, numerous global systems required every second to be perfectly accounted for, with as much precision as possible. And what could be more precise than the atomic scale? As early as 1955, researchers began to develop atomic clocks, which relied on the unchanging laws of physics to establish a new foundation for timekeeping. An atom consists of negatively charged electrons orbiting a positively charged nucleus at a consistent frequency. The laws of quantum mechanics keep these electrons in place, but if you expose an atom to an electromagnetic field such as light or radio waves, you can slightly disturb an electron’s orientation. And if you briefly tweak an electron at just the right frequency, you can create a vibration that resembles a ticking pendulum. 
Unlike regular pendulums that quickly lose energy, electrons can tick for centuries. To maintain consistency and make ticks easier to measure, researchers vaporize the atoms, converting them to a less interactive and volatile state. But this process doesn’t slow down the atom’s remarkably fast ticking. Some atoms can oscillate over nine billion times per second, giving atomic clocks an unparalleled resolution for measuring time. And since every atom of a given elemental isotope is identical, two researchers using the same element and the same electromagnetic wave should produce perfectly consistent clocks. But before timekeeping could go fully atomic, countries had to decide which atom would work best. This was the discussion in 1967, at the Thirteenth General Conference of the International Committee for Weights and Measures. There are 118 elements on the periodic table, each with their own unique properties. For this task, the researchers were looking for several things. The element needed to have long-lived and high frequency electron oscillation for precise, long-term timekeeping. To easily track this oscillation, it also needed to have a reliably measurable quantum spin— meaning the orientation of the axis about which the electron rotates— as well as a simple energy level structure— meaning the active electrons are few and their state is simple to identify. Finally, it needed to be easy to vaporize. The winning atom? Cesium-133. Cesium was already a popular element for atomic clock research, and by 1968, some cesium clocks were even commercially available. All that was left was to determine how many ticks of a cesium atom were in a second. The conference used the most precise astronomical measurement of a second available at the time— beginning with the number of days in a year and dividing down. When compared to the atom’s ticking rate, the results formally defined one second as exactly 9,192,631,770 ticks of a cesium-133 atom. 
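Because the 1967 definition fixes the second as an exact integer count of cesium-133 oscillations, converting between seconds and "ticks" is pure integer arithmetic. A minimal sketch (the constant is the SI value quoted above; the helper name is mine):

```python
# SI definition (1967): one second is exactly this many periods of the
# radiation from the cesium-133 hyperfine transition.
CESIUM_TICKS_PER_SECOND = 9_192_631_770

def seconds_to_ticks(seconds: int) -> int:
    """Exact number of cesium-133 'ticks' in a whole number of seconds."""
    return seconds * CESIUM_TICKS_PER_SECOND

print(seconds_to_ticks(1))       # 9,192,631,770 -- one second
print(seconds_to_ticks(86_400))  # ticks in one 24-hour day
```

The count is exact by definition rather than measured, which is what lets independently built cesium clocks agree with one another.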
Today, atomic clocks are used all over the Earth— and beyond it. From radio signal transmitters to satellites for global positioning systems, these devices have been synchronized to help us maintain a globally consistent time— with precision that’s second to none.
Brief_History_of_Things_TED_ED
A_brief_history_of_chess_Alex_Gendler.txt
The attacking infantry advances steadily, their elephants already having broken the defensive line. The king tries to retreat, but enemy cavalry flanks him from the rear. Escape is impossible. But this isn’t a real war– nor is it just a game. Over the roughly one-and-a-half millennia of its existence, chess has been known as a tool of military strategy, a metaphor for human affairs, and a benchmark of genius. While our earliest records of chess are in the 7th century, legend tells that the game’s origins lie a century earlier. Supposedly, when the youngest prince of the Gupta Empire was killed in battle, his brother devised a way of representing the scene to their grieving mother. Set on the 8x8 ashtapada board used for other popular pastimes, a new game emerged with two key features: different rules for moving different types of pieces, and a single king piece whose fate determined the outcome. The game was originally known as chaturanga– a Sanskrit word for "four divisions." But with its spread to Sassanid Persia, it acquired its current name and terminology– "chess," derived from "shah," meaning king, and “checkmate” from "shah mat," or “the king is helpless.” After the 7th century Islamic conquest of Persia, chess was introduced to the Arab world. Transcending its role as a tactical simulation, it eventually became a rich source of poetic imagery. Diplomats and courtiers used chess terms to describe political power. Ruling caliphs became avid players themselves. And historian al-Mas’udi considered the game a testament to human free will compared to games of chance. Medieval trade along the Silk Road carried the game to East and Southeast Asia, where many local variants developed. In China, chess pieces were placed at intersections of board squares rather than inside them, as in the native strategy game Go. The reign of Mongol leader Tamerlane saw an 11x10 board with safe squares called citadels. 
And in Japanese shogi, captured pieces could be used by the opposing player. But it was in Europe that chess began to take on its modern form. By 1000 AD, the game had become part of courtly education. Chess was used as an allegory for different social classes performing their proper roles, and the pieces were re-interpreted in their new context. At the same time, the Church remained suspicious of games. Moralists cautioned against devoting too much time to them, with chess even being briefly banned in France. Yet the game proliferated, and the 15th century saw it cohering into the form we know today. The relatively weak piece of advisor was recast as the more powerful queen– perhaps inspired by the recent surge of strong female leaders. This change accelerated the game’s pace, and as other rules were popularized, treatises analyzing common openings and endgames appeared. Chess theory was born. With the Enlightenment era, the game moved from royal courts to coffeehouses. Chess was now seen as an expression of creativity, encouraging bold moves and dramatic plays. This "Romantic" style reached its peak in the Immortal Game of 1851, where Adolf Anderssen managed a checkmate after sacrificing his queen and both rooks. But the emergence of formal competitive play in the late 19th century meant that strategic calculation would eventually trump dramatic flair. And with the rise of international competition, chess took on a new geopolitical importance. During the Cold War, the Soviet Union devoted great resources to cultivating chess talent, dominating the championships for the rest of the century. But the player who would truly upset Russian dominance was not a citizen of another country but an IBM computer called Deep Blue. Chess-playing computers had been developed for decades, but Deep Blue’s triumph over Garry Kasparov in 1997 was the first time a machine had defeated a sitting champion. 
Today, chess software is capable of consistently defeating the best human players. But just like the game they’ve mastered, these machines are products of human ingenuity. And perhaps that same ingenuity will guide us out of this apparent checkmate.
Brief_History_of_Things_TED_ED
The_fascinating_history_of_cemeteries_Keith_Eggener.txt
Spindly trees, rusted gates, crumbling stone, a solitary mourner— these things come to mind when we think of cemeteries. But not so long ago, many burial grounds were lively places, with blooming gardens and crowds of people strolling among the headstones. How did our cemeteries become what they are today? Some have been around for centuries, like the world’s largest, Wadi al-Salaam, where more than five million people are buried. But most of the places we’d recognize as cemeteries are much younger. In fact, for much of human history, we didn’t bury our dead at all. Our ancient ancestors had many other ways of parting with dead loved ones. Some were left in caves, others in trees or on mountaintops. Still others were sunk in lakes, put out to sea, ritually cannibalized, or cremated. All of these practices, though some may seem strange today, were ways of venerating the dead. By contrast, the first known burials about 120,000 years ago were likely reserved for transgressors, excluding them from the usual rites intended to honor the dead. But the first burials revealed some advantages over other practices: they protected bodies from scavengers and the elements, while shielding loved ones from the sight of decay. These benefits may have shifted ancient people’s thinking toward graves designed to honor the dead, and burial became more common. Sometimes, these graves contained practical or ritual objects, suggesting belief in an afterlife. Communal burials first appeared in North Africa and West Asia around 10 to 15,000 years ago, around the same time as the first permanent settlements in these areas. These burial grounds created permanent places to commemorate the dead. The nomadic Scythians littered the steppes with grave mounds known as kurgans. The Etruscans built expansive necropoles, their grid-patterned streets lined with tombs. In Rome, subterranean catacombs housed both cremation urns and intact remains. 
The word cemetery, or “sleeping chamber,” was first used by ancient Greeks, who built tombs in graveyards at the edges of their cities. In medieval European cities, Christian churchyards provided rare, open spaces that accommodated the dead, but also hosted markets, fairs, and other events. Farmers even grazed cattle in them, believing graveyard grass made for sweeter milk. As cities grew during the industrial revolution, large suburban cemeteries replaced smaller urban churchyards. Cemeteries like the 110-acre Père-Lachaise in Paris or the 72-acre Mt. Auburn in Cambridge, Massachusetts were lushly landscaped gardens filled with sculpted stones and ornate tombs. Once a luxury reserved for the rich and powerful, individually marked graves became available to the middle and working classes. People visited cemeteries for funerals, but also for anniversaries, holidays, or simply an afternoon outdoors. By the late 19th century, as more public parks and botanical gardens appeared, cemeteries began to lose visitors. Today, many old cemeteries are lonely places. Some are luring visitors back with tours, concerts, and other attractions. But even as we revive old cemeteries, we’re rethinking the future of burial. Cities like London, New York, and Hong Kong are running out of burial space. Even in places where space isn’t so tight, cemeteries permanently occupy land that can’t be otherwise cultivated or developed. Traditional burial consumes materials like metal, stone, and concrete, and can pollute soil and groundwater with toxic chemicals. With increasing awareness of the environmental costs, people are seeking alternatives. Many are turning to cremation and related practices. Along with these more conventional practices, people can now have their remains shot into space, used to fertilize a tree, or made into jewelry, fireworks, and even tattoo ink. In the future, options like these may replace burial completely. 
Cemeteries may be our most familiar monuments to the departed, but they’re just one step in our ever-evolving process of remembering and honoring the dead.
Brief_History_of_Things_TED_ED
A_brief_history_of_plastic.txt
Today, plastics are everywhere. All of this plastic originated from one small object— that isn’t even made of plastic. For centuries, billiard balls were made of ivory from elephant tusks. But when excessive hunting caused elephant populations to decline in the 19th century, billiard ball makers began to look for alternatives, offering huge rewards. So in 1863 an American named John Wesley Hyatt took up the challenge. Over the next five years, he invented a new material called celluloid, made from cellulose, a compound found in wood and straw. Hyatt soon discovered celluloid couldn’t solve the billiard ball problem–– the material wasn’t heavy enough and didn’t bounce quite right. But it could be tinted and patterned to mimic more expensive materials like coral, tortoiseshell, amber, and mother-of-pearl. He had created what became known as the first plastic. The word ‘plastic’ can describe any material made of polymers, which are just large molecules consisting of the same repeating subunit. This includes all human-made plastics, as well as many of the materials found in living things. But in general, when people refer to plastics, they’re referring to synthetic materials. The unifying feature of these is that they start out soft and malleable and can be molded into a particular shape. Despite taking the prize as the first official plastic, celluloid was highly flammable, which made production risky. So inventors began to hunt for alternatives. In 1907 a chemist combined phenol— a waste product of coal tar— and formaldehyde, creating a hardy new polymer called bakelite. Bakelite was much less flammable than celluloid and the raw materials used to make it were more readily available. Bakelite was only the beginning. In the 1920s, researchers first commercially developed polystyrene, a spongy plastic used in insulation. Soon after came polyvinyl chloride, or vinyl, which was flexible yet hardy. 
Acrylics created transparent, shatter-proof panels that mimicked glass. And in the 1930s nylon took centre stage— a polymer designed to mimic silk, but with many times its strength. Starting in 1933, polyethylene became one of the most versatile plastics, still used today to make everything from grocery bags, to shampoo bottles, to bulletproof vests. New manufacturing technologies accompanied this explosion of materials. The invention of a technique called injection-moulding made it possible to insert melted plastics into molds of any shape, where they would rapidly harden. This created possibilities for products in new varieties and shapes— and a way to inexpensively and rapidly produce plastics at scale. Scientists hoped this economical new material would make items that once had been unaffordable accessible to more people. Instead, plastics were pushed into service in World War Two. During the war, plastic production in the United States quadrupled. Soldiers wore new plastic helmet liners and water-resistant vinyl raincoats. Pilots sat in cockpits made of plexiglass, a shatterproof plastic, and relied on parachutes made of resilient nylon. Afterwards, plastic manufacturing companies that had sprung up during wartime turned their attention to consumer products. Plastics began to replace other materials like wood, glass, and fabric in furniture, clothing, shoes, televisions, and radios. Versatile plastics opened up possibilities for packaging— mainly designed to keep food and other products fresh for longer. Suddenly, there were plastic garbage bags, stretchy plastic wrap, squeezable plastic bottles, takeaway cartons, and plastic containers for fruit, vegetables, and meat. Within just a few decades, this multifaceted material ushered in what became known as the “plastics century.” While the plastics century brought convenience and cost-effectiveness, it also created staggering environmental problems. Many plastics are made of nonrenewable resources. 
And plastic packaging was designed to be single-use, but some plastics take centuries to decompose, creating a huge build up of waste. This century we’ll have to concentrate our innovations on addressing those problems— by reducing plastic use, developing biodegradable plastics, and finding new ways to recycle existing plastic.
Brief_History_of_Things_TED_ED
Chocolate_A_short_but_sweet_history_Edible_Histories_Episode_3_BBC_Ideas.txt
Chocolate: food of the gods. That's the Greek meaning of Theobroma cacao, the name of the tree that provides it. For a plant which is notoriously difficult to cultivate, its takeover of global tastes is decidedly impressive. Although not sold in Britain until the 1650s, its history goes back about two-and-a-half thousand years before that, when it was almost certainly first domesticated in Central and South America. Chocolate was an important part of early Central and South American culture. The classic Mayans and their successors, including the Aztecs, consumed chocolate, usually as a drink with water and perhaps chili, or thickened with maize. They also used the beans as currency, as well as using them in ceremonies from baptism to burial. It was a rich person's beverage, imbued with health and spiritual properties, and inevitably, when the Spanish invaded and colonized the areas where it was found, they adopted it for their own use. At first it was slow to spread. When one Spanish ship transporting beans was captured by the British in the 16th century, they apparently threw the cargo overboard, thinking it was sheep dung. However, as the Spanish, and then the French, and then the Italians adapted the drink for their own tastes, they replaced the water with milk and added sugar, and also started drinking it hot. By the time the British caught on to it, it was a rich, thick concoction, both delicious and pleasingly exotic. It was also healthy. 17th-century medicine wasn't always certain what the new foods from the Americas would do to a Western disposition, but chocolate mainly got a resolute thumbs up. Taken correctly, it was said to restore natural heat, generate blood, enliven the heart and conserve the natural faculties. It was also claimed to be an aphrodisiac, and one author wrote: "'Twill make old women young and fresh, create new motions of the flesh, and cause them to long for you know what, if they but taste of chocolate." The Marquis de Sade was said to be addicted to it, using it to fuel ferocious orgies. No wonder it was popular. At this time chocolate was a drink, but in the early 19th century manufacturers worked out how to remove much of the fat, called cocoa butter, which could then be added back carefully to improve the texture, making it edible, though still very bitter. The defatted chocolate became cocoa powder, which allowed the poor access to their own version of the food of the gods. It was also used for cooking, though we'd have to wait a few more decades for chocolate cake. It wasn't until the second half of the 19th century that developments in milk processing, a sharp reduction in the price of sugar, and fierce competition between confectionery companies resulted in the first really popular eating chocolate: milk chocolate. Sales exploded, and chocolate quickly came to mean the stuff you ate, not the stuff you drank. Less than 50 years later, chocoholics could choose from an ever-increasing range of bars, boxes and novelty shapes. Today chocolate is polarized, from cheap, milky, sugary stuff to high-end black bars of joy. The former, we're told, high in sugar and fat, is leading to an obese nation. The latter, it's hinted, may actually be beneficial: early studies suggest small doses of very dark chocolate, rich in antioxidants, theobromine and caffeine, may make us happier, healthier and less stressed. Perhaps those 17th-century chocolate lovers were right after all.
Brief_History_of_Things_TED_ED
History_through_the_eyes_of_the_potato_Leo_BearMcGuinness.txt
Baked or fried, boiled or roasted, as chips or fries. At some point in your life, you've probably eaten a potato. Delicious, for sure, but the fact is potatoes have played a much more significant role in our history than just that of the dietary staple we have come to know and love today. Without the potato, our modern civilization might not exist at all. 8,000 years ago in South America, high atop the Andes, ancient Peruvians were the first to cultivate the potato. Containing high levels of proteins and carbohydrates, as well as essential fats, vitamins and minerals, potatoes were the perfect food source to fuel a large Incan working class as they built and farmed their terraced fields, mined the Rocky Mountains, and created the sophisticated civilization of the great Incan Empire. But considering how vital they were to the Incan people, when Spanish sailors returning from the Andes first brought potatoes to Europe, the spuds were duds. Europeans simply didn't want to eat what they considered dull and tasteless oddities from a strange new land, too closely related to the deadly nightshade plant belladonna for comfort. So instead of consuming them, they used potatoes as decorative garden plants. More than 200 years would pass before the potato caught on as a major food source throughout Europe, though even then, it was predominantly eaten by the lower classes. However, beginning around 1750, and thanks at least in part to the wide availability of inexpensive and nutritious potatoes, European peasants with greater food security no longer found themselves at the mercy of the regularly occurring grain famines of the time, and so their populations steadily grew. As a result, the British, Dutch and German Empires rose on the backs of the growing groups of farmers, laborers, and soldiers, thus lifting the West to its place of world dominion. However, not all European countries sprouted empires. 
After the Irish adopted the potato, their population dramatically increased, as did their dependence on the tuber as a major food staple. But then disaster struck. From 1845 to 1852, potato blight disease ravaged the majority of Ireland's potato crop, leading to the Irish Potato Famine, one of the deadliest famines in world history. Over a million Irish citizens starved to death, and 2 million more left their homes behind. But of course, this wasn't the end for the potato. The crop eventually recovered, and Europe's population, especially the working classes, continued to increase. Aided by the influx of Irish migrants, Europe now had a large, sustainable, and well-fed population who were capable of manning the emerging factories that would bring about our modern world via the Industrial Revolution. So it's almost impossible to imagine a world without the potato. Would the Industrial Revolution ever have happened? Would World War II have been lost by the Allies without this easy-to-grow crop that fed the Allied troops? Would it even have started? When you think about it like this, many major milestones in world history can all be at least partially attributed to the simple spud from the Peruvian hilltops.
Why_is_Herodotus_called_The_Father_of_History_Mark_Robinson.txt
Giant gold-digging ants, a furious king who orders the sea to be whipped 300 times, and a dolphin that saves a famous poet from drowning. These are just some of the stories from The Histories by Herodotus, an Ancient Greek writer from the 5th century BCE. Not all the events in the text may have happened exactly as Herodotus reported them, but this work revolutionized the way the past was recorded. Before Herodotus, the past was documented as a list of events with little or no attempt to explain their causes beyond accepting things as the will of the gods. Herodotus wanted a deeper, more rational understanding, so he took a new approach: looking at events from both sides to understand the reasons for them. Though he was Greek, Herodotus's hometown of Halicarnassus was part of the Persian Empire. He grew up during a series of wars between the powerful Persians and the smaller Greeks, and decided to find out all he could about the subject. In Herodotus's telling, the Persian Wars began in 499 BCE, when the Athenians assisted a rebellion by Greeks living under Persian rule. In 490, the Persian King, Darius, sent his army to take revenge on Athens. But at the Battle of Marathon, the Athenians won an unexpected victory. Ten years later, the Persians returned, planning to conquer the whole of Greece under the leadership of Darius's son, Xerxes. According to Herodotus, when Xerxes arrived, his million man army was initially opposed by a Greek force led by 300 Spartans at the mountain pass of Thermopylae. At great cost to the Persians, the Spartans and their king, Leonidas, were killed. This heroic defeat has been an inspiration to underdogs ever since. A few weeks later, the Greek navy tricked the Persian fleet into fighting in a narrow sea channel near Athens. The Persians were defeated and Xerxes fled, never to return. To explain how these wars broke out and why the Greeks triumphed, Herodotus collected stories from all around the Mediterranean. 
He recorded the achievements of both Greeks and non-Greeks before they were lost to the passage of time. The Histories opens with the famous sentence: "Herodotus, of Halicarnassus, here displays his inquiries." By framing the book as an “inquiry,” Herodotus allowed it to contain many different stories, some serious, others less so. He recorded the internal debates of the Persian court but also tales of Egyptian flying snakes and practical advice on how to catch a crocodile. The Greek word for this method of research is "autopsy," meaning "seeing for oneself." Herodotus was the first writer to examine the past by combining the different kinds of evidence he collected: opsis, or eyewitness accounts, akoe, or hearsay, and ta legomena, or tradition. He then used gnome, or reason, to reach conclusions about what actually happened. Many of the book's early readers were actually listeners. The Histories was originally written in 28 sections, each of which took about four hours to read aloud. As the Greeks increased in influence and power, Herodotus's writing and the idea of history spread across the Mediterranean. As the first proper historian, Herodotus wasn't perfect. On occasions, he favored the Greeks over the Persians and was too quick to believe some of the stories that he heard, which made for inaccuracies. However, modern evidence has actually explained some of his apparently extreme claims. For instance, there's a species of marmot in the Himalayas that spreads gold dust while digging. The ancient Persian word for marmot is quite close to the word for ant, so Herodotus may have just fallen prey to a translation error. All in all, for someone who was writing in an entirely new style, Herodotus did remarkably well. History, right down to the present day, has always suffered from the partiality and mistakes of historians. Herodotus’s method and creativity earned him the title that the Roman author Cicero gave him several hundred years later: "The Father of History."
The_history_of_chocolate_Deanna_Pucciarelli.txt
If you can't imagine life without chocolate, you're lucky you weren't born before the 16th century. Until then, chocolate only existed in Mesoamerica in a form quite different from what we know. As far back as 1900 BCE, the people of that region had learned to prepare the beans of the native cacao tree. The earliest records tell us the beans were ground and mixed with cornmeal and chili peppers to create a drink - not a relaxing cup of hot cocoa, but a bitter, invigorating concoction frothing with foam. And if you thought we make a big deal about chocolate today, the Mesoamericans had us beat. They believed that cacao was a heavenly food gifted to humans by a feathered serpent god, known to the Maya as Kukulkan and to the Aztecs as Quetzalcoatl. Aztecs used cacao beans as currency and drank chocolate at royal feasts, gave it to soldiers as a reward for success in battle, and used it in rituals. The first transatlantic chocolate encounter occurred in 1519 when Hernán Cortés visited the court of Moctezuma at Tenochtitlan. As recorded by Cortés's lieutenant, the king had 50 jugs of the drink brought out and poured into golden cups. When the colonists returned with shipments of the strange new bean, missionaries' salacious accounts of native customs gave it a reputation as an aphrodisiac. At first, its bitter taste made it suitable as a medicine for ailments, like upset stomachs, but sweetening it with honey, sugar, or vanilla quickly made chocolate a popular delicacy in the Spanish court. And soon, no aristocratic home was complete without dedicated chocolate ware. The fashionable drink was difficult and time consuming to produce on a large scale. That involved using plantations and imported slave labor in the Caribbean and on islands off the coast of Africa. The world of chocolate would change forever in 1828 with the introduction of the cocoa press by Coenraad van Houten of Amsterdam. Van Houten's invention could separate the cocoa's natural fat, or cocoa butter. 
This left a powder that could be mixed into a drinkable solution or recombined with the cocoa butter to create the solid chocolate we know today. Not long after, a Swiss chocolatier named Daniel Peter added powdered milk to the mix, thus inventing milk chocolate. By the 20th century, chocolate was no longer an elite luxury but had become a treat for the public. Meeting the massive demand required more cultivation of cocoa, which can only grow near the equator. Now, instead of African slaves being shipped to South American cocoa plantations, cocoa production itself would shift to West Africa with Cote d'Ivoire providing two-fifths of the world's cocoa as of 2015. Yet along with the growth of the industry, there have been horrific abuses of human rights. Many of the plantations throughout West Africa, which supply Western companies, use slave and child labor, with an estimation of more than 2 million children affected. This is a complex problem that persists despite efforts from major chocolate companies to partner with African nations to reduce child and indentured labor practices. Today, chocolate has established itself in the rituals of our modern culture. Due to its colonial association with native cultures, combined with the power of advertising, chocolate retains an aura of something sensual, decadent, and forbidden. Yet knowing more about its fascinating and often cruel history, as well as its production today, tells us where these associations originate and what they hide. So as you unwrap your next bar of chocolate, take a moment to consider that not everything about chocolate is sweet.
The_dark_history_of_bananas_John_Soluri.txt
On a December night in 1910, the exiled former leader of Honduras, Manuel Bonilla, boarded a borrowed yacht in New Orleans. With a group of heavily armed accomplices, he set sail for Honduras in hopes of reclaiming power by whatever means necessary. Bonilla had a powerful backer, the future leader of a notorious organization known throughout Latin America as El Pulpo, or "the Octopus," for its long reach. The infamous El Pulpo was a U.S. corporation trafficking in, of all things, bananas. It was officially known as United Fruit Company— or Chiquita Brands International today. First cultivated in Southeast Asia thousands of years ago, bananas reached the Americas in the early 1500s, where enslaved Africans cultivated them in plots alongside sugar plantations. There were many different bananas, most of which looked nothing like the bananas in supermarket aisles today. In the 1800s, captains from New Orleans and New England ventured to the Caribbean in search of coconuts and other goods. They began to experiment with bananas, purchasing one kind, called Gros Michel, from Afro-Caribbean farmers in Jamaica, Cuba, and Honduras. Gros Michel bananas produced large bunches of relatively thick-skinned fruit— ideal for shipping. By the end of the 1800s, bananas were a hit in the US. They were affordable, available year-round, and endorsed by medical doctors. As bananas became big business, U.S. fruit companies wanted to grow their own bananas. In order to secure access to land, banana moguls lobbied and bribed government officials in Central America, and even funded coups to ensure they had allies in power. In Honduras, Manuel Bonilla repaid the banana man who had financed his return to power with land concessions. By the 1930s, one company dominated the region: United Fruit, who owned over 40% of Guatemala’s arable land at one point. 
They cleared rainforest in Costa Rica, Colombia, Guatemala, Honduras, and Panama to build plantations, along with railroads, ports, and towns to house workers. Lured by relatively high-paying jobs, people migrated to banana zones. From Guatemala to Colombia, United Fruit’s plantations grew exclusively Gros Michel bananas. These densely packed farms had little biological diversity, making them ripe for disease epidemics. The infrastructure connecting these vulnerable farms could quickly spread disease: pathogens could hitch a ride from one farm to another on workers’ boots, railroad cars, and steamships. That’s exactly what happened in the 1910s, when a fungus began to level Gros Michel banana plantations, first in Panama, and later throughout Central America, spreading quickly via the same system that had enabled big profits and cheap bananas. In a race against “Panama Disease,” banana companies abandoned infected plantations in Costa Rica, Honduras, and Guatemala, leaving thousands of farmers and workers jobless. The companies then felled extensive tracts of rainforests in order to establish new plantations. After World War II, the dictatorships with which United Fruit had partnered in Guatemala and Honduras yielded to democratically elected governments that called for land reform. In Guatemala, President Jacobo Arbenz tried to buy back land from United Fruit and redistribute it to landless farmers. The Arbenz government offered to pay a price based on tax records— where United Fruit had underreported the value of the land. El Pulpo was not happy. The company launched propaganda campaigns against Arbenz and called on its deep connections in the US Government for help. Citing fears of communism, the CIA orchestrated the overthrow of the democratically elected Arbenz in 1954. That same year in Honduras, thousands of United Fruit workers went on strike until the company agreed to recognize a new labor union. 
With the political and economic costs of running from Panama Disease escalating, United Fruit finally switched from Gros Michel to Panama disease-resistant Cavendish bananas in the early 1960s. Today, bananas are no longer as economically vital in Central America, and United Fruit Company, rechristened Chiquita, has lost its stranglehold on Latin American politics. But the modern banana industry isn’t without problems. Cavendish bananas require frequent applications of pesticides that create hazards for farmworkers and ecosystems. And though they’re resistant to the particular pathogen that affected Gros Michel bananas, Cavendish farms also lack biological diversity, leaving the banana trade ripe for another pandemic.
MIT_3020_Thermodynamics_of_Materials_Spring_2021
Supplemental_Video_The_Three_Dees_of_Thermodynamics.txt
[SQUEAKING] [CLICKING] [RUSTLING] RAFAEL JARAMILLO: Hi. Today we're going to discuss the many D's of thermodynamics. So what do I mean? We're going to talk about lowercase d, lowercase Greek d, and uppercase Greek D. And I wanted to clarify why we have different D's that we use, when we use them, and give some physical intuition for what they mean. So, for example, when we write the combined statement of the first and second law, we have dU equals TdS minus PdV. In this case, the d's indicate exact differentials, that is, infinitesimal changes in state variables. I'll write that out. OK. The next D we want to talk about are the lowercase Greek d's. So, for example, conservation of energy gives us the following expression. dU equals dQ plus dW, which tells us that the total change of internal energy for a system equals the sum of heat and work. Heat and work are process variables. In this case, the lowercase Greek d's indicate inexact differentials. That is, infinitesimal changes in process variables. These terms, exact and inexact differentials, they have a specific meaning in thermodynamics, which is related to, but not identical to, meanings they have in mathematics. So this is a statement about thermodynamics. OK, this brings us to the third D, the capital Greek D. So I'll write an example. In the context of the Clausius-Clapeyron equation, dP over dT equals-- let me get this right-- delta S over delta V. In this case, the capital D's indicate transformations, transformations. So in materials thermodynamics, the most common transformations that we talk about are transformations that take place at constant pressure and constant temperature. That is, isobaric and isothermal transformations. So of the three D's-- lowercase d, lowercase Greek d, and uppercase Greek D-- this uppercase Greek D is the one that contains the most physical, unstated assumptions. It contains the most information about the system.
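The three differentials the lecture distinguishes can be summarized side by side. The transcript spells the equations out in words; this is a sketch in standard notation:

```latex
\begin{align*}
dU &= T\,dS - P\,dV
  && \text{exact differentials: state variables} \\
dU &= \delta Q + \delta W
  && \text{inexact differentials: process variables} \\
\frac{dP}{dT} &= \frac{\Delta S}{\Delta V}
  && \text{transformation quantities (Clausius-Clapeyron)}
\end{align*}
```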
But as a result, it also is often the hardest for students to understand when you first encounter it. So we're going to take some time to draw out what is meant by transformations and transformation quantities. So to illustrate the concept of transformation quantities, it helps to draw state function surfaces. So as a reminder, for a given phase and composition, we can draw state functions of two-- we can draw these as surfaces-- that is, state functions of two independent variables. So, for example, we could have the entropy of phase alpha drawn as a function of temperature and pressure. So I'm going to draw a representative state function surface now. Let's see. We'll keep the axes. So here we go. Here is-- for some hypothetical phase. That's a state function surface. Let's draw the axes. We'll make the vertical axis entropy. One of the axes in the plane can be pressure, and the other would be temperature. So this is a 3D visualization of how the entropy for a given phase might vary with pressure and temperature at a fixed composition. And to keep track of the fact that it's for a given phase, I'll label it alpha. So this is the state function surface for a given, particular phase. So to illustrate transformation quantities, I want to draw the transformation entropy for an isobaric, isothermal phase transformation. So it's an isobaric-- we're going to consider a transformation between two phases, alpha and beta. So there are going to be two different state function surfaces, one for each phase. We'll label this beta. And I'll draw another state function surface for phase alpha. Alpha. And as before, we'll say that we're measuring the transformation entropy. So the vertical axis is entropy. And the independent variables are temperature and pressure, pressure and temperature. All right. So at a given pressure and temperature, we can visualize the transformation entropy for a transformation between phase alpha and phase beta.
So we're going to pick a given temperature and pressure-- that is, a point down here on the PT plane-- and we're going to draw a vertical line and see where it intercepts these surfaces. We come up from the plane. And at some point, we're going to cut that state function surface and then keep coming. We'll cut that state function surface and keep going. So at a given point of pressure and temperature, we now have a visualization of the transformation entropy. It's exactly the vertical distance between these two surfaces. So this is the transformation entropy between phase alpha and beta at a given pressure and temperature. And using this visualization-- at least, in your mind's eye-- you can see that this transformation quantity is a function of pressure and temperature. Because as I move this point around in pressure and temperature, the vertical distance between the surfaces might change. So this is an illustration for an isobaric, isothermal transformation, and we're illustrating this for entropy. But in thermodynamics, we have many different transformation quantities that we keep track of. In material science, our most common independent variables are pressure and temperature, because those are often the easiest for us to regulate in the laboratory. But for any transformation quantity, you can at least imagine, if not draw out on a piece of paper, state function surfaces corresponding to the quantity that you're trying to measure, independent variables corresponding to the variables that you're regulating, and a vertical distance, which is a function of those independent variables, that measures the transformation quantity for the transformation between two different phases.
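In symbols, the "vertical distance between the two surfaces" picture described above amounts to (a sketch; the superscripts name the two phases):

```latex
\Delta S^{\alpha \to \beta}(T, P) \;=\; S^{\beta}(T, P) \;-\; S^{\alpha}(T, P)
```

which makes it explicit that the transformation entropy is itself a function of the pressure and temperature at which the transformation takes place.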
Lecture_32_Case_Study_Reacting_Multicomponent_Multiphase_Systems.txt
[SQUEAKING] [RUSTLING] [CLICKING] RAFAEL JARAMILLO: All right. Good morning, 020. Today is the last lecture of technical content of the semester. So that's kind of exciting. And it's the last time we're going to use this little piece of graph paper, which is not particularly exciting. But, anyway, next week, we're going to change gears a little bit and do our second social and personal hour on Monday. And I'll send around reading for that shortly. And on Wednesday, we're going to do a Zoom game show. So I hope we have a good attendance for that because we're going to try to make it as fun as we can over Zoom. It won't be as much fun as in person. But, still. Anyway, that's next week. Today, we're going to work an example which pulls together some concepts which we've been using the last couple of lectures. So this is what we're going to work on today-- reacting systems that have gases and condensed phases. So we spent a couple lectures in the middle of the term on reacting systems with just gases. And then in the last couple of lectures, we've talked about oxidation, which are reacting systems with oxygen gas and condensed phases-- that is, metals and oxides. So we're going to just generalize that a little bit. I shouldn't say generalize because we're going to work a specific example. We're going to move away from oxides and do an example problem which I really think ties together a lot of concepts. And it's a silicon-carbide problem. And before I move in here, this is not-- everything we're going to do today is not necessarily particular to silicon carbide. It's particular to silicon carbide in the details, of course. But the approach is general for a lot of materials. So what I'm going to do is show you the phase diagram of silicon carbide and briefly talk about why anyone would care about silicon carbide. And then we'll start working on that problem. So let's see. Here we go. The formation of silicon carbide.
OK, before I label the solid phases, somebody, how many solid phases are there in this diagram? You should be able to read a diagram like this by now. AUDIENCE: Three? RAFAEL JARAMILLO: Three solid phases. Silicon carbide is a line compound here, which is sort of giving you a hint. So silicon, silicon carbide, and carbon. So three solid phases. Thank you. And one liquid phase. And, all right, so here's a peritectic reaction, right? This is a peritectic reaction. It has that T shape. It's liquid-- a silicon, carbon-containing melt-- and graphite reacting to form silicon carbide. So silicon carbide forms out of a peritectic. Silicon of course melts at 1,414 C. Carbon doesn't melt actually. It sublimes. And it's way off the scale, at 4,000 degrees or something like that. And silicon carbide decomposes. It doesn't melt. It decomposes, at very high temperature. So you can see the silicon carbide here is a refractory material. It's a refractory ceramic material. So it's very hard. It's carborundum. As a mineral, it's also known as moissanite. So it is a wide-bandgap semiconductor and an abrasive. And I learned a couple of things about it. I knew it's an abrasive. A lot of times, if you go and buy grinding wheels and such, or high-performance mechanical abrasives, or ultra-hard components used for brake pads and things like that, it'll be silicon-carbide containing. That I knew. Silicon carbide is a wide-bandgap semiconductor. And it may be the future of power electronics. So it's important in my area of expertise for that reason. I also learned today that the first ever LED was made with silicon carbide, about 110 years ago. And it was noticed that silicon carbide can be made into an LED. Of course it's not the LED wide-bandgap material of choice. Does anyone know what wide-bandgap semiconductor is providing most of the light quite possibly in the room you're sitting right now? Nobody knows. It's a Nobel Prize a couple of years ago.
Gallium nitride. So another wide-bandgap compound semiconductor, not a refractory material. It falls apart more easily than silicon carbide. But, anyway, there's lots of exciting reasons to know about-- you should know about silicon carbide. And I think you should care about it, too, because it's important for a number of reasons. And it's got an interesting history. Enough about that. Let's talk about a problem. I'm going to give you a problem. This is like a word problem you might find on a test or something like that. Place pieces of silicon carbide and carbon-- and for now, I'll label that graphite but just keep in mind it's carbon-- into a vacuum oven. A vacuum oven is an oven that can operate under vacuum. They're all over campus, if you don't know already. And now I'm going to do it. I'm going to put them in the oven. And I'm going to pull vacuum. That means evacuate the oven. And we're going to pull this vacuum to approximately 0 pascals. So there's no absolute vacuum. But on this scale, your gauge is reading 0. We're going to seal it. Pull vacuum. Seal. And then heat to 1,700 C. You observe two things. Silicon carbide and carbon are both still present. You also observe that the pressure is 4.0 pascals. So let me draw this visualization here of the experiment. And what you observe is a vacuum oven. It's got nice, insulated walls, a nice vacuum seal. A lot of money for a vacuum furnace that can go to 1,700 C. But there's a number of them on campus. And, OK, so here's silicon carbide. It's a chunk of silicon carbide sitting there in the vacuum furnace. And a chunk of graphite-- it should be gray, right? Why not? Graphite, a chunk of graphite. Yeah. And you've got some gauges on there. Often, gauges in block diagrams are indicated like that. So you've got a gauge saying that pressure equals 4.0 pascal. And you've got another gauge reading the temperature equals 1,700 degrees C. This is the problem statement. Are there any questions about the problem statement?
I haven't asked the question yet. This is a setup. Will somebody express this observation in thermodynamics language for me? What does this observation mean? AUDIENCE: There are multiple phases. RAFAEL JARAMILLO: Right. And the phases are? AUDIENCE: Silicon carbide, and carbon. RAFAEL JARAMILLO: Oh, right. I'm sorry. Yes. That's right. The phases are in-- AUDIENCE: (WHISPERS) Equilibrium. RAFAEL JARAMILLO: Someone just whispered. AUDIENCE: Equilibrium? RAFAEL JARAMILLO: Equilibrium, right? The phases are in equilibrium. You heat it up. And you observe that they're present. This is a class in equilibrium thermodynamics. You can infer from context that these things are both present. So they are coexisting in equilibrium, right? Multiple condensed phases coexisting in equilibrium. What's the third phase that's present? You can't unsee it. This is important. You need to be able to interpret these observations in thermodynamics language if you're going to solve thermodynamics problem. So we've got two condensed phases in equilibrium. There's a third phase. Would somebody tell me what is the third phase? AUDIENCE: Gas phase. RAFAEL JARAMILLO: Gas. Right. There's gas. There's a gas phase there. You can't see it with your eyes. But it's there. Your pressure gauge tells you it's there. There's some pressure. There's some gas pressure. Good. Thank you. All right, here's a question. Question, what is the Gibbs free energy of formation of silicon carbide at 1,700 degrees C. That is silicon in its solid phase plus carbon in its graphite phase going to silicon carbide. And I'm asking for a delta formation. And I'm going to give you some reference data. You need a little bit more information to answer that question. Reference data. You know or you measure that the saturation vapor pressures at 1,700 degrees C are-- this is the data you have. P silicon saturation, 1,700 degrees C equals 4.4 pascals. And P carbon saturation at 1,700 degrees C, what do you think it is? 
Is it larger than silicon? Or is it smaller than silicon? The saturation vapor pressure of carbon. You might ask yourself, which is more volatile? I'll give you a hint. What are the saturation vapor pressures of these refractory materials? Are they higher than that of silicon? Or are they lower than that of silicon? Think about what you know about bonding. Think about what you know about refractory materials. Ask yourselves, what's more volatile? AUDIENCE: So would the saturation vapor pressure of carbon be 0, then? RAFAEL JARAMILLO: The saturation vapor pressure of nothing-- there's no component or compound for which the saturation vapor pressure is 0, right? Think back to the unary phase diagrams. It's always finite. It's something. It exists. But how does it compare to that of silicon? AUDIENCE: It would be much smaller than that. RAFAEL JARAMILLO: Much smaller. Right. Exactly. Much, much less than P sat of silicon. That's right. And this should be something that you just intuitively understand, which is that more strongly bound compounds, refractory compounds, are less volatile and therefore have lower vapor pressures. They also have higher melting points, right? You know that carbon is very strongly covalently bonded to itself. Silicon is strongly but less so covalently bonded to itself. Germanium is even more volatile. It falls apart more easily. And by the time you get to tin and lead, you have really high vapor pressure, low melting point materials. So periodic table knowledge there. All right, both of these are below about 10 to the minus 5 pascals. So you have these two materials that have very, very low saturation vapor pressure and one that has a reasonable saturation vapor pressure. That's the reference data. OK, here's the problem statement. What is the Gibbs free energy of formation of silicon carbide? So what I'm going to do is I'm going to start by-- I'm not going to start by asking how do you solve this problem.
I'm going to start just by writing down an expression for the thing we want to solve for, just manipulating terms, trying to think out loud here on the page. So let's see. Delta formation of silicon carbide-- well, just by definition you know what that is. It's the Gibbs free energy of silicon carbide minus the Gibbs free energy of silicon in its reference state, its pure state, minus the Gibbs free energy of carbon in its reference state. And the stoichiometry's really easy. 1 mole of silicon and 1 mole of carbon forms 1 mole of silicon carbide. So this expression is very, very easy. All right, well, for 1 mole of silicon carbide, G of silicon carbide equals-- this is just partial molar properties. So this we know. And it's just the chemical potential of silicon plus the chemical potential of carbon-- right? This we know-- in silicon carbide, in silicon carbide. So it's just partial molar properties and stuff we know. And I'm just going to plug this in. And what we get is the free energy of formation of silicon carbide expressed in terms of the chemical potentials. So it's the chemical potential of silicon in silicon carbide plus the chemical potential of carbon in silicon carbide minus the chemical potential of silicon in its reference state minus the chemical potential of carbon in its reference state. I'm going to do one more thing. I'm just going to rearrange. I'm going to say it's silicon in silicon carbide minus silicon in its reference state, and the chemical potential of carbon in silicon carbide minus chemical potential of carbon in its reference state. So I have the free energy of formation expressed in terms of differences in chemical potential. And you can see how this is similar to solution modeling, right? I have delta mus. This is a delta mu. This is a delta mu. It's just the stoichiometry is fixed. And so we don't have to worry about composition variables anymore. We just have one of these and one of these.
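The rearrangement just described, written compactly (a sketch; the circle superscript denotes the pure reference state):

```latex
\Delta G_f^{\mathrm{SiC}}
  = \left(\mu_{\mathrm{Si}}^{\mathrm{in\;SiC}} - \mu_{\mathrm{Si}}^{\circ}\right)
  + \left(\mu_{\mathrm{C}}^{\mathrm{in\;SiC}} - \mu_{\mathrm{C}}^{\circ}\right)
```

Each parenthesized term is one of the "delta mus" the lecture refers to, so finding the formation energy reduces to evaluating two chemical potential differences.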
All right, and I'm going to I'm going to tackle these one at a time. So I'm going to tackle this one first. So I'll just label it 1. And I'm going to tackle this one second. I'm going to label it 2. So if we can find equations for these quantities, then we know our answer. So I'm going to take these one at a time. And then I'm going to start asking you folks some questions. So look alert. Let's take 1 first. 1. Here, what I want to do-- sorry. One of the things I want to do with this working an example thing is I want to split back and forth between equations and word problems, word problems and equations. You have to be nimble. And you have to know the material well enough to do that. You know by now that's an important skill. So how does the chemical potential of carbon differ from its pure reference state? All right, this sentence is this expression, all right? This expression is the difference in chemical potential for carbon in between this compound and its reference state. That's what that expresses, right? That's also what this expresses. How does the chemical potential of carbon differ from its pure reference state? So I'll ask you. Somebody, please volunteer and answer. How does the chemical potential of carbon differ from its pure reference state? And while you're thinking about that, I will just return to my observation. AUDIENCE: So I guess this is kind of like the P sat where they're in equilibrium. It's in equilibrium with the pure phase. RAFAEL JARAMILLO: So, OK. Good. And so-- sorry, I'm talking over you, And so, Priya, your answer is? AUDIENCE: It should be the same. RAFAEL JARAMILLO: Right. Right. And that was you, right? You're hidden. But, yeah. So, OK. AUDIENCE: Yeah. Sorry. RAFAEL JARAMILLO: It doesn't. It doesn't It doesn't differ. Why? As pointed out, pure carbon is in equilibrium with silicon carbide. That's an observation. 
What that means is that the chemical potential of carbon and silicon carbide is the same as the chemical potential of carbon in that graphite phase, which is sitting there, which is, by definition, the chemical potential of carbon in its reference state. Good. And let me remind you why that is. The type of thing we've been doing all semester is thinking about the conditions for equilibrium, the criteria for equilibrium, if you like. Here's graphite. Here's silicon carbide. Well, there's a gas phase, too, right? The carbon atoms are exchanging. I'm going to draw some carbon atoms in the gas. The carbon atoms are exchanging freely, right? That's an unconstrained internal process. They're exchanging freely between silicon carbide and carbon-- well, graphite-- via vapor phase, via the vapor phase. When we had conservation of mass with mass exchange between two phases, that led to the condition for chemical equilibrium. Chemical equilibrium. Right? That was about the middle of the term that we arrived at that. So chemical equilibrium, right? That's what I wrote down here. OK, So that's great. That means that-- what is this term equal to? What is the term in the expression in the parentheses equal to? AUDIENCE: 0? RAFAEL JARAMILLO: 0. Right. Exactly. There's no difference in chemical potential for carbon between the compound and the pure state, right? Thank you, Ian. OK, great. So half the term, half the right-hand side of the equation is put to bed. Let's put the other half of the right-hand side to bed. Let's take care of part 2, which is, expressing in words, part 2 is the difference in chemical potential for silicon between the compound and its pure state. So I'll write that out in words. How does the chemical potential of silicon differ from its pure reference state? OK, so that's why that-- OK. OK, so this is a little harder. I'll walk you through this. Are there any initial thoughts on how to go at this? Anyone want to take a gamble? Let me give you a hint. 
OK, if solid silicon, or I should say liquid because we're above the melting point, condensed-- I said solid in the lecture notes. So that's a typo. If condensed silicon were present, the partial pressure, P silicon, would be equal to-- what would the partial pressure of silicon be equal to if condensed silicon were present? If condensed silicon were present, what would the partial pressure of silicon be equal to? AUDIENCE: The saturation vapor pressure. RAFAEL JARAMILLO: Right. Thank you. OK. So that's a hypothetical because there's no condensed silicon there. But it's a starting point if condensed silicon-- OK. So therefore, chemical potential of silicon minus chemical potential of silicon in its reference state equals-- this is a bit of an a-ha thing. You have to see the connection here between different parts of the semester. The chemical potential of silicon in this system relative to what it would be if condensed silicon were present. If condensed silicon were present, the chemical potential would be equal to its reference state chemical potential. Also, if condensed silicon were present, the pressure would be equal to the saturation pressure. OK, I'll just give it. Therefore, the chemical potential difference of silicon relative to the reference state is RT log P silicon over P silicon sat. Why is this? This is recognizing that silicon in silicon carbide is in equilibrium with silicon in the gas phase. All right, here's another. You have to imagine this. Silicon carbide and there's silicon vapor. You can't see it. But it's there. And you know that the silicon vapor is in equilibrium with the silicon carbide. Those components are in equilibrium because they can freely exchange. So they're in chemical equilibrium. So whatever the chemical potential of this vapor phase is, it's the same as the chemical potential of silicon in this compound. And you know how to calculate the chemical potential of a vapor phase.
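The "a-ha" step just described is the ideal-gas expression for the chemical potential of a vapor species, referenced to the hypothetical saturated vapor over pure condensed silicon (assuming the vapor behaves ideally):

```latex
\mu_{\mathrm{Si}} - \mu^{\circ}_{\mathrm{Si}}
  = RT \ln\!\left(\frac{P_{\mathrm{Si}}}{P^{\mathrm{sat}}_{\mathrm{Si}}}\right)
```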
It's RT log partial pressure over the reference partial pressure. OK, that was cool. We're not done yet. What is the partial pressure of silicon? I'll remind you. The total pressure now equals 4.0 pascals. The total pressure equals 4 pascals, right? Gas phase, what are the gas phase components? What sort of molecules am I finding in the gas phase? Told you I have silicon. And what else is there? AUDIENCE: Silicon? Oh. RAFAEL JARAMILLO: Yep. Thanks for that. What else? AUDIENCE: Probably nitrogen, oxygen if it's from the general air. RAFAEL JARAMILLO: No, because we pulled vacuum before we did this. That was a key thing. So there's nothing in there except what we put in there, which was silicon carbide and carbon. So what other components are in the gas? In general, what else could be there? AUDIENCE: Carbon? RAFAEL JARAMILLO: Sorry? AUDIENCE: Carbon. RAFAEL JARAMILLO: Yeah. There's carbon. And we could allow that there's maybe some silicon carbide molecules. Turns out, doesn't matter whether there's silicon carbide molecules, right? We know that carbon and silicon carbide are saturated, right? We know they're saturated. That's just my observation because the pure components are there. So their vapor pressures are saturated. They're saturated. But their saturation vapor pressures are negligible because they're refractory materials. We talked about that 20 minutes ago. OK, so what does that mean? Somebody? AUDIENCE: All right, I have a quick question if that's all right. RAFAEL JARAMILLO: Yeah. Go ahead. Please. AUDIENCE: Earlier, I think we talked about how the carbon wouldn't vaporize nearly as much as the silicon. So would it make sense to ignore it in future calculations? Or do we still need to take it into account even though it's less likely to vaporize? RAFAEL JARAMILLO: So, Sara, your question is exactly my question, which is this. So what is the silicon partial pressure?
And if it's not clear right now why your question is my question, I'll make that clear. But, somebody, please. Here's your observations and some facts and things that you know. Would somebody make a guess for the silicon partial pressure? AUDIENCE: It's just equal to the total pressure. RAFAEL JARAMILLO: Thank you. Yes. That's exactly right. The silicon partial pressure is basically the same as the total pressure. In other words, the vapor is essentially pure silicon. That's the second big intellectual connection which you need to make to solve this problem. The first was the change in chemical potential for gases when they're rarefied. The second is, well, it's a more pragmatic thing, right? You have all these things in the gas phase. But I know that the vapor pressures of carbon and silicon carbide are really small, unmeasurable on my gauge, definitely not making a contribution to that 4.0 pascals, maybe at, like, the sixth decimal point or something. But who cares? It's 4.0 pascals. So my gauge only has two digits of precision anyway. So I can infer that that pressure is totally silicon. So now I plug. And I chug. The chemical potential of silicon in this system minus the chemical potential of silicon for condensed pure silicon, which is RT log P silicon over P silicon sat, is just RT log 4.0 over 4.4. So I'm putting it all together. Therefore, delta G of formation of silicon carbide-- it's two terms, chemical potential of silicon minus chemical potential of silicon in its reference state, and the chemical potential of carbon minus the chemical potential of carbon in its reference state. And that's 0. And this is RT log 4.0 over 4.4, which happens to equal minus 1,563 joules per mole. That's the answer. So I intentionally left a good amount of time to talk about this problem. But I want to start with a strategy for solving these types of problems and to solving all thermo problems, actually.
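As a sanity check, the arithmetic of the final answer can be reproduced numerically. The process temperature is not stated in this excerpt; T = 1973 K is an assumed value chosen here because it reproduces the quoted result, so treat it as illustrative:

```python
import math

R = 8.314         # J/(mol K), gas constant
T = 1973.0        # K, assumed process temperature (not stated in this excerpt)
P_Si = 4.0        # Pa, measured total pressure, essentially all silicon vapor
P_Si_sat = 4.4    # Pa, saturation vapor pressure of pure condensed silicon

# The carbon term is zero: graphite is present, so mu_C in SiC
# equals mu_C in its reference state. Only the silicon term survives.
dG_f = R * T * math.log(P_Si / P_Si_sat)   # J/mol
print(round(dG_f))  # -1563 J/mol, matching the lecture's answer
```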
The key to solving such problems-- this is sort of the intellectual key-- is to ask, who is in equilibrium with whom? This is the key to getting good at these problems. What do I mean by that? Let's see. We have vapor. We have vapor, carbon, silicon, silicon carbide. We have condensed silicon carbide. We have condensed graphite. And the components are freely exchanging. All right, so this here, we have these components freely exchanging, right? Exchanging mass. Back to the basics, right? Back when we developed chemical equilibrium as opposed to vapor, silicon. Imagine silicon carbide, silicon carbide, silicon carbide. Those are in equilibrium. But what about solid silicon? Not in equilibrium. Not in equilibrium. Not in equilibrium with condensed silicon. What does that mean? It means that the vapor pressure is below saturation. And because when I saturate the vapor pressure of a thing, that thing starts condensing. It starts condensing out of the air. It starts raining silicon. If the vapor pressure reached the saturation vapor pressure of silicon, it would start raining silicon in that system. That didn't happen. So the vapor pressure is below. OK, so that was a lot of stuff all wrapped up into one problem. Good that I have 10 minutes to talk about it. So I'm happy to take questions about this or anything else. I will mention one last thing on course scheduling. Wednesday's office hours, I'm planning to do a really informal semester in review. So hopefully, that'll be useful. But back to this problem. Questions? Confusion? AUDIENCE: I have a question about something very early on. RAFAEL JARAMILLO: Mm-hmm? AUDIENCE: So how is it that we knew that the vapor pressure of carbon was a lot less than silicon? Was that reading from the graph or? RAFAEL JARAMILLO: No, no, no. So that was not-- so there's two things. First of all, I gave you reference data. So that was data given with the problem. So one way is that it was just in the problem statement. So you see it. 
It's fine. But it is worth asking yourself, would you have been able to infer that if that hadn't been given in the problem? And, yeah. You might guess that. You might know that silicon carbide and graphite are refractory materials, right? You might know that they're very tightly bound solids with a huge cohesion energy. It's very hard to break them up. And the phase diagram here tells you that silicon is a lot more volatile than either silicon carbide or carbon. It's got a much lower melting point, right? So you might just guess. And I wouldn't ask you to do that. But you might be able to infer that the saturation vapor pressure of silicon carbide and carbon is very low. And if you remember, the saturation vapor pressure, it's proportional to e to the minus delta H of formation. You remember that from your phase diagram. So a material that has a very large energy of formation has a very small saturation vapor pressure. So maybe you know that silicon carbide and carbon are very happily-bonded solids. Again, I wouldn't ask you to necessarily come up with that in the context of a big problem like this. But it could be done. It would be an educated guess. Does that help? AUDIENCE: Yeah. That helps a lot. Thanks. RAFAEL JARAMILLO: Yeah. I mean, volatile compounds are compounds that you can smell, right? So you can't smell silicon. You can't-- I mean, maybe a dog can. But dog noses are amazing. But you can't smell silicon. You can't smell these things. And the amount that you can smell something corresponds to its volatility, right? You know this when you're dealing with solvents and paint solvents and stuff. Very volatile compounds are very smelly compounds because they have a large vapor pressure. They're just in the air. So things like that. Can dogs smell silicon? That is the question that everyone is dying to know. All right, what else? What else? No other questions? Well, maybe that was clear. All right.
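The exponential dependence just invoked can be written out. In Clausius-Clapeyron form, the saturation pressure of a solid falls off exponentially in the enthalpy cost of moving atoms into the vapor, which is large for strongly cohesive, refractory solids (and closely tied to the formation energy the lecture invokes):

```latex
P^{\mathrm{sat}}(T) \;\propto\; \exp\!\left(-\frac{\Delta H_{\mathrm{sub}}}{RT}\right)
```

Here Delta H sub is the enthalpy of sublimation; a large Delta H sub means a negligible P sat, which is exactly the situation for carbon and silicon carbide in this problem.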
If there are no other questions, we can get out 6 minutes early. All right, well, thank you. I will see everyone on Monday. And again, homework due Sunday at 8:00 PM.
MIT 3.020 Thermodynamics of Materials, Spring 2021
Lecture 15: Introduction to Solutions, General Case
[SQUEAKING] [RUSTLING] [CLICKING] RAFAEL JARAMILLO: So today's lecture is the start of binary phase diagrams, which are going to occupy most of the rest of the class, meaning most of the rest of the semester. So this topic is comprehensively treated in De Hoff. But I have some reading in Callister, not McAllister, it's Callister, associated with today's lecture. So we just finished something on reacting chemical systems. Please don't fall behind on the reading. Turn to Callister and do that reading as well. Callister is a little bit more wordy than De Hoff, a lot nicer pictures. De Hoff is thorough and august. And Callister is a little bit more like those textbooks from high school with lots of pictures in them. So Callister is a little bit easier to start the topic. And I like some of the examples that Callister gives. So that reading, like with the Denbigh reading, is posted on Canvas. So binary phase diagrams, let's just start by drawing some and talking about the primary features. What are binary phase diagrams? Somebody tell me. What is a binary phase diagram? What is a binary phase diagram? AUDIENCE: A phase diagram for a system with two components. RAFAEL JARAMILLO: Right, thank you. So it is a phase diagram for a system with two components. And normally, it's plotted like this with temperature on the vertical axis and a mole fraction on the x-axis. So if we have two components, we have x1 and x2. And we know that x1 plus x2 has to be 1 by definition. And so you have two composition variables. But only one of them is independent. And so we normally plot something like this. So the mole fraction of two will go from 0 to 1. That means on the left hand side here, we're talking about pure-- pure component 1. And on the right hand side, we have pure component 2. So it's a diagram of equilibrium for temperature and composition. Here's a typical thing that you have. You might have a solution phase.
So you might have a large region where 1 and 2 mix. And often, you'll encounter something like a solubility limit. The solubility limit-- and when you hit the solubility limit, you enter two phase regions. Two phase regions are often indicated like that, with something that we call tielines. So those horizontal lines are tielines. And what tie lines do is they connect the two phases that can coexist at equilibrium. So let's make this specific. Let's pretend that this is sugar water, that this is a water and sugar solution. And so pure component one is, let's say, water. So let's say water. And pure component two is, let's say, sugar. All right, so what this region here is is sugar water. That's where you can dissolve sugar and water. And you often see that the solubility limits increase as you increase temperature. That's because the process of making a solution increases the entropy, increases the mixed-uppedness of the universe. We saw that in the very first lecture. And so as you go to a higher temperature, entropy becomes more important. We know that because Gibbs equals h minus ts. So as you go to higher, temperature solution phases tend to broaden. What's the solubility limit? What does that mean? Imagine you're actually making sugar water and you start off with pure water. You start off with pure water. And you start adding sugar. You're adding sugar. You're moving to the right on this phase diagram. What does that mean you hit the solubility limit? AUDIENCE: It'll start to precipitate out. RAFAEL JARAMILLO: Start to precipitate. So when we hit this composition, we can no longer add any more sugar to the solution. Any sugar that we add in addition will precipitate out as solid. So when that happens, I end up with two phases in coexistence, sugar water and solid sugar. That's the precipitate, stuff on the right. And as I add more sugar to the system, more and more sugar precipitates out while the composition of the solution doesn't change. 
So in this region, the composition of the solution changes continuously with the overall system composition. But in this region, the composition of the solution is fixed, even as the overall system composition can vary because I have two phases. So that's the meaning of two phase regions. And the tielines connect the phases that coexist. OK, so that's just the basics. So let's do an example here. Let's look at an example. Crystal growth by supersaturation. And this is-- if you've ever made rock candy, in fact, I've got a-- I've got a big one in my-- I'm at home. I've got a big one in my son's room. So I might take a walk and grab it in a minute. So let's see. He's at school. Otherwise, he'd get mad at me because I'd be taking his rock candy. Don't worry, I'll put it back. So there's the same phase diagram. Here's x2. We'll call this the solvent, keep the solvent on the right. Solute on the-- solvent on the left, solute on the right. And here's temperature. And here is a two phase region with a solubility limit. And here's what I'm going to do. Has anyone made rock candy or do you remember? Somebody tell me, how do you make rock candy? What do you do first? Well, let's do it this way. Start-- AUDIENCE: Start like with hot syrup. RAFAEL JARAMILLO: Hot syrup, good. So let's just get there to hot syrup. Let's start with a room temperature solution. Let's start-- let's start here. One, we're going to start there. And then what we're going to do is we're going to heat up. We normally do this on the stove. So let's see, we're going to heat up two. Now, we heat it up. And then, we're going to add more solute. Add more solute. So what we're going to do is we're going to make it more syrupy. I think that was Ali. So like Ali said, we're going to make this more syrupy. We're adding more solute. And what's the final thing we do? Maybe somebody new who hasn't contributed yet? AUDIENCE: You cool it down. RAFAEL JARAMILLO: Yeah, we're going to cool down into supersaturation. 
So we're going to cool down into supersaturation. That's step four. And then what's the final thing we do? We wait for spontaneous crystallization. I'll be right back. What I'm showing you is a piece of-- a large piece of rock candy. And how did we make this? We took water, and then we made syrup. And then we heat it up on the stove. And made the thing even syrupier until it was really viscous on the stove. And then we cooled it down. And we poured the-- poured the syrupy solution into-- not a beaker, because we're at home, a glass-- a glass dish. And then we waited. And over the course of maybe a day, this thing grew. And you see lots of little crystallites here. It's hard to see over the camera. And if we were really making rock candy, we would have put some strings in the solution. They grow rock candy on strings. That's how you normally see it. All right, so that's one way to use a binary phase diagram. So let's see, we're going to move on to do some formalism. I want to stop now and take questions on what we've covered so far. Spontaneous crystallization, it means you're going to end up with sugar water and the crystal coexisting. AUDIENCE: Can we get the energy from this part? Like if we get like-- if we have the [INAUDIBLE] information? RAFAEL JARAMILLO: Can you get an energy from this if you have-- can you repeat one more time, if you have something information? AUDIENCE: Like if I have the delta h of max in at 2. RAFAEL JARAMILLO: Can you get energy from this? I mean, it's an exothermic process, crystallization. So this was an instant hot pack. This was the instant hot pack. This whole class is coming back to the thing we did on day one with the hot and cold pack. So this is an instant hot pack situation. You have spontaneous exothermic process. It's driven by enthalpy, not by entropy. So there is a heat of solidification. And that heat of solidification is not something we worry about so much when we're making rock candy. 
But if you're growing semiconductor boules or any other solidification process that you care about quite a lot, then you have to engineer that. Any other questions on these basics? This is introducing binary phase diagrams through, I think, a very, very gentle example. Of course, they're going to get a lot hairier than this. But I want to make sure people have seen these basic concepts of solutions, two phase regions, tielines, and how we can think about processing them. I'll move on. So naturally because this is thermo, a lot of what we're going to do boils down to bookkeeping. So what we're going to do now is establish some necessary conventions. Let me pause for a minute and call something out. And I hope I remember to do this again at the end of class. In the first class, I mentioned that we were going to be using the software Thermo-Calc. And so that starts on the P set that's released today. So if you haven't yet, please do go either look at the course info document or just go directly to the Thermo-Calc website. And download and install their educational version for students. It's free. In our experience, every year, there's maybe one or two students that have a little bit of trouble installing it. That's always been with Macs. But almost everybody, including people with Macs, are able to get this running. In fact, I actually don't know anybody who wasn't able to get it running, which is to say it's a fairly well-designed software installation process because normally, there's always problems. So get this running on your computer before you need it on the P set; you could do it today or tomorrow. It should take five to eight minutes at most. It's not a huge program. Educational version of Thermo-Calc. All right now, back to regularly scheduled programming. So now, we're going to consider the process of making solutions. And we're going to do this really generically. We have some component A. It has a volume.
We're going to spend time on notation here because the notation, when we're making solutions, it gets hairy because we have mixtures. And we have components. And we have reference phases and everything like this. So it becomes really-- again, it's all about the bookkeeping. So here, we have some volume of A, component A. It has a volume. It has an entropy. It has a Gibbs free energy, and so forth. The subscripts label the component. So maybe it's water, for instance. Maybe that's water. The dash reminds you that it's an extensive quantity. So that's not volume per mole, or entropy per mole, or Gibbs per mole. It's volume, entropy, and Gibbs. And the circle is going to remind you that it's a reference state, which when you're making solutions, means the pure phase, the pure state. So we have pure A. And we're going to do something similar with B. So we have a container of B. It has its own volume, its own entropy, its own Gibbs free energy, and so on. And now, we're going to mix them. And you get something which is new. You don't get A. You don't get B. You get A plus B. Let's do some bookkeeping. We're going to introduce quantities of mixing, quantities of mixing. So let's see. The volume of the mixed system, we're going to set it up this way. It's the volume of A plus the volume of B in their reference states plus something we're going to call the volume of mixing. This is free. We can do this. This is no problem. This doesn't tell you anything about the science. This is just a bookkeeping scheme. I see now the top of my screen is Sarah, Josh, and Ali. So let's say that the amount of money Ali has in his pocket right now is the amount of money that Sarah has in her pocket plus the amount of money that Josh has in his pocket plus the difference. Which doesn't tell you anything about how much money you have in your pockets. It's just a bookkeeping scheme. That's all this is.
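Written out, the bookkeeping scheme above is (with primes standing in for the lecture's dashes over extensive quantities, and circles marking the pure reference states):

```latex
V' = V'^{\,\circ}_{A} + V'^{\,\circ}_{B} + \Delta V_{\mathrm{mix}}, \qquad
S' = S'^{\,\circ}_{A} + S'^{\,\circ}_{B} + \Delta S_{\mathrm{mix}}, \qquad
G' = G'^{\,\circ}_{A} + G'^{\,\circ}_{B} + \Delta G_{\mathrm{mix}}
```

The mixing terms are defined by these equations; all of the physics of the solution lives in them.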
So we're going to do something similar with all of the other extensive quantities. AUDIENCE: Could the delta v mixture value be a negative number? RAFAEL JARAMILLO: It can be, it can be, sure. What's an example of when it's a negative number. I've got to think about this. AUDIENCE: Like maybe like water and acid. They usually shrink when you mix them, it getting smaller than it was. RAFAEL JARAMILLO: Well I love the example. But I didn't quite hear it. What exactly was it that shrinks in water? AUDIENCE: It's like water and acid. RAFAEL JARAMILLO: Oh, water and acid. Sure. Or sometimes it's larger, sometimes it's smaller. You shouldn't think that, oh, it should be bigger because we mixed it. So the system got bigger. It's a volume of A plus the volume of B, however it changed. So if there's chemical interactions that cause the molecules to pull more closely together than they do when they're unmixed, then that volume term is negative. If there are chemical interactions that cause a thing to swell, that's the case with glass formers, then that becomes positive. So let me just make this clear. We have this structure here. These are the reference state-- reference states of pure components. And these are the mixing terms. And the point of having these mixing terms is because when you make solutions, the whole is not simply the sum of the parts. That's why we have these mixing terms. There are solutions all around us. So there are solutions all around us. We know this, right? Liquid phase, this is where we started. And when people colloquially say solutions, normally, they mean a liquid, liquid phase. So ocean water, syrup, gasoline, et cetera. Gas phase, of course, air is a solution. But in material science, in MSC, we spend most of our professional lives working on solid phase solutions. So this is less colloquial. Normally, I think we stop somebody on the street and hold up a solid chunk of something and say, is this a solution? And they would say no, it's a solid. 
But everything we're doing applies definitely to solids. And that's actually where we get most of our technological impact. So would somebody like to name for me some solid phases that matter to you, or everyday life, or cool, extreme examples, or anything at all? AUDIENCE: Carbon steel. RAFAEL JARAMILLO: Steel. The most basic steel is carbon steel. It's iron and carbon. Iron is very, very soft, ductile. Rusts easily, not good for structure. You add a little bit of carbon, and you get steel. Good. Other metal alloys. This is-- if you go back millennia, this is where material science started, brass, bronze. These are kind of old-fashioned metal alloys. Even though steel, there's nothing old-fashioned about it as practiced today. We also have superalloys. So now, we're getting really into late 20th century stuff. So for example, cutting edge materials research enables things like jet turbine blades. How can you have those big turbo fans in modern jet engines? They're withstanding enormous stresses and heat. The bigger you make them, the more efficient the engine becomes. But in order to make them big, you need a material that can withstand that. And so these superalloys are developed for that application. What else? What about lithium ion battery electrodes? Nobel Prize in chemistry from two years ago for lithium ion battery development, largely, almost 80% of the work and intellectual input on alloy design that is designing solid phase solutions for those electrodes. Semiconductors, semiconductor is pure silicon with nothing added, is useless. It doesn't do anything for us. We alloy it. We make solid phase solutions to change its properties and enable electronic circuits. Any other examples? This is enough, I think. But this is material science. You take something, you add a little bit of something else, and you dramatically change the properties, make it more useful. Let's take three minutes to talk about solutions and reactions. Why? 
Because we just finished reactions. And because many of you have worked with reactions in introductory chemistry classes. So you're used to something like this. Take two components, they react to make a third. What's the difference? This is a discrete process, a discrete process with fixed reactants and products. What do I mean by that? Let's say we have carbon and oxygen reacting to make CO2, we're OK with that, right? We don't also have to consider carbon plus O 0.71, just choosing a random number of moles of oxygen reacting to make some molecule that's CO 0.71. Because on a molecular level, there's no such thing as 0.71 of an atom. So so we don't worry about composition being a continuous variable when we're dealing with reactions. Why? Because the individual chemical components are discrete. And they undergo substantial atomic scale changes. So really, there's some bonding going on, bonds made and broken. And we're going to contrast this with solutions. Solutions, we talk about this, A sub x plus B sub 1 minus x forming a solution of A of x B 1 minus x. This is a continuous process. And this is a qualitative thing. But you can say that the individual molecular components remain recognizable on an atomic scale. So I'm not going to say too much more about solutions versus reactions until we work on reacting solid gas systems in a couple of weeks. But I do want to stop and point this out because we just did reactions. And everything I'm about to tell you in the next couple of weeks could be analyzed using the mechanism of reactions. But it would be inconvenient to do so. Just as everything which we did in the last two lectures could have been done using the formalism of solutions. But it would have been inconvenient to do so. So we're doing solution modeling because it's a convenient way to treat systems that are solutions. Nature doesn't care. Nature doesn't care how we write this down, what I call it. 
So it's just about formalism and choosing the right formalism to make our lives easier. All right, let's move on. Let's talk about solution modeling. This is basically a postulate. This is about making a postulate. So we know that to make predictions, we're in the prediction business for intellectual growth and profit. To make predictions, we need thermodynamic data. You can't solve any problems if you don't have data. You need the data. And when we were doing reacting systems, you had carbon, and oxygen, and CO2, and other molecules. And you could say, oh, I know what that is. I've seen that before. I know how to search for that data. Maybe I'll just find that on Wikipedia. That's a well-defined thing, oxygen. It's a well-defined thing, carbon dioxide. No problem. But every material is different. Every material is different. So for example, if we have database-- here's my database. Imagine a big cloud database. And here's the entry for carbon monoxide. And here's the entry for carbon dioxide. And they each have their own properties. You wouldn't assume that you can use something for CO2 and apply it to CO. It's a different substance. Every material is different. Does this mean we need a separate database entry for every possible composition? So this is a motivating question. Let's say we have silicon. And let's say like Professor Fitzgerald did when he developed strained silicon, which is now in every transistor below the 40 nanometer node, he started alloying it with germanium. And he added a little more germanium. And maybe added a little more. And I'm just putting a 1 at the fourth decimal point. I could put it at the 10th decimal point. The question is, do I need a separate database entry for each and every one of these compositions, for every possible composition no matter how minutely I change it? Do I need a separate database entry as I do for molecular systems? All right, we hope not. We hope not. So we postulate not. We postulate that we don't. 
We're not guaranteed it's going to work. But we do the following. We do the following. We try to model trends in thermodynamics with composition. We try to model trends, right? This is about modeling. Modeling is another very important aspect of the shadow curriculum in 020. The shadow curriculum meaning things which are essential to being a professional scientist and engineer, but most of you won't take a separate class in. So accessing and manipulating data, basic statistics, plotting. Communication, we do explicitly tie in. Modeling is up there. How do we model? What is that? What does it mean to model a physical system? We're going to model trends in composition. Let me show you what I think modeling is. Let's say you have data. And you have some control parameter, that's the abscissa. And you have observations. And you go to the lab and you measure that. And each one of these measurements costs money. Let's say you're not students. Let's say you're working at a company. And each time you want to run a test, you have to spend some money. And you have a boss. And that boss, they don't want to spend any more money. And you have to make a decision about what's going to be the value for that composition. How do you do it? Do you spend more of your boss's money? No, Sarah says no. Don't spend more of my boss's money. You know what to do. You say, well, I'm going to postulate that there's an underlying model. And that model has some truth to it. Or maybe it's purely empirical. Either way, it might have predictive power. And I'm going to curve fit. But it's not about curve fitting. It's about postulating the existence of a model and using that model to make a prediction. So you fit the model somehow. And then you say, OK, I haven't done the measurement. But I'm willing to bet that there's a value somewhere over there. And you could be wrong. But you're probably right.
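The postulate-a-model-then-predict idea above can be sketched in a few lines. The data values and the choice of a quadratic model below are purely illustrative, not from the lecture:

```python
import numpy as np

# Hypothetical measurements: a control parameter x (say, composition)
# and an observed property y. Each point cost money to measure.
x = np.array([0.0, 0.1, 0.2, 0.3, 0.4])
y = np.array([1.00, 1.21, 1.38, 1.52, 1.63])

# Postulate an empirical model (here a quadratic) and fit it to the data.
coeffs = np.polyfit(x, y, deg=2)
model = np.poly1d(coeffs)

# Predict at a composition we never measured -- no new experiment needed.
prediction = model(0.5)
```

The prediction could be wrong, as the lecture warns (a missed phase transition, say), but for smooth trends this is how databases of properties get interpolated and extrapolated.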
We saw an example earlier with the heat capacity, which was heat capacity versus temperature for cobalt. I don't know if you remember that. And the heat capacity data was like this. And then it had a weird spike during the magnetic transition. And then it went back. So maybe you missed the spike, and maybe you get the data wrong. Nature is always full of surprises. But the point is that this works a lot of the time. This type of approach works a lot of the time. And when we're modeling solutions, we call these solution models. Solution models are used in two ways. First of all, we use them to understand atomic scale phenomena. We use them for understanding. And we use them to make predictions. And this is almost always what models are used for in the sciences. When you're taught models, like in a physics class, it normally starts from some underlying atomic scale phenomena. And you get F equals ma. And then you build up a model of some thing, a weight on an inclined plane. And then you can use that to make calculations. And it's not really obvious that you're really generating a prediction engine. But once you get towards engineering and you're doing something in the material sciences and you realize how much experiments cost, the economic value of modeling for predictions starts to become a little bit more clear. And it's always two sides of this. You do it for understanding. And you do it because it's going to let you do something else that matters to you. So let's look at some examples very qualitatively. What about water and ethanol? Water and ethanol, right? Well, these are both polar solvents, polar solvents. And they're totally miscible. What does that mean, totally miscible? Anybody? AUDIENCE: Every composition, they always don't separate regardless of composition. RAFAEL JARAMILLO: Yeah, that's right. So I'll repeat what Ali said. It means that there's no phase separation. There's no solubility limit.
Unlike the sugar water system, these two components will always mix in any composition. So let's say the horizontal axis here is ethanol composition. And I'm going to make the y-axis something which we'll be spending a lot of time with, which is the Gibbs free energy of mixing. I'm going to put zero here. So if mixing will happen spontaneously at a given temperature and a given pressure, what that means is that the Gibbs free energy of mixing is strictly negative. Not only that, it also means that the curvature is positive. But we won't mess with that now. I'll just draw over that there in red to make it a little more clear. So there's my free energy of mixing. Now, in the next couple of lectures, we'll start developing functional forms for this. But we're not going to do that now. We're just going to look at this. Unmixed, we have the Gibbs free energy per mole of system equals the Gibbs free energy per mole of pure water times the mole fraction of water, plus the Gibbs free energy per mole of pure ethanol times the mole fraction of ethanol. That's like this. We have the pure stuff, this pure stuff, and this pure stuff before we've mixed it. And then mixed, the Gibbs free energy is that Gibbs free energy before we made the mixture plus delta G of mixing. This is just according to that bookkeeping that we set up. Delta G of mixing is less than zero, which is equivalent to saying that G mixed is less than G unmixed. The system mixes spontaneously to achieve equilibrium at a fixed temperature and pressure. So this plot here is not a phase diagram. This is-- that's a variable you find in a phase diagram. But that's not. That Gibbs free energy, that's not found in a phase diagram. This is not a phase diagram. This is called a free energy composition diagram. And you will be very, very tired of free energy composition diagrams by the end of the semester. But anyway, that's what it is called, the free energy composition diagram.
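The lecture deliberately defers the functional forms for delta G of mixing to later lectures. As a preview consistent with the sketched curve, here is the simplest standard form, the ideal-solution model, delta G_mix = RT[x ln x + (1-x) ln(1-x)], which is strictly negative for every interior composition, matching the water-ethanol picture (the choice of this model and of T = 300 K are mine, not the lecture's):

```python
import numpy as np

R = 8.314  # gas constant, J/(K mol)

def dG_mix_ideal(x, T):
    """Ideal-solution Gibbs free energy of mixing, J/mol.
    Strictly negative for 0 < x < 1 at any T > 0."""
    x = np.asarray(x, dtype=float)
    return R * T * (x * np.log(x) + (1 - x) * np.log(1 - x))

# Evaluate across interior compositions at an arbitrary 300 K.
x = np.linspace(0.01, 0.99, 99)
dG = dG_mix_ideal(x, T=300.0)
# dG < 0 everywhere: every composition mixes spontaneously,
# like the totally miscible water-ethanol case.
```

The deepest point of the curve sits at x = 0.5, where delta G_mix = RT ln(1/2), about -1.7 kJ/mol at 300 K.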
What about water and oil? Polar and non-polar, I'll just go ahead and draw. You know they don't mix, at least not in the binary system. You can cause them to mix if you add a surfactant. So let's do x of oil. But in the binary, delta G mix, at a given temperature and pressure, that's typically what you find. You have a positive delta G of mixing with a negative curvature. And what that means is that if the system is initially mixed, it will spontaneously unmix to achieve equilibrium at fixed temperature and pressure. And in the edX version of this class, there's a separate whiteboard video and a demo about the ouzo system. Ouzo is a Greek liquor. It's also known as pastis in French. And it's oil and water. But they're caused to mix with the addition of a surfactant. And so that gets into ternary phase diagrams, which we don't really cover in this class. But it's a fun example. So I encourage you to check that out. All right, so let me just summarize here. Free energy composition diagrams are how we draw, visualize, and then evaluate solution models. So they're like the engine room of binary phase diagrams. So we have here something like delta G of mixing. Let me draw this generically. And draw a curve. That could be a solution model. This is the x-axis, x sub 2. The solution model comes from somewhere. We'll talk about that. And it allows you to make very specific predictions. It allows you to build binary phase diagrams. And we'll talk about that. The solution models, they represent particular phases. Or I should say structures, like FCC, or BCC, and so forth, or liquid, or gas. And the models are derived from experimental data, from empirical modeling, or from atomistic modeling. So that is theory. All right, and so this is the core of the class. We're getting there.
This is what really enables phase diagrams, which is what enables materials processing. The solution models, we're going to deal with only the most basic ones in this class. And you can go pretty far with that. The basic ones are what they give you in Thermo-Calc. To get the real ones, if you wanted to get the solution model that corresponds to a superalloy that, say, United Technologies uses for turbine fan blades, you would have to break into a very secure server. Those are databases with enormous economic value. They're trade secrets. Those are not published in the literature. If you go into alloy development, you're going to be doing a PhD. And a lot of that's going to be building solution models. So if you were in the Allanore group, or the Tasan group, or maybe the [INAUDIBLE] group, building these models, these mathematical objects, from data and using them to make predictions to design new materials, that's the core-- that's what's done. So that's the end of the hour. I hope that I've been able to introduce you to the formalism of solution modeling without it being too boring, because at the end of the day, we're doing bookkeeping. And bookkeeping is obviously important-- I think we can all agree on that-- but it's hard to get very excited about it. We have to do it, though. So I'll remind you again: Thermo-Calc Academic, free software. Please download it. Install it well before you start working on this next P set. The reading for today is in De Hoff and Callister. So do the Callister reading. It's pretty straightforward. There's also an accompanying lightboard video on binary phase diagrams, and tie lines, and the lever rule. So I encourage you to watch that. You'll find that same content in De Hoff and the same content in Callister. But it's a little more colorful in the video.
MIT 3.020 Thermodynamics of Materials, Spring 2021
Lecture 9: Case Studies -- Specific Heats and Phase Transformations
PROFESSOR: We will get started. So today's lecture is what I call case studies. We're going to explore heat capacity. It's not strictly speaking new material. But it definitely is worth going through in a little more depth and a little more slowly than we've seen it so far. Today's lecture is all about heat capacity. And for those who are just joining, the lecture notes will be up shortly. So this is heat capacity. And I hope to convince you that these are keys to the kingdom, or at least the thermal kingdom. Heat capacity is not just another coefficient, it's the thing that you're going to need to solve many problems in materials science. So let's see, Gibbs free energy. Equilibrium at fixed T and P is balanced between H and S. So this you already saw. This you saw on day one with the baby book and then with the hot and cold packs. We're balancing enthalpy and entropy to find equilibrium. So if you're going to balance two things, if they're really important, how do we calculate them? How to calculate H and S for a given phase at a given temperature and pressure? Doing this sort of calculation is going to be a lot of what we're doing going forward in 020. So let's step through that. Let's start by calculating H. Calculating H of T at a fixed pressure. Start with H. Back to the beginning. H equals U plus PV. So dH equals dU plus P dV plus V dP. That's the product rule. Then we're going to use the combined statement. This is TdS minus PdV plus PdV plus VdP. This is using the combined statement. And canceling terms, we have TdS plus VdP. You did something like this on the p set. So it should be familiar. So, combining, dH at fixed P equals TdS, which equals dQ reversible. That's why the enthalpy is often called the heat. And, of course, we know this equals Cp dT. So the change in enthalpy is tabulated by the heat capacity.
So that means we're going to use the fundamental theorem of calculus to express the enthalpy at some T2 at fixed pressure. It is going to be the enthalpy at some T1, the same pressure, plus the integral from T1 to T2 of the total differential dH. And plugging in from above, this equals H at T1 and P plus the integral from T1 to T2 of Cp(T) dT. So in order to calculate the enthalpy at some other temperature based on reference data, the enthalpy at, let's say, a reference temperature, we need this heat capacity data, which, in general, is temperature-dependent. So this data we need. What about entropy? Calculating S at T for fixed pressure. dS equals Cp over T dT minus V alpha dP. There's many ways you can get this. You could just go to table 4.5, or you could derive it. So that means that dS at fixed pressure equals Cp over T dT. So, again, using the fundamental-- AUDIENCE: Sorry. PROFESSOR: Sorry? AUDIENCE: Would it be possible for you to write with a bigger tipped marker? I'm kind of having trouble reading the details. PROFESSOR: Sure, let me try to do that. That's always a-- AUDIENCE: If that's OK with everyone. PROFESSOR: Yeah, yeah. No, I appreciate those sort of comments. Because I know that Zoom also samples things differently, and I see it differently on my screen than you see it, which is also different than you see it in the recording because of the way Zoom samples for bandwidth issues. So thank you. So we're going to use the fundamental theorem of calculus again. S at T2 at a given pressure equals S at T1 at that pressure plus the integral from T1 to T2 of dS. Then we can write that as S at T1 at that pressure plus the integral from T1 to T2 of Cp(T) over T dT. So, again, we need Cp as a function of temperature to do that calculation. So, again, we see for both enthalpy and entropy, the key to calculating these things for varying temperature is knowing the heat capacity function or the heat capacity data. So we're going to use these expressions so much in this class.
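The two fundamental-theorem expressions above turn directly into a calculation. This sketch uses a made-up linear Cp(T) purely to show the mechanics of H(T2) = H(T1) + integral of Cp dT and S(T2) = S(T1) + integral of Cp/T dT; the coefficients and temperature range are illustrative, not tabulated data:

```python
import numpy as np

def Cp(T):
    # Illustrative heat capacity in J/(K mol): a + b*T, coefficients made up.
    return 22.0 + 0.005 * T

def integrate(f, T):
    """Trapezoidal integration of sampled values f over grid T."""
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(T)))

T = np.linspace(300.0, 800.0, 5001)
dH = integrate(Cp(T), T)      # H(800 K) - H(300 K), J/mol
dS = integrate(Cp(T) / T, T)  # S(800 K) - S(300 K), J/(K mol)
```

For this linear Cp the integrals are also easy analytically (dH = a dT + b/2 dT^2 terms; dS picks up a logarithm), which is a good check on the numerics.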
Let's talk a little bit about where heat capacity comes from. What is Cp of T for real materials? We know for ideal gases it has this really simple form, like 5/2R for monatomic ideal gas. But we're interested in real materials in materials science. We don't make cars, and medical implants, and computer chips out of ideal gases. We care about the real stuff. So let's start with a relatively simple model. Consider bonds as atoms on springs. We're going to consider bonds or atoms on springs, sounds good. Let's draw bond energy as a function of displacement. I trust everybody has seen something like this before, if not in 3091, then in another introductory chemistry class. And let's see. So U as a function of R. This is energy as a function of atom-to-atom spacing. And you typically have something like this. That's going to be your bond energy as a function of distance. And there's an equilibrium position, let's call it R0. And let's just say that my atoms are sitting there at that displacement. Let me draw my atoms here. Here's a spring. And they are at distance R0 from each other. Let's say that the-- I'll say the atoms are quiescent. They're just sitting there. So this is a low-- or I'll say lower-- energy state. It's also lower in entropy. There's no disorder. There's no sense of mixed-upness. You know exactly where your atoms are. Lower than what? Let's draw a higher temperature state. Again, there's R, U. I can never draw the same thing twice. So let's see what happens here. It's not too terrible. So, now, R0 is still the same. But let's say that now I have more energy in the system. So my atoms are oscillating. They're vibrating around, I'll say there's some sort of-- they're vibrating around that minimum. So let's draw that over here. Let's say that this is R0. But as drawn here, the atoms are a little bit closer together. They've compressed the spring. And they're on the move. They're vibrating. So the atoms are vibrating.
So you can see from this picture that the system is in a higher energy because this is an average energy of the atoms. It's increased here. And there's also a higher entropy, because there's an amount of disorder associated with the location or the vibrational state of the atoms. So this is higher energy. Sorry, I said I was going to write with a fat marker-- higher energy and higher entropy. So that's a relatively straightforward concept. You can do a lot with this. There are models-- I should say, models for the heat capacity as a function of temperature that consider vibrations of the atoms. Vibrations of atoms in crystals, that's what we just drew. We drew a simplified model of vibrations in atoms and crystals. There's a word for these. What are vibrations of atoms in crystals? Those have a fancy name. AUDIENCE: Is it oscillations? PROFESSOR: It could be called oscillations. That's a little bit of a fancier name. There's also a name, phonons, which is a lattice vibration. So there are different models. There's the Einstein model. This is a simple model. All vibrations have the same frequency. It's simple, but it was a first important test of quantum theory. And there's a fancier model. There's the Debye model, which is that there's a range of available vibration frequencies-- a range of frequencies. And this range is associated with the range of wavelengths. That goes beyond our class. But I wanted to mention this. We will solve these models later using stat mech. So later in the class, when we come around to statistical mechanics, we're going to solve for the heat capacity of solids using these models. So that's kind of neat, but we're not ready for that yet. So let's look at some slides. Let's look at some data. So this is just an example of heat capacity-- sorry, there's a little bit of a resolution issue here. But this is the heat capacity at constant volume, in this case, of aluminum as a function of temperature. And this is reduced temperature.
It is temperature over something called the Debye temperature. And the Debye temperature is characteristic of the highest energy vibrations in the solid. This is a concept you'll encounter in later classes. So the circles here are data and the solid lines are models. And you see that the Einstein model and the Debye model both show heat capacity going to zero, at zero temperature. That's kind of interesting. And they both also show the capacity saturating. Heat capacity comes up and it saturates at high temperature. But the Debye model is a little more accurate. It captures that temperature dependence better than the Einstein model. So we'll come back to that. Here's another interesting comparison. These are the heat capacity for lead, silver, aluminum, and diamond. And at any given temperature, let's say 10 to the 2. So this is what? 100 Kelvin. At 100 Kelvin, the heat capacity of these four elements is very different. But there's a self-similarity in the shape of the curves. So there's some underlying physical law that describes the heat capacity of all of these materials. And it's the Debye model which we just were introduced to. The difference here is that diamond's characteristic lattice vibrations are at a much higher energy than lead's characteristic lattice vibrations. That comes down to bonding, it comes down to structure, and it comes down to the weight of the atoms. Lightweight things vibrate faster than heavier things. It's true in atoms, it's true in swings. So this is interesting. And the circles here are data and the solid lines are models. So you see the Debye model is really excellent. Another thing I'll point out is that they all seem to converge at the same number. This is an early law. It's not really a law, it's an empirical observation, the law of Dulong and Petit, which is that the heat capacity at high temperature converges just to 3R for all solids. And it's a decent approximation for solids at high temperature.
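The Einstein model mentioned above has a well-known closed form that is easy to evaluate: C_V = 3R (theta_E/T)^2 e^(theta_E/T) / (e^(theta_E/T) - 1)^2, where theta_E is the Einstein temperature. The lecture defers the stat-mech derivation, but a quick numerical check (with an arbitrary theta_E of my choosing) reproduces both limits visible in the plots: C_V goes to zero as T goes to zero, and approaches the Dulong and Petit value 3R at high temperature:

```python
import numpy as np

R = 8.314  # gas constant, J/(K mol)

def cv_einstein(T, theta_E):
    """Einstein-model molar heat capacity, J/(K mol)."""
    x = theta_E / np.asarray(T, dtype=float)
    return 3 * R * x**2 * np.exp(x) / (np.exp(x) - 1) ** 2

theta = 300.0  # Einstein temperature in K; arbitrary illustrative value

low = float(cv_einstein(10.0, theta))     # deep quantum regime: nearly zero
high = float(cv_einstein(3000.0, theta))  # classical regime: approaches 3R
```

The same limits hold for the Debye model; the two differ in the intermediate regime, where Debye's spread of vibration frequencies tracks real data better.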
If you need to estimate the heat capacity of a solid, and you have no data at your disposal, and you're in a hurry, 3R is not a bad estimate. R is 8.314 joules per Kelvin per mole. So 3 times that is roughly between 24 and 25. And that's what you see. If this data were taken up to high enough temperature for diamond, you can imagine that diamond would also converge to that law of Dulong and Petit. I want to talk about cobalt. AUDIENCE: Sorry, I have a question. PROFESSOR: Yeah. [INTERPOSING VOICES] AUDIENCE: So these graphs, we're plotting CV, C sub V. Do these trends also hold for C sub p? PROFESSOR: Yeah, so one thing, which we'll start to learn with experience is that, for solids at atmospheric pressures and the kind of pressure that we encounter for most materials processing and most industrial processes, CV and Cp are very similar. Pressure dependencies with solids don't matter very much unless you get to very, very high pressure. So everything you just saw for these solid phases pretty much applies the same whether you're at Cp or CV. And we'll start evaluating those differences quantitatively later on in the class. One way to think about that is it takes a lot of work to squeeze a solid as opposed to a gas, which we can imagine takes less work. Good point. Let's talk about cobalt. Why cobalt? I think it's interesting to read a phase diagram. We haven't really done that yet. This is a unary phase diagram. Let me bring out my laser pointer. So this is pressure and this is temperature. Somebody, how many different phases are pictured here? AUDIENCE: Four? PROFESSOR: Four. Can you name them? AUDIENCE: I see three different solid phases of cobalt and then the liquid. PROFESSOR: Right, liquid at high temperature. If you kept on heating, presumably you'd get the gas. But this is a high pressure phase diagram. These are in gigapascals, so these are high pressures. This over here is 100,000 atmospheres. So this is high pressure stuff.
And then, as you cool down, you have the FCC structured paramagnetic. Paramagnetic, that says para for short. You continue to cool down, you get an FCC-structured ferromagnet. And if you continue to cool down farther, you get a non-magnetic hexagonal close packed phase. So you got one phase, two phases, three phases, four phases. Although, structurally, there's only two distinct solid phases, FCC and HCP. So that's how to read the diagram. So, let's see. What if we heat up at ambient pressure? On this scale here, gigapascals, ambient pressure is basically 0. A gigapascal is 10 to the 9 pascals. And atmospheric pressure is 10 to the 5. So atmospheric pressure is basically on the y-axis here. And as we heat up, we go through a couple of transformations. We go from HCP to FCC. We go from FCC ferro to FCC para. And we go from para FCC to liquid. And these are the transformation temperatures. Epsilon to gamma, then there's a magnetic transition at the Curie temperature, and then a melting point, T sub f. That's just reading phase diagrams. Where do we find data for the heat capacity of cobalt? Lots of places. You've explored some of the databases already here. In this class, let's go with the NIST chemistry web book. It's a very convenient resource. I hope you've used it already. Heat capacity of cobalt. So this is what happens if you go to the NIST website and you follow the links to the solid phase heat capacity of cobalt. And what do you see here? You see a parametric form. There's a polynomial expression. There are the values of the coefficients of the polynomial. And there are three different ranges. There's room temperature to 700 K, 700 K to 1,394, and 1,394 to 1,768. Based on what we just saw with the phase diagram, would somebody volunteer, why are there three different ranges given for the heat capacity of cobalt? AUDIENCE: Is it because of the different phases? PROFESSOR: Because of the different phases.
The different phases have different heat capacities. In general, different phases have different heat capacities. So you see, here's the-- this is the non-magnetic HCP. And then there's a little jump in the heat capacity. Everybody can see that. There's a little bit of a jump there in the heat capacity. It jumps to a slightly lower value. And you enter the FCC magnetic phase. And then there's this funny spike. This is at the Curie temperature. And above the Curie temperature, there's a real discontinuity in heat capacity. It comes down. Another thing I'll note here is, this data is all well above the Dulong and Petit value. It starts at 25 and then it goes up. It goes up as high as 55. We're going to talk about that in a minute. So how do you get this data? How do you-- where do you think this data came from? Sorry, Catherine? AUDIENCE: Experiments? PROFESSOR: Yeah, it came from experiments. That's right. It's empirical. This expression, an expression like this, a plus b times t, plus c times t squared, plus d times t cubed, plus e over t squared, that's not a first-principles solution. That's not a theoretical model, that's just curve fitting. That's just curve fitting. So whenever you are dealing with data that has been curve fit, you have to always ask yourself, what is the range of validity? So there's a model that gives you the heat capacity of this magnetic phase. And that model is only valid over the range over which the data has been fit. That's why this database and every other database will always specify the range. These coefficients are good from 700 to 1,394 Kelvin. But if you exceed that range, use that model at your own peril. That's a really important point. Now, let's talk a little bit about why this heat capacity is higher than 25. Why is it higher than those other solids? What is it about cobalt? I'll give you a hint. It's not particular to cobalt. There are other metals, there are other materials that have this feature. Anybody know what it is? Anybody?
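The range-of-validity warning above is worth building into any code that evaluates these fitted polynomials. Here is a sketch of an evaluator for the NIST-style form Cp = A + B t + C t^2 + D t^3 + E/t^2 with t = T/1000 K, which refuses to extrapolate outside the fitted range. The coefficients below are placeholders for illustration, not cobalt's actual NIST values (look those up on the web book):

```python
def shomate_cp(T, coeffs, T_range):
    """Cp = A + B*t + C*t**2 + D*t**3 + E/t**2 with t = T/1000 K,
    in J/(K mol), refusing to extrapolate beyond the fitted range."""
    T_lo, T_hi = T_range
    if not (T_lo <= T <= T_hi):
        raise ValueError(f"T = {T} K is outside the fitted range "
                         f"{T_lo}-{T_hi} K: use at your own peril")
    A, B, C, D, E = coeffs
    t = T / 1000.0
    return A + B * t + C * t**2 + D * t**3 + E / t**2

# Placeholder coefficients and the FCC-ferromagnetic range from the lecture.
example = shomate_cp(900.0, coeffs=(25.0, 5.0, 0.0, 0.0, 0.0),
                     T_range=(700.0, 1394.0))
```

Raising an error on out-of-range temperatures is a design choice; silently extrapolating a curve fit is exactly the peril the lecture warns about.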
AUDIENCE: Is it because it's hcp structured? PROFESSOR: No, it's not, although I do like the idea, the suggestion that it's structure-dependent. Is there anything else about this system? We've talked about it a little bit. I'll give you a hint. So we talked about ways that energy can be partitioned in lattice vibrations. You have this idea about bond lengths. And as we excite the atoms, there's more energy in those bonds. But there's other ways that atoms can interact with each other and other ways that they can store energy. So, for example, you might have a molecular system. And the molecules can rotate. That doesn't apply to cobalt. You might have an electronic conductor. And the states of the conduction electrons contribute. That's really something that physicists care a lot about. But there's another one here that applies to cobalt. Anybody? AUDIENCE: Magnetism? PROFESSOR: Magnetism, yeah. Thanks. So this is what we're going to-- there are others. So anything microscopic on the atomic and molecular level that can store energy, that can be energized, that can be excited, it all factors into the heat capacity. It's all in there. Heat capacity is this amazing thing that contains so much science. So let's consider a ferromagnet such as cobalt. There are other magnets, of course. I don't know why I picked cobalt. Now, we're going to draw pictures the way we did before with the case of lattice vibrations, except, now, we're going to consider the magnetic system. So let's consider two scenarios. Scenario one, the magnetic moments, that's what I'm drawing here, magnetic moments-- and those can be spin and orbital-- are aligned and quiescent. This is a lower energy, lower entropy state. Now, what happens if I heat this state up? Let's say this is very low temperature. What happens if I heat it up? Let's pour some energy in. This is the low energy state, like that R0 equilibrium bond length was the low energy state. What happens when I heat up?
I know this isn't a course on magnetism, so just feel free to guess. AUDIENCE: They can change the orientation? PROFESSOR: Yeah. Let's say they start-- I don't know if that's visible at all. The idea here is that magnetic moments are fluctuating. They're fluctuating just like the bond lengths were fluctuating before. Now the orientation of the magnetic moments are fluctuating. This is higher energy, higher entropy. And just like there was a term phonons for the excitations of a crystal lattice structure, we have a term magnons for excitations of magnetic structure. Now, that was the beginning, middle, and end of our discussion of magnetism in 020. We're not going to discuss magnetism. But I did want to make sure you're aware that the heat capacity includes all the interesting things that a material can do, not just vibrate bond lengths, but any other ways that energy can be partitioned into the crystal, or amorphous material, or liquid, or gas, or anything. It's all in there. It's all in the heat capacity. So we go back and we look here at the heat capacity of cobalt again. We've got this spike. We've got this spike at the Curie temperature. And that is associated with all of the fluctuations that you find when a system is just about to order. I think that's really interesting stuff. This excess heat capacity, well above and beyond the Dulong and Petit value of 25, can be largely associated with the magnetic interactions. That's neat. I think we are done with cobalt and magnetism. Questions before I move on? Let's talk about phase transformations. And this is really about using data. But it also leads us into the topic we start on Friday. Let's consider something simple and important-- good place to start. Consider the solid to liquid transformation in silicon. That's an important material. So I looked it up, and I found the enthalpy change, delta H solid to liquid at the melting temperature. Somebody, is that positive or negative?
Is that an endothermic or an exothermic process, melting? AUDIENCE: Endothermic? PROFESSOR: Endothermic. You have to put heat in to melt something. Plus 50.2 kilojoules per mole. I looked it up. This is the change in enthalpy at the equilibrium melting temp, t melting equals 1,685 Kelvin. I always remember the Celsius a little bit better-- 1,412 degrees C. So that's hot, but not too hot. Let me ask a question. What is the enthalpy change, delta H, solid to liquid for t less than t melting? Solidification is exothermic. Melting is endothermic. So the question is, how much heat does the solidification process give off when you're solidifying below the melting point? This is highly relevant to a lot of important industrial processes because, often, we don't form materials, including silicon, right at the melting point. Often we solidify at a slightly lower temperature. And you have to know this number in order to engineer your process. There's a lot of really good MIT IP on this and a couple spinoff companies that you may be familiar with. So let me just write some stuff down. For the solid phase, the enthalpy of the solid at a temperature equals the enthalpy of the solid at the temperature of melting, which is just what we did about 25 minutes ago, plus the integral from T melting to T, dT prime (a dummy variable), times the heat capacity of the solid phase. And for the liquid phase, we have the same type of calculation. Enthalpy of the liquid as a function of temperature equals enthalpy of the liquid at the melting temperature plus-- sorry, there it got a little squished-- dT prime Cp of the liquid. So, again, all the information we need is contained here in these heat capacity data. So the thing we asked to calculate is delta H solid to liquid at some temperature. And I'm going to simply combine these results. It is equal to delta H solid to liquid at the melting temperature plus the integral of the heat capacity difference.
Heat capacity differences are really important when considering phase transformations. Let's look at some silicon data. Let's look at some data for silicon. I downloaded it and plotted it. So, first of all, where would you get thermodynamic data for silicon? Many, many places. Here are two. Here's some data right in the book, in the appendix. Silicon, Appendix C or something. What is this data? Silicon, dia to l. Dia to l. What does dia mean in this case? Anybody? I called it solid. I called it S for solid. Why is it here dia? AUDIENCE: Does that stand for a diamond-like configuration? PROFESSOR: Right. Silicon is a diamond cubic structure. So that's a solid phase of silicon. So a lot of times, you need to interpret. Dia to l. And they use lowercase l, which is always problematic. So I use uppercase L. But, anyway, that happens at 1,685 Kelvin. There is an entropy of transformation of 29.8. And the units are in the bottom of the table. In this case, it's joules per Kelvin per mole. And there's an enthalpy of transformation of 50.2 kilojoules per mole. I already told you that. And you can also look up the values here for the boiling of liquid silicon at atmospheric pressure. There's lots of silicon data here at NIST, and also in a number of other materials databases, and Wikipedia. Silicon is really important. So you're going to find the data all over the place. So, pardon me, let me see what comes next. So this is what I'm going to do. I'm going to get-- I need this heat capacity data. I need the heat capacity data, Cp as a function of temperature. So I go over here to the NIST data. And I grabbed the numbers and plotted them. So here we go. This is my plot. I made this in MATLAB. So this is my plot of the heat capacity, what they said it was, of liquid silicon as a function of temperature. And here is the heat capacity of solid silicon as a function of temperature. Now, let me ask you a straw man question. What sort of functional form would capture this trend?
What sort of functional form captures this pretty well? AUDIENCE: Linear? PROFESSOR: Linear, right. What about this-- what sort of functional form do I need to capture this data? AUDIENCE: Quadratic? PROFESSOR: Right. It looks quadratic. Here's a word to the wise. In thermodynamics, as in any science and engineering, you've got to always pay attention to the numbers. And plots can lie. Let's plot these together. What happened? Same data. You see what happened? The y-axis range here was really teensy tiny. Basically, what this data is telling you is that for all intents and purposes, the heat capacity of liquid silicon is a constant, and the heat capacity of solid silicon is a straight line with a slope. So I'm trying to sneak things in here, lessons that aren't strictly about thermodynamics-- lessons in paying attention to significant figures, and how data is plotted, and how data is modeled and fit and such. I marked here where the melting point is. So now we have data. We have the heat capacity of the solid, the heat capacity of the liquid, and we have the melting point. Let me point something else out. I always told you this stuff comes from measurements. This is data. It comes from measurements. That means that the data for the liquid may not be reliable for t very far below t melting. It might not be possible to super-cool silicon so far below the melting point-- 600 degrees below the melting point-- in order to take data. So that liquid phase data may not be reliable far below t melting. And, likewise, the solid phase data may not be reliable for t far above t melting. And this comes back again to the range of validity of your models. So when you're professional scientists and engineers, you have to think critically about your data resources. How good are they? Where do they apply, and where do they not? So let me switch back to the board and just do some calculations.
And then we'll get to the answer. So we're going to calculate delta H, and we're going to use thermodynamic data. NIST gives it to us in this form: Cp equals A plus Bt plus Ct squared plus Dt cubed plus E over t squared. This is the general form that NIST gives the data, and many other resources too, not just NIST. And if you're not sick of me saying it yet, you will be soon-- this is empirical. This is not some quantum first-principles result. And NIST normalizes temperature. You've got to always be careful-- different databases treat temperature different ways. Here it's normalized by 1,000 Kelvin, so this lowercase t is unitless. So let's grab the data for the materials we care about: A in joules per Kelvin per mole, B in joules per Kelvin per mole, C, and so on, for solid and liquid. I'm just copying the data over from NIST here. For the A parameter: solid, 22.82; liquid, 27.20. Now, the B parameter: for the solid it was 3.90. You'll remember, the solid data was pretty well modeled by a straight line with a finite slope. The liquid, forget it-- it looked constant to me. I mean, it's really constant. So, likewise, curvature: forget it. We're going to approximate that there's no curvature in the data. And so forth and so on for those other parameters. So we're reducing our model to a model with three coefficients: a line with a slope for the solid, and just a fixed value for the liquid. And we are, by inspection of the data, ignoring these other terms. This is the sort of thing that I and all my colleagues in academia and industry do every day, 10 times a day: you look at data, and you've got to make a decision. You have to boil down your model. You need something you can solve, and you need to be able to solve it in the time given. And sometimes you just have to make decisions. So here's an example. Just make a decision.
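The NIST-style polynomial just described is easy to evaluate. Here is an illustrative sketch (not NIST's own code); the coefficient values are the ones quoted in the lecture, with the ignored terms set to zero, and the function names are my own.

```python
def shomate_cp(T, A, B=0.0, C=0.0, D=0.0, E=0.0):
    """Heat capacity in J/(K*mol) for temperature T in Kelvin.

    NIST-style form: Cp = A + B*t + C*t**2 + D*t**3 + E/t**2,
    with t = T/1000 (unitless, as normalized in the NIST tables)."""
    t = T / 1000.0
    return A + B * t + C * t ** 2 + D * t ** 3 + E / t ** 2


def cp_solid(T):
    """Diamond-cubic (solid) silicon: line with a slope."""
    return shomate_cp(T, A=22.82, B=3.90)


def cp_liquid(T):
    """Liquid silicon: treated as a constant."""
    return shomate_cp(T, A=27.20)


print(cp_solid(1685.0))   # solid Cp at the melting point
print(cp_liquid(1685.0))  # liquid Cp, constant in this approximation
```

At the melting point this gives about 29.4 J/(K mol) for the solid versus 27.2 for the liquid, which is the crossover visible in the plotted data.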
The data looked like a line with a slope for the solid and a fixed value for the liquid. You're really sick of me talking about this, so let's just move on to the answer. Delta Cp for solid to liquid, as a function of temperature, is A liquid minus A solid, plus B liquid minus B solid times t, plus terms we're ignoring. We're setting the liquid linear coefficient to 0. So this is simply equal to A liquid minus A solid, minus B solid times t. And let me make this explicit: here t is capital T over 1,000 Kelvin. And, finally, delta H solid to liquid at temperature T equals delta H solid to liquid at the melting point, plus the integral of delta Cp from t melting to T. And polynomials are easy to integrate. That's one of the reasons they're used-- the heat capacities are expressed as polynomials because they're easy to integrate. That's it. So we've boiled down an important engineering problem to evaluating a polynomial with coefficients that we can look up in the databases. And this is a really important quantity for controlling silicon solidification, which is critical for making computer chips and solar cells. So let me plot the answer. Here is the answer plotted: the enthalpy of melting of silicon, in kilojoules per mole, with t melting marked. And I'll tell you, it probably doesn't actually cross over 0. You look at this data, and it's probably only reliable close to the melting point. So that was a bit of a walk-through of some things which you've seen formally, but it's nice to see them fleshed out a little bit, highlighting some important things in this class: using data, plotting data, making approximations, so forth and so on. On Friday, after the exam, we're going to pick up with the introduction to unary phase diagrams and calculating transformation quantities for phase transformations. So this is a lead-in to that. I'll mention, there's one of these mini lectures posted on the website called the three D's of thermodynamics.
And that's going to start becoming relevant and hopefully helpful, because there are going to be a lot of D's: regular d, lowercase Greek delta, uppercase Greek delta, and they all mean different things. So if you start to find yourself confused over what all these deltas and d's mean, hopefully that's a good resource for you.
MIT_3020_Thermodynamics_of_Materials_Spring_2021
Lecture_16_Partial_Molar_Properties.txt
[SQUEAKING] [RUSTLING] [CLICKING] PROFESSOR: All right, let's get going. I mentioned that a lot of what we're going to do when we model solutions is bookkeeping. So this is a really important quantity that we're going to track, and I'm going to introduce it-- partial molar properties. And we're going to use the notation as in the text. So the partial molar property for some generic state function-- we're using B here because it can stand in for any state function-- is the partial of the total B-- this is an extensive state function-- with respect to moles of component k, at fixed temperature, fixed pressure, and fixed moles of everything that's not k: j not equal to k. So B is an extensive property. n of k is moles of k. And B bar of k is a partial molar property. Sometimes I'll just call these PMPs. So this is kind of formal looking, but it's not a hard concept. You could think of this in many different ways. You could think of the extensive property as the total amount of money in all of our pockets right now. But we each have different amounts of money in our pockets. So our own contribution to the total amount of money in all of our pockets right now would be our partial personal property, I guess. Or you can think of this as the total land area in my whole neighborhood. Each of our houses has a different lot size, so we each have a different contribution to the total land area in my neighborhood over here in West Cambridge, and so forth and so on. So this bookkeeping is kind of formal, but the concept is familiar. We're going to make sure we know what the total differential of property B is. We have these coefficients: dB equals dB dT, at fixed P and mole numbers, times dT, plus dB dP, at fixed T and mole numbers, times dP. And here's where we get to use the partial molar properties: plus the sum over k of B bar of k dn of k. So as defined here, the partial molar property is a coefficient. And one clarification. For a pure phase, you only have one component.
Its mole fraction is 1. And the partial molar property is simply the molar property-- there's no partial involved. One thing about PMPs: they're intensive. They're independent of the system size. And I'm going to use some illustrations to tell you what I mean there. But let's see. How do we know they're intensive? A PMP is an extensive quantity divided by an extensive quantity. So if I imagine this is a partial molar property, and the system is a given size, and I double the system size, then both the numerator and the denominator here will double, and the intensive partial molar property will remain unchanged. But there are useful consequences of this. So let's imagine a system made out of triangles and circles. For example, let's imagine a system with X circle equals X triangle-- it's 50/50, just to be specific. So let's imagine three different systems here. Here we have one, two, three, four, five circles, and we have five triangles. And say B circle equals 7 and B triangle equals 4-- I'm just making up numbers here. It doesn't matter what numbers I use. And for T and P constant, in this case, the total amount of B equals B circle n circle plus B triangle n triangle. And this is 7 times 5 plus 4 times 5. Let's consider a slightly different case. Let's make the system a little smaller-- three moles of each. In this case, the total amount of B again is the sum of B of i n of i, which equals 7 times 3 plus 4 times 3. And then, of course, you can keep on going here. Let's imagine one circle and one triangle. The total amount of B equals 7 plus 4. So why am I telling you this? Why did we just go through this exercise with circles and triangles? It's because we're going to add up partial molar properties a lot. We're going to be doing this quite a lot. And it comes back to this: the whole is the sum of the contributions of the parts, according to this bookkeeping scheme. Let's keep going.
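The bookkeeping in this toy example is just a mole-weighted sum, which can be sketched in a few lines (the values 7 and 4 are the made-up B values from above; the function name is my own):

```python
def total_B(pmp, moles):
    """Total extensive B = sum over components of B_bar_k * n_k."""
    return sum(pmp[k] * moles[k] for k in pmp)


# Partial molar B values (intensive): the same for every system size
pmp = {"circle": 7.0, "triangle": 4.0}

print(total_B(pmp, {"circle": 5, "triangle": 5}))  # 7*5 + 4*5 = 55
print(total_B(pmp, {"circle": 3, "triangle": 3}))  # 7*3 + 4*3 = 33
print(total_B(pmp, {"circle": 1, "triangle": 1}))  # 7 + 4 = 11
```

The intensive dictionary never changes as the system shrinks; only the mole numbers do, which is exactly the point of the exercise.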
We need some mathematical relations between the whole-- that is, the total amount of B-- and the PMPs. We're going to consider scaling the system size by a scaling factor; I'll just use lambda for it. So imagine B is a function of temperature, pressure, and the scaled mole numbers. Well, if we imagine, let's say, doubling the moles of everybody, we simply double all extensive quantities. And that holds for any factor-- it doesn't have to be double. It could be triple. It could be half. So multiplying the size of the system by some constant scale factor is the same thing as multiplying the overall extensive quantity by that same scale factor: B of T, P, lambda n of k equals lambda times B of T, P, n of k. I'm going to use this. So this is not just for fun-- this becomes useful. But I will call out, for those of you who are mathematically inclined, that this means B prime is a homogeneous function of the n of k's of order one. So for those of you who do real analysis and stuff, that comes into it. Never mind. So what we're going to do with that is take the total derivative, d d lambda, of both sides. Let's see. The left-hand side gives us dB d lambda equals the sum over k of dB d of lambda n of k, times n of k-- just using the chain rule here. And that partial is our definition of the partial molar property. The right-hand side, d d lambda of lambda times B of T, P, n of k, is also an easy derivative: it's just B of T, P, n of k. So I took the total derivative of the left-hand side and the right-hand side, and I'm going to compare them: B equals the sum over k of B bar of k n of k. Again, this is exactly what we were doing with circles and triangles. The total extensive quantity for a system is the sum of the partial molar properties times the mole numbers-- circles and triangles, scaling with system size. And this is called an Euler equation. I think it's the only one we're going to use in this class.
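The Euler relation can be checked numerically for any model that is homogeneous of degree one in the mole numbers. The sketch below uses a made-up two-component volume model, V = n1*v1 + n2*v2 + a*n1*n2/(n1 + n2), chosen only because it scales linearly with system size; none of these numbers are real data.

```python
def V(n1, n2, v1=10.0, v2=20.0, a=-3.0):
    """A model extensive volume, homogeneous of degree one in (n1, n2)."""
    return n1 * v1 + n2 * v2 + a * n1 * n2 / (n1 + n2)


def partial_molar(f, n1, n2, which, h=1e-6):
    """Numerical PMP: dV/dn_k at fixed moles of the other component."""
    if which == 1:
        return (f(n1 + h, n2) - f(n1 - h, n2)) / (2 * h)
    return (f(n1, n2 + h) - f(n1, n2 - h)) / (2 * h)


n1, n2 = 2.0, 3.0
v1_bar = partial_molar(V, n1, n2, which=1)
v2_bar = partial_molar(V, n1, n2, which=2)

# Euler equation: the mole-weighted PMPs add back up to the whole
print(n1 * v1_bar + n2 * v2_bar, V(n1, n2))  # the two numbers agree
```

Doubling both mole numbers doubles V while leaving v1_bar and v2_bar unchanged, which is the first-order homogeneity the lecture calls out.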
So you don't have to worry so much that there's this whole category of equations called Euler equations in thermo. But you might encounter them, if not now, then in a later class. And that's what this is. It is the consequence of that first-order homogeneous property. Stated a little more colloquially: extensive properties of a solution phase are made up of the mole-weighted PMPs of the components. That's what it means. So for example-- this is the one we'll be using almost to the exclusion of any other in this class-- the Gibbs free energy of a system is going to be the sum of its PMPs times the mole numbers. And because Gibbs free energy is so special, we have a special name for the partial molar Gibbs free energy: it's the chemical potential. So you've seen this expression before. If you ever see someone add up the Gibbs free energy of a system, and they just jump right to this-- the sum of the chemical potentials weighted by the mole numbers-- you know where it comes from. Unfortunately, we have to continue with more equations. What we're doing here is building up to useful expressions, which you can use-- you will use on the homework, for instance-- without necessarily knowing where they come from. So we have an Euler equation, like the one we just derived: the total amount of B equals the sum of the partial molar B times the mole numbers. Then we're going to use the chain rule and calculate the total derivative dB. And that's going to be equal to the sum over k of B bar of k dn of k plus n of k dB bar of k. That's relatively straightforward. And we're going to use a coefficient relation. The coefficient relation is that dB at constant temperature and pressure equals the sum of B bar of k dn of k-- that was from the beginning, when we defined the partial molar property. And so what we're going to do is simply compare these two expressions, this and this.
And we find that one of these two sums is 0: the sum over k of n of k dB bar of k equals 0 at constant temperature and pressure. This is called a Gibbs-Duhem equation. And by the way, I really would not be spending the time on this if it didn't end up in a really useful place. So if you're wondering why on Earth you signed up for a class with this much analysis, all I can say is, this is a class on thermo, and this is part of it. And I'm including it because it's going to take us somewhere really useful. Enough excuse making. This is thermo. So, the sum of n of k dB bar of k equals 0-- what on Earth does this mean? It means that you can't change all the partial molar properties independently of each other. If you have two components, let's say-- the sum runs over 2-- there's a linkage between how the partial molar properties of one change when the partial molar properties of the other change. There's this equation relating them. Let me say that again: the Gibbs-Duhem equations are constraints on the variation of intensive properties. Or in other words, they express how the partial molar properties covary-- that's a term from optimization-- with system composition. In other words, it's a differential equation. An equation that describes how things covary with each other is a differential equation. That's what it is. So now we're going to move towards partial molar properties of mixing, and we will finally get to the graphical interpretation, which is why we're doing all of this. So for example, Gibbs-Duhem for the Gibbs free energy. If anyone's wondering, what's with Gibbs? How come he gets his name on everything? He's one of the most important and prolific American scientists ever. He was a thermodynamics researcher, obviously, in the late 19th century, and he spent his career at Yale. His name is all over classical thermo. Euler gives us the following: the total Gibbs free energy is the sum of mu of k n of k. The chain rule gives us dG equals the sum of mu of k dn of k plus n of k d mu of k. And then we combine with the combined statement.
The combined statement is the coefficient relation: dG equals minus S dT plus V dP plus the sum of mu of k dn of k. And I'm going to use these, and I get the following: the sum of n of k d mu of k equals minus S dT plus V dP. Or, if I divide through by the total number of moles, I get an equivalent statement: the sum of mole fraction x of k d mu of k equals minus molar entropy s dT plus molar volume v dP. These are also Gibbs-Duhems. So again, you see what these are: constraints on how the intensive properties vary. The intensive properties-- pressure, temperature, and the chemical potentials-- can't all vary independently of each other. We have these constraints. Let's move on. And this is really like actuarial science-- we just keep on coming up with these bookkeeping schemes, and we have to stick with it. So we're going to have something called the mixing quantities. The change due to mixing is going to be the total quantity in the solution minus what you started with before you made the solution-- the pure components. We talked about this last time: in general, the whole is not simply the sum of the parts when you make a solution. And there were some examples given of materials which, for example, get bigger than you would expect, or smaller than you would expect, and so forth. So we're just going to capture that change here. We're going to write the total B using partial molar properties, as we just learned how to do: the sum of B bar of k n of k. And then we're going to introduce even more notation and say, for the pure components, you had pure molar properties. We group terms, and we define delta B bar of k as the change in the partial molar property due to mixing. Oof. Change in PMP due to mixing. My goodness. What is this? Let me give an analogy. I don't know whether this helps. Let's imagine this is personal space-- we're calculating the area occupied by a crowd of people. And you could define personal space as the exclusive area around everybody, such that the total area is everyone's personal space added up.
Does that make sense? Imagine a crowd of people. What this is, is an acknowledgment that your personal space in a given crowd varies with the crowd. So if your crowd is a dispersal of people in Killian Court on a sunny day, you're going to have a lot of personal space per person. It would be very, very strange to go up to somebody who is sitting there, having a picnic, and to sit so that your shoulders are rubbing, and you don't know the person-- total stranger. Killian Court's wide open and sunny, and you go sit with your shoulders rubbing. That's the space you picked. That would be very unusual. But same people in a crowded subway car, or on a dance floor-- you might have a very different personal space. So your personal space is not fixed by who you are as a person. It's determined by the solution you find yourself in. And different solutions will result in a different personal space-- a different area per person. And we have this term, delta B bar of k, the partial molar property of mixing, to capture how that changes. How does your personal space change from your ideal situation, where you're in a field by yourself? If you're in a crowded subway car, surrounded by a crowd of people who you like, maybe your personal space shrinks. Or you're surrounded by a crowd of people who you don't really like-- maybe your personal space expands. And there are going to be analogies for all of these in thermo. When a molecule is surrounded by other molecules that it likes, or that it wants to bond with, its volume per molecule is going to shrink a little bit-- its partial molar volume will shrink a little bit. And if a molecule is surrounded by molecules that it doesn't want to bond with, its partial molar volume will expand a little bit. So these concepts depend on the surroundings. So let's make this specific again: Gibbs free energy.
The total Gibbs free energy of a system is equal to the sum of the partial molar Gibbs free energies times the mole numbers-- and because Gibbs is special, this has its own name: chemical potential times mole number. And according to our new bookkeeping scheme, this is the chemical potential of the pure stuff plus however it changed when it entered the solution phase. And then we'll divide through by the total mole count, and we get the molar Gibbs free energy equals the sum of x of k mu of k pure plus the sum of x of k delta mu of k. Again, the first sum is the pure components, and the second is the change upon mixing. We have two more slides, and then we're going to get to the very, very useful graphical interpretation of all this. How do we calculate these things? You're going to do a little bit of this on the p-set. We're going to do the case of chemical potential in a binary system. So, binary system: delta G due to mixing equals delta mu of 1 X1 plus delta mu of 2 X2. You have two components. That's good. And so this gets-- you really have to know your d's of thermodynamics here. We want the change of the Gibbs free energy of mixing, at fixed pressure and temperature, as the system composition varies. What about the chain rule? Well, the terms with d delta mu of k sum to 0, by Gibbs-Duhem at fixed temperature and pressure. So I'm left with these two terms: delta mu of 1 d X1 plus delta mu of 2 d X2. And then I'm going to use X1 plus X2 equals 1, which means what? d X1 equals minus d X2. It's a binary system, so there's only one composition variable. And we get the following: d delta G mix d X2-- I chose X2 as my independent variable-- at fixed pressure and temperature equals delta mu of 2 minus delta mu of 1. And then I'm going to eliminate delta mu of 1 using the above. What's the above? That's the above. And I get the following: delta mu of 2 equals delta G mix plus 1 minus X2, times d delta G mix d X2. And believe it or not, this is really useful. And likewise for delta mu of 1.
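The Gibbs-Duhem constraint used in that chain-rule step-- sum of x k d delta mu k equals 0 at fixed temperature and pressure-- can be verified numerically. This sketch assumes a regular solution model for the binary (the interaction parameter OMEGA and the temperature are made-up illustration values, not data from the lecture):

```python
import math

R = 8.314        # gas constant, J/(K*mol)
T = 1000.0       # K, illustration value
OMEGA = 10.0e3   # J/mol, made-up regular-solution interaction parameter


def dmu1(x2):
    """delta mu of component 1 in a regular solution: RT ln x1 + OMEGA*x2^2."""
    return R * T * math.log(1.0 - x2) + OMEGA * x2 ** 2


def dmu2(x2):
    """delta mu of component 2: RT ln x2 + OMEGA*x1^2."""
    return R * T * math.log(x2) + OMEGA * (1.0 - x2) ** 2


def gibbs_duhem_residual(x2, h=1e-6):
    """x1 * d(dmu1)/dx2 + x2 * d(dmu2)/dx2, which should vanish."""
    d1 = (dmu1(x2 + h) - dmu1(x2 - h)) / (2 * h)
    d2 = (dmu2(x2 + h) - dmu2(x2 - h)) / (2 * h)
    return (1.0 - x2) * d1 + x2 * d2


for x2 in (0.1, 0.3, 0.5, 0.8):
    print(x2, gibbs_duhem_residual(x2))  # all near zero
```

The individual derivatives are large (tens of kJ/mol per unit composition), yet the mole-fraction-weighted sum cancels at every composition, which is exactly the covariation the Gibbs-Duhem equation encodes.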
And let me just show you why, for each one, and we can come back for questions. It's really useful because we have a graphical interpretation of this: a graphical interpretation of PMPs using solution models. We're going to do this for the case of delta G mix and delta mu of k. So let's draw a plot. I'll show you what I mean here. Here's a solution model-- I'm just going to draw anything. Imagine someone gave you this data, or you figured it out yourself, something like that. It's some trend of a thermodynamic quantity with what? With system composition. So this axis is X2, and we have some solution model. And we're going to imagine sitting here at this composition. Let me just go ahead and draw the tangent, and then we'll come back to why it matters. So there's a point on the curve at my overall system composition. I'm going to label the points: this intercept here, this point over here, and this point over here-- P, S, R, and Q. So here's my system composition, there's the tangent line, and we have a solution model. So delta G mix is just the height of this curve-- that's the length QR, the curve evaluated at P. The quantity 1 minus X2 equals PR-- it's just that length. And the slope, d delta G mix d X2, is rise over run: the rise is RS, and the run is PR. And so I'm going to plug these geometries into the expression which we derived on the previous slide. And you get QR plus PR times RS over PR. The PR length segments cancel each other, and you get QR plus RS-- this distance plus this distance-- which, of course, equals QS. What does that mean? It means that this point here, this intercept, is delta mu 2. And likewise for delta mu 1: the intercept over here is delta mu 1. So, right.
That means if we have a solution model-- some data, some curve-- and we draw the tangent to it, we can pick out the partial molar properties of component 1 and component 2 from the intercepts of the tangent on the left-hand axis and the right-hand axis. And so that's where I'm going to leave it for today. Either this is kind of mysterious and wasn't worth the effort, in which case I would ask you just to hold on. Or now it starts to make sense, if you've seen the common tangent construction before-- in which case I'll say, all right, well, we'll get there. Right. So this was a bunch of math around solution modeling.
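The tangent-intercept construction can be sanity-checked numerically. This sketch again assumes a regular solution as the "solution model" (made-up OMEGA and T, as before); for that particular model, the X2 = 1 intercept, delta mu 2, should come out to RT ln x2 + OMEGA*x1^2, which gives an independent check.

```python
import math

R, T, OMEGA = 8.314, 1000.0, 10.0e3  # OMEGA, T are illustration values


def dG_mix(x2):
    """Regular-solution molar Gibbs free energy of mixing, J/mol."""
    x1 = 1.0 - x2
    return R * T * (x1 * math.log(x1) + x2 * math.log(x2)) + OMEGA * x1 * x2


def dmu2_from_tangent(x2, h=1e-6):
    """delta mu 2 = dG_mix + (1 - x2) * d(dG_mix)/dx2.

    Geometrically: follow the tangent at x2 out to the X2 = 1 axis."""
    slope = (dG_mix(x2 + h) - dG_mix(x2 - h)) / (2 * h)
    return dG_mix(x2) + (1.0 - x2) * slope


x2 = 0.3
print(dmu2_from_tangent(x2))                          # tangent intercept
print(R * T * math.log(x2) + OMEGA * (1.0 - x2) ** 2)  # analytic, same value
```

The two printed numbers agree, confirming that the intercept of the tangent line really is the partial molar Gibbs free energy of mixing of component 2 at that composition.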
MIT_3020_Thermodynamics_of_Materials_Spring_2021
Lecture_10_Introduction_to_Unary_Phase_Transformations.txt
[SQUEAKING] [RUSTLING] [CLICKING] RAFAEL JARAMILLO: Right, OK, let's get going. We're going to talk about unary phase transformations today. We started on this road a little bit last time, so we're going to now really dive in. I think I was asked to use the fat pen, so I'm going to do that. So first, a phase transformation is when a material transforms from one phase to another. Sounds pretty straightforward. In this case, because it's unary, there's no change in composition. By definition, there can't be-- there's only one component. But we'll see that more generally, phase transformations come in lots of different flavors, including transformations that are associated with composition changes. And this is the first one we're going to encounter. So we're going to start, as we often do, with water-- a very familiar substance-- and the phase diagram of water. So here's temperature, and here's pressure. Let's see. At low pressure and high temperature, we should have the gas phase. At low temperature and high pressure, we should have the solid phase. And kind of in the middle, we should have the liquid phase. And there are phase boundaries that separate them. We'll just take the solid to liquid transformation as an example, and imagine heating up across the solid to liquid boundary. And we'll ask, what is this process? What actually happens? So what happens: we break bonds. And we do something to the enthalpy. Do we increase it or decrease it? AUDIENCE: Increase. RAFAEL JARAMILLO: Increase. Bonds are low energy states. So to break them, you need to put energy in and pull those molecules apart from each other. So you're going to increase the enthalpy. We also need to increase disorder-- we go from an ordered crystal to a liquid, and so we increase entropy. That's what happens across that phase transformation. Let's talk about it a little more quantitatively here.
So let's see, I'm going to draw two plots, one on top of the other. This is when having the graph paper really comes in handy. Even with the graph paper, I can't draw a straight line. Oh well. So the x-axis on both plots is the reversible heat-- I'm going to heat the solid and melt it. On the first plot, the y-axis is temperature. On the second plot, it's entropy. So let's start over here in the solid phase, and we're going to be heating the system. And at some point, we reach the melting point. So let's say that this is the melting point-- let me draw some lines here. So we're adding heat, we're heating the solid, and at some point we reach the melting point. What does entropy versus heat look like in this region? This is region one-- I'll give you a moment to think about it-- heating the solid. And I want you to recall that ds equals dq reversible over t. So what does entropy versus heat look like? AUDIENCE: [INAUDIBLE] plus or maybe constant. RAFAEL JARAMILLO: I'm sorry, I didn't hear the first thing you said. You said perhaps constant, but there was another answer in there too. AUDIENCE: Perhaps having a very small slope, kind of similar to [INAUDIBLE], a slope that is plateauing. RAFAEL JARAMILLO: Would it be a positive slope or a negative slope? AUDIENCE: A positive slope. RAFAEL JARAMILLO: Positive slope, good, because you can see ds dq is strictly positive-- it's 1 over temperature. All right, and is it going to be curving with a positive curvature or a negative curvature? AUDIENCE: It'd have a positive curvature. RAFAEL JARAMILLO: So it's going to have a negative curvature, because you can see the slope is 1 over t. So as t increases, the slope is going to decrease. So let's draw that, a little bit exaggerated-- decreasing slope. OK, good.
All right, now we've reached the melting point, and I'm going to keep heating-- I'm going to keep adding heat. What happens to the temperature as I continue to add heat energy to the system? AUDIENCE: It stays constant. RAFAEL JARAMILLO: It stays constant as long as I'm in the solid plus liquid coexistence region. That's right. Good. Hopefully you know from experience that at unary phase transformations, the temperature is constant as long as you have two phases present-- you know that from boiling water on the stovetop. So the temperature stays constant. But what happens to the entropy? Does the entropy stay constant as well? AUDIENCE: I'd say, since we're breaking bonds and transitioning to the liquid phase, it would increase. RAFAEL JARAMILLO: Whomever that was, I had a little bit of a hard time hearing you. Can you try again? AUDIENCE: I'd say, since we're breaking bonds and transitioning to the liquid phase, with a lot more disorder, it would increase. RAFAEL JARAMILLO: Good, that's good intuition. You're thinking you're increasing disorder, so entropy has to increase. That's right. And the math bears you out: ds equals dq over t. So as long as there's dq, we have ds. The slope is fixed, so our entropy is going to increase linearly with the added heat. Now, all right, great. We have melted all the solid. Let's keep on heating the system-- what's going to happen now? AUDIENCE: It would kind of look similar to when we were in region one, but I think the slope would increase. RAFAEL JARAMILLO: Thank you. So let's see. We're going to be back in this kind of a regime. I don't know what the temperature dependence of the heat capacity is, but the heat capacity tends to increase with increasing temperature-- you might know that from some experience recently. So I draw that with a slightly decreasing negative curvature.
But it's kind of neither here nor there. And down here, the entropy continues to increase, but at a slowing rate. So let me use a different color and make this really explicit: this slope is 1 over cp of the solid phase-- I drew it as weakly temperature dependent, with some curvature-- and this slope is 1 over cp of the liquid phase. The heat capacities can be temperature dependent, so there's some curvature there. Good. So step one was heating the solid phase. Step two is melting the solid at the melting point. And step three was heating the liquid. OK, so that's good. Now let's draw another plot. Before, I had q as my control parameter-- you're burning fuel, and that's the control parameter. But now I want to parametrically plot the two dependent parameters against each other: s versus t. So I'll start you off here: s versus t looks like this until we reach the melting point. Now, this was step one. What does s versus t look like in step two? AUDIENCE: I think it's still a line. RAFAEL JARAMILLO: Yeah, it's a line. But with what slope? AUDIENCE: I think still a positive slope, but not curving. RAFAEL JARAMILLO: Well, in step two, is entropy changing? You're adding heat energy to the system, so entropy is changing. In step two, is temperature changing? AUDIENCE: It's also heat added in some substance to get the temperature to increase. [INTERPOSING VOICES] AUDIENCE: Sorry. RAFAEL JARAMILLO: Go ahead, please. AUDIENCE: Would it just be like a straight line up-- RAFAEL JARAMILLO: There you go. AUDIENCE: --since temperature isn't changing? RAFAEL JARAMILLO: That's right. In step two, we're adding heat energy, so we are increasing the entropy of the system. And as was correctly pointed out, we're increasing disorder-- we're melting the solid-- so we definitely want that increasing entropy.
But the temperature is locked at the equilibrium melting point. So we just have a vertical line up. And then at some point, at some-- it's after some amount of heat, we have melted all the solid. And we are-- so this was phase two. And we are back in the single phase region. So this is the parametric plot of entropy versus temperature. And there's an important quantity here. The quantity is delta s solid to liquid, entropy of melting. So this is the quantity you're going to find in the databases. In fact, you'll find it in the appendix of the textbook. The entropy-- or if you like, the enthalpy, or heat of melting: delta h solid to liquid equals q reversible solid to liquid equals t melting times delta s solid to liquid. So the enthalpy of melting is another thing which you'll find in the databases. Or maybe you'll find the melting point. You see that there's one equation here that relates enthalpy, melting point, and entropy. So often, the databases, including the textbook, may not give you all three numbers because from any two, you can determine the third. So if you ever go to a database, and this will happen to you, I promise you, in your professional life, where you're looking for, let's say, enthalpy. And the database doesn't list it. It lists entropy, and heat, and equilibrium transformation temperature, you'll be momentarily annoyed. And then you'll say, oh yeah, I'll just calculate the enthalpy change from the entropy change and the melting temperature. That's happened to me more times than I care to think about. OK, good. So these parametric plots happen a lot in thermo. I hope you're getting used to them. You have one on the exam, not quite that, but a parametric plot. All right, so let's have some facts about phase transformations. The higher temperature phase always has higher entropy. So if you have a transformation from a low temperature to a high temperature phase, high temperature phase always has higher entropy. Let's see how that works out.
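The database trick just described -- any two of enthalpy of melting, entropy of melting, and melting point determine the third through delta h = t_melt times delta s -- is a one-liner to code up. A minimal sketch; the water values in the comment are illustrative, not database-grade.

```python
def enthalpy_of_melting(t_melt, ds_melt):
    """delta h (J/mol) from melting point (K) and entropy of melting (J/mol/K)."""
    return t_melt * ds_melt

def entropy_of_melting(t_melt, dh_melt):
    """delta s (J/mol/K) from melting point (K) and enthalpy of melting (J/mol)."""
    return dh_melt / t_melt

# e.g. water: dh ~ 6010 J/mol at t_melt = 273.15 K gives ds ~ 22 J/(mol K)
```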
Let's consider now a generic solid-solid transformation, alpha to beta. Consider alpha to beta. And they coexist at some temperature. That's the transformation temperature. So at that temperature, heating converts alpha. Heating converts alpha to beta. That's the definition of beta being the high temperature phase. So that means that as we convert alpha to beta, dn of alpha equals minus dn of beta is less than zero. We're transforming moles of alpha into moles of beta. We also know that heating increases the entropy, because ds equals dq reversible over t at the temperature that this is happening, which is t alpha beta, and that's greater than zero. We also know that the entropy is extensive. So s equals n of alpha, s of alpha plus n of beta s of beta. And here, I'm going to do-- right around here, it's useful to start using different notation for extensive versus molar quantities. So I'm going to put a dash here. This is consistent with the textbook. Quantities like s of alpha or s of beta, so forth, are molar entropies. That is entropies per mole, which are intensive properties. Quantities with a dash are total entropies and are extensive. So we're going to use that from here on out. It's consistent with the textbook. Previously in the class, I think it's been-- it would have been more confusing than worth it to introduce this notation. But from this point forward, it becomes worth it. So questions. This is a good time for questions on what this means, because we'll be using this notation a whole heck of a lot. OK, well, I'll let you come back. AUDIENCE: Yeah, I have a question. The line where it says heating converts alpha to beta, is that talking about molar quantities? RAFAEL JARAMILLO: Well, here, it's just a phase label. So it's not really molar quantities. But here, what we're saying is that we're converting moles of alpha into moles of beta. So n of alpha is the number of moles of alpha. And n of beta equals the moles of beta.
And so this equation is conservation of mass. And I'm only saying that I'm going I'm transforming from alpha to beta. That's why that's less than zero. I don't know if that's helpful. Ask me again. Let me make the conclusion here. And then ask me again if that's so-- so all these facts that I'm heating, that beta is a high temperature phase, that heating increases entropy, and entropy is extensive, these require-- I wrote r three times, require that s of beta is greater than s of alpha. Hopefully that's intuitive. If I'm converting from alpha and beta and I'm increasing the entropy, then beta has to have more entropy than alpha. That's all there is to it, really. And what this means is what does this actually mean? I had two phase system. Beta is a high temperature phase. Let's use a nice blue for the low temperature phase. And I have a phase boundary between them. The phase boundary is moving. The phase boundary's on the move. The beta phase is expanding at the expense of the alpha phase. So entropy is increasing. All right, so let's talk about building unary phase diagrams. We've talked a little bit about what happens when the phase is transformed. How do we build the diagrams? And I'll remind you that the reading for this is chapter 7. So first, I want to talk about the role of chemical potential. Why do I want to talk about that? Role of chemical potential, well, let's remember that the change-- now, I'm going to use this extensive thing. The change in the total Gibbs free energy of a system equals minus sdt plus vdp plus-- let's see, what do I want to use-- I'll use k. u component-- well, no, this is a phase label because it's unary. So I've got some temperature. I've got some pressure. And I've got the potential to transform between phases. k is a phase label. OK, good. So at fixed temperature, at fixed temperature and pressure, I get to ignore 2/3 of the right hand side of the equation. That's why those are the natural variables. 
At fixed temperature and pressure, equilibrium condition is dg equals-- it becomes really rather simple. So let's talk about what that means in reality. Here's phase alpha. So, two phases-- here's phase beta. And they're going to have a phase boundary between them. And alpha phase has chemical potential mu of alpha. And beta phase has a chemical potential mu of beta. And this phase boundary is open, so mass can flow. So that means that a molecule is free to jump from beta to alpha. So let's imagine this is a dn of beta. And this is a dn of alpha. And we require mass conservation. It means that dn of alpha equals minus dn of beta. So what happens if the chemical potential of beta is less than the chemical potential of alpha? Let me write that down. If mu beta is less than mu alpha, then what will happen? AUDIENCE: Alpha will start going to beta. RAFAEL JARAMILLO: The mass will flow from alpha to beta to decrease Gibbs, right? So there's going to be a little bit of a downhill flow. There's a driving force for the mass to flow from alpha to beta to lower the overall Gibbs energy. Similarly, if mu of alpha is less than mu of beta, then mass will flow from beta to alpha to decrease Gibbs. So what's the one and only scenario in which the system undergoes no spontaneous change, where the two phases coexist without any net flow of molecules from one phase to the other? AUDIENCE: The mu of alpha equals mu beta. RAFAEL JARAMILLO: Right, the condition for phase coexistence is mu alpha equals mu beta. That's right. So if and only if the chemical potentials are equal, then there's no driving force for a molecule of beta to hop into alpha. And there's no driving force for a molecule of alpha to hop into beta. Those transformations will happen spontaneously on a microscopic level. But on a macroscopic level, they'll balance out. There'll be no net flux from one phase into the other.
And so observing the system, we'll see that these two phases are in equilibrium. They're coexisting. And one is not spontaneously transforming into the other. That's the meaning of phase coexistence. So this is yet another example of what I call Gibbs free energy price shopping. Nature molecules, at fixed temperature and pressure, they go price shopping for Gibbs-- lowest Gibbs free energy. Whatever is the lowest, that's what they're buying. That's what they choose. Risking some extreme amount of anthropomorphization here. But I find that helpful to think about. All right, so that's the role of chemical potential. It tells molecules where to go to reach equilibrium. What is chemical potential? What is this mysterious thing that's so important? Well, first of all, just a reminder, chemical potential is the partial of Gibbs by mole number at fixed temperature and pressure. In other words, it's the molar Gibbs free energy. So if you have a system for which Gibbs free energy at t and p and 1 mole equals mu, then the Gibbs free energy for t, p, and some arbitrary number of moles equals n times mu. So that's what chemical potential is. We've done some calculations of how it varies. You're going to do a lot more. Let's talk-- have we done calculate-- yeah, we have. Determining phase equilibrium. That is how do we draw phase diagrams. How do we do it? It's a three-step process. Step A, at each pressure and temperature, calculate mu for all possible phases. k is a phase label. So we're going to calculate this thing, mu, and we'll say equilibrium at a given temperature and pressure is the phase with lowest mu. And as we've seen a couple of minutes ago, if multiple phases have the same mu, then they can coexist. So that's the conceptual way to build unary phase diagrams. And I'll tell you, this is how CALPHAD software does it. So soon, you're going to be playing with CALPHAD software. If you haven't downloaded and installed ThermoCalc student version, please do so. 
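The recipe in step A -- compute mu for every candidate phase at a given temperature and pressure, then take the lowest -- is easy to mock up. A toy sketch of the "Gibbs free energy price shopping": the linear mu surfaces below are invented placeholders, not real equations of state, but the argmin logic is exactly what CALPHAD software does.

```python
def mu_alpha(t, p):
    # invented low-temperature phase: shallow downward slope in T
    return -0.05 * t + 0.002 * p

def mu_beta(t, p):
    # invented high-temperature phase: steeper slope (higher entropy), higher intercept
    return -0.08 * t + 0.004 * p + 20.0

def equilibrium_phase(t, p, phases):
    """Return the name of the phase with the lowest chemical potential at (t, p)."""
    return min(phases, key=lambda name: phases[name](t, p))

phases = {"alpha": mu_alpha, "beta": mu_beta}
```

With these placeholders, alpha is stable at low temperature and beta takes over at high temperature, consistent with the earlier fact that the high temperature phase always has higher entropy (here, a steeper downward slope of mu in T).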
And what that software does, and really, the software helps you gain an intuition for this. What the software does is it calculates mu of k for all the phases that it knows about. And then it goes Gibbs free energy price shopping. It looks for the lowest one. And it makes a map of the lowest ones. And where two phases coexist with the same chemical potential, it draws a coexistence line or coexistence point. So I find it helpful to think of these in three dimensions. And unfortunately, this means I'm going to have to draw in three dimensions. So I'm going to try here to draw a couple of chemical potential surfaces. I think even the process of watching the struggle to draw these will be instructive. Of course, there are better pictures in a textbook. So let's see. Mu of t and p is the surface, right? It's a function of two variables. It's a surface over the tp plane. And each phase has its own surface. OK, so let's see. Let's try to draw that. Let's draw-- all right, so let's draw the first surface. I'm going to draw some phase here. It's going to have-- running out of purple. Don't like running out of purple. All right, let's make this look really nice and three dimensional. All right, t, p, and g. And then let's draw another phase. Let's draw a blue phase. So let's call this phase alpha. And let's call this phase beta. So these-- I'm just drawing sections of surfaces. These are functions, right? This is mu as a function of pressure and temperature for phase beta. And this is mu as a function of temperature and pressure for phase alpha. OK now, let's think about a given temperature and pressure point. Grab another color. Let's think about pressure and temperature point down here. And let's-- if I draw a vertical line, I cut that surface. And then I cut that surface. Whenever I cut it, I draw a little thing there. All right, so at this t and p-- this pen is totally dead-- at this point, I'm drawing with fading pink. 
Let me just toss that marker on to the floor so I'm not tempted again. At that point, what's the equilibrium phase, alpha or beta? AUDIENCE: Alpha the equilibrium phase because it has the lower Gibbs energy. RAFAEL JARAMILLO: Thank you. That's right. So you imagine a number of these manifolds, these surfaces. And the computer, you don't normally have to do this, the computer does it for you. It calculates Gibbs free energy for each point. That's a function, calculates Gibbs free energy as a function of t and p. And at any given point, you can look at this and say, oh, alpha is the equilibrium phase because it has the lower Gibbs free energy. Another thing to point out here is there is a distance, a vertical distance between those two surfaces. It's the change of Gibbs free energy from alpha to beta. And that thing is a function of temperature and pressure. So I want you to imagine that this is a surface with some curvature. This is a surface with some curvature. And as I move my pointer around on the xy plane, the vertical distance between those surfaces will change. In general, it's temperature and pressure dependent. So this is, again, a fancier version of that transformation quantity, which we saw in the first 10 minutes of lecture with entropy. With entropy, we had this transformation quantity. This transformation quantity was a delta entropy at a given temperature. And it's assumed here, this is all at some given pressure. So here's a delta s as I transform, in this case, from the solid to the liquid phase. This is in water. Well now, I have, again, a transformation quantity. I have a delta g. I say transform from the alpha to the beta phase of some unknown material. This is very generic. This will apply to many, many different materials. And now, we've made it explicit that transformation quantity, its distance, is both temperature and pressure dependent. In about a week and a half, we'll see it's also composition dependent. 
And then I no longer have any chance of being able to draw this. But the concepts remain the same. OK, I want-- I'm going to stay here for a minute, even though I'm worried about time. But this is such a critical concept. I want you to-- I want to refer you to a little mini-lecture that I recorded called the three D's of thermodynamics. This is about d, delta, and big delta. It is confusing when you first see it. We have all these change quantities. And so I hope that little mini-lecture helps clarify this point a little bit. I also want to share a page from the textbook, a page from the textbook, which, of course, does very beautiful drawing. This is-- they're drawing this as mu, the same thing as g for a unary system, molar Gibbs free energy for-- let's see, they're drawing this here for a liquid phase. And then they draw separately-- sorry, they're drawing this here for a solid phase. And they're separately drawing this here for a liquid phase. And then when they put the two drawings together, you'll see that there are points where those two surfaces touch. Two surfaces intersect along a line. That's just a geometric effect. So what we're seeing here is that there is a line of phase coexistence, where solid and liquid coexist. And that's what we saw at the very beginning when we started with the phase diagram of water. There's a line of phase coexistence. Those lines are where these surfaces intersect. So I didn't try to draw this on my graph paper. But it's the same concept as what I did try to draw. So let me go back to the camera. And why don't I finish up. And then hopefully we'll have some time for questions. All right, so how do you actually calculate this thing? Calculating delta mu for, let's say in this case, a change of t and p. All right, this thing is a function of t and p. So it's going to vary as we vary t and p for a given phase. Let's see, all right, so we know something. We know that d mu equals minus s dt plus v dp.
That's molar Gibbs free energy. So that means that the change in mu is the integral from initial to final of d mu. That's just fundamental theorem of calculus. And now, I have to plug something in. So now, I'm going to have-- well now, I better draw some coordinates. Let's see. Here is temperature. Here's pressure. Let's say here's my initial point. Here's my final point. And my actual process that I'm trying to engineer might be this. But we're talking state variables here. So I can take any path that I like. And so naturally, I want to integrate first along one variable with the other fixed, and then along the other variable with the first fixed. So let's make that explicit. We have initial, then final. Let's see, I did dp first. I did dp first. v, and this is a function of t and p. And then I'm going to go from my initial to final dt. And here, I have minus s, which is itself a function of t and p. So I'm integrating the change at fixed temperature, and then I integrate this change at fixed pressure. And so this is it as far as the calculus is concerned. This implicitly assumes that we know the equations of state, there's plural equations of state, v as a function of t, p and s as a function of t, p. So formally, it's all well and good. It's relatively easy to write down. In practice, you're probably going to be making some approximations. Or the computer is going to be doing a numerical integration based on database values, or so forth. Right now, we're at the level of concepts. All right, let's go one step further. In general, we don't know v of t, p and s of t, p. But we can calculate from standard values. Let's say s0 and v0, standard values, standard conditions. So for instance, we know that ds equals cp over t dt minus v alpha dp. And we know that dv equals v alpha dt minus v beta dp. And so if we have data, we can take standard values and integrate the change in s and v as we vary temperature and pressure. And this highlights that s of t and p and v of t and p, they're also surfaces over the tp plane.
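The two-leg path integral just set up (pressure leg at the fixed initial temperature, then temperature leg at the fixed final pressure) can be done numerically once equations of state v(t, p) and s(t, p) are supplied. A sketch with a plain midpoint rule; `delta_mu` and its arguments are hypothetical names, not from any library.

```python
def delta_mu(t_i, p_i, t_f, p_f, v_of_tp, s_of_tp, n=1000):
    """Integrate d(mu) = -s dT + v dP along (t_i,p_i) -> (t_i,p_f) -> (t_f,p_f)."""
    dmu = 0.0
    dp = (p_f - p_i) / n
    for k in range(n):                        # leg 1: vary P at fixed T = t_i
        dmu += v_of_tp(t_i, p_i + (k + 0.5) * dp) * dp
    dt = (t_f - t_i) / n
    for k in range(n):                        # leg 2: vary T at fixed P = p_f
        dmu -= s_of_tp(t_i + (k + 0.5) * dt, p_f) * dt
    return dmu
```

Because mu is a state function, integrating the legs in the other order (temperature first at fixed p_i, then pressure at fixed t_f) must give the same answer, which makes a handy self-check on any implementation.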
So your mind has to think about all these surfaces in multi-dimensional space, the surfaces of the state functions. They might be entropy. They might be Gibbs free energy. They might be volume. Depends on what you're trying to calculate. In principle, you could do the calculus if given the equations of state. In practice, it's a computer that's going to be doing it for you. One last-- one last line in this vein. And we'll be done. So let's talk about calculating delta mu for a change of phase at fixed temperature. So we just talked about how to do it for varying t and p. Now, let's talk about change of phase. So delta mu is the same thing as delta molar Gibbs free energy equals delta h minus t delta s. For isothermal process, t is fixed. So we have delta h minus t delta s. At phase coexistence, at phase coexistence, delta mu equals 0. We figured that out about 40 minutes ago. That for two phases to be coexisting in equilibrium, they have to have the same chemical potential. So that means at phase coexistence, delta h equals t delta s. Another-- we were here again about 40 minutes ago. This is a very, very useful expression. And these values, these are tabulated. So turn to the back of the book and you'll see tables of these data. Away from phase coexistence, away from phase coexistence, we can evaluate delta h and delta s, again, from standard data h0 and s0 and the differential forms, which in this case, are dh equals cp dt plus v times (1 minus t alpha) dp. And ds equals cp over t dt minus v alpha dp. Again, these are surfaces. These are state function surfaces over the tp plane. So it's exactly 10:55. And I got through what I wanted to get through. The main thing I want to leave you with is this visual. All of these state functions are surfaces over the temperature pressure plane. They each have their own local curvature as a function of temperature and pressure. When comparing phases, we compare vertical distances between surfaces.
What's the difference between alpha and beta at this temperature pressure point? When calculating changes as a function of temperature and pressure in a given phase, we consider moving along these surfaces. And the calculus that we've established so far to this point in the class gives us everything we need to do those calculations.
MIT 3.020 Thermodynamics of Materials, Spring 2021
Lecture 31: Reacting Multiphase Systems
[SQUEAKING] [RUSTLING] [CLICKING] RAFAEL JARAMILLO: All right, so today is the last lecture of new material. We're going to talk about reactions between gases and condensed phases. So we're going to talk about that, and then, we'll make it specific to oxidation. And then, on Friday, I'll do an extended practice problem, I suppose, on reactions between gases and condensed phases. So in general, some reaction-- I'm going to write this really generally. aA plus bB going to cC plus dD. And these are not just for ideal gases. So the last time we've seen a reaction like this in 020, it's been for ideal gases reacting. And we're going to move away from that and do this more generally. So let me just quickly remind you of how we get to the equilibrium constant. We have a general expression for the change in Gibbs free energy with these unconstrained internal variables, and we can re-express this in terms of the reaction extent, remembering that dn of i over nu of i is a constant for all i-- the reaction extent. I don't need to have an i there. Just xi. And reminding us that mu of i, the chemical potential of a given species, is its reference chemical potential plus rt log activity. So this is still completely general. We take that expression and we write d of g equals-- let's see, sum of nu of i times, mu of i naught plus rt log a of i. And this is all times d xi. And this is delta G0 plus RT log of the product of a of i to the nu of i. Again, whole thing times d xi. And of course, this is 0 at equilibrium, because at equilibrium, Gibbs free energy is optimized. So the coefficient is 0, and we get, at equilibrium, this product, a of i to the nu of i, which we call the equilibrium constant, equals e to the minus delta G0 over RT. So that's just a reminder. But this is why I wanted to take three minutes for that, because we're going to make this specific to metal oxidation. That's what we're doing. So, equilibrium constant for metal oxidation. And what you're going to find is that it simplifies a lot. So that's good.
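The reminder above ends in the key relation: K_eq equals the product of a of i to the nu of i, equals e to the minus delta G0 over RT. A two-line sanity check in code (illustrative only; delta_g0 in J/mol, temperature in K):

```python
import math

R = 8.314  # gas constant, J/(mol K)

def equilibrium_constant(delta_g0, t):
    """K_eq = exp(-delta_G0 / (R t))."""
    return math.exp(-delta_g0 / (R * t))
```

Negative delta G0 gives K greater than 1 (products favored); delta G0 of zero gives K equal to 1.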
Metal oxidation. As we started the last lecture, we have some metal reacting with a mole of oxygen gas going to MzO2. Remember, z here has to do with the fact that we don't know what the oxidation state of that metal is. So potassium would be 1, iron has a couple of different oxidation states. Copper would be 1 or 2-- so there's different oxidation states. So this is doing it generally. So all right, now, we're going to write the equilibrium constant for this reaction. So we have the activity of the oxide MzO2. Sorry, the writing is getting a little bit small there. The activity of the oxide over the activity of the metal to the power of z times the activity of O2 gas. The products in the numerator, reactants in the denominator. The metal is z moles of metal. So we have that activity to the power of z. And straightforward to write down. Everyone with me so far? All right, this oxidation reaction at equilibrium. We're going to come back to that. This is the metal and oxygen and the oxide all coexisting at equilibrium. And now come the approximations. We're going to treat O2 as-- how are we going to treat oxygen, anybody? How will we model oxygen? AUDIENCE: Constant pressure? [INAUDIBLE] question. RAFAEL JARAMILLO: Not constant pressure, but we have a-- AUDIENCE: It's a gas. RAFAEL JARAMILLO: --as a-- what kind of gas? AUDIENCE: Ideal gas. Ideal? RAFAEL JARAMILLO: Yeah, because it's the only model for gases that we know. [INAUDIBLE] So an ideal gas-- the activity of an ideal gas is just its partial pressure by atmosphere for a 1 atmosphere reference state. Good. That's easy. And we're going to assume that condensed phases are pure. That's big. If we assume the condensed phases are pure, that's equivalent to saying that their composition is unchanging, all right? That's saying that this metal does not dissolve any oxygen, and it's saying that this oxide is what? What kind of a compound is always at fixed stoichiometry? AUDIENCE: Like, perfect crystal?
RAFAEL JARAMILLO: Sorry? AUDIENCE: The perfect crystal? RAFAEL JARAMILLO: No, well, not perfect. No, not a perfect crystal, actually. Be something else. We learned this just two days ago. AUDIENCE: [INAUDIBLE] ? RAFAEL JARAMILLO: Mm, yes. It's a pure oxide, meaning that stoichiometry is ideal. It's a line compound. It doesn't deviate from that. So if we can assume the condensed phases are pure, what is the activity of a pure phase? We've got to think back. What's the activity of a component in its reference state? AUDIENCE: 1? RAFAEL JARAMILLO: Right. Thanks, that's recalling material from the middle of the semester, so it goes back a little bit, but the activity of the material in its reference state is 1. So now, I have some really handy approximations. The activity of oxygen gas. We're treating it like an ideal gas, so it's just Po2 by atmosphere. And the activity of the metal and the activity of the oxide are both 1. So it's equivalent to assuming that there's vanishing oxygen solubility in the metal and that the oxide is a line compound. So with those approximations, we have the equilibrium constant is Po2 by atmosphere to the minus 1, and this is going to be equal to e to the minus delta G0 over RT. So you see this simplified quite a lot. Simplified quite a lot. All right, I want to share some slides on binary metal oxygen systems to convince you that they do tend to be pure materials. So let's look here. I grabbed a couple. OK, so here's the tin oxygen system. And you could see that an oxygen-- well, it's just a gas here. Tin down here is a metal until it becomes a liquid. In the liquid phase, it dissolves some oxygen. You can see that the liquid phase here has a solution region. So you can dissolve oxygen in liquid tin, but what about in solid tin? No oxygen dissolves. So you see solid tin will just coexist with this line compound tin oxide. So this is what we're talking about, a line compound of an oxide and a metal that doesn't dissolve any oxygen.
Here's another example, copper oxygen. Again, copper. It's written here in parentheses as if it's a solid solution, but really, there's vanishingly small oxygen dissolution in copper. There's some. There's some, but it's so small as to be invisible on this plot. I'll tell you that when you're making a component for cryogenics or for space applications, you want oxygen-free copper. You can buy it from McMaster, and you want it, because oxygen is magnetic when it freezes, and so a lot of times you want non-magnetic copper. So you have to get oxygen-free copper, but it's there at the parts per million level. It doesn't show up at all on the plot. So again, oxygen not dissolving in copper and these oxides being line compounds. Here's more examples. This is a little counterexample. This is manganese. Manganese actually does dissolve some oxygen. So you see it dissolves 1% or 2% of oxygen. So maybe the activity of manganese should be not quite 1. Maybe it should be 0.99. And similarly, these oxides are not quite line compounds. They have some solid solution region, so that's a counterexample. Another example, the titanium oxygen system: titanium dissolves a lot of oxygen. So this is an even bigger counterexample. Titanium is a funny system. It dissolves a lot of oxygen into solid titanium, but its oxides here are line compounds. So what to make of it? It's good to understand where the approximations hold and where they might not. Questions on these phase diagrams before I go back to the board? AUDIENCE: Is this dissolving O2, the molecule, or just oxygen in general? RAFAEL JARAMILLO: I'm sorry, I didn't quite-- could you repeat? AUDIENCE: Going back to the solution, is this like dissolving oxygen in O2 form or other forms of oxygen? RAFAEL JARAMILLO: It almost certainly dissociates. So this would be dissolving oxygen atoms. Well, I mean, how do you dissolve oxygen atoms?
You expose the material to oxygen gas, and there's some catalytic process by which the oxygen gas splits into oxygen atoms at the surface and the oxygen atoms diffuse into the metal. It's almost certainly not dissolving O2 molecules. Metals that do dissolve oxygen are the basis for a lot of interesting technologies, but we won't have time for that. So let's move back to-- sorry, any other questions on these phase diagrams and interpreting them or applications and so forth before I move back? All right, so we have this really simple expression for the equilibrium constant, and now we want to evaluate delta G0 equals delta H0 minus T delta S0 for metal oxidation. So we're going to evaluate that. So we're going to start with enthalpy. Enthalpy: delta H0 at temperature T equals delta H0 at, let's say, 298, plus the integral from 298 to T of delta Cp for the reaction, dT. So this is general, but here's the-- and you know what's coming is an approximation. So I'm going to tell you for most metal oxides, delta H0-- let's say 298-- is large. These are energetic reactions. Why would the enthalpy of formation of an oxide be large? Does somebody have the sense for why that should be? I'll give you a hint. It's large negative. It's large in the negative direction. What reaction are we talking about here? AUDIENCE: Oxidation? RAFAEL JARAMILLO: Oxidation. So what kind of reaction is an oxidation reaction other than just-- you could say it's an oxidation reaction, but what's actually happening? Maybe somebody who hasn't replied yet today. What's happening on the atomic level during an oxidation reaction? AUDIENCE: Does it include a favorable transfer of electrons? RAFAEL JARAMILLO: Yeah. Yeah, this is due to-- so thank you to the two of you. Due to exothermic nature of electron transfer during metal oxygen bond formation. And I could say it's ionic. So it's an ionic bond formation.
You have a very electronegative element, and that's oxygen, and less electronegative elements, those are metals. And that bonding mechanism is electron transfer, and it's very exothermic. So as a result, delta H's tend to be very large and negative. And this allows us to neglect temp dependence. Neglect temp dependence. We'll just say that it's whatever it is at 298, because that's a convenient reference point. And these are tabulated. The enthalpy of oxidation reactions at standard condition, those are tabulated. So that takes care of the enthalpy. What about the entropy? Entropy: delta S0 at T is delta S0 at some reference temperature, 298, plus an integral from 298 to T of delta Cp over T, dT. OK, so who knows-- who has a guess on what we're going to do? We're going to approximate this as what? What am I looking for? I want to justify what approximation. AUDIENCE: Aren't entropy values usually much smaller? So if the enthalpy is already so big, we could assume entropy is 0? RAFAEL JARAMILLO: I see. So you're right that entropy values tend to be much smaller, but the thing is they're multiplied by temperature. So they do matter quite a lot, because the temperature number can get large. So we're not going to neglect it completely, but I'll tell you-- I'm sorry? AUDIENCE: Neglecting its temperature dependence? RAFAEL JARAMILLO: Yeah, we're going to neglect its temperature dependence. So I'm going to write this down before I get to the why-- why we get to do that. So let me write neglect the temperature dependence, and now, somebody please tell me why we might be able to do this? And it's not just that we really like to make our lives easier. It turns out to be a really good approximation in many cases. What about this oxidation reaction? What's the dominant entropy change? What's the dominant contribution to delta S? AUDIENCE: Could it be when the phase change happens? So when the temperature is constant? RAFAEL JARAMILLO: The phase change meaning from metal to metal oxide? AUDIENCE: Yeah.
RAFAEL JARAMILLO: That's true, but maybe tell me what part of that? AUDIENCE: [INAUDIBLE] condense it? RAFAEL JARAMILLO: Yes, thank you. That's what I'm looking for. The reaction entropy is dominated by-- I didn't see who said that, but thank you so much. By condensation of O2 from the gas. That's it. So here's metal. There's a chunk of metal, and then I have O2 gas. Gas is very high entropy, as you know. O2. And this reaction pulls the oxygen out of the gas and makes a metal oxide. So the reactants have much, much higher entropy than the products, simply due to the fact that the reactants included a mole of gas and the products don't include any gas. And in almost all cases, that change dominates the reaction entropy. And since we're modeling oxygen as an ideal gas, the entropy of this gas is temperature independent. It's configuration entropy. It's the thing we've been talking about pretty much the whole semester, which is the entropy of randomly configured gas molecules. Good. Thank you. So now we have waved our hands, although we've done it scientifically. Scientific waving of hands has happened, and we figured out that we can neglect the temperature dependence of the enthalpy and we can neglect the temperature dependence of the entropy, and this is going to make our lives even easier than they already were. We can solve for this thing. We can solve for that oxygen partial pressure. PO2 by atmosphere equals e to the delta H0 over RT, times e to the minus delta S0 over R. This is the oxygen pressure at which a metal and its oxide coexist at equilibrium. All right, so you've got a metal. You've got its oxide, and this is the oxygen partial pressure, so there's some oxygen here. This is the oxygen partial pressure at which this kind of situation is at equilibrium. What happens if the oxygen partial pressure is higher than the equilibrium value? What do you think happens spontaneously in this system? Think Le Chatelier's principle.
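The solved expression above, P_O2 by atmosphere equals e to the delta H0 over RT times e to the minus delta S0 over R, is quick to evaluate. A sketch assuming the lecture's approximations (temperature-independent delta H0 and delta S0, per mole of O2); the numbers below are illustrative stand-ins for a strongly exothermic oxidation, not tabulated data.

```python
import math

R = 8.314  # J/(mol K)

def po2_equilibrium(delta_h0, delta_s0, t):
    """Equilibrium P_O2 (atm) for z M + O2 <-> MzO2 coexistence.

    delta_h0 in J per mol O2 (large and negative for oxidation),
    delta_s0 in J/(mol O2 K) (negative: a mole of gas condenses), t in K.
    """
    return math.exp(delta_h0 / (R * t)) * math.exp(-delta_s0 / R)

# illustrative: a very exothermic oxidation at room temperature gives an
# absurdly small coexistence pressure
p_room = po2_equilibrium(-900e3, -180.0, 298.0)
```

Any real atmosphere has far more oxygen than such a coexistence pressure, so the metal oxidizes spontaneously; raising the temperature raises the equilibrium P_O2.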
I have the system in equilibrium and then I shove more oxygen gas in. I increase the oxygen partial pressure. What do you expect to happen? How will the system respond? AUDIENCE: Spontaneously oxidized? RAFAEL JARAMILLO: One more time. I'm sorry, I didn't quite hear that. AUDIENCE: Spontaneously oxidized? RAFAEL JARAMILLO: Yes, fantastic. Metal spontaneously. So you're going to have spontaneous conversion of more metal into oxide. We've already established that the oxygen can't dissolve in the metal and the oxygen can't dissolve in the oxide. They're pure components, so all we can do is convert metal to oxide, metal to oxide, metal to oxide. Excellent. Thank you. What about at lower oxygen pressure? If we're below this value, what do we expect to happen spontaneously? AUDIENCE: We will see the reverse reaction happening. So more O2 gas will form? RAFAEL JARAMILLO: How does it form? AUDIENCE: Through the oxide decomposing back into a metal and O2 gas. RAFAEL JARAMILLO: Exactly, thank you. I'm going to use this term the oxide is reduced, but your expression of spontaneously decomposing is exactly the right thing. So if we pull down the oxygen pressure below the equilibrium value, the system will respond by converting oxide into metal and oxygen gas. And you can think of this in terms of Le Chatelier. If I add more oxygen, the system will try to counteract that by condensing the oxygen into more oxide, so it'll take more metal, convert it into oxide by pulling some of my extra oxygen out of the atmosphere. If I pull down the oxygen pressure, the system will try to counteract that by giving off more oxygen gas from the oxide. The oxide will decompose into more oxygen gas and more metal. That's right. So that's what happens. So for example, let's imagine Ti oxidizing. That's a nice reaction. At 298 Kelvin, I looked it up. The PO2 at this equilibrium is-- anybody have a guess? Forget the guess. It's completely naive. 10 to the minus 150 atmospheres.
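As a quick numerical sanity check on the formula PO2/atm = exp(delta H0/RT) exp(-delta S0/R): the snippet below uses approximate literature values for Ti + O2 -> TiO2 (numbers assumed for illustration, not quoted in lecture) and lands in the same might-as-well-be-zero regime as the 10 to the minus 150 atmospheres figure.

```python
import math

R = 8.314  # gas constant, J/(mol K)

def po2_equilibrium(dH0, dS0, T):
    """Oxygen partial pressure (atm) at metal/oxide coexistence,
    neglecting the temperature dependence of dH0 and dS0:
    PO2/atm = exp(dH0/(R*T)) * exp(-dS0/R)."""
    return math.exp(dH0 / (R * T)) * math.exp(-dS0 / R)

# Ti + O2 -> TiO2, approximate standard values per mole of O2 (assumed)
dH0 = -944e3  # J/mol, very large and negative, as for most oxidations
dS0 = -186.0  # J/(mol K), dominated by condensing one mole of O2 gas

print(po2_equilibrium(dH0, dS0, 298.0))  # on the order of 1e-156 atm
```

With these assumed inputs the result is a few parts in 10 to the 156, which is the same "it might as well be 0" conclusion as in lecture.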
A completely silly, make believe number. It's 0. It's 0. It might as well be 0. What does that mean about titanium metal? AUDIENCE: It's very favorable for it to oxidize. RAFAEL JARAMILLO: Very favorable to oxidize. Titanium metal, if you see it in the air, will actually be titanium metal with a thin layer of its own native oxide on top of it. Because if you were to remove the oxide and wait a split second, the oxide would spontaneously reform. And so this spontaneous oxide formation process is really important for a lot of reasons. It's important for microelectronics, it's important for stainless steel. That's how stainless steel remains stainless. You have spontaneous formation of a passivating, or you can think of it as a protective, oxide layer, and the engineering of those materials continues. It's all interesting stuff. All right, so that is enough to take us to Richardson Ellingham diagrams. So we're going to do this. I can't really motivate this, because it sounds kind of random what we're going to do. So I'm just going to tell you what it is, and at the end tell you why it's useful. We're going to plot delta G0 as a function of temperature for metal oxidation reactions. As you know, it's like this. And so this is a pretty simple thing. This is a line with slope minus delta S0 and intercept delta H0. This is a really, really simple line. So here is temperature. Let's see, in Kelvin. Here's 0, here's 0. Here's delta G0, and we have a line. We have a line, a slope, and an intercept. Slope, intercept. That is a Richardson Ellingham diagram. Now, why did we do that? A couple of reasons. Let's start by talking about the PO2 scale. What's that? The equilibrium constant is PO2 by atmosphere to the minus 1, which is equal to e to the minus delta G0 over RT. So I'm just going to rearrange that and write delta G0 equals RT log PO2 by atmosphere. This is a line on a delta G0 by T plot with slope R log PO2 and 0 intercept. Let's draw that.
All right, here is temperature in Kelvin. 0 is 0. Here's delta G0. I'll draw the oxidation of a given metal just to have something on there. And now, I'm going to draw these two contours, right? Delta G0 equals RT log PO2 over atmosphere. So I'm going to draw a series of lines all with 0 intercept and varying slope. So there's a series of lines with 0 intercept and varying slope. And as I move in this direction, I'm increasing PO2. So that's what a Richardson Ellingham plot is. Now, I get to tell you why it's useful. All right, so if I rushed through the explanation of what it is, it's because I just have to tell you what it is. But now, I can show you why it's useful. So why is this useful? It's useful for materials processing. Let me illustrate that. Let's consider two metals, and I pick these just because they appear nicely separated on the plot. Let's consider tin and manganese. If I look up the Richardson Ellingham diagram for tin and manganese, I could do that, but I'll just draw qualitatively what it looks like. It's temperature in Kelvin. It's delta G0. And I'm just here to tell you that the tin line sits well above the manganese line. So that's just a fact about these metals. This here is tin plus O2 going to tin O2. That's what that is, and this is 2 manganese plus O2 going to 2 manganese oxide. And here are my PO2 contours. What is this plot telling me? It's telling me that-- I have a typo on my scanned lecture notes, I'm sorry. At any given temperature, PO2 for the manganese oxidation reaction is lower than for the tin oxidation reaction. That's a useful fact. How do I see that? Let's say at a given temperature, let's see that temperature here. I have this point on the tin oxidation line. And this point on the manganese oxidation line.
Well, the PO2 contour that runs through the manganese oxidation line, intersecting it at that temperature, corresponds to a lower partial pressure than the PO2 contour that runs through the tin oxidation line for that given temperature. Again, at a given temperature, the PO2 partial pressure contour for manganese oxidation is a lower partial pressure than the contour for tin oxidation. What does that mean? Manganese metal will reduce tin oxide. So if you were to put tin oxide-- let's say an ore-- into a furnace in the presence of manganese metal and heat it up, the manganese will convert to manganese oxide and then tin oxide will convert to tin metal. Why is that? Manganese has a higher affinity for oxygen than does tin. Or saying the same thing, the enthalpy of oxidation is more negative for manganese oxidation than for tin oxidation. So you can see that from the plots. The intercept of these plots is delta H. So you can see the intercept for the manganese oxidation-- that's at this point down here-- is lower. Let me extend these lines. The intercept of the manganese oxidation reaction is lower than the intercept of the tin oxidation reaction. The enthalpy of manganese oxidation is more negative. Or if we are speaking a little bit colloquially, manganese metal pulls-- in quotation marks-- oxygen out of tin oxide. So everything I just wrote, these are all saying the same thing about metal oxygen bonds. All right, these are all equivalent statements. They're just ways of interpreting metal oxygen bonding and the implications of that for materials processing. If you're going to be processing manganese in the presence of tin, you need to know which one is going to have a higher PO2 at equilibrium for oxidation, because it's going to determine your process outcome. I want to share with you some pictures of Ellingham diagrams. So there's lots of Ellingham diagrams out there. Here is one representation that's a little busy, but you can see lots of things here.
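The tin/manganese argument can be sketched in a few lines of code. The delta H0 and delta S0 numbers below, per mole of O2 consumed, are approximate values assumed for illustration, not figures from the lecture; the comparison of the two Ellingham lines is the point.

```python
R = 8.314  # gas constant, J/(mol K)

def dG0(dH0, dS0, T):
    """Standard Gibbs energy of an oxidation reaction, per mole of O2,
    with temperature-independent dH0 and dS0 (an Ellingham line)."""
    return dH0 - T * dS0

# Approximate standard values per mole of O2 consumed (assumed):
SN = (-578e3, -206.0)  # Sn + O2 -> SnO2
MN = (-770e3, -150.0)  # 2 Mn + O2 -> 2 MnO

T = 1000.0  # K
g_sn, g_mn = dG0(*SN, T), dG0(*MN, T)
print(g_mn < g_sn)  # True: the Mn line lies below the Sn line at this
                    # temperature, so Mn metal will reduce SnO2
```

Lower line means lower equilibrium PO2, which is exactly the "manganese pulls oxygen out of tin oxide" statement above.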
You can see many metals which are all represented. Let me grab my laser pointer. Sometimes the laser pointer is stubborn. So here's the iron oxidation reaction, here's nickel, here's copper. As you move down, you're going to metals that have higher affinity for oxygen. So copper is sometimes considered noble. So it doesn't readily form an oxide. You remember we talked about roof flashing. It takes a long time for it to turn green. As opposed to calcium, right? If you have calcium metal around, you should look out, because it's going to explosively oxidize. So down here, you have calcium, magnesium. Aluminum. Aluminum is a very energetic oxidation reaction, but we know it forms a passivating oxide. So you don't have exploding aluminum all over the place. You have almost instantaneous formation of aluminum oxide on the surface of aluminum. Titanium, silicon, manganese, chromium. So this is a limited Richardson Ellingham diagram, because it only shows about a dozen or so metals. Of course, you can get very, very busy plots. I want to point out one more thing, which is the PO2 scale. PO2. And you see it's a series of numbers running from 1 atmosphere to 10 to the minus 200 atmospheres. And each of these numbers has a little tick mark, and the tick marks all point towards the origin. You see these tick marks, they're at different angles. Why is that? Because it's asking you to imagine a series of straight lines connecting the origin to those values. Those series of straight lines connecting the origin to-- I've never done this before. I'll draw the line. So here is-- oh, there we go. Look at that. So this would be a line of 10 to the minus 50 atmospheres of oxygen. So for example, you might say that zinc and its oxide are at equilibrium at 400 degrees C and 10 to the minus 50 atmospheres of oxygen. That would be one way to read this plot. And you also see that the delta H of formation here is correlated to the electronegativity.
So this idea that metal oxygen bond formation is energetic, well, we know that, but it's more energetic for less electronegative metals. So we expect noble metals like silver and copper to be fairly electronegative. They like their electrons, 1.9, 1.93. And we expect alkali and alkaline earth metals-- alkaline earth metals like magnesium, right? They are happy to give up their electrons to oxygen. And that has a lower electronegativity. Calcium is down here at 1. So there's some solid chemistry here. There's one more thing I want to tell you about Ellingham diagrams, and that is the effect of phase transitions. So let's consider melting of the metal or its oxide. Melting of the metal-- or of the oxide-- causes discrete jumps in those values. So for example, let's consider solid metal plus O2 going to MO2 solid. Reaction one. This has standard entropy delta S0 1. It's negative. It's negative, because we're pulling oxygen gas out of the gas, right? Solids. Solids. Now, consider the case where the metal melts at a lower temp than MO2. So metals have a lower melting point, which is true for most but not all oxides. And raise the temp until the solid metal becomes liquid metal. So that's what we're going to do. We're going to consider that, and now we're going to consider the oxidation again, except now we're oxidizing liquid metal. So the oxide is still solid but the metal is now liquid. And we'll call this reaction 2, with delta S0 2. Well, we know that the liquid metal has higher entropy than solid metal. We know that. So what that means is that delta S0 2 is going to be smaller than delta S0 1, and they're both less than 0. In other words, it's going to be even more negative. On a plot, it looks as follows. There's temperature. This is delta G0, and we have a kink in the plot. The kink happens at the melting point. So here is the metal melting at that temperature. The lower curve is the solid oxidation reaction-- solid metal oxidation.
The upward curve is liquid metal oxidation. This intercept is delta H0 for solid metal oxidation, and this intercept is delta H0 for liquid metal oxidation. So let's go back to some real plots. So now we can understand why there are break points in these. There are break points in these curves, because at these transition points, the standard enthalpy and entropy of these reactions changes. So this melting point here of aluminum, it changes the slope very slightly. You can't really see it. You can't really see it, but there is a slope change. Then up here, the aluminum boils. So there's another break in the slope. Here, manganese melts. Here, the manganese melts. Here, zinc melts and zinc boils. So forth and so on. And if you want to really lose your mind, you can look at more complete Ellingham diagrams. So this is posted on the website. And this is a very, very, very thorough Ellingham diagram now with so many elements and melting points and boiling points and so forth. And you can start to look at these and learn what the melting points and the boiling points of the metals and the oxides are, and you can see how they change the slope of these curves. There are cases where the slope actually dips downward for a little while. That's typically a melting point of an oxide and so forth. And if you really like to explore this a little more, which I encourage you to do. It's a nice way to learn. Go to DoITPoMS page with Ellingham diagrams. For those of you who don't know DoITPoMS, you should know DoITPoMS. It's a really excellent resource for learning material science concepts. It's maintained by University of Cambridge, and it's a nice complement to things like OpenCourseWare and MIT. Let me share that just to show you where you might go to play with this a little more, get a feel for it. So here is the DoITPoMS page for Ellingham diagrams, and if you click on oxides, there's a whole bunch of oxides. And so let's go at random cobalt oxide. See Ellingham diagram. 
So it now shows you Ellingham diagrams for cobalt for its two different oxidation states. And it will give you data, you can mouse around, get the actual numbers. Change the temperature. It gives you free energy information, and so forth. And it's a nice learning tool. So now I will call it a day and stop recording.
MIT_3020_Thermodynamics_of_Materials_Spring_2021
Supplemental_Video_GalliumIndium_Eutectic_Demo.txt
[SQUEAKING, RUSTLING] [CLICKING] AKSHAY SINGH: In this video, we will look at different compositions of gallium and indium and at what temperature they will melt. First, we start with solid indium. We put a small piece of solid gallium. And we just shake it for some time, maybe around one or two minutes. And then it is a liquid alloy. Why does this happen? So this is the phase diagram of gallium-indium system. As you can see, gallium melts around 30 degrees Celsius, which is close to room temperature, as indium melts around 157 degrees Celsius. We have prepared four different compositions. A is pure gallium. And it all melts at room temperature but not quite. Mixture B, which is a composition that's close to the eutectic composition. It's a liquid-like material. But you can see, when we shake it, when we shake the cuvette, you can see the movement. And C, similarly, close to the eutectic but at a higher temperature. So it moves less, whereas D is, as we saw before, solid indium, which melts at 150 degrees Celsius. So it's solid at around 30 [AUDIO OUT].. Take another look at composition B, which clearly is a liquid. Now, let's dynamically create a eutectic alloy. So what we'll do is-- so for this, we start with almost room-temperature gallium, slightly higher, around 30 degrees Celsius. And we put a piece of indium in there. And we will see how they mix. So we put this piece of indium. And we will gently mix it in the next minute or so. So as you mix it, this is a piece of indium that actually melts at a much higher temperature. It melts because we are going down the composition slope into close to the eutectic, which melts at room temperature. This is a very cool way to create a eutectic alloy at temperature and also trying to understand the phase diagram of this system.
MIT_3020_Thermodynamics_of_Materials_Spring_2021
Lecture_13_Introduction_to_Ideal_Gas_Mixtures.txt
[SQUEAKING] [RUSTLING] [CLICKING] RAFAEL JARAMILLO: OK, so today we are starting a new topic. We're done with unary phase diagrams. We are going to start reacting gas systems. And it's useful, not just because it's useful in its own right, but because it's an entree for solutions. So just a bit of a roadmap here: we have the start, and then we went to-- we did some heat engines. We did a little bit of heat engines, a little bit. And this is where the course 2 people get off the bus, at the engines. But we stayed on the bus, and then we're going to do a little bit of reacting gases. We're going to do a little bit of that. And after this, this is where the course 10 people get off the bus. They get off the bus here, but we stay on the bus. And reacting gases-- we're doing it because in addition to being useful, it sets us up for solutions. And I mean mixtures, not answers. So this is where this goes. And I find this interesting because thermo is a weird topic in that it's taught by, like, seven or eight different courses at MIT. You don't have that with multivariable calculus. You don't have seven different classes teaching their own multivariable calculus. But thermo is really unique in that we have so many different disciplines teaching thermo, and we each have our own flavor. So that's that. So anyway, let's get started. Gas mixtures-- and what do I mean? Homogeneous now, homogeneous multi component. So this is a complement of what we just did. We did unary. Unary was one component, multiple phases. Now, we have multiple components, one phase. So let's get that clear in our head. So let's talk about air. Air, what's in air? Here's my bucket of air. What's in my bucket of air? AUDIENCE: Primarily oxygen, then nitrogen. RAFAEL JARAMILLO: Primarily nitrogen-- about 80% by mole-- then oxygen. So it's a mixture of those. What else is in there? Just out of curiosity, what else do you know to be in air? What's the third dominant component?
AUDIENCE: Argon maybe. RAFAEL JARAMILLO: Sorry? AUDIENCE: Argon. RAFAEL JARAMILLO: Argon, yes. Yeah, sorry, I had a hard time understanding you for a minute. Yeah, argon is the third major-- the third-- it's not a principal component because it's 1% or something. But it's, I think, the third most prevalent component. H2O, we know H2O is in air. We talked all about humid air. CO2, of course, is in there. And there are others. So let's say that that's air. So there we have our bucket of air. So the phase is gas. The components-- the components are what we just mentioned. There's nitrogen. There's oxygen. There's argon. There's water. There's carbon dioxide and so forth. Those are the components. And the components are not to be confused with elements. Elements are nitrogen, oxygen, argon, hydrogen, carbon. So don't confuse elements and components, although sometimes they overlap. All right, let's get right into Dalton's law because this is important. We've talked about it in the previous lecture. This is a little bit of a preview. Apparently Professor Carter has been talking about it. So it's in the air. Get it? That was an air pun. Dalton's law of partial pressures-- Dalton's law of partial pressures. All right, so I'm just going to write it out. In a gas mixture-- so gas mixture-- each component contributes a partial pressure in proportion to its mole fraction. All right, so this is full of thermodynamics terms here. Let's highlight component, mole fraction, partial pressure. It's also a mixture. It's full of thermo concepts in here. And I won't circle gas because that's not really new for us. But, all right, so what does that mean? So, for example, if I have two components, this is how we label it. The pressure of the system equals pressure 1 plus pressure 2. I think you'll remember that we use subscripts for components. Subscripts are for components. We won't have any phase labels here because it's one phase. It's only gas.
So we don't have to worry about phase labels, but we do have subscripts for components. And in order to find the partial pressure for component i, you take the total pressure, and you multiply by the moles of i over the total moles of gas in the system. And we already have seen that we have a symbol. We use X for the mole fraction-- mole fraction sub i. All right, so there's a lot here actually. It may look simple. That's sort of a deceptively simple question. Are partial pressures real? Are these real measurable things or are these concepts that we just use out of convenience? AUDIENCE: Aren't partial pressures just, like, different pressures at different places within, like, a material? RAFAEL JARAMILLO: So we're assuming that the pressure is everywhere uniform. So it can't be different places, not exactly. But maybe you're on to something. AUDIENCE: I think I've seen, like, the pressure is uniform, which is kind of like force over area. So if we take any random area, since, like, a fraction-- like 50% or like 80%-- is nitrogen, 80% of the, like, momentum hitting that area is from nitrogen molecules. So it's kind of like having 80% contribution to the total pressure. But it's because of its mole fraction. Maybe, like, if we have a chemical reaction that's only permitting one, it would only work as if there's, like, 80% pressure of the nitrogen that penetrates. RAFAEL JARAMILLO: OK, that's good. So here's a gas mixture, red and green. And those gases are in a box. So one thing you can think is pressure is force per area. So these gases-- the molecules hit the side of the box, and they bounce off. And maybe 80% of the force experienced by any given patch of the wall comes from nitrogen and 20% comes from oxygen. If it's air, 80 20. So that's a good way of thinking about it. Or you could say 80% of all the molecules that hit the wall are nitrogen and 20% are oxygen. So the pressure is proportional to the mole fraction.
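Dalton's law as stated here, P sub i equals X sub i times P, can be sketched for air. The composition numbers below are the approximate ones mentioned in lecture (nitrogen roughly 80% by mole, then oxygen, argon, CO2), rounded for illustration.

```python
def partial_pressures(moles, P_total):
    """Dalton's law: P_i = x_i * P_total, with x_i = n_i / n_total."""
    n_total = sum(moles.values())
    return {comp: P_total * n / n_total for comp, n in moles.items()}

# Approximate dry-air composition, in moles per 100 mol of gas (assumed)
air = {"N2": 78.1, "O2": 20.9, "Ar": 0.9, "CO2": 0.04}
pp = partial_pressures(air, P_total=1.0)  # total pressure of 1 atm

print(round(pp["N2"], 3))  # about 0.781 atm of nitrogen
print(sum(pp.values()))    # the partials sum back to the total pressure
```

This is the "80% of the molecules hitting the wall are nitrogen" picture from the discussion, written out.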
That's exactly what we're saying, that-- but that-- that's the right way to think about it. But you can really go down the rabbit hole. Because if you have a force transducer, it measures force. It doesn't measure chemistry. It doesn't know whether what hit it was a nitrogen molecule or an oxygen molecule. So you can really head down this path. And you can convince yourself that partial pressure-- although it's very adjacent to a lot of measurable things, and it's very, very adjacent to electrochemistry-- it may not seem that we're close to electrochemistry. We really are. We're very close to electrochemistry. It itself is kind of an immeasurable concept. Anyway, I don't want to go down that rabbit hole, but it's worth thinking about. It's interesting. So let's move on with foundational concepts here. We did partial pressure. Let's talk about reactions. Reactions can change. They can change components. Or they can change phases or both, but not elements. So right now, we're doing multi component, not unary. So now, we're dealing with reactions that can change components, but everything is remaining in the gas phase. So phases, that's later. I mean, we actually did that already when we did unary transformations, but we didn't talk about them like reactions. But you can. It's fine. And this is course 22. So if you're changing the elements, go to course 22. But the formalism is the same. It actually is. It's just-- all right. So reactions can change components and phase. So here's an example. Let's see, here's an example-- a reaction that changes components but not phase. A reaction that can change its components but not phase. 1/2 nitrogen gas-- I'll carry the phase labels here just in a simple way just for-- since we're just getting started-- plus oxygen gas equals NO2 gas. So same phase-- this is a mixture that changed components, but not the phase. Here's another example-- a reaction that changes components and phase. A reaction that changes components and phase.
You can have H2 gas plus 1/2 O2 gas go to H2O liquid. So that's a common reaction. We're not going to deal with reacting heterogeneous systems for weeks and weeks. So we're not going to deal with that yet, but I just wanted to let you know that everything we're writing down here-- the formalism for reactions-- will apply to more complicated situations. Now, before I move on, I want to-- the attendance is low today. So if there are reasons for that that I should know, please tell me. You won't hurt my feelings. I'm really curious. Why do I bring that up now? I want to remind you, those who are here today, that the reading for today and on Monday is in Denbigh. So the PDF is on the website. So we're changing books because Denbigh treats reacting chemical systems much better than DeHoff. Denbigh is a chemical engineer. This is an exceptionally well-written text. So don't be put off by the fact that it's got some years on it. It's exceptionally well written. And as with the rest of the class, if you're not doing the reading associated with lecture, if you're not doing the reading ahead of time, and if you're not working the problems in the text, you're not getting your money's worth. So please do keep up with the reading, especially on a topic like this where it's a little bit-- it's just a little bit outside of the mainstream of material science. It's not just covered in DeHoff. And there are so many textbooks on this topic, and I picked the one that I think is just the most elegantly written, really intellectually very clear. So I hope that you find the same. Also, some of the problems on the p set are straight out of Denbigh. So there's that too. All right, so let us move on, more really foundational concepts. Ideal gas mixtures-- and throughout this course, I often use mixtures or solutions interchangeably. They're always going to mean the same thing. All right, so we're going to set up a little bit of a toy problem, sort of a thought experiment here. 
Here's the thought experiment. It's a really simple one. We've got a box. And as so often happens in thermodynamics, we find ourselves with the box that's been partitioned. And we have now two components in their pure state, not mixed. Let's call this pure number 1. You can think of this as a nitrogen, pure nitrogen gas if you like. It's at some-- it's at a temperature. It's at a pressure. And it is at a volume, and I'm using that tag there for an extensive quantity of volume. So there's pure number one. And then we've got in the other part, pure number 2. It's orange and green. I'm like three days delayed for St Patrick's Day, but I got there. All right, temperature, pressure, and volume. And although it may not look like it in my drawing, I intended for those volumes to be equal. So here's a very simple situation. And this really brings us back to the baby book almost. We've got these gases, and we're going to mix them. Are there any other national flags that combine orange and green? Those colors have nice contrast, orange and green. AUDIENCE: Ireland, I think. RAFAEL JARAMILLO: I think there's orange in the Indian flag. AUDIENCE: This-- yeah, I think some country too called, like the Ivory Coast that's basically just the Irish flag reversed. RAFAEL JARAMILLO: OK, OK, all right. Any others? Ivory Coast, I think India had the orange and green in the flag, Ireland. Anyway, all right, so now we've got a mixture of these things. So they mixed. There we go. And so we have a mixture of 1 and 2. I'll claim or just postulate for this model system that temperature didn't change. The pressure didn't change. And we have twice the volume. And if you don't believe me, it's PV equals nRT. You solve for the moles of this. You solve for the moles of this. You add them together, a very simple situation. Why is it so simple? It's because ideal gas molecules don't interact. That's why it's so simple. They don't interact. 
Each component behaves as if it undergoes-- and here is kind of the less intuitive part. Each component behaves as if it's undergoing an isothermal expansion. And you could either choose to believe this or disbelieve this. It may or may not make sense. If it doesn't make sense, that's fine. Just treat it as a model. I'm showing you a model. It may apply. It may not. All right, so we know how to treat that. Delta G of component i equals n i RT log its partial pressure over the pressure. This is a really critical equation that comes from this assumption. We're using an expression for delta G for an isothermal process and an ideal gas. And we are modeling that as an expansion from pressure P to partial pressure P sub i. So the Gibbs free energy is its starting value plus the change, where G i has a 0 on it because it's in some standard state, however it started off. So they started off in a standard state. So this slide here-- this board-- there's a lot of intellectual content here. The equation might look simple, but there's a lot of intellectual content here. So it's going to take a little while and some practice to understand this fully. But I'll restate the assumption here. This model is that each component behaves as if it undergoes isothermal expansion. And if you buy that, then you can use the apparatus which we've already established, which is calculating the change in Gibbs free energy for an isothermal process as each gas expands from initial pressure P to partial pressure P sub i. To follow up on my earlier comment that we're really close to electrochemistry-- and for those of you who have seen the Nernst equation or done any batteries research, this looks a lot like the Nernst equation actually. And it's just a hop, skip-- it's a little bit of a stutter step from here to get to talking about electrochemistry. So if you don't care about electrochemistry or haven't been exposed to that yet, don't worry about it.
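A minimal numerical sketch of this isothermal-expansion model, with example numbers of my own choosing: one mole of a component that ends up at mole fraction one half in a mixture at fixed total pressure is treated as expanding from P to P sub i, and its Gibbs energy change comes out negative.

```python
import math

R = 8.314  # gas constant, J/(mol K)

def dG_isothermal(n_i, T, P_i, P):
    """delta G_i = n_i * R * T * ln(P_i / P): component i modeled as an
    ideal gas expanding isothermally from pressure P to partial pressure P_i."""
    return n_i * R * T * math.log(P_i / P)

# one mole mixed 50/50 at fixed total pressure of 1 bar: P_i = 0.5 bar
dg = dG_isothermal(1.0, 300.0, 0.5, 1.0)
print(round(dg))  # about -1729 J: the component's Gibbs energy drops
```

Because P sub i is always at or below P, this per-component term can never be positive, which is the seed of the mixing argument that follows.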
But I think those connections are kind of neat for those of you who have been exposed to that. So now, let's talk a little bit about notation. To remind you here, this is a reminder. G of i prime is an extensive property of component i. That is the total Gibbs free energy of all the stuff in component i. G sub i without the prime is a molar intensive property. That is Gibbs free energy of i per moles of i. So you could read that Gibbs per mole of i. Here's a new concept-- G of i 0, molar Gibbs free energy of component i in its standard state. What is that? I'm going to define it for gases-- standard state for gases. It's a pure gas that's unmixed at standard pressure-- that's one bar-- and the same temp as the solution or mixture in question. So concept, concept, concept, concept-- these two are not new to you today. They are-- they can be confusing, but they're not new today. And all of this is consistent with DeHoff. This is new today. We're introducing standard states. And standard states are defined somewhat differently than you might have seen before. The standard state for gases, to be specific for today, is the pure gas, unmixed, at one bar but at whatever temperature the problem you're analyzing is. So it's not the same thing as STP. It's not standard temperature and pressure. It's not one bar 25 degrees C. It's whatever you have before you make the mixture. And the process of making mixtures is so central to material science, I'm going to slide backwards here. We have a pure gas and another pure gas, a pure substance and another pure substance. They're both at the same temperature. I don't know what temperature that is. It's just a parameter, but they're at the same temperature. The standard state of pure component 1 is this temperature and one bar. The standard state of pure component 2 is this temperature and one bar. And those are standards because it's what you have before you make the mixture. So I'll use this analogy often in this course.
You imagine you have a beaker of A and a beaker of B. And you're at whatever temperature you want to make your solution at, whatever temperature you're processing your material at. Beaker of A, beaker of B-- they're at temperature. And you mix them, stir it around. That's material science. The standard state of a substance is however it's found when it's pure at one bar at the temperature at which you're forming your solution. All right, let's move on. Chemical potential-- chemical potential in ideal gas mixtures. So we know that for any sort of change of temperature, and pressure, and mole number, this is the change in chemical potential. So mu i-- and we've been here before-- equals dG by dn i at fixed temperature, fixed pressure, and fixed moles of everything else. All right, this is now. From before we have mu of i equals mu i 0 plus RT log P of i over P0. So chemical potential is molar Gibbs free energy. We calculated the change in Gibbs free energy for the mixture process. And so we can write an analogous equation for chemical potential. So again, how to read this. Chemical potential of component i-- chemical potential of component i dot dot dot relative to its standard state dot dot dot at temp T and partial-- so the chemical potential of component i relative to its standard state at temperature T and partial pressure Pi. That's how to read that. And we're going to use this so very, very much. I think it deserves a purple square. I'm going a little fast through this material, so I remind you to please interrupt me with questions. Here, let's get a question. Do ideal gases mix spontaneously? Right, do ideal gases mix spontaneously? Here's my setup from lecture 1. These are ideal gases, two different components. Are they going to mix spontaneously? AUDIENCE: Yes. RAFAEL JARAMILLO: Yes. Yes, they do. OK, so that you knew on day one, I think. Most of you sort of got that on day one. But now, we can answer this question. Why? What is the driving force?
On day one, we talked about randomness and entropy, and it just sort of seemed like what would happen. And we said oh, this is going to be an entropy-driven process, which is true. That's true. But now we have a more useful way of analyzing this problem in general, which is: what is the driving force? At fixed T and P, the driving force is to lower the Gibbs free energy, the total Gibbs free energy. So Gibbs free energy equals the sum of the molar Gibbs free energies-- and the molar Gibbs free energies are their reference values plus the change. I can combine these, and I find that the Gibbs free energy is the Gibbs free energy before mixing plus delta G due to mixing, and delta G due to mixing is the change of Gibbs free energy when this spontaneous process happens. And you can plug and chug here. It's the sum over i of ni RT log Pi over P. And here's the kicker-- positive or negative? What should it be? Without-- forget the math. What should it be? Should this be positive or negative? The process will happen spontaneously. AUDIENCE: Negative. RAFAEL JARAMILLO: Got to be negative. If a process happens spontaneously at fixed temperature and pressure, the driving force is lowering the Gibbs free energy. The change of Gibbs for that process must be negative if it is to be spontaneous. Does the math work out? This is strictly positive. This is strictly positive. This is strictly positive. What about the partial pressures? AUDIENCE: The Pi is strictly lower than P, so it's like a negative log. RAFAEL JARAMILLO: Pi equals Xi P. Xi is less than or equal to 1 by definition. You can't have a mole fraction greater than 1. Therefore, the Pi's are less than or equal to P. So log Pi over P is less than or equal to 0. This is our driving force for mixing. The driving force for mixing is that the chemical potential of each and every component is lowered by mixing. So we're just a little more than 30 days past the start of the semester.
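The sign argument above is easy to check numerically. Here is a minimal sketch (the function name and the 300 K example are my own, not from the lecture) of delta G of mixing for ideal gases; since P_i over P equals x_i, the total pressure drops out:

```python
# Minimal sketch: Gibbs free energy of mixing for ideal gases.
# delta_G_mix = sum_i n_i * R * T * ln(P_i / P), and P_i / P = x_i.
import math

R = 8.314  # gas constant, J/(mol K)

def delta_G_mix(moles, T):
    """Gibbs free energy change on mixing pure ideal gases at fixed T and P.

    moles: mole numbers of each pure gas before mixing.
    The total pressure cancels because P_i / P = x_i = n_i / n_total.
    """
    n_total = sum(moles)
    return sum(n * R * T * math.log(n / n_total) for n in moles)

# Equimolar mixing of two gases at 300 K: every x_i < 1, so every log term < 0.
dG = delta_G_mix([1.0, 1.0], T=300.0)
```

Every term is negative whenever more than one gas is present, which is the whole argument: mixing lowers G, so it is spontaneous.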
And at the start of the semester, you knew this would happen, and you sort of-- we were able to discuss why it would happen in a hand-wavy, intuitive way. And now, a little under five weeks later, we now see the same result. But we have a formalism attached to it. We can calculate the change of Gibbs free energy for this process, and we can see that it's negative. And we can quantify and count things. So I guess you could say we've come full circle. All right, for the remaining 10 minutes or so, I'm going to talk about balancing chemical reactions, which gets back to-- for those of you who took chemistry in high school, it would be high school chemistry. And I'm just going to remind you of some basic facts. So this is probably the most opportune time for questions on this mixing process and the thermodynamics of mixing ideal gases. Now there's a chat. Again, I'm very slow on the uptake when it comes to chats when I'm lecturing-- "so much work to come full circle." Well, it took five weeks. If not, then let's remind ourselves about how to balance chemical reactions. All right, here we go-- balancing chemical reactions. So, for example, a gas-- sorry, give me one second. I want to remind you of something. Not a reminder, it's a public service announcement. Here's a PSA. PSA-- you can ignore fugacity in Denbigh. The only count against using Denbigh at this point in this class is that Denbigh admits to the existence of something called fugacity. Fugacity is the same as activity. We haven't gotten there yet. Fugacity is a chemical engineer's term. Activity is a material scientist's term. They have the same meaning. And as I said, we haven't gotten there yet. And you can completely ignore the discussion of fugacity in that relevant chapter of Denbigh. Don't let it trip you up. Don't sweat it. And you've got to remember-- public service announcement. OK, back to balancing chemical reactions. So let's have a gas-phase reaction A plus B equals 2C. Everything's gases.
So the reaction balance can be written in another way: 0 equals 2C minus A minus B. And so this is pretty familiar. The only-- here's something new, but it's not that new. We're going to use stoichiometric coefficients. We're going to define them here. This defines the stoichiometric coefficients nu sub i. So maybe you've seen this before. Maybe you haven't, but it's a pretty simple concept. This way we can just generalize. So we have these stoichiometric coefficients nu sub i. The nu sub i's, they come from conservation of atoms, balancing chemical reactions. Conservation of atoms-- and by convention, the nu sub i's are less than 0 for reactants because they're consumed. And nu sub i is greater than 0 for products because they're created. So that's balancing chemical reactions. And I want to just remind us. When we see this, this is what we should think. We should think-- well, let me not use a bunch of different colors-- we should think of, at some fixed temperature and pressure, a volume of A, A, A, A, A, A, B, B, B, B, B, C, C, C. This is meant to be a bunch of gas molecules in a box. This isn't "I took A and B, and it went totally to C." We'll actually see that never happens, except at 0 Kelvin. Instead, it's a system that's fluctuating back and forth. Reactions go to the right. Reactions go to the left. Nature doesn't care what you wrote on the left-hand side. You could have just as easily written 2C equals A plus B. Nature doesn't care. It's a reacting mixture of gases and, in our case, at a fixed temperature and pressure. So that's a little bit about balancing chemical reactions. Let's keep going. We have constraints on the d ni's. If you remember, dG at fixed temperature and pressure equals the sum of mu i d ni. The total-- all the change in Gibbs free energy at fixed temperature and pressure comes from changes of mole numbers. It's going to come from these reactions as some components are produced and other components are consumed. So we want to figure out what the constraints on the d ni's are.
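The convention just described-- nu sub i from conservation of atoms, negative for reactants, positive for products-- can be sketched as a balance check. The reaction (2 H2 + O2 = 2 H2O) and the helper name are my own illustration, not from the lecture:

```python
# Sketch of the atom-conservation check behind the nu_i sign convention.
# Illustrative reaction: 2 H2 + O2 = 2 H2O.

def is_balanced(nu, atoms):
    """True if, for every element e, sum_i nu_i * (atoms of e in species i) == 0."""
    elements = {e for counts in atoms.values() for e in counts}
    return all(
        sum(nu[sp] * atoms[sp].get(e, 0) for sp in nu) == 0
        for e in elements
    )

# nu < 0 for reactants (consumed), nu > 0 for products (created).
nu = {"H2": -2, "O2": -1, "H2O": +2}
atoms = {"H2": {"H": 2}, "O2": {"O": 2}, "H2O": {"H": 2, "O": 1}}
balanced = is_balanced(nu, atoms)
```

Flipping any one coefficient breaks the balance for some element, which is exactly how the nu sub i's are pinned down.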
And I'll just write this down for the reaction which we saw in the previous slide-- d nA-- it's not meant to be DNA, like deoxyribonucleic acid. It's just d nA. Here it is: d nA over nu A equals d nB over nu B equals d nC over nu C. So this is just true in general for any chemical reaction. The terms d ni over nu i are all equal. And this defines for us a new variable, xi, which is the reaction extent. So it's a unitless, generalized parameter to describe whether the reaction runs a little bit to the left or a little bit to the right. And maybe some of you have seen this before, and probably most of you have not. So this is a series of M minus 1 equations in M variables. M here is the number of components in the reaction-- A, B, C, three components. How many equal signs? 1, 2. M minus 1 equations in M variables-- what does that mean? If I have M minus 1 independent linear equations in M variables, how many independent variables do I have? Linear algebra again-- how many independent variables do I have describing the change in moles as this reaction proceeds? AUDIENCE: One. RAFAEL JARAMILLO: One. One independent variable, that's right. So instead of having to sum a bunch of different d ni's on the right-hand side of my expression for dG, I can simplify and have only one. And I think you probably have an intuition for that. If I have a reaction, it can go to the right. It can go to the left. It's like one variable. There's sort of one variable that tells you what's changing. It can go to the right. It can go to the left. So that comes out in the analysis, one independent variable. So, for example, I can express d nA as nu A over nu C d nC. And I can express d nB as nu B over nu C d nC and write dG totally with respect to d nC. That's just an example. I can eliminate nA and nB and write the differential of Gibbs free energy totally with respect to one independent variable. So again we're back to counting independent variables to set up problems.
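The single-independent-variable idea can be sketched directly: every mole number is n_i(0) + nu_i * xi, so one number xi fixes all the d ni's at once. A minimal sketch for A + B = 2C (function name and starting moles are my own):

```python
# Sketch: the reaction extent xi as the single independent variable,
# for A + B = 2C (nu_A = -1, nu_B = -1, nu_C = +2). Numbers are illustrative.

def mole_numbers(n0, nu, xi):
    """n_i(xi) = n_i(0) + nu_i * xi for each species i."""
    return {sp: n0[sp] + nu[sp] * xi for sp in n0}

n0 = {"A": 1.0, "B": 1.0, "C": 0.0}   # starting moles
nu = {"A": -1, "B": -1, "C": +2}      # stoichiometric coefficients

n = mole_numbers(n0, nu, xi=0.25)
# The constraint d n_i / nu_i is the same number (xi) for every species:
extents = {sp: (n[sp] - n0[sp]) / nu[sp] for sp in n0}
```

Positive xi runs the reaction to the right, negative xi to the left; nature doesn't care which side you wrote first.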
All right, another concept from introductory chemistry-- coupled reactions. Coupled reactions-- so, for example, 4 ammonias plus 5 oxygens can turn into 4 nitric oxides plus 6 waters. I'm going to call that reaction 1. And that is the same thing as 4 ammonias splitting into nitrogen and hydrogen. I'll call that reaction 2. 2 nitrogens plus 2 oxygens becoming 4 NO molecules. I'll call that reaction 3. 6 hydrogens plus 3 oxygens-- did I get my numbers right here or did I not-- becoming 6 waters. And that is reaction 4. And reaction 1 is simply the sum of reaction 2, reaction 3, and reaction 4. This is a reminder of how you can break down chemical reactions into subordinate reactions. Why am I telling you this? I'm telling you this because of thermodynamics, of course. The overall change in state variables is the sum of changes between intermediate states. This is super useful. And, again, it's introductory chemistry, so this hopefully is a reminder. So, for example, delta G for equation 1-- reaction 1, I should say-- is the same as delta G for reaction 2 plus delta G for reaction 3 plus delta G for reaction 4. And this is something that you'll use on the p set. And preview for next time-- each reaction obeys its own equilibrium balance. And we will talk about equilibrium balance on Monday. So that's it for today.
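That additivity is easy to verify mechanically: summing the stoichiometric coefficients of reactions 2, 3, and 4 cancels the intermediates and reproduces reaction 1, just as the delta G's add. A small sketch (the dict representation is my own):

```python
# Sketch: summing subordinate reactions reproduces the overall reaction;
# state-function changes (the delta G's) add in exactly the same way.

def add_reactions(*reactions):
    """Sum stoichiometric-coefficient dicts; intermediates cancel to zero."""
    total = {}
    for nu in reactions:
        for species, coeff in nu.items():
            total[species] = total.get(species, 0) + coeff
    return {sp: c for sp, c in total.items() if c != 0}

r2 = {"NH3": -4, "N2": +2, "H2": +6}   # 4 NH3 = 2 N2 + 6 H2
r3 = {"N2": -2, "O2": -2, "NO": +4}    # 2 N2 + 2 O2 = 4 NO
r4 = {"H2": -6, "O2": -3, "H2O": +6}   # 6 H2 + 3 O2 = 6 H2O
overall = add_reactions(r2, r3, r4)    # 4 NH3 + 5 O2 = 4 NO + 6 H2O
```

The N2 and H2 coefficients sum to zero and drop out, leaving exactly reaction 1.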
MIT 3.020 Thermodynamics of Materials, Spring 2021. Lecture 30: Intermediate Phases and Reactions.
[SQUEAKING] [RUSTLING] [CLICKING] RAFAEL JARAMILLO: All right, let's talk about intermediate phases and line compounds. So we've [AUDIO OUT] line compounds. So I want you to recall an intermediate phase in a three-phase system. And I'm going to recall it visually, and we're going to remember what the free-energy composition diagram looked like in such a case. And it looked-- let's say an alpha phase and a beta phase. We had some intermediate phase. So the common tangents are going to look like this, with a common tangency there, tangency there. Let's see. I'm just going to eyeball this. Tangency there, let's say. All right, so this was your generic three-phase situation. And I want to remind you really just about this region here. This was, let's say, the [INAUDIBLE] phase. Let's call this phase epsilon. This region of composition was an epsilon solid solution. So there's a finite range over which you have a solid solution-- that is, the epsilon phase has variable composition within a range. OK. So now suppose that instead of being a solid solution, the epsilon phase is very intolerant of deviations from stoichiometry. So, for instance, let's say we have A 1 minus n Bn. But in this case, n is fixed. It doesn't vary in nature. So this is a little bit more like a molecule than a solution. If n is fixed, it's a little bit more like a molecule than a solution. How does that look on the free-energy composition diagram? Let's see. What colors did I use? I used blue, green, and maroon, I guess. So all right. So here's blue, as before. Here is maroon, as before. But now let's say that-- here's n. That's that composition. And this solution model is going to look like this. What I drew is something that's really narrow. So there's a minimum to this curve, the composition in which we find this material in nature. But as soon as we deviate from that composition a little to the right or a little to the left, we have to pay a huge energy cost.
You imagine this curve collapsing into almost the shape of a pin. So my common tangent construction is going to now be like this. So my-- let's just call this "solution" model-- solution in quotes because the solution doesn't really appear in nature anymore for the epsilon phase. So the solution model becomes very narrowly shaped, like a pin. All possible common tangents are going to converge at the same point. That point is x of B equals n. That's that one composition that we find in nature. So you see how that's a geometrical fact? As this thing gets narrower and narrower, all common tangents are going to cross at that one point. Instead of crossing at different points, giving you a finite range of composition, they're all going to cross that one point. And what that means is I no longer need a solution model. I no longer need a solution model. I only need one point. So I no longer need a solution model. All I need is that one point: one free energy point and one composition. OK. So here, I can't help myself. I've got to draw the mouse-face plot. So here's the point. If you have a very, very steep free-energy composition curve, like this-- very, very steep-- all possible common tangents that you could draw are going to converge at one point because of that very steep curvature down there. So this kind of, to me, looks like a mouse. So it's a mouse-face plot. But the point here is that these whiskers are all the common tangents kind of coming together at one point. So let's look at some examples here in nature. We can start with the magnesium-nickel system. So we've looked at this system before. I don't remember why and when, but we did. And so here's a phase diagram. It's got a number of different phases. It has a liquid phase at high temperature. Let me grab a highlighter. It's got a liquid phase at high temperature. And then how many other phases? We have magnesium, which is HCP. We have nickel, which is FCC.
And then we've got two other phases which appear as line compounds. We've got this magnesium 2 nickel phase, which is a line compound, and this magnesium nickel 2 phase, which is also known as a Laves compound. Laves is a structure type. And this line compound down here, it actually broadens. Can you see it develop some width here? So at high temperature, this Laves phase develops some width. It can be made as a solid solution with a very, very narrow range of solid solubility. But when you drop down to low temperature, both of these intermediate phases appear as line compounds. So that's an example. And one of the hallmarks of a line compound in phase diagrams with line compounds is that you have very different structure types. So you can kind of see how these are not-- you don't reach this hexagonal magnesium 2 nickel phase just by substituting atoms from the HCP magnesium phase. They're really fundamentally different structures. So I grabbed some images here of the magnesium 2 nickel and this magnesium nickel 2 Laves phase. These Laves phases are of interest for people that study magnetism because they have these triangular sheets which are interesting for spin liquids and spin ices-- and then this FCC nickel. I even found a paper based on some phenomenology here, "Switchable Mirrors Based on Nickel Magnesium Films." But the point I want to make here is that these are very distinct structures, and they only occur at very distinct compositions. That's a hallmark of an intermediate phase that is a line compound. So how would you draw the free-energy composition diagram for the magnesium-nickel system? It would be drawn like this. Magnesium nickel-- actually, the magnesium-nickel system, right? So, for instance, at some low temperature, we have here-- this is going to be x nickel. So here, we have HCP magnesium on the left-hand side, FCC nickel on the right-hand side.
This is going to be a delta G, and here's 0. And I simply have here a value that represents the formation of magnesium 2 nickel, a value that represents magnesium nickel 2. And my taut-rope construction, or my common tangent construction, ends up being just a series of straight lines. This is magnesium 2 nickel. This is magnesium nickel 2. And these vertical distances are related to formation free energies. My taut rope-- right. So instead of being like a taut rope, it's just now like-- well, it's just sort of a string held up between needle points. There's no more curvature apparent. All right. So now let's talk about the size of these vertical segments. Let's talk about compound formation energy. And it's often written as delta G form. So what is the compound formation energy? It is the free energy change for formation of 1 mole of compound from the elements in their reference states. So, for example, I might have 2 moles of magnesium in its alpha phase-- let's say HCP-- plus a mole of nickel in its alpha FCC phase. And these can react to form magnesium 2 nickel. And there's going to be a free energy of formation for that reaction. So you can see I'm using this term reaction, and we're writing things a little bit more like molecular reactions. Even though this is an extended solid, it's an extended solid with fixed stoichiometry. Well, here's another example: 2 aluminums in their alpha phase plus 3/2 oxygen gas forming alumina, with its own delta G of formation. And there are an infinity of examples. So line compounds form from the elements with a formation energy delta G. OK. But there's a detail. This formation energy is per mole of compound. When we draw free-energy composition diagrams, we assume 1 mole total of the components. So there's a normalization that you need to apply in order to use formation free energies-- as you might find in databases-- in order to use those data in free-energy composition diagrams.
So, for example, we need normalization to use the delta G formation free energy on a free-energy composition plot. So using the example of magnesium 2 nickel-- let's see. I'm going to redraw the free-energy composition plot quickly. And what I want to measure here-- I want to figure out-- let's talk about what size that vertical distance should be. This is magnesium 2 nickel. That green arrow, this is the change of free energy when 1 mole of atoms forms magnesium 2/3, nickel 1/3. I want to stop and make sure people recall that this was pretty much our definition of solution modeling. We have a model for how the free energy changes when you combine a total fixed amount of atoms in different composition ratios. So, for example, a point on this plot represents 1 mole of total atoms-- in this case, let's say 2 to 1 magnesium to nickel. So 2/3 of a mole of magnesium and 1/3 of a mole of nickel combine to form this magnesium 2 nickel phase. But this is different from the formation energy, right? This measures 1/3 times delta G form of magnesium 2 nickel. It's simply 1/3 because I have 1/3 the amount of atoms. So that is kind of an algebraically simple point, but it's a conceptual point that is easy to mess up. It's easy to mess up when you're trying to do a free-energy composition diagram, when you're trying to model a system, and you go to the textbook or you go to the databases and you look at formation energies of compounds. And the formation energies of compounds are listed per mole of compound. So if I go over here-- again, there are lots of databases out there, but they're all going to list something like this, properties of selected compounds. So here's a carbide, boron 4 carbide. I know you can't read that. But it's listing, let's say, delta H's. That's per mole of compound, not per mole of total atoms. So you have to apply that normalization if you need to draw such plots. All right. All right, I want to give you some examples of line compounds in nature and in technology.
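That normalization step can be sketched in one line. The numerical value below for magnesium 2 nickel is a placeholder, not real data; the point is only the division by the number of atoms per formula unit:

```python
# Sketch: converting a tabulated formation energy (per mole of compound)
# into the per-mole-of-atoms value plotted on a free-energy composition
# diagram. The Mg2Ni number below is a placeholder, not real data.

def per_mole_of_atoms(delta_g_form, atoms_per_formula_unit):
    """Divide by the number of atoms in one formula unit."""
    return delta_g_form / atoms_per_formula_unit

dG_form_Mg2Ni = -50_000.0  # J per mole of Mg2Ni (hypothetical value)
dG_plot = per_mole_of_atoms(dG_form_Mg2Ni, atoms_per_formula_unit=3)  # Mg2Ni has 3 atoms
```

Forgetting the factor of 3 here is exactly the mistake the lecture warns about when pulling numbers from a database table.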
And then we'll come back to discuss the thermodynamics a little more. Before I move on to some examples, are there questions about this algebra, this arithmetic, the concept of formation energy, the mouse-face plot, anything of that nature? So I have some examples here. I should have animated this. I didn't. We can just walk through this one at a time. There's an infinity of examples of line compounds. So I just want to show you some of different types. So here's an example, which speaks to the problem set, actually. I lost my highlighter. What happened there? Let me get my highlighter back. OK. So here is the copper-silicon system. So you can see that you can get a fair amount of silicon into copper. That's 10%. That's this kind of purple region. There are silicon bronzes and silicon brasses-- that is, bronze and brass alloyed with silicon. Those are ternary systems. But silicon tends to be a pretty useful additive for copper-based alloys. I used a silicon bronze when I was in grad school and we were designing high-pressure cells that needed to be actuated at low temperature, below 1 Kelvin. And below 1 Kelvin, you can't use lubricant. You can't just use WD-40 because it'll freeze. That means you have to look for metals that slide well against each other. And it turns out there's a whole family of silicon bronzes that various space agencies around the world have developed in the last century. Because space is another application where you have moving parts sometimes, you have machinery that needs to move against each other. And you can't just have WD-40, right? There's no real lubricant that you can use. And so there are certain silicon-based bronzes that have been developed for that application. And I didn't know anything about all that, really, when I was in grad school. But I needed something that would work for my high-pressure experiments. So we ended up with that. So let's see. There's a couple of other solid-solution phases.
There's this little guy here and this little guy here and then, of course, the big liquid phase. But how many line compounds are there? Trick question. Somebody, please, how many line compounds are there in this system? AUDIENCE: Is it three? RAFAEL JARAMILLO: It's not three. That's why it's a trick question. Somebody else? AUDIENCE: Four. RAFAEL JARAMILLO: Four. Yeah. I didn't see who that was. Thank you. It's four. So there are the three here, which are intermediate phases. There's this really funny composition here, 0.08, 0.17. There's this thing and there's this thing. And I don't know anything about these phases, but I do know that they're probably very distinct crystal structures with distinct properties. You can say that as an educated material scientist not knowing any details. You can just say, oh, these must have different crystal structures. They must have different properties. So those are line compounds. They appear as lines here. But what about over here? Silicon. Silicon's not an intermediate phase, but it is a line compound. The solid silicon phase appears to have no equilibrium solubility of copper. Now, in reality, you always have some finite solubility of a solute in a solvent. We know this from maybe a month and a half ago. We did this on a problem set. It was something called "there's always a solution." Because of the driving force of entropy, you can always get some solute into a solvent. However, in many cases, that solubility limit is very, very low. So in the case of silicon, the solubility of metals is very, very low. The solubility limit is typically parts per billion. So in principle, there is a purple region extending along this y-axis of solubility of copper into silicon. However, it's at the parts-per-billion level. So on this plot where you go from 0 to 100 atomic percent silicon, you don't see it. And that's true of line compounds in general.
There's always some solubility, but it's often so narrow that you can model it as if there's no solubility. That's not just an academic point. Doping semiconductors is why we're able to talk to each other over Zoom. Without doping, there are no semiconductor devices, there is no electronics revolution. So the fact that you can dope some metals into silicon is as important as it gets. And the solubility limits can be in the parts per billion. They can be sometimes in the parts per trillion. But they're rarely above parts per billion. Anyway, so on the last P set, you're going to do some problems around doping semiconductors. So I do want to point that out. There's always a solution. Good, OK. What about this one? This is the gallium-arsenic system. And so the gallium-arsenic system has a very famous line compound right up the middle, gallium arsenide. So why is gallium arsenide important? Does anyone know why gallium arsenide is important? Let's talk about technologies that are based on gallium arsenide. Does anybody know? All right. Well, the two areas where you're going to find gallium arsenide are places where silicon either isn't fast enough for electronics or you need to use light in addition to electronics. So where silicon isn't fast enough are the transmitters and receivers of your phones, gigahertz RF networks. Silicon is not fast enough. So the transmitters and receivers of all modern telecommunication devices, including your phones, are based on what's called III-V semiconductors, such as gallium arsenide. So there, the transistors and diodes and so forth are just made out of a gallium arsenide wafer instead of a silicon wafer. Another place you're going to find it is anywhere you need light. And so gallium arsenide and alloys thereof-- which we don't show here-- are the basis for all optoelectronics and photonic technology. So right now, we're Zooming. But likely, some part of the data between me and you is carried by fiber optic.
And fiber optics are ways of transmitting a lot of information over long distances at low power by using light instead of electrons. And so at the points where the light and electrons are transduced, you have gallium arsenide and similar-material-based chips doing the work. So that's an important and famous line compound. Here's another one. Here's another system where carbon and silicon each don't really dissolve in each other. You have these apparent line compounds along the y-axis. But there's another line compound right in the middle, silicon carbide. Silicon carbide is a refractory-- some people will say it's a ceramic. Some people say no because it doesn't contain oxygen. That's kind of immaterial. It's basically a refractory material. It's used for grinding, so it's of enormous industrial importance. It also is an emerging semiconductor material for high-power electronics. And high-power electronics is the idea that you could replace discrete, bulky, power-handling equipment with integrated power-handling equipment. So I think these power substations that you see when you're driving by on the highway, these big-- kind of they take up a whole property lot and they buzz at 60 Hertz-- or these cans that you see hanging from the utility poles that down convert to a voltage for houses, like 220 or 115. The idea that you could replace those discrete elements with integrated circuitry-- saving power, saving money, saving weight and so forth-- that is the field of power electronics. And the thing is, you can't do it with silicon because silicon doesn't perform well at very, very high voltages. So you need new semiconductors that perform well on high voltages. Silicon carbide is one of the leading candidates. So in 50 years from now, if the idea of a power substation is a thing of the past, it will be due to silicon carbide and similar high-power electronics that are being developed today. And this last one, this is a big old mess. This is the titanium-sulfur system. 
This is a neat system because, first of all, it has everything in it. It has eutectics. It has peritectics. It has a bunch of line compounds. It has line compounds that broaden into intermediate phases as you raise the temperature. It has sulfur, which is a liquid below 500 C and melts at about 115, and titanium, which doesn't melt till 1,670. So it has this totally wacky kind of mismatch between two elements that are nothing alike. The system contains 2D materials. 2D materials are of a lot of interest today in semiconductor technology. The titanium-sulfur system is the basis for the original lithium-ion batteries. About a third of the Nobel Prize work that was recognized recently in 2019 with the chemistry Nobel Prize was for work on titanium-sulfide-based cathodes. Why? It's a layered material, so you can shove a lot of lithium in it. Anyway, there's a lot of good stories to tell in the titanium-sulfur system. So these are, for example, systems that have line compounds and other things. Before I move back to the board, do we have any questions on reading these, interpreting these, using these? AUDIENCE: Yeah, I have a quick question. So on the bottom left one and the top right one, do they both contain three line compounds? RAFAEL JARAMILLO: Yeah. So, for example, let's look at the silicon-carbon system. Carbon here is a line compound. Line, line-- OK-- line element, I guess, since it's not a compound-- so a bit of terminology there. It has zero apparent solubility of silicon in carbon. Silicon is a line element. It has zero apparent solubility of carbon in silicon. Again, there is some, but it would be invisible on this plot. And silicon carbide is a line compound. So again, here's a line, here's another line, and here's another line. And it doesn't have to be that way. Here's a case of a line and an element that actually does have a pretty wide solid-solution range. So it's not that all elements are always pure, right? That's definitely not the case.
But the low temperature-- have a look here at the silicon-carbon system, the silicon-carbon system at low temperature. At low temperature-- actually, similarly to gallium arsenide at low temperature, it's going to be a really boring free-energy composition diagram at low temperature. Let's draw it. Let's draw what that looks like. So would anyone like to take a stab at how I would draw a free-energy composition diagram for the silicon-carbon system at low temperature? Here's silicon. Here's carbon. Here's delta G. Here's 0. And let's say this is 50/50. So first off, do I need any solution models? Right, I don't. I don't need solution models because nature doesn't form solutions, so no need to model solutions. So what do I need instead of solution models? AUDIENCE: Just the taut rope, like, the lines. RAFAEL JARAMILLO: Yeah. All I need is a number to represent the free-energy change on forming silicon carbide. I just need that point. And similarly, carbon from carbon doesn't take any energy to form. Silicon from silicon doesn't take any energy to form. No solution models are needed anywhere because nature doesn't form solutions. And here is my free-energy composition diagram. It is just a triangle. So it's simplified. It's simplified a lot. Any other questions on the meaning or importance of this stuff? I'll go back to this slide quickly. And then if there are no more questions, I'll finish up on the board. So getting back to my trick question, I suppose that, because silicon is not a compound, the right answer probably was that there are three line compounds. But there are 1, 2, 3, 4 if you count the pure phases in the system. Don't worry. I'm not going to ask trick questions on an exam or anything like that. But the thermodynamics here is that this is a pure phase. This is a pure phase. This is a pure phase. This is a pure phase, meaning it's always going to be a known composition, never going to be an alloy. This is a pure phase. This is a pure phase. And this is a pure phase.
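The "taut rope" through line-compound points, like the Si-C triangle just drawn, is exactly the lower convex hull of the (composition, free energy) points. Here is a minimal sketch using the standard monotone-chain construction (the SiC free-energy value is a placeholder, not real data):

```python
# Sketch: the "taut rope" over line-compound points is the lower convex
# hull of (x, G) points. Monotone-chain lower hull; the SiC value is
# a hypothetical placeholder.

def lower_hull(points):
    """Lower convex hull of (x, G) points, returned in order of increasing x."""
    pts = sorted(points)
    hull = []
    for p in pts:
        while len(hull) >= 2:
            (x1, g1), (x2, g2) = hull[-2], hull[-1]
            # Drop the middle point if it lies on or above the chord.
            if (x2 - x1) * (p[1] - g1) - (p[0] - x1) * (g2 - g1) <= 0:
                hull.pop()
            else:
                break
        hull.append(p)
    return hull

# Si-C at low temperature: pure Si, line compound SiC at x_C = 0.5, pure C.
points = [(0.0, 0.0), (0.5, -35_000.0), (1.0, 0.0)]  # J/mol-atom, hypothetical
rope = lower_hull(points)
```

Any candidate compound whose point sits above the rope drops off the hull, which is the geometric statement that it is not an equilibrium phase.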
They just appear as single points on the free-energy composition diagram. That's what I meant to say. OK, then we move back to the board. I'll just make a couple of conceptual points and then we'll finish up. Comparing solutions at equilibrium to line compounds at equilibrium. So this is going to be two solutions. Let's imagine an alpha phase and a liquid phase at equilibrium. And I'm going to draw just a representative phase diagram just to have something in mind. So here's alpha. Here's liquid. Here's x. Here is temperature. And I'm going to imagine now some equilibrium, alpha-liquid equilibrium. I drew one tie line there, alpha-liquid equilibrium. Just an example, this is no system in particular. We know that, along that line, dG equals mu 1 alpha minus mu 1 liquid dn 1 alpha plus mu 2 alpha minus mu 2 liquid dn 2 alpha. This is just recalling previous stuff. We have internal composition variables. The compositions are variable-- composition variables. So we have n1 alpha, we have n1 liquid, we have n2 alpha, we have n2 liquid. And if we have conservation of mass, then we can boil this down, let's say, to x1 alpha and x1 liquid. So we have these composition variables. We're familiar with that. And the equilibrium condition, the equilibrium condition dG equals 0, is satisfied by common tangents. What do I mean? Coefficients equal to 0. That's what the common tangent ensures, that the coefficients are 0. The chemical potentials are the same in both phases so that the coefficients are 0. The chemical potential's the same in both phases. The coefficient is 0. That's ensured by the common tangent construction. So this is just recalling, right? Now let's imagine two line compounds. I drew an unnecessarily-complicated example. Let me just follow through on that. B3A2 and B4A3. How did I come up with that? Well, I sketched an imaginary phase diagram, and then I had to follow through on my sketch. So I had-- what do I have?
I have like this and then like this and then like this and then like this. And that means I had a congruent melter. I had this. And then I drew as this and then like this, peritectic, and something like this. I had this complicated thing which I drew. And the point is not the complicated thing. It's really, I have some two-phase region down here at low temperature where I have two line compounds coexisting. So let's see. The Gibbs free energy of 1 mole of that thing is 3 times the Gibbs free energy of B in its reference state plus 2 times the Gibbs free energy of A in its reference state plus delta G formation of B3A2. The Gibbs free energy of B4A3 is 4 times that of pure B plus 3 times that of pure A plus the free energy of formation of B4A3. There are no internal composition variables. That's a key point. Before, we had composition variables because the composition of the phases was variable. Now, we don't have that. What that means is that the equilibrium condition dG equals 0 is satisfied trivially, trivially. There's no need to do common tangents. There's no need to equate the chemical potentials because there are no internal composition variables. The compositions aren't changing. I can write out the total Gibbs free energy. It's going to be determined just by the phase fractions, phase fraction of the B3A2 phase times, let's say, 1/5 of the Gibbs free energy of the B3A2 phase plus phase fraction of the B4A3 phase times 1/7 of the Gibbs free energy of the B4A3 phase. And these phase fractions are determined by the lever rule. This is really the point. This is really the point. When you have line compounds in coexistence in equilibrium, there are no internal composition variables. Whereas when you have solution phases in equilibrium, the internal compositions are variable. And that's what's led to everything we've been enjoying over the last month and a half, is the fact that in solution phases the compositions are variable.
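This bookkeeping can be sketched numerically. The following is an illustrative sketch, not from the lecture: the compositions follow from the B3A2 and B4A3 formulas (mole fraction of A equal to 2/5 and 3/7), but the free-energy values are made-up placeholder numbers.

```python
from fractions import Fraction

# Two line compounds: compositions are fixed by stoichiometry.
x_32 = Fraction(2, 5)   # x_A in B3A2: 2 A atoms out of 5
x_43 = Fraction(3, 7)   # x_A in B4A3: 3 A atoms out of 7

def phase_fractions(x_overall):
    """Lever rule between the two fixed compositions."""
    f_32 = (x_43 - x_overall) / (x_43 - x_32)
    f_43 = (x_overall - x_32) / (x_43 - x_32)
    return f_32, f_43

x = Fraction(29, 70)                    # overall x_A, between 28/70 and 30/70
f_32, f_43 = phase_fractions(x)
assert f_32 + f_43 == 1                 # conservation of total moles
assert f_32 * x_32 + f_43 * x_43 == x   # conservation of component A

# Total free energy per mole of atoms: no common tangent, no internal
# composition variables -- just lever-rule weights times the fixed
# per-atom free energies (1/5) G(B3A2) and (1/7) G(B4A3).
G_32, G_43 = Fraction(-50), Fraction(-70)   # made-up per-formula-unit values
G_total = f_32 * (G_32 / 5) + f_43 * (G_43 / 7)
```

With the overall composition halfway along the tie line, both phase fractions come out to 1/2, and the total G is just their weighted average.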
And now we have these cases where the compositions are not variable anymore. OK. I want to leave you with one final thought, which is leading into Wednesday's lecture, which is the case of metal oxides, which are line compounds. So it's on topic. But I want to introduce this and get this in your minds. Let's imagine reacting metal M with 1 mole of oxygen to form an oxide. So zM plus O2 gas reacting to form MzO2. All right. What's z? How do I determine z? Anybody? Does anyone know some oxides? Name for me a common oxide that you know. What is rock? What's the main component of rock? What's the main component of window glass? AUDIENCE: Silicon oxide? RAFAEL JARAMILLO: Silicon oxide, OK. So for silicon, z is 1. Yeah, easy. Does anyone happen to know an example of an oxide for which z is not 1? AUDIENCE: Magnesium oxide? RAFAEL JARAMILLO: Yeah. Magnesium oxide is 1 to 1, right? So in that case, it would be 2. Oxygen is always O2 minus in compounds. So z is determined by charge balance. Oxygen is always O2 minus. And metals have various oxidation states-- can't write anymore. Metals have various oxidation states. That's what that's supposed to mean. Some metals have more than one oxidation state. So that's z. Here, M is in its reference state. That could be solid, liquid, or even gas-- although, we don't really encounter that so often. But oxidation of metals at low temperature, the metals are solid unless it's gallium or mercury. Oxidation of metals at high temperature, a lot of them are molten. So this matters for high temperature electrochemistry. Oxygen always in gas phase. We're not talking about low-temperature physics here. So oxygen is always going to be in its gas phase. And these oxides are line compounds. That z is not a variable. That z is an integer or a rational fraction, and it's fixed-- SiO2, magnesium oxide, Al2O3, so forth. So when we return on Wednesday, we're going to talk about the thermodynamics of this reaction.
We're going to use this property of being line compounds, and we're going to use a bunch of other things as well.
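The charge-balance rule for z can be written out explicitly. This is a sketch, not from the lecture slides: in zM + O2 -> MzO2, the two O(2-) anions carry total charge -4, so z times the metal's oxidation state must equal +4. The oxidation states below are the standard ones, supplied here as inputs.

```python
from fractions import Fraction

# z in  z M + O2 -> M_z O2  from charge balance: z * m = 4,
# where m is the metal's oxidation state and O is always 2-.
def z_from_oxidation_state(m):
    return Fraction(4, m)

assert z_from_oxidation_state(4) == 1               # Si(4+): SiO2
assert z_from_oxidation_state(2) == 2               # Mg(2+): Mg2O2, i.e. 2 MgO
assert z_from_oxidation_state(3) == Fraction(4, 3)  # Al(3+): Al(4/3)O2 = (2/3) Al2O3
```

Note that z comes out as an integer or a rational fraction and is fixed for a given oxidation state, which is exactly what makes these oxides line compounds.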
MIT_3020_Thermodynamics_of_Materials_Spring_2021
Supplemental_Video_The_Lever_Rule.txt
[SQUEAKING] [RUSTLING] [CLICKING] RAFAEL JARAMILLO: Hi. Today, we're going to review the lever rule. The lever rule is a tool that we use to answer the following question. Imagine we have a binary phase diagram. And two phases, alpha and beta, which are separated by a two-phase region. We're going to consider a system with an overall composition and temperature right here. So the overall mole fraction of component two is there along the abscissa, and the temperature's, of course, there at that isotherm. Now, we know that in the two different phases, the mole fractions for component two are going to be this value for phase alpha and this value for phase beta. So we have overall composition x sub 2. We have some number of moles super alpha. We have some number of moles super beta. What the lever rule does is allow us to calculate, for those values, the amounts of the alpha and the beta phases. The way we find these amounts is by writing down constitutive relationships which express the conservation of mass. So for example, we have to conserve the total mass. The total number of moles of alpha plus the total number of moles of beta is the total number of moles of the system, so n super alpha and n super beta are related in that way. We also need to conserve the moles of the individual components. Because the two components, they can't transmute from one into the other. So the way we do that is we say n super alpha times the mole fraction of component two in phase alpha, plus n super beta times the mole fraction of component two in phase beta should be equal to the overall mole fraction of component 2 times the total number of moles. So this is conservation of mass. So if we put these together, we get the following expression. Now that we've eliminated n super beta, we can simply solve for n super alpha.
I'm actually going to solve for something called f super alpha, which is the phase fraction, which is the moles of phase alpha divided by the total moles in the system. From the equation on the previous board, this simply becomes x2 beta minus x2 over x2 beta minus x2 alpha. And likewise, you could solve for the phase fraction of phase beta, which is the moles of beta divided by the total moles in the system. This is going to be equal to x2 minus x2 alpha over x2 beta minus x2 alpha. These are phase fractions. And the neat point and the reason why it's called the lever rule is that these expressions correspond to line segments along the tie line. So I want to illustrate that. Let's start with the denominator. The denominator for both expressions is x2 in beta minus x2 in alpha. So I'll circle that in green. If we come over here to the phase diagram, we see that that is the total length of the tie line. Now, we'll look at the numerators. Let's start with the numerator here. x2 super beta minus x2. So that's the distance between this point and the overall system composition. It's that length. And I'll use blue for the remaining numerator, which is x sub 2 minus x sub 2 super alpha. So that's the shorter distance from the overall composition to the composition in phase alpha. Which brings us to the final point, which is why it's called the lever rule. If we visualize the tie line as a lever or a seesaw, and we put the overall system composition at the fulcrum, so this is at x sub 2, this is x sub 2 super alpha, this is x sub 2 super beta, the phase fractions are just the amount of stuff you would need to put on this lever to balance it so it's perfectly horizontal. In this case, the fulcrum is closer to the alpha phase. So I'm going to need a lot more stuff in the alpha phase than in the beta phase to balance this lever. f super alpha goes as x2 beta minus x2 over x2 beta minus x2 alpha. And f super beta goes as x2 minus x2 alpha over x2 beta minus x2 alpha.
And that's why it's called the lever rule.
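The derivation above can be checked numerically in a few lines. This is a sketch; the compositions below are made-up numbers, not from the video.

```python
# Lever rule for a two-phase (alpha + beta) tie line, as derived above.
def lever_rule(x2, x2_alpha, x2_beta):
    """Return phase fractions (f_alpha, f_beta) for overall composition x2."""
    span = x2_beta - x2_alpha             # full length of the tie line
    f_alpha = (x2_beta - x2) / span       # opposite arm: far end to fulcrum
    f_beta = (x2 - x2_alpha) / span
    return f_alpha, f_beta

f_a, f_b = lever_rule(x2=0.3, x2_alpha=0.2, x2_beta=0.6)
# Fulcrum closer to alpha, so much more alpha is needed to balance:
assert abs(f_a - 0.75) < 1e-12 and abs(f_b - 0.25) < 1e-12
# Conservation of mass: the fractions recover the overall composition.
assert abs(f_a * 0.2 + f_b * 0.6 - 0.3) < 1e-12
```

Moving the overall composition toward the beta end of the tie line shifts the weight the other way, exactly like sliding the fulcrum of a seesaw.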
MIT_3020_Thermodynamics_of_Materials_Spring_2021
Lecture_28_Boltzmann_Hypothesis.txt
[SQUEAKING] [RUSTLING] [CLICKING] RAFAEL JARAMILLO: Hello. Happy Monday. And welcome to lecture 28. We're going to continue working on statistical thermodynamics, and we'll start with the Boltzmann hypothesis. All right. And what Gibbs is to classical thermo, Boltzmann is to statistical thermo. And so these are really giants, and that's the reason why their names are all over everything. My understanding is that Gibbs was a pretty straight shooter, and his biography is a little bit on the boring side. That's my very quick read of it. Boltzmann was a much more turbulent individual. And so it may not have been fun to be Boltzmann, but his biography is a lot more interesting. So if you have a moment, you might read a little bit about his life, anyway, but we're just going to focus on the science. So here's the preamble. We have this quantity, the count of microstates, omega. It describes the stability. We haven't put it in these terms, but it follows from everything we've been doing. Omega describes the stability of a macrostate because the state with the maximum number of microstates will appear the most stable. All right. So this is an observation from what we've done so far. And I hope this is sensible to you. That the state with the maximum number of microstates will appear the most stable. It's the most likely, and if you find yourself in that state, you're very unlikely to get out of that state. So it's going to appear to have this property of stability. And if you remember, towards the beginning of this class, we defined equilibrium as having this property of stability. And so there's some connection there. And so Boltzmann made that connection with his hypothesis. And his hypothesis is that the entropy is a function of the number of microstates. So the entropy of a macrostate is a function of the number of microstates corresponding to that macrostate. And more than that, he hypothesized that entropy is a monotonically increasing function of omega.
So this is a monotonically increasing function. So what that means is that max S means max omega. So we're going to use that. Max entropy means max omega. So that's a hypothesis. All right. So we're going to consider the form of that function. This is pretty easily done, consider two isolated systems, system A and system B. System A is some stuff. I don't know what it is. And it has number of microstates omega sub a, and it has extensive entropy S of a. And system B is some other stuff. It's isolated from system A. It's over in a different room, on a different shelf, what have you, and it has microstates omega b and entropy S of b. So first of all, entropy is extensive, which means that the total entropy in the system-- like any other extensive quantity, what should the entropy of the whole be? I'll label that total. STUDENT: Be the entropy of a plus the entropy of b. RAFAEL JARAMILLO: Yeah. Like any other extensive thing, like marbles in a jar or moles of material, you just add them. But the total number of microstates is-- I'll call it combinatoric. I don't know if that's a word. What I mean is, the total number of microstates available if you consider both systems is what? You can have microstate 1 and all of these, and you can have microstate 2 and all of these, and microstate 3 and all of these, and so forth. So what's the expression for the total number of microstates considering both systems at the same time? STUDENT: Is it the product of a and b? RAFAEL JARAMILLO: It's the product, yeah. All right. So entropy is additive, but omega is multiplicative. So that means f of omega total, which is f of omega a times omega b, is f omega a plus f omega b. We're using both of these properties now. So who knows a function that has this property, the function of the product is the sum of the functions? STUDENT: Log? STUDENT: Logs? RAFAEL JARAMILLO: Yeah.
So we come up with this conclusion, which is that this function we're looking for, which is entropy, is proportional to log of omega. And we're going to right now just have a-- we have a prefactor out there. It doesn't change. And so we're going to call that prefactor k, and we'll put a b under it because it'll be Boltzmann's constant. We don't know what it is yet, Boltzmann's constant. And this is known as Boltzmann's entropy formula. So just like that, we have a pretty simple-looking equation that gives us the entropy for a system as a function of the number of microstates. That's pretty cool. All right. So we still don't know what Boltzmann's constant is, but that's OK. So let's see what some implications of that are. We'll start with configurational entropy, which we have talked about throughout the term. Configurational entropy, so what is it? It's the number of ways to configure a system in space. So I'm going to draw a grid. This is going to be simply suggestive. We're not going to analyze my drawing because I'm just going to be rough about it. Let's just put a particle here and a particle here and a particle here and a particle here. What we're going to do is we're going to count the number of ways to distribute n molecules into r boxes. And we're going to have those boxes sufficiently small such that no box has more than one molecule. So we're dividing space up into tiny, tiny, tiny little voxels. Well, we know how to do this already. We talked about this the other time. This is just r choose n, and that's r factorial-- wait, sorry-- over n factorial-- there's an error in the notes there-- r minus n factorial, r choose n. Good. So now what we're going to do is we're going to let the total number of boxes be the total volume divided by some little volume b. And so this is total volume and this is a little voxel. And we can say this corresponds to the volume of a molecule.
It's a tiny, tiny little amount of space, but it enforces our condition here that the boxes cannot have more than one molecule. So now it can be shown-- you will show this-- that the log of r choose n-- you're going to take the log of omega, that's the Boltzmann hypothesis, Boltzmann entropy formula thing, so you're going to take the log of this binomial coefficient, the log of r choose n is approximately n log r for r very, very much larger than n. So r very, very much larger than n corresponds to-- let's say a gas. Most of space is empty. There's a much larger number of voxels of space than there are molecules that fill them. So that's like a gas. And this is a-- that's a problem on a current PSET. So far so good. Now what we're going to do, now let the system expand from volume V to volume 2V. So we're going to do an expansion of this gas, and we're going to calculate delta S. We're going to do this using statistical thermodynamics now. We're not going to do it the way we did a month and a half ago. We have Boltzmann's constant times n, and let's see, log 2V over b minus log V over b. And we can collect terms and simplify. This is kbn log 2V over V, equals kbn log 2. So this here, kb times n, this is R when n equals Avogadro's number. We recall this from the classical derivation, isothermal expansion of an ideal gas. This is the same result for isothermal expansion of ideal gas. Delta S equals R log v final over v initial. So this is one way to start identifying what that Boltzmann constant is. Boltzmann's constant is R divided by Avogadro's number. That's cool. That's neat. We've done this weird statistical thing, and good old Ludwig came up with this. And we find that when we calculate a simple case, we get functionally the same thing as when we calculated this case, when we didn't know anything about molecules. And we were just dealing with classical thermodynamics. We can make the connection via the coefficients. That's cool. Great.
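Both claims in this passage are easy to check numerically. This is an illustrative sketch with made-up values of n and r: first that ln(r choose n) is approximately n ln r when r >> n, then that S = kB ln(omega) gives delta S = n kB ln 2 when the volume (the number of boxes) doubles.

```python
import math

n = 3            # molecules
r = 10**12       # boxes, r = V / b, with r >> n (dilute, gas-like limit)

exact = math.log(math.comb(r, n))   # ln of "r choose n", computed exactly
approx = n * math.log(r)
assert abs(exact - approx) / exact < 0.05   # agrees to a few percent here

# Doubling V doubles the number of boxes (r -> 2r), so in units of kB:
dS_over_kB = n * math.log(2 * r) - n * math.log(r)
assert abs(dS_over_kB - n * math.log(2)) < 1e-9   # Delta S = n kB ln 2
```

Setting n to Avogadro's number turns n kB ln 2 into R ln 2, matching the classical isothermal-expansion result.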
Questions about this? Because I'm going to move on to the next thing that Boltzmann did. The next thing we're going to do, and what we're really building up to here, is the maximum entropy condition, maximum entropy condition and the Boltzmann distribution. So we get Boltzmann hypothesis, Boltzmann entropy formula. And this is going to be Boltzmann distribution, this Boltzmann's constant-- Boltzmann's name is everywhere. We're going to set this up and do most of it, but this will carry over into lecture 29. It's that essential that it's worth taking the time. All right. So what are we going to do? We're going to consider n total particles distributed over r states according to the occupation numbers. We have these occupation numbers from last time. All right. That's a set of numbers, n of 1, n of 2, and all over-- all the way up to n of r. So how many states are-- how many particles in each state? And we're going to calculate the entropy of this thing. So again, taking from last time, from our last lecture, it is going to be kb times log of n total over product n of i, everything factorial. Now, I'm going to use Stirling's approximation to get to the next line, which is going to be kb, n total log n total minus n total minus sum n of i log n of i plus sum n of i. So I use this Stirling approximation, and then I'm going to use the fact that the sum over i of n of i equals n total, so the sum of where all the particles are equals all the particles. So I can simplify a little bit. This equals kb n total log n total minus sum over i, n of i log n of i. And now I can condense this a little bit. I'm going to split the numerator and denominator, factor out the minus sign, and I get a log n of i over n total. So so far, just playing with numbers. So I have the entropy is minus kb times the sum of n of i log n of i over n total. By the way, this kind of looks like x log x, doesn't it? It kind of looks like our ideal entropy formula. Anyway, just a passing observation. All right. So that's fine.
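The Stirling step here can also be verified directly. The following sketch uses made-up occupation numbers, comparing the exact multiplicity entropy S/kB = ln(N!/prod n_i!) with the approximate form -sum n_i ln(n_i/N) derived above.

```python
import math

occ = [300, 200, 500]   # occupation numbers n_i (made-up)
N = sum(occ)            # n total

# Exact: ln(N!) - sum ln(n_i!), using lgamma(x+1) = ln(x!).
exact = math.lgamma(N + 1) - sum(math.lgamma(n + 1) for n in occ)

# Stirling form from the board: S/kB = -sum n_i ln(n_i / N).
stirling = -sum(n * math.log(n / N) for n in occ)

# Already at N = 1000 particles the two agree to better than 1%:
assert abs(exact - stirling) / exact < 0.01
```

The agreement only improves as the occupation numbers grow, which is why the Stirling form is used without apology for thermodynamic particle counts.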
Here's the scientific insight. The distribution of occupation numbers, the distribution of occupation numbers is an unconstrained internal variable. That means that those particles are going to fluctuate. They're going to fluctuate in and out of different states, and that's an unconstrained process. So in our previous example, you had particles in a box. And you could imagine these particles in general move. They could jump in between boxes. That's an example of jumping in between states. We're going to make this a little more general and not limited to states being positions in space. We're going to have a more general expression. We're going to say, let's say, state i minus 1. We have state i and state i plus 1, so forth. And we're going to allow that-- at any given moment, let's say there's four particles in this state. Let's say there's two particles in this state. And there was a third, but that particle jumped. It fluctuated and went over here. This is simply visually acknowledging that the states are fluctuating. The particles are fluctuating between them. These fluctuations can and will happen. So if that's happening, the maximum entropy condition, S equals S max, requires that S is what? Stationary. As we have done now so many times in this class, it has to be stationary with respect to all unconstrained internal processes. So conceptually, where we're going is the following. We did something like this before. We've done it multiple times. When we had two systems that can exchange volume, we require the entropy to be stationary with respect to that. And we got the mechanical equilibrium condition. Pressures are equal. And we had two systems that can exchange energy. We wrote out the max entropy condition, and we require that the entropy is stationary with respect to the energy exchange. And we got the thermal equilibrium condition. We got the temperatures are equal.
And likewise with systems that could exchange particle number, we got chemical potential being equal. Adding that whole thing up, we call that thermodynamic equilibrium. So now we're doing something slightly different. We're requiring this entropy stationary with respect to exchange between different states. And it's pretty general right now. So a little bit vague, but there are clear similarities to what we've done earlier in the class. At least there are mathematical similarities. So ds prime-- let's write that out-- equals minus Boltzmann's constant. And what I'm doing is I'm just taking the total derivative, so log n of i dn of i plus n of i over n of i dn of i minus log of n total dn of i minus n of i over n total-- just taking the total derivative of the expression on the previous slide. And this simplifies pretty readily. And I get minus Boltzmann's constant times the sum over log n of i over n total dn of i. All right. Just, again, making this explicit. I just took the total derivative of this using the chain rule. All right. So that's dS, and I have all my little unconstrained internal processes here, little fluctuations between the occupation numbers. All right. Now let's apply my constraints. That's what we did before, and that's what we'll do again. And in this case, I'm going to apply isolation constraints. Isolation constraints, so I want my system to be isolated. Max entropy is the equilibrium condition for an isolated system. We remember that. So here's my little fluffy-- pink insulation here surrounding my system, and I've got system, surroundings, and what? The boundary is rigid. It is impermeable, and it is insulating. All right. Now we're going to allow that the states have different energies. Let e sub i be the energy per particle in state i. And I don't want to just slip this in there. This is kind of new.
This is kind of new for us because previously in the baby book, and even just 15 minutes ago in this example of configuration entropy, we had this idea that space was somehow flat and uniform. And the energy of each particle would not be dependent on its position. All right. So if the states are positions, maybe you have a gravitational potential, or maybe you have an electric field. Maybe this is a battery, and there's an electric potential. And maybe the particles are charged, and then you can imagine energy and space becoming conflated. But more generally, there's no reason why these states have to be positions in space. They could be spin states or vibrational states. They could be rotational states. They can be anything that is distinct-- different states of a particle, in general. These different states can have different energies per particle. So e sub i equals the energy per particle in state i. And so we're going to then say the total internal energy is pretty simple. It's just adding up all the energies, e sub i times n sub i. And that means that du equals the sum of e sub i dn of i. It's conservation of energy. So you're going to study a system of elastic collisions in the lab. And the individual particle energies are going to be changing, but I think you can trust that the total system energy for elastic collisions or rigid billiard balls, that doesn't change. So one gains and another loses and so forth. And likewise, we have the total number of particles. And this is a pretty simple expression. This is the sum n of i. And that means dn total equals the sum dn of i. I'm sorry I forgot. This is going to be 0, and this is going to be 0, conservation of energy, conservation of mass. So now I have some mathematical ways to apply these conditions. And I'll just make a note in passing, this really is beyond the scope of this class, but those who are interested, conservation of volume, you might say, what about volume?
Before, two months ago, you were considering energy, particle number, and volume. We're not touching conservation of volume right now, but it's connected to the e sub i's being constant. When I took du, I didn't apply the chain rule and allow the e of i's to change. I assumed they were constants. And that comes from the conservation of volume. So when you take quantum mechanics next semester, this comes out right away, so. All right. But we're not going to touch it here. So this is a case of constrained optimization, constrained optimization. Constrained optimization is the most important application of calculus in engineering, in business. It's in general what you do in Sloan and what you do in course 16 and course 2. A little bit less in course 3, but you should remember this from calculus. So we want to optimize that function, subject to constraints, that the energy and the total particle number are fixed. So we're going to use the method-- anyone remember? What method are we going to use? Is it from multi? French name. STUDENT: The Lagrange multiplier? RAFAEL JARAMILLO: Yeah. Thank you. We're going to use the method of Lagrange multipliers. So in your calculus textbook, you would have used the del operator. I'll just write that here just for familiarity, but then we'll switch back to our operator. And so what do we have? Del, the thing we want to optimize, plus del, the things which are conserved. And we have Lagrange multipliers. So we have del n total and del u, and this whole thing is going to be 0. This is written as in your calc textbook with the del operator, but we're going to write it this way, ds plus alpha dn total plus beta du equals 0. And alpha and beta are Lagrange multipliers. I was talking to my wife about this actually. She teaches college math. And it does seem this is the most important application of calculus outside of some specialty areas, at least the most widespread. Anyway. So we're going to substitute our expressions.
We have expressions for ds and du and dn total and collect terms. That's what we're going to do. So we substitute our expressions, and we collect terms, and we get the following, sum over states, we're going to have minus k sub B log n of i over n total plus alpha plus beta e of i, dn of i equals 0. So what is this functional form? What is this form? This is just like we did two months ago, starting about two months ago, for the case of unary systems. We have here a set of unconstrained independent variables. And so we want this whole thing to be zero. How do we ensure that this whole thing is zero? Remember these differential forms? What did we call the prefactor in front of the differential of the independent variables? STUDENT: You just said the coefficients equal zero. RAFAEL JARAMILLO: Coefficient, right? Exactly. We're going to set each coefficient to zero. So let's do that, minus kb log n of i over n total plus alpha, plus beta e of i equals 0. I'm going to rearrange: n of i over n total equals e to the alpha over kb, times e to the beta e of i over kb. And this is true for each state i equals 1, 2, through, say, r. So we're not there yet, but we just had something really important happen. This is a distribution function. All right. This is describing the occupancy of state i. And it's a fractional occupancy. It's n of i over n of total, so the fraction of particles that are in state i. And it's exponentially dependent on the energy of state i. All right. So I want you to notice two things here. I'll just repeat what I said. That this is a distribution function. That's a very useful thing. That's a distribution function, and it's exponential in e of i. But we haven't finished yet because we have these Lagrange multipliers. So we don't know what those are yet. So we're going to determine one of them now, and we'll determine the other one on Wednesday. So we're going to determine alpha by normalization. What do I mean by that?
What I mean by that is the sum over all of i, n of i over n total-- what is this sum, sum over all of i, n of i over n total? STUDENT: Just one. RAFAEL JARAMILLO: Yeah, it's one. It's one. So if we set that equal to 1, we get the following, e to the alpha over Boltzmann's constant equals 1 over the sum over i of e to the beta epsilon i over kb. And we're going to give this thing a name. We're going to call it the partition function. The partition function Q is the sum over all possible states of e to the beta e of i over k Boltzmann. So for now, it's just a name. What does that mean? It normalizes the distribution. Why is it called partition function? It describes all the different ways that the energy can be partitioned in the system. So it's a sum over all the states, somehow characterizing the system and all the ways energy can be partitioned in there. Partition function normalizes the distribution function. So this distribution function, n of i over n total, is equal to e to the beta epsilon i over kb, divided by Q. So it looks like I'll end about five minutes early, but better that than rush through the determination of beta. So I'm going to stop now and take questions. STUDENT: Could you explain the Lagrange multipliers a little bit? I can't find it. RAFAEL JARAMILLO: Yeah. So you want to subject-- you want this to be 0 subject to the constraint that this is 0 and also this is 0. And you know these are 0 because you're applying those constraints. And so you can add these to the equation of delta S equals 0 without fundamentally changing the equation. And then alpha and beta are not necessary mathematically, but they generalize the situation. And so then you can effectively relax-- you can relax your constraints here, and the overall constraint is maintained by alpha and beta.
So your method of Lagrange multipliers is only valid with-- the equations that result from this appear as if they might be valid even when the constraints on total number of particles and energy relax, but they're not. You have to remember that. The equations that we get from this method are only applicable when u total and n total are fixed. So that's the concept point here. And so when we derive a distribution function, we have this distribution function, or we have this form here. And this, of course, also relates to the Arrhenius rate law. So we're getting there. And people will see this distribution function so often in natural sciences that it's important to remember it's not always true. It is very specifically true that it is the distribution function that maximizes entropy under these conditions. So I think that's the concept here. So don't apply it willy-nilly. STUDENT: Thank you. RAFAEL JARAMILLO: But you don't have to worry. This isn't a calculus class in the sense that I'm going to ask you to derive this or even repeat this derivation, but I do want you to know where this comes from, because this is going to start becoming second nature, not necessarily in the next week and a half. But this distribution function is going to be so familiar to you by the time you're a fourth year student in materials science. It's good to remember that it comes from somewhere, and it's subject to assumptions.
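Where this lands can be sketched in a few lines. The energies and temperature below are made-up numbers, and the sketch anticipates the result to be derived on Wednesday, that the remaining multiplier works out so the weight of state i is exp(-e_i / kB T).

```python
import math

kB, T = 1.0, 2.0                  # work in units where kB = 1; T is made up
energies = [0.0, 1.0, 2.0, 3.0]   # e_i for states i = 1 ... r

weights = [math.exp(-e / (kB * T)) for e in energies]
Q = sum(weights)                       # the partition function
occupancy = [w / Q for w in weights]   # n_i / n_total, the distribution

assert abs(sum(occupancy) - 1.0) < 1e-12  # Q normalizes the distribution
# Fractional occupancy falls off exponentially with state energy:
assert occupancy[0] > occupancy[1] > occupancy[2] > occupancy[3]
# Adjacent states differ by a Boltzmann factor exp(-delta_e / kB T):
assert abs(occupancy[1] / occupancy[0] - math.exp(-0.5)) < 1e-12
```

Dividing by Q is exactly the normalization step used to fix alpha above: it guarantees the fractional occupancies sum to one no matter what the energies are.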
MIT_3020_Thermodynamics_of_Materials_Spring_2021
Lecture_26_CALPHAD_Case_Studies_and_Guest_Lecture.txt
[SQUEAKING] [RUSTLING] [CLICKING] RAFAEL JARAMILLO: All right, good morning, everybody. Today is not a thermodynamics lecture by me. We have a guest lecture by Professor Greg Olson. And we just finished a lot of work with Thermo-Calc, and CALPHAD in general, and solution modeling, and binary phase diagrams. And I've told you throughout the entire semester that thermodynamic data is valuable and allows you to make predictions. That's why it's valuable, and those predictions allow you to make real things in the real world possible. And so far, you've just had to take my word for it. So the point of today's guest lecture is so that you don't have to take my word for it anymore. So I'll just say that Professor Olson is world famous for using thermodynamic data for real-world impact. And I leave it at that. Is that OK, Greg? GREG OLSON: That's sufficient, I think. Yes. Let's hope it's positive impact, yes. OK, very good. RAFAEL JARAMILLO: So I'll hand it over to you. GREG OLSON: All right. Let me attempt my screen share. All right, my presentation will be from the perspective of my university day job, as well as the activities of our computational materials design company, QuesTek, and their ongoing collaborations through the Chicago-based CHiMaD materials design center. As Thermo-Calc Professor of the Practice, my job description is to make MIT a global beacon of CALPHAD technology, into which we are recruiting you. And in support of that, we had actually arranged that MIT would host this year's international CALPHAD conference. But unfortunately, due to the pandemic, we've had to postpone it a couple of years. But I hope you'll still be around to join us when that happens. And I would ask you, throughout, don't hesitate to interrupt. Just unmute yourself, and yell at me, if you'd like to discuss anything.
The context of this technology is the National Materials Genome Initiative-- that's a presidential initiative announced by President Obama a decade ago, intended as a decadal initiative, but in fact, a recent National Academy study has recommended that it continue for another decade. And the overarching goal of this initiative is to build out the databases and tools that would allow us to take what has been historically a 10 to 20-year materials development cycle and compress that by at least 50%. The metaphor of the genome is discussed in the National Academy study going back to 2004, on this very subject of accelerating the technology transition of materials and processes. So it reviewed the best that had been achieved at that time. And then, looking forward, looked at the analogy of the Human Genome Initiative as possibly the greatest engineering database in history that was created not just to support the life sciences, but really to support science-based medicine-- an example of which is that mRNA vaccine we're all getting these days. So it's had a tremendous impact in allowing a more science-based approach to medicine. So the concept of the materials genome from the start, as called for in this 2004 study, was to build out an equally fundamental database, with the idea that the human genome physically functions as a database that directs the assembly of the structures of life. What are the equally fundamental parameters that direct the assembly of the microstructure of materials, and could we use such a system to not just support material science, but to enable a new form of science-based materials engineering? So it really was the ultimate engineering application, and the ability to put our scientific understanding in a useful predictive form-- that really was the central concept of this Materials Genome Initiative. And the recommendation in 2004 is exactly the structure that was formed, and then is continuing today. 
Now, there have been many Academy studies acknowledging the new opportunity of computational materials engineering, but one thing that was unique about the 2004 study was the leading role of the global network of small businesses that created and maintained this technology and made this possible. And so this was a list that was constructed in that report of what had already been made available at that time, and demonstrated successes of the technology. The principal mechanism, by the way, that this technology has moved into major corporations has been by acquisitions of various forms that have affected about a third of the companies on this chart. So small business really did create this technology and lead the way. A historic milestone was a decade ago with the first flight of QuesTek's Ferrium S53 stainless landing gear steel. And this was the first stainless steel to meet the mechanical performance requirements of aircraft landing gear, which was driven by the need to eliminate toxic cadmium plating. So this was a green steel, solving an environmental issue. But more significant, it represented the first fully computationally designed and flight-qualified material to go all the way to flight. And that was just in December of 2010. So that really measures a high level of maturity of this technology even before the National MGI was created. And that is reinforced by this timeline. So there had been debate as to what a materials genome could be-- it's very clear the genome we have is, in fact, the CALPHAD database system, whose origins go back to Kaufman and Cohen at MIT in the 1950s, with the calculation of the iron nickel phase diagram. I'd like to emphasize, though, the CALPHAD acronym is based on calculation of phase diagrams, but the reason for that acronym was to distinguish it from something called PHACOMP at the time, that was a technique that was trying to estimate solubility limits in alloys from the attributes of a single phase.
And CALPHAD acknowledges that solubility is really based on phase competition that is represented by phase diagrams in the equilibrium limit. But in fact, I think, yes-- RAFAEL JARAMILLO: If you don't mind, can you just briefly tell us about landing gear, and why that's such an accomplishment? I think we take a lot of these things for granted. What is it-- if there's one of many material criteria, for instance, that makes it such a demanding application, so we can understand why that's such a big deal. GREG OLSON: Yeah, I think I'll come back to it a bit later. But it is-- the big challenge is that the high chromium levels that you need to get the corrosion resistance are in conflict with the things you need for the mechanical performance of strength and fracture toughness. And it was a matter of using a predictive science approach to take it to a higher level of optimization that could resolve that conflict. That wasn't going to happen by empirical development. Yeah. And I'll illustrate what was key to that later on. But I want to emphasize that, actually, is the name of your class Materials at Equilibrium, still? Is that the name that's used? RAFAEL JARAMILLO: No, it's thermodynamics, but it might as well be Materials at Equilibrium. GREG OLSON: OK, all right. Because we tend to think of-- I think the way thermodynamics is taught often, we tend to think of it as something that applies only to equilibrium. But the truth is, it wouldn't be called thermodynamics if it was only about equilibrium. It was created to describe heat engines that are highly dynamic systems, where in modeling them, it was useful to look at equilibrium limits. It's also very useful, if you want to measure thermodynamics, we should take systems to equilibrium. So we know what we're measuring. But really, the power of thermodynamics is it describes the driving force of dynamic systems. And it really drives the evolution of microstructures and processing and service.
So it really is, in that sense, the genomic data that drives the systems. So in fact, I prefer to describe CALPHAD as calculated phase dynamics. And so it starts with thermodynamics, but its real power is for systems far from equilibrium. And in fact, when it was invented by Kaufman and Cohen, what they were really trying to do was not to calculate a phase diagram, they were trying to take the information that's there in an equilibrium diagram and reduce it to its underlying thermodynamics, so they could apply that thermodynamics to martensitic transformations that are far from equilibrium. So it was really to create-- get the underlying thermodynamics to understand the driving forces for the dynamics of martensitic transformations. So it really is the non-equilibrium applications that drove the creation of the technology in the first place. So it began as solution thermodynamics, and became an international organization in the 1970s. And the first commercial software showed up around the 1980s. But then, it expanded from solution thermodynamics to adding mobility databases, and solving multi-component diffusion problems, moving on to other phase-level attributes, such as elastic constants. So an increasing array of phase-level thermodynamic and kinetic attributes over an expanding scope of materials, from metals to ceramics. And very recently, organic systems, as well. All of which was largely based on empirical measurement. But today, DFT physics calculations have achieved enough accuracy and efficiency that they now actually actively participate, and we integrate into the assessment of these databases the predictions from physics calculations-- at least for 0 Kelvin, the enthalpy predictions. So it was the arrival of the Thermo-Calc system as a commercial code, and a supporting software database structure from the European SGTE consortium that set some level of standardization, that really inspired our founding in 1985 of our SRG design consortium.
And this was founded at MIT. And the idea was to create a general methodology of computational materials design that would be enabled by these underlying CALPHAD databases. With the idea that we would use high performance steel as the first example, acknowledging that we studied steel the longest, have the deepest predictive science foundation in steel. And also, at that point, the highest quality of thermodynamic data was available for steels-- to use that as the demonstrator. So our first projects were designs of steels. But throughout the 1990s, we did a number of demo projects that applied the same methodology to other alloy systems, polymers, ceramics, and even some composites, to show its generality. And it was the first steel designs that ultimately led to the founding in the late 1990s of QuesTek as a company that could offer computational design services based on this technology. But what the CALPHAD is really allowing us to do is use this mechanistic understanding we already possess, but use it in a quantitative system-specific way. And that's what enabled the successful demonstrations of design of new materials throughout the 1990s. And it was largely that demonstrated success that made the case for the DARPA AIM initiative, which began at the start of the new millennium. And this was really addressing the central focus of what is now the MGI, and what we now call integrated computational materials engineering, or ICME. And this was to go beyond the design of an alloy, and set a specification of composition and process temperatures to really address the full materials development cycle. And this meant connecting materials models to macroscopic process models to handle material production scale-up, process optimization at the component level of things like landing gear. And then, most important, the forecast of manufacturing variation, so that you could predict the minimum properties of a material that a user could count on, at a 1% probability basis. 
And that's what it takes to get a material actually flight-qualified for critical applications. So at this point, we've had over six decades of building out a materials genome with CALPHAD structure. About 30 years now of a full design technology, and we've now been at 20 years of a fully integrated process. So it's quite developed. And as I'll touch on later on, what this has allowed, the compression of the materials development cycle down to the cycle of product development, has made it possible for the first time to include materials in concurrent engineering. Historically concurrent engineering meant everything but materials-- you had to use whatever materials were available. And now, there are a number of success stories I'll touch on later, where materials have been fully integrated in concurrency, allowing a very strong synergy between materials development and product development. So that's the landscape. Core of our approach is the philosophy of the late great Cyril Stanley Smith of MIT, who looked at the general principles of dynamic, interactive, multilevel structure, or structure hierarchy, and acknowledged an intrinsic complexity of material structure. For which he advocated that we should be taking a systems approach. And essentially, using the same framework of systems engineering that the rest of engineering is already using. So we've taken that to heart. And to implement it, another important contribution, from the late great Morris Cohen of MIT, is what he described as a reciprocity between the opposite philosophies of science and engineering that are represented by this unique linear structure, in which the predictive cause and effect logic of science flows from left to right. And the inductive goals-means logic of engineering flows from right to left. So it actually allows us to bring these two philosophies together in a streamlined, non-turbulent way.
So the scientific prediction is that however we process a material will determine its structure, the structure will determine the properties, and the combination of properties will determine its performance. And this, then, enables a design system where we can set performance goals for a new material that we map to a set of property objectives, use our knowledge of structure property relations to devise possible microstructures that are accessible through prescribed processing. It's important to know that the flow from left to right is unique. That exactly how we process will determine that structure and its unique set of properties and performance. It's always the nature of this inverse problem that we do not have uniqueness. That once we have a set of property objectives, there are multiple structures, and multiple process pathways that we could use to achieve it. So the approach in adopting a system framework is to use this as the backbone of a system structure and add Smith's structural hierarchy to it. So that each design project starts with a system chart. And the idea here is to get the entire material down on a page. So the left to right flow of this is the cause and effect logic, but the process of design works from right to left, where performance goals overall will be mapped to a quantitative set of properties, such as strength, toughness for high performance steel, and resistance to environmental hydrogen embrittlement. And from our mechanistic knowledge, we know that those properties map back to different subsystems in the hierarchy of microstructural subsystems, which we know dynamically evolve throughout the stages of materials processing. And that allows us to, then, identify and prioritize the key structure property links and process structure links for which we want to build out our design models. Now, that can be done by empirical correlations that are useful for interpolation.
But we really, from the start, were committed to getting the most value out of the CALPHAD fundamental data: to use predictive science based on mechanistic understanding, and to express that mechanistic understanding in a form that we could parameterize, with parameters accessible to those fundamental databases, to make a quantitative approach to the full design of the material that produces different levels of microstructure throughout different stages of processing. And meets all those property requirements for a useful material. And what that motivated was the models summarized here, which really is a subset of what's available in computational material science that allowed us to do quantitative engineering. So at the bottom level, the three fields we integrated were the DFT physics that was particularly useful for surface thermodynamics, which is more difficult to measure than bulk thermodynamics. And of course, the material science is particularly advanced in the theory of solid state precipitation, and precipitation strengthening. And for structure property relations, we applied-- brought in the micromechanical applications of continuum mechanics to simulate unit processes of fracture and fatigue, to set the structure property relations. So these are really the three disciplines that were integrated in this approach to meet performance goals. But equally important is to constrain the processability of a material, and that's a role for the CALPHAD-based material science models of the solid-solid phase transformations, and the liquid-solid phase transformations. Both of which are scale dependent, in terms of the size of heat-treated components through the size dependence of heat transfer. So ultimately, it's the linking of these microstructural models to the macroscopic process simulations that allow us to constrain materials up front, theoretically, to be processible on a desired scale.
And that really helps to accelerate the full cycle, instead of experimental scale-up. So what's represented here on the right are the software models and their platforms. And the advanced instrumentation is equally important-- the ability to use techniques, such as the atom probe, to actually measure the complex compositions of nanoscale strengthening precipitates in our ultra high-strength materials. So from the start, calibration and validation to understand the associated uncertainty of our predictions was really important. And that's what the instrumentation enabled. So a first example-- starting out in the 1980s, to design ultra high-strength steels, we first took apart the highest performance steel of the time, which was the AF1410 steel, which is tempered at 510 C, to precipitate alloy M2C carbide. So these are HCP carbides, where the M is a combination of chromium, molybdenum, and vanadium, and sometimes tungsten. So we brought together a wide array of techniques that map the time evolution of the precipitate particle size, the aspect ratio of the carbide, their number density, total volume fraction, and the evolution of the carbide lattice parameters and associated composition trajectory. And then, this summarizes the evolution of the precipitation strengthening. The evolution of the size was consistent with the theory of precipitation at high supersaturations. And in that regime, we can treat the initial, critical nucleus size as the fundamental scaling factor for particle size that's so important to strengthening. And that, of course, scales inversely with the precipitation driving force. So this is a way we could get a thermodynamic handle on the particle size that governs strengthening efficiency in these systems. But it was important to recognize that the trajectory of the lattice parameters, driven by the composition trajectory of the carbides, is consistent with the initial nucleation being in a fully coherent state.
So it was necessary to add to the CALPHAD chemical thermodynamics an elastic energy term, that's composition-dependent through the composition dependence of the lattice parameters, setting the misfit strains. So all that was put together to get precise driving forces for the precipitation of these carbides, so we could efficiently control the particle size. And what that leads to, then, is this simplified parametric approach to strengthening, where from the Orowan strengthening theory, the precipitation strengthening scales inversely with the spacing of the obstacles, and that spacing scales with particle size over phase fraction to the one half, for the spacing in a slip plane. And that means, then, the precipitation strengthening goes as f to the 1/2 over particle size. And if we accept this scaling to the initial critical nucleus size that scales inversely to the driving force, then we can write that the precipitation strengthening will scale directly with that thermodynamic driving force, times phase fraction to the 1/2. So this predicted that we ought to be able to design steels with higher driving forces to get more efficient strengthening. And that was then tested by making a series of steels that had a fixed carbon content, setting the ultimate phase fraction of the precipitates. And then, we used our coherent thermodynamics to predict the driving force. But as well, by the time we get to the temperatures where we can have substitutional diffusion, the carbon diffusion controlled formation of iron carbide, like Fe3C, will already have occurred. And that lowers the chemical potential of the carbon. So it was necessary to get these driving forces right. We had to first consider a constrained equilibrium with Fe3C to set the carbon potential. And then, we were able to validate that within the scatter of these measurements, at a fixed carbon level, we could vary the hardness, peak hardness of the strengthened steel, over about 20 points on a Rockwell C scale.
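The parametric scaling just described can be sketched in a few lines; the prefactors below are placeholders, not the calibrated values from the actual steel data:

```python
def critical_nucleus_size(driving_force, k_d=1.0):
    """High-supersaturation assumption: take the initial critical
    nucleus size as the scaling particle size, d* ~ 1/dG, where dG is
    the precipitation driving force (k_d is an illustrative prefactor)."""
    return k_d / driving_force

def orowan_strengthening(phase_fraction, particle_size, k=1.0):
    """Orowan-type scaling: obstacle spacing in the slip plane goes as
    d / f**0.5, so the strength increment goes as f**0.5 / d
    (k lumps line tension, Taylor factor, etc.; illustrative)."""
    return k * phase_fraction**0.5 / particle_size

# Combining the two scalings: strengthening ~ dG * f**0.5. At fixed
# phase fraction (fixed carbon), doubling the driving force doubles
# the scaled strength increment.
f = 0.04  # fixed carbide phase fraction, illustrative
s1 = orowan_strengthening(f, critical_nucleus_size(1.0))
s2 = orowan_strengthening(f, critical_nucleus_size(2.0))
print(s2 / s1)  # → 2.0
```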
So very dramatic, direct proportionality predicted by the model. And so that calibration became the principal tool that we used to design steels with much more efficient strengthening, demonstrating alloys that, for a given carbon content, could have 50% more strength than the previous technology. So it's an area that's been highly developed empirically, but in fact, there was a lot more to be achieved by being more predictive and taking systems to a high level of optimization using these tools. Now, what I wanted to just touch on is there's also an important role of the surface thermodynamics. And this is a case where we made very good use of the DFT predictions. But what this chart is about is a correlation of the embrittlement potency of interfacial segregates against the segregation energy difference between free surfaces and grain boundaries. And in this case, there is experimental data to validate this, but this is actually the case of predicting it with our DFT quantum mechanics, with pretty good accuracy. So these are the well-known interstitial components that have been well-studied. Substitutional elements are less well-studied. So after demonstrating the ability to compute these numbers, we actually built out this database of the embrittlement potency from surface thermodynamics, which was calibrated and validated by some large-scale DFT calculations, giving us these predictions, from which we were able to identify the strong cohesion enhancers that could sufficiently enhance grain boundary cohesion to offset the embrittling effect of hydrogen. And this allowed us to design these very high-performance steels to no longer be prone to the intergranular form of stress corrosion cracking. So it was a big advance in the resistance to hydrogen embrittlement in this class of steels. So it's one example-- yeah? RAFAEL JARAMILLO: One thing, can you fill in what is DFT? We've heard it several times at this point.
We talk about data a lot, so where does that data come from? GREG OLSON: Yeah, so that's density functional theory. So this is something where we basically have ways to solve the Schrodinger equation. But there is a menu of approximations that you do take along the way. But it really is essentially first principles prediction of energy at 0 Kelvin. RAFAEL JARAMILLO: So we use a lot of empirical data, in 3.020, but you've also probably used DFT-derived data, even without knowing it. So when you have some materials property data, that comes from somewhere, and sometimes it comes from theory, and sometimes it comes from measurement. And oftentimes if it comes from theory, it comes from DFT calculations. GREG OLSON: Yeah, it is. And there are different approximations, and this was an all-electron method, very rigorous. But it was typically something like 400 hours of Cray supercomputer time for each data point, back in the 1980s, when we did that. RAFAEL JARAMILLO: And I'll just chime in-- Ali asked a question, which do you prefer? I think you mean, which do you prefer, experiment or theory? Is that what you meant, Ali? Speak up-- unmute yourself. It's more fun. STUDENT: Yeah, I mean like for the applications like of prediction, which is better to extrapolate? RAFAEL JARAMILLO: Greg, if you had your choice-- I mean, I'll just say, first of all, it's often not-- it's often an apples to oranges comparison. Because we use DFT most often to get data for processes or situations that we just can't measure. Which I think gives you your answer. If you have apples to apples, if you have a measurement and calculation of the same thing in the same circumstance, I think Greg and I would agree, if the measurement is a good one, we'd go with that. GREG OLSON: Yeah, if you look at the magnitude of this, we're using this method to find the ones that have this cohesion-enhancing potency of greater than 1 eV. And the intrinsic uncertainty of even these calculations is about 0.1 eV.
And this other model got it within 0.2 eV. So it is important to understand the uncertainty of those predictions. So if it's plus or minus 0.1 eV, that's fine for helping you find the 1 eV candidates. But most often in metallurgy, the number we want to know is only of the magnitude 0.1 eV, and 0.1 plus or minus 0.1 is not very useful. So it's a good way to find out where to go for the big numbers, but most of the time, we really need the experimental data that gives us higher accuracy than we can get currently from the DFT methods. So that's why I was saying, that up front, uncertainty quantification was a very important part of the whole strategy of putting these tools together. But this is a good example of maximum use of DFT-- we had a small set of experimental surface thermodynamics to test against. And then, we could calibrate against the ability of DFT to predict it. And then, make this projection across the periodic table. So it's a surface thermodynamic genome almost entirely from DFT calculations. And the way we put it all together is graphical parametric design. So we map the behaviors of interest back to these parameters, like driving force and phase fraction for strengthening. And so for the actual stainless landing gear steel, here's a cross-plot versus molybdenum content affecting the driving force and the carbon affecting the phase fraction. And there are a couple of invisible yellow contours showing how the driving force increases with molybdenum, and of course, the carbon sets the phase fraction, so it's this region in here that gave us the strength level that is our goal for the design. And superimposed on that, our processability constraints, such as the martensite start temperature, to have a fully martensitic steel. And the solution temperature to put the reactive components in solution so we can precipitate them at high temperatures.
And the relative slopes that we can see from these types of plots also allow us to assess relative sensitivities, so we can develop robust design strategies that don't require too tight a tolerance. So quite typically, we start from the last stage of processing, the nanoscale precipitation that meets the strength goals. And then, back up to earlier stages of processing, where we're also using these FCC MC type carbides. In this case, we've got a composition variable and the process temperature. So we're constraining these grain-refining particles to be soluble at homogenization temperatures, able to precipitate out at forging temperatures, and then maintain a certain phase fraction and size at the final austenitizing temperature of the steel that sets the grain size of the steel. So similarly, we can back up to even earlier stages of processing, specify the de-oxidation processes that set the primary inclusions at the multi-micron level that are important to toughness and fatigue resistance. But that graphical strategy is very efficient, and it is the practice that we use at QuesTek and the practice that I teach in my design class. So the first four products to come out of that are these high-performance secondary hardening steels. We have now two flight-qualified steels, as well as the stainless that's used by the Air Force. We have a high toughness steel that's first gone into the hook shank application for the carrier-based planes. So both of those steels are flying. We also have very high performance carburizing steels for gear steels for fatigue resistance, and that's doing well in off-road and on-track racing. And in fact, the Red Bull Formula One team that's very competitive this year is actually using our gear steels for their reliability. And those gear steels now are being qualified for helicopter applications.
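That graphical parametric design loop can be caricatured in code. Every model below is a made-up linear surrogate standing in for a real CALPHAD calculation (none of these coefficients are assessed values), but it shows the mechanics: scan composition space, evaluate property and processability models, and keep the intersection:

```python
import numpy as np

# Illustrative linear surrogates for quantities a CALPHAD tool would
# actually compute; every coefficient is invented for this sketch.
def driving_force(mo_wt):        # increases with Mo content
    return 2.0 + 0.8 * mo_wt

def phase_fraction(c_wt):        # set by the carbon content
    return 0.25 * c_wt

def strength(mo_wt, c_wt):       # ~ driving force * f**0.5, scaled
    return 1000.0 * driving_force(mo_wt) * phase_fraction(c_wt)**0.5

def ms_temperature(mo_wt, c_wt): # martensite start falls with solute
    return 350.0 - 20.0 * mo_wt - 300.0 * c_wt

# Scan the cross-plot: keep (Mo, C) points that meet a strength goal
# while keeping Ms high enough for a fully martensitic steel.
mo_grid = np.linspace(0.0, 3.0, 31)
c_grid = np.linspace(0.05, 0.30, 26)
window = [(mo, c) for mo in mo_grid for c in c_grid
          if strength(mo, c) >= 900.0 and ms_temperature(mo, c) >= 200.0]
print(len(window) > 0)  # a non-empty feasible design window
```

The real version layers more constraints (solution temperature, grain-refining dispersions, sensitivity to tolerances) on the same cross-plot.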
But all of these benefit from that extra strengthening efficiency that we get by refining those M2C carbides down to a 3-nanometer size scale through that maximization of driving force, as validated here by an early 3D atom probe reconstruction. So all four of those steels are using greater strengthening efficiency to resolve the conflict with other properties. RAFAEL JARAMILLO: This is maybe the second or third time you've mentioned atom probe, and there's a data set right in the middle here. So it seems like a good opportunity to talk about what that is. GREG OLSON: Yes, the atom probe is a version of the field ion microscope that looks at the point of a very sharp pin to get very high electric fields. And it can actually cause evaporation from the high fields. And so we can evaporate atoms off of the tip, and project them to a position-sensitive detector, and actually reconstruct the position of the atoms in the tip. So it really is taking us down to a nanoscale spatial resolution with detectability of all atoms in the periodic table. So a very powerful technique. RAFAEL JARAMILLO: If I may, it pulls apart a material, atom by atom, and tells you where all the atoms were. GREG OLSON: Yep, that's what it does. Yes. And we have a very good facility at Northwestern, but there's also a very good one at Harvard that we're using now. STUDENT: Can you use the material after that? Does it change the surface? GREG OLSON: Yeah, this is true tomography. So we're fully-- we really are ionizing the material and taking it apart. But one thing we can do is, before we do the tomography, we can look at the tip in the electron microscope. So we can get diffraction information, crystallographic data, fully characterize the tip in the EM before we then take it apart in the atom probe. RAFAEL JARAMILLO: But it does-- it's like getting beamed up, but you're never getting beamed back down. GREG OLSON: That's right.
RAFAEL JARAMILLO: It tears the material apart atom by atom, and does not put it back together again. GREG OLSON: Right. Well, it puts it back together theoretically, right. Yeah, so we reassemble it as tomography. Yep. All right, yeah, I would just touch on the DARPA AIM initiative. Again, this full compression of the cycle is this linking to macroscopic process models. And so we were able to work with GE and Pratt and Whitney, and learn about the technology of multidisciplinary computational engineering. So this integrator was linked to the tools of macroscopic aero-turbine engine design, including heat transfer for heat treatment. So we built this PrecipiCalc simulator. So this puts together the CALPHAD thermo and mobility databases, and adds some surface thermodynamic quantities to actually simulate complex nucleation and growth in complex alloys, so that we could actually predict the effects of very complex heat treatments of turbine disks that had multimodal distributions of gamma prime precipitates. And very accurately control it, and use it to demonstrate process optimization, validating the predicted overspeed burst speed of a turbine disk. And then, ultimately, we did develop strategies to predict the cumulative probability distribution of properties from the variation that can occur during manufacturing from the allowed tolerances of the specifications of material composition and processing. And we were able to get information that normally would take a lot of time and money, and a lot of testing, like 300 turbine disks, before you'd have enough data. And we could, with only 15 data points, and mechanistic models, actually predict this 1% minimum property at room temperature and elevated temperature. So we demonstrated the strategy to get the qualification data of necessary minimum properties very efficiently by predictive modeling based on these CALPHAD tools. And so here's the time chart on the two landing gear steels.
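That minimum-property forecast can be sketched as tolerance propagation by Monte Carlo. The property model and the tolerance distributions below are toy assumptions; the real AIM work ran mechanistic PrecipiCalc-class simulations, anchored by a handful of measurements:

```python
import random

random.seed(0)  # reproducible toy run

def toy_property_model(composition, temperature):
    """Stand-in for a mechanistic process-structure-property model.
    This quadratic is purely illustrative, not a real steel model."""
    return 1000.0 + 400.0 * composition - 0.005 * (temperature - 780.0) ** 2

def one_percent_minimum(n=50_000):
    """Propagate assumed manufacturing tolerances through the model by
    Monte Carlo and read off the 1% lower bound of the property."""
    samples = sorted(
        toy_property_model(random.gauss(0.24, 0.01),   # composition spec
                           random.gauss(780.0, 5.0))   # temperature spec
        for _ in range(n)
    )
    return samples[int(0.01 * n)]

# The point of the exercise: a property distribution, and hence a 1%
# minimum property, from a model plus allowed tolerances, rather than
# from destructively testing hundreds of articles.
print(one_percent_minimum())
```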
So this is the technology readiness levels at the level of the landing gear. Here's the corresponding materials milestones. And so both of these steels went from a clean sheet of paper to flight in less than a decade-- meeting the MGI goals. But particularly doing it the second time, we were able to move more quickly into the Navy trusting our predictions well enough to give us the technology for the hook shank application. So we actually had component qualification within a month of material qualification, in that case. And this has been a case study for the MGI program, where it was reviewed in the second landing gear steel-- what were the technology accelerators using this technology? And what were the inhibitors doing it in a small business where there were lapses in the federal funding of the projects, and there was a reliance on toll manufacturing by others? It was estimated that we had really demonstrated a technology capable of a three-year cycle. And that's really the important key: that's what enables concurrency. And really, the most historic example of that so far was the Apple Watch announced in 2014. So these are all new alloys developed by Apple that were designed and developed concurrently with the development of this device. And actually, delivered in less than two years from their acquiring the technology from QuesTek to actually do this. And that included the high strength anodizable aluminum alloy that ultimately went into the iPhone 6S. So it's been estimated there's now a 50% chance that you've got a computationally designed material in your pocket. And it came from this CALPHAD-based technology. From there, news travels fast in the Valley, and it caught the attention of this guy. I actually had the opportunity-- he invited me out to give him a half-hour elevator pitch. I thought I was selling him our technology; it turns out I was selling him Charlie Kuehmann, my former student who was the founding president and CEO who had gone to Apple.
And so after three years at Apple, he moved to be vice president of materials technology at both SpaceX and Tesla. So this technology has been taken even further in that environment. And notably, included a burn-resistant nickel super alloy that allows the high oxygen pressures that really enables the Raptor engine of the Mars Starship. So it gives you a rare example of a corporate CEO bragging about his metallurgists. I think what I'll do is I just want to mention back here on planet Earth, we've got this CHIMaD center as a decadal center supporting the MGI. It's in its second five years of its 10 years. It's largely based in Chicago, but MIT is a partner to it. And it's looking at not only improving these methodologies, but greatly expanding their scope. Notably, including taking this to organic systems and polymers, as well, and they're building out essentially a polymer CALPHAD to design all materials by the same methodology. And I'll just mention that a lot of the current projects are looking specifically at the design of materials for the new technology of additive manufacturing. And here's some of the early projects at QuesTek in that field. And one of our more important designs was a high-temperature aluminum that uses unique features of additive manufacturing, that compared to this scandium bearing alloy, we have scandium-free alloys using some rare Earth elements, that are able to sustain high strength in aluminum alloy out to very high temperatures. Very promising. And it's a unique microstructure that can only be achieved by additive manufacturing. I did want to mention electroceramic-- we had the opportunity under this DARPA Simplex program-- we were asked by DARPA to look at integrating new data-mining techniques with our CALPHAD-based design strategy, and apply it specifically to thermoelectric systems.
So here, this is the system chart for a thermoelectric material, where the basic conflict is we want electrical conductivity, but not thermal conductivity. And the way to break that is to use microstructure for phonon scattering to get the thermal conductivity down. So it was a chance to use the same kind of hierarchical microstructures that we use in structural alloys of precipitation and grain refinement to manage the thermal conductivity. And one early example of that was to build out the CALPHAD thermodynamics of this lead telluride, lead sulfide system, taking the same framework. And then, actually, predict from a phonon scattering model-- if we optimize the particle size, what level of performance we could achieve as we vary the phase fraction of the lead sulfide particles. And it showed that it agreed with the data from empirical development, the literature, at low phase fractions. But actually said at these higher phase fractions, if we were to refine the particles to their optimum size, there is more performance that you could achieve from that system. So this is a very early demo of applying CALPHAD to these systems, as well. And this is underway now as part of the CHIMaD center, Jeff Snyder at Northwestern is actively developing the use of the CALPHAD to control-- predictably control microstructure optimization in these same systems. But of course, what you really want to know is, what about bubble gum? And so we did get support from Wrigley-- did have four years of support for a doctor of bubblegum. And we were asked to make a four-component, easy-to-manufacture gum, similar to Hubba Bubba Seriously Strawberry, could we, with that simple a formula, get the performance level of Hubba Bubba Max-- which is the highest performance commercial bubble gum, but very difficult to manufacture. And we did succeed with our modeling to even outperform the highest performance gum out there. So we finally had produced a material that society could appreciate. 
It was very rewarding. And this was aided by a student team that actually won the ASM design competition back in 2008. And if you'd like to learn more about that, I suggest you sign up next year for 3.041 and learn how to be a materials designer. Here's an example of the five projects currently underway. All of these connect back to the CHIMaD research. So the teams identified in red here are all being coached by doctoral students, and post-docs, whose thesis is on that particular design project, to help student teams get to the high technical level it takes to actually do the computational design of a complete material with some challenging objectives. Notably, including a project this year where Apple is the client. And we've had a number of students coming out of the class who have gone to take internships over the summer at Apple. They're very pleased with the students. So I highly recommend you sign up and join us next year. RAFAEL JARAMILLO: All right. Well, that was fantastic. That covered a lot of ground, Greg. And we're right on time. GREG OLSON: I hope it wasn't too dizzying, but of course, you have the opportunity to go back and play it over and slow me down. RAFAEL JARAMILLO: It was dizzying, but it was a really impressive show of impact. So thank you. We have-- so as usual, I'm going to stop recording in a minute. But I'm very happy to take questions. I'm sure Greg is happy to take questions for a couple of minutes. GREG OLSON: Indeed. RAFAEL JARAMILLO: --till the bottom of the hour. No? Questions about CALPHAD, the mechanism, the history, the science, landing gear, Elon Musk? Any of the above, I'm sure. Did you know how important this was to your-- well, to your modern lives? GREG OLSON: Even in your iPhone. And it will help reduce-- it will help reduce the price of your Mars ticket, too, the efficiency of the engine.
STUDENT: Of the models that worked with the heat-- that worked with the heat transfer from other companies, I think you said you're working with the-- I don't know if it's Air Force or DARPA, that you've collaborated with other engineering companies? GREG OLSON: Yeah, so right, so the idea of ICME is to link the materials models to the macroscopic processing tools, where very often, it's all about the heat transfer. So in the AIM project, we were able to simulate the heat transfer of the actual thermal processing of a turbine disk. And then, take the thermal history at different nodes in that simulation, and simulate its full evolution of a complex microstructure through that complex processing. And actually, predict the spatial variation in the turbine disk of structure and properties, with an accuracy within 1 ksi in yield strength. STUDENT: Are the models employed by [INAUDIBLE]-- like this mechanical engineering-- compatible with the materials models, in terms of the math or continuity of that stuff? GREG OLSON: Yeah, well, that's the role of that iSIGHT process integration and design optimization software. So there are integration tools like that, that are intended to connect models from any software on any platform, and be able to transfer the output of one as the input to another. RAFAEL JARAMILLO: And I'll say, this sort of integration is-- it's part of what you're doing on this-- not on PSET 7, but PSET 8, which is you have software, and it has data and outputs, some form of data, and you want to model that so that you could perhaps use it in a different piece of software. And you're doing this today, you know, you're doing this in 3.020 at a very low level, down in the weeds, looking at a couple traces of data. But once you understand the things that happen down in the weeds, you can start to step back and look at a bigger picture, and imagine more sophisticated integration.
So it's great that you brought that up. GREG OLSON: Yeah, and of course, that is all a very important part of this concurrent engineering approach, that we can actually link the computational tools across disciplines, and simulate the system level. STUDENT: So I guess going off of that, you mentioned a few times how now with all the new technologies and methods, understanding how materials work, how now there's a lot more concurrent engineering taking place. How the materials are being developed alongside the products that are going out to market, or for consumers to use. And I was wondering how tied those two things are. Like if there are different teams working on different aspects of it? Or if it's really becoming more and more integrated, in terms of the production teams? GREG OLSON: Yeah, I would say there is ever-increasing integration. And of course, in the case of Apple, these really are consumer products, and very novel products, that they are developing on a very accelerated schedule. But what they're very good at is manufacturing engineering, because they make stuff on such a huge scale. So they're very good at that. I think at SpaceX, you're seeing a very innovative technology. So the early ideation of taking extremely challenging problems, like how we get humans to Mars efficiently, is making maximum use of this. And there, it's a very thorough integration at all levels. So they use the so-called 80% solution, that it's often said that in computation, with 20% of the resources, you can get 80% of the way where you're trying to go. And then, the question is, do you really want to spend the next 80% to only get another 20%? So pretty much at all levels of structure, they'll take the computation to 80% for everything, including materials now. And then, of course, they blow up rockets at SpaceX. And those are highly instrumented blowing up rockets.
And that's the idea, is to test at the full system level, and approach full system level optimization, taking everything to its limits, instrumented, so they can dial it back on the next one. But the pace at which they're able to do this is really incredible by integrating across all those disciplines.
MIT_3020_Thermodynamics_of_Materials_Spring_2021
Lecture_23_Building_Binary_Phase_Diagrams_Part_I.txt
RAFAEL JARAMILLO: So today, we're going to continue working on the thermodynamics of binary phase diagrams. And just as a reminder of where we're going, PSET 7 is due Friday. And that's a very lightweight PSET, reflecting the holidays. PSET 8 is a much bigger problem set, and is going to involve using several different pieces of software that we're going to spend the next week learning and exploring. So I'll just get into it. But if you have any questions about the timeline of the class for the next week or so, please speak up. And we have settled into DeHoff, chapter 10, which is really the most substantial chapter in the textbook. And there's a reason for that. It combines everything that we've learned, and it's the most substantial unit of this class. So we are in a holding pattern in DeHoff chapter 10 for at least several lectures. OK, so we're going to learn about the taut rope construction. And we're going to do that with the example of a lens diagram that we've already seen. That is, the silicon germanium system. So I'm going to start by drawing the phase diagram of the silicon germanium system. Who has a high melting point, silicon or germanium? STUDENT: Silicon? RAFAEL JARAMILLO: Silicon has a higher melting point. It's more strongly covalently bonded, because it's a smaller atom. And it's higher up on the periodic table in that column. So it has a higher melting point. So let's draw the lens diagram here, and there we have it. There is the melting point of silicon. And here is the melting point of germanium. And I'm going to mark some temperatures. We're going to have T1, I'm going to have T2, goes right through a nice tie line there. I'm going to have T3, nice tie line there. And I'm going to have T4. And so this, of course, is the lens, has tie lines everywhere. I'm going to just draw a couple. And a low temperature, we have solid phase, which we know is a diamond cubic crystal structure. And high temperature, we have liquid phase. 
All right, and what we're going to do is I'm going to draw the free energy composition diagrams at four different temperatures. One, two, three, and four. So let's start with T1. So at T1, this is delta G mix. At T1, would somebody like to tell me what my solution model for the alpha phase should look like? I'm going to-- what should my solution model for the alpha phase look like for T1? It's low temperature. STUDENT: Should it be negative at all points? RAFAEL JARAMILLO: Negative at all points, and curved upwards. Because this is a fully miscible system. This is a solid solution that's stable at all compositions. That's what the phase diagram tells us. So there is a solution model for T1. All right, now, the slightly harder question here-- what should my solution model for the liquid phase look like at T1? This is new. This, you know, but this next question I asked, this is new. I'll tell you, because it's new. It's going to look something like that. Why is that? This is a model for the free energy of a phase, which is never stable. We can still model it, though. You see, I'm not drawing it across the full composition range. This model shoots up to some high value at the two y axes. Why? Because it takes free energy to turn germanium or silicon from alpha into liquid at this temperature. So we have to imagine this shooting up to some high value. But we can draw it, and we can think about it. That's T1. All right, what about T2? T2 is-- we've seen this, T2, we've seen. So what is this going to look like at T2? For pure germanium, what is the reference state at T2? STUDENT: Liquid? RAFAEL JARAMILLO: Liquid. So my liquid solution model has zero free energy for pure germanium. What is my reference state for silicon at T2? Right here? STUDENT: The alpha phase? RAFAEL JARAMILLO: The alpha phase. Next time, yeah, so there we go. And that's going to go up. So I have alpha phase. I have liquid phase. And I have now a common tangent.
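The alpha-phase curve just described, negative at all compositions and curved upward, is exactly the shape an ideal solution model produces. A minimal sketch in Python (ideal mixing only, with a made-up temperature; the real Si-Ge description also carries excess and reference-state terms, so this is purely illustrative):

```python
import math

R = 8.314  # gas constant, J/(mol K)

def dG_mix_ideal(x, T):
    """Ideal free energy of mixing per mole: R*T*[x ln x + (1-x) ln(1-x)]."""
    return R * T * (x * math.log(x) + (1 - x) * math.log(1 - x))

T1 = 1200.0  # K, a hypothetical temperature below both melting points
xs = [i / 20 for i in range(1, 20)]
curve = [dG_mix_ideal(x, T1) for x in xs]

# Negative everywhere and convex (curved upward): fully miscible, no unmixing.
assert all(g < 0 for g in curve)
assert dG_mix_ideal(0.5, T1) < 0.5 * (dG_mix_ideal(0.4, T1) + dG_mix_ideal(0.6, T1))
```

Adding a large enough positive regular-solution term, Omega*x*(1-x), would flip the middle of this curve concave, which is how miscibility gaps arise; that case appears later in the chapter.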
I have a common tangent, which is what we expect. Because we have here a tie line connecting two compositions. Now, I'm going to draw here for T3. T3 is qualitatively similar, but the liquid phase has become more stable over more of the range. The alpha phase has become less stable. So again, alpha, liquid, a common tangent. And lastly, somebody who has not spoken yet today, what is my free energy composition diagram at temperature T4? That is up here. What is the only stable phase at high temperature? STUDENT: Liquid. STUDENT: The liquid. RAFAEL JARAMILLO: Liquid. So what will the liquid solution model look like? STUDENT: Would it look like the plot for T1, except the curves are reversed now? RAFAEL JARAMILLO: Exactly right. This is a case where the alpha phase is fully miscible, and stable everywhere. And here's a solution that's never stable. Now, we've just flip-flopped. The liquid phase is fully miscible, stable everywhere. And the alpha phase is unstable everywhere. Alpha-- great. OK, now we're going to talk about the taut rope construction. So what I'm going to do is, I'm going to use green, and you're going to imagine putting a rope and anchoring the rope here and here. And imagine the rope is initially floppy-- you have a floppy string. And it comes out this side. And I'm going to grab hold of the two ends of the string, and I'm going to pull it taut. What we'll see is the taut rope will now trace the alpha. It will trace the alpha solution. All right, that was kind of silly. What happens in this case? If I do that in this case, the taut rope traces the liquid solution until the point of common tangency. And then, it becomes a straight line until it traces the alpha solution. You can see that geometrically, right? If these were objects and you had a string and you pulled it taut, it would trace the solution, where the solution is stable, then it would trace the common tangent.
And then it would trace the solution where the solution is stable. What if you did it here? Pull the rope taut, it's going to trace the solution while it's stable, and then it will do the common tangent and then it will come up and trace that solution. And similarly here, it's going to tell you that the liquid phase is stable everywhere. So this taut rope construction, which you can have this very visual sense of, gives you the phase diagram. At any given temperature, you do the taut rope, you pull the rope taut, and wherever the taut rope goes, that's your phase diagram. So that's neat, right? That comes from two things. It comes from the fact that positive curvature implies a stable solution. We know that from previously. And it comes from the fact that the common tangent condition defines the two phase region. So those two pieces of geometry, put together, give you the taut rope construction. Any questions on that before we move on? STUDENT: Do you always place the rope at the bottom part? Like below everything? RAFAEL JARAMILLO: Yeah, loose rope, taut rope. Yes, you always put it-- imagine it hanging well below, and then pulling up. And it's always anchored at 0. Imagine having little eye bolts. Imagine this thing is a geometrical object-- I wish I had a model of this. I've never seen a model. But something with balsa wood would be fun to play with. And imagine you have eye bolts here, and you have a string and you put the string through the eye bolts, and then you pull up. Yeah. Any other questions on this? This is meant to help. It's not meant to hurt. The taut rope construction is meant to illustrate how to find the free energy-- the minimum free energy configuration. That is, how to find equilibrium. All right, let's move on. We're going to do more and more taut rope. It's not the last you'll see of it. We're going to talk about eutectic reactions now. Eutectic reaction. So here's what a eutectic reaction is-- it's a liquid that transforms into two solids. 
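The taut rope construction lends itself to a direct computation: sample each phase's free-energy curve, keep the lower phase at each composition, take the lower convex hull of those points, and any hull edge whose endpoints come from different phases is a common tangent, that is, a tie line. A minimal sketch with two hypothetical parabolas standing in for the alpha and liquid curves at one temperature (the curvatures and minima are invented, not Si-Ge values):

```python
def g_alpha(x):
    """Hypothetical alpha-phase free energy curve (arbitrary units)."""
    return 4 * (x - 0.25) ** 2

def g_liquid(x):
    """Hypothetical liquid free energy curve (arbitrary units)."""
    return 4 * (x - 0.75) ** 2

def lower_hull(points):
    """Lower convex hull (the taut rope) of (x, G, label) points."""
    hull = []
    for p in sorted(points):
        while len(hull) >= 2:
            (x0, y0, _), (x1, y1, _) = hull[-2], hull[-1]
            if (x1 - x0) * (p[1] - y0) - (y1 - y0) * (p[0] - x0) <= 0:
                hull.pop()  # last point lies on or above the chord to p
            else:
                break
        hull.append(p)
    return hull

xs = [i / 100 for i in range(1, 100)]
points = []
for x in xs:
    g, phase = min((g_alpha(x), "alpha"), (g_liquid(x), "liquid"))
    points.append((x, g, phase))

hull = lower_hull(points)
# Hull edges that bridge different phases are the common tangents (tie lines).
tie_lines = [(p, q) for p, q in zip(hull, hull[1:]) if p[2] != q[2]]
```

Here the hull reports a single tie line from x = 0.25 on the alpha curve to x = 0.75 on the liquid curve, with single-phase regions on either side, which is exactly what the rope traces.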
A liquid transforming into solid one, and solid two. We'll visualize this. Let me just define eutectic point, and we'll look at some phase diagrams. A eutectic point is a local minimum of the liquidus at temperature T eutectic. Do folks know what the liquidus is? I haven't defined that yet. Let me pull up the slides. And while I do-- while it loads, I'm going to define it. The liquidus is the locus of points on a binary-- on a binary phase diagram, above which, only liquid phase is stable. That's the liquidus. All right, let's look at some phase diagrams. OK, so here is aluminum silicon. This is a eutectic phase diagram. This is a very typical geometry. Your eye will pick out eutectics right away, because eutectics have this gull-wing shape. This is the liquidus. This purple line here that I'm laser pointing out, is the liquidus. It is the locus of points above which only liquid phase is stable. And in the eutectic system, you have a local minimum to that. It's also known as melting point suppression. So right there is the eutectic point. This phase diagram has a narrow region of FCC. Almost no solubility here of aluminum and silicon. So the solid solution of aluminum and silicon here in the diamond structure is too narrow to view on this phase diagram. And huge regions here of solid-solid, solid-liquid, and solid-liquid coexistence. And that point there is a eutectic reaction. Because when you cool through that point, you transform from one liquid phase, into two solid phases. This up here is a single phase region. It's a liquid. And this down here is a two-phase region. You see all the tie lines, which Thermo-Calc draws in green. So that minimum there, that minimum of the liquidus, is the eutectic temperature. It's also known as an invariant point. And here's a nice example. This is an example of a system with multiple eutectics, don't have to just have one. So here is the magnesium nickel system.
Nice, clean phase diagram, but it has multiple eutectics. So here's the eutectic between magnesium and this Mg2Ni solid phase. And here is a eutectic between-- well, we'll come to intermediate phases, we'll get there. But there's a solid phase here, that melts congruently. And the eutectic between that phase and pure nickel. So anywhere you see a local minimum of the liquidus, you have a eutectic reaction. At this reaction, you have a transformation from liquid to these two phases. And at this reaction, you have a transformation of liquid to these two phases. So you'll see, each of those eutectic points had this general form-- transformation from a liquid at high temperature into two solids at low temperature. All right, what does Gibbs phase rule tell us? Gibbs phase rule tells us that degrees of freedom in such a thing is C minus P plus 2. And so that's 2 minus 3 plus 2. So that's 1. So I have 1 degree of freedom at a eutectic point. And that's weird, because it's a point. It's a point, like a triple point, it's a point. A point should have 0 degrees of freedom. What's with 1 degree of freedom? Does anyone have a guess as to where that degree of freedom is in the phase diagram? What's a geometrical object with 1 degree of freedom? What do we call that? STUDENT: A line. RAFAEL JARAMILLO: A line-- it becomes a line in the higher dimensional T, P, and composition space. So in this class, we don't worry about pressure dependence of binary phase diagrams. That's an advanced topic. But since we're learning fundamentals, I want you to be confident that Gibbs phase rule holds. And I'll show you a binary phase diagram projected, drawn in three dimensions, just for fun. So here is-- see, this is why we don't draw these very often, because they're a horror to look at. But this is a binary phase diagram now where pressure is this vertical axis, and temperature is into the board. Temperature is like into the board, here. And composition is here.
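The phase-rule bookkeeping here is worth scripting once, together with the fixed-pressure version that applies to ordinary T-x diagrams (the condensed rule F = C - P + 1 is standard, though the lecture only quotes the full rule):

```python
def dof(components, phases):
    """Gibbs phase rule, F = C - P + 2, with both T and P free to vary."""
    return components - phases + 2

def dof_isobaric(components, phases):
    """Condensed phase rule, F = C - P + 1, for diagrams drawn at fixed pressure."""
    return components - phases + 1

# Binary eutectic: 2 components, 3 coexisting phases (alpha, beta, liquid).
assert dof(2, 3) == 1           # a line in (T, P, composition) space
assert dof_isobaric(2, 3) == 0  # an invariant point on a constant-pressure T-x diagram
```

That zero is why the eutectic shows up as a point on each isobaric section, while the full rule's one degree of freedom is the line visible in the three-dimensional diagram.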
And this is taken from a textbook, from Gaskell. And they conveniently step through. They're giving you these isobaric sections here. So here's an isobaric section at one pressure, and you see there's a eutectic point-- a eutectic point between the liquid and alpha and beta phase. And they draw this at very high temperature. They also show the lens here between the liquid and the vapor. This must be a low pressure. And then, as we reduce the pressure, I'm going to pump the system down, you see the liquid phase is shrinking. And the vapor phase is taking over. But you still see that eutectic point. It's still the same reaction-- alpha and beta transforming into liquid. So I'm going to flip backwards-- see, alpha and beta transforming into liquid. There is that same point, it's a point on the line-- alpha and beta transforming to liquid. But if I go to a sufficiently low pressure, I lose it. Now it's gone-- I no longer have alpha and beta transforming into liquid. So it was a line, it was a line that had 1 degree of freedom, as promised. I also need to tell you what a eutectoid is. A eutectoid reaction. A eutectoid reaction is a third solid that transforms into two solids. So this is the low temp, and this is the high temperature. So I'm going to show you an example of a eutectoid. Before I do that, I'll ask-- how would you know that this is the high temperature equilibrium state, and this is the low temperature equilibrium state? Or how would you guess that? You know very little here-- you just know that at high temperature, it's one phase, one solution phase, and at low temperature, it's two different phases. In general, two different solutions. How might you guess? STUDENT: This is more what we're doing in 3.023, but it reminds me of an oxidation and reduction reaction, where at lower temperature it would reduce. RAFAEL JARAMILLO: That makes sense. It's not quite that, though. And we'll get there in this class, as well.
The reason is, this is more mixed up. If you have two solid solutions, and you take all those components, and you mix them into one solid solution, you necessarily have a higher entropy of mixing in this case-- in this case. And you can convince yourself of that by drawing your own free energy composition diagrams. So in general, high entropy things are favored at higher temperature. This is something that you know. It's a little bit more apparent here. I have a liquid solution and two solid solutions. So here, you might say, oh, yeah, the liquid is higher entropy. That makes sense, that it's the high temperature phase. But it holds even here. So let's look at some eutectoid reactions here. OK, so here is brass, copper, zinc. And who sees a eutectoid reaction? You're looking for that typical, gull-wing shape. But in this case, the liquid-- well, how come I don't have this laser pointer here? In this case, the liquid has no eutectics. It doesn't have anywhere that gull-wing shape. But there is a eutectoid somewhere in the reaction, in the system. Who sees it? STUDENT: Is it the delta phase? RAFAEL JARAMILLO: Yeah, delta. This delta phase is just a little solid solution, little delta phase. And it has a little minimum here. There's a little gull-wing shape. Your eye should start being able to pick these out. These are little gull-wing shapes. And so this is a reaction-- you cool down the high temperature phase of the delta solid solution. And when you cool through that point, you go into a two-phase region. What two phases coexist in this region, now, below 555 degrees? Samuel, are you willing to-- STUDENT: Gamma and epsilon. RAFAEL JARAMILLO: Gamma and epsilon, right. So see how we're reading this phase diagram? The single solution, the single phase regions are labeled alpha, beta, gamma, delta, beta, epsilon, eta. And the unlabeled regions, you infer, are two-phase region. 
So when it cools down from the one-phase delta, I go into a two-phase region, gamma and epsilon. Fantastic, thank you. Just for fun, what happens when I heat up delta, and I go through this point? What is this region? What two phases coexist in this region? STUDENT: Would that be gamma and liquid? RAFAEL JARAMILLO: Gamma and liquid. So this is a funny system. I have gamma and epsilon, heat up, heat up, heat up, gamma and epsilon heat up, heat up. Sudden transformation, everything mixes spontaneously into a new crystal structure, delta. Heat up, heat up, heat up, heat up, heat up, heat up-- oh, suddenly, I have spontaneous un-mixing again. But in this case, it's into two phases, back to the gamma solid crystal structure, and some liquid. And as I heat up further, the liquid phase fraction increases, increases, increases, hit the liquidus, and now I have one liquid solution. Right. There you have it. One solid phase, two solid phases. All right, so let's draw free energy composition diagrams for eutectic systems. And for those who are interested in this, and I hope you all are, on the supplemental videos section of the course, there is a video-- a demo. A demo that we made-- we made it for this course, and also for the edX course, of the indium gallium system. So that's a eutectic system, indium gallium, you have melting point depression. It's very important in microelectronics. And I can tell you why liquid indium gallium is very important for microelectronics another time. But we have a nice little demo where you see two systems, which are solid at room temperature, liquefy when you mix them. So there's melting point suppression. So go check out the demo, if you like. So let's draw a free energy composition here for-- we're going to start with the temperature below the eutectic point. I'm afraid I'm going to fail my spelling test here. Is that spelled correctly, anybody? Two immiscible solid phases, liquid, unstable.
OK, below the eutectic point, I'm below the liquidus, the liquid is unstable. I have two immiscible solid phases. So I'm going to draw a free energy composition diagram. This is now general, this is not particular to any one system. All right, so first, let's have an alpha phase. Let's have a beta phase. And let's have a liquid phase. Now we know, from the start of this lecture, when we were looking at the lens diagram, the liquid phase here is just something that is like that. It's everywhere unstable, so it's never going to factor into the taut rope construction. This is 0, this is delta G mix. And let me draw my taut rope. My taut rope is going to be pulled like this, and then it's going to become a straight line. Sorry, I didn't quite hit my common tangent point there, but I think you can get the point. There, so there is my free energy composition diagram for this system below the eutectic point. Here are my points of common tangency. So in this region, I have an alpha solid solution. In this region, I have alpha and beta two-phase. And this region, I have beta solid solution. Beautiful. Let's move on. At the eutectic point, liquid becomes stable. So what does that look like? Here's my 0. What color did I use? Alpha was green, so let's draw alpha. Beta was blue. So let's draw-- beta was blue-- let's draw beta. I'm going to cheat a little bit here. Here's my taut rope in this case. All right, so here's my taut rope. But now, the liquid phase is just barely stable. So now, I have a unique point of a common tangent that's common to three phases. Right at that eutectic point-- here's liquid, alpha, beta. Here are my points of common tangency. So again, on the left hand side, I have an alpha solid solution. This here is alpha and liquid. This here is liquid and beta. And this is a beta solid solution. Let me finish this sequence here with a free energy composition above the eutectic point. This is a liquid one-phase solution, stable over a finite composition range. 
What does that look like? So I'm going to, again, have a little alpha region. Alpha. Going to have a little beta region-- beta. But now, the liquid has become very stable. So my liquid is going to come down like that. Sorry, it should have had some more curvature here, but I didn't. But anyway, here's liquid-- let me draw this a little more curved. Here's my liquid phase. And where are my common tangents? Now, I'm going to have a couple of them. Here's one. And here's one. Cool, so now I have a more interesting system. Over here on the left hand side, I have an alpha solid solution, in that little region. Then, in between these two, I have alpha and liquid. So here's my taut rope coming down here. And then, you see my taut rope gets pushed around. It's like a plunger-- the plunger liquid phase came down and pushed the rope. Liquid phase came down and pushed the rope down. So in between these two points of common tangency, I have a liquid solution. Then, my taut rope comes up here. And in this region, I have liquid and beta. And finally, over here on the right hand side, I have a beta solid solution. So let me flip back a little bit and step through those one more time, and remind us what our phase diagram will look like. So here is a little itty-bitty schematic, eutectic phase diagram. So at temperatures below the eutectic, I am going to be where? I'm going to be somewhere like here. You see I have an alpha phase solid solution, and a two-phase region. And then a beta phase solid solution. Alpha, two-phase region, beta. All right, let's flip up. Now, I'm at the eutectic. Draw this again. I'm not going to be able to reproduce the same drawing every time, but that's fine. So here, I'm at the eutectic. So here, I have-- again, an alpha phase solution in equilibrium with liquid. Which then is in equilibrium with beta, right there at the eutectic point. And I see the two conditions of common tangency. 
And then, at the highest temperature that I've drawn, I'll move down here-- Here, let's say I'm there at this temperature. So I have an alpha phase solid solution. Then I have a two-phase region, between alpha and liquid. And then a liquid solution. And then, a two-phase region between liquid and beta. And then, a beta solid solution. So just as we did in the last lecture before the break, we are seeing the phase diagram emerge, flipbook style. As we draw these free energy composition diagrams at individual temperatures, we're seeing the phase diagram be painted through a series of tie lines. I have one more thing I want to share with you today, but I'll pause now and take questions on this. Let me finish drawing my taut rope here, taut rope comes down, curls around, taut rope goes up, and then taut rope finishes. You see how this liquid phase can be seen as having pushed-- plunged the taut rope construction down? Let's do this flipbook style again. All right, liquid was unstable, and it came down, it's touching the rope. And then, it pushes the rope down. Imagine the rope is elastic. That's a case of an intermediate phase. And we'll cover that in more detail shortly. But you see what's happened here is an intermediate phase, meaning I have one phase that's stable on the left hand side of the diagram, a different phase, which is stable on the right hand side of the diagram. And intermediate to those, I've got a third phase, which is a solution phase, which is stable. And intermediate phases have this feature on their free energy composition diagrams. We've got a phase, which is very unstable on one side of the diagram, very unstable on the other side. But stable in the middle. And the textbook has more drawings like this, illustrating these free energy composition diagrams, almost flipbook style, in how they generate phase diagrams. And we're certainly going to be doing a lot of that on the next piece, on problem set eight.
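The flipbook can be automated with a lower-convex-hull stand-in for the taut rope: give each phase a toy temperature-dependent free-energy curve and, at each temperature, read off which phases survive on the hull. The three parabolas below are invented so that the three-curve common tangent, the eutectic, falls at T = 1.2 in arbitrary units; none of this is fitted to a real system:

```python
def G(phase, x, T):
    """Toy molar free energies (arbitrary units)."""
    if phase == "alpha":                    # solid stable near x = 0.2
        return 4 * (x - 0.2) ** 2
    if phase == "beta":                     # solid stable near x = 0.8
        return 4 * (x - 0.8) ** 2
    return 4 * (x - 0.5) ** 2 + (1.2 - T)  # liquid, destabilized at low T

def lower_hull(points):
    """Lower convex hull (the taut rope) of (x, G, label) points."""
    hull = []
    for p in sorted(points):
        while len(hull) >= 2:
            (x0, y0, _), (x1, y1, _) = hull[-2], hull[-1]
            if (x1 - x0) * (p[1] - y0) - (y1 - y0) * (p[0] - x0) <= 0:
                hull.pop()  # last point lies on or above the chord to p
            else:
                break
        hull.append(p)
    return hull

def stable_phases(T):
    xs = [i / 100 for i in range(1, 100)]
    pts = []
    for x in xs:
        g, ph = min((G(ph, x, T), ph) for ph in ("alpha", "beta", "liquid"))
        pts.append((x, g, ph))
    return {p[2] for p in lower_hull(pts)}

# Below the eutectic only the two solids sit on the rope; above it the liquid
# joins, even though the metastable liquid already had the lowest single-curve
# value near x = 0.5 just below T = 1.2 (the hull correctly excludes it there).
assert stable_phases(1.0) == {"alpha", "beta"}
assert stable_phases(1.5) == {"alpha", "beta", "liquid"}
```

Sweeping T on a fine grid and recording the hull's phase-change edges at each step would paint the full toy phase diagram, tie line by tie line, which is exactly the flipbook being described.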
All right, there's one more thing I want to share with you, which often gets to a point of confusion around using CALPHAD software, which is the following. Do we like to plot delta G of mixing, or the total Gibbs free energy? As you've found already, we analyzed delta G of mixing in lecture, because we start by learning solution models. And there's a good reason for that. However, most thermodynamics software, including Thermo-Calc, prefers to plot total Gibbs free energy. And so you need to go back and forth. Fortunately, it doesn't matter. It doesn't change the resulting phase diagram. And let me just draw that schematically. Here's a system for which I'm going to plot-- now, I'll draw this for a two-phase region. Here's a free energy of mixing for-- here's an alpha phase, and here's a beta phase. And I have a common tangent construction. And my phase system is what, alpha solid solution, two-phase region, beta solid solution. Here's alpha, here's beta. And now, for the exact same system, at the exact same temperature, this is what the Gibbs free energy looks like-- the total Gibbs free energy now, not the Gibbs free energy of mixing. The total Gibbs free energy for this exact same system at this exact same temperature often looks like this. Now, we're going to plot total Gibbs. Well, this is the Gibbs free energy for component one in its pure state. And at some other number, this is the Gibbs free energy of component two in its pure state. And I've dropped it down so I can cheat a little bit in my drawing. I'm going to draw a dashed line connecting those two. And my total Gibbs free energy versus composition diagram will look like this.
The important point, and of course, I cheated a little bit, because I did the drawing so that it would work out, is that whether you plot delta G of mix, or the total Gibbs free energy, does not change the resulting common tangent construction. Specifically, it doesn't change the compositions that are at equilibrium at this temperature. It's the same two compositions. What is this line? This straight line connecting the two reference states? Line connecting G1 naught and G2 naught. That's just-- well, we know what that is, we've written that down before when we were solution modeling. That's x1 G1 naught plus x2 G2 naught. And sometimes, we prefer x1 mu 1 naught plus x2 mu 2 naught. This straight line offset, because you offset here by the straight line, doesn't change the phases of those [AUDIO OUT]. And you can work this out for yourself. You could prove to yourself that if these two points satisfy the common tangent condition, then these two points do, as well. Or you could just believe me. So this hopefully addresses a point which often does trip people up when they're starting to use the CALPHAD software. And that's where I'd like to leave it for the day. So on Friday, we're going to continue in this vein, we're going to look at more free energy composition diagrams. We're going to move on from eutectic reactions, we're going to look at peritectic reactions. And on the problem set that's going to go out on Friday, we're going to be using a new piece of software, which is an aid for visualizing these free energy composition diagrams, and for visualizing how the free energy composition diagrams give rise to the overall phase diagrams. And so we will start on all that, and be doing that for at least another week or so. And that's all I have to say.
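The "prove it to yourself" exercise above can be done numerically. Below is a small sketch using two hypothetical parabolic free-energy models with equal curvature (chosen so the common tangent has a closed form); the names and numbers are illustrative, not from Thermo-Calc:

```python
def tangent_points(A, c1, d1, c2, d2):
    """Common-tangent compositions for two parabolic free-energy models
    g_i(x) = A*(x - c_i)**2 + d_i with equal curvature A.
    Requiring the same tangent line y = m*x + t for both parabolas
    gives t = d_i - c_i*m - m**2/(4*A); equating the two t's yields m."""
    m = (d1 - d2) / (c1 - c2)
    return c1 + m / (2 * A), c2 + m / (2 * A)

def add_baseline(A, c, d, p, q):
    """Parameters (c', d') of g(x) + p + q*x -- the same free energy
    referenced to a different straight-line baseline, as when switching
    between delta G of mixing and total Gibbs energy."""
    c_new = c - q / (2 * A)
    d_new = d + p + q * c - q ** 2 / (4 * A)
    return c_new, d_new

# Hypothetical alpha and beta models:
A = 5.0
x1, x2 = tangent_points(A, 0.2, -1.0, 0.8, -1.2)

# Re-reference both curves to a straight line p + q*x:
c1s, d1s = add_baseline(A, 0.2, -1.0, 3.0, -2.0)
c2s, d2s = add_baseline(A, 0.8, -1.2, 3.0, -2.0)
x1s, x2s = tangent_points(A, c1s, d1s, c2s, d2s)
# The equilibrium compositions are unchanged by the baseline shift.
```

Because adding the same straight line to every curve also shifts the tangent line by that straight line, the points of tangency, and hence the phase boundaries, cannot move.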
MIT_3020_Thermodynamics_of_Materials_Spring_2021
Lecture_24_Building_Binary_Phase_Diagrams_Part_II.txt
[SQUEAKING] [RUSTLING] [CLICKING] PROFESSOR: All right, so it is Friday, and we are where? Problem set is due today-- problem set eight will go out later today. And we're going to continue today with binary phase diagrams. We're still very much firmly in DeHoff chapter 10. And today, we're going to talk about peritectic systems and intermediate phases and start spending more time with the software. OK, so let's talk about peritectic reactions. OK, so eutectic reaction we defined earlier, and now we're going to do peritectic. And this is a peritectic reaction. So I'll define it formally, and then I'll show you some examples, just as we did for the eutectic case. So a peritectic reaction is a liquid and a solid at high temp transforming into a solid. That is our peritectic reaction. So as before, the Gibbs phase rule gives us c minus Ph plus 2 equals 1. So just as with the eutectic case, this is a line in T-P-composition space, as for the eutectic case. And by the way, speaking of eutectics, I hope some people were able to check out that video demo of the indium gallium system. And as with eutectoids, we have peritectoid, which is kind of like a peritectic, but not quite. And this is a peritectoid: solid 1 plus solid 2 going to solid 3. So let's see some examples because I know this is kind of dry. So here's the copper-zinc system. We have, again, brass-- and so we've seen this system before. I like to come back to systems that you've seen, just from familiarity. So we have pure copper and solid solution of copper with zinc here. That's the alpha phase. And then we have several other solid phases. So there's a beta phase and the gamma phase, delta phase, epsilon phase, and then there's pure zinc, which is eta. And what I want to highlight now are these transformations. So I'm going to flip back and forth here. This region here is bounded by the liquidus, the delta phase, and two isotherms here, one at 697 and one at 594.
And you could see that this region here is what? Somebody tell me, please-- what is the phase composition in this region that I'm marking? AUDIENCE: Delta and liquid? PROFESSOR: Delta and liquid, right. Two-phase region-- delta and liquid. And you see, you can cool down and enter this epsilon solid solution. So right there is a peritectic. So just as your eye can start to pick out eutectics with this [INAUDIBLE] shape, your eye can start to pick out peritectics. It looks like a beam balancing on a triangle. There's a triangle, and there's a beam. And visually, I can start to pick these out. So there's a peritectic. Here's another one. So right here is a beta and liquid two-phase region. This little sliver is a beta and liquid two-phase region. And then down here is a gamma. And so there we have, again, a little beam balancing on a triangle. And there we have another peritectic. And there's another one there. And I picked out five peritectic points here. And just so that you've seen it, these are sometimes called invariant points, same as with eutectics. It refers to the fact that although they are, in fact, lines, on a binary phase diagram at fixed pressure, they appear as points. So they occur at only one given composition and temperature as long as you're at 1 atmosphere. So peritectoids-- here, I have chosen the copper tin system now. The copper tin system is definitely complicated. I'll start with these peritectic points here. For instance, here is copper and liquid. Two-phase region here-- copper solid solution and liquid. And if you cool down, you enter this nice, purple solid solution, which is labeled as-- it's got this funny label-- high temperature phase of copper 0.85 tin 0.15. So that's a peritectic. Here's another peritectic point here. Here's a nice, typical peritectic. Here's the beam balancing on a little point. So up here, we have a coexistence between a solid and the liquid. And down here, we have a solid solution phase. Those are peritectics.
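The "invariant point" bookkeeping behind these features follows from the Gibbs phase rule, f = C - P + 2. A tiny sketch (the function name and flag are mine, for illustration):

```python
def degrees_of_freedom(components, phases, fixed_pressure=False):
    """Gibbs phase rule f = C - P + 2; the 2 counts T and P.
    Fixing the pressure (e.g. working at 1 atm) removes one degree
    of freedom, which is why a eutectic or peritectic (C=2, P=3)
    is a line in T-P-composition space but a point on a 1 atm
    binary phase diagram."""
    f = components - phases + (1 if fixed_pressure else 2)
    if f < 0:
        raise ValueError("more coexisting phases than the phase rule allows")
    return f
```

With the pressure fixed, the three-phase eutectic or peritectic equilibrium has zero degrees of freedom left: it occurs at only one composition and one temperature, exactly as stated above.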
But we also have peritectoids, and I don't want to dwell on this for too long because they're less common. But if you look, I zoomed way in here. And if you look in this little triangle region here, this little triangle region is a two-phase region where we have coexistence between this phase, which is copper 3 tin high temperature, and this phase here, which is copper 3 tin room temperature. In this little triangle region, at equilibrium, the system will phase separate into solid 1 plus solid 2. And if we cool down right into this tiny little sliver, then we enter a single-phase region, which is copper 10 tin 3 high temperature. So for every one of these transformations that we can find in a phase diagram, there is often a type-- there's a label: eutectic, peritectic, eutectoid, peritectoid. We have others that we haven't discussed. I think that's all I have to say on peritectics and peritectoids. Consider it a series of labels to characterize common visual motifs in these phase diagrams. There are pretty substantial implications of these types of transformations for materials processing, but that really goes beyond the scope of 020 and veers into kinetics and, of course, materials processing, which you take later on. Questions on peritectics, please, before we move on? I'll move back to the board. We're going to now talk about another feature of binary phase diagrams, and that is intermediate phases. Intermediate phases-- so an intermediate phase is a phase that is stable for some intermediate composition, but not for pure components. So it's a phase that does not exist in the pure component case, and it is structurally distinct from the reference phases. So let's show an example-- chromium iron. So here is chromium iron. And this is generated using Thermo-Calc. And chromium iron is kind of a fun system. It has this weird-looking thing, which we'll come to. But what I want to focus on now is this intermediate phase.
So you see, there is something that looks like a spinodal. This whole region here is BCC. This whole region here is a BCC solid solution. And it looks as if chromium iron are fully miscible in BCC, except that miscibility is interrupted. And there is this teardrop shape here, intermediate phase, which, unlike BCC, it is structurally distinct. And-- how come I can't get my laser pointer here-- so we have BCC, and then this region here is a two-phase BCC and sigma coexistence. This is a sigma solid solution, intermediate phase, and then if we keep on moving to the right, we have a sigma and BCC two-phase region again. And if we drop sufficiently low temperature, sigma is no longer stable, and we have a very typical spinodal pattern, which is chromium-rich BCC and iron-rich BCC coexisting in this wide two-phase region. And these are, indeed, very distinct structures. So BCC, body centered cubic, is a structure that you know. It has only one site. There's only one type of site in a BCC lattice, and has eight-fold coordination, whereas this intermediate sigma phase-- I had to look this up-- this is a structurally very complicated material. There are five inequivalent crystallographic sites. And they have different combinations of 12, 15, 14, 12, and 14. So yeah, this is kind of a complicated thing, and this structure does not form for pure chromium, and does not form for pure iron. But it does form for this chromium iron solution. As you might guess from the coordination numbers, do you think this is a more closely packed structure than BCC or less closely packed? Well, I'll ask what is the coordination number for a close-packed either FCC or HCP? Who remembers this? Nobody remembers this from Structure or even from 3.091, the coordination number for a closely packed lattice? AUDIENCE: Is it 12? PROFESSOR: 12, right. So this crystal structure here has coordinations which are 12 and higher. 
And, as you might guess, the way it does that is it has atoms of two different sizes. You can't get coordination number higher than 12 if you only have one type of atom. But you can find ways to pack in more atoms if you have atoms of different sizes. And you might guess that this phase becomes more stable at higher pressure. It seems more dense, and you'd be right. So you might guess that the range of stability of this solution might be expanded as you increase the pressure. But right now, we're just sticking at 1 atmosphere. So let's go back to the board. So how do the free energy composition diagrams look? So, for example, chromium iron is a case of a spinodal interrupted by an intermediate phase. So let's draw a free energy composition diagram for this. So we know what a spinodal free-energy composition diagram looks like. Let's draw it in a region of phase separation we have here. OK, that's a typical solution model for a system that undergoes spinodal decomposition. But now let's say that I'm at a temperature for which the sigma phase is stable for intermediate compositions. Could somebody please tell me, qualitatively, how I should draw this sigma phase solution model? So this is the BCC phase model. How should the sigma phase model look? We know it's going to be stable for some intermediate composition. So somewhere in the middle of this diagram, we want that sigma model to be a stable solid solution. And we know that we need this BCC phase to be stable on the left-hand side of the diagram, and we need this BCC phase to be stable on the right side of the diagram as well, and that there's going to be regions of two-phase coexistence. So we need some common tangents. Think about the taut rope construction. How should I draw sigma? AUDIENCE: Would it be a parabola that's like a U-shape? PROFESSOR: Yeah, it's like a plunger. Imagine the taut rope. I'm going to risk really messing up here. Imagine the taut rope like this.
Don't worry, I ordered more sharpies. They were delivered to my office. So we will be back in business. Imagine that this is a-- man, I really messed up. Imagine this is the taut rope. I should have drawn this more straight. Imagine that there was no sigma phase. In that case, I would have the taut rope like this-- single-phase, spinodal decomposition, single-phase. Does that make sense? Did I draw that decently? And now I'm going to introduce a sigma phase, and the sigma phase is going to be like a plunger that comes down. And it pushed my taut rope into a new configuration. So now my rope got pushed down, around. So this phase came down and pushed the taut rope, and now I have what I want to have, which is I have solid solution BCC, two-phase region, solid solution sigma, two-phase region, solid solution BCC. Thank you for that. So you can see how introducing the intermediate phase disrupts the spinodal. And so we're going to spend some time now in Thermo-Calc. So what I want to do is I want to share my Thermo-Calc screen. OK, can you see Thermo-Calc? Is it open? AUDIENCE: Yep, we can see. PROFESSOR: OK, wonderful. So what I'm going to do is I'm going to generate the iron chromium phase diagram. And by the way, this stands in as a bit of a tutorial on using Thermo-Calc. And there are also tutorial videos on Thermo-Calc available on the course website that I shot previously. So if you find yourself stuck on just how to navigate this software, there are resources available. So what I did is I clicked on Phase Diagram because that's a good template to start with. All right, here I am. So I was able to define the system-- system definer. You see this project window here? I'm going to keep the iron demo database, and I'm going to click on chromium and iron. And so the generic project has the system definition, an equilibrium calculator, and a plot renderer.
And the equilibrium calculator is configured to calculate the phase diagram at 1 atmosphere as a function of temperature and composition. So the first thing I'm going to do is just run this and generate a phase diagram. And I did save a version of this project with everything already computed, sort of like on a cooking show. I've got the cake baked in the other oven. But I want to at least step through some of this in front of you so that those of you who have not seen Thermo-Calc used in certain ways will feel a little more comfortable with it because we're going to be using it a fair amount on the upcoming problem set. And so what it's doing is it is solving the common tangent construction. It's solving the taut rope construction, which is also called the convex hull construction. If we were getting into the numerics of this, it would be the convex hull-- solving for that convex hull solution for the system. And although the iron chromium phase diagram is already known, Thermo-Calc does everything from scratch. So here we go. Here is the iron chromium system. Just as promised, it has this intermediate phase. Now you've probably discovered that sometimes mousing over is effective and sometimes it's ineffective. Right now, it's telling me I'm in BCC A2, if folks can see that-- BCC A2. So it's a BCC phase that Thermo-Calc has labeled as A2 for some reason. And it's this wide range of solubility. And here is another solid solution. And for some reason, Thermo-Calc says insufficient information, but that's not true because all I need to do is mouse over the two-phase region. And it tells me this is BCC A2 plus sigma. And that's what I was expecting to see. This is a BCC and sigma two-phase region, and here is also the BCC and sigma two-phase region. And down here is a BCC and BCC region of spinodal decomposition. There's one more thing which I want to highlight, which is kind of interesting.
For pure iron, pure iron transforms from BCC at low temperature into FCC at intermediate temperatures and back to BCC at high temperatures. Iron is a funny creature in that way-- BCC, FCC, BCC. And we've been reminded a number of times in this class that if you look along the y-axis of binary phase diagrams for pure components, you're looking at a slice of a unary phase diagram. So I want to show you-- quickly, I'm just going to switch over to PowerPoint-- how we are, in fact, looking at a slice of the unary phase diagram. So here's the unary phase diagram of iron. For unaries, we are able to plot temperature and pressure on one two-dimensional sheet because we have no composition variable. And if we focus here at 1 atmosphere-- that is, 1 bar-- we see alpha iron BCC, gamma iron FCC, delta iron back to BCC. So BCC, FCC, BCC, liquid-- BCC, FCC, BCC, liquid. So iron is a funny creature that way and, of course, that has implications for steel making. And anyway, back to the binary, you can see that behavior by tracing pure iron along the right-hand side of the binary phase diagram. OK, questions on that? I want you to be able to look at binary phase diagrams and read off some information about the pure-component systems. For instance, chromium appears to melt at around nearly 2,200 Kelvin. You can just read up the y-axis and say BCC, BCC, only one structure of chromium, until it melts-- becomes a liquid. So you can get a lot of information from these. But I want to show you free energy composition diagrams. So what I'm going to do is I'm going to ask Thermo-Calc to plot for me free energy composition diagrams at three different temperatures: in this region where it's just a simple spinodal, in this region where there's an intermediate phase, and this region, right above the end of that intermediate phase region. So I'm going to plot 700 Kelvin, 900 Kelvin, and 1,114 Kelvin, right there above the end of that region.
And so I want to show you how to do that. First, I'm going to rename these modules, just to make things visually simpler. You don't have to do this, but you can do this. So this is Phase Diagram Calc. That's the calculation of the phase diagram. And I'm going to rename this Phase Diagram Plot. So now I want to create a new successor to the system definer. It's going to be an equilibrium calculator. I want to calculate the equilibrium of this system. I'm going to rename this. I'm going to call this 700 K Calc. I'm going to calculate some properties of this system at 700 Kelvin. Over here, to my setup window, I'm going to make this a one-axis calculation. So I'm going to be at 700 Kelvin, I want to plot mole fraction, and I want my axis to be mole fraction from 0 to 1. So what I've just done is I've set this up to calculate the equilibrium properties of the system at 700 Kelvin as a function of mole fraction iron for iron going from 0 to 1. And, by the way, while I'm at it, I'll go over here to the phase diagram calculator and change this into mole fraction so that I can deal with mole fraction instead of mass percent. So this calculator here is going to calculate properties of the system at 700 degrees. Now I want to plot that when it's done. So I open up a plot renderer, and I'm going to call that plot renderer 700 K Plot. Again, you don't have to name these things. I can name them Apple and Banana. But it helps me keep it clear. So the 700 Kelvin plot, I'm going to have the x-axis be composition mole fraction iron. On the y-axis, I want free energy, Gibbs energy per mole for all phases. So I go back up here. I'm going to perform the tree. And it's going to do all the calculations in this tree. And again, if I apologize once, I'll apologize 100 times for turning these parts of lectures into software tutorials.
I don't like being subject to the whims of software and waiting for software to run, but I think this is important, not only because we ask you to do these things on P Set 8, but because it really is quite an important job function for people that are designing and making materials for a living, which is where many of you are headed. And I will also mention that one of the originators of this approach to designing materials is Professor Greg Olson, who will be giving a guest lecture in this class next week, talking about some of his experiences in industry using CALPHAD software to design really cool real-world things. All right, with that justification, it should be finishing-- wonderful. So now I have two plots. I have the phase diagram plot, which for some reason is still in mass percent, probably because I still have it plotted as mass percent. And then we have the 700 K plot. This is a plot of free energy as a function of mole fraction of iron at 700 degrees for all phases plotted. And so you're a little bit familiar with these sorts of plots now from P Set 6, I believe. This is the free energy of chromium-rich BCC, and then the data kind of flatlines. And the data flatlines not because this suddenly becomes composition independent, but simply because the software-- the database doesn't have data for this region. And so it extrapolates. It does a weird thing. The programmers decided to make it flat. This flat region is not real data. And you could say the same down here. This is iron-rich BCC. There's a solution model. It looks real. It's got some curvature to it. It looks as we expect for, let's say, a regular solution model. And then that flatlines. So what's going on? What do we actually expect here, between these two regions of the solution model? I'm moving my mouse-- AUDIENCE: A common tangent? PROFESSOR: A common tangent.
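That 700 K curve, with a common tangent connecting two branches, is qualitatively what a regular solution model gives. As a minimal stand-in for the database's solution model (a simple regular solution, not the actual Cr-Fe model), the spinodal region is where the second derivative of the mixing free energy goes negative:

```python
R = 8.314  # gas constant, J/(mol*K)

def d2g_dx2(x, omega, T):
    """Curvature of the regular-solution mixing free energy
    g(x) = omega*x*(1-x) + R*T*(x*ln(x) + (1-x)*ln(1-x)):
    g''(x) = -2*omega + R*T*(1/x + 1/(1-x)).
    Negative curvature means spinodal (unstable) compositions."""
    return -2.0 * omega + R * T * (1.0 / x + 1.0 / (1.0 - x))

def spinodal_bounds(omega, T):
    """Compositions where g''(x) = 0, i.e. x*(1-x) = R*T/(2*omega).
    Returns None above the critical temperature Tc = omega/(2*R),
    where the solution is stable at all compositions."""
    rhs = R * T / (2.0 * omega)
    disc = 0.25 - rhs              # from x**2 - x + rhs = 0
    if disc < 0:
        return None
    s = disc ** 0.5
    return 0.5 - s, 0.5 + s
```

An intermediate phase like sigma then acts as the "plunger" from earlier: its own free-energy curve dips below this one in the middle, replacing part of this single common tangent with two new ones.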
So one funny feature of Thermo-Calc is if, instead of plotting all phases, you plot system, it will stop plotting the different phases with different colors, and it will just plot the free energy of the system as a whole, as one line. And so although you lose the common tangent construction-- it becomes a little bit not obvious where the tangent starts and stops-- you do now get an accurate plot showing the free energy of the system as a function of composition. So this is a straight line right there. That's your common tangent. And we can flip back and forth between these views, between all phases or system. Another funny feature, or you might call it a bug, in Thermo-Calc is that if, instead of plotting per mole, you plot for no normalization-- the actual numbers don't change, but the way that it extrapolates to regions of no data does change. So the actual data has not changed. This is still the solution model for chromium-rich BCC. This is still the solution model for iron-rich BCC. And it's the same data. But for some reason now, when you plot no normalization instead of per mole, it decides to add the zero point and makes a big straight line. And these crossing straight lines-- they're not real data. You should not pay any attention to them. I won't make excuses for Thermo-Calc. It's the way it is. So getting back to the actual thermodynamics, at 700 degrees Kelvin, we expect a spinodal-like system of phase separation between chromium-rich and iron-rich BCC. And that is just what we see. Now let's do it at 900 Kelvin, where we have an intermediate phase. So equilibrium calculator-- I'm going to rename this. I'm going to call it 900 K Calc. And I'm going to make this calculation at 900 Kelvin, one axis, and again, I want my axis to be iron composition. I'm going to change my composition unit to mole fraction. And I'm going to make sure I can plot the results. So I'll call this plot renderer the 900 K Plot. 
And I'm going to plot composition and mole fraction on the x-axis, and the y-axis, again, I'm going to plot Gibbs energy per mole for all phases. Let's see what happens. Looks like the point density is a little low. Instead of plotting 50 points, let's plot 100 points across the axis. I have to start this again. That looks a little better, a little smoother. So here I have a solution model. Point density is a little low-- lack of data. In intermediate phase, lack of data and another solution model. And if I plot this for the system instead of all phases, you now see the solution model is connected with tie lines. And looking back at the phase diagram for 900 Kelvin, solid solution two phase-region, intermediate phase solid solution, two-phase region, solid solution. I was going to finish this and plot it at 1,114 Kelvin, where you can just see that it becomes a single solid solution almost across the entire composition range. I'll do that in a minute if time allows. But I want to pause and ask for questions, either on the thermodynamics or on the software. OK, I want to show you one more neat thing before I come back to this at 1,400 Kelvin. I mentioned, I think in the last lecture, what happens if you were to imagine the nonexistence of a particular phase? I think it was a liquid. I said imagine that we could turn the liquid phase off, remove it from consideration. How would the phase diagram change? Now I'll ask you, imagine if the sigma phase didn't exist. How would this phase diagram change? What would you think would happen? AUDIENCE: Would you just get regular BCC-BCC separation, like for the lower section? PROFESSOR: Right. Let's see that. I want to show you how you can do that. Go up to My Project and go to the System Define. The system definer, we chose the atoms-- chromium and iron. But there's a bunch more information here. Species-- chromium and iron. There's also phases and phase concentration. 
And these are the phases over which the system calculates equilibrium. Each one of these phases is a solution model for which Thermo-Calc has at least some data. So here is kind of like the guts that we've been playing with, the solution models. And I'm just going to uncheck sigma. So now what the system is going to do is everything it did before, except it's not going to include the sigma phase. So you can kind of play God. Now I'm going to perform the tree. So we're solving the taut rope construction here for a world in which the chromium iron system does not have a sigma phase. And so Shane gets that this would look like a BCC region of phase separation, or I'll just call it a BCC spinodal. So let's see. It might take a little bit of time. Oh, look, there it goes. Look at that. So here is the phase diagram of iron chromium in an alternate universe in which the sigma phase does not exist. And so you see we have a spinodal. We didn't get rid of the FCC phase. We didn't get rid of the liquid. But we did get rid of that sigma phase. And what do we think the free energy composition diagrams look like? They look a little more boring. Let's have a look. 700 K? There's a two-phase region phase separation. 900 K-- same thing. Well, the point density is a little poor, but common tangent phase separation. So on problem set eight that goes out later today, we have a number of tasks for you. About half the tasks are here in Thermo-Calc and involve not the iron chromium system but a different system, where we ask you to generate these plots, generate data, then extract the data. Then we ask you to analyze data in a data analysis software of your choice-- Excel or MATLAB or Mathematica or what have you. And then the final part of the problem set is you're going to then simulate the free energy composition diagram and the resulting phase diagrams using a phase diagram simulation tool which we've built for this class, and that I'm going to introduce on Monday.
So if you look at the problem set, the first part you could start on this afternoon. The second part, I advise that you could wait and start on after Monday when we introduce the phase diagram explorer. But what it's going to boil down to is going into Thermo-Calc, looking at a phase diagram, then reverse engineering the phase diagram, understanding what models are underlying the phase diagram, what data do I need to populate those models, and then how do I simulate those models in another piece of software. And by the time we're done with next week and this next problem set, you will all be experts at not just reading phase diagrams, but understanding how they're constructed. So that is almost everything I wanted to get through today. I did not have time to talk about more general free energy composition diagrams for intermediate phases, mainly because the software is too slow to load. It's not going to be terribly important for this P Set. So I'm not too concerned about it. But if you want to know what you're missing, I was going to walk through-- where is it? There's a nice schematic in DeHoff of intermediate phases. And the free energy composition diagrams that intermediate phases in general produce-- I reproduce the picture right here from DeHoff, Figure 10.19: a general binary phase diagram with an intermediate phase. And here's, flipbook style, the way that the free energy composition diagram evolves to give you these intermediate phase diagrams. So just further views of the type of phenomenon which we saw about half an hour ago.
Lecture_3_Process_Variables_and_the_First_Law.txt
RAFAEL JARAMILLO: All right, so let's talk about processes and thermodynamics. So we talked about thermo being this weird thing that can tell you about starting state and the final state. But it doesn't really have a clue as to the process that takes you from one to the other. That is thermo doesn't describe real world processes. But that doesn't mean it's useless. The concept of state functions, which are history independent, allows us to make predictions. So for example, if someone told you the pressure, temperature, and volume of the system at the beginning and then told you the pressure and temperature of the system at the end, with that information you could calculate the volume without knowing the process that took it from A to B. So that's useful. So here's the property of state functions. Any process connecting A to B will give same results for the final state. These are the final state variables, because they are state variables. So you want to calculate something. You want to do the easiest calculation you can, conceptualize the simplest process to calculate it. So in the same way that we choose a system, its components, and its boundaries out of convenience depending on the problem we're trying to solve, you calculate the simplest process to calculate out of convenience, not necessarily the process that nature actually takes. So the simplest processes to calculate are reversible. So reversible process, that's a new concept. That's a new concept for us. And it has a very deep and important thermodynamic meaning. It means a process that does not change the entropy of the universe. That is a system plus its surroundings. And we'll see that in the next couple of lectures. Reversible processes have these properties-- the system plus surroundings are in equilibrium at all times. That's an assumption. This violates our day-to-day notion of an arrow of time. It's really inconsistent with most of our experience. 
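The claim a moment ago -- that end-state variables can be computed without knowing the process -- looks like this for an ideal gas (the gas model and function name are my assumptions for illustration; the lecture leaves the equation of state unspecified):

```python
def final_volume(P_A, V_A, T_A, P_B, T_B):
    """V is a state function: for an ideal gas, P*V/T = n*R is fixed
    by the initial state, so the final volume follows from the final
    P and T alone, with no reference to the path taken from A to B."""
    n_R = P_A * V_A / T_A          # n*R, fixed by the initial state
    return n_R * T_B / P_B
```

Any process connecting A to B, reversible or not, ends at this same volume; that is what makes it a state variable.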
In practice, a reversible process would take forever. So we're imagining a process which doesn't actually exist. But it makes a calculation easier. So in order for a process to be in equilibrium at all times, it has to go arbitrarily slowly. In order for our process to be able to go forwards or backwards, we need to be able to play the movie forwards or backwards without being able to tell which one is forwards and which one is backwards. That violates our day-to-day notion of the arrow of time. And the requirement to go arbitrarily slowly means that in practice, a reversible process would take forever. But it makes calculations really easy. Is that surprising to anybody? Or are we comfortable with the concept of a reversible process? None of your day-to-day processes are reversible. If you think you can think of one, just toss it out, and we can try to figure out why it's not. Those are kind of fun thought experiments. Let's describe processes. In the last lecture, we described systems. Now we're going to describe processes. So things you can do to a system-- you can work on it, you can heat it, or you can add or subtract stuff. These are things you can do. These are verbs in thermo. All of these are verbs in thermo. They're actions that you do on a system. Work, what is work? It's a mechanical way of exchanging energy, a mechanical exchange of energy. So we use w, and what's that? Kilogram meter squared per second squared-- that's the units of energy. Just a little bit of dimensional analysis-- if you separate that out, it's kilogram meter per second squared, times meter, which equals force times distance. That's force, and that's distance. That's fine. In thermo, we normally separate that out a little bit differently. We grab kilogram per meter per second squared. What do I need? What do I need to multiply this by in order to get the same units? What do I need? STUDENT: Volume, meters. RAFAEL JARAMILLO: Volume, yeah, meters cubed. So what's this?
This is a pressure times a volume. So when you take Intro Mechanics, let's say in a physics class, you normally do force-times-distance work. But when you take Intro Thermodynamics, you normally do pressure-volume work. This is PV work, we call it. That's work. What about heat? Well, it's kind of difficult to define, actually. Heat is a process that exchanges energy without mechanical work or mass exchanges. So heat is defined by what it's not. It's kind of a weird thing. And for heat, we're going to use q. So work is w, heat is q. They both have units of energy, joules. So we have described state variables, which are history-independent. But we also have process variables that are process-dependent. So infinitesimal increments of process variables are denoted as-- I'm going to put this in quotes, "inexact differentials." That is δq and δw, written with δ, not d. This is annoying. Don't blame, don't shoot the messenger. This is one really key area where thermo deviates from multivariable calculus. So try not to get confused because it is confusing. There's a mini lecture I recorded. It's up on the website, called The Three d's of Thermodynamics, because it gets kind of hairy. We use a lot of d's to denote differences. So hopefully, that's useful at some point, if not now, then maybe in a week or two in the class. In thermodynamics, this Greek δx means path dependent. That's what it means. So what is a process variable? If you have a state A and a state B, a process variable is a variable which depends on the process it takes to get from A to B. So you might have path 1 with work 1 and heat 1, and path 2 with work 2 and heat 2. And those are different because they're not state variables. They're process variables. If we were on campus, the nice analogy to use is the amount of work you need to get from the second floor of building 1 to the fifth floor of building 32, or just pick a pair like that.
There are myriad ways of connecting those two, because the campus is connected. You can go upstairs and downstairs, you can take elevators, you could get between those two points in a very large number of different combinations. And the amount of mechanical work you need to do will vary with the combination. But the change in your gravitational potential energy between the starting and final point, that's a fixed change. That's an example of process variables versus state variables. But we're not on campus. That's so sad. You're just going to have to imagine all the work you'd have to do to get from building 1 to building 32. So that was infinitesimal increments of process variables. Infinitesimal increments of state variables-- I think I need new markers-- are denoted as exact differentials. So for example, dT, dP, dV, and so on, using regular lowercase d as in a total differential in calculus. All I can do is tell you that understanding that can only come through practice, for sure at the beginning of the class. I'm going to try to switch to the purple so things show up a little better. So let's talk about how we classify processes. So an isothermal process is a process at fixed temperature. What about an adiabatic process? Anybody know? STUDENT: It's like very fast. STUDENT: Does it permeate through a boundary? STUDENT: Heat is 0. RAFAEL JARAMILLO: I heard a couple of things which are equivalent, actually. Somebody said very fast. Somebody else said, no heat flow across the boundary. So the strict definition is no heating across the boundary. In reality, heat will always flow from a hot object to a cold object. But if you've insulated your system, whether it's your house or a beaker in the lab, then that heat flow could be slowed down. So one way of approximating an adiabatic process is to run it quickly. If you run a process very quickly, the entire process can run before an appreciable amount of heat flows across the boundary.
So that's why a fast process can be equivalent, so I like both of those answers. Isobaric-- an isobaric process is at fixed P. And isochoric, we don't use as much, is fixed V. And we'll see that means that there can be no work done on the system if the volume is fixed. And then there is classifying boundaries. We've seen this a little bit so far. But as a practical matter, the way that we set up processes is by defining boundaries. So for example, an insulating boundary-- no, or let's say slow, heat flow. Diathermal-- heat but no mass transfer. Open or closed-- is mass transfer allowed or disallowed? And rigid-- fixed volume. So when we run processes, we can somewhat determine what kind of process we're going to run by setting up the boundary conditions. And this is a practical thing. This is a practical matter. And all of us who run labs, we do this every day in various ways. So classifying processes, classifying boundaries, and so forth-- let's talk about other things associated with processes. What about heat capacity? Most folks have encountered heat capacity before. Heat capacity, we're going to use C. And it is the change in heat, an incremental amount of heat for an incremental change of temperature-- heat increment over temperature increment. Is heat capacity a state variable or a process variable? STUDENT: Process. RAFAEL JARAMILLO: Process is right. Why is it a process variable? STUDENT: Because the dq is path-dependent. RAFAEL JARAMILLO: Yeah, exactly. It's a process variable because it depends on a process variable. One corollary of the postulate that there exist state functions is that state functions can only depend on other state functions. So if something depends on a process variable, it, itself, is a process variable. C is path dependent. dq is a process variable. So it's useful to define heat capacities for common processes.
And the ones that we encounter in this class are CV, which equals dq/dT at fixed volume, and CP, which is dq/dT at fixed pressure. And the reason these are what you encounter is because these are the most useful. How do you keep a system at fixed pressure? Let's say you want to keep a system at atmospheric pressure. What's the easiest way to do that? STUDENT: Use it like a piston. RAFAEL JARAMILLO: It's actually, I would think, the opposite. Expose it to the atmosphere, remove the piston. Because if a system is exposed to the atmosphere, it can exchange volume with the atmosphere. It can get bigger or get smaller. If it's at higher pressure than the atmosphere, it'll push against the atmosphere and return back to atmospheric pressure. If it's at lower pressure, it will get compressed by the atmosphere and return back to atmospheric pressure. So the existence of a free surface regulates the pressure. Constant volume, that would be a case like a piston, where you have something rigid, a really rigid container. You can fix the volume and then run the process. The volume is not going to change. Which one of these is bigger? That requires a little more thought. We're going to increase the energy of the system by heating it. For a given increase in temperature, do I need to heat it more at fixed volume or at fixed pressure? We haven't really gotten there yet. But I'm wondering if somebody wants to venture a guess with an explanation. STUDENT: Maybe for fixed pressure. RAFAEL JARAMILLO: Fixed pressure, how do you figure? STUDENT: I mean, I think if the system is at fixed volume, it wouldn't be able to use the heat as work. But with fixed pressure, it can expand. RAFAEL JARAMILLO: Yeah. So your reasoning is right, but your answer is wrong. So I'm glad-- the reason there is that when you have a system that's at fixed volume, it can't do any work on the surroundings. It's hemmed in.
So all of the thermal energy you put in will go into a change in the internal energy of the system, an increase in temperature. Whereas if you have a system that's at fixed pressure, it can do work. It can expand against its surroundings. So as you pour in energy in the form of heat, it can lose energy in the form of doing work. So it's like filling up a leaky cup. You have to pour more water in because there's stuff leaking out at the bottom of the cup. In this case, the stuff leaking out is energy in the form of work on the surroundings. So CV is smaller. CP is larger. But your thinking about work was correct. C is an empirical observable. That's really important. Maybe you know that, maybe you don't. But the heat capacity of a system tells you so much about that system. But most of those things, you can't calculate from first principles. They need to be measured and then reported. So for example-- here's an entry-level one-- what has a bigger heat capacity, a big pot of water or a small pot of water? STUDENT: They have the same. RAFAEL JARAMILLO: Well, yeah, it depends. Heat capacity could be extensive or intensive. So I think I know what you're saying, water is water. But when we want to regulate the temperature of something, when we want to prevent a temperature change, it helps to have a large heat capacity object nearby, a large thermal reservoir. That's one reason why the temperatures are more moderate on the coasts of continents than inland. Our temperature in Boston doesn't swing nearly as much as it does in Iowa. You could say the same about any inland versus coastal community in the world. And the reason is that the oceans are a big thermal reservoir. They have a larger heat capacity than the Great Lakes do. So our temperature swings are less. What about bread or cheese? Same quantity now, same mass-- bread or cheese? STUDENT: Bread? STUDENT: Cheese, maybe. RAFAEL JARAMILLO: I think it's cheese.
When you eat pizza, the bread is not the one that burns the top of your mouth. It's the cheese. It has a larger capacity. Cheese has a larger capacity. That has to do with, of course, the chemistry. A big pot of water has a larger heat capacity. What about aluminum foil or a potato? STUDENT: Isn't a larger heat capacity an ability to absorb more thermal energy without changing temperature, though? RAFAEL JARAMILLO: Yes, exactly. STUDENT: So isn't the cheese changing more in temperature than the bread? RAFAEL JARAMILLO: No. What it means is that you have to put in more energy from your mouth to cool down the cheese than you do to cool down the bread. So here's another example, which should help. Aluminum foil versus potato-- you bake a potato that's wrapped in foil. STUDENT: Potato. RAFAEL JARAMILLO: You take it out of the oven. Someone said potato. The answer is going to be the foil. When you take something out of the oven, it's wrapped in foil. You can often open the foil. Even if the oven was at 400 degrees, you can open the foil without burning your fingers. You have to avoid the steam. But you could grab that foil and very quickly bring that foil down to room temperature without burning your fingers. But if you grab the potato, you're going to burn your hands. Because it takes more energy out of you, in this case, to cool the potato than it does to cool the foil. STUDENT: Doesn't that mean the potato is actually the one with a high C? RAFAEL JARAMILLO: Larger C, yeah, sorry, sorry. Everything I said, but reversed. Exactly, it's easier to bring down. And there are lots of everyday examples here-- why certain surfaces outside feel colder or hotter than other surfaces on a very cold or very hot day-- lots of examples like this. So what do I mean to say here? C can be calculated for simple, what we call toy problems. But in most cases, especially in cases that we rely on as engineers, measurements are necessary. And I actually did record some numbers here.
So heat capacity in joules per gram Kelvin-- let's see, water. Water has a large capacity. It has to do with all of the vibrational degrees of freedom in water. Aluminum is very small, 0.897. So this right here is why you can touch the aluminum foil, but you have to avoid getting a steam burn when you open up the food you packed in foil. Potato is actually large, close to water, because anything that's food-related is basically set by the water content. Cheese has a lower water content than potato. I grabbed the number 3.15. If you are a gourmet chef, you have a table of heat capacities of things like cheese, and 3.15 is one such number-- I mean, I don't know, for cheddar. But anyway, let's move on. Let's move on. So let's talk about something really important, the first law of thermodynamics. So here's the first law of thermodynamics, as you sometimes encounter it in a physics class. And I'm here to tell you this is totally useless. Because if this is all it were, who cares? I mean, it's interesting, but you can't use that. And that's not how we arrived at it. It's sort of intellectually dishonest, because this is very non-obvious, very non-obvious and rather hard to prove also. How is this useful? The answer is because it applies locally. And this is where the real intellectual thread comes from. So we talk about the energy of a system plus the surroundings. System plus surroundings, you could call universe. It's kind of highfalutin language. Sometimes I'll slip, and I'll say universe, but I don't mean to. System plus surroundings-- the energy of system plus surroundings is constant. But here's where it gets useful. We use the boundary for bookkeeping. And like it or not, a lot of thermo is bookkeeping. So we have a system, we have a boundary, and we have flows of energy, and we can keep track of them. And we can keep track of the forms in which energy can flow.
So there are three ways that energy can flow between a system and the surroundings in 3020. We have encountered two of them so far, about 20 minutes ago. Somebody want to tell me what the three of them are, all three? STUDENT: Convection, radiation, and conduction. RAFAEL JARAMILLO: Nope. Sorry, those are forms of heat transfer. But we are not going to talk about fluid dynamics or radiative processes in any detail in 020. So what you said was scientifically true. It's just not specifically what the three forms are in 020. We've talked about two of them so far. One is heat, which is everything you just described. What's another way energy can flow? STUDENT: Work. RAFAEL JARAMILLO: Work, heat, work, mechanical work. And there's a third way that energy can flow. We haven't introduced it yet. Would anyone like to venture a guess? STUDENT: Mass transfer. RAFAEL JARAMILLO: Mass transfer, right. Matter has energy. And so you can exchange matter and transfer energy that way. So this is the bookkeeping that we're going to do in this class. So this gets me to first law bookkeeping. I wish I had a week to teach fluid dynamics and talk about convection, and then talk about heat transfer, talk about conduction, and do radiative transfer. But that goes well beyond this class. You'll learn about those things in other classes. Or come to office hours, we can talk about heat transfer. First law bookkeeping-- this is where we hopefully resolve some ambiguity between 020 and the other classes you've taken. So types of energy include kinetic, potential, and internal, which we call u. So kinetic energy depends on how fast a system is moving. You know that. So if your system is a baseball, kinetic energy is how fast it's moving. Potential energy depends on where a system is in some reference frame. So again, if your system is a baseball, the potential energy depends on where it is, relative to the center of mass of the Earth. Internal energy is everything else.
It's all of the energy which is independent of the velocity of the system and independent of the location of the system. So again, if your system is the baseball, the internal energy is all the molecular bonds that make up the baseball plus the binding energy of the electrons to the nuclei. So this is the one that we use in thermo. This is the last time we'll talk about kinetic energy and the last time we'll talk about potential energy. We might get to 20 minutes of electrochemistry later in the term, and then potential might sneak back in. But basically, this is where we are for 3020. What about processes that exchange energy? We just answered that: there's heating, there's work, and there's mass transfer. So this is some first law bookkeeping. I hope this helps a little bit as we try to recall what counts and what doesn't in equilibrium thermo. So we can do this mathematically. Energy is a state function, but it's changed by processes. So du equals a change in heat energy, plus a change in mechanical work, plus something new, a term which accounts for a change in mass. So dq equals heat applied to the system. There's a sign convention here. We're going to follow the sign convention in the DeHoff textbook. So if you're heating a system-- let's say that's the endothermic reaction, the instant cold pack-- then dq is positive. dw is work performed on the system. So there's a consistent convention there. These have the same sign. Positive dq increases the energy of the system, positive dw increases the energy of the system. So for example, for the mechanical work that we encounter in this class, dw equals plus or minus P dV, and here I ask: which one is it? This is the pressure. This is the change in volume of the system. So a positive P dV means the system is doing work on the surroundings. It's expanding against the surroundings. So what should be the sign here to stick with our new convention? STUDENT: Negative.
RAFAEL JARAMILLO: Negative, right, it's negative. Because the system is losing energy as it expands-- negative. So it equals minus P dV. A little mathematical note here-- this is a process variable. This is a state variable. This is not an equation of state. This is a differential form of an equation of state. So it's not the same thing. But it has this funny property, that you have, on the one hand, a process variable, and on the other hand, a state variable. This could be considered an integration constant. If you don't care about that comment, ignore that comment. We don't really talk about integration constants in this class, we don't have time. But they are kind of hidden there in the structure of thermo. All right, so we have dq, dw, and mu i dNi-- mu i dNi equals the energy change for a change dNi in the moles of component i, where i has to run over all the different components of the system. So N of i equals the number of moles of component i. And mu of i is defined as partial u, partial N of i, for a fixed-- and we'll get to this-- entropy, volume, and moles of j not equal to i. i is a component label. And this whole thing is called the chemical potential, which is a really central concept for materials science and chemical engineering. So we will be spending an entire term on the chemical potential. That's where I wanted to get to today.
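Collecting the formal statements from this lecture in one place, using δ to mark the inexact differentials of process variables and d for state variables (a summary sketch of the board work, in the DeHoff sign convention):

```latex
C \equiv \frac{\delta q}{dT}, \qquad
C_V = \left(\frac{\delta q}{dT}\right)_V, \qquad
C_P = \left(\frac{\delta q}{dT}\right)_P, \qquad C_P > C_V

du = \delta q + \delta w + \sum_i \mu_i \, dN_i, \qquad
\delta w = -P\,dV

\mu_i \equiv \left(\frac{\partial u}{\partial N_i}\right)_{S,\,V,\,N_{j \neq i}}
```

Positive δq and positive δw both increase the energy of the system, and the inequality C_P > C_V is the leaky-cup argument: at fixed pressure, some of the heat leaks back out as expansion work.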
MIT_3020_Thermodynamics_of_Materials_Spring_2021
Lecture_1_Introduction_to_Thermodynamics.txt
[SQUEAKING] [RUSTLING] [CLICKING] RAFAEL JARAMILLO: So thank you for waiting. So this lecture is an introduction to some basic principles in thermodynamics, and it's meant to be fun. Normally, I lecture on the board. This is much more interactive. You're all holding the text for today, which is Statistical Physics for Babies by Chris Ferrie, and then two demos, an instant hot pack and an instant cold pack that we'll use. So wait until I tell you, and we'll all pop those hot and cold packs at the right moment. So I want to start by reading from the text. So I'll just read this to you, and you can read along with me. This is a great book. All right. So this is a ball. This ball is on the left. Now it's on the move, so we gave it some thermal energy. So it's moving. Now it's on the right, so it moved over here. So now we have six balls. All right. So we have six balls here and a space that's divided by a dashed line in two. So it's clear that they can go left and they can go right. And we gave them some thermal energy, so now they're on the move. So they're bouncing around. So sometimes you might find more on the left, and sometimes you might find more on the right. But you'll almost never find them all on one side. That would be pretty surprising. It doesn't seem impossible, but it'd be surprising if things are random. We all agree with that? So we normally find them like this. It looks pretty evenly distributed around the space.
So what we're going to do is we're going to draw a cartoon of noninteracting gas molecules in a box. So I'm going to start like this. We're going to have a box, and the box has a divider. On the right hand side is vacuum and the left hand side are six gas molecules. And I'm going to give them some thermal energy. So they have a finite temperature. They're zooming around, and they don't interact with each other. They just bounce off the walls. So the first thing we're going to do is we're going to remove the partition. So I'm going to draw a dashed line where the partition was. And at the instant we remove the partition, there's still over here on this side bouncing around. So remove partition. So we remove the partition. Now, what's going to happen? So this is what's going to happen. So what I'm going to say is the next step is we just wait. We don't do anything. We just wait. And we probably don't have to wait very long. We don't have to wait very long before the gas molecules are randomly distributed around the space, zooming this way and that. So you all buy this? So let me tell you what happened in thermodynamics speak. So this is a key word in thermodynamics. This happens spontaneously. The volume of the system here at this moment, when I removed the partition was small, and spontaneously, without us having to do anything, it got bigger. So all we had to do is wait. It was a spontaneous process. So you could say the volume spontaneously adjusted within the limits that we imposed, that is the box is still there, to increase the amount of disorder in the system. So now I'll ask you a question. Why? STUDENT: Because the probability is higher than the number of combinations is but is greater number of [INAUDIBLE] atoms. The atoms are [INAUDIBLE]. RAFAEL JARAMILLO: That's exactly where we're heading. Let me rephrase that. That was the correct answer. Because the more disordered state is more likely. So we're using a frequentist definition of statistical likelihood. 
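That frequentist claim — the evenly-mixed state wins because it has the most combinations — can be checked by brute counting for the six-molecule cartoon. Here is a quick sketch of my own (not from the lecture), using Python's binomial coefficient:

```python
from math import comb, log10

# Number of ways to put k of the 6 distinguishable molecules on the left.
ways = [comb(6, k) for k in range(7)]
print(ways)             # counts for 0, 1, ..., 6 molecules on the left

# Chance of catching all 6 on one particular side: 1 way out of 2^6 states.
print(ways[0] / 2**6)   # small, but you could take odds on it

# For a mole of molecules, the same chance is (1/2)^N_A. Its log10:
N_A = 6.022e23
print(-N_A * log10(2))  # so far below zero that it never happens
```

The 3-and-3 split has the largest count, and the all-on-one-side probability collapses from "unlikely" at 6 molecules to "geometrically small" at a mole, which is the point the book is about to make.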
All right. Back to the book. So now he does a better job explaining this than I did, so let's go back to the book. All right. So now we're on page 11, where the balls now have different colors. So now we have six balls. Now, each ball is a different color. So there's only one way you can have all the balls on one side. That way is to have all the balls on one side. What if we want to put five balls on the left and one ball on the right? How many different ways can we do that? Six different ways. The ball on the right can be the red, yellow, blue, orange, green, or purple ball. So there are six ways to do that. Now, if we started counting up the different ways to have two balls on the right and four balls on the left, you would find there are 15 different ways to do that. But what was the most likely situation? Three and three. It was sort of even. So if you count up all the different ways you can have three balls on the left and three balls on the right, you'll find that there are 20 ways. So it was 1, 6, 15, 20. So that seems too simple, right? But that's the answer. The reason why this is the expectation-- this is our expectation from everyday life-- is because it's more likely. There are more different combinations that look like this than any other type of state. And in this case, this has one combination like this and 20 combinations like this. So you would say there's something like a 1 in 20, roughly 5%, chance of finding all the balls on the left. That doesn't sound that small, right? You can take odds on that. But when you start doing this calculation for a mole of atoms or a mole of molecules, you'll find that the likelihood is geometrically small. Meaning it's never going to happen. So later on in the course, much later, we're going to come back to these concepts and do a calculation of something like, what is the likelihood of finding all the air in this room suddenly over here? All right. None of us would be very happy about that.
Fortunately, the likelihood is so small, you'd have to wait many, many, many ages of many universes, and you still probably would never see it happen. So, OK with that? So physicists-- but I should say scientists, because this book was written by a physicist, so there's an implicit bias in here-- physicists call the number of different combinations entropy. So that's what entropy is. So a state with a low number of combinations is a low entropy state-- like a state with all the balls on the left is a low entropy state. A state with a high number of combinations is a higher entropy state. So even if the balls all started on one side, as in my example, they're going to end up here, because this is more likely. It's an increase in entropy. So the second law of thermodynamics says that things move from low entropy to high entropy. We'll rephrase that later on in a couple of weeks and be a little more technical about it, but that is one form of the second law, that entropy always increases in the universe, never decreases. So that explains why we see these more likely scenarios. You can, if you like, push all the molecules back into the left, but the key word there was push. To do that would take some work. So you can do work on a system and decrease its entropy. You can clean a room. So you can decrease the entropy of a finite system, but in so doing, you're still increasing the entropy of the universe. You're just shifting it around. So things naturally go from organized to messy, just like my kids' rooms, and now statistical physics. So do you feel like you know statistical physics? I love this book. Now we're going to fill in some of these concepts on the board. Any questions on the reading? So if anyone didn't get a copy, I think I owe you one. How many people did not get a copy? 1, 2, 3, Thomas, 4. Anyone else? All right. So we'll get more copies for you. Now we're going to talk about solutions. So yes, [INAUDIBLE].
STUDENT: Are we doing the mean [INAUDIBLE] RAFAEL JARAMILLO: So we'll cover this in a couple of weeks, but the quick answer is that the entropy of a system left alone never decreases. It either stays the same or increases. But if a system is not left alone, if it actually can interact with its surroundings, entropy is something that can be exchanged. So I can actually-- if I were a system, I could lower my entropy by giving some to you, or you could decrease my entropy by taking some. That's-- STUDENT: So when you pull down the [INAUDIBLE] where with [INAUDIBLE] RAFAEL JARAMILLO: When you work? STUDENT: When you clean your room, [INAUDIBLE] RAFAEL JARAMILLO: The work you're doing generates heat. You cool off. That generates entropy. The most famous example of this is Maxwell's demon. It's a thought experiment. You can fall down quite an internet rabbit hole, or a textbook rabbit hole, reading philosophical-style treatises on Maxwell's demon. Maxwell's demon is a little demon that sits in the middle of this box and only lets gas molecules go to the left. And so Maxwell's demon is like a little turnstile for gas molecules. And it's called Maxwell's demon because it was thought of as a thought experiment to disprove the second law. And there have been a hundred-plus years of physics papers showing-- for example, through an information theory approach-- that Maxwell's demon has to know that the molecule is approaching. Therefore, it has to receive at least one photon, and you can calculate the physics of the entropy generation of that photon generation and absorption by the demon, so it can make the decision to open the gate. You can't get around the second law-- we'll get there. I don't know if that exactly answered your question, but it was a little-- we'll get there again with more time to spare in a couple of lectures. So just Google Maxwell's demon. It's kind of fun. So now we're going to talk about solutions.
So here comes the first demo, maybe the last demo. I don't do a lot of demos in this class, but this one is so easy. I did the same demo in 3001. So many of you were in 3001. So I'm sorry I'm ripping myself off here, but it's topical. All right. So there is water. There's water in the dish. You can't see it. So what I'm going to do is I'm going to make a solution. I'm going to mix two substances. One substance is food coloring, which is, of course, also water, but pretend this is some other substance-- it's water with some dye molecules solvated, so in solution-- and the other substance is water. So I'm going to mix them, and you can see what happens. So let's see. So while this is going-- you guys know what's going to happen, right? I find it kind of fun and calming to watch it happen. All right. So over here, we're going to talk about solutions. So we're going to talk about solutions for the case of noninteracting molecules. So I'm going to draw some pictures. All right. So let me say 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15. So here's the starting condition. We have some uniform material. It's made of some molecules. I don't know. All right. So here it is. Here's the uniform material in a box, and these are the molecules that make up the material, and its properties are uniform. So now what we're going to do is we're going to label certain molecules. Can you guys see that? Barely. We're labeling certain molecules. So what's one way you could label molecules? Physically. Yeah. STUDENT: [INAUDIBLE] position. RAFAEL JARAMILLO: Mhm, no. What I mean is we're going to make a physical change to the molecule, so it's carrying a label. STUDENT: [INAUDIBLE] little fluorescent tag. RAFAEL JARAMILLO: Some kind of fluorescent tag. You could make it a different isotope. There are different ways to label, or do it just as a thought experiment. You just go in and you take, let's say, five of these molecules and color them blue, to pick a color.
Kind of what we're doing, 1, 2, 3, 4, 5. So I went in and I labeled those molecules, and now I'm going to wait. What do I expect to happen? Yes please. STUDENT: [INAUDIBLE]? RAFAEL JARAMILLO: Yeah, you expect it to even out. So the case of noninteracting molecules. Initially, everything looked even, and then we did something. We made it uneven, and now you expect it to even out. So I'm going to draw again, 15. And you expect the labeled molecules to spread out, maybe like that. They're not on one side anymore. You get it. I didn't want to spend all day up here drawing circles. It's more convincing if I do like 20 or 30 circles, but I decided to stop at 15 circles. So what's actually happening-- I mean, you understand that, and you believe that. I'm just going to describe that in thermodynamics language. So what happens is diffusion-- the process by which this concentrated area of labeled molecules becomes diffuse is diffusion. And it happens spontaneously. I don't have to go over there with my atomic tweezers and move the blue food coloring molecules until they all look even. I think by the end of this course period, it will be uniformly blue. Takes about half an hour or so at the temperature of an overhead projector. So that's the diffusion process. And again, that word spontaneously-- it's happening spontaneously, and it's mixing the labeled molecules. So what can you say about the entropy of this situation? The entropy from the second frame to the third frame, is it going up or down? It's going up. Something's becoming more disordered. That's true. The entropy is going up. There are more different configurations that look like this, all mixed up, than configurations that look like this, everything on one side. So it's more likely-- it's more entropy. All right. So now I'm going to erase this board and redo it for the case of molecules that now are going to interact with each other. Yes please.
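The counting behind "more configurations, more entropy" can be checked directly. Here is a minimal sketch in Python (my illustration, not from the lecture), assuming the 15-molecule, 5-label cartoon above and Boltzmann's S = k ln(omega):

```python
from math import comb, log

N_SITES = 15    # total molecules in the cartoon
N_LABELED = 5   # molecules carrying the label

# Microstates with all 5 labeled molecules confined to 5 specific
# sites on one side: exactly one arrangement.
omega_confined = 1

# Microstates with the labeled molecules free to sit anywhere:
omega_mixed = comb(N_SITES, N_LABELED)  # 15 choose 5

# Boltzmann: S = k * ln(omega). Entropy change of mixing, in units of k:
delta_S_over_k = log(omega_mixed) - log(omega_confined)

print(omega_mixed, round(delta_S_over_k, 2))  # prints: 3003 8.01
```

The mixed state is not favored by any force here; it is simply 3003 times more numerous, which is exactly the "more configurations, more entropy" argument.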
STUDENT: [INAUDIBLE] spontaneously I thought before when we're talking about the [INAUDIBLE] indicate instantaneously or-- RAFAEL JARAMILLO: No, it doesn't mean instantaneously. STUDENT: [INAUDIBLE] RAFAEL JARAMILLO: Exactly. That's right. It does not mean instantaneously. There are many good examples of this, of systems that are out of equilibrium that are slowly relaxing. If you go to very ancient cathedrals, you'll see that the windows appear to be pooling at the bottom of the window frames. They're slowly flowing. Things seem static. In the case of gas molecules, it'd be about hundreds of nanoseconds in a reasonable space. So the timescales might vary. But it's a good question. It doesn't mean instantaneous. It just means without outside influence. Thank you. So now let's do the case-- my favorite example here is-- there's something called the pitch drop experiment. This is at a university in Australia, where they took some pitch-- tar-- and they put it in a vessel. And they poked a hole in the bottom of the vessel, and they put a video camera on it. And it's been running for decades. And I don't remember exactly, but once every eight or nine years, you get a drop. There's a webcam for this thing. You can go and watch it. So spontaneously, it's dripping, but it's pretty slow. So now we're going to do the case of interacting molecules. In the previous example and the example you're watching over there, there's effectively no interaction between the labeled molecules and between the labeled molecules and the unlabeled molecules. There aren't a lot of intermolecular forces between the blue dye and, let's say, a distant water molecule, or between two distant blue dye molecules-- not a lot of interaction. So what about when there is interaction? Let's do the cartoon case, and then we'll start-- this is how we get to the hot and cold packs. So we have a uniform material, all right, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15. All right.
So we're starting again with the uniform material, and now I'm going to-- let's see-- label some. So just as before, I'm going to label some. This time, the ones I'm going to label are-- all right. I'm going to label them in a way that is more like what you expect from the equilibrium situation for the noninteracting particles. Label some. So now I've labeled them. I've labeled them quasi-randomly, and now I'm going to make a caveat: label some so that they attract. So I've labeled these with some sort of a molecular tag that likes to be close to other like molecular tags. So there's an attraction here. So now we're talking about forces and interactions between molecules. So what happens if I label these so that they interact and then I wait? What do you expect to happen? STUDENT: They only attract themselves? RAFAEL JARAMILLO: Yeah, they're only attracting. So it would be as if the blue dye molecules were attracted to other blue dye molecules in that case. STUDENT: Then I'd assume separation. RAFAEL JARAMILLO: So if they're attracted to each other and there are no other forces in this-- let's see-- you might expect after waiting for some time, that now the molecules have clustered. So in this case, interatomic or intermolecular-- if these were atoms, I would talk about interatomic forces; if they're molecules, I'll say intermolecular forces; same thermodynamics-- interatomic or intermolecular interactions cause spontaneous unmixing. All right. So we talked about mixing, and now it's unmixing. So this can happen too. And there'll be plenty of examples. Maybe you're already familiar with some. And you can also-- I won't draw this-- you can also consider the case of repulsive interactions. So let's say that those molecules, instead of attracting each other, were repelled from each other. Think about how you sit in a lecture room when it's an exam and we told you not to sit next to anybody. Do you become completely disordered or more ordered? STUDENT: More ordered.
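Why attraction drives unmixing can be seen by counting neighbor bonds. A toy sketch (my own illustration, not the lecture's), assuming each adjacent labeled-labeled pair in a 1D chain contributes an attractive energy -eps:

```python
def n_like_pairs(sites):
    """Count adjacent labeled-labeled (1,1) pairs in a 1D chain."""
    return sum(1 for a, b in zip(sites, sites[1:]) if a == 1 and b == 1)

def interaction_energy(sites, eps=1.0):
    """Each labeled-labeled neighbor pair contributes an attractive -eps."""
    return -eps * n_like_pairs(sites)

clustered = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]   # labels bunched together
dispersed = [1, 0, 1, 0, 1, 0, 1, 0, 1, 0]   # labels spread out

# The clustered chain has 4 labeled-labeled bonds, the dispersed one has
# none, so clustering has lower (more negative) energy: attraction
# favors the unmixed arrangement.
```

Whether clustering actually happens then depends on whether this energy gain beats the entropy cost, which is exactly the balance the rest of the lecture sets up.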
RAFAEL JARAMILLO: More ordered. You become more ordered, because if I have a person, I know that I only have to go two chairs over to find another person, and I go two chairs over to find another person. That's order. So is that an increase in entropy or a decrease in entropy? STUDENT: Decrease. RAFAEL JARAMILLO: That's a decrease in entropy. So from this central frame to the right-hand frame, is that an increase in entropy or a decrease in entropy? STUDENT: Decrease. RAFAEL JARAMILLO: Decrease. It's a decrease in entropy. And the case of the students taking an exam in a lecture hall, that's also a decrease in entropy relative to, say, right now, where you look more disordered than that. There are some clusters. There are some lone people. It's more disordered. Yes please. STUDENT: So then would the balance in entropy, like you said how everything always has to increase, would that come from doing work to label some molecules [INAUDIBLE]? RAFAEL JARAMILLO: Not quite, but balance is the key thing. We're heading exactly towards this idea of balance. But thank you for saying balance. So now everyone-- let's see. So you've got this idea of disorder and a basic notion of entropy, and now we've started to introduce the fact that molecules can interact with each other, and an interaction is typically characterized by an energy of interaction. So there's energy involved. And I think these are concepts which you're already familiar with. So we're going to now review endothermic and exothermic reactions in this context. So who here has heard of endothermic and exothermic? Everybody raised their hand, except for [INAUDIBLE]. All right. So now we get to play with the hot packs, but wait till I say you can play with the hot packs. All right. So we're going to start with an endothermic process. An example of an endothermic process is ammonium nitrate dissolving in water.
So I'm going to draw it like this: n moles of ammonium nitrate plus n moles of water are going to react, and the reaction product is going to be a solution. So even though this just looks like you dissolved or you just mixed, you can still talk about it as a reaction. So this over here is pure solid. This is what we call in thermodynamics a pure material, and it's in a standard state. Ammonium nitrate will sit in a jar; it's solid at room temperature and atmospheric pressure. All right. And this over here is a pure liquid. So this is a pure solid and this is a pure liquid. And what's this on the right-hand side? STUDENT: Mixture? RAFAEL JARAMILLO: It's a mixture. It's a liquid solution. In thermodynamics-- and this is terminology you have to get used to-- the word solution and the word mixture are basically interchangeable. Solution to us does not mean it's a liquid. Solutions can be liquid, gas, or solid. In fact, in this class, usually, they're solid. So just get used to it. There's no way around it. In this case, it's a liquid solution, or mixture. So I need to get more board space here. So this process absorbs a finite amount of heat from the surroundings. It's an endothermic process. So it absorbs an energy, Q-- this is in units of joules; it's an energy-- from the surroundings. Sometimes this is called the heat of solution. Later on in this class, we'll show that this is related to the enthalpy of solution. We'll start using the word enthalpy, but for today's lecture, it's enough to think about heat and energy. So in going from left to right, does the energy of the system increase or decrease? STUDENT: Increase? RAFAEL JARAMILLO: It increases because it's endothermic. It got energy from the surroundings. Does the entropy increase or decrease? STUDENT: Increase. RAFAEL JARAMILLO: It increases. And it's OK if you didn't have an intuition for that now.
The reason it increased is because on the left-hand side, you had a solid phase, and then every n moles of that solid went into a liquid phase. And liquids tend to have much higher entropy than solids. They're more disordered. A solid is like everyone at every other chair. A liquid is somehow more disordered. So that was right. So in this case, the energy and entropy both increase. So before, we learned that nature likes to increase the entropy of the system. That would seem to drive the reaction to the right, but if you've taken a physics class, you were probably told that nature likes to decrease the energy, which would drive the reaction to the left. All right. And both of those things are true in the right context. So what gives? All right. So we're going to find out what gives. So take your instant cold packs-- not the one that says warm relief, the one that's blue. All right. So this is a Dynarex Instant Cold Pack. This is a snapshot from the Safety Data Sheet, Dynarex Instant Cold Pack. If you're ever looking for your first choice for gloves, I guess it's the Tillotson Health Care Corporation. All right. So this is the SDS, and this is telling you that it's made of ammonium nitrate pellets surrounded by a small rupturable plastic bag filled with water. So we're going to figure this out. Does this reaction stay on the left-hand side or does it go spontaneously to the right-hand side? So what do you think? STUDENT: To the right side. RAFAEL JARAMILLO: Everyone who thinks it's going to go to the right, shout or raise your hand. Hey, hey. So squeeze together here. So here we go. I'm going to squeeze together. Aah, cold. All right. So is it cold? Yeah. So did it spontaneously go to the right or the left? Where does it go? It goes to the right. So it's absorbing heat from your hands. You feel cold. All right. So in this case, it spontaneously went to the right, which means entropy increased, but the energy also increased.
So it's like we weren't sure what was going to happen, but in this case entropy drives the reaction-- I'm being really sloppy here-- it's the entropy consideration. Nature thinks about it, figures out the right balance, and says, entropy is going to win. We're going to make a solution. So now we're going to do the opposite case, which is an exothermic process. That was satisfying, right? I like that. So, exothermic process: for example, crystallization of sodium acetate from a solution. So we're starting with a solution, and we're going to crystallize sodium acetate, CH3COONa. The way I wrote this was like this: (CH3COONa)x (H2O)z reacts-- the reaction goes with some heat, Q-- and it becomes y CH3COONa-- I'm going to put a little (s) there because it came out of solution, formed a solid crystal-- plus (CH3COONa)x-y (H2O)z. So someone tell me-- well, I told you this is the solution. This is a solution. It's a liquid solution of x moles of sodium acetate in z moles of water. All right. Someone repeat for me: what is this material? What is this phase? STUDENT: Solid sodium. RAFAEL JARAMILLO: Solid sodium acetate. All right. And what is this material over here, this phase? STUDENT: Liquid. RAFAEL JARAMILLO: It's a liquid. Is it a pure material? STUDENT: Still solution. RAFAEL JARAMILLO: Still a solution. So we have solutions on both sides, but they have different concentrations. We've conserved moles. We haven't created or destroyed matter. So this is a liquid solution, but in the hot pack we're about to use, it's supersaturated. So ask me questions in a minute about what that means. We'll come back to that concept later in the class. It's a supersaturated liquid solution, so it's non-equilibrium. Then over here, we have a pure solid, and over here, we have a liquid solution. And what do you think would be the right term to characterize this liquid solution?
A liquid solution in equilibrium with the solid solute is a-- STUDENT: Saturated? RAFAEL JARAMILLO: Saturated solution, saturated. So if that didn't come to your tongue right away, that's fine. That's a concept which we're going to cover in detail a number of weeks from now. All right. So we start with a supersaturated solution, and we get a saturated solution. So I told you this was exothermic, so I'm not going to ask you what's the sign of Q. This process releases heat energy Q to the surroundings. All right. So would you say the energy of this system increases or decreases as it goes from left to right? STUDENT: Decreases? RAFAEL JARAMILLO: Decreases, exothermic. It lost energy. So energy-- all right. What about the entropy of the system? Does it increase or decrease from left to right? STUDENT: Decreases. RAFAEL JARAMILLO: Decreases, right? It decreases. You started with all the sodium acetate in a liquid state, and then you took some of that and made it a solid. And I told you before that solids tend to have lower entropy. So the entropy decreases. So energy and entropy-- they decrease. Key is that they both decrease. All right. So nature likes to maximize entropy, but it also likes to minimize energy. So what's going to happen? So, demo number two. So grab your instant hot pack. Let's see. There we go. So this is an instant hot pack from Danyang Rapid Aid Hot and Cold Therapy Product Company Limited, and it is made up of sodium acetate and water. That's what's in here. So I'll tell you what's actually-- I'll show you a video of what this might look like in a minute, but let's test it. Let's do the experiment. Let's figure out if it goes left or right. So I did this the other day. It's like, fold top to bottom to pop inner fluid bag and shake. So I'm folding the top to the bottom. Certain amount of strength required to be in this class. Aah, there we go. Oh, it's hot. This is like a PE credit. I should get PE credit for this. All right.
Who's got popping? I hear some popping sounds. It gets hot, right? So what's going on? What happened? Some people are still struggling. It's actually really tricky. Those of you whose bags popped, can you tell me what happened? Somebody tell me what happened. STUDENT: It warmed up. RAFAEL JARAMILLO: It warmed up. It got hot. So where's equilibrium here? To the right or to the left? STUDENT: To the right. RAFAEL JARAMILLO: To the right. So it's weird, right? Entropy decreased. So in this case, unlike the previous case-- in the previous case, entropy seemed to win. Nature got to increase its entropy. But in this case, energy seems to win. Nature got to decrease its energy. So the balance was different. Let me show you a video of what these look like. All right. So we've seen two examples, and we've done two experiments. And we've seen that in one example, entropy seems to rule the day, win the day, but in the other example, energy seems to rule it. I forgot which one I said first. So on the one hand, we have entropy, or disorder, and on the other hand, we have energy, or what we will more commonly use in this class, which is enthalpy. They're not identically the same thing. We'll cover enthalpy in plenty of detail in later lectures, but for today, we can use them interchangeably. So on the other hand, we have energy or enthalpy, and this question of balance is everything. So here's a plank, and there's a fulcrum. And this is basically thermodynamics. Thermodynamics is nature's way of balancing entropy with enthalpy. And so what we're going to do is spend a semester learning how to calculate the balance for all sorts of different physical situations, mostly with solids but also reacting chemicals, liquids, and solid-gas systems-- systems of interest in modern materials science. So what happens when we get the final answer? So I'm going to move on.
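The plank-and-fulcrum balance has a formula behind it, which the course develops later: the Gibbs free energy change, dG = dH - T*dS, with a process spontaneous when dG < 0. Here is a quick sketch; the numerical dH and dS values are rough, assumed magnitudes for illustration, not data from the lecture:

```python
def delta_G(dH, dS, T):
    """Gibbs free energy change, dG = dH - T*dS.
    dH in J/mol, dS in J/(mol K), T in K. Spontaneous if dG < 0."""
    return dH - T * dS

T_room = 298.15  # K

# Cold pack: ammonium nitrate dissolving. Endothermic (dH > 0), but the
# entropy gain wins. Values are approximate, assumed for illustration.
dG_cold = delta_G(dH=+25.7e3, dS=+108.0, T=T_room)

# Hot pack: sodium acetate crystallizing. Entropy drops (dS < 0), but the
# energy release wins. Again, illustrative values.
dG_hot = delta_G(dH=-19.7e3, dS=-40.0, T=T_room)

# Both come out negative, so both happen spontaneously -- for opposite
# reasons: one is entropy-driven, the other enthalpy-driven.
```

The sign of dG is the "answer" that a phase diagram summarizes for every temperature, pressure, and composition at once.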
I have a couple of slides to share now to introduce you to the concept of phase diagrams, which is the way that we communicate with each other in materials science what the answer is, what's the balance. Before I move on, do we have any questions about these hot and cold packs, endothermic and exothermic processes? So you might not know in advance the balance. One chemical system we just showed you, at room temperature and atmospheric pressure, preferred to maximize entropy; the other system, at the same set of conditions, preferred to minimize energy. Fortunately, there's a rigorous theoretical framework to calculate the balance, and that's what we're here to learn. Once you've calculated the balance for a given material, you want to communicate that answer to other people. And this is where phase diagrams come in. Phase diagrams are the visual tool by which, within materials science, we communicate what that answer is, what nature will prefer at a given set of conditions. So I want to show you some examples of phase diagrams. If I have one deliverable in this class to you when you walk out of here in December, it's that you know how to use phase diagrams and you know where they come from. That's the core of thermodynamics in materials science, unlike, let's say, mechanical engineering. So this is the water phase diagram. Let's see. I assume that most people have seen the water phase diagram before. What's cool about this water phase diagram is that it covers a very large range of pressure. So this is pressure on the y-axis, and the x-axis is temperature. And the colors here show you what the right balance is between enthalpy and entropy. So for example, at a given pressure, let's say atmospheric pressure-- where is that? Here. When you heat up, you go from ice, which is a low-entropy state, to water, which is a medium-entropy state, to-- came down here-- to vapor, which is a high-entropy state. On the other hand, it turns out pressure goes the other direction.
Pressure, we haven't talked about volume, but increasing pressure favors the smaller-volume material. It makes sense, right? So as you go from the vapor to the solid, and then to these other phases of ice, which you might find in the cores of distant planets, the volume per mole becomes smaller. The crystal structures are different. They are different materials. So this is kind of neat. You have 11 phases of ice. I think there are 12 known now. They're labeled with these different Roman numerals. And so this is one thing that we get used to in thermodynamics: that there are many different solid phases of a material. They're in a sense different phases. They're different materials. They behave differently. They have different structure, and therefore, they have different properties. I was just speaking with one of my UROP advisees who-- she spent her summer at JPL testing the mechanical properties of drill bits on different phases of ice. Why? Because they're going to land a rover on Europa, and they want to drill through-- they want to go ice fishing. They want to drill through the ice crust of Europa to get to the liquid underneath. And they might expect different phases of ice than the normal terrestrial ice that we're familiar with. And those will have different properties, and therefore, they will wear the drill bit differently. And they have to know how the drill bit will wear. You'd hate to fly all the way to Europa and have something like a drill bit failure ruin the mission. So it's kind of interesting. She told me about this just before this class. So here's another example. Here's a binary phase diagram. Here, the y-axis is temperature, and what's the x-axis? STUDENT: [INAUDIBLE]? RAFAEL JARAMILLO: I heard something like concentration, which is about right. It's composition. So this is a complicated diagram which you're going to know like the back of your hand when you walk out of this class. So this is the copper-zinc system. This is called the liquidus line.
Above this line, the system is a uniform liquid mixture. Below this line, there's a slew of solid phases: alpha, beta, gamma, delta, epsilon, eta. Which phase or combination of phases-- this region, I know, is a mixture of alpha and beta; you'll learn that. How the system behaves depends on where you are in temperature, pressure, and composition. And this is an equilibrium phase diagram, well established, because that's brass. It's an important alloy. And we'll learn how to calculate these phase diagrams. There are a lot more types of phase diagrams. That's a binary phase diagram, a classic eutectic. Phase diagrams don't have to be composition, temperature, and pressure. They can be electrical potential and pH. So a Pourbaix diagram, like we would use to predict corrosion or to optimize a battery, gives you the phase that you might expect as a function of aqueous solution pH and electrochemical potential, or electrical potential, relative to a standard electrode. Here's a ternary phase diagram. So here, instead of two elements or pure materials, we now have three: titanium, tungsten, and carbon. So there are going to be all sorts of high-performance titanium-tungsten alloys in there. Here, there is no more room left on the page to do temperature and pressure, so those are in some sense in the third and fourth dimensions, but they're there. And this is something that's near and dear to my heart because I work on a lot of oxide and sulfide electronic materials. This is a Richardson-Ellingham diagram, which isn't quite a phase diagram, but it does tell you whether nature prefers the metal or the metal oxide as a function of temperature and oxygen partial pressure. So this is very useful if you are, for example, making a transistor-- you have to know how to control the oxidation process of silicon-- or if you're smelting, or any number of other interesting processes that are important.
So we're going to learn how nature finds this balance, and we're going to learn how to interpret these phase diagrams. But in practice, calculating one of these phase diagrams would take you a really long time, and you'd get a paper out of it. And you'd feel very accomplished, but what if you're out there in the world and you have to make engineering decisions, and you have 7 minutes before your boss asks, what percentage carbon do we want in this steel? All right. You don't go to the blackboard. You go to computerized tools. So the idea that you could calculate phase diagrams with a computer, that idea is called CALPHAD. It's kind of funny that that idea would have an acronym, but it does. It's called CALPHAD. And there are many CALPHAD software packages out there. What they do is they take materials data from databases, which often are proprietary, and they calculate the balance for a given temperature or pressure or composition or pH or electric field or what have you, and they tell you the answer. So these are really indispensable once you leave here and you go off to your next position, whether it's in research or industry or what have you. And so we're going to spend a little bit of time in this class getting familiar with one of these, Thermo-Calc. The reason why we're going to get familiar with Thermo-Calc is because they have a free educational version, and it's relatively user friendly. So you have a couple of weeks until we're actually asking you to use Thermo-Calc in your p-sets, but if you want to, just download this on your computer, install it, get ahead of any problems-- it tends to work well on both PCs and Macs-- and play around a little bit. So the thermodynamics section of this course basically has three parts. The first seven lectures introduce the concept of equilibrium. That is this balance. How does nature determine the equilibrium situation, the most balanced situation, for any given set of conditions?
And then we're going to apply that concept. We apply the concept of equilibrium to increasingly complex physical systems to calculate these phase diagrams-- to figure out where they come from, how to make them, how to read them, how to use them. And then at the very end of the course, we're going to come back to some foundational material. We're going to come all the way back to statistical physics for babies and do some slightly more sophisticated analyses of entropy and how it relates to, let's say, configurations of molecules-- do some foundational work. We might even mention Carnot cycles at the very end of class. I already told you about the resources online. So I'll leave you with this. Arnold Sommerfeld is one of the fathers of quantum mechanics. The point is that he was a smart person, no dummy when it comes to science. And this is a very famous quote. There's a website devoted to quotes about thermo, by the way, but this is the one that gets repeated most often. So I'll read it. He said, "Thermodynamics is a funny subject. The first time you go through it, you don't understand it at all. The second time you go through it, you think you understand it, except for one or two small points. The third time you go through it, you know you don't understand it, but by that time, you're used to it. So it doesn't bother you anymore." And this is consistent with most people's experience of learning thermo. This is a bit of a disclaimer, because this is the first time you go through it. So this is a tricky subject, and you shouldn't feel badly about yourself if you leave this course thinking, I know how to use a phase diagram and Thermo-Calc, that's neat, but what just happened? All right. That's perfectly natural.
MIT 3.020 Thermodynamics of Materials, Spring 2021
Lecture 12: Case Studies, Saturation Vapor Pressure
PROFESSOR: So I called the lecture Case Studies. The first case we're going to do is weather. And the second case we're going to do is particle physics. So this is the case of weather. OK, air always has some water in it, right? Always. All right, let's define some terms. Relative humidity, or RH for short, equals the water vapor pressure normalized to its saturation pressure. In other words, RH, which is normally expressed as a percent, equals 100-- because it's a percent-- times P_H2O, the water vapor pressure, over P_sat, the saturation vapor pressure: RH = 100 * P_H2O / P_sat. Maybe you knew that already from looking at the weather report. Another term: dew point. Who knows what the dew point is? Can you tell me? There's a chat. I missed a chat. Let's see. [INAUDIBLE] Thanks. All right-- AUDIENCE: It's like 4 Celsius? PROFESSOR: Sorry. AUDIENCE: Just 4 Celsius? PROFESSOR: I mean, but what's the concept? What is it-- what does it capture? AUDIENCE: Equilibrium of atmospheric pressure and vapor pressure. PROFESSOR: Yeah, those concepts are in there. Anybody else? It's the temperature at which water at its current vapor pressure would condense. So this is a little bit of a complicated concept, even though it's right there in your weather report. You take the current pressure of water, and you find that temperature for which the current pressure of water equals the saturation vapor pressure. Why is it called the dew point? Please, somebody give it a shot. Why is it called the dew point? Where does that word come from? AUDIENCE: So isn't that when water becomes a liquid form on grass and stuff. So it forms dew. PROFESSOR: Yeah, dew is like wet condensation, typically on grass or leaves. Why does that-- it's not that it rained overnight, right? When you come in the morning, the grass is wet, especially in the summer, even if it didn't rain. What happened? AUDIENCE: The water condensed onto the grass. PROFESSOR: The water condensed onto the grass.
It spontaneously condensed onto the grass because the temperature fell below the dew point. So there was some humidity in the air. During the day, the dew point was lower than the current temperature. In other words, water was below its saturation vapor pressure. As the temperature drops, the saturation vapor pressure drops. And at some point, those cross. And then water condenses out and forms dew. So when it's raining, water is in two phases, vapor and liquid, right? And P_H2O equals P_sat: 100% humidity. When it's raining, the humidity is 100%. And the dew point is just the temperature. Otherwise-- on a sunny day, say-- the water vapor pressure is lower than its saturation vapor pressure, the relative humidity is lower than 100%, and the dew point is greater than or less than the temperature? AUDIENCE: Less than? PROFESSOR: Exactly. These are important concepts. So please dwell on these if it's not immediately apparent. Let's look at some data. All right, so here's what I did. I went to Weather Underground, and I grabbed some data for a typical day-- last Tuesday. So it gives you plots of random things, and then it gives you a table of time, temperature, dew point, humidity, then wind, wind speed, wind gust, pressure, and all that. Then I grabbed that data and I started plotting it. All right, so first of all, here's the data that it gave me. So the x-axis here is the hours of the day, and the y-axis is temperature in Fahrenheit, to make things familiar for us. So the temperature-- this is a typical day. The temperature started low, got warm, and then cooled down at night. So that's very typical. The dew point slowly rose throughout the day. So there's more water coming into the air, right? The weather pattern, for whatever reason, is bringing more humid air into the region. The dew point is crawling up.
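The dew point definition can be computed directly. Here is a sketch assuming the constant-delta-H Clausius-Clapeyron model the lecture refers to, anchored at water's normal boiling point; the parameter values are my assumptions for illustration:

```python
import math

R = 8.314         # J/(mol K), gas constant
DH_VAP = 40.7e3   # J/mol, enthalpy of vaporization of water, assumed constant
T0, P0 = 373.15, 760.0  # reference point: boiling at 1 atm (K, torr)

def p_sat(T):
    """Saturation vapor pressure (torr), integrated Clausius-Clapeyron."""
    return P0 * math.exp(-DH_VAP / R * (1.0 / T - 1.0 / T0))

def dew_point(T, rh_percent):
    """Temperature at which the current water vapor pressure would saturate."""
    p_h2o = rh_percent / 100.0 * p_sat(T)       # actual vapor pressure
    # Invert p_sat: 1/T_dew = 1/T0 - (R/DH_VAP) * ln(p_h2o/P0)
    return 1.0 / (1.0 / T0 - R / DH_VAP * math.log(p_h2o / P0))
```

At 100% relative humidity the function returns the temperature itself-- the raining case above-- and at lower humidity it returns a temperature below the current one, which is where dew forms overnight.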
All right, so the next thing I did is I plotted two things. I plotted the saturation vapor pressure as a function of temperature using the expression which we derived 10 minutes ago. And then I plotted the actual vapor pressure. OK, how did I do that, somebody? The orange curve, please-- how did I find the saturation vapor pressure as a function of time throughout the day? How did I do that? AUDIENCE: You used your model from before and used the temperature at that point. PROFESSOR: Good. Yeah, let me back up. Exactly right. So I developed this model. It fits the data pretty well. I just plug in the temperature. I convert from Fahrenheit to Kelvin, but I plug in the temperature. And I get this number. So you see it varies monotonically with temperature. That's good. Temperature goes up, vapor pressure goes up. I'm plotting in torr, by the way. It's convenient to plot in units that are like 1, 5, 10, 20. Does anyone know how many torr are in an atmosphere? Anybody know? AUDIENCE: 7.5? PROFESSOR: 760. So these are important. You should know such things. So there are 760 torr in an atmosphere. A torr corresponds to a millimeter of mercury. These are the kind of scientific literacy things. I didn't want to plot in pascals. I didn't want to be orders of magnitude away from 1. Great, so now we understand how I got the orange curve. How did I get this blue curve? That's the actual vapor pressure of water throughout the day. Well, I've got the saturation data, and I've got the dew point. I could have plotted relative humidity-- I do that next. The way I get the blue data is I take the dew point temperature and I plug that into my model, right? That's the meaning of the dew point. The dew point tells you the water vapor pressure in terms of the temperature at which that pressure would be saturated. So I plug the dew point into this model to get the actual water vapor pressure. All right, last curve: relative humidity.
Somebody please volunteer and tell me, how do I calculate this relative humidity? Somebody who hasn't volunteered yet today, please. AUDIENCE: Don't you just divide the two lines that we just found? So we would be dividing, at each point, the blue line by the red line and then-- PROFESSOR: Thank you. That's exactly right. It's the fraction relative to saturation, right? So it's the actual vapor pressure divided by the saturation vapor pressure. Thank you. So you see, it's got the kind of-- there's a little bit of wet air coming into the region. Dew point starts creeping up. But then the day starts heating up, right? And the relative humidity falls. And then, at night, the relative humidity climbs. This is not a day where we're going to form dew, right? This is almost 11:45 at night, and the relative humidity is still just at 50%. But on summer days, you'll see this climb at night. Really, really come up towards 100. Then we're going to form dew. Or frost if it's winter. Excellent. So I really want you to become proficient with these sorts of data manipulations. And now you understand the weather a little more. OK, so let me see what comes next. That was it for the weather and meteorology. The next case study we're going to do is in a totally different topic. So I want to pause for two minutes and gather any questions that you have on the concepts of dew point and relative humidity. These concepts of vapor which is below saturation are really important for materials processing and pretty much everywhere else. So I'm using this very accessible, commonplace example of the weather. But this sort of analysis is certainly not limited to meteorology. AUDIENCE: I wanted to ask about the line we saw before, which wasn't exactly linear. I think it was the log pressure versus 1 over T plot. So which assumption is it-- that Delta H is constant, or that the gas is ideal?
Which one of them is the most effective to fix to get a better fit than our initial assumptions give? PROFESSOR: Yeah, it's a good question. So first of all, if you look up expressions for the vapor pressure of water, you'll find equations that look like this with correction terms. So you'll find equations that look like this with quadratic terms or logarithmic terms. They're just curve fitting. There's not really thermodynamics in there, but they're curve fitting, and this model does not exactly fit the data. So that's just an empirical thing. So I encourage you to just go Google for expressions and you'll find different expressions. What's underlying there? What's the reason for that? The assumption of constant Delta H is the first to break down. So the assumption of temperature-independent enthalpy of vaporization is pretty good in a narrow temperature window. But it's not good from 0 to 100 C. It's not good over that whole range. And so, if you were to analyze more carefully and come up with an expression for the temperature dependence of Delta H, you would get a more accurate model. And it would be more mathematically complicated than this one. Thank you. All right, so now it's 10:28. I want to move on to particle physics. So the example we're going to look at, and we're going to work a little bit, is cloud chambers. So if we were in a classroom, I'd ask how many of you have seen a cloud chamber, or know what a cloud chamber is. It's a little awkward remotely, so I'll just give you a two-minute introduction to cloud chambers. Cloud chambers are-- let's see, they're an expansion apparatus for making visible the tracks of ionizing particles in gases. So this is Charles Wilson, Cavendish Laboratory in Cambridge, Nobel Prize for this work in 1927. And here's what it is. Here's his drawing. You have A is the chamber here. And A is connected via this valve, B, to a vacuum space, C.
And C is evacuated by a pump, so there's a pump and gauge. And there's a voltage across that space. And these are magnets. So there's magnetic field lines. So this space here has electric field lines and has magnetic field lines. And the space is filled with wet air, humid air. And at time 0, what they do is they open this valve, and they cause the air to rapidly expand into this vacuum space. So that's what you do. And here's a photo of it. So here's the volume here. It's glass so you can see into it. You can see there's this volume here. Here's the vacuum space. Here's the vacuum pump connections. And there's a valve here, and a valve actuator here, the stem-- the valve stem, I guess, they just pull it manually. And what does this do? It creates a situation where the water is supersaturated. It goes from relative humidity less than 100% to relative humidity greater than 100%. It's supercooling, right? That's like the instant hot pack. So you have a supercooled situation with greater than 100% relative humidity. Or in other words, the dew point's higher than the actual temperature. And what happens? Condensation will nucleate around any imperfection. In particular, it will nucleate around ionized molecules created by ionizing radiation. So as radiation that ionizes air passes through the chamber, it leaves a path of cloud. It leaves a path of condensed water vapor. That's a cloud chamber. From the original paper, he put in a piece of radioactive material. And, of course, this thing is spewing ionizing radiation, nucleating cloud everywhere it goes. And you get this kind of starburst pattern. Here's another Nobel Prize. This is the-- well, it was Dirac who originally hypothesized the existence of positrons, and Carl Anderson at Caltech verified that. And he got the Nobel Prize for it. And so, this is the first published recorded positron track in a cloud chamber. And we won't go into how they figured out it's a positron.
The point is that you have a piece of ionizing radiation that's normally invisible, and it's made visible because it leaves behind clouds. This is kind of a beautiful concept. And here's a very long-time-exposure photograph of a cloud chamber. All sorts of radiation here. They look pretty beautiful. And I have a couple examples here of these things actually running. So here's first an example from Harvard. And I don't know whether you'll be able to hear the audio on this. And if you can't, that's perfectly fine. That's dry ice. So there's a bed of dry ice. And then what they're going to do is put the box over it. And here, they're not going to use water vapor. It's going to be a cloud of ethanol. So they're going to saturate the chamber with ethanol. And I think what's happening now is the temperature is slowly dropping, and they're just projecting light through the box. And now, the ethanol is condensing spontaneously in the box. What are those zinging traces? What's going on in there? What's creating those lines of cloud? Those are cosmic rays zinging through the box. Here's another example. They're slowly introducing a rod with 2% thorium, which is radioactive. That's giving off ionizing radiation, and the cloud is nucleating around that. That's really cool. Here's, I would say, a slightly prettier example from a company that sells these as demo kits. And we'll just look at this for a little while because it's really pretty. And so, here, they have a little bit of a cleaner system. I don't know whether this is water, or ethanol, or what have you. But they have this thing operating continuously. I don't know how they rig that up. But they have it operating continuously. Not one at a time. And all these lines are just the cosmic rays that we're constantly inundated with. Maybe there's some other radiation in there as well. I'm not sure. And normally this is invisible, right?
But it's visible because we've created a situation of greater than 100% relative humidity. Isn't that cool? It's really cool. So I'll take questions for a minute. And with the remaining time, we're going to set up and solve this problem. And I'll explain what I mean. So this is 110-year-old technology. Doesn't mean the concepts aren't valid today, right? Now it's mainly used for science museums. Has anyone ever seen one of these? There's one at the Exploratorium in San Francisco. I haven't been to see it, but I've seen a video of the one they have set up there. There's one in the physics teaching lab here at MIT. Has anyone ever played with one of these? All right, well, anyway, it's cool stuff. When there's a magnet on, you know whether the particle is charged or not by whether it bends, because charged particles curve in magnetic fields. Anyway, let me stop this mesmerizing video and move back to the board. All right, so here's the cloud chamber problem. All right, so we have a cloud chamber. And let's just make it specific: initially at 298 Kelvin and 1 atmosphere, filled with air with a dew point of, let's say, 288 Kelvin. Let's make it nice and round. So this is 77 degrees Fahrenheit, and this is-- let me see-- 59 degrees Fahrenheit. So these are typical lab conditions you might find in Boston. So it's 77 degrees in the lab. The humid air has a dew point of 59 degrees. And we're going to expand-- it expands quickly. In your thermo class, you're supposed to read that as adiabatically. It expands quickly, and we're going to parameterize the expansion by Delta V over V initial. This is going to be the control parameter in our analysis. So we're going to expand the volume by Delta V over V. And here's the problem: find the Delta V over V initial needed to achieve saturation. I used to give something like this as a homework problem. But I think it's more fun to work through in lecture. So we have a cylinder with a piston. We have humid air.
We're going to withdraw the piston, and we're going to have air with condensable water. All right, so humid air to air with condensable water vapor. That's the problem. And I'm going to simply step through how you solve this because it's pretty complicated. And if I don't get all the way through, we'll post some notes. So the first thing is we're going to write an expression for the water vapor pressure as a function of Delta V over V initial. The initial vapor pressure P H2O initial equals the saturation vapor pressure evaluated at the dew point. And I just plug this in. It's 1,863 pascals. That's using concepts which we learned earlier in the lecture. The system is initially at P total initial equals 1 atmosphere equals 101,325 pascals. So here's a concept which we're going to learn on Friday. The water vapor pressure equals the total pressure times the mole fraction of water. This is Dalton's law of partial pressures. And we're going to dive into that on Friday, Dalton's law. OK, we know for an adiabatic process-- for an adiabatic process, P final equals P initial times volume final over volume initial to the minus gamma. We know that. And so, I want to parameterize in terms of 1 plus Delta V over volume initial; that's my control parameter. That's taking results from a couple of lectures ago. And then, I apply Dalton's law. And what I get is the water vapor pressure equals the mole fraction of water times the initial total pressure times 1 plus Delta V over volume initial to the minus gamma. And by the way, the mole fraction of water, I solved for it. It's 0.0184. All right, we introduced this mole fraction concept. We then used our concept of dew point. We used our expressions for the adiabatic process. And there we go. OK, this is an expression for water vapor pressure as a function of that expansion. Two, write an expression for temperature, right? It's going to cool down as we expand.
All right, well, again, for an adiabatic process, temperature initial times volume initial to the gamma minus 1 equals temperature final times volume final to the gamma minus 1. In other words, temperature final equals temperature initial times volume final over volume initial to the minus gamma plus 1. And I like to parameterize in terms of 1 plus Delta V over V initial to the minus gamma plus 1. OK, good. So now we know how the temperature varies. We know how the pressure varies. And the final thing we do is we solve for saturation. Solve for saturation. This is the thing we're looking for: the water vapor pressure is saturated. We know that P H2O equals x H2O times P total initial times 1 plus Delta V over V initial to the minus gamma. OK, so that's that. And we know that P saturation equals C e to the minus B over T. That's our model. And T equals T initial times 1 plus Delta V over V initial to the minus gamma plus 1. So that's the temperature. That's the saturation pressure. So this is the thing we need to solve. This is parametric in Delta V over V initial. See, it's parameterized by what we chose as our control parameter. So we can solve this numerically, or graphically. We do need a value for gamma. And I would recommend using the value for the diatomic ideal gas, right? Why? Air is 80% N2, 20% O2, plus small amounts of other things. And N2 and O2 are both diatomic molecules. All right, CP over CV is actually in the range of 1.4 to 1.7 for air, whereas it's 1.4 for the diatomic gas. So we're not going to worry about this approximation. We're just going to take the value for the ideal gas. We can adjust it later. This is kind of its own interesting meteorology problem. But we're just going to take the value for the ideal gas. Now I'm done writing out equations. I want to show you the solution. But I ran through that rather quickly to make sure we'd have time for discussion. So I'm going to pause now and ask for questions on what on Earth just happened.
AUDIENCE: This is a pretty broad question, but without going back through the math again, could you talk about what the steps are? PROFESSOR: Right, so let's start with this. You need to recognize that there's a control parameter here. The control parameter is the thing that you're physically doing, which is expanding the volume. This is your independent parameter. Everything needs to be a function of this, or else you kind of end up lost. So that's the first thing to recognize. The next thing to recognize is, what on Earth are you trying to solve for? This is the condition you're trying to solve for. All right, you're solving for the condition of saturation. For this pressure and beyond, you can start forming clouds. That's when water is saturated. So recognizing that we're controlling expansion, and this is the condition we're looking for, you recognize that you have to get expressions for the left-hand side and the right-hand side in terms of the control parameter. So we took it one at a time. The pressure of water vapor was this expression. We got that by thinking about Dalton's law of partial pressures and the adiabatic expansion process. OK, so that's the pressure of water. What about its saturation vapor pressure? Its saturation vapor pressure varies with temperature. We know these parameters for water. We solved for them 40 minutes ago. They're also widely available. All right, so that means that in order to find P sat, we need temperature. Well, that also is varying as we expand. So now we need temperature parameterized in terms of Delta V over V. So you see, Delta V over V is controlling temperature, which controls P sat. Delta V over V is controlling the pressure. And so, those connections need to be your starting point before you really dive in to do the solution. Let me move on to show some slides and we'll come back, because I hope the slides illuminate that a little more. Let's see the answer. So this is what I did.
I first plotted the saturation vapor pressure of water. Let me grab a laser pointer here. All right, so this is the saturation vapor pressure of water as a function of temperature. The same data from Wikipedia that I showed previously. Although now I'm plotting it on a semi-log scale. The y-axis is log, but the x-axis is just temperature, not 1 over temperature. All right, these are my initial conditions. This is the starting temperature, standard temperature, 25 degrees C, and the initial water vapor pressure, right? If you remember, the dew point was 288. So if I scoot over here to 288, I'll find that this water vapor pressure would be saturated at 288. That's the meaning of that term, dew point. So this is my starting condition. Now I'm going to expand. I have Delta V over V. As I expand, the temperature drops. As I expand, the pressure drops. So as I expand, the temperature drops, and as I expand, the pressure drops. These are adiabats. These are adiabatic curves that you now are somewhat familiar with. So you see, I'm going to plot a parametric curve. I'm going to plot H2O partial pressure versus temperature on this curve, on this plot. There, I did it. This is the parametric curve of how the water vapor pressure and temperature vary as I expand the volume. So you see, as I expand, the pressure drops and the temperature drops. And at some point, it crosses saturation. At that point, I'm saturated. And you could solve this numerically. You could also just eyeball it if you're in a hurry. It crosses at around 9% volume expansion. So you need to quickly expand this volume by roughly 10% in order for the water pressure to become saturated, or in other words, to cross the dew point. I think that's the end of my graphical analysis. OK, again, I don't know if that helped, but others, please, chime in. We have plenty of time to discuss this or anything else. I think this is a fun problem to work because it leads to these beautiful images.
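As a numerical check on the eyeballed answer, here is a bisection sketch of the saturation condition. The constants are illustrative assumptions (B roughly Delta H_vap over R, and C calibrated so that P_sat at the 288 K dew point is 1,863 Pa), so the crossing lands near, though not exactly at, the roughly 10% read off the plot:

```python
import math

# Assumed illustrative constants for P_sat(T) = C * exp(-B / T)
B = 5300.0                          # K, ~ Delta H_vap / R for water
T_i, P_i = 298.0, 101325.0          # initial temperature (K) and total pressure (Pa)
P_H2O_i = 1863.0                    # Pa, saturation pressure at the 288 K dew point
C = P_H2O_i * math.exp(B / 288.0)   # calibrate C so that P_sat(288 K) = 1,863 Pa
x_w = P_H2O_i / P_i                 # mole fraction of water (Dalton's law), ~0.0184
gamma = 1.4                         # diatomic ideal gas

def excess(dv_over_v):
    """P_H2O minus P_sat after adiabatic expansion by dv_over_v; positive means supersaturated."""
    r = 1.0 + dv_over_v
    p_water = x_w * P_i * r ** (-gamma)   # partial pressure along the adiabat
    T = T_i * r ** (1.0 - gamma)          # temperature along the adiabat
    return p_water - C * math.exp(-B / T)

# Bisection: subsaturated at zero expansion, strongly supersaturated by 50%.
lo, hi = 0.0, 0.5
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if excess(mid) > 0.0:
        hi = mid
    else:
        lo = mid
print(f"saturation at Delta V / V ~ {hi:.3f}")  # on the order of 10%
```

The exact crossing shifts a bit with the fitted B and C, which is consistent with reading the graphical solution as "roughly 10%" rather than a precise number.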
It also is a bridge between the topic of binary phase diagrams and the topic of gas mixtures, which we start on Friday with Dalton's law of partial pressures. So it's kind of a bridge topic. Also, just something that every scientifically literate person should know. So, yeah, again, questions? Please. AUDIENCE: Why does quickly expanding imply an adiabatic process? PROFESSOR: All right, that's excellent. So if you do anything fast enough, then there's no time for heat to transfer. Yeah, it is a confusing point, because you haven't had heat transfer yet and you haven't had kinetics. And maybe if you're a Course 2, if you're a junior or senior student in Course 2, that's sort of obvious to you. But it's not necessarily obvious to you at this point. So if you do something very quickly, it can happen adiabatically, or approximate an adiabatic process, because heat transfer takes time. You can withdraw a piston much more quickly than you can equilibrate thermal energy between a volume and its walls. We've seen that a little bit with the free expansion example, or the free expansion problem, a couple of lectures ago. You can see this in many real-world engines, where things like compression strokes in internal combustion engines are approximated as adiabatic because they happen too quickly for that heat transfer. So in a problem that you might receive on a problem set or an exam, typically we'll clarify. And, of course, there are corrections. It's not, strictly speaking, adiabatic. But this model works well. AUDIENCE: Thank you. PROFESSOR: Yeah. If you like, you can also just say we thermally insulated the walls. Anyway, I don't want to go far down that path because it gets kind of nitty gritty.
It is a helpful thing to know that in many textbooks and problems, maybe not that you'll receive from me directly, but maybe problems you'll find in other books or in other classes, when something is said to happen very quickly, that is kind of equivalent to saying adiabatically, or quasi-adiabatically. And you'll find lots of problems discussed in the scientific literature where they discuss the adiabatic solution of that, the adiabatic process of this. It's very common in quantum computing. Very, very common. For those of you who go into work on quantum computing, this becomes second nature. Adiabatic versus fast. And that is, of course, because heat transfer is the bane of qubits. And so, you have to operate quickly enough for decoherence not to take effect. So anyway, it's an important concept. Thank you for asking.
MIT 3.020 Thermodynamics of Materials, Spring 2021
Lecture 21: Phase Coexistence and Separation
RAFAEL JARAMILLO: So let's talk about phase coexistence. Phase coexistence and the common tangent construction. Right, so what's the situation here? We have two phases. We'll just label them 1 and 2. And they can freely exchange components A and B. So two phases, two components. And we know that phase coexistence at equilibrium requires that the Gibbs free energy is stationary. So here's our little picture. We have phase one, phase two. And let's say A-- it can be red squares. And B can be green circles. And we know that this boundary is open. If this boundary is open, then what? Components can freely jump between phases. That's something that can happen-- it's an unconstrained, internal process. We're not going to stop that from happening. So given that these matter exchanges can happen, how do we determine the equilibrium condition? And I say, these fluctuations must leave G stationary at equilibrium. OK, so this is now zooming in at the molecular level. And I think people understand that if you have a glass of ice water, there's a boundary between the ice and the water, the liquid water and the solid water. And ice and water molecules will be jumping back and forth between the two phases at that boundary. On a molecular scale, and on a molecular timescale. So what does it mean for that process to leave G stationary? That would be our equilibrium condition. So we can get into the math with that. We say, the change of G-- now we're going to be at fixed temperature and pressure. So we're not going to worry about that. And we know the total differential of Gibbs free energy from weeks past. It's a sum over components and phases: chemical potential of each component in each phase times the change in mole number of each component in each phase. So k going A, B-- these are components. i going one, two-- these are phases. So let's write this out. This goes mu A 1 d mu-- d n, sorry, let me start over.
Mu A 1 dn A 1 plus mu B 1 dn B 1 plus mu A 2 dn A 2 plus mu B 2 dn B 2. OK, so we've written that out. Now, we're going to use conservation of mass. And conservation of mass tells me that I can have molecules or atoms jumping back and forth, but I can't have them just disappearing. Or just appearing. I've got to have mass conserved. So conservation of mass tells me that dn A 1 equals minus dn A 2. And dn B 1 equals minus dn B 2. So two equations, and so we're going to reduce our number of independent variables from four to two. So I'm going to do that. And I get this: mu A 1 minus mu A 2, times dn A 1, plus mu B 1 minus mu B 2, times dn B 1. So I had four independent variables, I applied two constraints, and I end up with two independent variables. This has a familiar form for us-- these here, these here, are the unconstrained, independent variables. And what do we call the multipliers in front of the independent variables in this differential form? What do we call this thing and this thing? This is a differential form for a dependent variable, and how it varies with two independent variables. And we have-- STUDENT: Is it the coefficient? RAFAEL JARAMILLO: They're coefficients, thank you. I didn't notice who that was, but thank you. Right, coefficients. Good. So this is a form that we are familiar with. And we know, from when we've solved equilibrium conditions before in this class, that dG at constant T and P equals 0 requires the coefficients to be 0. That's our equilibrium condition. And in this case, what that means: the chemical potential of A is the same in both phases. And the chemical potential of B is the same in both phases. When that happens, phases can exchange mass freely without changing the overall Gibbs free energy. Or in other words, Gibbs free energy is stationary. So the math is the same. There's a lot of pattern recognition that kicks in, in this class. The math is the same formally, as we've seen before.
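Written compactly, the derivation just walked through is:

```latex
% Total differential at fixed T, P, after applying mass conservation
% dn_A^{(2)} = -\,dn_A^{(1)}, \quad dn_B^{(2)} = -\,dn_B^{(1)}:
dG\big|_{T,P} = \left(\mu_A^{(1)} - \mu_A^{(2)}\right) dn_A^{(1)}
             + \left(\mu_B^{(1)} - \mu_B^{(2)}\right) dn_B^{(1)} = 0
% Since dn_A^{(1)} and dn_B^{(1)} are independent, each coefficient must vanish:
\quad\Longrightarrow\quad
\mu_A^{(1)} = \mu_A^{(2)}, \qquad \mu_B^{(1)} = \mu_B^{(2)}
```

Setting each coefficient of an unconstrained variation to zero is the same pattern used for every equilibrium condition in this class; only the variables change.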
We're just exploring different cases of this. And so, right, the chemical potential is the same. So these particles, they can move back and forth. They can change their phase, but the chemical potential remains unchanged. So the overall Gibbs free energy will remain unchanged. Now, how do we analyze that? Right, in general, in general, each phase has its own solution model, and partial molar properties of mixing. So what do I mean by that? Mu A in phase one equals mu A in its reference state, plus delta mu A of mixing in phase one. Mu A in phase two equals mu A in its reference state, plus delta mu A of mixing in phase two. And I can write the same for the other component: mu B in phase one equals mu B naught plus the change in chemical potential of B due to mixing in phase one. Chemical potential of B in phase two equals the reference state plus the change in chemical potential of B due to mixing in phase two. Why did I write it in this way? I wrote it in this way because we see that these pure components-- pure component A, and pure component B-- those terms are the same in this equation as in this equation. Or they're the same in this equation as in this equation. It's the chemical potential of A in its pure state. It doesn't care what kind of mixture you're going to make out of it. This is like ingredients on the shelf-- if you've got sugar on the shelf, that's pure sugar; its thermodynamic properties don't care whether you're going to go make cupcakes or you're going to make syrup out of it. It's just pure sugar. So its reference state is unchanged; it doesn't depend on what you're about to mix it into. So in words, the reference states-- the reference states are fixed, and the mu k naughts don't depend on mixing. The properties of the ingredients don't depend on your recipe. So if we have mu A 1 equals mu A 2 as our equilibrium condition, this becomes delta mu A of mixing 1 equals delta mu A of mixing 2.
And similarly, from mu B 1 equals mu B 2, delta mu B of mixing 1 equals delta mu B of mixing 2. And these are our two-phase coexistence equilibrium conditions. Those are our equilibrium conditions. So now, we're going to use something which we introduced maybe a week or more ago, which is the graphical solution for the partial molar properties. And to remind you what that was, we had-- now, specific to the case of Gibbs, because that's pretty much the only way we're going to be using this-- we had here Gibbs free energy, and we had this concept of a solution model. So let me just draw a section of a solution model. I won't draw the whole thing, because I don't need the whole thing. And let's imagine some system composition here; that would be x 2-- that's x 2. The way that we found the partial molar Gibbs free energy of mixing was we drew the tangent to the solution model at that composition. And the intercepts were the partial molar properties of mixing. So this intercept here is mu 2 at x 2. And this intercept here is mu 1 at x 1. So I hope you do remember this-- if you don't, just rewind for a couple of lectures. So we had this graphical solution. Now, we introduced this for one solution model. So what we're going to do now is use this tangent construction for the case of two different phases. This becomes a common tangent construction. It's called-- or if you like, you just call it a condition-- for two-phase equilibrium. So how does that look? Well, let's draw one phase. There's part of a solution model for one phase. Let's draw another phase. There's part of a solution model for another phase. And now, I'm going to look for a common tangent. Let's say that I drew that accurately, and we have a tangent which is common to both curves. This is phase one, this is phase two. And now, using our notation from the previous slides, let's say this is the composition of component B in phase one. This is the composition of component B in phase two.
These intercepts are the chemical potentials of B and A. So what we've shown is a common tangent which is satisfied at compositions x B 1 and x B 2. Right, it's only satisfied at these compositions. If you look at these curves, the way I've drawn them, there's no other common tangent. It's only those two particular compositions. And for those two particular compositions, with the common tangent satisfied, that ensures that the chemical potentials are equal: mu A 1 equals mu A 2, and mu B 1 equals mu B 2. This is the equilibrium condition. So a common tangent on a free energy composition diagram is the same as the equilibrium condition. When you can draw the common tangent, then you can have two-phase equilibrium. And only at those compositions that satisfy the common tangent. If there's no common tangent you can draw, then there's no two-phase equilibrium possible. So we're knitting things together here across different lectures. I'm going to spend the rest of the class on how this works in the case of spinodal systems. So before I make it a little bit more specific to spinodal systems, are there any questions on the concept, or on the math of this, so far? OK. So let's move on, then. OK, so now, we're going to make this a little more specific. We're going to talk about an example of the common tangent construction. And that is spinodal systems. So something a little bit familiar to us. So here's the general view of a free energy composition diagram for a spinodal system. This is going to be delta G of mixing, and let's draw a solution model for a phase that exhibits spinodal decomposition. So let's draw something like-- that works. You've been playing with something like this in PSETs and in previous lectures. And so this has a common tangent. Let me draw it. OK, so it has a common tangent, let's say, here, and let's say, here. Just eyeballing this, obviously. And so let's note the composition of the system when it satisfies the common tangent.
Let's say we have x B, we'll call that phase one; x B, we'll call that phase two. The 0 of the free energy of mixing is there. And these intercepts are delta mu B of mixing, and delta mu A of mixing. And so what's happened in this system? What happens at equilibrium? The system has phase-separated. It has spontaneously unmixed into two different phases, with what? These two compositions. So you could say this is a composition rich in component A. So at that composition, we're A-rich. And here's a composition rich in component B. And we saw some examples of this on the slides from Monday's lecture. So there's the common tangent construction-- [INAUDIBLE] system. When the common tangent is possible, the free energy of the two-phase system is lower than that of the fully mixed one-phase system-- as you would expect for equilibrium. It's got the lowest free energy. I'm not going to prove this to you right now, but I'm going to show you graphically how it comes about. Here, again, is a free energy composition diagram. And let me say my solution model looks kind of like this-- [INAUDIBLE] here. I'd say it looks like that. And here is my common tangent, points of tangency. And my composition is here at those points of common tangency. And let's imagine an overall system composition here. Let's imagine an overall system composition there. It can be shown that, relative to a fully mixed system, the free energy can be decreased by exactly this amount by splitting into two different phases. So spontaneous unmixing lowers the free energy by exactly the amount that I drew here on the plot. So the single phase-- the Gibbs free energy of the single-phase system equals the Gibbs free energy of the components in their pure state, plus just the value of this curve evaluated at the overall composition-- that is, delta G mix evaluated at the overall x B.
And two phase-- the Gibbs free energy of the two-phase system is, again, the Gibbs free energy of the components unmixed, plus the Gibbs free energy of each of those two phases-- sorry. Right, so single phase: it's just the reference state plus the evaluation of this solution model at that composition. Two phases: it's the reference state, plus the evaluation of the solution model at these two compositions, weighted by the phase fractions. So this here is a solution model, solution model, solution model. And these are phase fractions. OK, so you're working some of these concepts on the problem set. All right, so now we're going to continue exploring the spinodal case. The common tangents-- they define the tie lines on a binary phase diagram. So again, for the example of a spinodal system-- here, I'll draw a phase diagram. So we have temperature, composition, and we have a region of spontaneous-- or I should say, a region of phase separation. And let's see, we have the highest temperature here, which is sometimes marked as star-- different notations there. I'll just say T star. And we have these tie lines-- these tie lines define the compositions that can coexist at equilibrium with each other. So in a system like this, mixing-- mixing is favored at high temp. In fact, it's fully miscible at high temp. At low temp, the system spontaneously unmixes. Spontaneously unmixes, and the tie lines that are drawn in the two-phase region connect compositions that coexist at equilibrium. And so this is a reminder, basically, of what tie lines are. How do the common tangents define the tie lines? They do it through the free energy composition diagrams. So the phase diagram emerges from the temperature dependence of free energy composition diagrams. So I'm going to draw some snapshots here of how that happens, and then we're going to spend a lot of time in the next couple of weeks on this.
So let's say, low temp, T1, let me draw a free energy composition diagram for this spinodal system at low temperature. At low temperature, T1, delta G of mixing is basically positive-- that's why we have-- let me draw it like this. This is an example. It could look like this. This system has a common tangent, right? So now, right below this, we're going to draw the phase diagram. There's going to be temperature. I'm going to drop down those compositions. This is x b. What I've just learned from this common tangent is that these two compositions can coexist at equilibrium at temperature T1. So those are two points on the phase diagram. And I know what's going to happen if I vary temperature. So I'm just going to draw those as two points on little line segments, and I'm going to connect them with a tie line. What about medium temp? What about medium temp? So as I raise the temperature, my free energy composition diagram will change. As I raise the temperature, mixing becomes more favored. So now, my 0 of delta G of mixing will be there. And I'll draw it maybe like this. I've got a common tangent, right? There and there. Let's draw the corresponding phase diagram. And this is x b. At T1, I had coexistence at-- let's say, here, and here, being a little bit rough with that. So we had a tie line here. Now, at T2, I've got coexistence. OK, let's say this is T2. Let's say my coexistence is here and here. All right, I'm calling this T2. And I've got a tie line there. And now, I'm only drawing discrete points in temperature, but you can imagine this now starts to define a family of tie lines. So I've shown you how the tie lines at T1 and T2 emerge, but you can imagine that as I sweep the temperature, I have a family of tie lines. And this starts to sweep out the two-phase region. And then, at high temp-- T3-- we see it again. At high temp, I know that my free energy of mixing will pretty much be strictly negative. So I'm going to draw the 0 way up there.
And I can say, my solution model might look like that. So no common tangents, no inflection points. Strictly positive curvature throughout. And so that section of my phase diagram will show full miscibility. Here's x b. Here's T1, here is T2. And let's say this up here is T3. I'm just going to sketch this now. We'll see if we have-- all right. So now, my two-phase region has closed. And I have finished drawing my spinodal phase diagram. So the way I did this here is I did a very, very slow motion animation for you. At each temperature, you have a free energy composition diagram. At each temperature, that free energy composition diagram determines for you whether or not there are any tie lines in the phase diagram. It tells you about the phase stability and equilibrium as a function of composition at that temperature. And so as you turn the temperature knob, you sweep through a series of free energy composition diagrams. And you gradually build up the phase diagram. So you can imagine a flipbook-style version of this. The textbook has some nice examples of this-- of these free energy composition diagrams giving rise to phase diagrams. These single temperature snapshots. And in the weeks ahead, we're also going to play with an online kind of animated version of this, which we've coded up for use in this class. OK, that's where I want to leave it for today. We're finishing about five minutes early, so we have plenty of time for questions. I just saw there was a chat-- does the common tangent construction help us to say anything about the kinetics of the process of segregation? No, it doesn't. This doesn't tell us anything about kinetics. So this tells us about the equilibrium; the common tangent construction tells us about the equilibrium configuration of the system. Now, we can infer a lot of things about kinetics from the phase diagrams, and we can also infer things about kinetics from the driving force for phase segregation.
It goes beyond the scope of the class, but kinetics is really interesting. And it starts from this point-- kinetics studies start from phase diagrams like this. And a tendency of a system to spontaneously unmix-- that is, a tendency for pattern formation. And what patterns actually form depends on the kinetics-- depends on diffusion coefficients, depends on temperature. And that's where this curriculum takes off in the fall.
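The closing of the two-phase region in the flipbook construction can also be checked analytically, again for the hypothetical symmetric regular-solution model from before (Ω is an assumed value, not course data). The curvature of ΔG_mix at x = 1/2 is −2Ω + 4RT, so the miscibility gap exists only below T* = Ω/(2R):

```python
R = 8.314  # gas constant, J/(mol K)

def curvature_at_half(omega, T):
    # d^2(dG_mix)/dx^2 for the regular-solution model is
    # -2*omega + R*T*(1/x + 1/(1-x)); evaluated at x = 1/2 this is -2*omega + 4*R*T
    return -2.0 * omega + 4.0 * R * T

omega = 15000.0             # hypothetical interaction parameter, J/mol
T_star = omega / (2.0 * R)  # temperature at which the two-phase region closes

# Below T_star the mid-composition curvature is negative (unstable, unmixes);
# above T_star it is positive everywhere, and the system is fully miscible.
```

This is the numerical version of the T3 snapshot: once the temperature passes T*, there are no inflection points, no common tangent, and no tie lines left to draw.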
MIT_3020_Thermodynamics_of_Materials_Spring_2021
Lecture_14_Reacting_Gas_Mixtures_at_Equilibrium.txt
[SQUEAKING] [RUSTLING] [CLICKING] PROFESSOR: So let's talk about reacting gas equilibrium. We started setting this up last time. And now we're going to derive the key equation. So we're going to equilibrate. It's really hard to say the word equilibrate while writing the word equilibrium. I somehow got that right. Gas phase reaction at equilibrium at fixed T and P. So we mixed gases, and we let them react. We're going to do this in general terms. Reaction A plus B goes to 2C. And of course, everything we're going to do is general. Doesn't have to be for a reaction of this form. OK. So we know that the condition of equilibrium at fixed temperature and pressure is exactly that the Gibbs energy is at an extremum. And so we care about its differential. So let's use what we know. OK. So here, we have the total differential of the Gibbs free energy for changes of temperature, changes of the pressure, and changes of mole number. And again, the independent variables for Gibbs are temperature, pressure, and mole number. You can see that right here in the differential form. And why do we choose that free energy? Why do we choose Gibbs? Why is this convenient? Because we can ignore 2/3 of the right-hand side of the equation. For fixed temperature and pressure, right, those are fixed. Fixed and stubborn and independent are, like, synonymous in thermo, right? If they're independent, that means they can be stubborn and not move. So those go to 0. And we care only about this term. So we're going to be spending the lecture analyzing that term. And so equilibrium is determined by the chem potentials mu i and the mole numbers. So that's going to be it. Our entropy and our volume are not going to be explicitly part of this. Get rid of that. All right. So at equilibrium, dG equals 0-- now, that's the equilibrium expression. So, all right. How many independent variables are there in this case? I'll remind you, we have a general reaction A plus B goes to 2C. That's a reaction.
Let me break that down a little bit. Somebody tell me, what are the components? Let's list the components. And those are the i labels, components i. What are the components in the system? AUDIENCE: A, B, and C. PROFESSOR: Components are A, B, and C. Right. So this label runs over A, B, and C. OK. So we have three mole numbers, n of A, n of B, and n of C. How many independent variables do we have? This is carried over from the previous lecture. AUDIENCE: That's five? PROFESSOR: Five. And how do you get five? AUDIENCE: Number of components, and then two independent variables of thermodynamics. PROFESSOR: So I see you're taking the numbers three and two, but you're adding them, and then you should be subtracting them. AUDIENCE: Oh. PROFESSOR: And the reason is the following. We have three components. We have three ni's, right? But reaction balance means there's only one degree of freedom. How do we know that? Because we have two constraints. What are they? The d n of A over nu of A equals the d n of B over nu of B-- remember this from last time?-- equals the d n of C over nu of C. There's two independent equations in here. You can count the equal signs. There are two equal signs. So we have three components, but two constraints, which means there's only one degree of freedom. And we introduced this reaction extent, d xi. You don't have to use xi to answer any problem you're going to find in this class, but it is a nice, general form. So we have-- for one reaction, univariant, it can be called. We have only one degree of freedom. All right. So now we're going to solve the reaction equilibrium condition by applying those constraints. So apply constraints. And here, I'm going to arbitrarily choose the product mole number as the independent variable. So let's see. We have d n of A equals nu A over nu of C times d n of C, equals minus 1/2 d n of C. d n of B equals nu B over nu C times d n of C, equals minus 1/2 d n of C. You got this? This sort of makes sense? A plus B going to C.
For every mole of C that's created-- sorry. Two C's. My apologies. For every 2 moles of C that are created, there's 1 mole of A and 1 mole of B that are destroyed. So it works out, right? For every 2 moles of C created, you destroy one each mole of A and B. So what I'm going to do is I'm going to plug these into my expression for dG. dG equals, then, repeating from the previous board, mu of i d n of i, and now I'm going to eliminate the d n of A and the d n of B and write everything in terms of the d n of C. And you get the following. Mu of A, nu of A over nu C, plus mu of B, nu of B over nu of C, plus mu of C, all times d n of C. And here is my equilibrium condition. OK. Now, this should have a certain similarity to what we did with the case of unary systems, when we derived the equilibrium condition for two phase coexistence. We had a chemical potential-- we had an entropy, I think, in that case, on the left-hand side. And we needed it to be at an extremum or optimized, that is, its differential being 0. So we have one independent variable here, the d n of C. Write down some points here. dnC is unconstrained. It's unconstrained. Can't-- we can't assume that this is 0. Why is that? Why can't we assume that's 0? We call this an unconstrained internal variable. Why is it unconstrained? AUDIENCE: Because even in equilibrium, the reaction doesn't stop. PROFESSOR: Right. Thanks. So we can constrain pressure. We have pressure gauges and pressure regulators. Those exist. You can buy them. We can constrain temperature. We have thermostats. Those exist. You can buy them. And you can think of different experimental setups to constrain the temperature and constrain the pressure. However, I want you to think of a box of gas. And you have molecules of A, B, and C. And they're zooming around in here. And this reaction can take place. You can't stop it. So even at equilibrium, this is going to fluctuate. The reaction is going to run a little to the right, a little to the left.
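The constraint bookkeeping above--one extent variable with dn_i = nu_i dxi--can be sketched as a toy illustration for A + B → 2C, with products carrying positive coefficients and reactants negative ones:

```python
def dn_from_extent(nu, dxi):
    # dn_i = nu_i * d(xi): a single reaction extent determines
    # every mole-number change at once
    return {species: coeff * dxi for species, coeff in nu.items()}

nu = {"A": -1, "B": -1, "C": +2}  # stoichiometric coefficients for A + B -> 2C
dn = dn_from_extent(nu, 0.1)      # advance the reaction by d(xi) = 0.1 mol

# The two constraints from the board follow automatically:
# dn_A / nu_A == dn_B / nu_B == dn_C / nu_C, i.e. dn_A = -1/2 dn_C
```

Three mole numbers, two constraints, one degree of freedom: the dictionary of changes is generated from the single number dxi, which is the "univariant" statement in code form.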
And if you're away from equilibrium, the reaction will definitely either run to the right or to the left, right, depending. That's the meaning of unconstrained. You can't fix it. You can't go in there with your atomic tweezers and stop the atoms from reacting. That's an important concept. So if we have-- and this is rather like the case of a unary, two-phase system, when we had a phase boundary and we couldn't stop the phase boundary from fluctuating. We couldn't stop a little bit of solid from evaporating, for instance, or a little bit of gas from condensing. That would be an equivalent example. The math worked out in a very similar way. So if we need this expression to be zero, and this, in general, is nonzero, how do we enforce equilibrium? AUDIENCE: So we have to assume that the coefficient is zero, then, since-- PROFESSOR: The coefficient. AUDIENCE: Yeah, zero? PROFESSOR: Thank you. I didn't see who said that, but that's exactly right. If you have a function and you're optimizing it-- here, I drew a maximum. Same thing for a minimum. How do you enforce the condition of being optimized? You make the slope 0. That's what we're doing here. Coefficient. The coefficient dG dnC must be 0 at equilibrium. That becomes the equilibrium condition. Right. So there's some calculation here. There's some, in this case, one-dimensional function optimization in here. That's calculus. And then there's also some science, right? You have to recognize that this reaction is going to happen. You're not going to stop it. And so if dnC can't be enforced to be 0, then you've got to call this thing equal to 0 the equilibrium condition. So there's a lot in here, a lot to unpack. All right. All right. Now, fortunately, we have expressions for those chemical potentials. So we're going to substitute ideal gas mixture expressions. This is from last time. For the mu i's. We have expressions for the mu i's from last time. So I'm going to substitute those in. And I get the following. dG equals mu A0.
That is, component A in its reference state, pure at 1 atmosphere, plus RT natural log partial pressure of A over standard pressure, this quantity times nu A over nu C, plus mu B0 plus RT natural log partial pressure B over standard pressure, times nu B over nu C, plus mu C0, plus RT natural log pressure of C over standard pressure. This whole thing times d n of C equals 0. And as a reminder, mu i0 equals the reference or standard chem potential of component i, at pressure P0 and temp T. And for P0, we almost always choose one atmosphere. So we substitute. Now we're going to collect terms. Collect terms and multiply through by nu of C. I'm going to get the following. Nu of A, mu A0, plus nu of B, mu B0, plus nu of C, mu C0-- I group these-- plus RT natural log P of A over P0 to the power of nu A, P of B over P0 to the power of nu B, P of C over P0 to the power of nu C, equals 0. Just manipulating the thing which we wrote down previously. We're going to continue to simplify this. We're going to define something. Define delta G0 equals this thing in parentheses, equals the sum over i of the nu of i times the reference potentials. Why does that make sense to define it that way? In this case, it's the following. 2 mu of C0 minus mu of A0 minus mu of B0, recalling, again, what our reaction is. It's A plus B going to 2C. So what is this thing? It's the free energy change of the reaction when all components are in their standard state. So for instance, if we assume that our standard pressure is 1 atmosphere, if you take one atmosphere of A and you react with one atmosphere of B and you produce C, where C is also one atmosphere, the change in Gibbs free energy is this delta G0. And that doesn't have to ever correspond to a physically practical reaction. You might not ever realize that particular reaction in the lab. But thermo is all about state functions and reference data. And we're picking convenient reference data.
And you can imagine that reaction, even if you don't carry it out. That is, 1 atmosphere of A, 1 atmosphere of B, and 1 atmosphere of C at equilibrium. So-- AUDIENCE: And-- PROFESSOR: --you will find this delta G-- one second, sorry. You will find these delta G0's in databases. That's the sort of thing you're going to find. If you're trying to engineer something and you need to design a reactor, you're going to be looking at these delta G0's and going forward from there. OK. AUDIENCE: Yeah, sorry. I just had a quick question. Are all those partial pressures in the natural log? PROFESSOR: Yes, they're all in the natural log. That's a natural log. AUDIENCE: Yep, OK. PROFESSOR: Let me make that really explicit. Does that help? AUDIENCE: Yeah, thanks. PROFESSOR: And that just comes from the property of natural logs, you know. Natural log x plus natural log y equals natural log xy. So we just apply that sum-to-product rule multiple times. OK. Let me continue simplifying. We're simplifying. I was just at Walden Pond with my kids over the weekend. I think that was Thoreau who said, above all, simplify. It's like he was describing what someone should do in Mathematica. So we're going to define this thing called the equilibrium constant, which some of you may have encountered in other classes. But here, we've derived it. K of P equals the product operator over components of P of i, the partial pressure, normalized to standard pressure, to the power of nu of i. So that is the equilibrium constant in terms of the pressure, so that's K sub P. So that's kind of a big deal because it shows up in-- all over the place, including in some introductory chemistry classes. All right. We're going to further simplify. Simplify by assuming that the total pressure is the reference pressure, so that P of i over P0 equals x sub i-- making that even simpler, again, using Dalton's rule.
And if we make that simplification, we get a cleaner expression for the equilibrium constant, which is the one that we're going to use most often in this class. And for this reaction, again, for the reaction of A plus B going to 2C, this equilibrium constant is the mole fraction of C squared over the mole fraction of A times the mole fraction of B. Yeah. So one more simplification, and then I'm going to stop and ask for questions. We're going to collect terms to write a concise expression for the reaction equilibrium constant. And this is the sort of expression that you may have been given in a chemistry class or you might find in Wikipedia: Kp equals e to the minus delta G0 over RT. OK. So this is our final answer. This is what we're going to-- this is what's going to be useful for you after this class, when you're analyzing reacting gases. And so now you've seen it derived, and you know where it comes from. I want to pause now. I went through that basically without interruption, or hardly without interruption. I want to pause and take questions. At Walden Pond I saw a great demonstration of thermo. There was one person out on the ice. This is on Saturday. I mean, it was warm. And one person was out on the ice. And everyone who wasn't on the ice was watching this person. And the ice was-- it was sunny. And they fell through. So it was thermo in action. I saw someone fall through the ice. They were fine. I think people were laughing at them. It wasn't very deep where they were. And they struggled back onto the shore with their friends and I'm sure were laughed at for the rest of the day. So there's thermodynamics in action at Walden Pond. I don't have a picture to share with you, though. All right. Questions on how we got this? Because we're going to be using this a lot. AUDIENCE: Yeah, I have a question about the mole fractions. Are those-- like, I guess, where did the mole fractions come from? Are they the mole fractions that the components find themselves in at equilibrium?
PROFESSOR: Right, so first of all, let me answer your question in two parts. First of all, this is Dalton's rule, right? It's the definition of partial pressures, almost. So that's Dalton's rule. What mole fractions are we talking about here, right? That's a great point. So we're not going to do this a whole bunch in this class, but you're set up to do this. Let's imagine nonequilibrium and equilibrium. Nonequilibrium-- there's something called the reaction quotient, sometimes written as Q. It has the exact same form as the equilibrium constant. But in general, in general, it's not equal to its equilibrium value. So let's see what that means. Say that Q is greater than K sub p. So the numerator is too large or the denominator is too small relative to equilibrium, right? So if the numerator is too large or the denominator is too small relative to equilibrium, what direction will the reaction run? And we can answer the question thinking specifically about this case. AUDIENCE: Would it go to the left, towards the reactants? PROFESSOR: Right. Let's look at this. If this thing is too big relative to equilibrium, that means there's too much C or there's not enough A times B. Sorry, I was really messy with that x sub A. So what that's telling you is that you're out of equilibrium, and you're out of equilibrium to the right. And the reaction will run spontaneously to the left. And the math works out. You will find in this situation that running to the left decreases the Gibbs free energy. In other words, dG dn of C-- what's our situation here? Let me try to draw that. This is a situation where, Gibbs free energy versus n of C, you want to run to the left to decrease Gibbs. So you have a curve like that. You run to the left to decrease Gibbs. The math will work out. I know I'm being qualitative here. dG dn of C is greater than 0-- positive slope.
And so it's telling you the reaction will run to the left. And the difference between the reaction quotient and the equilibrium constant-- that can be called the reaction affinity, or driving force. And there's all sorts of analyses of this. And it couples to rate-- and there's a lot of engineering in here, right? You want to figure out-- you want to design a reactor, or how far away from equilibrium should you design it to be, stuff like that. On the contrary, right, let's say Q is less than K of p. I mean, it's going to be the opposite. This is a case where there's not enough numerator. There's too much denominator. The quotient is small, smaller than equilibrium. So this is a case where you're going to decrease Gibbs energy by going to the right. dG d n of C less than 0. And the reaction will drive to the right. At equilibrium, n of C, G-- they're right there, where the Gibbs free energy is minimized. So you can write down this expression out of equilibrium, right? And you might-- and compare this to this expression, which you get from the thermodynamics of the reaction, and determine whether your reaction will run spontaneously to the right or to the left, or is it actually at equilibrium? Did that help answer your question? I'm not sure. AUDIENCE: Yeah. That makes sense. Thank you. PROFESSOR: So basically, you can evaluate this instantaneously. Whether or not it's equilibrium depends, but you can always write that thing down. And again, we're not doing really-- we don't really have the time in this class to talk about a bunch of out-of-equilibrium reactions and talk about rates and affinities and stuff. But that will come later in course 3 and other courses. And so you're well set up. Because if we understand equilibrium, we can start analyzing out-of-equilibrium situations. And nonequilibrium thermo-- it exists. It's a fascinating topic, I think. And I teach it as part of the advanced thermo class, but we have no time for it here. Other questions?
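The quotient-versus-constant reasoning can be sketched numerically. For A + B → 2C the stoichiometric coefficients sum to zero, so the total mole number cancels out of the mole-fraction form and Q can be computed directly from mole numbers. All the numbers below are hypothetical:

```python
def quotient(n):
    # Reaction quotient for A + B -> 2C; since nu_A + nu_B + nu_C = 0,
    # the mole-fraction form x_C^2 / (x_A * x_B) reduces to n_C^2 / (n_A * n_B)
    return n["C"] ** 2 / (n["A"] * n["B"])

def direction(Q, K, tol=1e-9):
    # Q > K: too much product, reaction runs left; Q < K: runs right
    if abs(Q - K) <= tol * K:
        return "equilibrium"
    return "left" if Q > K else "right"

n = {"A": 1.0, "B": 1.0, "C": 3.0}  # a hypothetical equilibrium state
K = quotient(n)                     # define K so this state is equilibrated

n["B"] += 0.5                       # Le Chatelier: inject a little extra B
Q = quotient(n)                     # denominator grew, so Q drops below K,
                                    # and the reaction runs right, consuming B
```

This is the Le Chatelier shift in miniature: perturbing one mole number pushes Q off K, and the sign of Q − K tells you which way the reaction runs to restore equilibrium.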
I'd like to make sure that people-- this is a point from last lecture, but I'd like to make sure that people understand how we get this numerator and denominator thing out of this expression. Remember, the reaction coefficients are positive for products and negative for reactants. So products end up with a positive power, and reactants end up in the basement here with a negative power. So often, if you've seen this before, let's say in introductory chemistry, you were probably told just to put products on the top and reactants on the bottom. Well, this is where that comes from. So it's good to know it comes from somewhere. Maybe everyone understood that. I don't know. But worth mentioning. So on the p-set, you're asked to evaluate the equilibrium condition for a couple of reactions. And that often boils down to finding the delta G's. So I want to make this really explicit here before moving on. Delta G's-- delta G0 equals G0 products minus G0 reactants. So that's G0 products at some temperature, let's say T2, equals G0 products at T1 plus some integrals. We've done those integrals already. We know how to do those integrals. For instance, you integrate entropy over temperature, or something like that. G0 reactants at some temperature T2 equals G0 reactants at some temperature T1 plus integrals. So we're seeing that delta G0 at some T2 equals delta G0 at some T1 plus integrals. And these depend on the heat capacity differences. So I'm doing this at a very high level and quickly here because I want to highlight the similarity to the case of unary transformations. We had-- in unary systems, we had transformations between phases. And we can analyze things like the change of Gibbs free energy or any other state function, change of h or change of s, across that transformation. And we could calculate the temperature dependence of that by integrating things that depended on heat capacity differences. It's all formally the same right now.
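The integral bookkeeping just described can be made concrete. Assuming for simplicity a temperature-independent ΔCp (all values below are hypothetical, not tabulated course data), the shifts are ΔH0(T2) = ΔH0(T1) + ΔCp(T2 − T1) and ΔS0(T2) = ΔS0(T1) + ΔCp ln(T2/T1), and then ΔG0 = ΔH0 − TΔS0:

```python
import math

def dG0_at(T2, dH1, dS1, T1, dCp=0.0):
    # Shift the standard reaction enthalpy and entropy from T1 to T2,
    # assuming dCp is constant over the interval, then form dG0 = dH0 - T*dS0.
    dH2 = dH1 + dCp * (T2 - T1)
    dS2 = dS1 + dCp * math.log(T2 / T1)
    return dH2 - T2 * dS2

# Hypothetical reference data at 298 K (J/mol and J/(mol K)):
dH298, dS298 = -92000.0, -199.0
dG_500 = dG0_at(500.0, dH298, dS298, 298.0, dCp=-20.0)
```

With dCp = 0 this collapses to ΔG0(T2) = ΔH0(T1) − T2 ΔS0(T1), which is the quick estimate you can make straight from a standard-state table before doing any integrals.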
We're having a transformation from reactants to products. The delta G0 is important. That helps us determine the equilibrium condition. And in general, it's temperature dependent. And you're going to find things like delta G0's or their equivalents tabulated at standard temperatures in databases. And you might be asked to calculate the reaction balance at some other temperature. And to do that you're going to need the cp's and the delta cp's. So formally, it's very similar to the unary case. I want to highlight that. AUDIENCE: And if-- sorry. If a problem is just happening at standard temperature, can you then just use the standard Gibbs values? And-- PROFESSOR: Yep. AUDIENCE: --I guess you wouldn't need to integrate in that case? PROFESSOR: Yes, indeed. Yep. h0 for elements at standard temperature and pressure is defined as 0. s0 for elements at standard temperature and pressure is a number, often in the back of DeHoff. Delta h0, delta s0, delta g0 for compounds-- those are numbers often found in the back of DeHoff for common reactions. That's right. And so if you're at standard temperature and pressure, you normally have everything you need, even in Wikipedia, for standard reactions. And I mean, we're analyzing, like, ammonia in this class, so it's very common substances, easy to find the data. Thank you for that. All right. Let me move on. At risk of repeating myself, notes on reaction equilibrium. OK. A negative delta G0 drives the reaction to the right. That's a straightforward point. Adding more of a given component shifts the reaction balance. It shifts the reaction balance to maintain equilibrium. That's Le Chatelier again. Any change in the status quo prompts an opposing reaction in the responding system. And one more thing. For three or more components, there is no unique equilibrium composition. So let's analyze again this case, x sub C squared over x sub A, x sub B. And this equals-- OK. So this thing here is just a number, let's say at a given temperature. It's a number.
And we have three variables. We know that x sub A plus x sub B plus x sub C equals 1 by definition. They're mole fractions. We still have one degree of freedom at equilibrium, so there's an infinite number of compositions that would satisfy equilibrium here. You can play with the math. If you want to parameterize-- fix-- you can fix x sub C and then calculate x sub A as a function of x sub B here, if you like. There's lots of ways to slice this. But basically, you could shift your equilibrium balance. So how does that work? Let's see how this works with Le Chatelier. Let's say you're at equilibrium. Let's say you're at equilibrium. And you add a little bit more x sub B. You inject a little bit more component B into your system. So the denominator ticked up a little bit. How's the reaction going to respond? How is the reaction going to respond? We just added a little bit more B. We kicked it out of equilibrium. How will the reaction respond? AUDIENCE: It'll flow to the right to make more C. PROFESSOR: Yeah, it's going to swerve to the right to try to counteract that change and bring this expression back to equilibrium. So you add more B, the reaction shifts to the right. Likewise, if you add more C, the reaction will shift to the left. If you take away some A, the reaction will shift to the left, and so forth and so on. You cannot get away from Le Chatelier. It's always going to be there trying to restore equilibrium. It's in the math. It bears pointing out from time to time. All right. Let's talk about temperature dependence. This gets to one of the problems in the homework. Temperature dependence of reaction equilibrium. Now, I'm just going to do some math here, and we'll talk about the meaning. Let's calculate the partial with temperature at fixed pressure of the reaction balance. And I'll do log of K sub p because the math is easier this way. This is d/dT at fixed pressure of minus delta G0 over RT.
And by the chain rule, this equals delta h0 over RT squared, minus 1 over RT, d delta h0 dT at fixed pressure, plus 1 over R, d delta s0 dT at fixed pressure. So that's just the chain rule. d delta h0 dT at fixed pressure equals delta Cp, again, coming back to transformation quantities, and d delta s0 dT at fixed pressure equals delta Cp over T. We rely on this a lot. And you might want to remind yourself of this. You might want to show this just in the minutes after class. Go back to when we introduced these potentials. So using these expressions, we find d log of K of p, dT at fixed pressure, equals delta h0 over RT squared. And this has a name. This is called the van 't Hoff equation. OK. You're going to use a form of this in the homework because you are implicitly asked to calculate how the reaction balance changes with temperature. Let's see how this works out. Let's see how the math works. So for example, let's consider an endothermic reaction. Endothermic reaction. Q and delta h0 are greater than 0. That's an endothermic reaction. For example, that's the cold pack, right? So an instant cold pack. Delta h equals the reversible heat for the reaction. Delta h greater than 0 means d log Kp dT at fixed pressure is greater than 0. What does that mean? Reaction moves to the right or left with increasing temperature? With increasing temperature, do I move to the right or the left if this condition is true? AUDIENCE: To the right. PROFESSOR: Move to the right. K of p is products over reactants. So if K of p is increasing, if this slope is positive, that means I'm moving to the right. Reaction moves to the right with increasing temperature. What does that mean? The system tries-- it doesn't actually know what it's doing-- to oppose the temp rise by taking up heat in an endothermic reaction. If you try to raise the temperature, it's going cold pack on you. That's what van 't Hoff says. You cannot get away from Le Chatelier.
And the converse, of course, is going to happen for an exothermic reaction. Let me ask you. If I just rip that-- wrote that backwards, products-- I just flip it, make product reactants and reactants products, everything I just said would be-- have a sign flipped. And it works out. Because nature doesn't know how I wrote this. Nature doesn't care how I wrote this. It doesn't know what the difference between products and reactants are. So an exothermic reaction moves to the left with increasing temp. Right? OK.
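Integrating the van 't Hoff equation between two temperatures, assuming ΔH0 is constant over the interval, gives ln(K2/K1) = (ΔH0/R)(1/T1 − 1/T2). A sketch of the sign behavior just discussed, with hypothetical numbers:

```python
import math

R = 8.314  # gas constant, J/(mol K)

def K_at(T2, K1, T1, dH0):
    # Integrated van 't Hoff relation, with dH0 taken constant on [T1, T2]
    return K1 * math.exp((dH0 / R) * (1.0 / T1 - 1.0 / T2))

# Endothermic (dH0 > 0): K grows on heating, reaction moves right
K_endo = K_at(400.0, 1.0, 300.0, +50000.0)
# Exothermic (dH0 < 0): K shrinks on heating, reaction moves left
K_exo = K_at(400.0, 1.0, 300.0, -50000.0)
```

Flipping the sign of ΔH0 exactly inverts the factor multiplying K1, which is the "write the reaction backwards and every sign flips" observation from the lecture.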
MIT_3020_Thermodynamics_of_Materials_Spring_2021
Lecture_2_Scope_and_Use_of_Thermodynamics.txt
[SQUEAKING] [RUSTLING] [CLICKING] RAFAEL JARAMILLO: Good, all right, so let's get started. Today is lecture 2 of 3020, and we're going to talk about the scope of thermodynamics. And one last note-- I'm in another location, so I'm getting used to the configuration here with the camera. My face is hidden behind the camera half the time. I'm sorry for that. I think that by Monday I'm going to have this all worked out. But at least you can see me, so there we go. All right, so what does thermodynamics treat and what does it not? That's the topic of the day, the scope of thermodynamics. So first we're going to talk about states of matter. What is a state of matter? There's different ways to define it, of course, but we're going to define it as something that has well-defined responses to, let's say, squeezing, heating, adding more of the same stuff, and adding different stuff. We're using colloquial language here, but the way that matter responds to stimuli like these is characterized by response functions. And we're going to write these down. So these are the types of things that we're going to do throughout the term. Another thing that we often do to states of matter is we apply fields. We don't treat that in 020. We don't get there, but the types of fields we might apply are strain fields, gravitational fields, magnetic fields, or electric fields. We might just get to a little bit of electrochemistry in the class. We'll see if we get there. But, otherwise, we're not really going to spend time talking about applying fields. So we have this stuff, and we're characterizing it. And we characterize it by how it responds. So that's one thing that thermodynamics treats. Thermodynamics also treats transformations between states. So this is pretty generic. We have some state A, and it transforms to some state B. It treats transformations. So we have starting and final states described with certainty. So we have certainty over what those states are, how they behave.
These diagrams are really useful for that. You say here's the state that you expect. Let's change the temperature and pressure. Here's the final state that you expect. So we can use phase diagrams to make these predictions with certainty. One thing that's worth knowing about thermo when you first learn it is that the process is abstract, meaning thermo tells you how things start out and how they end up. But the process of transformation-- it's very much a black box in thermo, and we don't really treat it. There's a class for that-- kinetics-- which you take in the fall. So to be specific: kinetics is not described. There's no time in thermo-- at least in equilibrium thermo, as you first see it in 3.020. So we will never do this to you: we're never going to take any time derivatives. We're never going to calculate rates. We only calculate what you get at the end of a process if you wait sufficiently long. What is sufficient? I don't know. This is one way to think about it: thermo describes the why and only hints at the how. I find this an interesting way to remember what thermo does and what it doesn't. So, for instance, when we ruptured the hot pack or the cold pack, we could understand those spontaneous reactions. We understood why. One of them was driven by entropy. The other was driven by energy. That was the why. But the how-- we didn't talk about it. In fact, somebody thankfully asked me how that worked. What was actually going on inside the bag? It had to do with the two different bags and the surface preparation and supersaturation. Thermo doesn't really treat the how. It just gives you some ideas. So it's a starting point for understanding the how, but it does tell you the why. All right, what about the use of thermodynamics? You're here because thermo is useful. It's really useful, whether you become scientists or engineers, and that's why we make you take it.
So how is it useful? How do we use it? We use it to predict and control matter. And we use it to transfer knowledge-- again, an example being phase diagrams, which are great summaries of knowledge. So let me give you an example. Let's have a temperature axis here. This is temperature in degrees C. And we're going to study water at one atmosphere of pressure. So it's a one-dimensional system here, temperature. And here it is boiling. All right, so what is the freezing point of water? Somebody? You know this. What's the freezing point of water in C? AUDIENCE: 0 degrees Celsius. RAFAEL JARAMILLO: Zero, right. I'm freezing right now in this basement in the woods in Vermont. So I'm freezing. What about if we're here in the summer? It wouldn't be boiling, but what is the boiling point of water in degrees C? 100-- someone put it in the chat. Well, I don't really monitor the chat. Not because I'm anti chat-- it's just one too many screens cluttering up my thing. So next time, please use the mic. But yeah, it's 100. Thanks. So here's a question. How did you know that? Anyone who just did a quantum chemical calculation from first principles, I want you to raise your hand. Nobody did that. In fact, if you did, you'd probably be writing a paper right now because that's still a very, very, very hard calculation. In fact, I've never seen it-- an accurate prediction of the freezing point of a liquid from first principles using theory. So how did you know that? AUDIENCE: These are material properties of water. RAFAEL JARAMILLO: These are the material properties of water. You know that because you know that. And you know that because you have access to the data that's out there. Materials have properties, and some of these are so commonplace that you know them without having ever taken a class. So these are observations. These are two of the huge number of observations on materials that mankind has taken over millennia.
Thermo gives you a framework for ingesting those observations and using them to make predictions about states that you have never seen before. So this is the way to think about it. Empirical observations-- a bunch of observations: observation 1, observation 2, observation 3. All these little pieces of data get fed into something called the laws of thermodynamics. This is the conceptual framework. And then, from these observations and the laws of thermodynamics, you can deduce predictions, or what I like to call decisions. Because when we're scientists, we talk about hypotheses and predictions. But when we're engineers, we have to make decisions. At what temperature will they run this furnace? How high can this autoclave run without ruining my product? What's the maximum temperature that this product can see during shipping? And you don't have time to run a million experiments. So you need a deductive framework to make decisions. And that's what thermo gives you. How do we store observations? We store them in databases. So as you'll see starting with the first p set, thermo concerns itself very much with the maintenance of databases. Without materials data, these laws of thermo are just text on a page. They're utterly useless. You need the data fed into the deductive framework to make decisions. So the databases have huge economic value. When you download the student version of Thermo-Calc, they're giving you a very minimal set of databases. First of all, you'd have to pay a fair amount to get a larger set. But if you want the real stuff, these are trade secrets. So companies like Alcoa guard their thermodynamic databases the way a company like Intel guards its lithography masks. It's the key to the entire operation. So the economic value of data is huge in thermo and materials science more generally.
Now, fortunately, we don't need to sit down with our pen and paper and figure out what the law of thermodynamics predict every time. We have software for that. So databases fed into software allows you to make predictions and helps you make decisions in a reasonable time frame. That's how you'll use thermo in the real world. This is how you use thermo in the real world. OK, our next topic, systems in thermodynamics. All right, systems are characterized by temperature, pressure, volume, and composition, also boundaries. So here's an example. Here is a balloon. There's a boundary. This is in. And this is out. All right, so let's say we want to calculate some properties of the gas inside of this balloon. What should we take as the boundary of our system? It's kind of painfully obvious, right? You take the plastic. You take the actual balloon, the actual balloon that-- you choose that as your boundary. That makes sense. It's kind of obvious, but it's good to start from an obvious example. Why do you choose the physical balloon as a boundary of your system to analyze the gas inside? Why not just choose an arbitrary volume? Why not say no, I'm going to choose as my system the lower half of the balloon? And this is an imaginary line or surface separating the two parts of the balloon. Why not choose this lower volume to analyze? AUDIENCE: Because there's nothing separating it from the upper volume of gas. RAFAEL JARAMILLO: There's nothing separating it. And so as a result, particles can move between the lower volume and the upper volume. So if I choose the whole balloon, I get to use conservation of particle number. That's convenient. If I now treat these as two subsystems-- system A and system B-- now I need to keep track of how they exchange particles. It's more work. We get the same answer if you do it, but it's more work. So this is, again, a painfully obvious example. 
But starting as soon as this p set, we hope to give you some slightly less obvious examples of how you choose boundaries. You choose boundaries out of convenience. You choose a boundary when you analyze a system to make your life easier. It depends on what questions you're asking. So I believe in problem 1 on the p set, we ask you to run through some of these mental exercises for different examples-- right, boundaries chosen out of convenience. So let's do an example: adding sugar to a glass of water and stirring. So let's see. Let's draw a beaker. And there's water, good. And here's sugar. There's sugar, and the sugar gets added. And I'm going to have a stir bar. I have to stir. So I have a stir bar-- a little swizzle stick or something-- and I'm stirring that around so it's moving. All right. Let's talk about this system. First, I'm going to write down a list of all the things in this picture. So the things are water, sugar, the glass, the atmosphere-- I didn't really draw it, but it's there-- and the stir stick, the stir bar. I want to get an answer like: how sweet will the water become? That's the answer I need. So to analyze this, what should be in the system, and what should not be in the system? Somebody-- should the water be in the system? AUDIENCE: Yes, because it's one of the main components of the whole mixture. RAFAEL JARAMILLO: Yeah, OK, so we're going to choose the water, and the sugar as well-- it's one of the main components of the mixture. So let's go. What about the glass? Should that be part of the system? AUDIENCE: No. RAFAEL JARAMILLO: Probably not, unless we expect some kind of reaction with the glass. But the sugar water is not going to etch the glass. If this were a hydrogen fluoride solution, it would etch the glass, and we'd have to include the glass. But we're not going to include the glass. The atmosphere-- do we have to include that?
AUDIENCE: No, because it's not changing. RAFAEL JARAMILLO: Yeah, if this is just something you're doing in your kitchen, probably not. If you're running a factory making sugar water, like a simple syrup or something, you probably want to include the atmosphere because you want really good process control. And the humidity in the atmosphere might make a difference. That's an example. Or if you're a monk brewing a farmhouse ale-- and we know that the stuff in the air from the surrounding farms makes it into the beer and helps change. So there are examples where you do want to include the atmosphere in the system, but here not. And the stir bar, not-- the stir bar has a function, but it's not going to be a part of the thermodynamic system. What about the boundaries? All right, these are the types of boundaries. And before we go through these, this is as good a time as any to remind you that the associated reading is every bit as much a part of the course as the lecture. So if you keep up with the reading and do the associated reading before lecture, you'll get a lot more out of the entire experience. So thank you for keeping up with the reading. All right, boundaries-- open or closed? Somebody please tell me. Is this open or closed system? Which would you choose and why? And what does that mean, open or closed? Somebody. AUDIENCE: I would say it was closed because nothing is leaving the system. There's no evaporation happening or particles leaving. RAFAEL JARAMILLO: Good, OK, that's what I would choose too. Closed means no loss of particles or no gain of particles. Particle number remains the same. So the example given there was that if you included evaporation-- loss of water-- it would be an open system. You'd have to include the atmosphere in your system. And if you're growing-- if you're in a crystal growth experiment where you slowly let the water evaporate-- the supernatant-- then it's an open system. 
But we're doing this quickly, and we're not going to wait for the water to evaporate. So closed, thank you. All right, rigid or not rigid? AUDIENCE: Maybe not rigid, since the top is not, like, a hard solid surface. RAFAEL JARAMILLO: OK, good. So the top-- it's not rigid. That's what I would choose too. Let's talk a little bit more about what that means. It means that the volume can change. The volume is free to change. That's what was said there-- the top is not a hard surface. So we assume that the walls of the glass are rigid. They're not going to change. But the top surface is what's called a free surface. It can exchange volume with the atmosphere. And the ability to exchange volume-- that's the mechanism by which two systems arrive at the same pressure. So there's a notion of pressure regulation. Regulation-- of pressure, of temperature-- becomes very important in thermo. Non-rigid systems are pressure regulated because they can exchange volume with the atmosphere. If there is a fluctuation in atmospheric pressure, that same pressure fluctuation will be felt by the water bath, for instance. As opposed to a rigid system-- a rigid system can build pressure. You can have a bomb or a high-pressure vessel that's at a different pressure from the outside. Why? Because it's fixed volume, rigid. All right, good. And finally, adiabatic or diathermal? Right, so non-rigid meant that you're going to be at atmospheric pressure throughout. In my view, the Zoom webcam preview is blocking this text right here. Is it blocking this text in anyone else's view? AUDIENCE: No. RAFAEL JARAMILLO: OK, good. Thanks. All right, so what about adiabatic or diathermal? Is this adiabatic, or is it diathermal? The boundaries-- adiabatic or diathermal? So adiabatic means a process or a boundary that does not allow heating-- a thermally insulating boundary. Diathermal is the opposite.
Diathermal is a process or a boundary that allows heating-- allows heat to flow, allows heat exchange. AUDIENCE: So it'd be diathermal. RAFAEL JARAMILLO: It'd be diathermal, right. So in the same way that the free surface allows the atmosphere to regulate the pressure in the water, keeping it at atmospheric pressure throughout, the diathermal boundary-- the glass might be a little slow, slow thermal transport, but there's definitely very fast thermal transport at the free surface-- is going to keep the water at the temperature of the room throughout. So the diathermal boundary regulates the temperature. The non-rigid boundary regulates the pressure. I'm taking my time here, intentionally introducing a bunch of new terms. Or if they're not new terms, the meaning of all these terms is very specific in thermo-- the idea of regulation, the idea of choosing boundaries. So you'll do some examples like this in the p set. All right, let's talk about types of systems-- classifying systems. Classifications. All right, so this is just getting some essentials out of the way here-- unary, one component. All right, so an example of a component would be water. OK, so here's an example. A unary system has one component, one molecular component. A multi-component system has more than one. So, for example, the binary system of water and sugar. I've got hydrogen, oxygen, and carbon here. How come it's not three components? Anybody? AUDIENCE: Because at no point are we discussing, like, the separation of those components. RAFAEL JARAMILLO: That's right. At no point are we breaking bonds or reforming molecules. This remains water throughout. This remains sucrose throughout. This is not a digestive system where you break down the sugar. This is not an electrochemical water-splitting system where you might be using some source of energy to electrochemically generate hydrogen gas from water. So that's good.
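The classification axes just introduced-- open versus closed, rigid versus non-rigid, adiabatic versus diathermal-- can be sketched as a tiny data structure. This is my own illustration, not course code; the class and field names are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Boundary:
    """One system boundary, classified along the three axes from lecture."""
    open_: bool        # open (exchanges particles) vs closed
    rigid: bool        # rigid (fixed volume) vs non-rigid (free surface)
    diathermal: bool   # diathermal (allows heating) vs adiabatic

    def describe(self) -> str:
        return ", ".join([
            "open" if self.open_ else "closed",
            "rigid" if self.rigid else "non-rigid",
            "diathermal" if self.diathermal else "adiabatic",
        ])

# The sugar-water glass as classified in lecture: no evaporation (closed),
# a free top surface (non-rigid), heat exchange with the room (diathermal).
sugar_water = Boundary(open_=False, rigid=False, diathermal=True)
print(sugar_water.describe())  # closed, non-rigid, diathermal
```

The point of the taxonomy: the non-rigid boundary is what lets the atmosphere regulate the pressure, and the diathermal boundary is what lets the room regulate the temperature.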
So your choice of unary versus multi-component-- there can be more than one answer. And the same molecular components can be analyzed as a unary or a multi-component system, depending on the situation. So, again, you make these decisions out of convenience because you understand the process that you're engineering, that you're designing. All right, OK, good, so that's unary versus multi-component. And it can sometimes be a little easy to get tripped up there. All right, two different types of classification-- homogeneous versus heterogeneous. So homogeneous means one phase. So, for example, the sodium acetate solution-- the supersaturated sodium acetate solution was the starting point for the hot pack demo. What about heterogeneous? More than one phase, right-- solution plus solid sodium acetate. That one needs, I think, a little bit less description. What about closed versus open? This we've already discussed-- no mass exchange with surroundings versus can exchange mass with surroundings. So those are some basic and important classifications. Next category-- we're going through the necessary stuff here to get started on real thermo problems. State functions or, if you like, state variables. We'll use these interchangeably, state functions and state variables. So these characterize the system. They are independent of history. This is a really subtle point. This will trip you up. It trips me up. It seems obvious, right? The fact that you can characterize a system with variables which are independent of the history of that system is very non-obvious. That was a major intellectual accomplishment. It shouldn't sit easily with you. It takes some getting used to. You shouldn't trust me on this yet. Common state functions found in 3.020-- so would somebody like to volunteer a common state function you might find in thermo? AUDIENCE: Temperature RAFAEL JARAMILLO: Yeah, temp.
So that's a state function. We use T for temperature. To denote volume, we use V. Pressure is P. Any others? AUDIENCE: Enthalpy. RAFAEL JARAMILLO: Enthalpy, OK. That's a state function. What about ones that are not so specific to thermo? Are there others that are more colloquial? AUDIENCE: Concentration. RAFAEL JARAMILLO: Concentration-- I'll say composition. Yeah, composition-- and we'll use atomic percent often in this class. What about mole number, total system size, things like that? And then there's enthalpy. And then there are others which are more specific to thermodynamics. There's entropy. There's Gibbs free energy. And there's Helmholtz free energy. If we were course 2-- if we were mechanical engineers-- we would be spending a lot more time on Helmholtz free energy and enthalpy. If we were physicists, we'd be spending a lot more time on Helmholtz free energy and internal energy, which we're going to denote U. But as materials scientists, we spend most of our time on Gibbs free energy and enthalpy, for reasons that we'll discuss at length in a couple of lectures. But there are more-- like magnetization, which we don't treat, and electric polarization. There are many. OK, so we have state functions, state variables. We also have equations of state. So an equation of state, in general, is some state function X equals a function of other state functions Y1, Y2, Y3. In this formalism, that's a state function, and these are state variables. But I'm going to use these interchangeably-- state functions, state variables. All right, so you all know one equation of state. What's the one everybody knows-- it's even on the T-shirts? When I do this lecture in person, I wear my MIT T-shirt with an equation of state on it. AUDIENCE: It'd be PV=nRT. RAFAEL JARAMILLO: Yeah, PV=nRT, right. PV equals nRT. OK, so that's an equation of state-- the equation of state for an ideal gas. This holds always for an ideal gas at equilibrium.
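Since PV = nRT is the one equation of state everyone knows, here is a minimal sketch-- my own, not course material-- that rearranges it to solve for pressure or volume, with R in SI units.

```python
# Ideal gas equation of state, PV = nRT, solved for one variable at a time.
# SI units throughout: P in Pa, V in m^3, n in mol, T in K.
R = 8.314  # gas constant, J/(mol*K)

def ideal_gas_pressure(n, T, V):
    """P = nRT/V."""
    return n * R * T / V

def ideal_gas_volume(n, T, P):
    """V = nRT/P."""
    return n * R * T / P

# One mole at 298 K and atmospheric pressure occupies roughly 24.5 liters.
V_molar = ideal_gas_volume(n=1.0, T=298.0, P=101325.0)
print(round(V_molar * 1000, 1), "liters")
```

Keep in mind the lecture's warning: having a closed form like this is the exception. For most real materials there is no such formula, only databases.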
That's pretty cool, and you can do a lot with that. But before we do, I want to warn you: this is the exception rather than the rule. For almost every system, every material which you'll use in your careers, we do not have an equation of state like this. Maybe we have some that are empirically determined. You might find some magnetic systems for which equations of state have been exactly derived from fundamental principles, like this one. But this is the exception rather than the rule. Usually, you can't write down a closed form like this. Usually, you have to deal with much more complicated data resources. Something on units here-- let me sneak this in. R equals PV over nT. What are the units of that? Pressure is pascals. Volume is meters cubed. Mole number is just a number. And temperature is kelvin. In fundamental SI units, a pascal is a newton per meter squared-- force over area, that's pressure. We have a meter cubed, and we have a kelvin. So then you get newton-meter-- and what's a force displaced over a meter, a newton-meter? AUDIENCE: The joule. RAFAEL JARAMILLO: Joule-- joules per kelvin. So R equals 8.314 joules per mole per kelvin. It's a handy thing to remember. We're going to be sticklers for units in this class. A few more things-- thermodynamic properties. All right, so at the very beginning, I told you that states of matter have certain responses when you squeeze them or you add stuff or you heat them. And these are going to be described by thermodynamic response functions. So that term, response function, sounds kind of fancy. Don't let it scare you. It describes how a system responds to something. Someone screams, you cover your ears-- that's your response function. So a common response function that we use in thermo is the isothermal compressibility, beta = -(1/V) dV/dP at fixed temperature. And, of course, this is the same as -d(ln V)/dP.
The only real prerequisite for thermo is multivariable calculus. And I think the first time I learned thermo, and probably the second time even, I didn't appreciate just how much I relied on multivariable calculus. So just know that now so that you're not surprised later. All right, thermal expansion-- it's another common one. Alpha equals (1/V) dV/dT at fixed pressure-- again, we normalize by the volume, and we do it at fixed pressure. And there are others, but these are the easiest ones to write down. I'm giving you an idea of the sorts of properties that we're going to calculate in thermo. So let's consider ideal gas compression at fixed temperature-- volume initial at pressure initial going to volume final at pressure final. All right, so the compressibility beta is -(1/V) dV/dP at fixed temperature. And for an ideal gas, this is very, very simple. This is -(1/V) d(nRT/P)/dP at fixed temperature, which equals nRT/(VP^2), which equals 1/P, since PV = nRT. This is really simple for an ideal gas, and only for an ideal gas. So the final volume after compression is going to be the initial volume plus the change-- it's like the second fundamental theorem of calculus: V_final equals V_initial plus the integral from P_initial to P_final of dV/dP, and the integrand dV/dP is -V beta. So we plug in: V_final = V_initial - integral from P_initial to P_final of (nRT/P^2) dP = V_initial - nRT/P_initial + nRT/P_final, which, of course, just equals V_final, since V_initial = nRT/P_initial and V_final = nRT/P_final. So this calculation is really trivial for an ideal gas. You get exactly what you think you would get. The reason why I took the time to do it is to show you the manipulations that you'll need to do later on for non-ideal gasses. Now, thinking about this, you've got to put your multivariable calculus hat on. Of course, for an ideal gas we get these 1-over-x relationships-- P_initial, V_initial; P_final, V_final. And this curve is an isotherm. So we'll come back to these when we do heat engines and the Carnot cycle in lecture 4. We're almost done.
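The compression calculation above is easy to check numerically. The following sketch is my own (the example numbers are made up); it compares a finite-difference estimate of beta against the analytic 1/P, then checks the integrated volume change.

```python
# Check: isothermal compressibility beta = -(1/V)(dV/dP) equals 1/P for an
# ideal gas, and compression from P_i to P_f gives V_f = nRT/P_f.
n, R, T = 1.0, 8.314, 300.0  # example values, fixed temperature

def V(P):
    """Ideal gas volume at fixed n, T."""
    return n * R * T / P

P = 2.0e5   # Pa
dP = 1.0    # small pressure step for the finite difference
beta_numeric = -(1.0 / V(P)) * (V(P + dP) - V(P - dP)) / (2.0 * dP)
beta_analytic = 1.0 / P
print(abs(beta_numeric - beta_analytic) < 1e-9)  # True: they agree

# The lecture's integral: V_f = V_i - nRT/P_i + nRT/P_f, which collapses to
# nRT/P_f because V_i = nRT/P_i.
P_i, P_f = 1.0e5, 4.0e5
V_f = V(P_i) - n * R * T / P_i + n * R * T / P_f
print(abs(V_f - V(P_f)) < 1e-9)  # True: matches the equation of state
```

For a non-ideal gas, the same finite-difference approach still works even when no closed-form V(P) exists-- which is exactly why the manipulations are worth practicing on the ideal case first.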
Let's see, two last points here-- intensive versus extensive properties. All right, intensive properties can be defined and measured at any point within a system. Would somebody like to volunteer an example of an intensive property? AUDIENCE: Temperature. RAFAEL JARAMILLO: Yeah, temp. When they take your body temperature, they assume that you have one body temperature. There are different places they can stick the thermometer, but you should get the same number. Density-- you can locally define density, and you can measure it locally. Composition, for a uniform system. Pressure-- and there are more. Extensive properties depend on the extent of the system. They scale with system size. So some examples here would be energy, volume, mass, entropy, and so forth. The doctor can take your temperature at several different places, but to weigh you, your whole body has to be on the scale. They can't weigh you just by weighing your arm. So extensive-- you need to know the extent. Intensive-- it can be defined and measured at any point within a system. Easy way to remember that. All right, we're going to end today with the definition of a phase-- a phase of matter. So I'm going to define a phase the following way. A phase is a region within which all intensive properties are uniform. All intensive properties are uniform. That's a phase of matter. So, for example, solid sucrose, or things we've seen here-- water, simple syrup, glass. A note on length scales-- it's an important caveat. If I zoomed down to atomic length scales, I might say this isn't uniform-- I've got different elements, depending on where I'm looking. So this has to be on length scales large compared to molecular length scales. And phase boundaries, which, of course, are a really important topic in thermo and materials science, are classified similarly to system boundaries. All right, so we have a two-phase system.
For example, sugar and water-- we have some sugar, and it's in water. We have a sugar water system. And we're going to think about this boundary in the same way that we classified system boundaries. And so that's an open boundary because sugar molecules can go across the boundary into the water. And similarly, sugar molecules can come out of solution and go back into the solid. This is a boundary that exchanges volume. It exchanges heat energy, and it exchanges mass. Good, so that's all I had prepared for today. It is 10:54, almost perfect. Thanks for joining. As usual, hang out until at least 11:00 if folks want to talk or ask any questions. Otherwise, I'll see you Monday.
MIT_3020_Thermodynamics_of_Materials_Spring_2021
Lecture_8_Mathematical_Implications_of_Equilibrium_and_Spontaneous_Processes.txt
[SQUEAKING] [RUSTLING] [CLICKING] RAFAEL JARAMILLO: So we're going to talk about equilibrium in a unary heterogeneous system. So let me give you the reason why. We are not talking about phase diagrams in exam one. But that comes next-- that's what we cover after the exam. So today's lecture is a bridging lecture: we're going to talk about applying equilibrium, which we've learned about, to unary heterogeneous systems. And of course, this will lead into unary phase diagrams. So let's set up the problem. The problem is as follows-- we're going to isolate our system. So we have a box, and it's going to have thermal insulation. Imagine that thermal insulation all around there. And we're going to have two phases in this box. We're going to have an alpha phase. And then over here, in a different region, we're going to have a beta phase. And we're going to have a phase boundary between them. And this is unary, so one component. So: a system with one component and two phases. We're isolating the system. So if it's isolated from the surroundings, what is going to be its condition for equilibrium? What's the condition for equilibrium for an isolated system? STUDENT: Is it that entropy is at a maximum? RAFAEL JARAMILLO: S max, right-- the maximum entropy condition, for an isolated system. Thank you. OK. And this is part of the problem statement-- this isn't always true-- there's a boundary between the phases internally. Think of the boundary between the ice and the water in your glass of ice water. That's an example. That internal boundary is non-rigid, it's open, and it's diathermal. So it's the opposite of the external boundary. So that's the problem setup. And our goal is to evaluate the equilibrium condition-- so that's it. Evaluate the equilibrium condition. So let's see what we mean by that. So let's write an expression. If we want to evaluate the condition dS = 0, we have to write an expression for dS.
So let's start with phase alpha. For phase alpha, we'll write the combined statement: dU^alpha = T^alpha dS^alpha - P^alpha dV^alpha + mu^alpha dn^alpha. You'll recall a couple of lectures ago, we introduced briefly superscript phase labels. And now we're going to start using them-- now we're going to start using them a lot. Lost my black pen. OK. All right, so this is the combined statement. You've seen this before. We're just adding superscript phase labels. So we're going to rearrange. This is a pretty easy algebra operation. We're going to rearrange and write dS^alpha = (1/T^alpha) dU^alpha + (P^alpha/T^alpha) dV^alpha - (mu^alpha/T^alpha) dn^alpha. Likewise, for the beta phase, we're going to have dS^beta = (1/T^beta) dU^beta + (P^beta/T^beta) dV^beta - (mu^beta/T^beta) dn^beta. I'm going to try using this pen for a while. It's thinner. You guys can tell me which one you like better. So we have made an assumption here, which is that the alpha phase is everywhere at the same temperature. Also, it's everywhere at the same pressure. And it's everywhere at the same chemical potential. Similarly, we've assumed that the beta phase is everywhere at the same temperature, everywhere at the same pressure, and everywhere at the same chemical potential. So you have to state your assumptions-- it's good to know what they are. All right, so now, entropy-- is it extensive or intensive? STUDENT: Extensive. RAFAEL JARAMILLO: Extensive, right. Entropy is extensive, not intensive. So the total dS-- the thing we're trying to write-- equals dS^alpha plus dS^beta. I want to make sure that people understand that-- if there are questions about that concept, let's tackle them now. No questions about that. So that means the total change of entropy in the system equals the sum of the things which I just wrote down. So I'm going to take the pains to write it out. All right, so this is a big differential equation.
Let's count variables-- how many variables are there? How many independent variables are there? STUDENT: Three? RAFAEL JARAMILLO: There are six here: U^alpha, V^alpha, n^alpha, U^beta, V^beta, and n^beta. Six variables and six coefficients. So we're learning to identify variables and coefficients from differential equations. That's what we were working on for the last two lectures. So: differential form, dependent variable, independent variables, coefficients. I want you to really get to the point where you don't have to think to pick those off. All right, so now we're going to talk about optimization. For the case of unconstrained optimization, dS = 0 requires that all six coefficients equal 0. So those of you who have done a lot of mathematical optimization are probably familiar with this. Those of you who have not, which is probably most of you, might take a minute to think about what that means. So I'm going to draw that in one dimension. We have a six-dimensional problem, but I'm not very good at drawing in six dimensions. But I can draw in one. So let's draw a curve in one dimension. I have x and I have y. Where is the optimum of this curve? Optimization means finding the optimum. STUDENT: The maximum point? RAFAEL JARAMILLO: Right, just the maximum point. So what would the condition for that be? Say it in calculus language. STUDENT: The slope to be 0. RAFAEL JARAMILLO: Yeah, dy/dx equals 0. At that point, the slope is 0. So what that means is, let's say x is unconstrained. As x wiggles around, the value of y doesn't change-- but only as long as I'm at that point with 0 slope. If I'm at a point with non-0 slope, as x wiggles around, y will change. So if there is a potential which tries to maximize y, it's going to force the system to the right, if I'm here.
Or if I'm over here, if there's a potential that tries to maximize y, it's going to force the system to the left. So the only condition for which the system is stationary, we might say at equilibrium, is a condition for which that slope is 0. Now, imagine this in six dimensions. This is 1 plus 1 dimension. Now, imagine it in 6 plus 1 dimensions. I can't draw in seven dimensions any better than I can draw in six. But the same thinking applies-- that for unconstrained optimization, I would need all six coefficients-- those are partial differentials-- to be 0. So the way to solve this problem and to figure out the equilibrium conditions, is to identify what are relevant constraints that reduce the dimensionality of the problem? So that's what we're going to do next-- we're going to think about what constraints apply in this case. And the constraints change from case to case. So let me show you what I mean. Let's talk about constraints. A constrained optimization is-- and you could-- if you went into business consulting, you're doing constrained optimization. Operations research, a lot of engineering problems boil down to constrained optimization. We're just scratching the surface here. But constraints come from physical things. Let's start with conservation of energy. I seem to have switched back to the fat pen. I'm going to switch midstream here. I want a vote at some point, you guys tell me which one you like better. Conservation of energy. So conservation of energy, this is an isolated system. So what can we say about d u of alpha? If alpha gains energy, where does it have to come from? STUDENT: From beta. RAFAEL JARAMILLO: Has to come from beta. So d u alpha has to equal minus d u of beta. Again, coming back to our picture, we have an alpha phase, we have a beta phase. They're separated by a very porous, almost non-material boundary. And that boundary allows energy to pass back and forth. They can do work on each other, and they can heat each other. OK, that's good.
This reduces our number of dimensions by 1. We have an additional equation. There's another constraint-- conservation of volume. I said, the system was rigid, it was in a box. And the box's size wasn't changing. So if we have volume conserved, what do we know about d v of alpha? STUDENT: Minus d v beta? RAFAEL JARAMILLO: Minus dv of beta, right. If one side is getting bigger, it has to be because the other side is getting smaller. There's nothing else in the box. And the third conservation law we're going to apply is conservation of mass. If phase alpha gains particles, it has to come at the expense of phase beta. Because the box is closed. The mass of the overall system is fixed. So these three conditions, which are constraints, they simplify the equilibrium condition d s equals 0, to three independent variables with three coefficients that we set to 0. So now, instead of worrying about six slopes in [INAUDIBLE] space, now we only have to worry about it in 3 plus 1 dimensional space. I still can't draw it, but we'll see that the math is relatively simple. So let's talk about those. Constrained optimization-- we're down to three. We're down to three terms-- d s equals 1 over temperature of alpha minus 1 over temperature of beta, d u of alpha. See, I eliminated d u of beta. I eliminated the energy of the beta phase. OK, likewise, p of alpha over t of alpha minus p of beta over t of beta, dv of alpha. I eliminated dv of beta. And similarly, you have mu of alpha over t of alpha minus mu of beta over t of beta, d n of alpha. So now, this is my problem. And this whole thing has to be equal to 0. So now, we've applied our constraints. How can I guarantee that this whole term equals 0? Under what conditions can I be guaranteed that that term will be 0? STUDENT: If all the coefficients are 0 again. RAFAEL JARAMILLO: Thank you, that's exactly right. The only way I can guarantee that it's 0 is if the coefficients are zeroed out. Why is that?
Because I can't set the independent parameters to be 0. Why is that? These two phases can exchange energy freely. They can exchange volume freely. And they can exchange particle number freely. It's an open boundary. I can't stop those processes. So d u of alpha and dv of alpha and d n of alpha in general can be non-zero. So the only way that I can guarantee that I'm at equilibrium is to set the coefficients equal to 0. Set coefficients equal to 0. So let's do that. 1 over t of alpha minus 1 over t of beta equals 0. What does that give me? That gives me temperature of alpha equals temperature of beta. This is known as thermal equilibrium. So remember how we got thermal equilibrium? Two subsystems that could exchange energy reached thermal equilibrium. What's the next one? P of alpha over t of alpha minus p of beta over t of beta equals 0. Well, given thermal equilibrium from this, I get p of alpha equals p of beta. This is known as mechanical equilibrium. And the third one is mu of alpha over t of alpha minus mu of beta over t of beta equals 0. And again, given thermal equilibrium, this gives me mu of alpha equals mu of beta. And this is known as-- does anybody know what this is known as? STUDENT: Chemical equilibrium. RAFAEL JARAMILLO: Chemical equilibrium. So there are three different subsets of equilibrium-- three subsets of equilibrium. Each can be achieved on its own. Each can be engineered to be achieved on its own. But if you add them up, this equals thermodynamic equilibrium. So that's the meaning. This is the equilibrium-- I'll write in fat red-- conditions for two-phase coexistence. So if two phases are going to coexist in equilibrium in a unary system, you must have all three of these-- thermal, mechanical, and chemical. And the structure, the mathematical structure of this will come up again in more complicated scenarios. But if you can see past the math, I want you to see the physical origin of this.
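As a numerical sanity check on the "coefficients must vanish" argument, here is a small sketch (my own illustration, not from the lecture; the simplified Sackur-Tetrode entropy and all the numbers are assumptions for demonstration): two monatomic ideal-gas subsystems exchange only energy, and a brute-force scan of the energy split shows total entropy peaking exactly where the two temperatures match.

```python
import math

# Entropy of a monatomic ideal gas, up to additive constants (units k = 1):
#   S = N * [ (3/2) ln(U/N) + ln(V/N) ]   (simplified Sackur-Tetrode form)
def entropy(U, V, N):
    return N * (1.5 * math.log(U / N) + math.log(V / N))

# Isolated composite: volumes and particle numbers fixed, total energy fixed.
# Only energy is exchanged between the two subsystems.
N_a, V_a = 1.0, 1.0
N_b, V_b = 3.0, 2.0
U_total = 10.0

# Brute-force scan of the energy split; keep the split maximizing total S.
candidates = [i / 1000 for i in range(1, 10000)]   # U_a from 0.001 to 9.999
best_Ua = max(candidates,
              key=lambda Ua: entropy(Ua, V_a, N_a)
                             + entropy(U_total - Ua, V_b, N_b))

# For a monatomic ideal gas U = (3/2) N k T, so T is proportional to U/N.
T_a = best_Ua / N_a
T_b = (U_total - best_Ua) / N_b
print(T_a, T_b)  # the two temperatures agree at the entropy maximum
```

The maximum lands at equal energy per particle, i.e. equal temperature -- exactly the thermal equilibrium condition read off from the vanishing coefficient of d u of alpha.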
When systems can exchange energy, they come to thermal equilibrium. When systems can exchange volume, they come to mechanical equilibrium. And when systems can exchange particles, they come to chemical equilibrium. And you, as engineers, can engineer processes to achieve selective equilibrium. You can allow systems to exchange volume, but have an impermeable membrane so that they can't exchange particles. You can allow systems to exchange energy by heating, but have rigid and impermeable membranes. Or you can have membranes that allow some particles to equilibrate, and others not to. So let's say, you're making a desalination system-- you want water to equilibrate, but dissolved solids not to. Lots of examples here. So this seems really theoretical, but it actually presents a lot of opportunity for engineering-- selective equilibrium. Let's highlight some assumptions to this point. The assumptions to this point are that the intensive parameters-- t and p and mu-- are uniform within each phase. That's been an assumption. We also assume that the boundary has no substance or effect on properties of either phase. So this is relatively straightforward. This-- this gets complicated. So we've assumed the boundary is a dashed line with no substance. In the real world, whether that's a good assumption depends a lot on the size scale you're talking about. So some of you may have been involved in nanoscience-- when things shrink, interfaces become more and more important. It's a geometrical fact. The thermodynamics of systems with boundary effects is a really interesting topic that goes beyond 020, but it becomes important for those of you who might work on colloidal systems, or semiconductor quantum dots, or coatings-- lots of useful materials for which you don't get the right answer if you don't consider the boundary. But in this class, we don't consider the boundary. A corollary of this assumption is that the spatial distribution doesn't matter.
Spatial distribution doesn't matter. So I could have two versions of the system. What colors did I use for my phases? Orange and purple, I think? Hardly matters. One with, let's say, orange and purple, and a single phase boundary separating them. Or I could have purple particles or filaments dispersed in an orange matrix-- in this situation. As long as my total quantities are the same in both cases, there's no difference. There's no difference, if the boundary has no substance. You can create or destroy boundaries-- they're free. So when we start caring about the differences between these, we have to start accounting for boundaries. Enough about boundaries, because we don't cover them in this class, but I want to make clear that you know where we are and where we're not. We'll just finish here. This is the same thing-- if and only if boundaries are free. No cost. Got it, OK, moving on. Let me make something kind of abundantly clear-- I might be beating a dead horse here, but boundary conditions affect equilibrium. The boundary conditions affect equilibrium. So for example, here's a simplified case. Here's an isolated system. I think I used brown here for a thermal insulation, giving you an idea it really is isolated. And I'm going to have an alpha phase. Then I'm going to have a rigid but thermally conductive box containing the beta phase. So in this case, alpha and beta are separated by a boundary that is rigid-- no volume exchange. Closed, no mass exchange. But diathermal-- heating is OK. So this is you grabbing a Coke out of the cooler-- it might be pressurized, but it's going to come to thermal equilibrium with your hand, if you wait long enough. So this is a case of selective equilibrium. You're going to get thermal equilibrium, but not chemical and not mechanical. In such a case, ds simplifies. All you have-- no dv, no d n terms. And t alpha equals t beta at equilibrium. OK, so now, physically, you know that.
You know you take a pressurized bottle out of the cooler, it will come to your room temperature, the temperature of your hand. But until you release the cap, it will stay pressurized and you won't have any mixing. So I'm trying to use these examples, which you're very physically familiar with, and illustrate them in this way, this mathematical optimization framework, that you're maybe less familiar with. So you can become more familiar with it. And why does all of this happen? Why does all of this happen? All of this happens because of entropy generation during spontaneous processes. And this is where we need to keep our eyes on-- generation of entropy during spontaneous processes. So let's write out ds again-- 1 over t alpha minus 1 over t beta, d u alpha, plus p alpha over t alpha minus p beta over t beta, dv alpha. Sorry that's messy. Minus mu alpha over t alpha minus mu beta over t beta, d n alpha. So this is our ds. So let's consider t alpha greater than t beta. Let's consider you drop a hot particle of alpha phase into a bath of beta phase. In this case, 1 over t alpha minus 1 over t beta is less than 0. If the temperature of alpha is greater than the temperature of beta, what can you say about what will happen spontaneously? The hotter phase, alpha, it will heat or it will cool the cooler phase? It will spontaneously what? STUDENT: Heat it up? RAFAEL JARAMILLO: Yes, heat the colder phase. That's actually another statement of the second law-- that's Clausius' statement of the second law of thermodynamics, is hot materials spontaneously heat colder materials. It's equivalent to the other ones, the other statements we've seen. So that means d u alpha is less than 0, or d u beta is greater than 0. Those are the same thing. So if d u of alpha is less than 0, and this coefficient is less than 0, then d s is greater than 0. Good. Entropy is generated-- entropy is generated during heat transfer from hot to cold. This is, again, a restatement of the second law, super important.
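This sign argument is easy to check mechanically. A small sketch (the numbers are invented for illustration): with t alpha greater than t beta and energy flowing out of the hot phase, dS = (1 over t alpha minus 1 over t beta) times dU alpha comes out positive, and the reverse flow would come out negative.

```python
# Entropy change of the isolated composite when only energy is exchanged:
#   dS = (1/T_alpha - 1/T_beta) * dU_alpha
T_alpha, T_beta = 400.0, 300.0   # alpha is the hotter phase (invented values)
dU_alpha = -1.0                  # energy leaves the hotter phase

dS_forward = (1.0 / T_alpha - 1.0 / T_beta) * dU_alpha     # hot -> cold
dS_reverse = (1.0 / T_alpha - 1.0 / T_beta) * (-dU_alpha)  # cold -> hot

print(dS_forward > 0, dS_reverse < 0)  # True True
```

Heat flowing hot to cold generates entropy; the reverse would destroy it, which is why it never happens spontaneously.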
So when a hot object spontaneously heats a cold object, entropy is generated. That's what we want to see, based on everything that we've postulated. And that's what the math gives us. So it's all consistent. That's good. Let's consider another case. Consider t of alpha equals t of beta. So they're already thermally equilibrated. But they're mechanically out of equilibrium. Or in other words, p of alpha over t of alpha minus p of beta over t of beta is greater than 0. So what's going to happen? The higher pressure phase will spontaneously-- it'll either expand or contract. So the higher pressure phase will spontaneously expand or contract. STUDENT: Expand. RAFAEL JARAMILLO: Expand, right. This is not going to contract. Will spontaneously expand, and I could say, at the expense of the lower pressure phase. So it's elbowing the lower pressure phase out of the way. That means d v of alpha is greater than 0, or d v of beta less than 0. And again, from the previous page, and from these, I get d s greater than 0. So it's not just heat transfer that increases entropy, it's also mechanical work that can increase entropy. Why is that, by the way? Let's go back to the baby book picture. Why is it that mechanical work can increase entropy? What's going on? Let me draw a picture. Let's start at t equals 0 with a high-pressure phase, and let's imagine these are gases. And let's draw a low pressure phase. So this is high pressure, and this is low pressure, at t equals 0. And then, I remove the partition. So the high pressure phase expands into the lower pressure phase. Why does that increase the entropy? STUDENT: Maybe because particles are more randomly distributed and go out further from each other? RAFAEL JARAMILLO: Yeah, it's like that. These can be more mixed up in the whole volume. There's a more random distribution of more particles. That's right. Good, last example here. Let's consider temperature alpha equals temperature beta, but mu of alpha is greater than mu of beta.
So in this situation, you have mu of alpha over temperature of alpha minus mu of beta over temperature of beta is, of course, greater than 0. So we have a phase boundary, we have the alpha phase, imagine particles in the alpha phase. I shouldn't have used a color, I should have used the same black, because it's unary, and I have one type of-- And then, we have the beta phase. And again, more of the same component. All right, so matter in the phase at higher chemical potential will spontaneously convert to the phase at lower chemical potential. In other words, d n of alpha is going to be less than 0, or d n of beta is going to be greater than 0. How does this actually happen? Each individual little particle-- I'm going to anthropomorphize here-- each individual little particle samples its local environment. And it asks itself, where can I get the cheapest Gibbs free energy? It's Gibbs free energy price shopping-- it goes price shopping. And it sees that it's in a more expensive region. It actually would like to transform into a cheaper region. So it transforms, and when it does so, the phase boundary moves a little bit. Now that particle has transformed from alpha to beta. The beta phase has grown a little bit at the expense of the alpha phase. And then, the next particle gets to ask itself-- what would I rather be? And then the phase boundary moves a little bit more. And so on until chemical equilibrium is reached. And again, from the combined statement, we find that this spontaneous process increases the entropy, as we expect. So this process here, by which particles transform between their phases-- sometimes in this class, I call it Gibbs free energy price shopping-- that's most of what we deal with in materials science. It's less intuitive than high-pressure things pushing into low pressure things, but we're not mechanical engineers. And it's less intuitive than hot things heating cold things. So of these three examples, it's the least intuitive, maybe, it's the least everyday.
But it provides the most rich behavior. So this is what we're going to be worrying about. And of course, the particles don't sit down and think, right? But through spontaneous random thermal fluctuations, they sample different environments. And they will stay in the environment that's more stable. So that's where it gets towards kinetics. Again, another topic which we don't quite have time for this semester. What we worry about this semester is chemical potential and reaching equilibrium. And I always say you should never anthropomorphize atoms. They hate it when you do that. On that note, I think I'll stop recording and we can leave here for the weekend. I'll stick around till 11:00 or beyond. I'm happy to answer questions.
MIT 3.020 Thermodynamics of Materials, Spring 2021
Lecture 4: Heat Engines and Energy Conversion Efficiency
[SQUEAKING] [RUSTLING] [CLICKING] RAFAEL JARAMILLO: So today, we are going to talk about heat engines. So what's a heat engine? A heat engine is any machine that takes heat and turns it into work. So an engine is something that does mechanical work. A heat engine is an engine that runs on heat. And as we'll see, no machine can turn heat into work with 100% efficiency. That would violate the second law of thermodynamics. But we try to engineer our systems to do it as well as we can. So here's the most obvious case of a heat engine in our day-to-day lives, and especially if you live in Texas right now, is that of power plants. So power plants take some fuel resource, coal, oil, gas, or nuclear. And you burn that to create heat and then use that heat to make mechanical work. So this picture in the upper left here is the Mystic Generating Plant, just a couple of miles from where I am. That is a combined cycle natural gas power plant that provides much of the electricity that runs MIT. And over here is some pictures of, I think, this is maybe a coal or maybe a nuclear plant. It doesn't matter. The point is I want to show you the cooling towers, which we'll come back to a little bit later. So those cooling towers have to do with cooling. And heat engines run on heat. So heating and cooling are somehow important here. Another example of a heat engine. This is where the field all began, actually, is with steam engines. So a steam locomotive takes a heat resource, you burn coal. And you turn that heat into steam. And then the steam does some work for you on a piston. So that's a little bit more of an old-fashioned example. Here is a more contemporary example, the jet engine. You burn an amount of fuel, it creates heat. You use that heat-- and how you use that efficiently is the engineering of the thing. So that's another example of a heat engine. Here's one which is familiar to all of us, an internal combustion engine. We burn something that creates heat. 
And that heat energy is partially transformed into work as the pistons work on a crankshaft. And here's one which is maybe a little bit less obvious, but it's definitely a heat engine. It's the hurricane which harnesses a temperature difference between sea surface and the upper atmosphere and creates mechanical work. So these are all examples of heat engines. So those are real-world examples. Now, what we're going to do is do this in a little more abstract. So let's see here, back to the board here. You guys can see the board, right? Yes? Yeah, OK. Thanks . All right, so we've seen some real heat engines. Now, we're going to abstract this a little bit. Heat engines abstracted. So this is the way that heat engines are represented in a textbook or in a class like this. You have some cyclic machine normally with a circle with an arrow. Here's what it means to be a cyclic machine. It returns to the same state after each cycle. All right, so now already we have some thermo words here that we know. State, so there's state functions involved. And this machine returns to the same state after each cycle. So here's another abstraction. We're going to have thermal reservoirs. We're going to have some high temperature reservoir. And we're going to have some low temperature reservoir, t hot and t cold. So these are called thermal reservoirs. And they are maintained at t hot and t cold throughout. So this is an abstraction. In reality, nothing is maintained the same arbitrary precision for all time. But you can achieve this through engineering. So for instance, you could have a boiling pot of water. That's a thermal reservoir. You know what temperature it is. It's at 100 C, because it's boiling. And as long as you keep enough water in it and keep the heat there, you know it's going to be regulated temperature. Another good thermal reservoir is an inlet of an ocean. And that describes a lot of why power plants are sited where they are. So we have thermal reservoirs. 
And then there are some terms. We have total work, total heat, work in, heat in, work out, heat out, and eta. So total work, total in, out, work, and heat. So tracking flows of energy across the system's boundaries throughout the cycle. And then there's an important term here, efficiency. Eta is defined as the total work that the system does on the surroundings. And the reason I put this in green and circled it is because for today's lecture and today's lecture only, we're going to have some confusion over this sign of work and heat because we have defined for this class work as work done on a system. So positive work increases the energy of a system. But if you're engineering an engine, you probably want to flip the sign because you want to keep track of work with the system does on something, on the electric power grid, say, or on the road and your cars that push you down the road, or what have you. So there's going to be a little bit of confusion there. It can be managed. So that's abstracted heat engines. So here is a typical representation of a heat engine. You have a high temperature reservoir. You have this engine, which operates with efficiency eta. The engine receives heat q in from the high temperature reservoir. And it dumps heat q out to the low temperature reservoir. So heat received, heat rejected. And each cycle, it performs some amount of work on the surroundings. And I ordered new pens. And they haven't come yet. So I'm sort of fading here. But let me switch to purple. So a typical representation of this would be like this. The heat engine, we're just learning terminology here, with efficiency eta operating between t hot and t cold. So that's what the engineers will say to you. We have this heat engine. It's got efficiency and it operates between these two temperatures. In each cycle, q in is absorbed at t hot. q out is rejected at T cold. And work out is performed. OK, quiz, what is q in minus work out minus q out? Somebody, what is that? 
AUDIENCE: Zero. RAFAEL JARAMILLO: Zero. It's zero because this is a cyclic engine. It returns to the same state every cycle. That means every state function has to return to its starting point. Energy is a state function. So the energy of the system cannot change around each cycle. So again, coming back to our bookkeeping, this here, q in minus work out minus q out, this is the energy exchange. This equals delta u, which has to be zero because all the state functions return to their starting point. So let's go back to our slides here just for fun. So here's some real engines. For the thermal power plant, say, somebody tell me where would I find t hot? Where would I find the high temperature? Does anybody know? AUDIENCE: Burning temperature of the fuel? RAFAEL JARAMILLO: The burning temperature of the fuel. And actually, these combined cycle natural gas plants, their first cycle is a jet, actually. It's very similar to a jet. So the burning region is going to look something like this. You have a region where the fuel is burned. And that's going to be the highest temperature inside of these units. And where's the cold temperature? Where's t sub cold? AUDIENCE: Water. RAFAEL JARAMILLO: Yeah, that's a good guess. In this case, it's actually wrong. And I put this up here as a trick here because it's kind of cool. The water here is cold. We're in the harbor. So that'd be a good resource. But the water is pumped up into these evaporative cooling units. This thing here, as far as you can tell, it's just a building on stilts. You don't know what that is. But I've been there. It's very cool. These have the same function as these cooling towers do. And they evaporative-- they cool the water down below the temperature of the inlet. So they're able to lower t sub c a little bit that way. And that turns out to be important to do that for the efficiency of the plant. But yeah, you're basically right. This is-- the reason these plants are on the water, the reason for it. 
And it's because it's a nice big reservoir of cold. So for a steam locomotive, they don't carry around cold reservoirs with them. So their cold would be just the ambient. In a hurricane, t sub hot is the warm surface currents that fuel hurricanes. And t sub cold is the upper atmosphere, the top of this engine. And in an internal combustion engine, t sub cold is your tailpipe. So it's pretty much-- I think that's kind of neat. And as we'll see, the actual numerical value of t sub hot and t sub cold is really important to determining the efficiency of the engine. So let's calculate a cyclic process. Work and heat are process variables. Thermodynamics doesn't describe-- thermo doesn't describe real world processes. So what do we do? We describe a hypothetical process for which the system remains in equilibrium at all times. And this is weird. And it is just weird. It's a weird thing to do. But if we do this weird thing, we can use state variables. We can use equations of state if they're available, if we know them. But here's something to keep in mind. In practice, such a cycle would take infinite time. So when you are venture capitalists and you have somebody coming, pitching you too good to be true energy technologies, normally, you're going to think back to 3020 and try to poke holes in their argument. And here's one hole you might find, which is power is work divided by cycle period. As you approach this ideal of being in equilibrium at all times, you have to slow your cycle down. As you slow your cycle down, your cycle period goes up. And the actual power you get out of your unit goes to zero. So even if you could design and build an ideal heat engine, no one would buy it because it might be maximally efficient, but you get no power out. And this comes up again and again. So I'm glad that you've seen this now. All right, let's talk about one ideal cycle. Let's talk about the most famous one, the Carnot cycle with an ideal gas.
There are many cycles you can calculate, reversible cycles you can calculate from ideal gas. Carnot's just one of them. But it's a famous one. So that's what we're going to do. So here's the Carnot cycle. You start with isothermal expansion at t hot. Let's try that. Now, while I'm drawing this, there was a point on Piazza about this that I think I replied to this morning. For an ideal gas, we're going to tell you-- you don't have to know this. We're just going to tell you that the energy depends only on the temperature. So isothermal processes don't change the energy of an ideal gas. So if the gas is expanding, it's doing work on the surroundings. Does it have to be receiving heat from the surroundings or does it have to be heating the surroundings? It's a question for you. AUDIENCE: Receiving heat. RAFAEL JARAMILLO: Has to be receiving heat. Has to be receiving heat because it's losing energy in the form of work. So it must be gaining energy in the form of heat. So we're going to start at 1 here and we're going to go down to-- get a different color-- 1 here and we're going to go down to 2. So this is step one. We're starting at a smaller volume. And we're expanding to a higher volume, isothermally. This dashed line is the t sub h isotherm. Step two, adiabatic expansion to t sub c. So now, we're going to expand. This is step two. And this dotted line is a t sub c isotherm. All right, step three, isothermal compression at t sub c. That's this. And then step four, which is adiabatic compression back to the starting point, back to point 1. So here's my arrow, going around this way, good. All right, here's a note. The total work is the area enclosed by the cycle. That's the integral of p dv around the cycle. So if you have a cycle on a pv plane, you can already figure out the work done geometrically. This is true for any cycle, not just Carnot.
So let's analyze the isotherms. Isotherms: pv equals nrt, so pdv plus v dp equals what? Zero. It's isothermal. So dv equals minus v over p dp equals minus nrt over p squared dp. So p dv equals minus nrt over p to the power of 1 times dp. All right, so that was useful. So the integral of work equals the integral dp nrt over p equals nrt natural log p final over p initial. Which by, again, using equations of state equals nrt natural log v initial over v final. That's useful. Let's do a sanity check. Expansion does work on surroundings. v final larger than v initial means v initial over v final is less than 1, which means log v initial over v final is less than zero. That means the system loses energy. OK, good, checks out. So this is the expression for an isotherm for an ideal gas. Isotherms continued. For ideal gas, internal energy is a function of t only. du equals n c of v dt. We're not going to derive this in this class. That actually gets kind of beyond the scope of the class. But we'll use it from time to time. This is single variable calculus. Makes things easier. I just want to make sure this is for an ideal gas, and only for an ideal gas. So what this means is du at fixed t equals zero. du for an isothermal process equals zero. And what that means is dq equals minus d work or q equals minus work. And this, again, gets to the point that was on Piazza this morning. And it's a useful expression when calculating properties of heat engines. Now, let's talk about the adiabats. Adiabat is a process with no heating across the boundary. So we know that d work equals minus p dv, as usual. This also has to equal the change in energy, since there's no heating. How do we calculate this? For ideal gas and only for the ideal gas, du equals d work plus dq. But dq is zero. Equals minus p dv equals the thing I gave you on the previous page, ncv dt. So single variable calculus, we can easily integrate that. Work equals n cv t final minus t initial.
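Before moving on, here is a quick numerical check of the isothermal result (a sketch with assumed values, not from the lecture): compare the closed form, work equals nRT log of v initial over v final, against a direct midpoint-rule integral of d work equals minus p dv along the isotherm.

```python
import math

n, R = 1.0, 8.314      # one mole; gas constant in J/(mol K)
T = 300.0              # isotherm temperature (arbitrary choice)
V_i, V_f = 1.0, 2.0    # isothermal expansion

# Closed form from the board (work done ON the gas):
W_closed = n * R * T * math.log(V_i / V_f)

# Direct midpoint-rule integral of dW = -p dV with p = nRT/V.
steps = 100_000
dV = (V_f - V_i) / steps
W_num = sum(-(n * R * T / (V_i + (k + 0.5) * dV)) * dV for k in range(steps))

print(W_closed, W_num)  # both negative: the expanding gas loses energy as work
```

The two agree to high precision, and the sign checks out: the expanding gas does work on the surroundings, so the work done on the gas is negative.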
So that becomes simple for an ideal gas. For an ideal gas, adiabatic curves are described by tv gamma minus 1 equals constant. Or in other words, p final over p initial equals v initial over v final to the power of gamma where gamma equals the heat capacity ratio. Now, we're not going to spend a lot of time on this in 020. So I'm just giving you equations. These are derived well in Wikipedia. You don't need to go to the textbook. But they're also described well in De Hoff chapter 4. And we'll come back to it in lectures 6 and 7. Again, because today's lecture is a little bit of a detour from the mainstream of 020, we're going to just give you some expressions. So that you're familiar with them. This is not a mechanical engineering class. Question, adiabats are steeper than isotherms in the pv plane. Why? Find my plot. Here's my plot. So here, adiabats, which are sections 4 and 2, they're steeper than the isotherms, which are sections 1 and 3. Mathematically, it's because gamma is greater than 1. So if this were 1, you'd have the same curvature as isotherms. But gamma is greater than 1 for reasons we discussed last lecture, because cp is bigger than cv. So those are some properties of adiabats. All added up, we're going to have work. We're gonna have heat. And we're going to have the different segments. So for segment one, segment one was isothermal. We calculated the work. It was nr t sub hot log v1 over v2 using the notation of the cycle that we set up before. So this is an isothermal process. There is the work. Somebody, what's the heat? AUDIENCE: Be negative of the work. RAFAEL JARAMILLO: Negative of the work, exactly. Minus nr t sub hot log v1 over v2. Isothermal process, ideal gas, energy doesn't change. So work and heat have to be equal and opposite. OK, two, the work here was the change of energy, which was ncv t cold minus t hot. What was heat for the adiabat? AUDIENCE: Zero. RAFAEL JARAMILLO: Zero, right, adiabatic.
3 and 4 are going to be the mirror images of 1 and 2. So 3, the work is nR t sub cold now, log v3 over v4. And of course, the heat is going to be the opposite of that: minus nR t sub cold log v3 over v4. And the final leg, which is an adiabat, is going to be n cv t hot minus t cold. So that's the table of the contributions to heat and work. And then using this table, we're going to calculate the Carnot efficiency. So I actually wish I had-- I'll bring it back. I threw that scrap of paper on the floor. Put it up here. Let's calculate the efficiency. Calculating the-- I'll put this in parentheses-- because we're really just calculating the efficiency, which happens to be for a Carnot cycle. Calculating the Carnot efficiency. Work total, the total amount of work the thing does, is the negative of all of these added up. And you see that step two and step four are going to cancel exactly. So it's the difference between step one and step three. So here we go. It's going to be minus the quantity nR t hot log v1 over v2 plus nR t cold log v3 over v4. So that's the total work, total work done by the engine. What about the heat in? How much-- how much of the thermal reservoir-- how much fuel did you burn to run this thing? What's that going to be? That's the heat received at the high temperature isotherm. OK, that's the input. That's the thermal reservoir that you used. So minus nR t hot log v1 over v2, the heat absorbed at t sub h. So the efficiency equals the total work that the thing does over q in, which equals 1 plus t cold over t hot times log v3 over v4 over log v1 over v2. Now, using the property of adiabats, tv to the gamma minus 1 equals constant, you can show that v3 over v4 equals v1 over v2 inverse. And we'll come back to that. It's just three or four lines of algebra. If we have time, we'll come back to that. But if we use that, we get the Carnot efficiency equals 1 minus t cold over t hot.
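The table of four segments can be added up numerically as a check. This is a sketch, assuming 1 mol of a monatomic ideal gas and illustrative temperatures and volumes; it uses the adiabat property that forces v3/v4 to equal v2/v1, and the result should match the closed form 1 minus t cold over t hot.

```python
import math

R = 8.314
n, cv = 1.0, 1.5 * R           # illustrative: 1 mol monatomic ideal gas
T_h, T_c = 500.0, 300.0
v1, v2 = 1.0, 2.0
ratio = v2 / v1                # adiabats (T v^(gamma-1) = const) force v3/v4 = v2/v1

# Work done ON the gas for each leg, per the table in the lecture
w1 = n * R * T_h * math.log(v1 / v2)   # hot isotherm (expansion)
w2 = n * cv * (T_c - T_h)              # adiabatic expansion
w3 = n * R * T_c * math.log(ratio)     # cold isotherm (compression), log(v3/v4)
w4 = n * cv * (T_h - T_c)              # adiabatic compression

w_by_engine = -(w1 + w2 + w3 + w4)     # the two adiabatic legs cancel exactly
q_in = -w1                             # heat absorbed on the hot isotherm
eta = w_by_engine / q_in               # should equal 1 - T_c/T_h
```

With these numbers the adiabatic works cancel, the volume-ratio logs cancel in the quotient, and `eta` comes out to 0.4, exactly 1 minus 300/500.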
So after all that, some steps of which we skipped, we have a very simple expression. So you are running a power plant. Do you want-- what can you do to improve the theoretical limiting efficiency of your plant? AUDIENCE: You can increase t hot and decrease t cold. RAFAEL JARAMILLO: Yeah, increase t hot and decrease t cold. So a lot of engineering decisions, which you will see if you go studying thermal engines-- and we could talk about jet engines, we could talk about power plants. Let's talk about power plants. A lot of engineering decisions which you see come down to trying to increase t hot and decrease t cold. Once you understand this is the theoretical limit, a lot of the designs make a lot of sense because you see what the engineers are trying to do. Increasing t hot, for example, you could do by using supercritical water in your working cycle, allowing you to burn your fuel at a higher temperature. There's a lot of material innovation that's needed in order to enable that because you're increasing the high temperature of-- high temperature part of the system. And we know that you need really advanced alloys, and ceramics, and thermal barrier layers and such to do that. And then a lot of other engineering decisions you see are around decreasing t cold. And so that's where, for example, the cooling towers come in. Especially when you have a power plant, which is in the middle of, let's say, the prairie, and there's no ocean nearby, you don't have a really convenient cold resource. So you build evaporative cooling towers to try to get a lower t sub cold. So for example, here, the combined cycle power plant, the Mystic Generating Station, I looked it up. Very typical of GE, General Electric combined cycle units, the inlet temperature is 1,400 C. So most of your metals melt at this temperature. So you've got to have really special materials at the inlet. And then the heat rejected is at 15 C. That's what the website said. 
So I don't know if that's representative of the water or what-- probably what you get in these cooling units. I'm not sure. So if you plug these numbers into the Carnot efficiency, you find that the theoretical limit is 82%. And the actual efficiency of these is close to 60%. So that's really remarkable. It's really, really good. This tells you a lot about the state of technology for natural gas combined cycle plants. All right, I want to come back to some math here to round us out. So that was kind of neat. But now, we're going to head back towards materials. Let's consider heat transfers for the Carnot cycle, and then we're going to consider a less efficient cycle that burns the same quantity of fuel. So ideal and realistic. Let's consider the heat absorbed and the heat released. OK, so the heat absorbed is q in. We just wrote this down: nR t sub hot log v, where-- let lowercase v equal v2 over v1, just to keep the writing simple. So that's the heat absorbed. And the heat released-- we'll call this q out Carnot-- equals nR t sub cold log v. Now, that was just copying results from the previous slides. A less efficient engine, if it burns the same quantity of fuel, takes in the same heat. That was how we set up the problem. It burns the same quantity of fuel. You burn fuel, you give off a certain amount of heat. But it's going to be less efficient. The heat it rejects, how is that going to compare to the Carnot case? Is it going to be greater than the Carnot case? Or is it going to be less than the Carnot case? Keeping in mind that delta u equals q in minus work out minus q out, and this has to be zero around the cycle. q in is the same. It's a less efficient engine, so it does less work. So it has to reject more heat. So a less efficient cycle, burning the same quantity of fuel, rejects more heat. That's the way to talk about this. It rejects more heat at the low temperature. Makes sense, right? It's losing energy that a better cycle could exploit as work.
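The plant numbers quoted above (1,400 C inlet, 15 C rejection, roughly 60% actual efficiency) can be checked in a few lines. A sketch; the arbitrary heat-input units are an assumption for illustration.

```python
def carnot_limit(T_hot_K, T_cold_K):
    """Theoretical limiting efficiency: 1 - T_cold / T_hot (absolute temperatures)."""
    return 1.0 - T_cold_K / T_hot_K

# Mystic Generating Station numbers from the lecture, converted to kelvin
eta_max = carnot_limit(1400 + 273.15, 15 + 273.15)   # ~0.83, the ~82% quoted

# Same fuel burned (same q_in), less efficient engine rejects more heat:
q_in = 100.0                       # arbitrary units of heat from the fuel
w_carnot = eta_max * q_in
w_real = 0.60 * q_in               # ~60% actual efficiency quoted in the lecture
q_out_carnot = q_in - w_carnot     # first law around the cycle: delta u = 0
q_out_real = q_in - w_real         # less work out means more heat rejected
```

Running the first law around the cycle this way makes the lecture's point concrete: with `q_in` fixed, the less efficient engine's `q_out_real` exceeds `q_out_carnot`.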
And here is where we get to something which puts us on the road to the second law of thermo. We're going to consider this quantity, delta q over t. And for now, just bear with me, because this is probably seeming random. The notation here, the integral sign with the circle in the middle, means we're going to integrate that around the cycle. So for Carnot, this thing, the integral of dq over t, equals nR t sub hot log lowercase v over t sub hot, minus nR t sub cold log lowercase v over t sub cold. So this funny thing just happens to be zero for Carnot. For the less efficient cycle, the integral of dq over t equals nR t sub hot log v over t sub hot, minus q out over t sub cold. The first terms are the same. But this second term is bigger than in the Carnot case. So that means the overall line is negative. This is because q out is greater than q out Carnot. So this is where I want to leave it today, because we will soon see that this is related to entropy generation by a non-ideal cycle. And dq over t is related to the change of the state function ds. So what we're going to see in the next lecture or two is that a Carnot cycle, or more generally, any reversible process, is one which leaves the entropy unchanged. And we'll see that a less efficient cycle, or any irreversible cycle, is one which-- well, the sign will make sense-- increases the entropy of the surroundings. Or you could say increases the entropy of the universe. All right, so now, that's how our detour into heat engines ends. It's 10:51. I hope that this lecture gives you some recognition of the role of heat engines, some insight into how they're engineered, why they're engineered the way they are. Some equations which you can use in later classes, especially in things like course 2 and course 16.
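The cyclic sum of q over t described above can be sketched numerically. Assumptions: 1 mol of ideal gas, illustrative temperatures, and an arbitrary 20% excess heat rejection to stand in for "less efficient"; only the isothermal legs contribute, since q is zero on the adiabats.

```python
import math

R, n = 8.314, 1.0
T_h, T_c = 500.0, 300.0
v = 2.0                                   # v = v2/v1, the isothermal expansion ratio

q_in = n * R * T_h * math.log(v)          # heat absorbed at T_h
q_out_carnot = n * R * T_c * math.log(v)  # heat rejected at T_c (Carnot)

# Cyclic sum of q/T (the adiabats contribute nothing: q = 0 there)
carnot_sum = q_in / T_h - q_out_carnot / T_c    # the nR log v terms cancel: zero

q_out_real = 1.2 * q_out_carnot           # a less efficient cycle rejects more heat
real_sum = q_in / T_h - q_out_real / T_c  # strictly negative
```

The zero for the Carnot cycle and the negative value for the less efficient one are exactly the sign pattern that leads into entropy and the second law in the next lecture.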
But with respect to our class, this slide here is really the most important one because we've taken this data in order to justify the importance of this funny thing, dq over t, which goes to the heart of the second law, which is what we're going to start on next time.
MIT 3.020 Thermodynamics of Materials, Spring 2021
Lecture 18: Case Study in Reacting Gas Mixtures; Introducing the Nernst Equation
[SQUEAKING] [RUSTLING] [CLICKING] RAFAEL JARAMILLO: Right. So this is the last lecture before an exam. And so what I'm going to do is walk through a case study, looking at reacting systems of ideal gases in a little more detail. And then if there's time remaining, I want to do Nernst in a nutshell, just for fun. So we're going to start with reacting systems of ideal gases. And this pen is on its way out, so I'm going to switch. And we're going to consider a reaction of the form A plus B going to 2C. And later on, I'm going to plug in some numbers. And I plugged in numbers for hydrogen plus chlorine goes to 2HCl, which is all in the gas phase. And this is also generically discussed in the textbook, so that's very convenient. And here's what we're going to do. We're going to ask, what can we learn from plots of free energy versus composition, free energy versus system composition? So the Gibbs free energy is the sum of the partial molar Gibbs free energies weighted by the mole numbers. And we know that we decompose this into the sum of the contributions of the pure components before they were mixed, and then we have this ideal solution model. So composition is expressed by those n of i's. We have pure component i at pressure p. That's the reference state. And then here, we have a system that was mixed at fixed T and total p, so just remembering what those terms are. Now we're going to start analyzing this in a little more detail. All right. So for a univariate reacting system-- univariate, that meant that there's only one variable. And that's a system with only one reaction. So the reaction can run to the right or run to the left. There's only one reaction coordinate. We saw this in almost all of the examples that we did. The composition can be expressed with respect to a single variable. So we saw this before. And right here, we're going to write it with respect to nc. So I'm just going to arbitrarily choose the third component.
And so we know that dna equals dnb, which equals, in this case, minus 1/2 dnc. Remember, this is A plus B going to 2C. And we get these from dni over nu of i equals constant, where nu of i is the stoichiometric coefficient. And so that means that n of a equals n of a initial plus the total change of n of a. And we just plug that in: n of a initial minus 1/2 times n of c minus n of c initial. So this is stuff we've seen before in one form or another. n of b is n of b initial minus 1/2 times n of c minus n of c initial. That's just integrating those changes. That's good. And, of course, n of c, well, that's our independent variable, so n of c equals n of c. So that's good. Now we've expressed this univariate system with only one variable, n of c. Everything is in terms of n of c. That's good. We're also going to need n total, which for this particular system happens to equal n of a initial plus n of b initial plus n of c initial. And that's not always guaranteed, that the total number of moles is fixed. But for this reaction, for this one that we're doing, you have the destruction of two moles and the creation of two moles, so the number of moles is fixed. So this just makes the math a little easier in this particular case. And we're going to need n of total in order to write my p of i's, which are the total pressure times n of i over n total. So that's what we're going to need our n totals for. All right. So now we have everything except for the reference data. We wanted to write the Gibbs free energy as a function of composition, and we're well on our way. So we can now write out the Gibbs free energy as a function of n of c and plot it. So this is what we get. The Gibbs free energy is n of a initial minus 1/2 times n of c minus n of c initial.
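The mole bookkeeping just described is easy to sketch in code. This assumes the lecture's starting condition (one mole each of A and B, no C); the function names are mine, not the lecturer's.

```python
def composition(nc, na0=1.0, nb0=1.0, nc0=0.0):
    """Moles of A, B, C for A + B -> 2C, parameterized by the single variable n_C."""
    na = na0 - 0.5 * (nc - nc0)
    nb = nb0 - 0.5 * (nc - nc0)
    return na, nb, nc

def partial_pressures(nc, p_total=1.0):
    """Dalton's law: p_i = p_total * n_i / n_total (n_total is fixed at 2 here)."""
    na, nb, nc = composition(nc)
    n_tot = na + nb + nc
    return tuple(p_total * x / n_tot for x in (na, nb, nc))

pa, pb, pc = partial_pressures(1.55)   # the equilibrium value eyeballed later on
```

For this stoichiometry the total mole number stays at 2 for any value of `nc`, which is exactly why the lecturer chose A plus B goes to 2C.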
times the chemical potential of A in its reference state, plus n of b initial minus 1/2 times n of c minus n of c initial, times the chemical potential of B in its reference state, plus n of c times the chemical potential of C in its reference state, plus the contribution from the ideal entropy of mixing: RT times the sum over i of n of i log n of i over n total. All right. So far, none of this is new. I'm just writing things out a little more than I have in the past. So in order to go any farther, it's helpful to put some numbers in. So what we're going to do is, we're going to consider, again, this reaction, the gas phase reaction, hydrogen plus chlorine going to hydrogen chloride. And we're going to consider this at 298 Kelvin. And I'll tell you, this reaction is really annoying. And this reaction is annoying for me personally because in my lab-- and a lot of labs that are doing similar research-- we would really like to use hydrogen-containing precursors and chlorine-containing precursors to grow certain semiconductors. It can be a very effective way of growing certain semiconductor thin films. The problem is that the reaction byproduct is hydrogen chloride, or hydrochloric acid. And if you're making hydrochloric acid, you really can't have any metal in your system, especially downstream. And what that means is, you can't use a lot of common semiconductor-processing equipment. You have to use less-common equipment. You have to use a lot of [INAUDIBLE] and so forth. And it stands as a pretty substantial barrier to running processes like this in the semiconductor industry, where no one's really interested in swapping out stainless steel reactors for nonmetal reactors. It can be done. But anyway, it's a problem-- so a little bit of an aside there. So let's see. The chemical potential of hydrogen, what is that? We need to get this from databases. So we're going to have the standard enthalpy of hydrogen minus T times the standard entropy of hydrogen.
And by convention, the standard enthalpy of elements at 298 K and 1 bar is 0. By convention or definition, elements in their standard state at 298 Kelvin, 1 atmosphere have H0 set to 0. Similarly, for mu chlorine 0-- chlorine 2-- this is H0 of chlorine minus T S0 of chlorine. And we don't need to look anything up there. And then we have the chemical potential of HCl in its standard state. And this is going to be H0 of HCl minus T S0 of HCl. And this is nonzero because this is a compound. This is the formation enthalpy for one mole of HCl from the elements in their standard state. That's what that is. So you might see this like this: 1/2 H2 plus 1/2 Cl2 going to HCl with a formation enthalpy, sometimes written as delta f H. All right. So what do we do? We get data. And so you can find data for the standard entropy of these components and the enthalpy of formation of HCl. You can find that data-- I'd say on NIST WebBook or pretty much anywhere else. These are all pretty common materials. So I plotted some things, so we're going to look at the plots. So I plotted the function. This is what it looks like. So this is plotted for the case of-- let me grab a laser pointer here. This is plotted for the case of initial moles of A and B equal to 1, and initial moles of C equal to 0. I could have chosen any starting conditions here; I just made it simple. So you start with a stoichiometric mixture of A and B and no C. And I plotted this Gibbs free energy of the system as a function of n of c using data for hydrogen, chlorine, and hydrogen chloride. I did cheat a little bit. I reduced the formation enthalpy of HCl just because it makes the plot easier to read by eye. So don't take these numbers to the bank. Don't use these in your future engineering and research endeavors. But the science lesson still stands. So what is this plot telling us? Would somebody just please interpret this? What does this mean? What's the meaning of it?
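A curve like the one being described can be generated and minimized in a few lines. This is a sketch, not the lecturer's MATLAB script: the standard entropies are common tabulated values, and the HCl formation enthalpy is deliberately set to minus 2 kilojoules per mole (the reduced value the lecturer says he used) rather than the true value, so the minimum stays visible.

```python
import math

R, T = 8.314, 298.0
# Assumed reference data (J/mol and J/(mol K)); entropies are tabulated values,
# and the HCl formation enthalpy is the lecture's reduced -2 kJ/mol, not the real one.
S_H2, S_Cl2, S_HCl = 130.7, 223.1, 186.9
mu_A = -T * S_H2                  # element: H0 = 0 by convention
mu_B = -T * S_Cl2                 # element: H0 = 0 by convention
mu_C = -2000.0 - T * S_HCl        # compound: formation enthalpy minus T*S0

def G(nc, na0=1.0, nb0=1.0):
    """Gibbs free energy of the mixture as a function of moles of C."""
    na = na0 - 0.5 * nc
    nb = nb0 - 0.5 * nc
    n_tot = na + nb + nc          # fixed at 2 for this stoichiometry
    mix = R * T * sum(x * math.log(x / n_tot) for x in (na, nb, nc) if x > 0)
    return na * mu_A + nb * mu_B + nc * mu_C + mix

# Crude grid search for the minimum, avoiding the endpoints where x log x misbehaves
grid = [i / 1000 for i in range(1, 1999)]
nc_eq = min(grid, key=G)
```

With these assumed numbers the minimum lands between roughly 1.5 and 1.6 moles of C, consistent with the value of about 1.55 read off the lecture plot; the curve also turns back up near full conversion, because of the x log x term.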
AUDIENCE: Well, we can see that G is minimized approximately at 1.5 moles of nC, that is, HCl. So that being the lowest point, that's where the reaction will settle at equilibrium at this temperature. RAFAEL JARAMILLO: Great. Thank you. So yeah, that's exactly right. The reaction, if you start over here on the left-hand side with zero moles of C, will proceed to the right until the Gibbs free energy is minimized. And we see that point right here. Good. We also see it curve back up. So there is a problem on the P set about this curvature. And this curvature is always here, even if it gets pushed really close to the y-axis. This is to say that at equilibrium, no reaction ever runs all the way to one side. There's always a little bit of mixing, and that comes from the singularity of x log x. So that's another thing to see. You can see here that, in this case, the starting Gibbs free energy on the left-hand side is higher than the Gibbs free energy if you had full conversion. So there's a driving force here for the reaction to run to the right initially. I don't know. I'm not sure what else to say here. All right. Now, what I did is I repeated the calculation for 328 Kelvin. And so how did I do that? Let me switch back to the camera for a minute. We'll come back to this data, but I want somebody to remind us how I would have calculated this for a different temperature. So we have here the Gibbs free energy. And now I want to calculate this for a temperature other than 298 Kelvin. What do I need? What data do I need? I have temperature here. So obviously, I can change that number from 298 to a different number. But is that the only thing that I need to keep track of, or is there something else? AUDIENCE: There's also the heat capacities for each component. RAFAEL JARAMILLO: You need the heat capacity for each component. That's right. Can you tell me why? What do I do with those heat capacities?
AUDIENCE: You can get H of T, the enthalpy as a function of temperature. RAFAEL JARAMILLO: Right. I need to change the enthalpy from the enthalpy at 298 to the enthalpy at a different temperature. I also need to update the entropy. So I integrate heat capacity here. I integrate heat capacity over temperature here. I also have to change the temperature here. So what I'm doing is, I'm using the heat capacity data to calculate the standard chemical potential at some other temperature. That's right-- so just a reminder there. So I need heat capacity data for all three components. And I did those calculations. I just programmed this in MATLAB, so it did it for me. And I updated H's, I updated S's, I updated mu's, and I plugged that all into this expression. So I have an updated mu, updated mu, updated mu. And, of course, I just changed the temperature there. And then I replotted it. So let's see. Let's go back to the graphs here. So I replotted. So let's see. Can somebody speak to-- what is this plot telling us? The 328 Kelvin plot is below the 298 Kelvin plot. Can somebody offer an explanation as to why that is? Why did it overall drop instead of rise? AUDIENCE: Could it have to do with this reaction being either exothermic or endothermic? RAFAEL JARAMILLO: So I love that, and we're getting there. But in this case, that's not the dominant effect. In this case, that's not the dominant effect. Somebody else? AUDIENCE: Could it be because the temperature is higher, so the formation enthalpy-- sorry, the delta G has to be lower? RAFAEL JARAMILLO: Yeah. It has to do not so much with any reaction or formation, but it has to do with this functional form. dG for any system equals minus S dT plus V dP plus so on. So the trend in temperature of Gibbs free energy tends to be negative because Gibbs varies with temperature with a slope of minus S, and entropy is strictly positive. Now, there could be other things changing.
You can have pressure changing. You can have the number of moles changing. But just as a rule of thumb, for almost any system, Gibbs tends to go down as the temperature rises. And it comes simply from that coefficient. OK. All right. So now we can see that the overall thing shifted down. But it's a little bit hard to compare these two curves when they're so shifted. The dynamic range of the plot is kind of too big. And so what I'm going to do is what they do in the textbook. I'm going to plot, somewhat arbitrarily, the Gibbs free energy minus 2 times the standard Gibbs free energy of C. So what I'm basically doing is pegging these plots to 0 on the right-hand side. You could peg it to 0 on the left-hand side and, same thing, basically. And so now I can zoom in a little bit. Now I can draw another conclusion from these plots by comparing them. You see I have them now on the same plot, so I get the full dynamic range of the plot. And then I'm going to zoom in even further. So now we're zoomed way in around the minima. Would somebody like to point out an observation about these equilibrium points? Can you draw any conclusions about this reaction from what's plotted here? AUDIENCE: Increasing temperature drives the reaction to the left. RAFAEL JARAMILLO: To the left, yeah. So Sam is pointing out that the minimum in Gibbs free energy at 298 Kelvin is over here, and it seems to shift a little bit to the left as we increase the temperature. It's a little subtle. You need to look closely. But it's visible if you zoom in-- zoom in. Thank you, Zoom. All right. So I'll ask a couple of questions. And these are the questions which we're going to analyze, two questions. First, is the reaction endothermic or exothermic? And two, can you estimate the reaction enthalpy from the data on the screen? So who wants to tackle number one? Question one: is this reaction endothermic or exothermic? AUDIENCE: Is it exothermic?
RAFAEL JARAMILLO: Is it exothermic? And OK, why? AUDIENCE: Because we see that with an increase in temperature, it shifts to the left. So we would expect heat to be a product of the reaction, so that when the temperature increases, by Le Chatelier's principle, it will shift to the side that reduces the amount of heat. RAFAEL JARAMILLO: Yes, exactly. Thank you. That was very clearly said. So Le Chatelier says that reacting systems, quote unquote, "resist"-- they don't have bumper stickers. They don't actually know what they're doing. But they, quote unquote, "resist" temperature rises by running in the endothermic direction. They try to soak up that heat-- the endothermic direction. So this reaction is observed to run to the left with increasing T. So running backwards is endothermic. So as written, it's exothermic. Good. That's good. And also, just remember Van 't Hoff: d log kp dT equals delta H0 over RT squared, which is basically saying the same thing. All right. But what we're observing is that this is negative, right? The reaction at equilibrium is shifting to the left, so kp is getting smaller as we raise temperature. So this slope is negative. And indeed, we have an exothermic delta H. Good. So that's a concept question. All right, but what about number 2? This is a little bit more involved, so I'm just going to step through this. Using Van 't Hoff, if we can estimate d log kp dT, then we can estimate the enthalpy of the reaction. So that's useful. So here's what we're going to do. I'm going to ask the class, how would I estimate this? How would I estimate this quantity? AUDIENCE: Isn't Kp equal to the partial pressures of the products over the partial pressures of the reactants? RAFAEL JARAMILLO: Right, right. But this is the data I'm given. So is there something you could do with this? AUDIENCE: Well, given that it's an ideal gas system, we have the number of moles at equilibrium of HCl.
And from there, we can extract the number of moles of the reactants, which would be a substitute for the partial pressures. RAFAEL JARAMILLO: Great, that's fantastic. That's exactly what I'm looking for. Thank you. So I just showed that plot again to jog your minds. And that's right. So what we're going to do is, we're going to look at these two temperatures for which we're given data, 298 and 328. And we're going to eyeball the equilibrium number of moles of C. And I did that. You could do it if you were still looking at the plot. It's not hard to do. I eyeballed it. I didn't solve it numerically or anything. I just sort of did it to two decimal points: 1.55 and 1.53. So that's it. That is the data that we need. From there, everything is calculations. n of C at equilibrium is 1.55 and 1.53. We already have expressions for na and nb in terms of nc. And you can figure out from the stoichiometry of the problem, it comes out to 0.225 and 0.235. So now you have the mole numbers. And what we just heard was basically Dalton's law, that we can get the partial pressures, and therefore the equilibrium constants, from the mole numbers. So I went ahead and calculated those from these numbers: 47.5 and 42.4. And just as a reminder here, this is using p of i equals the total pressure times n of i over n total. And again, just to make it very, very clear what I did-- for example, 47.5, what is that? That's 1.55 divided by 2, squared, over 0.225 divided by 2, times another factor of 0.225 divided by 2-- the reactant partial pressures. So that's an example. All right, so these are-- as usual with thermo, the actual calculations are exceedingly simple. It's just figuring out what goes where. OK, good. And then what? Then what do we do? Using these estimates-- well, we don't have a derivative, but we have some changes. The change in log K of p, we calculated it: it's minus 0.114. The change in temperature was 30 Kelvin. So those are the two data points we were given.
And so we will just say, well, I'm in a rush, and this is all the data I have. So let's say that this derivative is approximately equal to rise over run. And this comes out to minus 0.00379. The units here are inverse Kelvin. And there, that's it. So delta H equals RT squared d log kp dT. And I'm going to say it's approximately equal to-- OK, I estimated that. R is a constant. What temperature should I use? AUDIENCE: Maybe the average temperature, so the one in the middle. RAFAEL JARAMILLO: Yeah, that works. Anything will really do. You remember this from calculus: when you take a derivative, you can evaluate the function at the left or the middle or the right, stuff like that. So I'll use the midpoint. Anything will do. You just have to use a number. You can't just leave it as T. So we'll use the midpoint. And if I do that, I get minus 3.09 kilojoules. And that's the answer to the question that was given: estimate the enthalpy of this reaction. It's exothermic, as expected. That's good. And I'll let you know that when I generated that plot, I used minus 2 kilojoules. So the plots are generated from minus 2 kilojoules and the heat capacities. And this rough estimate gets us minus 3 kilojoules. All right. Well, anyway, it's a starting point. The idea was an estimate. I want to move on, but I'll leave you with just some ideas. You want to make sure that you know how to calculate the temperature dependence of the mu of i's-- I think we answered that a while ago. I want to make sure we do this. And this is an interesting question: can we estimate the heat capacity difference across this reaction from a temperature series of G of n of c? We didn't see a temperature series just now-- we really just saw two temperatures. But if you had a bunch of temperatures, if you had a whole family of curves at different temperatures, you could estimate heat capacity differences from that.
And it's interesting to think about how you might go about that-- at least, formally. All right. So in the time remaining, I'd like to move on to Nernst-- unless there are really burning questions about this, in which case I'm happy to stall. AUDIENCE: I had a question about-- I guess going back towards the start of working on this problem. So in this particular reacting system with HCl, we do have that n total is equal to a constant, because for two moles of reactant, we have two moles of product. How would you deal with that not being the case? RAFAEL JARAMILLO: It's formally not hard. It just gets a little annoying. The computer will take care of it for you. Basically, what you do is, you have these expressions here for na, nb, nc, and it's all in terms of nc. So we reduced everything to one variable, right? All that means is that n total, which is the sum of the ni's, will also then, in general, have an nc dependence. AUDIENCE: OK. RAFAEL JARAMILLO: Everything else follows. It all follows here. That'll in general have an nc dependence there in the denominator. But plug it in and plot it. In general, here, I have n total here when using Dalton's law. I just went ahead and plugged in 2's there. In general, you'd need to calculate n total, and it would be other than 2, so that there'd be another place where you'd need to keep track of it. Good question. OK. Anything else? It's a nice reaction to analyze to just understand the math and the formalism and the fundamentals, because of that conserved mole number. Having varying mole numbers is a complication. Of course, it's realistic. There's lots of systems you'll analyze in your careers that will not have fixed mole number, but it's a complication. And so that's why I chose here to do A plus B equals 2C. Likewise, in the textbook and lots of other textbooks, we like to take the simple cases first. All right.
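The Van 't Hoff estimate worked through above can be reproduced end to end in a short sketch, starting from the two eyeballed equilibrium mole numbers. The function name is mine; the data are the lecture's.

```python
import math

R = 8.314

def Kp(nc, na0=1.0, p_total=1.0):
    """Equilibrium constant for A + B -> 2C from the moles of C at equilibrium."""
    na = na0 - 0.5 * nc
    n_tot = 2.0                        # conserved for this stoichiometry
    pa = pb = p_total * na / n_tot     # stoichiometric start, so p_A = p_B
    pc = p_total * nc / n_tot
    return pc**2 / (pa * pb)

# Equilibrium moles of HCl eyeballed from the two plots in the lecture
K1, K2 = Kp(1.55), Kp(1.53)            # ~47.5 and ~42.4
T1, T2 = 298.0, 328.0

slope = (math.log(K2) - math.log(K1)) / (T2 - T1)   # d ln Kp / dT, rise over run
T_mid = 0.5 * (T1 + T2)                # midpoint temperature, as in the lecture
dH = R * T_mid**2 * slope              # Van 't Hoff: roughly -3 kJ, exothermic
```

Rounding differences aside (the lecture rounds the log change to minus 0.114), this lands on an enthalpy close to the minus 3.09 kilojoules quoted, with the expected exothermic sign.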
So now I want to just switch gears and take you on a very quick walk through the Nernst equation. And you can just sort of stop taking notes if you like and just let this wash over you. It's just for context. It's just for context. All right. So here's what we're going to do: we're going to take a redox reaction. And why I'm doing this now, by the way, is because it couples nicely to some stuff which we have done in this class recently. And it couples nicely to what you're going to be doing in 023. So that's why we're doing this now. OK, so we take a redox reaction. I'm going to use this one. It's a very well known reaction. This is called a Daniell cell. There's a guy-- I had to look this up-- J.F. Daniell. And we're talking about the 1830s. He's making batteries out of zinc and copper and sulfates. So we take this redox reaction. Great. And this is key. We're going to separate the reduction and the oxidation half reactions in a device-- this is about making stuff, making devices-- in a device engineered such that electrons and ions follow separate paths. So you have to know something about plumbing. You have to know something about electronics. This is fine. So this is an example for the Daniell cell. What on Earth does that actually look like? This is what that would actually look like. We have two beakers here, these two beakers. And what are we going to do here? We're going to have a rod of zinc metal, and we're going to have a rod of copper metal. And then we're also going to have something called a salt bridge, which can be a U-shaped tube, filled with brine, basically. And this is aqueous. So these are aqueous systems. And what do we have here in solution? We have aqueous zinc sulfate and aqueous copper sulfate. And so we have a system where ions can go this way.
But we still need somewhere for electrons to go. And so the electrons we pull out through external circuitry. Put two terminals there, and these are the voltage and current terminals. And what I've just drawn, if you abstract away the details, is every battery ever made, or every electrochemical reduction system ever made. But anyway, we don't have time for everything ever made. We're just going to step through this example. So that's what we did. Now I'll keep on going. The electrostatic work-- you guys remember introductory physics? The work of moving charge nF across potential E-- we tend to use this sort of scripty E. This is also known as electromotive force, but it's also sometimes just written V. What's nF? n equals moles. F equals the number of coulombs per mole of electrons, 96,485 coulombs per mole. This is known as Faraday's constant. The [INAUDIBLE] work of doing that-- I'm going to call this W star, which is also the change in energy of the system. And this is from introductory physics. We have charge times displacement, dot product with the electric field-- a force dotted with a displacement. And if you remember how voltage is defined, it's a spatial integral of the electric field, so we get nFE. So the work-- the negative work, which is minus the change in energy of the system-- of moving n moles of charge across potential E is nFE. We're also going to take something called the generalized work theorem, which I don't have time to derive. But hopefully, it's a little bit intuitive. The change of energy is W star, where W star is any reversible work. So far in this class-- and really, for the entirety of this class-- we only deal with mechanical work, PdV work. And that's fine. But just for this 15- or 20-minute segment, we're going to deal with electrostatic work.
We're going to admit that there are other ways-- other than mechanical ways-- of changing the energy of the system through work. And this is electrical work, electrostatic work. So if we take this and we combine it with this, we get something called the Nernst equation. And the Nernst equation is very simple. The change in Gibbs free energy of a system is minus nFE. That's the change in Gibbs free energy for moving n moles through a potential of E. But we're going to keep going here. We're going to use stuff we already know, formalism of reacting systems, to write delta G with respect to activities-- or, if you like, concentrations. So, for instance, in this case, delta G equals delta G0 plus RT natural log of-- remembering the form of our reaction here-- zinc sulfate activity times copper activity, over copper sulfate activity times zinc metal activity. This here is the reaction quotient. All right. So what we're going to do is, now we're going to use Nernst to couple electrostatics-- that was from the previous board-- we're going to couple electrostatics and chemistry. So this was thermochemistry. This was electrostatics. And so what happens when electrostatics and chemistry get married? You get electrochemistry. And it's written as follows. nFE equals minus delta G0 minus RT ln of Q. And this is also sometimes called the Nernst equation. This is a cell voltage, something you can measure easily with a voltmeter. This is a reference. And this reaction quotient depends on concentrations. So for the Daniell cell we have-- for example, for a Daniell cell, we have E-- the electric potential-- is a reference potential E0 minus RT over 2F, log of the concentration of zinc sulfate over the concentration of copper sulfate. What are these? x's are concentrations in aqueous solution. E0 are reference potentials, reference potential defined for a reference state. And in this case, it would be concentration of the two aqueous components equals 1 molar.
And I made n equals 2 because this particular redox, it involves two electrons if you remember the formal charge on sulfate or copper or zinc here. OK. So that was very fast, but I think it was worth it because now you've seen this stuff. And then as you see it again in other classes, hopefully, there'll be some connections to some material from 020.
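The arithmetic in this segment is easy to check numerically. Here's a minimal sketch (not from the lecture) of the Daniell-cell Nernst equation; the function name and concentrations are made up for illustration, and the standard cell potential E0 ≈ 1.10 V is an assumed reference value.

```python
import math

# Nernst equation for the Daniell cell, as on the board:
#   E = E0 - (R*T)/(n*F) * ln( [ZnSO4] / [CuSO4] )
# n = 2 electrons per formula unit, as in the lecture.
R = 8.314    # gas constant, J/(mol K)
F = 96485.0  # Faraday's constant, C/mol
T = 298.15   # K

def daniell_cell_voltage(x_znso4, x_cuso4, e0=1.10, n=2):
    """Cell voltage for given aqueous ZnSO4 / CuSO4 concentrations (mol/L)."""
    return e0 - (R * T) / (n * F) * math.log(x_znso4 / x_cuso4)

# At the reference state (both concentrations 1 molar) the log term vanishes:
print(daniell_cell_voltage(1.0, 1.0))   # -> 1.1
# Depleting copper sulfate (discharging the cell) lowers the voltage:
print(daniell_cell_voltage(1.0, 0.01))
```

The second call drops the voltage by roughly 60 mV per decade of concentration ratio at room temperature, which is the familiar Nernst slope divided by n = 2.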
MIT_3020_Thermodynamics_of_Materials_Spring_2021
Lecture_6_Thermodynamic_Potentials.txt
[SQUEAKING] [RUSTLING] [CLICKING] PROFESSOR: On Friday, we had a very almost philosophical lecture. And the second law can get like that. It gets sometimes a little bit head in the clouds. So today is going to be pretty mathematical. But, actually, it's very practical. We're going to start moving from this very highfalutin concept of the second law towards useful equations that we can use, things that we can use to make predictions. So I'm going to tell you about thermodynamic potentials and equilibrium. And I'm going to start off just by listing them. These are going to be presented without proof or motivation [INAUDIBLE] we'll discuss. So the first one is what we already know now. For fixed U entropy is max at equilibrium. So you already know that. But there are others. There are other conditions and other potentials. So for fixed entropy and volume, the internal energy U is minimum at equilibrium. So this is a condition that you encounter in your introductory physics classes. They don't really state that, but that's the condition. Is there ever really a way to fix the entropy? Is there a way? How can we fix the entropy? There's no entropy reservoir. We can't connect the system to an entropy reservoir like we can with a pressure reservoir or a temperature reservoir. So I'll tell you-- it's a little bit of an aside. But the only way to fix entropy is to be at 0 Kelvin. When you're at absolute zero, there's no entropy. So the physicists like it that way. But for the rest of us, it's not terribly useful. For fixed S and P, new term here, the enthalpy H equals U plus PV is minimized at equilibrium. OK. So that's enthalpy. This condition, minimum enthalpy and equilibrium, is not terribly useful. But the enthalpy is useful. It's a useful thing. And one of the problems on the current p set is making connection between enthalpy and the heat of a process. Sometimes those terms enthalpy and heat are actually used interchangeably. 
That's not strictly correct, but you'll see why. Now for fixed temperature and volume-- so now we're getting a little more practical, fixed temperature. That's something we know how to do. We all have thermostats. That's what your thermostat does. The Helmholtz free energy, F equals U minus TS, is min at equilibrium. So I'll tell you, we spent almost no time at all on Helmholtz in this class because fixed temperature and volume are much less practical and useful than fixed temperature and pressure for materials processing. However, in other disciplines, especially in semiconductors and semiconductor physics, it's the Helmholtz free energy that matters because your device is typically operated at a fixed temperature, and your electrons are confined in the volume of the semiconductor device. So if we were to do things like calculate the theoretical limiting efficiency of a solar cell, we'd be worrying about Helmholtz. But as it is, you almost never hear us discuss that at all in the rest of the class. This is the one that matters, fixed temperature and pressure. Anybody know what's the relevant potential for these conditions? Who's read ahead? AUDIENCE: Gibbs free energy. PROFESSOR: Gibbs, right. So this is the one that matters for material science. And one way to see the rest of this class, actually the rest of the term, is we're just calculating the Gibbs for different scenarios. And we're doing price shopping for Gibbs, I call it. We calculate the Gibbs free energy for different scenarios. And we choose the lowest one as the equilibrium case. So that is thermodynamic potentials. Now these are presented without proof. It's not worth the time here to motivate each and every one of those. On the p-set, you're going to convince yourselves that given this, then this. So one of the exercises is going through a form of proof of this. And that's useful because Gibbs is the one we care about.
Now I have to make some mathematical asides because without this, my math note, a lot of what comes is kind of weird and confusing. So in 3.020, state functions are functions of at most three independent state variables. And this is where that comes from. It comes from-- sorry, this pen is kind of smushy. It doesn't come from nowhere. It comes from the combined statement. So it comes from physical reasoning. dU equals TdS minus PdV plus mu dN. Yeah. This implies the existence of a state function. U is a function of S, V, and N. This is a critical link here. If you write down a differential form, we're going to learn how to identify the independent variables, the coefficients, and the dependent variable. And the existence of this differential form implies a state function. Now, in most cases, we may never know this, but can-- we can still use this. That is, we may never know the closed form of the state function. But we can still use its differential form. In fact, outside of the ideal gas situation, we almost never write down any state functions in this class. You can't find a complete state function for something useful like a high-entropy alloy. It doesn't exist. It's too complicated. So a corollary of this is that thermodynamic potentials have what I'll call natural independent variables, which we will see aplenty in the case of Gibbs. So for example, entropy is a function of energy and volume. If you remember, at equilibrium, entropy was max for fixed energy and fixed volume. That tells you that those are its natural variables. At equilibrium, internal energy was minimum for fixed entropy and volume. Those are its natural variables. Enthalpy was a minimum at equilibrium for fixed entropy and pressure. Those are its natural variables. For Helmholtz, its natural variables are T and V. And for Gibbs, its natural variables are T and P. Before you ding me here, I'm not writing the Ns for compactness.
So everything is also a function of the size of a system. I'm just leaving that off to keep the lines short. One more note. Where on Earth does this stuff come from? When I learned thermo, these potentials were just told to me the way I just told them to you. And I think in the back of my mind, I said, where does that come from? And then I forgot the question. And I didn't come back to it for years. It does come from somewhere. The potentials are related by a change of variables via Legendre transforms. So this is something we don't have time for in this class, and probably I wouldn't be able to teach very well anyway. This is the math concept. And so for those of you who have seen Legendre transforms in your math classes, that's great. For those of you who are curious, you can start with a wiki entry. We're not going to cover it in detail. You can think of it as just one more type of a transform, like a Fourier transform or Laplace transform. Fourier transform, for example, changes your natural variables from time to frequency. That's an example. Legendre transform will change natural variables, for example, from U as a function of S and V to H as a function of S and P, exchanging V for P. And if you'll remember, H equals U plus PV. And this comes from somewhere. This is all beyond-- this is beyond 020. I just want you to know it comes from somewhere. It comes from somewhere. For most of the class, you can just memorize. Let's try a bit more. Let's start with the combined statement. dU equals T dS minus PdV plus mu i dN i. I'll let this be a multi-component system. So I'll keep the labels there. Somebody please remind me, what are the independent variables in this differential form? AUDIENCE: S, V, and N i. PROFESSOR: Correct. Those are the independent variables. The other things we call-- having fun with color today. The other things we call coefficients. So the coefficients describe how the dependent variable varies with the independent variables.
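For the curious, the Legendre transform the professor points to can be demonstrated numerically. This is a toy sketch (my addition, not course material): for a convex function f, the transform is g(p) = max over x of [p·x − f(x)], and for f(x) = x² the closed form is g(p) = p²/4.

```python
# Toy numerical Legendre transform: g(p) = max over x of [ p*x - f(x) ].
# This is the change-of-variables machinery that turns U(S, V) into H(S, P),
# demonstrated on a simple convex function with a known answer.
def legendre(f, p, xs):
    return max(p * x - f(x) for x in xs)

xs = [i / 1000.0 for i in range(-5000, 5001)]  # grid on [-5, 5]
f = lambda x: x * x                            # f(x) = x^2, so g(p) = p^2 / 4

g_numeric = legendre(f, 2.0, xs)
g_exact = 2.0 ** 2 / 4.0
print(abs(g_numeric - g_exact) < 1e-5)  # True: the grid maximum matches p^2/4
```

The thermodynamic transforms work the same way, just with a sign convention and with S, V, P, T in the roles of x and p.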
And we can write down by inspection coefficient relations. And here they are. dU dS at fixed volume and fixed particle number equals what? What is partial U by S? You can pick it off of-- AUDIENCE: T? PROFESSOR: Yep, you just look at the equation. Pull the terms apart. And pick these off. dU dV at fixed entropy and particle number equals minus P. And dU dNi at fixed entropy and volume and particle number j not equal to i equals mu i. There are your coefficients. They're just partial derivatives, but they're called coefficient relations. A lot of problem solving in thermo comes down to calculating these coefficients. There's a whole chapter, actually-- we'll get to chapter 10 of the book. The most useful chapter is chapter 4. If you read only one chapter of this book, pick chapter 4 because it contains all that we need to actually solve problems. To actually solve problems, you just have to understand chapter 4. And I say just. It's actually one of the hairier chapters. But there it is. And it's all about these coefficient relations and identifying independent variables and calculating coefficients. So let's see how to do that. I wouldn't have called this a strategy. I think it's a little funny. Besides, it's a tactic, not a strategy. But that's what it's called in DeHoff. So I'll stick with it. The general strategy for deriving thermodynamic relations. This is really best illustrated by an example. So that's what we're going to do. Here is the general problem, the general problem statement. Given a dependent state variable Z, independent state variables x and y, calculate-- so this is a word problem-- delta Z for a given process. So this is like-- probably half of all thermo problems boil down to something like this. There's some kind of state variable, which you identify through the problem statement. There are some independent variables, which usually you're not told.
You figure them out through the problem statement or the situation in hand. And you want to calculate the change for a given process. So what example are we going to do? Example. Calculate delta V for a process involving a change of pressure at fixed entropy, which is also a reversible adiabatic process. This is helpful, remembering that fixed entropy and reversible adiabatic processes are related. That's really helpful. So now I'm going to step through this example. We want to write down the differential for the change in the dependent variable. And this is going to depend on some coefficient times dP plus some coefficient times dS. And our job is to find-- we want to find those coefficients, and then integrate-- this is the approach. Once you find these coefficients, then you can integrate this term, which for fixed entropy gives you the thing you're looking for, which is the change in volume. That's what we need to do. Questions before I go on to actually do that? OK. Then we're going to start. 1-- we're going to write the exact differential. Rewriting what we did. dV equals X dP plus Y dS. X equals dV dP at fixed S. Y equals dV dS at fixed P. OK, good. Then, we're going to do a change of variables. There's a really useful reference, DeHoff table 4.5. To express dP and dS in terms of dP and dT. DeHoff chooses P and T because it's a materials science text. Another text might have chosen a different two. You can choose any two. So we're going to look up in table 4.5. And it has the coefficients for every state variable as a function of dP and dT. So here, this is trivial. There's going to be no-- this is 0 because dP equals dP. That's trivial. The entropy is nontrivial. We could derive this, but we're not going to take the time. We're just going to copy it from the table. It happens to be equal to Cp over temperature times dT minus V alpha dP. That's not obvious to me. I have to think about it. This I could figure out if I thought about it long enough.
This, it doesn't make sense to me. I just have to derive it. But you don't have to derive it or think about it. It's right there in the table. So here is pressure and entropy written now as dependent variables, dependent on temperature and pressure. So we're going to combine now, and we get the following. dV equals X dP-- that's easy-- plus Y times Cp over T dT minus V alpha dP. I just plugged in, substituted for dS here. So that's the second step. Then I'm going to collect terms. The third step-- collect terms. Collect terms. So I have the following. I have dV equals Y Cp over temperature dT plus X minus Y V alpha, dP. I still don't have X and Y. I still don't know what they are. But I can say by inspection, dV dT at fixed pressure equals Y Cp over T. That's this coefficient. And dV dP at fixed temperature equals this coefficient, X minus Y V alpha. We get the coefficients by inspection. The fourth step is you compare to the known coefficients, again referring to table 4.5. So in table 4.5, you will find the differential form of volume written as a function of temperature and pressure. And this is what you'll find. dV equals V alpha dT minus V beta dP. So here's the thing. This coefficient and this thing have to be equal. And this thing and this thing have to be equal because they're both the same mathematical objects. They are these partials. I always say if it walks like a duck, talks like a duck, it's the same thing. Compare the known coefficients and equate. So we're going to equate them. And we're going to get the following. dV dT at fixed pressure-- well, that was this-- equals Y Cp over T-- but it's also this-- equals V alpha. And dV dP at fixed temperature equals this, X minus Y V alpha. But it's also this: minus V beta. So now, we have our answer. How? We have a system of two equations and two unknowns, X and Y.
Two equations and two unknowns. So now you can solve. 5-- solve for unknowns. I'll spare you the algebra there because it's pretty easy. Solving for Y: Y equals V alpha T over Cp. And solving for X: X equals V squared alpha squared T over Cp minus V beta. And these are complicated and nonobvious. That's why you need a tactic, set of tactics, to get to the answers. Because if you follow this approach, you will always get the right answer even if it doesn't make sense to you. So here's the final answer. dV equals V squared alpha squared T over Cp minus V beta, times dP, plus V alpha T over Cp times dS. This is the thing we were looking for. That's it. We integrate this to find change delta V for given delta P at fixed entropy. Again, this is mechanics. This will work for you even if you don't have a physical intuition for every step. This strategy works. Question. For a real material, when you are-- this is very realistic. I mean, now we go from multivariable calculus to money on the line. You are calculating the effect of a pressure change on the volume of components. Let's say you're going to send this into outer space. And you have a system made out of components. And the components are made out of materials. And you want to make sure that the pressure change that your device experiences isn't going to cause the thing to fall apart. So now this matters a lot. You need to know how to do that calculation. Here's how. Question. Where are you going to get this? This is the integrand. How are you going to evaluate the integrand? How are you probably going to end up evaluating the integrand when you're working at SpaceX, and you need the answer in five minutes? Let me set up a straw man. Are you going to do density functional theory calculations to calculate the thermal expansion coefficient, and specific heat, and volume, and compressibility for your material from first principles? AUDIENCE: You're going to probably look at databases. PROFESSOR: Databases. Thanks.
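As a hedged sketch of that workflow: integrate dV = (V²α²T/Cp − Vβ) dP numerically over a pressure step at fixed entropy. All the property values below are made-up, order-of-magnitude numbers standing in for a "database" entry for a generic metal; they are illustrative assumptions, not data for any particular alloy.

```python
# Delta V for a pressure step at fixed entropy, integrating the derived form
#   dV = (V^2 * a^2 * T / Cp - V * b) dP
# numerically, updating V and T along the way.
V = 7.1e-6    # molar volume, m^3/mol            (assumed)
a = 5.0e-5    # volumetric thermal expansion, 1/K (assumed)
b = 7.3e-12   # isothermal compressibility, 1/Pa  (assumed)
Cp = 24.5     # heat capacity, J/(mol K)          (assumed)
T = 300.0     # K

P, P_final, steps = 1.0e5, 1.0e8, 10000  # squeeze from ~1 atm to ~1000 atm
dP = (P_final - P) / steps
V0 = V
for _ in range(steps):
    dV = (V * V * a * a * T / Cp - V * b) * dP  # coefficient X from the board
    dT = (T * V * a / Cp) * dP                  # dT at fixed S, from dS = 0
    V += dV
    T += dT

print(V - V0)  # Delta V: small and negative -- the metal is squeezed
print(T)       # the temperature rises slightly under adiabatic compression
```

In a real calculation you would parameterize V, α, β, and Cp as functions of T and P from a database and let the computer do exactly this kind of numerical integration, as described above.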
That's my favorite word. You're going to look at databases. This stuff is in databases. You're going to parameterize what you need using databases. You're going to plug that in. And you're going to do this integral numerically. The computer is going to do it for you. You're going to make sure you have good sources of data. You're going to make sure the data is inputted correctly. You're going to make sure your units are right. And then the computer is going to do this for you as a numerical integration. That's how it works in the real world. So this hairy-looking challenge is why we hammer on this stuff in 020. And they make you talk to the-- you get to meet the librarians and such because being able to go from those data resources to answers is really an important skill. It's really an important skill. There is one case where you don't need to do any of that. And we'll discuss this a little more next lecture. And you can work this out on your own if you're curious. It's also worked out plenty in the textbook. For an ideal gas, as with many things, it simplifies. So you don't need any databases to evaluate this for an ideal gas. It simplifies to the following. X equals minus V over P times Cp minus R over Cp, which is also equal to minus V over P times Cv over Cp. And this leads to an expression which before we just gave you: P final over P initial equals V initial over V final, raised to the power gamma, where gamma equals Cp over Cv. This is where that comes from. And if you use this expression for an adiabatic process for an ideal gas, it comes from this. So I'm not going to take time to derive that now. It's kind of an interesting calculation. You can just evaluate all this stuff. And I'll just mention that in p set 2, in p set 2, you're doing a related problem, which is you're calculating change in Gibbs free energy with pressure and entropy. So we just did volume with pressure and entropy. And in p set 2, you're going to do Gibbs with pressure and entropy. But the overall-- what they call the general strategy-- is the same.
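A quick numerical sanity check of the ideal-gas claim (my addition, not from the lecture): integrating dV = X dP with X = −(V/P)(Cv/Cp) at fixed entropy should conserve the adiabatic invariant P·V^γ.

```python
# Ideal-gas check: at fixed entropy, dV = -(V/P)*(Cv/Cp) dP should reproduce
# P_final / P_initial = (V_initial / V_final)**gamma with gamma = Cp/Cv.
R = 8.314
Cv = 1.5 * R           # monatomic ideal gas (assumed for the example)
Cp = Cv + R
gamma = Cp / Cv

P, V = 1.0e5, 0.0248   # initial state: roughly 1 mol near room temperature
P_final, steps = 5.0e5, 200000
dP = (P_final - P) / steps
invariant_0 = P * V ** gamma
for _ in range(steps):
    V += -(V / P) * (Cv / Cp) * dP  # the simplified coefficient X
    P += dP

# The adiabatic invariant P * V**gamma is (numerically) conserved:
print(abs(P * V ** gamma / invariant_0 - 1.0) < 1e-3)  # True
```

So the special-case formula you were handed earlier really does fall out of the general coefficient machinery.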
So for the remaining minutes, we're going to get a little bit specific and talk about Gibbs free energy. Let's talk about equilibrium-- fixed pressure and temperature. So now we're processing materials. Gibbs free energy equals U plus PV minus TS is minimized. Let's play with a differential form, dG. So I'm going to use the chain rule. dG equals dU plus PdV plus VdP minus TdS minus S dT. And that was just the chain rule. And now, I can use the combined statement. dU equals TdS minus PdV plus mu of i dN of i. That is all dU. I carry the rest of the terms. Plus PdV plus VdP minus TdS minus S dT. So here I use combined statement. And I'm going to see a bunch of things cancel. I'm going to see a bunch of things cancel. So we have TdS cancels TdS. Minus PdV cancels PdV. And so this simplifies a little bit. And I get the following. Equals minus SdT plus VdP plus mu i dN of i. Good. This is what we wanted because I told you the natural variables for Gibbs were temperature, pressure, and particle number. And that's what we get. That's what the calculus gives us. So there's a little clue there into how the Legendre transform works. But again, we won't go there. So the independent variables are T, P, and N sub i. And what are the coefficients? Somebody? Please? AUDIENCE: S, V, and mu of i. PROFESSOR: Right. Minus S, V, and mu of i. I'll make a note here. A lot of times, in this class, I use implicit summation. You probably picked up on that. But this implies a sum over i of mu of i, dN of i. It's just a little bit more compact. There's no i index on the left-hand side. So that means it has to resolve on the right-hand side. So you implicitly sum. And by inspection, by inspection, minus S describes the change of Gibbs with temperature at fixed pressure and particle number. Volume describes the change of Gibbs with pressure at fixed temperature and particle number.
And finally, our favorite-- chemical potential describes the change of Gibbs with particle number i at fixed temperature, pressure, and particles j not equal to i. So already, we can say something about the slope of these things. On a slope of-- let's see. Here's pressure. And here's Gibbs free energy. And we're fixing temperature and particle number. What's the slope? What's the slope? Is it positive or negative? Gibbs versus pressure for fixed temperature and particle number. AUDIENCE: It's positive. PROFESSOR: Positive because volume is always positive. Can't have negative volume. So that's good. Being able to pick off the slope and the curvature of these things is really useful. It's really, really useful, as you'll see. What about this? What about temperature? Change of Gibbs with temperature for fixed pressure and particle number. AUDIENCE: Negative. PROFESSOR: Yeah, it's negative because entropy is strictly positive. So negative entropy is strictly negative. You got to think about these slopes. If Gibbs free energy increases when we squeeze, that's related to the enthalpy. That's related to the enthalpy. When we squeeze something, we're increasing the energy content of all the bonds that make up that thing. So this is Gibbs free energy getting higher as we squeeze. When we heat something, Gibbs decreases. When we heat something, we're increasing the entropy. We're increasing the amount of mixedupness. And Gibbs depends negatively on entropy. We'll see this in many examples. So up with pressure, down with temperature. What about the conditions that we're interested in? At fixed temperature and pressure, dT equals 0. That's pretty easy. And dP equals 0. That means the equilibrium, the equilibrium condition, reduces to dG equals minus SdT plus VdP plus mu of i dN of i. And we're holding that fixed. And we're holding that fixed.
That means it basically reduces to this-- equals 0 because G is min at equilibrium. So if you think back to calculus, when you would calculate extrema: the extrema positions of a function are where its slope is 0. So the equilibrium condition is that the change of Gibbs with particle number i is 0. So for most of the cases of interest to us as practicing materials scientists-- that is, multi-component, heterogeneous systems-- this becomes dG equals sum over components i and phases k of mu of i superscript k, dN of i superscript k. I'm introducing a new notation here. Superscript labels phases. And the subscript labels components. So now we can picture something in our minds. We have a phase boundary here, two phases. And each phase has two components. This was a purple-rich phase, and this was an orange-rich phase. And evaluating the equilibrium condition is going to come down to knowing how the components can exchange between phases and what the effect of that exchange is on the chemical potential. Let me write that out. What determines whether or not components can exchange between the phases? It's the boundary. The property of the boundary tells us what components can exchange between phases. And that can be engineered. So there's a lot of engineering around choosing boundaries to get the desired equilibrium condition. But much of what comes later in this class is calculating composition dependence, calculating the composition dependence of those chemical potentials. And we're going to develop models for that and graphical tools for that, finally leading to binary phase diagrams. One last note. I'm going to do one last note about independent variables, sometimes called natural. If a state variable is regulated, regulated, what does that mean? Controlled. Regulations are controls. If you go to a McMaster-Carr, you can buy a regulator for pressure. If you go to Home Depot, you can get a regulator for temperature. We call it a thermostat.
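Here is a small numerical illustration (not from the lecture) of that equilibrium condition: partition one component between two phases, each modeled as an ideal solution, and check that the split minimizing total Gibbs energy equalizes the chemical potentials across the boundary. The mu0 values and amounts are hypothetical.

```python
import math

# Sketch of "dG = sum_i mu_i dN_i = 0" at fixed T, P, for two phases exchanging
# component A. Ideal-solution model assumed: mu_A = mu0_A + R*T*ln(x_A), for
# which dG_phase/dn_A is exactly that chemical potential.
R, T = 8.314, 1000.0
mu0 = {"alpha": 0.0, "beta": 2000.0}  # J/mol, made up for illustration
nB = 1.0                              # 1 mol of the other component per phase

def G(n_A, phase):
    x_A = n_A / (n_A + nB)
    x_B = nB / (n_A + nB)
    return n_A * mu0[phase] + R * T * (n_A * math.log(x_A) + nB * math.log(x_B))

def mu_A(n_A, phase):
    return mu0[phase] + R * T * math.log(n_A / (n_A + nB))

n_tot = 1.0
# Brute-force search for the split of A that minimizes the total Gibbs energy:
best = min((G(n, "alpha") + G(n_tot - n, "beta"), n)
           for n in (i / 10000.0 for i in range(1, 10000)))
n_alpha = best[1]

# At the minimum, the chemical potentials are (nearly) equal across the phases:
print(abs(mu_A(n_alpha, "alpha") - mu_A(n_tot - n_alpha, "beta")))
```

The printed mismatch is only the grid resolution; refining the search drives it to zero, which is the "slope is 0" extremum condition stated above.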
And those things that you would buy have analogs in-- not the real world, but in R&D and materials science, as well. If a state variable is regulated-- that's why I'm using that term, because it will help you in the future. If it's regulated, then it must be independent. So choosing your independent variables is one of the major intellectual challenges of thermo. It helps even know that that's the challenge you're facing. Knowing that that's the challenge you're facing, it gives you a good start. Identifying independent and dependent variables from the problem at hand, that's key to success. So for example, if the problem statement says something about fixed pressure, then you need to choose pressure as an independent variable. If the problem says adiabatic, you need to choose what? What should be your independent variable if the problem statement said adiabatic? What should be one of your two independent variables? AUDIENCE: Entropy. PROFESSOR: Yeah, entropy. And if the problem statement says in a thermal bath, then you need to choose temperature as one of your independent variables and so on. So this is like a key for turning problems that you encounter on problem sets or in the real world into things you can solve using thermo.
MIT_3020_Thermodynamics_of_Materials_Spring_2021
Lecture_25_Building_Binary_Phase_Diagrams_Part_III.txt
[SQUEAKING] [RUSTLING] [CLICKING] RAFAEL JARAMILLO: So what I wanted to do today is reverse engineer the iron chromium system. We started playing with the iron chromium system last time, and so it's a nice place to pick up. So I've got Thermo-Calc here open, and I suppose you can probably see it. And what I'm going to do is I'm going to start by loading the iron chromium system. So I think Thermo-Calc should run a little faster today than it did on Friday. Knock on wood-- [KNOCKING SOUND] --because I already have the thing running. So I'm going to take chromium. I'm going to take iron. You see how I'm in the Fe demo? Iron demo database. And when I hit perform tree, it's going to do its generic thing which is after it defines a system, it calculates the phase diagram. And then it's going to plot the phase diagram. And I'm going to then take us to where we left off on Friday, which is I'm going to create a simplified version of this phase diagram where I turn off all but the BCC and the liquid phases, and then recompute the phase diagram. So has anyone had any trouble running Thermo-Calc? Good. Today is the last today which I'm going to spend running software in front of you in lecture. So for better or worse, I want you to learn how the software works and to be familiar with it, and so we're spending a little more time. On Wednesday, as a reminder, we are going to have a guest lecture from Professor Olsen, who's going to tell you about how this sort of software is used in the real world and has led to some really nice developments throughout his career. And then on Friday, we're going to be on to new material. Back on the board, as it were. So what it's doing is it's calculating the Gibbs free energy as a function of composition and phase. That is, it's drawing those free energy composition diagrams. And then it is doing the taut rope construction-- or if you like the inverse, hull construction-- and finding all the tie lines, and coming up with the phase diagram. 
OK. Well, that's nice, but I'm going to simplify this. I'm going to reveal the hidden spinodal. So I'm going to go to phases and phase constitution. I'm going to turn off sigma. And then I'm going to turn off this austenite phase, this FCC phase of iron, just to make it really simple. And I'm going to recompute. So while I'm waiting for this, I'm actually going to jump ahead a little bit. Save some time. So what we're going to see is we're going to see a simplified version of the phase diagram. And we're going to talk about modeling it. Modeling the simplified iron chromium. This lecture is all about modeling, right? What it means to build a model. So let's draw the phase diagram that you're going to see when Thermo-Calc finishes being slow. And I want to draw-- call your attention to some salient points. So the first thing you're going to see is we're going to have iron and you're going to have chromium. And we have a spinodal region. And then we're going to have a lens like two phase region, something like that. So this here is alpha or BCC structure. This here is liquid. All right, we're going to have a spinodal. And that happens at around 910 Kelvin and right around the middle of the composition range. Then we're going to have our melting point. Chromium melts at T melt of chromium equals 2,180 Kelvin. That's the melting point of chromium. And we're going to have iron melts at T melt iron around 1,810 Kelvin. So that's what we hope to see. Let me go back to Thermo-Calc. Good, there we go. You guys see that? We're going to make some changes here. I'm going to plot as function of mole fraction. And I would like to plot things-- well, right now it has iron as the independent axis. I drew it as chromium, but you can imagine how that looks. And I'm going to-- what am I going to do? What am I going to do? I'll recompute. Working hard. That was an entry. It said, working hard. All right, so my question for you is-- while this is working. 
See, it has to recompute because the x mole fraction of iron is actually a different set of numbers than a mass percent of iron. And although it could just apply a linear transformation, it doesn't know how to do that. So it recomputes the whole thing. OK, we're going to switch back to the board and talk about building the model. So now we're going to talk about building a model. What is needed to model this phase diagram? What does that even mean? What does that even mean? So we're going to start by building a model for pure iron. Let's start by building a model for pure iron-- Fe. And so what does pure iron need to do? Pure iron needs to melt at 1,810 Kelvin. Right? That's how pure iron should behave. So we need our model to capture that. We need our model to capture melting at 1,810 Kelvin. All right, so what do we need for that? What do we need for that? Needs to melt at Tm iron. So what that means is that the Gibbs free energy of iron in the alpha phase as a function of temperature, and the Gibbs free energy in the liquid phase for iron as a function of temperature, these curves cross at T melt. Or I should say, T melt of iron. So if we plot Gibbs free energy against temperature-- and here, I have the melting point-- in general, we have something like that for the alpha phase and that for the liquid phase, where they cross at the melting point. And we know that Gibbs equals H minus TS. And we know how to model the temperature dependence of these. Or not even model, we know how to calculate the temperature dependence of these. All right, so what we're going to end up doing is we're going to model the temp dependence of H and S, and the transformation quantities. That is, the change in enthalpy on melting at the melting point, the change in entropy on melting at the melting point. And that, of course, is equal to the change in enthalpy on melting at the melting point divided by the melting point.
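Those transformation quantities can be sketched numerically. Here T_m is the lecture's value for iron, and ΔH_m ≈ 13.8 kJ/mol is an approximate literature value; treat both the number and the constant-ΔH assumption as hedged inputs.

```python
# Melting model for pure iron: given any two of (dH_m, dS_m, T_m),
# the third follows from dS_m = dH_m / T_m.
T_m = 1810.0       # K, melting point of iron (as in the lecture)
dH_m = 13.8e3      # J/mol, enthalpy of melting (approximate literature value)
dS_m = dH_m / T_m  # J/(mol K): the third quantity from the other two

def dG_melt(T):
    """G_liquid - G_alpha, treating dH_m and dS_m as temperature-independent."""
    return dH_m - T * dS_m

# The two Gibbs curves cross at the melting point:
print(abs(dG_melt(T_m)) < 1e-9)               # True
# Below T_m the solid is stable; above T_m the liquid wins:
print(dG_melt(1700.0) > 0 > dG_melt(1900.0))  # True
```

This sign change of ΔG at T_m is exactly the crossing of the alpha and liquid curves drawn on the board.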
Right, so I have the enthalpy change, the entropy change, and the melting point temperature. And if you give me any two, I'll tell you the third via this equation. Let's take these one at a time. Enthalpy and entropy-- what do I need to build those models? What data do I need? I'm going to draw this fairly generically. Here's temperature, here's enthalpy, right? This is modeling H of T for pure iron. That's what we're doing. We're modeling H of T for pure iron. So I'm going to draw the melting point right in the middle. T melt-- I guess I'll keep the iron label-- T melt of iron. And let's say that it looks as follows. Let's say it looks like this. OK, what happens to the enthalpy at the melting point? How should this curve look? AUDIENCE: It should jump upwards. RAFAEL JARAMILLO: Thanks. That's right. There's an enthalpy of melting, right? A latent heat of melting. And then it continues on its way. So this here, this here is the alpha phase. And we know that in principle, I can imagine the enthalpy of the alpha phase being a function that continues. In practice, this might be increasingly hard to measure or to calculate. And here's the liquid phase. And I can, of course, imagine this being a function that continues but becomes increasingly inaccessible. And let's mark some features on this. So first of all, I want to start by marking 25 degrees C. Why would I want to mark 25 degrees C? That there is H of the alpha phase for pure iron at 298, right? At 298 Kelvin. That is the enthalpy in the standard state-- 298 Kelvin, 1 atmosphere. Come back to that in a second. All right, so that's a data point. That's good. What about the slope? The slope of this curve, dH alpha dT, at fixed pressure equals the heat capacity of the alpha phase. Which is, of course, temperature dependent in general. That's the slope. Likewise, the slope up here. Right? Slope dH liquid dT at fixed pressure equals the heat capacity in the liquid phase. That's that slope.
And we have the transformation quantity. So that jump in enthalpy is the transformation entropy-- transformation enthalpy. Delta H alpha to liquid at the melting point. OK, so now we have on this plot all of the data that we need to build this model. We need a reference point. We need heat capacities that we can integrate. And we need to know what the transformation quantities are. So for instance, if I asked you what is the enthalpy at this temperature for iron? You would start here at the reference data. You would integrate the heat capacity up until the melting point. Then you would add the transformation quantity. And then you would continue integrating the heat capacity up until the desired temperature. Now something I want to point out. The enthalpy in the standard state, this is set to 0 for elements by convention. There's no such thing as a 0 of energy in the universe, so we can always set zeros of energy wherever we like. And so by convention, we set it to 0 for elements at 298 Kelvin at 1 atmosphere. So it doesn't really matter at this point here. It'll end up being 0. And this is a preview of something to come later, so don't worry too much about this statement. But if the pure component is a compound-- let's say that my pure component is silicon oxide and I'm making a phase diagram for a glassy system, for instance. Then we use the standard enthalpy of formation. Again, don't worry too much about this. We're going to come back to reacting heterogeneous systems in about a week and a half or two weeks, and we'll use this more and more. All right, so now I have a model for the enthalpy. Good. Let's model the entropy. And this is going to be similar. Again, I'm modeling entropy, S of T, for pure iron. Let me go a little bit faster here. So likewise, I'm going to have a melting point. I'm going to have, let's say, 25 degrees C. And I'm going to have some function. This pen is dying. Right, so entropy goes like that.
And then there's going to be a transformation entropy. And then entropy will continue to go like that. And I have alpha phase, and I have liquid phase. And in general, this function continues. In general, this function continues. And as before, I have a transformation entropy. Transformation entropy. I have a standard entropy. Standard entropy. Entropy in the standard state. And I have my slopes. Slope dS alpha dT at fixed pressure equals Cp alpha over T. You know that. And similarly, we have slope dS liquid dT at fixed pressure equals Cp liquid over T. All right. And so here, again, we have a picture of all the data that we need to model entropy as a function of temperature for pure iron. A reference value, the heat capacity data which gives me the slope-- and that is how to integrate this curve-- and transformation quantities. And as before, if I needed to find the entropy at that temperature, I would start here. I would integrate up. I would transform. I would continue integrating up, and I would find my value. So let me summarize this. The data, right? Hold on. Sorry. With this enthalpy model and this entropy model, I now have fully established the model, or at least the types of data that I need to model this system. All right, that's an important point. What data do I need? Data needed to model pure iron. What sorts of data do I need? Well, I need standard state data. Standard state data, that is S0 298 and delta H0 form 298. Which, again, for an element, it doesn't have to form. It's already been formed, so that's 0. I need heat capacity data, right? Heat capacity. So I need Cp of the alpha phase as a function of temperature, and I need Cp of the liquid phase as a function of temperature. And I need transformation data. Transformation. So I need delta H alpha to liquid at T of melting. And I need delta S transformation alpha to liquid at T melting. Or alternatively, I just need the melting point. I need two of those three. So this is the type of data that I need.
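The whole recipe just summarized (reference values, integrate the heat capacity up to the melting point, add the transformation quantity, keep integrating in the liquid) can be sketched in code. This is a sketch only: the 1,810 K melting point and the H = 0 standard-state convention for an element are from the lecture, while the Cp coefficients, standard entropy, and transformation enthalpy are invented placeholders, not assessed iron data.

```python
import math

T_REF, T_MELT = 298.15, 1810.0   # K; standard state and iron melting point
H_REF, S_REF = 0.0, 27.3         # element convention H = 0; hypothetical S0, J/(mol K)
DH_MELT = 13800.0                # hypothetical transformation enthalpy, J/mol
DS_MELT = DH_MELT / T_MELT       # transformation entropy via dS = dH / Tm

# Hypothetical Cp = a + b*T + c/T**2 coefficients, one set per phase
CP_ALPHA = (28.2, 7.0e-3, -2.0e5)
CP_LIQUID = (46.0, 0.0, 0.0)

def int_cp(coef, T1, T2):        # closed-form integral of Cp dT
    a, b, c = coef
    return a * (T2 - T1) + 0.5 * b * (T2**2 - T1**2) - c * (1/T2 - 1/T1)

def int_cp_over_T(coef, T1, T2):  # closed-form integral of (Cp/T) dT
    a, b, c = coef
    return a * math.log(T2/T1) + b * (T2 - T1) - 0.5 * c * (1/T2**2 - 1/T1**2)

def enthalpy(T):
    if T <= T_MELT:              # alpha phase: integrate from the reference
        return H_REF + int_cp(CP_ALPHA, T_REF, T)
    return (H_REF + int_cp(CP_ALPHA, T_REF, T_MELT)   # up to melting,
            + DH_MELT                                  # jump,
            + int_cp(CP_LIQUID, T_MELT, T))            # then keep integrating

def entropy(T):
    if T <= T_MELT:
        return S_REF + int_cp_over_T(CP_ALPHA, T_REF, T)
    return (S_REF + int_cp_over_T(CP_ALPHA, T_REF, T_MELT)
            + DS_MELT + int_cp_over_T(CP_LIQUID, T_MELT, T))
```

The two functions mirror the two board plots: a smooth curve in each phase, with a jump of exactly the transformation quantity at the melting point.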
All right, this-- dead pen. These three numbers here, this is a triple of which two are independent. So I need two of those. What about the heat capacity data? Heat capacity data. This can be modeled as polynomials. So for example, I can have this model. Cp alpha of temperature equals a plus bT plus c over T squared. Right? And I'm also going to have Cp liquid of temperature equals a plus bT plus c over T squared. It's just a polynomial. But I need to be careful because these are different phases. So in general, these coefficients are not the same. So I'll label them a of alpha, b of alpha, c of alpha, a of liquid, b of liquid, c of liquid. Right. And again, if I-- let's say I'm given the enthalpy of transformation and the melting temperature, I can simply calculate the entropy of transformation. OK. So how do I get this data? How do I get this data? So let's go back to Thermo-Calc. So here's Thermo-Calc. And what I'm going to do is get Thermo-Calc to give us some of this stuff. So the first thing I'm going to do is I'm going to rename some of these objects. So this, I'm going to call the phase diagram calculator. This is just to maintain sanity. You don't need to do this. Phase diagram plot. And then I'm going to create a new equilibrium calculator. And this is going to calculate the quantities of pure iron for me as a function of temperature. I'm going to rename it. I'm going to rename it pure iron calc. So what I'm going to do here is I'm going to make this mole fraction, and I'm going to make it pure iron. And I'm going to calculate the properties along one axis, and that axis will be temperature. So I'm calculating the properties of pure iron mole fraction x Fe equals 1 along one axis, and that axis is temperature. And this will calculate all the thermodynamic properties for me. I'm going to create a plot here so I can plot and view some of these properties. So let me call this pure iron plot.
And I'll start by plotting on the x-axis temperature and the y-axis Gibbs energy. And we'll do this for all phases. We'll perform the tree. So now the system has already calculated the phase diagram, and so that's not changing. But I added this new calculator here, which is calculating some properties of pure iron, and it finished. So look-- this is the Gibbs free energy of pure iron as a function of temperature. And it's plotted for all phases, meaning the blue data is for the BCC phase, and the red data is for the liquid phase. These are two different curves. The difference is the kink there is a little subtle. It's not very obvious. But these two curves do intersect at the melting point, as they should. Let's calculate and plot-- let's plot enthalpy instead. Enthalpy. I'll plot for all phases. Perform. Ah ha. Cool. This is now enthalpy of iron as a function of temperature with the two different phases marked in two different colors. I have a point to make here, which is this looks real. This looks real. This looks real. This looks real. And then it gets zeroed, right? The 0 is not real. That's simply the program representing to you the fact that it doesn't have any more data beyond that point. So it doesn't know what the enthalpy of solid iron is at temperatures in which the liquid is stable. So here's the enthalpy of solid iron. Here's the transformation quantity. All right? And there's the enthalpy of liquid iron. That's neat. OK, how do I estimate the transformation enthalpy? That is, the enthalpy of melting iron from this plot. Does anybody know how I would do that easily and quickly? AUDIENCE: Enthalpy over temperature? RAFAEL JARAMILLO: I'm sorry, One more time. AUDIENCE: Like, heat over temperature, which is like enthalpy here? RAFAEL JARAMILLO: Well, the y-axis is already enthalpy. So how do I measure the transformation enthalpy? AUDIENCE: Oh, like the-- we go to the temperature at the melting point and take the two values? RAFAEL JARAMILLO: Yeah. 
So I'm just going to mouse over it, right? Your boss is hovering over you and you need an answer quickly. Here we go. You see? Can you guys see how I mouse over? It's giving me the value at the cursor. Folks can see that? So I'm going to say 72,416 minus 58,445. That's it. That's my estimate of the transformation enthalpy. And I'll say maybe that's good to two digits. So I'm not going to carry five digits of significant figures, I'll carry two digits. So if someone asks you for the transformation enthalpy from this data from this plot-- they ask you, you were in a hurry-- you just mouse over, or take a ruler, or any other method of doing this, and estimate the value. Cool. All right, what about entropy? Entropy. All right, there's the entropy plotted for system. I'll plot for all phases just so it'll colorize those two different phases. Great. Similarly, the entropy increases with temperature. It jumps up when you melt, and it continues increasing with temperature. And as before, if you wanted to calculate the transformation entropy-- sorry. I didn't mean calculate. If you wanted to estimate the transformation entropy, just mouse over. It's 99 minus 92. So let's say the entropy of melting iron is 7. And I'm going to look that up. Everybody, appendix C-- phase transformation for the elements. Iron BCC to liquid, delta S of transformation-- 7.6. Not bad, right? So we're in the ballpark. Let's say I want to get the heat capacity. This I'm going to have to work a little bit harder because Thermo-Calc unfortunately does not output heat capacity data. It does output enthalpy, however. So I'm going to show you how I'm going to estimate the heat capacity. I'm going to grab this enthalpy data. I'm going to grab it as follows. I'm going to create a new successor from the pure iron calculator. It's going to be called the table renderer. I'm going to rename this. I'm going to call it pure iron table. 
And I'm going to ask the table to be enthalpy of the system as a function of temperature. There we have it. Those are the numbers. Now I'm going to right click. I'm going to Save As. And I've already gone through this exercise. But as you can see, iron enthalpy. OK? So I Saved As, I'm going to save over iron enthalpy dot text. So now I have that data on my hard drive. Now I'm going to go over into a data analysis software of my choice. This case, it's going to be MATLAB. So now I'm in MATLAB. If you don't use MATLAB, that's fine. You can use Excel, or Mathematica, or Igor, or any program you like. Python-- whatever you're comfortable with. Import Data-- and I'm going to import chromium enthalpy dot text. OK, I don't know whether this shows the Import window, whether Zoom is sharing the Import window. In fact, probably isn't. So never mind that. I'm going to import an object-- chromium enthalpy. Great. And now I have in my workspace, an object called chromium enthalpy. Can you see that? OK, let me dock this. That's right. OK, so I have an object called chromium enthalpy. Let's see how it looks. Plot chromium enthalpy. Let me dock this plot so that you guys can see it. There we go. You see this? You see the plot? It doesn't look so good because the data is not ordered by temperature. So let's plot it with a point line style instead so we don't have to see that jump. That looks much better. OK, so now we have chromium enthalpy. All right, what am I going to do now? What should I do now? What should I do now? I'm going to fit it. So it's going to bring up a curve fitting utility. Give it just a second. OK, so here's MATLAB's curve fitting utility. Lots of programs have something like this. And you see what I'm doing here? I've got the data, and I'm going to fit it to second order polynomial. And for now, I'm going to just fit-- well, it doesn't make sense to fit both phases because we don't expect both phases to fit to one model because they're different phases. 
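The export-and-fit step can also be sketched in Python with NumPy instead of MATLAB; the point just made applies either way, so we fit one phase at a time. The enthalpy table below is synthetic, standing in for the exported Thermo-Calc data, and its coefficients are invented:

```python
import numpy as np

# Synthetic stand-in for the exported solid-phase enthalpy table: H(T)
# generated from a known quadratic so we can see the fit recover it.
T = np.linspace(300.0, 1800.0, 60)
H = 0.004 * T**2 + 25.0 * T - 9000.0   # hypothetical alpha-phase data, J/mol

# Fit H(T) to a second order polynomial, as in the MATLAB curve fitter.
p = np.polyfit(T, H, 2)                # p[0]*T**2 + p[1]*T + p[2]

# The slope dH/dT at fixed pressure is the heat capacity, so differentiating
# the fitted polynomial gives the Cp model directly: Cp = 2*p[0]*T + p[1].
cp_coeffs = np.polyder(p)
cp_at_1000 = np.polyval(cp_coeffs, 1000.0)   # 33.0 with these invented numbers
```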
So I'm going to stick with just the low temperature phase. And that means that I'm fitting the enthalpy as a function of temperature for-- I'm doing chromium here, aren't I? Not iron. Same applies. I'm fitting enthalpy versus temperature for the low temperature phase. OK. So this fits pretty well with the second order, right? It fits pretty well through a second order polynomial. I don't see a need to go to a higher order polynomial. What if I instead wanted to fit the liquid phase? Let me exclude all of the low temperature phase data. Would you recommend fitting this data to a first order polynomial or a second order polynomial? AUDIENCE: First order, because it's a line. RAFAEL JARAMILLO: Yeah. This looks like the quadratic term is really superfluous here. So quadratic term is really not needed. And you can actually see this in the fitting results in the value assumed for the quadratic term, which is close to 0, and the uncertainty in the quadratic term, which is large. So I think that the liquid phase in this particular material is better fit to enthalpy versus temperature being a straight line. And what is the slope of that line, by the way? What's the slope of enthalpy versus temperature? The slope of enthalpy versus temperature? AUDIENCE: Heat capacity at pressure. RAFAEL JARAMILLO: Yeah, it's the heat capacity. So what is this fitting routine telling me? It's telling me that the heat capacity of liquid iron is 50. That sounds about right. Let me look back in my table here. And you can look up the data, obviously, online. You don't need to have the book handy. Although having the book handy is handy. And let's see. Does it have the heat capacity of liquid iron? Well unfortunately, it doesn't have the heat capacity of elements of high temperature. But you'll find numbers between 45 and 50 are typical of liquid metals. Great. OK. Now I want to make a note here about this procedure. And then we'll move on to modeling the solutions. So what have I just done? 
I've modeled the temperature dependence of H. That's what I've done, right? I've modeled it as follows. H equals-- well, if you're in MATLAB, it calls the polynomial coefficients by indices p. So it does this: H equals p1 T squared plus p2 T plus p3. That's what MATLAB spits out. Other programs spit out similar things. But what I want is this. Right? I want this. That's the heat capacity data. So what's the derivative of this polynomial? It's 2 p1 T plus p2, right? And I want to model the heat capacity as a plus bT. So by inspection, I see that twice my quadratic coefficient is b. And my linear coefficient-- that is, the slope-- is a. This is as in MATLAB. And this is my heat capacity model. Again, we're gaining a little proficiency here with modeling, and data analysis, and flipping back and forth between software, and so forth. All right, so let's say that I'm done building my model for pure iron. I do that for all the phases of iron, and I grab the transformation quantities. And then I'm going to build my model for chromium. All right, kind of similar procedure. And then I need to build my solution models. So let's start with the spinodal phase. And let's model a simple regular solution with one adjustable parameter, right? Delta H mix equals A0 x1x2. One adjustable parameter. You know that. All right, so now we're going to grab that data from Thermo-Calc. How do we do that? Let us now calculate the properties of the system at a fixed temperature as a function of composition. Mole fraction iron from 0 to 1 at a fixed temperature. What temperature? This over here is a solid solution region. Let's plot it in the solid solution region. Let's plot it at 1,000 Kelvin. That's a nice round number. So I'm going to plot at 1,000 Kelvin, the thermodynamics of this system as a function of iron composition. I'll call this 1,000 K calculator. And then I want to be able to plot that.
I'll call that 1,000 K plot. And what I'm going to want to plot is the enthalpy, right? I want to fit a simple regular solution model, which means I need the enthalpy as a function of composition. Huh, neat. You see that? Now how would I go about fitting my simple regular model to this data? Somebody, how would I go about fitting my simple regular model to this data? I'm going to do it by eye. It's quicker. I could export it and fit it in MATLAB. I'm going to do it by eye. This is x Fe. Right around 50/50 is kind of a special point. The data kind of looks like that. This is enthalpy. The straight line connecting those two endpoints is what? H0 of Fe. I'm going to read that off. That's about 24.8 joules-- kilojoules per mole. We read that off the plot. Sorry-- H0 of chromium. I'm going to read that off. That was about 19.5 kilojoules per mole. And H at x chromium equals one half was about 29 kilojoules per mole. And I can see just geometrically that this height here, this is 0.5 squared times A0. OK, that's simply x1x2 A0 where x1 and x2 are both 0.5. So I can just read right off the plot that A0 for the BCC phase is about 28 kilojoules per mole. You can get all that simply from mousing over the plot and pulling these numbers out to two or three digits. All right. Alternatively, you could-- alternatively, you could export the data and fit it. You could do that. All right. We built a model for the BCC phase. We're going to likewise build a model for the liquid phase. All right. Now we have a model for pure chromium. We have a model for pure iron. We have a model for the liquid phase solution. We have a model for the solid phase solution. What do I do next? At each temperature, we draw the free energy composition diagram and find common tangents, if there are any. All right, so I have two phases.
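Backing up a step: the A0 just read off the enthalpy plot by eye can be double-checked with the three quoted values (all in kJ/mol, as read off the plot):

```python
# Simple regular solution: delta_H_mix = A0 * x1 * x2, so at x = 0.5 the
# curve sits 0.25 * A0 above the straight line between the pure endpoints.
H_Fe = 24.8    # kJ/mol, pure-iron endpoint read off the plot
H_Cr = 19.5    # kJ/mol, pure-chromium endpoint read off the plot
H_mid = 29.0   # kJ/mol, the x = 0.5 value read off the plot

baseline_mid = 0.5 * (H_Fe + H_Cr)   # straight line evaluated at x = 0.5
A0 = (H_mid - baseline_mid) / 0.25   # invert height = 0.25 * A0
print(round(A0, 1))                  # prints: 27.4, i.e. roughly 28 kJ/mol
```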
I have alpha, which is x of chromium mu chromium 0 in the alpha phase, plus x of iron mu iron 0 in the alpha phase, plus the solution model-- the alpha phase. I have the liquid phase. This is x chromium mu chromium 0 in liquid phase, plus x iron mu iron 0 in liquid phase, plus the solution model. These terms, these are the pure component models. And those, of course, include reference state changes because we've built in to those models the fact that these materials melt. Right? Or maybe there's other state changes for the pure materials. We built that in. And these are the solution models. So this is it. From this, you get the phase diagram. So to pull way back, we have pure component models. We have solution models. Those feed into the free energy composition diagrams. And those result in the phase diagrams. This is thermodynamics, right? This is what we're trying to work on this semester. So this problem set, which is out and is due Friday, was going to include all three aspects-- building the models, plotting the models, and solving the phase diagrams using this web based app. But instead, because the web based app is funky, we're going to limit it to just that first part. And with the remaining minutes, I'd like to try to show you what you're missing because I was able to get this web based version to play a little bit nicely. So I want to show an animation of how this process works. So never mind all this data entry screen you're going to see in a minute because you won't be using it. But I'm going to simulate a two lens system, which is like the one you were asked to do in the p set with two lens system. It's loading. It's loading. So this is a screen where you would enter all the parameters for a model that you've built. And here-- can you see this? Can you see these two plots? What this program is going to do, it's plotting the Gibbs free energy as a function of composition for three phases now. Three phases. 
In the last 15 minutes, we've been working on a two phase model. Now we have a three phase model-- liquid, alpha, and beta. And it's going to show you the free energy composition diagrams at a series of temperatures. And at each temperature, it's going to determine if there are any common tangents. And if there are, it's going to draw that as a tie line on the phase diagram. And we have this animated so that effectively, it sweeps the temperature and paints the phase diagram. So the red arrow down here indicates the temperature. And right now, we are in an alpha phase region. But you're going to see the beta phase is starting to become more and more competitive. And at some point, it develops a common tangent with the alpha phase when component one transforms to beta. There. It's actually happening now. The common tangent is so teeny tiny, you can't really see it very clearly. We have it marked in red, and we have these gray lines indicating its extent. And you see what's happening is that two phase region is being painted across the phase diagram. Now, believe it or not, we're in the beta phase. This whole system is stable as a beta phase solid solution at all compositions. The alpha phase is now no longer stable for any composition. And liquid is now about to become relevant. What's going to melt first is the stuff on the left. The green line is going to become the favorable phase for the stuff on the left right around now. And we're going to get another series of common tangents that sweep across the phase diagram as the system melts from left to right. And there you have it. That thing on the bottom, we don't have such a high temperature point density. I think we're stepping in every 20 Kelvin steps. So the density of tie lines isn't as high as you might like for a published phase diagram. But nevertheless, you can see this is a two lens system-- three phases, two lenses. OK. That's where I wanted to get to for today. 
I'm going to stop recording, and I'm happy to take questions.
MIT 3.020 Thermodynamics of Materials, Spring 2021
Lecture 22: Free Energy-Composition Diagrams, General Case
RAFAEL JARAMILLO: Let's get right into it. We are-- I'm going to overview here thermodynamics of binary phase diagrams. That's what we're working on. So we've done notation and bookkeeping, notation and bookkeeping. We've talked about solution modeling. And we've talked about spinodal systems in some detail-- spinodal systems. And this is spontaneous unmixing, so phase separation, when both phases are in the same-- so this is-- we haven't made too much of this yet. But it's been underlying our talk about the spinodal, which is that-- this red pen is dying. Life is too short to keep dying sharpies on your desk. So spinodal systems spontaneously unmix when both phases are in the same structure. We haven't made a lot of this yet, but it's been an important point that in all the spinodal systems, it's the same structure. You have a chromium-rich bcc and an iron-rich bcc, or an ethanol-rich liquid and dodecane-rich liquid, or so forth and so on. But the structure is the same. And so this is our story thus far. And what we're going to do is we're going to now generalize this to analyze what happens when you have phase separation between two phases that have different structures. And so when you have two phases that have the same structure, you only need one solution model. You only need one solution model to analyze the free energy, to analyze the free energy composition plots. So today we're going to start introducing more complexity, which is having multiple solution models. And more generally, the pure components-- what do I mean? Pure components may be in different structures. Let me illustrate this with a simple example. Let's do silicon germanium. It's one of my favorites. Does anyone remember, who has a higher melting point, silicon or germanium? I'll give you a hint. STUDENT: Is it germanium? RAFAEL JARAMILLO: No, it's silicon. Lead is very soft, very soft. We know that lead melts.
We saw it run off the roof of the Notre Dame a couple of years ago in Paris-- then tin, then germanium, then silicon. Carbon, it doesn't even melt. It sublimes at 3,700 kelvin. So as you go up here, you have stronger and stronger covalent bonds, meaning higher and higher melting points. So silicon has a higher melting point than germanium. Thank you, mousepad. All right, so here's what we're going to have: a lens diagram, which looks like this. Here's the melting point of germanium. And here's the melting point of silicon. And, of course, this is a two-phase region. As we saw in the last lecture, this is a fully miscible system with a fully miscible liquid, and a fully miscible diamond-structured solid-- we sometimes just use alpha for the ground state solid structure-- so fully miscible system. So if you're below the melting point of germanium, you only need one solution model to analyze this whole phase diagram. That is the solution model for diamond-structured silicon germanium. And if you're above the melting point of silicon, you also only need one solution model to analyze the whole system. That is the liquid-phase solution model. But if you are in between these two temperatures, you need two different models because the endpoints are at two different structures. So if you have a temperature which is intermediate, we need two different solution models to analyze the system. So let's look at what that looks like. So for temperature less than the melting point of germanium, we can draw a free energy composition diagram. And let's just say it looks something like this. This is going to be x. And let's just say it looks like this. For some low temperature, it's fully miscible. All the solutions, solid solutions, are stable. So we have that positive curvature. And I'm going to label that alpha because that is the alpha-phase solution model. Here.
For temperature greater than the melting point of silicon, I'm again going to have a relatively simple-looking free energy composition diagram. The liquid is fully miscible. So we expect a downwards-curving free energy composition diagram. And I'm going to label that liquid. But things get a little bit more interesting when we are in between. When we're in between and we have different reference states-- that is, the pure components are in different structures-- I'm just going to go ahead and draw it, and we can talk about why it is. So let's see. This is the silicon axis, the x silicon axis. So when I'm on the left-hand side of this free energy composition diagram, what is my composition? What is my system made out of when I'm along this axis? This is the silicon composition axis. STUDENT: From the left it's germanium and on the right it's silicon. RAFAEL JARAMILLO: Right, OK, good. So left is germanium, right is silicon. So what is the reference state of germanium in this temperature range? That is, in what state do you find pure germanium at this temperature? And it's understood that the pressure is 1 atmosphere unless stated otherwise. So what is the reference state? What is my state here of germanium? Or I should say, what's the phase of germanium in this temperature range? STUDENT: Liquid. RAFAEL JARAMILLO: Liquid. So my liquid-phase solution model is going to start from zero because the free energy of turning pure germanium into a liquid is zero. It already is a liquid. However, this is going to zoom up above zero for pure silicon. Pure silicon is solid alpha. So the solid alpha solution model starts at zero. And that is going to zoom up above the zero axis for germanium. What is that telling us? That's telling us that if you have pure germanium, you have to input free energy to turn it into a solid. Or if you have pure silicon, you have to input free energy to turn it into a liquid because those are not equilibrium states.
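With ideal solution models for both phases, a common first approximation for silicon-germanium, the lens between the two melting points can be computed in closed form from pure-component melting data alone. This is a sketch: equating chemical potentials phase by phase gives x_solid/x_liquid = exp(dG_fus/RT) for each component, with dG_fus roughly dH_fus*(1 - T/Tm), and the condition that fractions sum to 1 in each phase fixes the liquidus and solidus. The melting enthalpies below are approximate literature values, not numbers from this lecture.

```python
import math

R = 8.314                         # J/(mol K)
dH_Si, Tm_Si = 50200.0, 1687.0    # J/mol, K (approximate literature values)
dH_Ge, Tm_Ge = 36900.0, 1211.0    # J/mol, K (approximate literature values)

def lens_point(T):
    """Return (x_Si on the liquidus, x_Si on the solidus) at temperature T,
    for Tm_Ge < T < Tm_Si, assuming ideal solutions in both phases."""
    k_Si = math.exp(dH_Si * (1 - T / Tm_Si) / (R * T))   # x_s/x_l for Si
    k_Ge = math.exp(dH_Ge * (1 - T / Tm_Ge) / (R * T))   # x_s/x_l for Ge
    x_liq = (1 - k_Ge) / (k_Si - k_Ge)                   # closure: fractions sum to 1
    return x_liq, k_Si * x_liq

for T in (1300.0, 1450.0, 1600.0):
    x_l, x_s = lens_point(T)     # solid is always richer in silicon
    print(T, round(x_l, 3), round(x_s, 3))
```

Sweeping T from one melting point to the other traces out exactly the lens drawn on the board, closing to a point at each pure-component melting temperature.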
So we're going to read the equilibrium states of the pure components from the solution model that intercepts zero on a free energy of mixing plot. And what do I have here? I have a common tangent. And I expect to have a common tangent because, for these temperatures, I have a two-phase region. So we're going to be analyzing this in quite a bit of detail. Let's consider the process. All right, let's consider the process of actually making a solid solution at some intermediate temperature. And this is not going to be the process that you would use in the real world. This is going to be thermodynamics where we can take any process we like and imagine a reversible process and calculate changes in state functions. So what I'm going to first do, so I have-- what are my starting-- what are my starting materials? Again, liquid. So I have here a beaker, a beaker of liquid germanium. That is not so clear. Here's a beaker of liquid germanium. And I have, let's say, a crucible of solid silicon. That's my starting point. I'm at this high temperature. My pure components are liquid germanium and solid silicon. And let's imagine a process that converts this starting system into an alpha-phase solid solution. So here's my process. First, I'm going to convert the germanium from liquid to alpha phase. Maybe you can do this in reality. But you can definitely do this in your mind. You can definitely do this when you do a calculation. So in your mind, you're going to take some pair of atomic tweezers. And you're going to pick up each and every atom. And you're going to force it into the diamond-cubic structure and hold them there. This is called a reference state change. Reference state change-- it's sort of the topic of today's lecture. And the second thing I'm going to do is I'm going to mix solid silicon and solid germanium in the alpha structure. I'm going to mix them. This is described by a solution model, which we have already spent some time with.
All right, so we have two steps here, two conceptual steps. The second step we've spent time with-- you're at least somewhat familiar with it. The first step is new. Reference state change, this is new. And if I want to do this, if I have liquid germanium, and I imagine a process that converts it from liquid to alpha phase, is it going to go up in Gibbs free energy or down in Gibbs free energy? STUDENT: Up in Gibbs free energy. RAFAEL JARAMILLO: Up, I think you said up. Let's say you said up because up is right. At this temperature, the germanium is liquid. So nature is telling you, I am happiest as a liquid. My Gibbs free energy is lowest when I'm a liquid. That's the reference state. That's what equilibrium is in nature. So if you're going to force it into a different state that you don't find at equilibrium, by definition you're going up in Gibbs free energy because at fixed temperature and pressure, equilibrium is a state of lowest Gibbs free energy. So thank you for that. And if you look at our free energy composition diagram here, for pure germanium, we're going to imagine there's an intercept of this curve. And we'll be drawing these more and more. This is a gamma [INAUDIBLE] alpha. This was a liquid. And on this plot, which is a plot with units of joules per mole, a Gibbs free energy plot, there's some Gibbs free energy you need to put in. You need to pay some Gibbs free energy to convert the system from liquid into alpha phase. And, again, with a few exceptions, this is not how you actually do it in the lab or the factory. But it is the way you analyze it and get the right answer. So let's talk about how much free energy is required to effect these reference state changes. STUDENT: I have a quick question actually, if that's all right. RAFAEL JARAMILLO: Yeah. STUDENT: So is the reason that the Gibbs free energy goes up, is that because of-- that the delta S is changing? RAFAEL JARAMILLO: Well, delta G equals delta H minus T delta S for isothermal processes.
This is sort of-- I want to bring your question all the way back to the baby book. The whole class is about balancing delta H and delta S. And so, in general, you don't know. If you have some transformation that raises Gibbs free energy, it could be an enthalpy-driven thing, or an entropy-driven thing, or both. In general, you don't know. Now, in this case, if we take liquid germanium and we're turning it into solid germanium, what do you think the change in entropy is? STUDENT: I think it would decrease just because it's a solid. RAFAEL JARAMILLO: So let's write that down. Delta H liquid to alpha, and delta S liquid to alpha. So delta S liquid to alpha is less than zero. That's what you just said, that the entropy decreases. And that's exactly right. The liquids are more mixed up. They're more disordered. They have higher entropy. Let's just finish the argument. What about delta H? Is delta H bigger than zero or smaller than zero? STUDENT: Is it like an exothermic reaction, going from liquid to some sort of solid? RAFAEL JARAMILLO: It's good thinking. So remember, high-temperature phases are higher enthalpy phases. The liquid is the higher temperature phase. The solid is the lower temperature phase. So it's exothermic. Another way of thinking of this is that there are heats of crystallization-- or heats of condensation if, maybe, you're thinking about climate systems. And I'm looking out the window thinking about climate systems and thinking about phase changes in water. But, yeah, you're exactly right. It's going from a high-temperature phase to a low-temperature phase. So it's necessarily lowering the enthalpy and also lowering the entropy. So we have delta H, which is negative, and delta S, which is negative, so minus T delta S, which is positive. So you don't necessarily know whether delta G will be positive or negative if this is all the information you're given.
But in this case, I'm telling you the temperature is above the melting point of germanium. So you know that the Gibbs free energy change has to be positive because if it were negative, this wouldn't be the reference state. But let's analyze this graphically a little bit, is where we're heading. Does that address your question? I didn't see you ask that, but-- STUDENT: Yeah, that makes sense, thank you. RAFAEL JARAMILLO: I love the questions and interruptions. It's hard to really have a back and forth over Zoom. I know that, but the more the better. So let's talk about the temperature dependence of the free energy of the pure components. Everything that we learned from unary phase diagrams we're going to need here. We're going to need to use all that stuff in order to build up binary phase diagrams. So let's draw that for germanium and silicon. This now is going to be temperature. And this is going to be Gibbs free energy. And this is going to be for pure germanium. I'm going to draw the Gibbs free energy versus temperature curves for the two phases in question here, alpha and liquid. OK, first off, let's recall, Gibbs free energy versus temperature-- does this have a positive slope or a negative slope? STUDENT: A negative slope. RAFAEL JARAMILLO: A negative slope. And the curvature is negative as well. So let's draw the Gibbs free energy for the alpha phase. Let's just say it looks like that. All right, now I want to draw it for the liquid phase. How should the Gibbs free energy for the liquid phase look? By the way, let's just mark the melting point here. It's kind of a hint. How should the Gibbs free energy per mole of the liquid phase look? STUDENT: Should it also be negatively sloped and cross through the alpha phase? RAFAEL JARAMILLO: Cross through, perfect. So it's going to more or less look like this, except there's going to be some important features. It has to cross the alpha phase where they coexist.
Where those phases can coexist, their Gibbs free energy has to be equal. We know that from the unary phase diagram module. So it's got to cross that. At high temperature, does it lie below or above the solid? STUDENT: Below the [INAUDIBLE]. STUDENT: Below. RAFAEL JARAMILLO: Right, great. So here we have it. How's that? That looks good. This is completely consistent with what we know about the phase diagram. At high temperature, the liquid phase is the equilibrium phase. It has lower Gibbs free energy. At low temperature, the solid phase is the equilibrium phase. It has lower Gibbs free energy. At the coexistence temperature, they cross. And they have the same Gibbs free energy. And now we can see graphically the measure of what we need to pay to transform germanium from liquid to solid. And we can see that this measure flips sign once you cross through the coexistence point. So this is the energy you need to pay for that reference state change. Let me draw something similar for pure silicon, which will basically look the same except I'll put the melting point higher. That's make-believe because I haven't really labeled the axes. Liquid, alpha, and we have, again, a measure delta G-- this is silicon-- liquid to alpha. And its sign flips as we go through the melting point. So there's a reason why we spent time on these sorts of plots. I know we plotted S and we plotted H. Now we're plotting G. There's a reason we spent time on these sorts of plots back when we were doing unary phase diagrams. It's because you need to have these in the back of your mind as you're building binary phase diagrams. You need to understand this stuff is the data input that goes into binary phase diagrams. Or as I've mentioned before, if you look at a binary phase diagram, the unary phase diagrams are kind of hidden here. That is an isobaric slice of the unary phase diagram of germanium. And that is an isobaric slice of the unary phase diagram of silicon.
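The sign flip described here can be checked numerically. Below is a minimal sketch, assuming temperature-independent delta H and delta S of fusion and using approximate literature values for germanium (melting point about 1211 K, entropy of fusion about 30.5 J/(mol K)); the numbers are illustrative and not from the lecture.

```python
# Sketch of the reference-state-change free energy for germanium,
# liquid -> alpha, assuming temperature-independent dH and dS.
# Numbers are approximate literature values, used only for illustration.

T_M_GE = 1211.0        # melting point of Ge, K (approximate)
DS_FUS_GE = 30.5       # entropy of fusion of Ge, J/(mol K) (approximate)
DH_FUS_GE = T_M_GE * DS_FUS_GE  # dG = 0 at the melting point fixes dH

def dG_liquid_to_alpha(T):
    """Gibbs free energy change per mole for liquid Ge -> alpha Ge at T.

    Fusion (alpha -> liquid) has dG = dH_fus - T*dS_fus, so the reverse
    transformation is the negative of that.
    """
    return -(DH_FUS_GE - T * DS_FUS_GE)

# Above the melting point the liquid is the reference state, so forcing
# germanium into the alpha phase costs free energy (dG > 0); below the
# melting point the sign flips, exactly as on the G-versus-T plot.
assert dG_liquid_to_alpha(T_M_GE + 100) > 0
assert dG_liquid_to_alpha(T_M_GE - 100) < 0
assert abs(dG_liquid_to_alpha(T_M_GE)) < 1e-9
```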
So this stuff is essential. OK, so how do we account for this? Accounting for delta G of some component k-- now I'm going to make this a little more general-- for any transformation between any two phases, alpha to beta. And that, of course, is a function of temperature. And as we've just written down, this is delta H alpha to beta, and that thing is temperature dependent, minus T delta S alpha to beta, and that thing, of course, is also temperature dependent. Both of these may be temperature dependent. All right, at the alpha/beta coexistence temp, at their equilibrium coexistence temp, we'll call that T alpha beta, at that temp, delta G equals zero because they're in equilibrium. Often, we will make an assumption. And I want you to recall back to some problem sets and such where we did this. If the transformation heat capacity, that is the difference in heat capacity between the two phases, can be ignored, then delta H and delta S are approximately temp independent. So, again, this is stuff from unary phase diagrams we're recalling. In the case that the heat capacity difference is negligible, then you have approximately temperature independence of those things. And when that is the case, your expression for the temperature dependence of the Gibbs free energy simplifies. And you should convince yourself. Work this on the side after class-- a few lines of algebra, five minutes of your time. All right, so that is a simplification. This thing is linear in T. And what do you need? What data do you need to evaluate this? You need the transition temp and the entropy of the transformation. And that's the sort of data that-- as a reminder, that's the sort of data you get in databases. So here's, for instance, phase transformations of the elements. Here's a bunch of phase transformations of pure materials. And normally you get these triples.
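The "few lines of algebra" mentioned above go like this, assuming the transformation heat capacity is negligible so that delta H and delta S are constants:

```latex
% With \Delta c_P \approx 0, \Delta H and \Delta S are temperature independent:
\Delta G^{\alpha\to\beta}(T) = \Delta H^{\alpha\to\beta} - T\,\Delta S^{\alpha\to\beta}
% At the equilibrium coexistence temperature T_{\alpha\beta}, \Delta G = 0:
0 = \Delta H^{\alpha\to\beta} - T_{\alpha\beta}\,\Delta S^{\alpha\to\beta}
\quad\Rightarrow\quad
\Delta H^{\alpha\to\beta} = T_{\alpha\beta}\,\Delta S^{\alpha\to\beta}
% Substituting back gives the form that is linear in T:
\Delta G^{\alpha\to\beta}(T) = \Delta S^{\alpha\to\beta}\,(T_{\alpha\beta} - T)
```

This is why the transition temperature and the entropy of transformation are all the data you need: the enthalpy follows from the condition that delta G vanishes at coexistence.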
You get a temperature-- I know it's blurry, but I'm looking at the book, Appendix C. There's a temperature of the transformation, a delta S of the transformation, and a delta H of the transformation. So this is the data that you find in databases. OK, so let's keep moving. We're talking about two-phase equilibrium. We're talking about coexistence, solving for two-phase coexistence. Let's make this specific. Let's go back to the case of silicon-germanium, where we know we have two-phase coexistence in between the melting temps. And so we're going to use our two-phase coexistence conditions from before. What's that? Chemical potentials are equal. The chemical potential of silicon in the alpha phase equals the chemical potential of silicon in the liquid phase. And that's going to become the following. The chemical potential of silicon in its reference state-- what's the reference state of silicon, somebody, at this temperature, below the melting point of silicon? STUDENT: Solid. RAFAEL JARAMILLO: Solid, thank you. So the chemical potential of silicon in its reference state, which is a solid, plus the mixing term in the solid phase-- that's that. Now we're going to do the right-hand side. On the right-hand side, we have the chemical potential of silicon in its reference state, which is solid. Now we need to pay some free energy. We need to get silicon from solid to liquid. And then we need to mix in the liquid phase. So mu silicon zero is the reference state for pure silicon, which here is alpha. Delta mu silicon mix alpha-- what's that? That's the solution model for the alpha-phase solid solution. Delta mu silicon mix in the liquid, that comes from a solution model for the liquid-phase solution. And delta mu silicon alpha to liquid, that's called a reference state change. So you're all going to be great accountants when you're done with this class because that's what we're doing here. It's just accounting, accounting, accounting.
Let's do the same thing for the germanium. The other equilibrium condition is that the chemical potential of germanium in the alpha phase is the chemical potential of germanium in liquid phase. So this is a two-phase equilibrium condition that we've derived very recently. And for this case, you end up with chemical potential of germanium in its reference state. Its reference state is the liquid. So we need to pay some free energy to get germanium from liquid to the alpha phase. And then we're going to mix germanium in the alpha phase. And that is going to be equal to chemical potential of germanium in its reference state, which is liquid. And so now all we need to do is make the liquid solution. So, again, we have the liquid stuff and the solid stuff. So I'll be really explicit here, mu germanium zero equals reference state of pure germanium, which here is the liquid phase. Delta mu germanium mix in the liquid comes from the solution model for liquid-phase solution. Delta mu germanium mix alpha comes from the solution model for alpha-phase solid solution. And delta mu germanium liquid to alpha is a reference state change. So we're really, we're bringing out all the stuff that we've done to this point in the semester. Everything we've done is needed here. We need to be able to calculate the temperature dependence. We need to be able to identify our reference state. So we need to be able to read unary phase diagrams. We need to be able to analyze solution models, solution models. And you're going to have different solution models for different phases. So we need to analyze solution models. We need to be able to calculate Gibbs free energy changes for pure materials going through transformations. And in order to do that, we need to calculate temperature dependence of enthalpy change transformations and entropy transformations. And in order to do that, we need data for temperature dependence of heat capacity. 
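Written compactly, the accounting in the last two paragraphs amounts to the following (a restatement of the board work; the mixing terms come from whichever solution model you choose for each phase, and the reference states are alpha for silicon and liquid for germanium at this temperature):

```latex
% Silicon: \mu_{Si}^{\alpha} = \mu_{Si}^{L}, with reference state alpha (solid)
\mu_{Si}^{0} + \Delta\mu_{Si}^{\mathrm{mix},\alpha}
  = \mu_{Si}^{0} + \Delta G_{Si}^{\alpha\to L} + \Delta\mu_{Si}^{\mathrm{mix},L}
% Germanium: \mu_{Ge}^{\alpha} = \mu_{Ge}^{L}, with reference state liquid
\mu_{Ge}^{0} + \Delta G_{Ge}^{L\to\alpha} + \Delta\mu_{Ge}^{\mathrm{mix},\alpha}
  = \mu_{Ge}^{0} + \Delta\mu_{Ge}^{\mathrm{mix},L}
```

The reference-state chemical potentials cancel from each equation, so what remains balances a mixing term in one phase against a reference state change plus a mixing term in the other.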
So it's all coming together in this process of making binary phase diagrams. So I want to say one final thing, and then pause, and have a brief discussion about the p-set and take questions. So if both solutions-- we have two solution models here, alpha and liquid. If both solutions behave ideally-- which is not realistic, but it's a nice model-- and if the heat capacity difference is approximately zero for both pure components-- so a lot of ifs here-- then the coexistence conditions-- the phase coexistence conditions being mu k alpha equals mu k beta; let me make this not liquid, let me make it beta so it's a little more general-- can be solved explicitly. And you have closed-form equations, functions for the resulting phase diagram. The math there is a little bit complicated. We're not going to do it in this class, but DeHoff does this in section 10.2.1, figure 10.22. It's kind of interesting. So if those conditions are met, then you can see all the different types of phase diagrams that can emerge. And this is an example of the different types of binary phase diagrams that you can get. And so these are all two-phase systems. It's alpha and liquid, alpha and liquid, alpha and liquid, alpha and liquid. It could have been called alpha and beta. It doesn't matter. And you can get different shapes, different types of lens diagrams there. But that's not very general. It doesn't look like what nature provides us. In all other cases, we use computers. So in all other cases, we let the computer do the work for us because it becomes unrealistic to do it ourselves. And so we use the computer to evaluate these equilibrium conditions-- or, it's saying the same thing, we use a computer to evaluate the common tangent conditions-- and draw the phase diagrams for us. So in the days and weeks ahead, we're going to be using CALPHAD more and more.
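For the fully ideal case, the closed-form solution is short enough to sketch. Assuming ideal mixing in both phases and the linear delta G of transformation from above, each component k must satisfy x_k(solid) = x_k(liquid) * exp(dG_fus,k / RT); solving the pair of equations gives the solidus and liquidus directly. The Si/Ge melting points and entropies of fusion below are approximate values chosen for illustration, not data from the lecture.

```python
import math

# Sketch: solidus and liquidus of an ideal-solution lens diagram
# (the simplified case DeHoff treats in section 10.2.1), using
# approximate Si/Ge data for illustration only.
R = 8.314                              # gas constant, J/(mol K)
T_M = {"Si": 1687.0, "Ge": 1211.0}     # melting points, K (approximate)
DS_FUS = {"Si": 29.8, "Ge": 30.5}      # entropies of fusion, J/(mol K)

def dG_fus(k, T):
    """dG for solid -> liquid of pure k, assuming dH, dS are T-independent."""
    return DS_FUS[k] * (T_M[k] - T)

def lens_compositions(T):
    """Return (x_Si_liquid, x_Si_solid) coexisting at temperature T.

    Ideal mixing in both phases gives, for each component k,
    x_k_solid = x_k_liquid * exp(dG_fus_k / (R*T)); with x_Ge = 1 - x_Si
    the two equations have a closed-form solution.
    """
    A = math.exp(dG_fus("Si", T) / (R * T))
    B = math.exp(dG_fus("Ge", T) / (R * T))
    x_liq = (1.0 - B) / (A - B)
    x_sol = A * x_liq
    return x_liq, x_sol

# Between the two melting points we expect a two-phase region with a
# Ge-rich liquid coexisting with a Si-richer solid, as in the lecture.
x_liq, x_sol = lens_compositions(1450.0)
assert 0.0 < x_liq < x_sol < 1.0
```

Sweeping T between the two melting points traces out the whole lens; outside that interval the construction breaks down, which is consistent with the single-phase regions above and below the lens.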
We're going to be using Thermo-Calc more and more. We're going to use some other software that we wrote to play with these free energy composition diagrams. We'll have a guest lecture by Professor Greg Olson who has made a very successful career, built largely from data-driven predictions of high-performance materials using CALPHAD tools. So that's where we're heading.
MIT_3020_Thermodynamics_of_Materials_Spring_2021
Lecture_20_Introduction_to_Binary_Phase_Diagrams.txt
[SQUEAKING] [RUSTLING] [CLICKING] PROFESSOR: All right. So good morning, again, everybody. What are we doing today? What did we do last time? We worked on solution models. And we were working on new tactics and things like that. So we're continuing to sort of gently dip first our toes and then our ankles into the deep end of the pool of the class, which is binary phase diagrams and free energy composition diagrams. So we're getting deeper into the pool now. So what I want to start with is some basic results for the equilibrium condition for multi-component and heterogeneous systems, including Gibbs phase rule. And then, we'll walk through a number of slides showing you binary phase diagrams. How they look. Some interpretation of them. The reading for today includes reading in both DeHoff and Callister. And I hope that people are able to do both because there's a lot of foundational context that isn't really appropriate for a blackboard or whiteboard lecture, but that you have to be familiar with. Just how phase diagrams look, and how they present differently in different resources. And how they're used. So we'll try to give you some of that today as well. But we'll start with something a little bit more specific. Equilibrium in multi-component heterogeneous systems. OK, so we actually haven't done this yet. We've done equilibrium in multi-component homogeneous systems. That was reacting gas systems. And then, we've studied solution models. Those are models of phases which have multiple components. But we've studied them in isolation, that is, a model of a single phase, a given phase. And so it hasn't been heterogeneous yet. Heterogeneous means more than one phase. But now, we're going to start putting it all together. So let's draw a general picture of this and derive the equilibrium condition for it. I'm gonna need new Sharpies. Add that to my shopping list. Maybe this one is a little better. OK. So we have-- that's a phase.
In general, let's have a phase alpha. And it has temperature alpha, pressure alpha, and x1 alpha, x2 alpha. Although in this class we're only going to deal with binary systems-- that is, two components-- what we'll derive right now is more general. So you can imagine having more than two components. And then, we're going to have phase beta. And this, in general, is at some temperature, some pressure, and it has some composition. Which of these pens is better? This pen is better. Life is too short to keep faded Sharpies on your desk. OK. So the overall system is isolated. And so we're going to use our isolation conditions that we remember from some time ago. And here's what we're going to ask ourselves. What is the condition for equilibrium between alpha and beta if the phase boundary is open, so material can flow back and forth? Non-rigid, so this phase can grow or shrink. And this phase can grow or shrink. Right? And thermally conductive. Or sometimes that's known as diathermal. But we'll just say thermally conductive. So what is our condition for equilibrium in this situation? We're going to do something like what we did a couple of weeks ago now. For the total system, for the system as a whole, we can write the full differential of entropy, summing over phases. And we know what this is. This is just the combined statement, with phase label J: dS equals the sum over phases J of 1 over T J times dU J, plus P J over T J times dV J, minus the sum over components K of mu K J over T J times dN K J. This is very similar to what we've done before. dS equals 0 at equilibrium leads to the equilibrium conditions. And I'm not going to write this out because you've seen it before, more or less. That is thermal equilibrium, mechanical equilibrium, and chemical equilibrium. Right? OK. Thermal, mechanical, and chemical. All right. So let's see what the implications of this are. We're going to start with Gibbs phase rule. So we did Gibbs phase rule before.
We did it rather briefly for unary systems. We're going to revise it now, revised for multi-component systems. And as before, we're going to use this Ph to mean phases. So Ph phases and C components. As before, I'm not using capital P for phases as the book does, because capital P means pressure. So this is a problem in linear algebra. We count our number of variables. The number of variables, let's see, for phase alpha, we have temperature of alpha, pressure of alpha, x1 of alpha, x2 of alpha, and all the way up to xC minus 1 of alpha. I'm not including the mole fraction of component C because it's determined by the mole fractions from 1 to C minus 1. Right? You don't have C independent mole fractions. You have C minus 1 independent mole fractions because they have to sum to 1. OK, that's for phase alpha. Here's for phase beta. T beta, P beta, x1 beta, x2 beta, so forth and so on, xC minus 1 beta. And we can repeat this for each of Ph phases. So we have this many lines of variables. And for each phase, we have 2 plus C minus 1 variables per phase. See, what we're doing is counting the intensive variables that describe the system. So the number of variables equals the number of phases times 2 plus C minus 1. All right, so that's just the number of phases times C plus 1. So that's the number of variables that describe the system. And now, we're going to apply our number of constraints at equilibrium. So the equilibrium conditions we found two slides ago are as follows. Temperature of alpha equals temperature of beta equals, ba, ba, ba. Right? For as many phases as you have, you keep going. Pressure of alpha equals pressure of beta equals, ba, ba, ba. And you keep going. Then, chemical potential of component 1 in phase alpha equals chemical potential of component 1 in phase beta equals, so forth.
Then, of course, chemical potential of component 2 in phase alpha equals chemical potential of component 2 in phase beta. And you have to carry this all the way down to the chemical potential of component C in phase alpha equals the chemical potential of component C in phase beta, and so forth and so on. So we have C plus 2 rows of equations-- temperature, pressure, and all the components. And how many independent equations per row do we have? So this is going from temperature alpha equals temperature beta equals, ba, ba, ba, all the way up to the temperature of the final phase. How many independent equations do we have, let's say, in this one row? STUDENT: Minus 1? PROFESSOR: Yeah. Right. This equals that. That equals that. That equals that. That equals that, and so forth. And you can just count the equal signs, right? The number of equal signs you're going to have is going to be Ph minus 1. Right? Good. So that means we have our number of constraints equals C plus 2 times Ph minus 1. OK. So now, we have constraints, and we have variables. So again, this is a linear algebra thing. The number of degrees of freedom DoF equals variables minus constraints. And this comes out to be C plus 2 minus the number of phases. And this is a rather well-known result. This is Gibbs phase rule for multi-component systems. And what is that in words? Somebody, what are we doing here? What does that tell us in words? What does that mean? The degrees of freedom are the number of thermodynamic variables that can be independently varied while maintaining equilibrium between Ph phases in a system of C components. So an example that you're familiar with already was from unary systems. We had temperature, and we had pressure. And we had several saturation vapor pressures, for example, which kind of looked like that, a P sat. You remember that?
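The counting argument above collapses to a one-line formula, so it is easy to sanity-check against the unary cases from earlier in the course:

```python
def degrees_of_freedom(components, phases):
    """Gibbs phase rule for multi-component systems: DoF = C + 2 - Ph."""
    return components + 2 - phases

# Unary coexistence curve (1 component, 2 phases): a line, one degree
# of freedom -- you can slide along it but not leave it.
assert degrees_of_freedom(1, 2) == 1
# Unary triple point (3 phases): an invariant point, zero degrees of freedom.
assert degrees_of_freedom(1, 3) == 0
# Binary system: four-phase equilibrium becomes possible, with zero DoF.
assert degrees_of_freedom(2, 4) == 0
# Binary single-phase region: T, P, and one composition all free to vary.
assert degrees_of_freedom(2, 1) == 3
```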
That was a coexistence curve between vapor and solid phases in the unary system. This is unary. How many degrees of freedom does a line have? STUDENT: One? PROFESSOR: One. You can move along T, left or right, but once you've decided how far to move along T, you can't arbitrarily vary P. You've got to toe the line. So basically, you can move along the line in this direction or that direction. That's your degree of freedom. This is a condition for two-phase equilibrium. Right? This was degree of freedom equals-- how many components in unary? 1 plus 2 minus 2. Two phases, one component, right? So this is 1. The line here has one degree of freedom. So if you start here, and you go over here, you vary temperature and pressure in some random way. And you end up over here. You no longer can maintain two-phase equilibrium. Right? You'll be in a one-phase region. And the entire system will vaporize. That's what that means in the context you're more familiar with. So now, let's see what that means in this new context. Case of binary system. Binary system, C equals 2. So if we have one phase, degree of freedom equals 3. So for example, what are three things which I can all independently vary while staying in the same phase? What are my three independent parameters for solutions at fixed temperature and pressure? STUDENT: Volume? PROFESSOR: Well, no, temperature and pressure, right? Those are my independent parameters for a fixed temperature and pressure system, so temperature and pressure. And let's say composition of component 1. I can vary any of these. And I can vary all three. And I can vary them at random. And I'll still be in a one-phase equilibrium. That's what that means. What about two-phase? Degree of freedom equals 2. So here, I can vary two parameters, but then the third will co-vary deterministically if I'm to stay in the two-phase region. And we're going to see this emerge in the next couple of lectures with two-phase regions and tie lines and so forth. All right.
So for example, I'm free to vary T and P, but then x1-- and x2, of course-- must follow. OK, so that's an example. Three-phase, right? Degree of freedom equals 1. And four-phase, degree of freedom equals 0. So before, we couldn't have four-phase equilibrium with unary systems. That wasn't allowed. Now, we can. And we see that in nature. And before, we had three-phase regions. Those were called triple points. And they were points in unary phase diagrams. They couldn't vary. Now, three-phase regions are lines in temperature, pressure, composition space, right? In this three-dimensional space, these three-phase regions are lines. So I'm done on the board for now. I'm going to switch to the slides and walk through some examples of binary phase diagrams. And hopefully, illustrate some of this stuff. So let's see some examples of binary phase diagrams. Here's one that I think I pulled from Callister, right? And we talked about sugar water before in the class. And so this is a very simple binary phase diagram. We have the x-axis being composition here in weight percent. Normally, we like to stick with atomic percent, but this is in weight percent. And the y-axis here is temperature-- they even have Fahrenheit. And we have this concept of a solubility limit. So if you start with pure water, here on the left-hand side, and you start adding sugar, at some point, you reach saturation. And if you add any more sugar, that sugar will precipitate as solid pure sugar at the bottom of the beaker. So this over here is a two-phase region. And we understand that you can't actually have a uniform material with any of these compositions. What's actually happening when you have an overall system composition in this region is you have the coexistence of saturated sugar water and pure solid sugar. So that's what we understand this to mean. Why does this solubility limit curve go up and to the right? Why does it not go up and to the left? STUDENT: Does it relate to the density?
PROFESSOR: It actually doesn't relate to the density. This means that-- let's imagine this 65 weight percent composition, which would be supersaturated at lower temperature. If we raise the temperature, now we can actually achieve that solubility. Why would the solution phase be more favorable at higher temperature than lower temperature? STUDENT: Because it has higher entropy, so-- PROFESSOR: It's an entropy effect. That's right. We remember G is H minus TS. So as we raise the temperature, entropy becomes more and more important and becomes a stronger driving force. Solutions are more mixed up. They have higher entropy than two-phase regions with a segregated pure solute. And so that's why you see solubility regions typically expanding, getting wider, as you raise the temperature. All right. So let's talk about some other systems. Some definitions here. Isomorphous system-- so now, we're getting away from sugar water. We're going more towards solids that are more in line with the core of DMSE. Isomorphous systems means systems for which both pure components have the same crystal structure. So that's the definition. So an example we like in DMSE is silicon-germanium because silicon-germanium is an isomorphous system. And it's one that DMSE has played a major role in developing. So silicon and germanium both have this diamond cubic crystal structure. Silicon-germanium alloys have a pretty important role in microprocessors. And let's see what the phase diagram looks like. So this is a phase diagram of silicon-germanium. So now, what do we see? At low temperature-- what's the axis here? Low temperature means below roughly 1,000 C, below 940 C. That counts as low temperature when you're dealing with covalently bonded solids, like silicon and germanium. You have the diamond structure throughout. So you have diamond throughout this entire region. It's fully miscible, right? Germanium is fully miscible in silicon.
Silicon is fully miscible in germanium. And we have only one phase field, labeled alpha solid solution. At high temperature, we have fully miscible liquids. That's maybe not too surprising, although we saw even last lecture an example of liquids which did not mix. But at high temperature, we have fully miscible liquids. At low temperature, we have fully miscible solids. And the intermediate region gives us this lens-like shape. So this is called the lens diagram. In general, any phase diagram that has this kind of appearance, with a two-phase region separating two fully miscible regions, can be called a lens diagram because it looks like a lens. So what happens if I prepare a system with 30 weight percent silicon at 1,200 degrees C? What happens at equilibrium for that system? Does anybody know? STUDENT: Is it a phase transformation? PROFESSOR: You're going to have-- yeah, you could call it a phase transformation. If you wait long enough, what is your final phase composition going to be? What will you find in your system if you wait long enough for the transformation or the spontaneous process to complete? I have an overall composition of 30 weight percent silicon, and the temperature is 1,200 degrees C. STUDENT: Is it a combination of the alpha and liquid phases, like in the shaded region? PROFESSOR: Yeah. Right. So the shaded region is a two-phase region. It's a combination of alpha and liquid. In fact, it's written right there-- alpha and liquid. And to find out what compositions those phases have, you use the lever rule. And you apply the concept of tie lines. So we have a liquid solution at this composition-- it's about 16 weight percent. And a solid solution at this composition-- it's about 45%. So it's a two-phase system with a germanium-rich liquid and a silicon-richer solid. Phase separation, phase segregation. That's right.
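The lever rule mentioned here also tells you how much of each phase you have, not just their compositions. A minimal sketch, using the tie-line endpoints read off the diagram in the lecture (about 16 and 45 weight percent silicon, with 30 percent overall):

```python
def lever_rule(x_overall, x_phase1, x_phase2):
    """Phase fractions for a two-phase mixture from the lever rule.

    Returns (f_phase1, f_phase2) such that mass balance holds:
    f1*x1 + f2*x2 = x_overall.
    """
    f1 = (x_phase2 - x_overall) / (x_phase2 - x_phase1)
    return f1, 1.0 - f1

# Si-Ge at 1,200 C, 30 wt% Si overall: liquid at ~16 wt% Si,
# solid at ~45 wt% Si (tie-line endpoints from the lecture).
f_liq, f_sol = lever_rule(0.30, 0.16, 0.45)

# The fractions must conserve the overall composition and be physical.
assert abs(f_liq * 0.16 + f_sol * 0.45 - 0.30) < 1e-12
assert 0 < f_liq < 1 and 0 < f_sol < 1
```

With these numbers the system comes out roughly half liquid and half solid, which is what you would expect for an overall composition sitting near the middle of the tie line.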
You should be familiar with this because it takes just a little bit of flipping through the textbooks and looking at different pictures, which we're going to continue to do for the next 20 minutes. But this is something which you just have to familiarize yourself with how to read these binary phase diagrams. Here's another point. The y-axis here tells you something about pure germanium. The y-axis here tells you something about pure germanium because it's 0 weight percent silicon. All right. So what does it tell you about pure germanium? What can you learn about pure germanium from just this y-axis alone? STUDENT: It has a melting temperature of about 940? PROFESSOR: Right. You learn its melting temperature. This is a little slice of the pure germanium phase diagram. What's unsaid here is the pressure. Pressure is almost never reported in binary phase diagrams. And unless it's reported otherwise, you can assume one atmosphere. OK. So at one atmosphere, as you take pure germanium, and you heat it up, you're in the alpha, alpha, alpha, alpha, alpha liquid. So in between, it melted. Good. What's the melting temperature of silicon? Maybe somebody who hasn't contributed yet today. STUDENT: 1,412 degrees Celsius. PROFESSOR: Thanks. Yeah, 1,412. That's right. So if you take pure silicon, you can see it's in alpha, and now, it's in liquid. So it melted. 1,412. So these binary phase diagrams contain unary phase diagrams. They contain isobaric slices through unary phase diagrams along the y-axis. If you like, you can imagine the unary diagram just coming out of the board. OK? Let's see what else. Here's a binary phase diagram. We've seen this one already. So this is an isomorphous system, but it has a miscibility gap. This is not a system which is fully miscible. This is a system, which is miscible at high temperature, but immiscible at low temperature. So we've started to see that the last time. This is called the spinodal system. 
And as you start at high temperature, and you start cooling down, you have separation into an ethanol-rich phase and a dodecane-rich phase. So for instance, let's imagine that I'm at 4 degrees Celsius, and I have an overall system composition of 0.6 mole fraction of ethanol. What should you expect to find at equilibrium? Let me ask this. Should you expect to find a uniform solution with 0.6 mole fraction of ethanol? The class isn't sure. But somebody guess. Let's take a wild guess. What would I expect to find at equilibrium? Overall, in the system composition, this is what I prepare, 0.6 mole fraction ethanol, 4 degrees C. I shake up the beaker, and then I wait for a long time. What will I find at equilibrium? STUDENT: It would spontaneously unmix. PROFESSOR: It would spontaneously unmix. Thank you. And it would unmix into what? What would I find at the end of this process? I love that-- spontaneous unmixing. That's exactly right. What would be the final product when the changes stop happening? Part of this is stuff that you're working on in this p-set, which isn't due for four days, so I'm sort of putting the class on the spot. But you imagine the tie line drawn to connect the edges of a two-phase region. So what you have is an ethanol-rich phase with 85% ethanol and a dodecane-rich phase with, whatever this is, 33% ethanol. And those are the two phases that you'll find coexisting at equilibrium. Here's a slightly more complicated spinodal system. Now, instead of a spinodal system with a liquid spinodal phase, we now have a solid. So there's a little bit more going on here. This purplish region is a solid solution between aluminum and zinc. Now, often in phase diagrams, the parent or the pure material will be indicated in parentheses. What that tells you is that this solution region has the structure of pure aluminum. Reading binary phase diagrams is often an exercise in being a little frustrated in how the people who made the diagrams are very concise.
In other words, there's a lot of shorthand. And you'll encounter binary phase diagrams drawn in many different ways and with many different types of shorthand. If I could wave a magic wand and change all the binary phase diagrams in the world into a uniform presentation, I would do so, but I don't have that magic wand. And so, instead, my job is to get you familiar with being a little bit annoyed that you're not getting all the information you need. Because many of you will see this next in industry. You'll go straight from 3.020 to actually needing to read these things on the job. So that's why I'm showing you these different presentations, because it's the sort of thing you're going to find out there. So aluminum here doesn't mean that this whole region is pure aluminum. It's, obviously, not, because this is a binary phase diagram. What it does mean is that this whole solution region here has the structure of aluminum, which happens to be FCC. So we have an aluminum-rich FCC solid solution here. And here is a spinodal. At below 353 degrees C, this solution will spontaneously unmix into a zinc-rich FCC and then an aluminum-rich FCC. So this little dome here is a spinodal system. There's more going on here, of course. There's more to read. We see that zinc over here is hexagonal close-packed. You can dissolve a little bit of aluminum in HCP zinc, but not too much before it phase segregates. The liquids are fully miscible. If you go up above the melting point of aluminum, which you can read here is 660 degrees C, then zinc and aluminum mix completely. These phase diagrams are as produced by ASM-- the American Society for Metals-- so you're going to see a lot of these phase diagrams in your careers because ASM is a major source of data. They indicate solution regions with this nice purplish color. And they indicate two-phase regions with white fill. So they don't say two-phase region.
They don't tell you what two phases are coexisting. How do you tell what coexists? So look where my cursor is. What two phases coexist for systems with this overall composition, where my cursor is? You want to draw a tie line. So what two phases coexist at this overall system composition? STUDENT: Aluminum-rich FCC and a liquid. PROFESSOR: Aluminum-rich FCC and liquid. Great. OK? Here's another example. Somebody who hasn't contributed yet today, please. What phases coexist when I have an overall system composition that's indicated by the cursor there? STUDENT: Is it zinc-rich FCC and HCP? PROFESSOR: That's right. You're going to have-- here, I'll indicate a little more. You're going to have a zinc-rich FCC with this composition coexisting with HCP with a little bit of aluminum dissolved in it. So you have this composition coexisting with this composition. That's how you read those two-phase regions. Good. So you have here, what? This is kind of interesting, because if you cool down just a little bit more, let's imagine cooling down to this temperature. So when I am at the slightly higher temperature, my system at equilibrium is zinc-rich FCC and the HCP with a little aluminum. But if I cool down, suddenly my system at equilibrium is now aluminum-rich FCC and HCP with a little aluminum. These phase diagrams get very rich and complicated, as you can imagine. We're just learning the basics here of how to read them. Let's move on. Here's another example. This has something called intermediate phases. So we have this chromium-titanium system. The chromium-titanium system contains a lot of things which we're going to learn about. It contains a spinodal-looking thing. It's got this BCC region, which is fully miscible in this little narrow temperature range. At 1,400 degrees C, it looks like this system is fully miscible as a solid in the BCC structure. And you see here they've indicated that with parentheses-- (Ti,Cr)-- fully miscible as BCC.
But when I cool down, I have spontaneous unmixing. And here, I don't just have spontaneous unmixing into chromium-rich BCC and titanium-rich BCC. This system gives me intermediate phases, which we'll cover in a couple of weeks. So it gets a little bit complicated. I notice that my annotations haven't disappeared. So let me clear those. Here we go. OK. This system also has an interesting phenomenon of liquid melting point suppression. So this is indicative of eutectic-like behavior. We haven't gotten there yet. You see that the melting point of the solution is actually lower than the melting point of either of the pure components. That's very common behavior. We could probably stare at this for another hour and continue to be learning. Let me move on. No-- let me not move on. Let me sit here for another minute and collect any questions or curiosities that have come up. Is there anything about this that you're just burning to have answered? Understanding that this is kind of complicated. And we'll be spending the next couple of weeks on diagrams like this. STUDENT: So what happens when there's like a tie line that goes through multiple phases? Is it just telling you about what's happening on the left and right, or? PROFESSOR: Yeah, they're complicated. So what on earth is going on here? This is what's going on. Are you talking about this tie line here? STUDENT: Yes. PROFESSOR: Let's look at all the tie lines in this diagram. Let's start there. At 686 degrees, they've drawn a tie line in between titanium and its HCP phase, which has a tiny little sliver of chromium solubility. You can just see that little purple region. And this ti-chrom 2 room temperature polymorph-- RT, room temperature polymorph. And so this two-phase region down here below the tie line that's drawn is a two-phase region between titanium room temperature polymorph and this ti-chrom room temperature polymorph. Now, when I go above, what happened? I went above this kind of funny-looking minimum.
Now, this two-phase region is between ti-chrom in the BCC structure and this ti-chromium 2 room temperature polymorph. Whereas this two-phase region is in between titanium-rich HCP and this ti-chrom BCC. You could identify similar sorts of funny business over here when you look at this point, which is the lowest temperature that the high temperature 1 polymorph is stable at. So you have a phase transformation here between a room temperature polymorph and a high temperature polymorph. And it looks like there's two high temperature polymorphs. There's a ti-chrom 2 HT2. So there's several solid state transformations here. Ti-chrom 2 room temperature, if you heat up, it transforms to ti-chrom 2 high temperature 1. And it continues to transform to ti-chrom 2 high temperature 2. I can't tell you what those crystal structures are. I don't know the titanium chromium system very well. But we know how to find out. We go look up [INAUDIBLE]. We learn what they're all about. And this is not purely academic, because these are structural alloys. And different crystal structures have different mechanical properties. So you might care very much how much of your finished part is ti-chrom 2 high temperature 2, ti-chrom 2 high temperature 1, and ti-chrom 2 room temperature. Right? That might be a really important thing for you. So there's this funny tie line here that starts off at 1,271. It jogs down a little bit to 1,269. That indicates the transformation in between the high temperature 1 and high temperature 2 polymorphs of ti-chrom 2. Yeah, it gets complicated. It gets complicated in a hurry. So these diagrams are drawn minimally, because I think if they were drawn with all the features, it would be unreadable. But you do need to learn how to read them. Let me move on, show you some examples. Here's another example. We have a lot of spinodal examples in polymer systems. This is an example of a polymer blend, PFB and F8BT.
This is a polymer blend that's used to make solar cells. And this is, I think, AFM data-- atomic force microscopy-- showing you that when you prepare a fully mixed system, in this case a thin film, and you anneal it, that is, you try to drive it towards equilibrium with time and temperature, you start to see pattern formation. That's the word for this. This is called pattern formation. What starts off featureless develops features. And these patterns, which happen over nanometer length scales, often have real functional implications. So for example, this is a report of the efficiency of a solar cell based on this stuff. The efficiency only gets out of the basement once they start getting the pattern formation. And there are reasons for that which go beyond the scope of this class, but the point is that by changing the morphology, by changing the phase fraction and driving from the single phase to this two-phase situation, you can affect the performance of something like a solar cell. So it becomes important. There are countless examples for why this pattern formation-- controlling it-- is important for technology. There are probably more examples in structural metals than there are in other fields. But as we saw in the last lecture, it extends even to cosmology. OK. It's 10:55. I'll walk quickly through. Let's see, brass is nice. I like brass, maybe because I used to play French horn when I was in middle school. So this is brass. This is complicated. We're going to start stepping through diagrams like this. We're not ready for it yet, not quite. But here's brass with copper zinc. So copper-rich FCC is called brass. Over here is zinc. We've seen zinc already. We saw it a couple of slides ago. It's HCP. And in between, you have all these intermediate phases. Let's look how-- here's a simplified view. Copper in parentheses, meaning this is a solution with the crystal structure of pure copper. This A here is an alpha. Here's zinc.
Here's a zinc solid solution. Coming back to the Gibbs phase rule, here is a single-phase region with three degrees of freedom. I can vary pressure, temperature, and composition while still staying in this brass phase. Right? Temperature and composition are x and y in the plot. You have to imagine pressure coming out of the board. OK. Two phases. Here's a two-phase region. This is a region of coexistence between alpha and beta. And they only have two degrees of freedom, because the composition of the phases is fixed by the end points of the tie line. We're not going to fully take time to think about that now, but I wanted to point it out. Here's a three-phase region-- alpha, beta, and liquid-- with only one degree of freedom. So this defines a line through the temperature-pressure-composition space. I don't have a four-phase region here. OK. We're two minutes over, so I'm going to end before I entertain you with pressure-dependent ternary phase diagrams, which, mercifully, are not covered in 3.020.
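The degrees-of-freedom counting in the passage above is just the Gibbs phase rule, F = C − P + 2. A small sketch of that bookkeeping for a binary system:

```python
# Gibbs phase rule: F = C - P + 2, where C is the number of components,
# P the number of coexisting phases, and the +2 counts temperature and pressure.
def degrees_of_freedom(components, phases):
    return components - phases + 2

# Binary system (C = 2), e.g. Cu-Zn brass:
for p in (1, 2, 3, 4):
    print(f"{p} phase(s): F = {degrees_of_freedom(2, p)}")
# 1 phase: F = 3 (vary T, P, and composition); 3 phases: F = 1 (a line
# through T-P-composition space); 4 phases: F = 0, which is why no
# four-phase region appears on the diagram.
```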
MIT 3.020 Thermodynamics of Materials, Spring 2021
Lecture 7: Ideal Gas Processes
RAFAEL JARAMILLO: All right, so let's go, ideal gas processes. How to motivate this? It's not just that we like ideal gases because they have a simple equation of state. What we're going to see in a couple of lectures is that the process of mixing ideal gases is a model for the process of mixing real materials. So in case you're wondering why we are spending all this time on gases-- and ideal gases, for that matter-- there is a materials science motivation for it. OK. Let's start with reversible adiabatic expansion. You looked at this a little bit in the last lecture. I want to look at it some more because it is really important. Reversible adiabatic expansion. So what to choose for independent variables? So this thought exercise again: you have a problem, you want to calculate something about that process. What are you going to choose for independent variables? Well, adiabatic means no heat. And we said it's reversible. So what does that mean? Is anything fixed? And what does that imply we should use for an independent variable? AUDIENCE: Pressure and temperature. RAFAEL JARAMILLO: So here's a rule of thumb. If something is fixed, meaning constant, then it's independent. Let's call this stubborn. If something is fixed or constant, you can also say it's stubborn. Something that's stubborn is independent. So what is a state variable which is fixed for an adiabatic reversible process? AUDIENCE: Would it be entropy? RAFAEL JARAMILLO: Entropy, right. This is a process for dS equals 0, which means we need to use S as an independent variable. If something is stubborn, it's independent. All right. And I said expansion. So you could say P or V change. Let's use P. So these are going to be our independent variables here, S and P. So we want to find the equation of state that describes, let's say, temperature as a function of entropy and pressure.
Just for example, let's say the problem asked: calculate the temperature change for a reversible adiabatic process moving from pressure 1 to pressure 2, something like that. So that would be a useful equation of state to find. All right. So now this is going to be more practice on the general strategy. That's really one of the reasons I'm doing this here. So you see this. Use the general strategy to find dT equals-- let's say M dS plus N dP, where M and N are the unknown coefficients. So then we substitute in. We do a change of variables from S and P to T and P, as we did the last time: dT equals M times, Cp over T dT minus V alpha dP, plus N dP. That's a change of variables. And then we gather terms: dT equals M Cp over T, dT, plus, N minus M V alpha, dP. Just gathering terms. Then we can say by inspection, M Cp over T equals 1. Why is that? dT has to equal dT. So this has got to equal 1. And so M equals T over Cp. All right. And likewise, if we have T as a dependent variable, and the two independent variables here are T and P-- it's kind of silly-- this thing has to be 0. dT has to be equal to dT, end of story. So N minus M V alpha equals 0. And so N equals T V alpha over Cp. All right, so these are the coefficients that we were looking for. I did this more quickly than I did it on Monday. Again, there's lots more examples in the book, but this is the general strategy. So we found our coefficients there. That's what we were looking for. Now we can plug in. We can use properties of the ideal gas. What are some properties of ideal gases? V equals RT over P. Let's just keep the number of moles equal to 1 first, because it's easier that way and also because I did this thing where I used N and M, because the textbook does it that way. I don't like to do it that way, because N is also the number of moles. So we're going to just pretend I didn't do that. All right. And alpha equals 1/T. We calculated that on the first or second day.
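Written out on paper, the substitution and coefficient-matching described above look like this (using dS = (Cp/T) dT − Vα dP for the change of variables):

```latex
% Substitute dS = (C_p/T)\,dT - V\alpha\,dP into dT = M\,dS + N\,dP:
dT = M\left(\frac{C_p}{T}\,dT - V\alpha\,dP\right) + N\,dP
   = \frac{M C_p}{T}\,dT + \left(N - M V\alpha\right)dP
% Match coefficients with dT = 1\cdot dT + 0\cdot dP:
\frac{M C_p}{T} = 1 \;\Rightarrow\; M = \frac{T}{C_p},
\qquad
N - M V\alpha = 0 \;\Rightarrow\; N = \frac{T V \alpha}{C_p}
```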
And another result, which you're probably familiar with from the book-- we haven't explicitly mentioned it in class-- which is that Cp equals 5/2 R for a monatomic gas. All right. So now we can calculate the thing which we were looking for. dT at fixed entropy equals, dT dP at fixed entropy, dP. And this we have from the previous board. That's what our general strategy gave us: T V alpha over Cp, dP. And we can plug in stuff from the ideal gas-- equals T times RT over P times 1 over T, all over Cp, dP. All right. We're using ideal gas properties there. And we're going to simplify: R over Cp, T over P, dP-- you can also say 2/5 T over P dP. I'm writing off the edge there. Sorry about that. I can separate variables. This is a separable differential equation. There's dT. There's dP. There's T. There's P. It's separable. Separate and integrate. dT over T equals R over Cp, dP over P. Integrate that, and I get T final over T initial equals, P final over P initial, to the power R over Cp. So I gave you this in the P-set and in a lecture a couple of lectures ago. And then on Monday, we did something like this. We used the general strategy, and we reduced it to the case of ideal gases. And then today, we just went a little farther because we went a little faster. And we actually derived an expression for the adiabatic reversible expansion of an ideal gas. So we've kind of seen the same thing now at three different levels. And this is just one expression which you'll find. If you look up ideal gas adiabatic processes on Wikipedia, you'll find several expressions. They're all equivalent. There are alternative forms. There are alternative forms for the adiabat of the ideal gas. PV to the gamma equals constant. That's an alternative form. TV to the gamma minus 1 equals constant. These are simply related by [INAUDIBLE]. It does use the ideal gas equation of state. You can get from one to the other.
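As a sanity check on the result just derived, the separated equation dT/T = (R/Cp) dP/P can be integrated numerically and compared against the closed form T_final/T_initial = (P_final/P_initial)^(R/Cp). A minimal sketch for a monatomic ideal gas (the specific numbers are illustrative, not from the lecture):

```python
R = 8.314            # J/(mol K)
Cp = 2.5 * R         # monatomic ideal gas, Cp = 5/2 R
T1, P1, P2 = 300.0, 1.0e5, 1.0e6   # reversible adiabatic compression, K and Pa

# Closed form from separating and integrating dT/T = (R/Cp) dP/P:
T2_exact = T1 * (P2 / P1) ** (R / Cp)

# Forward-Euler integration of dT = (R/Cp)(T/P) dP as a cross-check:
n = 200_000
dP = (P2 - P1) / n
T, P = T1, P1
for _ in range(n):
    T += (R / Cp) * (T / P) * dP   # step along the adiabat
    P += dP

print(f"closed form: {T2_exact:.1f} K, numerical: {T:.1f} K")  # both near 753.6 K
```

The temperature rises by a factor of 10^(2/5) for a tenfold pressure increase, and the brute-force integration lands on the same endpoint as the closed form.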
Here again, gamma equals the ratio of Cp over Cv, which we know is greater than 1 for physical reasons we've discussed before. Or if you want, P final over P initial equals, V initial over V final, to the gamma. I think we've seen that already. So these are all equivalent. These are all the same thing. And we know that we have-- on a V-P plot, if we have isotherms, the adiabat is going to be something steeper. So if these are isotherms, this is the adiabat. Right. OK. So we've seen this before. There's an interesting alternative derivation of this that does not use the general strategy. I think it's interesting. You should check it out. The DeHoff section is-- I wrote this down-- 4.2.4. He uses the fact that for an ideal gas, the internal energy depends only on temperature. And you can also start from this expression and derive these. So if you're interested, let's do that. Alternatively, dQ equals 0. That means that dU equals work-- because there's no Q-- equals minus P dV. All right. We're going to use that. And we will use both of these combined, substituting: Cv dT equals minus P dV equals minus RT over V, dV. Again, separate and integrate. Well, this is separable, right? Cv dT over T equals minus R dV over V. And we can integrate that, and we get T final over T initial equals, V initial over V final, to the power R over Cv-- which, again, is the same thing. So we're dancing around the same thing here. Good. OK. So by now, you've probably had more than enough of reversible adiabatic expansion of an ideal gas. Let's do isothermal expansion. All right. Isothermal expansion-- this is our second one. Here, same question I asked in the beginning: what to use for independent variables? What should we use for independent variables? AUDIENCE: Temperature and then either pressure or volume again. RAFAEL JARAMILLO: Good, thank you. Isothermal, dT equals 0. That means you use T. Temperature is stubborn. It's independent.
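The claim that these alternative forms are all equivalent can be checked directly: if the endpoint is computed from the pressure form, then P V^gamma and T V^(gamma−1) must each come out the same at both endpoints. A small sketch (monatomic ideal gas, n = 1 mol; numbers illustrative):

```python
R = 8.314                      # J/(mol K)
Cp, Cv = 2.5 * R, 1.5 * R      # monatomic ideal gas
gamma = Cp / Cv                # = 5/3

# Endpoints of a reversible adiabat, from T_f/T_i = (P_f/P_i)^(R/Cp):
T1, P1, P2 = 300.0, 1.0e5, 1.0e6
T2 = T1 * (P2 / P1) ** (R / Cp)

# Volumes from the ideal gas law with n = 1 mol:
V1, V2 = R * T1 / P1, R * T2 / P2

# The alternative invariants agree between the two endpoints:
pv_1, pv_2 = P1 * V1**gamma, P2 * V2**gamma
tv_1, tv_2 = T1 * V1**(gamma - 1), T2 * V2**(gamma - 1)
print(pv_1, pv_2)   # equal: P V^gamma is constant along the adiabat
print(tv_1, tv_2)   # equal: T V^(gamma-1) is constant along the adiabat
```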
And expansion, you're going to control something. Something that's controlled sounds like it should be an independent variable of the expansion. So in this case, we use pressure. We could use volume, but we'll use pressure. All right. And what I want to do now is find the equation of state: Gibbs free energy as a function of T and P. So this is an example where-- let's say someone said-- or your problem statement boiled down to: we have an isothermal expansion, and we want to calculate the change of Gibbs free energy for some process going from pressure 1 to pressure 2. That's a very common scenario that you'll find. To do this calculation is very, very common. So let's see how to do it. We want to find a change, so we want to use calculus. So let's use the proper differential form. dG equals-- well, dG equals minus S dT plus P dV. That, we know. But we also said this was isothermal. So this is really pretty simple. It's a single-variable calculus problem. I point this out over and over again, so forgive me for pointing it out again. This is why if something is stubborn, or something is fixed, it's independent. Not only does that make sense, it also makes your life easier. When you have multivariable calculus and you have an opportunity to reduce it to single-variable calculus, you should take that opportunity. So here's an example. If you have two terms on the right-hand side and you can set one of them to 0, the math gets easier. That's why a correct choice of independent variables is so important. This boils way down here. P dV for an ideal gas is-- wait, I'm sorry. No one called me out. It's V dP, not P dV: dG equals minus S dT plus V dP. And V dP for an ideal gas is RT over P, dP. Right. Gibbs free energy, independent variables are T and P. And I get to ignore half of the right-hand side. So things simplify. And this is a pretty easy integral. So integrating dG: G final minus G initial-- we'll call that delta G-- equals RT log P final over P initial. The math is easy when we set the problem up right. I'm going to put two boxes around this.
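The boxed result, delta G = RT ln(P_final/P_initial), is easy to evaluate numerically. A minimal sketch (the temperature and pressures are illustrative, not from the lecture):

```python
import math

R = 8.314  # J/(mol K)

def delta_G_isothermal(T, P_initial, P_final, n=1.0):
    """Delta G = n R T ln(P_final / P_initial) for an isothermal ideal gas process."""
    return n * R * T * math.log(P_final / P_initial)

# Isothermal expansion at 300 K from 10 bar down to 1 bar:
dG = delta_G_isothermal(300.0, 10.0e5, 1.0e5)
print(f"delta G = {dG:.0f} J/mol")   # negative: expanding to lower pressure
```

Expansion to lower pressure gives a negative delta G, and compression gives the same magnitude with the opposite sign, since only the ratio of pressures enters.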
The math-- oh, it's like a-- it's a Christmas equation. The math you just saw is very easy, but this is something which will stick in your mind. Because you will see expressions like this in other classes over and over and over again. So, for example, if you do electrochemistry, this equation sets the voltage of your device, be it a reactor or a battery. This is called the Nernst equation in another context. If you do chemical engineering and you calculate change in chemical potential with fugacity-- we're not going to use that word in this class because this is Course 3. But if you're in Course 10, it's like fugacity this, fugacity that-- you're going to have log of fugacities, and it's going to be the same equation-- and so forth and so on. This is really foundational, so that's why I put it in two colors. AUDIENCE: Sorry, I have a quick question. Are we assuming that we're using molar volume here? RAFAEL JARAMILLO: I've kept n equals 1 mole. n equals 1 for simplicity here. AUDIENCE: OK, thank you. RAFAEL JARAMILLO: Yeah. So thanks for clarifying. Yeah, sure. It's molar volume because it's 1 mole. I just didn't want to carry an extra term. If you wanted to, you could put an n there. And then you have an n there, and that would be fine. So let me put an n here. Here's an n, and you can put an n there. And everything else is the same, same thing. Good. All right. Now, again, I don't expect you to have the skies part and the angels singing now, because we haven't gotten to the usefulness of this expression. But I want it to lodge in your mind, because we're going to come back to this over and over. All right, let's talk about another ideal gas process: adiabatic free expansion. So this is a little bit of a kooky example when you first see it, if you haven't seen it before. So just bear with me. So here's what we have. Here's the situation. We're going to have a piston. And this is a cylinder, and it's thermally insulated.
So I'm drawing insulation. Insulated. Pardon me, that's not very clear-- insulated. And let's make the gas orange. Simple picture, right? So what we're going to do is we're going to withdraw the piston instantaneously-- so much faster than the gas molecules can move, definitely much faster than their sound velocity. I made it bigger by accident; didn't mean to do that. Here's the original position of the piston. And I'm going to pull it back very quickly. So it's now going to sit here. The same picture. And the point is that at t equals 0-- instantaneously-- none of the gas molecules have had time to move anywhere. So the gas molecules are occupying the space here. And in the space evacuated by the piston, we have a vacuum. So that's the setup. Like I said, it's a little kooky. There are actual, useful, real-world scenarios that approximate this. But for now, it's just a thought experiment. All right. So what's going to happen spontaneously next? Gas will spontaneously do something. AUDIENCE: Expand? AUDIENCE: Diffuse into the vacuum? RAFAEL JARAMILLO: Freely expand into the free volume. It's kind of spontaneous to expand into the free volume. And this word spontaneously, in thermodynamics, also means irreversibly. So when you see one, I want you to think of the other. If you remember, we did this with the baby book, right? This was from the baby book as well. This gets beyond 3.020, but somebody said diffuse. The process here would be initially something that's super diffusive. Because initially, the molecules that are moving to the right encounter no collisions with any molecules coming from the left or coming from the right, moving to the left. So you can imagine almost, like, a shock wave of expansion, followed by a diffusive mixing process-- so anyway, again, just because somebody mentioned diffusion. So this is the setup. All right. You got that picture? So let's calculate something about that process.
Let's calculate the work and heat during free expansion. Start with the work. Work equals minus P times dV. And here's the sort of kooky thing, which we tell you, but it's true. The pressure of the expansion is 0, because the gas is expanding into-- or, here, better to say-- against vacuum. So the gas is doing no work in order to expand. There's nothing pushing back. So work equals 0. What about heat? AUDIENCE: I have a quick question about the work. RAFAEL JARAMILLO: Yeah. AUDIENCE: So since it's like an integral over dV, isn't the volume changing? RAFAEL JARAMILLO: Volume's changing. AUDIENCE: So, like, mathematically, how would it work out to 0? RAFAEL JARAMILLO: The pressure is 0. AUDIENCE: Is it the pressure? Oh, I see. Sorry. RAFAEL JARAMILLO: If you think about-- work comes back to energy, which is force times distance. Go way back to the beginning. So if you're pushing something, something's pushing back, right? That's Newton's third law or something like that. If nothing's pushing back, there's no work. So that's one reason it's free. I mean, this term free expansion is as old as the hills, so I'm not going to change the term even though sometimes I think it's confusing. But that's one way to think of the meaning of free. The expansion comes for free. You don't have to do any work to move into that space. All right. So dV here is finite, but the pressure is 0-- the pressure during the expansion. And heat Q equals 0 because I said it was adiabatic. So the change of energy during adiabatic free expansion is 0, and this implies dT equals 0 for an ideal gas. Right? For an ideal gas, remember, temperature and energy are one to one. So if there's no change of energy, there's no change of temperature. OK. Let's keep talking about this. Adiabatic free expansion is spontaneous, so delta entropy has to be positive. That goes back a couple lectures. It's spontaneous. That means that it has to change the entropy.
And it has to change it with a positive sign. How do we calculate-- how to calculate delta S? How do we calculate it? We want to find a differential form and integrate. dS equals-- delta S equals dS equals something. All right. So here's one of the first cases where we're going to, in this class, see how we can calculate the results of processes using thermo even if we can't calculate the specific processes themselves. So we have this spontaneous process, which involves this nutty expansion into a vacuum, takes place very quickly. That doesn't sound like something we know how to calculate. We can't use PV equals nRT because during that process, the system is not at equilibrium. So the challenge is, can we think of an equivalent reversible process that brings us to the same endpoint? Reversible process with same initial and final states. So we're going to have start, end. And we have this free expansion process, which I'll draw as being kind of nutty. And we want to come up with a reversible process with the same starting and ending points. Because if we can do that, we can calculate delta S. And we'll know it's the same, regardless of whether I take this path or that path. So let's imagine a reversible process. What do we know about this process? We know that delta T equals 0. We know that delta T equals 0. So let's consider isothermal. As before, isothermal process, no change in temperature. We're going to use T as an independent variable. We'll do expansion. Let's use volume this time. Isothermal expansion. So this isn't saying that the adiabatic free expansion is the same as isothermal expansion. But if I want to calculate a change in state variables, I can use this path instead of this path. OK. So does anybody want to-- let's see. I have some-- let's see. I want to pause here before we do that calculation. I'm going to pause here before that calculation. Because that was a lot of stuff. I'll take questions before moving on. 
AUDIENCE: Can you go back and explain again why heat equals 0? RAFAEL JARAMILLO: For this process? AUDIENCE: Yes. RAFAEL JARAMILLO: Yeah. And part of this is just the way the problem was set up. So I told you the setup is a little bit kooky. I'm imagining a thermally-insulated piston. It's in the title: adiabatic. So thermal insulation, heat equals 0. So for that process, work equals 0, and heat equals 0. Remember, those are process variables, right? So that means that a different process might have different process variables that still go between the same start and end state. So I'm going to calculate delta S next, but I want to gather more questions. AUDIENCE: So for isothermal expansion, previously, you had used P as your independent variable. Is there a reason why you're changing it to V now? Or is there a way that we should-- RAFAEL JARAMILLO: It's about the same. For an ideal gas, of course, PV equals nRT. For an ideal gas with n equals 1, PV equals RT. And for isothermal processes, this is just a constant. So you can always swap P and V. They just end up being the inverse of each other. AUDIENCE: I see. OK, thank you. RAFAEL JARAMILLO: That's always easy. I just did this because before, I did it the other way, and you should see it both ways. AUDIENCE: I have a question-- sorry-- going back to the reversible adiabatic expansion. You mentioned the step by inspection. And that was when we identified the variables M and N. Would you mind clarifying how you got there? RAFAEL JARAMILLO: Sure. So that was back here, right? That was back here? AUDIENCE: Yes. RAFAEL JARAMILLO: Yeah. So let's talk about coefficients. I'll ask you: tell me, what is dT dT at fixed pressure? Obviously, that's 1, but forget about the fact that you know that's 1. Just pick it off of this line. What is dT dT at fixed pressure? I'm glad you asked.
Because this is a very common mental exercise in thermo, and maybe you're less familiar with it coming into this class. All right: differential form, dependent variable, independent variable, coefficients. The coefficients are the partial differentials. So dT dT at fixed pressure is this coefficient. Likewise, dT dP at fixed temperature equals N minus M V alpha. Now, this is sort of silly in some sense-- a silly example. I mean, it's useful. The result is useful. It's a little bit of a silly example because dT dT has to be 1. You don't need to go to table 4.5 in DeHoff for that. You can just go ahead and say that's 1. And dT dP at fixed temperature-- what's the dT when temperature is not changing? AUDIENCE: Oh, that would be 0. RAFAEL JARAMILLO: Has to be 0, yeah. So what we have here is a system of equations, right? We have two equations and two unknowns. And that's what I'm doing here: I'm solving for those two unknowns given those two equations. The example that you're doing in the P-set is a little bit less-- I don't want to call this silly, but it does require that you look up the coefficients in table 4.5. It's not as immediately apparent what these should be in the problem you're working on in the P-set. But that's what I meant by inspection. It means you look at the differential form, you pick off the partial differentials, and you equate them to what you know them to be. In this case, it's 1 and 0. AUDIENCE: OK, that makes sense. Thank you so much. RAFAEL JARAMILLO: Yeah, thanks. So to finish here: coefficients, coefficients, those are coefficients. Let me calculate the change in entropy here, because I think it's another example of using these differential forms. And so maybe it will help right now. So let me give you something. We have T and V, and I want to calculate delta S. So I need to integrate dS, which is going to be-- no change in temperature-- dS dV at fixed temperature, dV.
This is the thing I need to integrate to get my answer, which means I need to be able to write this down. I need to know what is this coefficient. That's what I need to know. So the implied differential form here is dS equals (dS/dV) at fixed temperature dV plus (dS/dT) at fixed volume dT. I'm going to skip a couple of steps here because we have 8 minutes remaining. So I'm going to use the general strategy-- or I think this particular case is also just worked in the book, so I could also go look it up. And I get dS equals (alpha over beta) dV plus (Cp over T minus V alpha squared over beta) dT. A lot of times, again, these coefficients are non-obvious. You just have to trust that you're doing the process correctly and use the result. OK. So this happens to be what you get. So for dT equals 0-- again, a good choice of independent variable, because I get to ignore half the right-hand side of the equation-- dS equals alpha over beta dV. For an ideal gas, we have alpha equals 1/T and beta equals 1/P, so dS equals P over T dV-- and again, using the ideal gas equation of state, P over T equals R over V. So this is pretty simple in the end. I can integrate dS: the integral over dV of R/V, which is simply R log V2 over V1. This is my answer, the answer I was looking for: the change of entropy for a reversible isothermal expansion. And here's an important thing: it's greater than 0. So what does this mean? If I start with my gas molecules here in vacuum on the right, a spontaneous process is going to bring me to gas molecules everywhere. V2 over V1 greater than 1, delta S greater than 0. Spontaneous entropy generation. What happens if I tried to run this movie backwards? If I ran this movie backwards, everything would be the same in terms of the calculation, except V2 and V1 would be switched. The final volume would be smaller than the initial volume, so the change of entropy would be negative.
So it's our experience that you won't spontaneously have the gas molecules uniformly distributed in the box and then going through a situation where they leave a vacuum on the right-hand side. That's not our experience. Here's uniform gas molecules going to-- that red pen is causing trouble here-- a situation where they've segregated themselves spontaneously on one side. We know this would never happen spontaneously. Again, this sort of goes back to the baby book we discussed. This happens spontaneously; this doesn't happen spontaneously. Now we have some math to support that. We have something called the second law. We know that spontaneously, entropy always has to increase or stay the same; it can never decrease. And now the math backs us up. In this case, over here, V2 over V1 would be less than 1, and delta S would be less than 0. So that doesn't happen.
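The sign argument above can be checked numerically. A minimal sketch in Python (the function name and the numeric value of R are my own choices, not from the lecture):

```python
import math

R = 8.314  # molar gas constant, J/(mol K)

def delta_S_isothermal(V1, V2, n=1.0):
    """Entropy change for n moles of ideal gas between volumes V1 and V2
    at constant temperature: Delta S = n R ln(V2 / V1).
    Because S is a state function, the same change applies to the
    irreversible free expansion between the same end states."""
    return n * R * math.log(V2 / V1)

# Expansion (V2 > V1): Delta S > 0, spontaneous.
dS_forward = delta_S_isothermal(1.0, 2.0)
# The movie run backwards (V2 < V1): Delta S < 0, never spontaneous.
dS_backward = delta_S_isothermal(2.0, 1.0)
```

Running the movie backwards just swaps V1 and V2, flipping the sign of the logarithm, which is exactly the asymmetry the second law encodes.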
MIT_3020_Thermodynamics_of_Materials_Spring_2021
Supplemental_Video_Ternary_Phase_Diagram_and_Ouzo_Demo.txt
[SQUEAKING] [RUSTLING] [CLICKING] RAFAEL JARAMILLO: Hi. Today we're going to introduce phase diagrams of unary, binary, and ternary systems, with a real focus on ternaries. The challenge of drawing phase diagrams is how to capture phase equilibria with flat pictures on a piece of paper. So before we begin, we're going to remind ourselves of the Gibbs phase rule. I'm going to keep this down here at the bottom of the board. The Gibbs phase rule says that the number of degrees of freedom in a system is the number of components minus the number of phases plus 2. So I'll keep this down here so we can refer to it. We'll start with unary systems. I'll draw the phase diagram of a very well known unary system, for iron-- temperature on the vertical axis, pressure on the x-axis. So this is the very well known phase diagram of iron-- alpha phase, high-pressure epsilon phase, beta, gamma, and liquid. So for a unary phase diagram, we have two independent intensive variables, and those are pressure and temperature. We also have two axes on a flat piece of paper. That means we can represent the full phase diagram as a flat image. So for instance, in a unary system, the condition of two-phase coexistence implies 1 minus 2 plus 2, that is, one degree of freedom. And we do indeed see that two-phase coexistence is represented as lines on the unary phase diagram-- that is, as geometrical objects with one degree of freedom. Now we'll do binary systems. I'll draw the phase diagram for a eutectic system, something like the lead/tin system perhaps. We have lead, tin. The horizontal axis is the second component, in this case, tin. The vertical axis is temperature. And we'll make this a eutectic. So there's our lead/tin phase diagram-- the lead solid solution, a tin solid solution, and the liquid phase. So now we have three independent intensive parameters. That is, temperature, pressure, and x2. Or, if you like, you can choose x1. So if we only have a flat image, we have to represent a subspace.
So what we do is we represent a subspace of fixed pressure as a flat illustration. And 99 out of 100 binary phase diagrams you encounter will be for 1 atmosphere of pressure. So unless it says otherwise, you can safely assume it's for 1 atmosphere of pressure. So let's consider how the Gibbs phase rule plays out here. For example, two-phase coexistence implies two components minus two phases plus 2, so we have 2 degrees of freedom. But there's only 1 degree of freedom evident in this binary phase diagram. So for instance, let's consider this two-phase region here. We have a tie line connecting that material, which is a solid solution with a fixed composition, and this material, which is a liquid solution with a fixed composition. The 1 degree of freedom that's evident in the phase diagram is that you can move along these lines. That's 1 degree of freedom. The second degree of freedom is actually pressure. So it's coming out of the board. It's not apparent in this diagram. If we consider three-phase coexistence, in this case the number of degrees of freedom is 1. So the three-phase point in a eutectic diagram is, of course, the eutectic point. The eutectic point is a point. It looks like an object with 0 degrees of freedom, but we know that the 1 degree of freedom becomes apparent when you come out of the board. So as you vary pressure, this point becomes a eutectic line, and you get your 1 degree of freedom. Now we'll move on to ternaries. In this case, we have four independent intensive parameters-- temperature, pressure, and two independent composition variables. We know that, by definition, the sum of the mole fractions, or the weight fractions if you like, is 1. So we can rearrange this to give x2 as a function of x1, with an intercept of 1 minus x3 and a constant slope of minus 1. Then we're going to draw that function. So on the horizontal axis, we'll have x1 going from 0 to 1. And on the vertical axis, we'll have x2 going from 0 to 1.
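The degree-of-freedom counting used repeatedly here is simple enough to write as a one-line helper. A sketch (the function name is my own, hypothetical):

```python
def degrees_of_freedom(components, phases):
    """Gibbs phase rule: F = C - P + 2 (both temperature and pressure free)."""
    f = components - phases + 2
    if f < 0:
        raise ValueError("more coexisting phases than the phase rule permits")
    return f

# Unary, two-phase coexistence: F = 1, lines on the P-T diagram.
# Binary, three-phase coexistence: F = 1, the eutectic point, whose one
# remaining degree of freedom comes out of the board as pressure.
```

The same helper covers the ternary cases later in the lecture: two phases give F = 3 and three phases give F = 2.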
And we have a family of curves, which is parameterized by x3. So there is the line of x3 equals 0. And I'll draw a couple more lines. So here's x3 equals 0.25, x3 equals 0.5, and x3 equals 0.75. OK, we're going to represent composition on the two-dimensional flat picture. But we want to represent all three components on an equal footing, not give special status to x3. So we're going to deform our composition map into an equilateral triangle. So we'll start with the composition map from the previous slide. And we're going to imagine taking this triangle and transforming it into an equilateral one, recognizing now that this corner represented pure component one, this corner represented pure component two, and this corner here represented pure component three. So we'll label the corners as such: component one, component two, component three. And the lines of fixed component-three mole fraction remain. This construction is called the Gibbs triangle. On the Gibbs triangle, the points of pure components 3, 2, and 1 occupy the corners. As we've already seen, compositions-- that is, materials-- with fixed x3 radiate, as it were, as lines from the x3 corner. I'll draw those lines. So this, for instance, is the line of x3 equals 0.75. This is the line of x3 equals 0.5. And this is the line of x3 equals 0.25. The edge of the triangle here is, of course, the line of x3 equals 0. In other words, this is the binary line between x1 and x2. This axis has become the x3 axis. Similarly, lines of constant x2 emanate from the x2 corner. So we have lines which cross the diagram as such. And we can label them with tick marks-- 0.75, 0.5, and 0.25. This has become the x2 axis. And finally, compositions with fixed x1 emanate as lines from the x1 corner. See how I've drawn this. So we have lines 0.75, 0.50, 0.25, and this here is the x1 axis. Of course, these lines didn't cross down here, but I'm not great at drawing.
All right, so for example, how do we evaluate the composition of any point on this diagram? Let's take that point. Call it point P. Let's first read x3. What we do is we draw a dashed line parallel to these lines of constant x3 composition, and we see where that hits the x3 axis. That's at about 0.6. All right, let's do x2. So here we look for the x2 axis, and we draw a line over. That looks like it's at about 0.3. Now, of course, we know that these have to sum to 1. So we know that x1 has to be 0.1. We can verify that by drawing a line down to the x1 axis, and we do see that it falls right about 0.1. Now we're going to see how we represent phase equilibria on the Gibbs triangle. This is best done by illustration. So I'll start by drawing a fairly generic ternary phase diagram that shows single-, two-, and three-phase regions. Start by labeling our components. I'm going to put a three-phase region right in the middle. This is going to be bordered by two-phase regions and one-phase regions. The two-phase regions will have tie lines. And let me label my phases. I'll switch to pink to label the two-phase regions. Let's start by looking at this alpha-gamma region. Gamma plus alpha, a two-phase region with tie lines. You'll note that, in ternary phase diagrams, the tie lines themselves are straight. I drew them as close to straight as I could, but they're not necessarily parallel. So there's the gamma plus alpha region. Here is the gamma plus beta region. Here, of course, is the alpha plus beta region. And right in the middle we have the alpha plus beta plus gamma region. Now let's see what the implications are of the Gibbs phase rule. So for instance, in a ternary system, we have three components. For two-phase coexistence, 3 minus 2 plus 2 is 3, so in this case we have 3 degrees of freedom. So take, for instance, this solid solution in coexistence with this solid solution.
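The composition bookkeeping on the Gibbs triangle is just the constraint x1 + x2 + x3 = 1 in action: two readings fix the third. A small sketch (function name my own):

```python
def third_fraction(x_a, x_b):
    """On a Gibbs triangle the mole fractions sum to 1, so reading two of
    them off the diagram fixes the third."""
    x_c = 1.0 - x_a - x_b
    if x_c < -1e-12:
        raise ValueError("inconsistent readings: fractions sum to more than 1")
    return x_c

# Point P from the lecture: x3 read as about 0.6, x2 as about 0.3,
# so x1 has to be about 0.1.
x1 = third_fraction(0.6, 0.3)
```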
1 degree of freedom is apparent in this ternary phase diagram, just as with the binary phase diagram-- that 1 degree of freedom is represented by those lines of variable composition. But the other two degrees of freedom are in the third and fourth dimensions of this drawing, and they correspond to temperature and pressure. Let's take another example. For a three-phase equilibrium, we have 2 degrees of freedom. On this phase diagram, the three-phase region is bounded by these points, and those are points of fixed composition. So there are 0 degrees of freedom apparent in this ternary phase diagram. The two degrees of freedom of the system correspond, again, to pressure and temperature, which we can't show on a flat image. Now, in a ternary phase diagram, analyzing phase fractions in a two-phase region is exactly analogous to the binary case: we use tie lines and the lever rule. However, analyzing phase fractions in the three-phase region is a little more complicated, and we'll do that next. So we're going to analyze this three-phase region in a little more detail. These are called tie triangles. For a given temperature and pressure, the compositions are fixed. And since, of the four independent intensive parameters, the ternary phase diagram shows only the composition parameters, for typical ternary phase diagrams the temperature and pressure are fixed. In order to determine the phase fractions, we use a generalization of the lever rule. So let's consider a given overall system composition. Let's say that my overall system composition is that point there. So the way to figure out the phase fractions is, you imagine this triangle as a solid triangular sheet, and you have it balanced on a pedestal or a fulcrum. Here's my overall system composition. So we're going to try to balance it with the fulcrum at that position. So you imagine a cone. This is now meant to be a real, three-dimensional object, like a three-person seesaw.
And the way you determine the phase fractions is you add material to the three corners until the triangle is level with the ground. So if you wanted to balance this triangle, you'd probably add just a little bit of material over here, because that has a large lever arm; a little more material over here, which has an intermediate lever arm; and then a big pile of material over here-- that's the corner with the shortest lever arm. And you adjust these fractions until this triangular sheet is level with the ground, and the result is the proper equilibrium phase fractions for the system with that overall composition. We're going to illustrate ternary phase diagrams with a very well known case study of the ouzo effect. So ouzo, if you're in Greece, or pastis if you're in France, is a spirit made up of water, ethanol, and anise essential oil. All you need to know for this demonstration is that ethanol is a common solvent-- meaning, it's a solvent common to water and the essential oil-- but water and oil are, of course, insoluble. So first, let's see what happens. [VIDEO PLAYBACK] [MUSIC PLAYING] What that video shows is that ouzo, itself, is a clear, colorless solution. That is, it's a single-phase solution. However, adding just a little bit of water causes it to transform into a milky substance-- that is, a two-phase suspension. So we're going to figure out what the ternary phase diagram for this system has to look like in order to be consistent with the ouzo effect. So let's start by drawing the triangle. I'm going to put water down here, ethanol here, and the essential oil here in this corner. All right, so when we add water, we don't change the ethanol-to-oil ratio. So let's draw lines of constant ethanol-to-oil ratio. Those are lines that emanate from the water corner. Now, ouzo is a pretty strong drink. So let's say it's 100 proof-- that is, roughly 50/50 ethanol-water. So let's draw a line corresponding to 100 proof.
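The three-person seesaw balance described above amounts to solving a small linear system: the three phase fractions must sum to 1 and must reproduce the overall composition. A sketch using standard barycentric coordinates (the function name and the example corner compositions are my own, not from the lecture):

```python
def tie_triangle_fractions(c_alpha, c_beta, c_gamma, c_overall):
    """Phase fractions (f_alpha, f_beta, f_gamma) for an overall composition
    inside a tie triangle. Each c is an (x1, x2) pair; the fractions sum to 1
    and satisfy f_a*c_alpha + f_b*c_beta + f_g*c_gamma = c_overall."""
    (xa, ya), (xb, yb), (xg, yg) = c_alpha, c_beta, c_gamma
    x, y = c_overall
    # Standard barycentric-coordinate formulas (Cramer's rule on the 2x2 system).
    det = (yb - yg) * (xa - xg) + (xg - xb) * (ya - yg)
    f_a = ((yb - yg) * (x - xg) + (xg - xb) * (y - yg)) / det
    f_b = ((yg - ya) * (x - xg) + (xa - xg) * (y - yg)) / det
    f_g = 1.0 - f_a - f_b
    return f_a, f_b, f_g

# The centroid of the triangle balances with equal piles at each corner:
fracs = tie_triangle_fractions((0.8, 0.1), (0.1, 0.8), (0.1, 0.1), (1/3, 1/3))
```

An overall composition sitting exactly on a corner returns a fraction of 1 for that phase, which matches the lever-arm intuition: zero lever arm, all the weight.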
All right, so now we have a little bit of understanding how to navigate this diagram. There's our 100 proof line. That is 50/50 ethanol-water. And there's our lines of constant ethanol-to-oil ratio. So let's imagine that, when we buy it at the store, or we have it delivered to our table in the taverna, we start at this composition. It's mostly water and ethanol. It has a little bit of essential oil. And we need to figure out a phase diagram such that, when we move along that line of constant ethanol-to-oil ratio, we move from a single phase to a two-phase region. I'm going to draw the boundary of a two-phase region that's consistent with that observation. There. I'll draw the tie lines. Right, so here is a proposed phase diagram. We start with ouzo, single phase, solution of water, ethanol, and essential anise oil. When we add a little bit of extra water, we move along this line of constant ethanol-to-oil ratio until we enter into the two-phase region. And now I have a cloudy, milky suspension with a water-rich phase co-existing with this phase, which lies roughly in the middle of the ternary phase diagram. All right, so now we have our cloudy, two-phase ouzo drink here represented by that point in the two-phase region. That's at ambient temperature. Now, we know that increasing temperature makes solution-forming more favorable. So let's see what happens in practice [VIDEO PLAYBACK] [MUSIC PLAYING] Right. So what we saw was that, at 41 degrees C, the two-phase regions seemed to be resolving into a single-phase region. And as we continue to increase the temperature to 47 and 62 degrees Celsius, we've recovered a single-phase solution. Now, granted, you could have been changing composition as you have differential evaporation from the solution. But the cuvette was capped. So let's assume that the overall system composition of that liquid-liquid two-phase system did not change. Let's see how we represent the effect of changing temperature on the ternary phase diagram. 
So I will start by labeling our initial two-phase region boundary with the temperature at which that initial video was shot, 20 degrees C. Now let's draw a different boundary that's consistent with the observations at 41 degrees-- they're just on the border. I'll label that 41. As we continued to increase the temperature, solution-forming seemed to become more and more favorable. So here, for instance, might be 47, and here is 62. So what we see is that, as we increase the temperature, our overall system composition at this point moves from being within the two-phase region to being back within the one-phase region. So in this way, we can show a little bit about what happens in the third dimension on the flat ternary phase diagram by drawing contours of the borders between regions, as if they were contours on a topographical map, and labeling those contours by temperature.
Lecture_17_Solution_Models_Ideal_Dilute_and_Regular.txt
[SQUEAKING] [RUSTLING] [CLICKING] RAFAEL JARAMILLO: So we've motivated solution modeling. And now we're going to start to see how it's done. So this is the outline of today's lecture, because we're going to cover a lot. I don't mind covering a lot here; it's covered well in the text as well. So I'm basically being a tour guide here. We're going to start by introducing the ideal solution model. Then we're going to talk about nonideal solution models. We're going to talk about two particular types of nonideal solution models, the dilute solution model and the regular solution model. And I hope folks are ahead of me and have read chapter 8 in the book. If you have, it will help. I flipped this upside down-- better. OK, so let's talk first about ideal gas solutions. We've actually already done this. Ideal gas solutions: we have pure A at some temperature, pressure, and some volume. And we have pure B at the same temperature, same pressure, and in general a different volume. And we mix them. And if these are ideal gases, PV equals nRT tells us that we get A and B at the same temperature, same pressure, and a total volume that is simply the sum of the two volumes. So this is for ideal gases. We know this. And we've seen that we can model this process, this process of making the mixture, as an isothermal expansion for each gas. So we've seen this, and this is the result that we got. The chemical potential is the molar Gibbs free energy, so at fixed temperature, d mu equals v dP. That means the total change in the chemical potential of each component throughout this process is going to be the integral of its molar volume with respect to pressure: the integral from P to P sub i-- with a dummy pressure variable-- of RT over p, dp. And this equals RT ln of the partial pressure of i over the total pressure. We already worked on this. We derived this already.
And this is the change of chemical potential for isothermal expansion. This is also-- and here's the new part-- delta mu of i for the mixing process. Well, that's actually not new; we used this in the previous unit on reacting gas systems. But we're going to really focus on this. This delta is now going to correspond to the mixing process. OK, so what is the Gibbs free energy of this mixing process? We know how to write that down now, because we've done the rudiments of solution modeling. It's the weighted sum, by mole fraction, of the partial molar property. And we just take from the previous board. So-- sorry about that-- we wrote down the partial molar properties, the partial molar Gibbs free energy. That is the change of chemical potential with mixing. And now we can write this using the formulas that we've already seen. It's the sum over the mole fractions times the partial molar property, and we just borrow from the previous slide: delta G of mixing equals the sum over i of x sub i RT log of P sub i over P. And we're going to use Dalton's rule, P sub i over P equals x sub i, to re-express the argument of the log as a mole fraction. So this is a solution model. We've talked about solution models, but we haven't actually seen one written down mathematically before. So this is one, and it's actually an important one. So let's graph this and see what it looks like. We'll graph delta G of mixing. And the first thing is that it's strictly negative. You can see that from the log expression: mole fractions are less than or equal to 1, so this sum is less than or equal to 0. That's an important thing to observe. The next thing to observe is that there are very few parameters in this model. There's composition, x sub i, and there's temperature. And that's it. So if I plot this solution model versus composition, I'm going to have a family of curves of varying temperature.
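The ideal-solution free energy just written down, ΔG_mix = RT Σᵢ xᵢ ln xᵢ, can be sketched numerically. A minimal version (the function name and the value of R are my own choices):

```python
import math

R = 8.314  # molar gas constant, J/(mol K)

def dG_mix_ideal(xs, T):
    """Ideal solution model: Delta G_mix = R T sum_i x_i ln x_i.
    Terms with x_i = 0 contribute nothing (x ln x -> 0)."""
    return R * T * sum(x * math.log(x) for x in xs if x > 0)

g_300 = dG_mix_ideal([0.5, 0.5], 300.0)   # strictly negative
g_600 = dG_mix_ideal([0.5, 0.5], 600.0)   # more negative at higher T
```

Evaluating it for a pure component gives exactly zero, and sweeping T reproduces the family of curves on the board: deeper wells at higher temperature.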
And the way this is going to work is, with increasing temperature, delta G mix gets more negative. So we're going to note that. Note: delta G mix is less than 0, and d squared delta G mix / d x2 squared is greater than 0 everywhere. That is, it's negative and it's curved up everywhere. And what we're going to see in a couple of lectures is that, mathematically, this ensures that the mixing is always spontaneous. Which is good, because we expect that to be the case for ideal gases mixing. This goes back to the baby book. When you have these two populations of gas molecules, and they don't interact with each other, and then you remove the barrier between them, you expect them to mix spontaneously. And that's what this math is enforcing. So, OK, now what we're going to do is just abstract this a little bit and call everything we just did the ideal solution model. And we can write it down very compactly. So this model is motivated by, or derived from, ideal gas mixing. But it approximates a broader class of real-world systems. So this is very typical in science. You have some simple case. You treat it as a model system. You develop the model. You study the model. And then you say, oh, well, maybe this model applies elsewhere. Maybe it does. Maybe it doesn't. So here are some properties of the ideal solution model. The partial molar entropy of mixing: we can get this from the combined statement, and this derivative is really easy. It's just minus the partial derivative of this with respect to temperature, and we get minus R ln x sub i. And this is everywhere positive. That's good-- the entropy of mixing is positive. Another thing: the partial molar volume of mixing. Again, we get this from the combined statement, going back to chapter 4 in the text. And there's no pressure dependence in this model. So there's no volume of mixing. That's good. That's what we expected.
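The partial molar results above can be sanity-checked numerically: the partial molar entropy is minus the temperature derivative of Δμᵢ, and H = G + TS then forces the partial molar enthalpy of mixing to vanish. A sketch (the function names and the value of R are my own):

```python
import math

R = 8.314  # molar gas constant, J/(mol K)

def dmu_i(x, T):
    """Partial molar Gibbs free energy (chemical potential) of mixing: RT ln x."""
    return R * T * math.log(x)

def dS_bar_i(x):
    """Partial molar entropy of mixing, -d(dmu_i)/dT = -R ln x.
    Positive for any x < 1."""
    return -R * math.log(x)

def dH_bar_i(x, T):
    """Partial molar enthalpy of mixing via H = G + TS: dmu_i + T*dS_bar_i.
    Identically zero for the ideal model."""
    return dmu_i(x, T) + T * dS_bar_i(x)
```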
What about the enthalpy of mixing? This is delta mu of i plus T times the partial molar entropy of mixing-- going back to the definition of our potentials, H equals G plus TS. And if you write this out, you see that it's 0. And likewise, the partial molar energy of mixing equals the partial molar enthalpy of mixing minus P times the partial molar volume of mixing, and that's also 0. So this is simply using the stuff which we did a couple of weeks ago-- the definitions of these potentials and the combined statement of the first and second law-- and applying those operations to these quantities in mixing. And I encourage you to work through these on your own. And so the results here are what we want. The entropy increases on mixing, so this is a process driven by entropy increase, which we postulated back in lecture 1 with the baby book. There is no volume of mixing for ideal solutions; the final volume is the sum of pure component contributions. That's good. And there are no interactions between molecules-- that was the foundation of the ideal gas model-- so no bonds are made or broken by the mixing process, and there's no change in enthalpy or energy. So what we're seeing here is, again, what we intuitively have known now for weeks. We're just seeing it in this formalism. OK, so this is the ideal solution model. Great, let's move on. STUDENT: I have a quick question, sorry. RAFAEL JARAMILLO: Yes, please. STUDENT: Is there a reason why we write the bar for entropy, enthalpy, et cetera, but then we don't write the bar for the mu's, for the chemical potential? RAFAEL JARAMILLO: Yeah, Gibbs is special. The chemical potential is the partial molar Gibbs free energy. We don't have a special term for partial molar entropy, partial molar volume, partial molar H, and partial molar U. It's just, historically, there's not. I mean, you could imagine coming up with new words and using different Greek letters.
But Gibbs is so important that the partial molar Gibbs free energy has been named the chemical potential. And so it's special. Good question. STUDENT: OK, thank you. RAFAEL JARAMILLO: Yeah, Gibbs is special. OK, all right, so what happens if the molecules do interact? What happens if we're not mixing ideal gases? Well, we want to capture the deviation from the ideal model. And what you're about to see is a very familiar thermo thing, which is more bookkeeping. Your pattern recognition might start kicking in soon that this is a thermo thing; this is what thermodynamics does. We let delta mu of i equal RT log of a sub i-- I'm going to introduce that in a second-- which equals RT log of gamma sub i times x sub i. Just bear with me; these are just definitions. a sub i is being defined as the activity of component i. And it's further defined by the product of gamma sub i and x sub i. And gamma sub i is known as the activity coefficient of component i. And why do they do this to us? What is with all these terms? If we rewrite delta G of mixing in terms of these activities and activity coefficients-- let me just pull what we know to be the case. This is the sum over i of x sub i delta mu of i, which equals the sum over i of x sub i RT log of gamma sub i x sub i. And we're going to use the property of the log function, and we're going to see that this turns into the sum of x sub i RT log x sub i plus the sum of x sub i RT log gamma sub i. So the reason why they do this to us is because, when we model solutions in this particular way, we have this neat separation between the ideal case and deviations from ideal. And we can see that nonideal behavior is captured by gamma sub i not equal to 1. So if you have ideal behavior, the activity coefficient of every component is 1, and this thing which we call the activity is just equal to the mole fraction. And if the solution does not behave ideally, if it deviates, we just dump that deviation into this term over here.
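The bookkeeping split of ΔG_mix into an ideal term plus an excess term carried by the activity coefficients can be written out directly. A sketch (the function name and R value are my own):

```python
import math

R = 8.314  # molar gas constant, J/(mol K)

def dG_mix(xs, gammas, T):
    """Nonideal Delta G_mix = RT sum_i x_i ln(gamma_i x_i), computed as the
    ideal term plus the excess term carried by the activity coefficients."""
    ideal = R * T * sum(x * math.log(x) for x in xs if x > 0)
    excess = R * T * sum(x * math.log(g) for x, g in zip(xs, gammas) if x > 0)
    return ideal + excess

# With all gamma_i = 1 (ideal behavior), the excess term vanishes.
```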
And we say it's captured by these activity coefficients being not equal to 1. And we can figure out what they are later-- so, again, just a bookkeeping exercise at this point, and also introducing essential terms. Now, this activity, if you're in course 10, this is equal to fugacity. And that's not exactly true. Fugacity sort of is activity in certain cases. But, anyway, if you've heard that term fugacity, or you wondered what it was when you were reading the Denbigh book, here effectively we have it. It's the activity. And so we now have this mathematical framework to capture nonideal behavior. And so what we're going to do next is introduce two solution models for nonideal solutions. The first one is going to be the dilute solution model. OK, so this is the dilute model. It's based on some physical reasoning, which is always a relief after all this math-- so we have solvent and solute. OK, so what does dilute mean? Not concentrated. So these are the assumptions. We're going to make two assumptions about these physical systems-- one, each solute molecule is surrounded by solvent. And solute-solute interactions are negligible. So this has physical implications. So this is like, someone drops you in the middle of the woods very, very, very far from civilization. And you start acting the way you do when there's no one else around. And there's not a chance of you running into somebody. And however you act, that's up to you. But this is the point. You are acting as if there's not another soul on Earth. You're in the middle of the woods. And now let's say that somebody else gets dropped into the same woods 10 miles away. All right, that person will also act as if there's not another soul on Earth, however they act. 
And if we populate the woods in this very dilute way, where there's not anybody within miles and miles of each other, then each individual person will act as if they're the only person in those woods-- until we get to a point where they start becoming aware of each other. And just as you might, if this were you in the woods and you started becoming aware of somebody else where before you didn't think there would be anyone, your behavior might change a little bit. It's the same with dilute solutions. We're in the dilute limit when each solute acts as if it's the only one, completely surrounded by solvent, and all of its thermodynamics are dominated by solute-solvent interactions. So that's the first assumption. And the second assumption is that each solvent molecule, on average, is surrounded by pure solvent, and therefore acts like a pure substance. So this is a little bit harder to justify. And actually, on the P set, we ask you to justify this. We don't put it that way on the P set, but effectively, what you're asked to show is that given the first assumption, this follows. So, yeah, there are some solvent molecules here that are near solute, and they know that there's solute. But if you averaged over all the solvent molecules, the average solvent molecule is surrounded by only solvent molecules. So these are the physical assumptions. And here are the mathematical implications. Assumption number 1 leads to something called Henry's law of the solute. And that's the following: in the limit of mole fraction x2 going to 0-- the second component here is the solute-- the activity of component 2 approaches gamma 2 0 times x2, where gamma 2 0 is called Henry's constant. This mathematical form results in what we want, which is that, thermodynamically, each additional solute molecule thinks it's the only one in the forest.
Assumption number 2 has the following implication, called Raoult's law of the solvent. Raoult's law of the solvent is that in the limit of x1 going to 1-- that is, the solvent becoming increasingly concentrated-- the activity is simply equal to the mole fraction. So these are Henry and Raoult. I don't know whether they were contemporaries, whether they were BFFs or whatnot, and I have a really hard time remembering which one is which. Just a note for the P set: Raoult's law can be derived from Henry's via Gibbs-Duhem integration, which is discussed in detail in section 8.20.3. So that's relevant for the P set-- Raoult's law of the solvent, and Henry's law of the solute. All right, so let's plot how those look. The variation of activity with composition could look like this. This is an example, and I'm just going to copy this figure from DeHoff, figure 8.4, because I think it's useful. All right, so it could look like this. What I'm trying to plot here, as a function of x2, is the activity of the first component and the activity of the second component. Each axis on this plot goes between 0 and 1. It doesn't have to-- you can have activities greater than 1-- but we're going to keep it simple here. And so the ideal solution model is that: a1 with slope minus 1, ideal; a2 with slope 1, ideal. Henry's law here is going to have a slope not equal to 1. So this is a slope of gamma 1 0-- Henry. And, similarly, we can have some slope gamma 2 0-- Henry. And the reality could be something like this. When x2 is concentrated, x1 is dilute, and you start out on Henry's law there. And then it deviates, and at some point you asymptote to Raoult's law. So this might be reality. There's lots of different ways that this could shake out. But as drawn here-- again, for dilute x2, you start along Henry's law, and then at some point asymptote to Raoult's.
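The two limiting laws sketched on that plot are one-liners, and when Henry's constant happens to be 1, both reduce to the ideal model. A sketch (function names and example numbers are my own illustrations):

```python
def activity_henry(x_solute, gamma0):
    """Henry's law for the solute in the dilute limit: a2 ~ gamma2_0 * x2."""
    return gamma0 * x_solute

def activity_raoult(x_solvent):
    """Raoult's law for the solvent as x1 -> 1: a1 ~ x1."""
    return x_solvent

# Example: a dilute solute with Henry's constant 2.5 at x2 = 0.01.
a2 = activity_henry(0.01, 2.5)
```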
So this could be what you get. All right, Raoult's law, Henry's law; Henry's law, Raoult's law, and one implies the other. All right, so that's dilute solutions. And I ask you one problem on dilute solutions on the P set. We're not going to spend a lot of time on dilute solutions in this class. The reason why we took this time, these 15 minutes of lecture and a little bit of time on the P set, is to set you up, for those of you who are going to do aqueous chemistry or electrochemistry. You'll want to know this. So that's why we did that. There is another type of solution modeling, which is much more useful for 3.020. And those are called regular solution models. So these are models that we're going to spend the next couple of lectures on, at least the lectures that are new material. And you'll spend more time on them on the P sets. So regular solution models are like this. We also start with some physical assumptions, and then we develop the implications of that-- one, the entropy of mixing. The entropy of mixing is captured by the ideal model, delta S mix ideal. That gets a little bit fuzzy over there. I'm sorry for that. OK, so we're going to assume that the ideal model captures the entropy of mixing. And as we'll see later when we do statistical thermodynamics, we will show that this is what's called configurational entropy. Configurational entropy is entropy associated with the spatial configuration of things. And, again, you can recall the pictures in the baby book, mixing represented by where things are in space. So that's configurational entropy. And that's the ideal model. But we're also going to allow intermolecular interactions. Intermolecular interactions are nonzero or finite, I don't know-- exist. So there you have it. What that means is going to be this, nonzero enthalpy of mixing. So in this model, when you make a mixture, some bonds are made. Others are broken. And there's going to be an energy cost to that. So this is what we get.
We get delta G of mixing equals delta H of mixing, generally nonzero, minus T delta S mixing, where this is ideal. That's ideal. And that is equal to the following. OK, so this is what's called a regular solution model. And I'll give you one more piece of information, which is the simple regular model. The simple regular model is the simplest case of regular solution models for binary systems. And we can write it down really easily. This is it. Delta H mixing equals a0 x1 x2. So it's a model with one parameter. It's a mixing term parameterized by a0, which typically has units in joule per mole. If a0 is greater than 0, this is endothermic mixing. Endothermic mixing, we have to put in heat energy. So maybe some bonds are broken. The molecules end up in a higher energy state. If less than 0, this is exothermic mixing. OK, and so with that model for the enthalpy of mixing, we have the full solution model, which is a0 x1 x2 plus RT sum i x sub i natural log x sub i. OK, so this is the simple regular model. And we're going to spend some time on this. You're spending some time on the P set. Yeah, it's got two parameters. It's got the enthalpy-of-mixing parameter. And it's got temperature. And from these two parameters, you can generate a nice diversity of physical behavior. And we'll see that in the weeks ahead. OK, today's lecture covered a lot of essential information. And I went rather quickly. So I have plenty of time for questions. And I will keep the recording on for some time now and then switch it off at 10:55. And before we start that, I'll remind you my office hours have switched to Wednesday at 12:00. So that's in about an hour, an hour and 15 minutes from now. Office hours switched to Wednesday at 12:00. And there's a very slightly revised syllabus for thermo. None of the due dates change or anything like that. It doesn't really change your life substantially. But it is a little bit of a more accurate view of where we're heading in the next couple of lectures.
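The simple regular model written out here is easy to evaluate numerically. A minimal sketch, with hypothetical a0 values:

```python
import math

R = 8.314  # gas constant, J/(mol K)

def dG_mix(x1, a0, T):
    # Simple regular solution model:
    #   dG_mix = a0*x1*x2 + R*T*(x1*ln(x1) + x2*ln(x2))
    # a0 in J/mol; the values used below are hypothetical.
    x2 = 1.0 - x1
    dH = a0 * x1 * x2
    dS_ideal = -R * (x1 * math.log(x1) + x2 * math.log(x2))
    return dH - T * dS_ideal

# a0 = 0 recovers the ideal solution: mixing is always favorable
print(dG_mix(0.5, 0.0, 300.0))      # negative
# strongly endothermic mixing at low T: dG_mix can turn positive
print(dG_mix(0.5, 15000.0, 300.0))  # positive
```

With a0 = 0 you recover the ideal solution, while a large positive a0 at low temperature makes dG_mix positive near x = 0.5 -- a hint of the diversity of physical behavior (including demixing) mentioned for the weeks ahead.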
The exam is Monday. And it will cover unary phase diagrams and reacting ideal gas mixtures and the rudiments of ideal solutions. But none of this really involves solution modeling apparatus that we're working on now. What that means is that the P set that's due on Friday is only minimally covered on the exam. So you can focus on slightly earlier material for the exam-- exam Monday. And we will get new thermo next Wednesday-- not this Wednesday, but next Wednesday. And what is that, the ninth-- is that the ninth-- next Wednesday, 4/9. this Friday-- sorry, next Friday. This Friday, we're going to work some problems. And then I guess it will be new thermo. But we're going to walk through the Nernst equation just for your own curiosity and interest. That won't be tested or covered formally in this class. And then Monday is the exam. And then Wednesday we're going to talk-- we're going to spend an hour talking about social implications of material science. And so we won't really see new, hard-core course content until next Friday. OK, this is time for questions. STUDENT: Can you explain the delta H of mixing and how you got that? RAFAEL JARAMILLO: Yeah, so the way I got it is I just told you that's the model. It doesn't apparently come from anywhere right now. But I'm glad you asked because we do have something called-- here, let's do this. OK, that's basically what asked. And the answer is that you can say, oh, it's empirical. I made a bunch of measurements and this fit the data. And, historically, that's probably the most accurate answer. But it also comes from something called the quasi-chemical model. And we are going to, I believe next Friday, start right here where we try to justify that. And as a preview, what we do is we imagine-- green and red, this is never good. Red and blue is OK. We imagine atoms or molecules on a lattice like this. I'm not drawing a nice lattice here. We imagine molecules on a lattice where we have two different components on the lattice. 
And we say, OK, all the energy is nearest-neighbor bonding. So we have blue-blue, blue-blue, blue-blue, and so on. This is a lot harder to say over and over again than it would seem. And, oh, here we've got some blue-red bonds. And just for completeness, let's say I have a red over there. There's a red-red bond. And what we're going to do is we're going to count up these bonds. And we're going to do some combinatorics. And we're going to end up with this expression. So it does come from somewhere. I would be willing to bet a cup of coffee that historically it comes from empirical data, comes from data and fitting and people making reasonable guesses as to what model will empirically fit the data. And then at some point somebody worked out this model and said, oh, this is neat. This justifies what we've been using all these years-- forgot a blue-blue. Does that help? STUDENT: Yeah, thank you. RAFAEL JARAMILLO: How would you get that? How would you get that? Well, you'd have a beaker. And you would have inside of that beak-- it doesn't have to be a beaker. It can be a crucible or anything. And inside of that you'd have another little beaker or crucible. And this beaker would be full of something which has a large heat capacity, like water. And it's easy to measure the temperature of this thing. So you've got a thermometer in there. So we've got a big thing of water and we've got a thermometer in there. And what we're going to do is we're going to make mixtures. So we're going to take some of A. And we're going to take some of B. And we're going to add them-- just mixing up my colors here-- we're going to add them in different quantities. And we're going to end up with different solutions in this little beaker. And we're going to measure the temperature rise.
And so for every composition that I have, we're going to have something like this. We're going to have x of A, x of B and a temperature rise. And we measure. Let's see, 0.99, 0.01, 0.98. We're doing a bunch of experiments, right. And I'm going to have numbers for temperature rise. Why does that help me? Well, the temperature rose in this big bath of water. So I can estimate the heat evolved from this mixing process because I know the heat capacity of water. So the heat evolved are going to be numbers. And then I can say, oh, well this is a constant pressure process. So heat and enthalpy are about the same. And, OK, I can model the heat evolved as a function of composition. And maybe it fits this functional form. That's how this actually would go. That's how this actually does go when we do calorimetry in the lab. This isn't like stuff relegated to the distant past. I mean, we do this every day in the lab, especially those of us that have been focused on thermochemistry and such things, so.
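The fitting step just described -- modeling measured heats of mixing with dH_mix = a0*x1*x2 -- can be sketched as a one-parameter least-squares fit. The data here are synthetic, made up purely to illustrate the procedure:

```python
def fit_a0(xs, dH_mix):
    # Least-squares fit of the one-parameter model dH_mix = a0*x1*x2
    # to enthalpy-of-mixing data (x = mole fraction of component 2).
    num = sum(h * x * (1.0 - x) for x, h in zip(xs, dH_mix))
    den = sum((x * (1.0 - x)) ** 2 for x in xs)
    return num / den

# Synthetic "calorimetry" data -- these numbers are made up for the sketch.
xs = [0.1, 0.3, 0.5, 0.7, 0.9]
a0_true = -5000.0                       # exothermic mixing, J/mol
dH = [a0_true * x * (1.0 - x) for x in xs]
print(fit_a0(xs, dH))                   # recovers a0_true
```

In a real experiment the dH values would come from the measured temperature rise times the heat capacity of the water bath, as described above.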
MIT 3.020 Thermodynamics of Materials, Spring 2021
Lecture 27: Introduction to Statistical Thermodynamics
RAFAEL JARAMILLO: We're going to move on now from CALPHAD. There's a couple, but we're going to move on. And we're going to do statistical thermodynamics for three or four lectures. All right. So a note on the reading here, almost everything that we're going to cover is going to be in chapter 6. In the syllabus, I also have assigned chapter 3 of a different book by Chandler, and that PDF is on the website. That is a more advanced book. You don't really need the formalism in that book to do the problem sets, for instance, or to succeed on exam 3. So I would say that reading is quasi-optional. I hesitate to say that because I wonder whether anyone would read it at all. But it helps to know that because it is a more advanced book. I find the presentation in Chandler to be just masterful. I really like that book. I like how it's written. I like how the material's presented. I like how it's presented better than the way DeHoff presents this topic. But that said, it is a little bit more advanced than we need. So it's good that you know that. So let's talk about microstates-- macrostates, and microstates. Macrostates and microstates. So let's start with microstates. A microstate-- these are key, foundational definitions. This is a description of the state of every atom in a system, every atom in a system. So that's kind of crazy, right? That's a crazy thing to think about. It's describing every atom in a system. In practice, we almost never do it. But in theory, it's the foundation of this topic. So, for example, if you have a mole of atoms-- let's say, an ideal gas, a mole of argon atoms, you have on the order of 10 to the 23 velocity-position pairs. So you have velocity, and you have position. These are vectors, and they're indexed by atom. And you have 10 to the 23 of these pairs, so just an ungodly amount of data.
And for those of you who don't know, this script is often used to indicate on the order of, so on the order of 10 to the 23 velocity and position pairs. OK. A macrostate is a description of a system on a macroscopic length scale, averaging over the microscopic. And by microscopic, I mean atomic or molecular processes. So macrostates are what we have been dealing with to date in class. And so we know that even if we might have 10 to the 23 atoms, we can describe the state of the system with a very small number of parameters, like pressure, temperature, and mole number. So at times, we can imagine the atoms zooming around. We can imagine that, right? And you could draw pictures, right? So this is going back to the baby book. This baby book is called Statistical Physics for Babies. So we're back to the baby book. And so we imagine that we can draw pictures of where the atoms are and how they're moving. This is velocity-position pairs. But we never actually do it, right? We never really do it. We characterize a system by the pressure and the density and the temperature, a small number of variables. So we're going to spend some time counting microstates. Because it's really through counting microstates that the profound nature of this emerges. Here's a simple example. This is right from DeHoff. So this is combinatorics. Let's say four particles, A, B, C, and D, and two possible states for each particle, 1 and 2. So you can imagine these states are two boxes. You can imagine a carton that's separated into two boxes. We'll call this 1. We'll call this 2. And you can put some particles in here, and you can put some particles in here. And we're going to figure out how to distribute the particles. So let's do some counting, so state 1, state 2. OK. So we can have all the particles in state 1. We can have three particles in state 1. We can have two particles in state 1. And I'm going to imagine-- you could continue filling this in if you wanted to.
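Continuing to fill in this table by hand gets tedious; a short script can enumerate it exhaustively. This is a sketch, not part of the course materials:

```python
from itertools import product
from collections import Counter

particles = "ABCD"
# A microstate assigns each labeled particle to state 1 or state 2.
microstates = list(product((1, 2), repeat=len(particles)))
# The macrostate only records how many particles are in state 1.
macro = Counter(m.count(1) for m in microstates)

print(len(microstates))  # 16 microstates in total
print(macro[4])          # "all particles in state 1": 1 microstate
print(macro[3])          # "three particles in state 1": 4 microstates
print(macro[2])          # "two particles in state 1": 6 microstates
```

The counts 1, 4, and 6 are exactly the answers to the counting questions that follow.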
So these are a list of all possible micro-- I need to be careful with my writing. This is a list of all possible microstates. You can imagine listing these out. So now I'm going to tell you what the macrostates are. Macrostate 1, macrostate 1 is going to be something described as follows-- all particles in state 1. Macrostate 2, macrostate 2 is going to be three particles in state 1. Macrostate 3, macrostate 3 is going to be two particles in state 1. And you can continue to delineate these. And now we're going to count. So macrostate 1, macrostate 1, that's macrostate 1. So how many different microstates are there within macrostate 1? AUDIENCE: Just one. RAFAEL JARAMILLO: Just one, that's right. OK. Macrostate 2 has these microstates. So how many microstates are there that are consistent with this macrostate definition? AUDIENCE: Four. RAFAEL JARAMILLO: Four. There are four states that are consistent with this macrostate. So what we're doing here is we're defining microstates and then we're counting macrostates. Define microstates, count microstates. That's what we're doing. So let's do this a little bit more general. Well, before I do the general case, let me again-- the baby book is fantastic here, right? The baby book does exactly this. DeHoff and baby book are very similar books. So here is a case of six particles and two states. And there's only one way for all six particles to be on the left-hand side. So this would be a macrostate called-- the volume is 1/2. Let's just say this is a volume of 1. This macrostate would be defined as the volume is 1/2. There's only one way for them to all be on the left-hand side. Maybe you say there's two ways. The volume's 1/2, it can be left or right. That's what we're doing. We're counting, right? And then you can say, OK, let's say that I have a state where I have two regions, one with density 5 over 1/2 and one with density 1 over 1/2. How many different ways can that macrostate be instantiated by microstates? 
And you say, OK. 1, 2, 3, 4, 5, 6. There are six ways. You can define a different macrostate, and you can count. So the baby book here is doing exactly what DeHoff is doing. So now we'll do a little bit more of a general case. Let's do a general case. n particles distributed over r states. n particles distributed over r states. And we're going to introduce capital omega. Capital omega is going to be number of microstates in the macrostate defined as n1 particles in state 1, n2 particles in state 2, and so forth. So the macrostate is defined by a set, n sub i. N sub i, that set describes the macrostate. And sometimes those will be called occupation numbers. So when you do probability and statistics or when you do quantum mechanics-- and especially, quantum statistics-- you spend a lot of time talking about occupation numbers. So now you've seen the occupation numbers. I mean, it's encountered in other areas as well. I'm just thinking about-- what am I thinking about? I just came out of a talk about material science challenges for quantum computing, and so I was seeing a lot of that stuff. So that's what's on my mind. All right. Anyway, here's the answer. The answer is this. And if there's time, we can come back to derive this. The book does not derive this. It's not hard to derive. OK. So this is the number of microstates for n particles distributed as follows, with n sub 1 in state 1, and n sub 2 in state 2, n 3 in state three, and so forth. So again, we can come back to that. But I want to show you something qualitative about that function. For large systems, for large systems-- that is, n much greater than 1 and especially when r is much greater than n-- so a very, very large number of particles and an even larger number of places to put those particles. This function becomes very sharply peaked around a particular macrostate. What does that look like? Here, I'm going to plot omega, and it's going to be on a log scale. 
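The formula on the board here (which isn't transcribed) is the standard multinomial coefficient, Omega = n! / (n1! n2! ... nr!); this is the result stated in DeHoff. A sketch:

```python
from math import factorial

def omega(occupations):
    # Number of microstates for the macrostate {n_i}:
    #   Omega = n! / (n_1! n_2! ... n_r!),  with n = n_1 + n_2 + ... + n_r
    w = factorial(sum(occupations))
    for n_i in occupations:
        w //= factorial(n_i)
    return w

# The four-particle, two-state example from above:
print(omega([4, 0]))  # 1
print(omega([3, 1]))  # 4
print(omega([2, 2]))  # 6
```

This reproduces the microstate counts of the four-particle example, and it works just as well for any number of states r.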
I'm not going to put tick marks on there, but I just want you to appreciate the orders of magnitude involved, so I'm going to tell you this is a log scale. And the abscissa here are macrostates, macrostates-- let's say indexed somehow. And you're going to have some macrostates for which the distribution of microstates is overwhelmingly higher than for all other macrostates. It's this area under the curve here. Let's draw that in orange so it's even possible to visualize it. This is a highly-peaked distribution function-- so again, another concept from probability and statistics. And this state, we will see, is the most probable macrostate. It's the most probable macrostate. So we saw that a little bit at the beginning of class when we looked at ways of distributing six balls in two halves of a container. And Ferrie told us that the reason why this is a more common macrostate-- of course, he didn't use that term-- the reason why this is the most common macrostate is because it has the greatest number of microstates. So this is a macrostate with one microstate. This is a macrostate with six microstates. This is a macrostate with 15 microstates. And this is a macrostate with 20 microstates. And that was the reason it was the most often seen. You could say it's the most likely. And we will see soon, within a lecture and a half, that it is also the state with the highest entropy. So we're starting to see a connection between equilibrium and likelihood. OK. So let's do some more counting. Let's work on defining-- I think I have too many letters in that-- defining-- or, that is, counting-- the macrostates for n particles in r states. And again, you can think of these as boxes, places to put the particles. So we're going to do some more counting. I'm going to index these things just so we have an ordinate. And so-- let's say-- and I'm going to write sets of occupation numbers. These are going to define the macrostates.
Let's start with all the particles in the first box, so n followed by a bunch of 0's. That is a good macrostate-- all the particles, let's say, on the left. That's what that would correspond to. Let's call that macrostate 1. We could also have all the particles in the second box or all the particles in the third box and so on. Let's see. How many of these states are there going to be? How many of these sort of states are there going to be? These states are defined as all particles on one box. How many of those are there going to be? AUDIENCE: R? RAFAEL JARAMILLO: R, right. There are r choose 1 of these. I'm using the parentheses notation for the binomial distribution. Some of you have seen this as r choose 1. It's a little more compact to write in parentheses. If you've never seen the binomial distribution, we should make time to talk about it. But why is r choose 1? It's because you have r states and you're just choosing one of those states into which you're going to put all the particles, so r choose 1. And, by the way, r choose 1 is just r, right? You have r choices. OK, all right. Next, what about states where you have n minus 1 particles in a box and one particle in another box? And the other box is, of course, empty. And then there's going to be a bunch of ways that you can do this, right? Here, I put n minus 1 particles in the first box and the remaining particle in the second box. Here, I put n minus 1 particles in the first box and the remaining particle in the third box-- and so forth. OK, this is a challenge. Can somebody tell me how, in a compact way, to count how many states such as these are there? How many total macrostates such as these are there? How do we think about it? I have r boxes. And out of those r boxes, I'm picking two of them. I'm choosing two of them. I'm choosing two of those boxes to have particles in them where the rest are empty. So do we have an expression for how many ways can we choose 2 boxes out of r boxes? 
AUDIENCE: R choose 2? RAFAEL JARAMILLO: R choose 2. Good. And this keeps going until you get to r choose minimum of r and n. AUDIENCE: Can I ask you a question? RAFAEL JARAMILLO: Mm-hmm. AUDIENCE: Would it be r choose 2 regardless of how you distribute the particles within those two boxes? So, like, say you did, like, n minus 2 and then 2 in the next. Would it still be r choose 2? RAFAEL JARAMILLO: n minus 2 and then 2. That would be-- AUDIENCE: Or oh, OK. By definition, you can only have, like, max 1 or-- RAFAEL JARAMILLO: No, that's OK. That's going to be a different set, and that would also be counted this way. It's just not the way that I've defined it here. AUDIENCE: OK. RAFAEL JARAMILLO: There are so many ways of doing this. I'm only giving you a very introductory example here of how to start thinking through it. But we'd be writing all day if we were to write these all out. We'd be writing these all day. But that's a good way to think about it. So now, in principle, we now know how to identify-- or you could say count-- the macrostates. We know sort of a systematic way now to generate these sets. And with the result in DeHoff, we know how to count microstates for each macrostate. We have a systematic way of moving through all the macrostates and describing them and counting them. And we have an equation for the number of microstates for a given macrostate. So we have this procedure. We have this algorithm. And so now we can calculate and plot the distribution function, or the log of omega. So let me show a very, very simple case, extremely simple. So I wrote a little script that does this, and I ran the script for a couple of cases. And so as I move from left to right here, I'm changing r. I have three particles in five boxes, three particles in 10 boxes, three particles in 20 boxes, and three particles in 100 boxes. The top and bottom plots are the exact same data set, I'm just displaying them differently.
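A script along these lines (a reconstruction in Python, not the instructor's actual script) reproduces the qualitative point for three particles: as r grows, one class of macrostates accounts for nearly all of the microstates.

```python
from itertools import product
from collections import Counter

def peak_fraction(n, r):
    # Enumerate all r**n microstates of n labeled particles in r boxes,
    # group them by occupation pattern, and return the fraction of all
    # microstates belonging to the most microstate-rich pattern.
    counts = Counter()
    for m in product(range(r), repeat=n):
        pattern = tuple(sorted(Counter(m).values(), reverse=True))
        counts[pattern] += 1
    return max(counts.values()) / r ** n

# As r grows, the distribution of microstates over macrostates
# becomes increasingly dominated by a single class of macrostates.
for r in (5, 10, 20, 100):
    print(r, peak_fraction(3, r))
```

For n = 3 the dominant class is "all three particles in different boxes," and its share of the r**3 microstates climbs toward 1 as r grows -- the same sharpening seen in the plots, even at these tiny particle numbers.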
It's the exact same data set, distributions visualized with different sorting. And so with three particles and five boxes, the log of the number of microstates spends some time around 0-- that's a count of 1, right? Log of 1 is 0. So these are conditions for which there's one microstate. And then it jumps up and it spends some time around 1-point-something. And then it jumps up and it spends some time around 2-point-something. And what I want you to see is, as the number of boxes gets higher, the distribution of microstate count gets more sharply spiked. So what you're seeing is even at low numbers, three particles in 100 boxes, you're entering a situation where this distribution function is becoming very sharply spiked. And this is a very small number case, three particles in 100 boxes. I tried to run 10 particles in 100 boxes and I calculated that my computer would take about a year. So these numbers get really big really fast. If you typed in Avogadro's number here, I think your computer would literally laugh at you. But you're supposed to see, qualitatively, that this function is becoming sharply spiked. That has profound implications. That's why I'm spending time on it. So why does that matter? Why does it matter that that distribution is sharply spiked? Who cares? It matters because of something called the ergodic principle. I'd never heard that word before I took statistical thermodynamics or stat mech for the first time. So maybe you have. I don't know. Maybe you're more well-read than I was. But I've never seen that word outside of this context. It's, like, the weirdest word. Where does it come from? I don't know. If anyone knows where it comes from, I'd be curious. Anyway, this is what it is. The ergodic principle is the following. It's a hypothesis, not always true. Here's the hypothesis. All microstates, all microstates that are compatible with constraints are equally likely. That's the ergodic principle. So we have a word here, which is, again, from another discipline.
We have the concept of constraints, which we're familiar with, but here it is again. And we have this sort of funny new word. So this gives rise to the concept of ensembles, ensembles of microstates-- going to use shorthand here-- ensembles of microstates that all satisfy given constraints. So what's an ensemble? An ensemble is a collection. So here's an ensemble. Here is the ensemble of microstates that all satisfy the constraint that the volume is the full rectangle with solid lines. That's a constraint. The constraint is that the volume is the full volume, and this is an ensemble. It's an ensemble of different microstates that all satisfy some clearly-stated constraints. And here is something a little bit profound. A corollary of this is that time averaging is the same thing as ensemble averaging. Time averaging is the same thing as ensemble averaging. And for those of you who like probability and statistics, this connection between time averaging and likelihood is known as frequentist probability-- as opposed to Bayesian. So what does that mean, that time average is the same as the ensemble average? It means that if you can define, if you can write down the ensemble of all possible microstates that satisfy a given macrostate, if you can write down the ensemble of all microstates and you average over that ensemble, the average you get for any observable quantity-- be it pressure, or temperature, or magnetization, or density or what have you-- that average is the same as if you take a given system and evolve it forward in time and take a time average-- so, in other words, like this. The average you get when you average over this ensemble is the same as you would get if you start with just that and then play forwards in time and average over the movie. That's what time averaging equals ensemble averaging means. And the likelihood of finding a given macrostate is proportional to its number of microstates.
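The time-average-equals-ensemble-average claim can be illustrated for the six-balls-in-two-boxes system with a toy simulation. The dynamics used here, the Ehrenfest urn model (pick a ball at random and move it to the other box), is my own illustrative choice, not something from the lecture:

```python
import random
from math import comb

random.seed(0)
n, steps = 6, 200_000

left = n            # start far from equilibrium: all balls on the left
visits = [0] * (n + 1)
for _ in range(steps):
    # Ehrenfest dynamics: pick a ball at random, move it to the other box.
    if random.random() < left / n:
        left -= 1   # the chosen ball was on the left
    else:
        left += 1   # the chosen ball was on the right
    visits[left] += 1

# time average: fraction of the "movie" spent with k balls on the left
time_avg = [v / steps for v in visits]
# ensemble average: microstate counting, C(n, k) / 2**n
ensemble = [comb(n, k) / 2 ** n for k in range(n + 1)]
print(ensemble[3])  # 20/64 = 0.3125
print(time_avg[3])  # close to 0.3125
```

The fraction of time the movie spends in each macrostate converges to the ensemble probability from microstate counting, which is the corollary stated above in miniature.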
The likelihood of finding a given macrostate is proportional to its number of microstates. The probability of finding the macrostate j is equal to the microstate number for macrostate j divided by the microstate number summed over all macrostates. The numerator is the number of microstates in macrostate j-- that's what that is. This is the total number of microstates within constraints. And this is the probability of finding macrostate j. That's what that is. So let's go back to the baby book and to the plots. So if we have constraints that the particles just have to be somewhere inside of this gray box, we have one microstate here, we have six microstates here, we have 15 microstates here, and we have 20 microstates here. And then we're going to have another 15 with the particles weighted more to the right, another six with the particles weighted more to the right, and another one with the particles fully on the right. So we have 1 plus 6-- that's 7-- plus 15-- that's 22. 22 times 2 is going to be 44-- and then 20-- is going to be 64. So we have 64 total microstates. And the most likely scenario is this one. Why is it the most likely scenario? Because its likelihood is 20 divided by 64. You have a 20/64 chance of finding something that looks like this. You have a 15/64 chance of finding something that looks like this. You have a 6/64 chance of finding something that looks like this. And you have 1/64 chance of finding all the balls on the left. So the balls do not know what they are doing. They do not care about any of this. They move at random. But somehow, from their collective behavior, some simple rules emerge, that you're far more likely to find this scenario. And 20/64 is not a very large number. That's only because we're dealing with six balls and two compartments. 20/64 is not that large of a number.
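The bookkeeping here is just binomial coefficients, which a couple of lines can verify:

```python
from math import comb

n = 6
total = 2 ** n                  # 64 microstates altogether
for k in range(n + 1):
    # macrostate: k balls in the left half, n - k in the right
    print(k, comb(n, k), comb(n, k) / total)
# microstate counts: 1, 6, 15, 20, 15, 6, 1 -- summing to 64,
# with 20/64 for the most likely macrostate
```

The same pattern holds for any n: the even split always has the most microstates, and its dominance grows rapidly with n.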
But when you have a distribution function which looks like this, where this is now the log of the number of microstates and this is the macrostate index, you can see that this number-- after taking the exponential-- divided by this number is very, very large. That's the likelihood that you're going to find this macrostate instead of this macrostate. And I think in the very first lecture, I referred to this as being the reason why we don't have to worry about suddenly suffocating because all of the oxygen molecules in the room have decided to go into one corner. The oxygen molecules move at random. Perhaps if you waited a very long time, you would find a snapshot in time for which all the oxygen molecules were in one corner. But none of us expect to find that. Why is that? Because the likelihood of them being evenly distributed through the room is overwhelmingly close to 1. OK. So that is all following from the ergodic principle. All microstates that are compatible with constraints are equally likely. So likelihood follows microstate counting. All right. So that's all I wanted to get through today. I want to leave you with a question, which is something we can discuss, and it gets to the P set. For what types of systems might the ergodic principle break down? For what types of system might this break down? So we had this example, that ensemble averaging equals time averaging. And I said, what is the time average? Well, let's take a snapshot at time equals 0, call it. And we'll play the movie forward and we'll know the average over the time history of that movie. That implies something. What does that imply about this system? AUDIENCE: Maybe that it's closed and there are no additional particles being added? RAFAEL JARAMILLO: Good. That's good. That's not what I was going for. You're talking about the boundary conditions and constraints. There's something else, though, that is beyond the scope of this class. It's the title of a course you'll take in the fall.
What does that imply about this system? AUDIENCE: The kinetic energy is constant, maybe? RAFAEL JARAMILLO: Kinetics. It's not quite what you said, but you said the keyword, which is kinetics. It implies the kinetics are sufficiently fast that the movie will play. So if you're at 0 Kelvin and nothing is moving, if this is your starting point, this is pretty much where you will remain. So your time average will no longer be your ensemble average because the particles don't move anywhere. So you only get this sequence of snapshots in time if the particles are moving at all. So systems that have sluggish kinetics violate the ergodic principle. What's something that we said about equilibrium way at the beginning? Equilibrium is a state of rest. Equilibrium is what you get when you wait long enough. And we've never quantified long enough. We don't really do that in 020 because this is a course on equilibrium thermodynamics. We don't quantify long enough. But the idea that you have to wait long enough for a system to evolve to equilibrium is equivalent to the concept that you have to wait long enough for the time average to become the ensemble average for any of this statistical thermodynamic stuff to work. Those turn out to be quantitatively the same thing. OK. So we talk about kinetically-sluggish systems, kinetically-sluggish systems, systems that don't move very fast. They don't move fast enough to reach equilibrium. That might be an example of the types of systems in which the ergodic principle breaks down. OK. I have four more minutes. And one thing which I could do with those four minutes is go back and show you how this comes about, because this is just stated in the text without proof. But I could also answer questions about this or anything else. So let's start with questions about this material or even the material that we finished with Professor Olson's lecture or the problem set, which is due today. There's another problem set that goes out.
Oh, I'll mention-- the problem set that goes out today is P set 9. It's due Thursday because Friday is a student holiday. And it is shorter than I would otherwise give you because you have one less day to work on it. So don't forget that. Any other questions on any of this? AUDIENCE: Could you go back to the counting macrostates page? RAFAEL JARAMILLO: Sure. AUDIENCE: Yeah, this one. So for that one where you have r choose 2, will the n minus 1 always come before the 1? Because first you choose like which two boxes will be occupied, right? So there's r choose 2 ways to do that. But then don't you also have to choose which one has n minus 1 and which one has 1? RAFAEL JARAMILLO: Yeah, you do. So that's another case that I don't draw out here. Thank you for that. So if you like-- so I'm not being explicit here. I'm writing sort of suggestive cases and then putting ellipses, dot, dot, dots, right? So it's like, what on Earth is he talking about? You're getting to an ambiguity in the way I've written this out. So it's the same ambiguity that you have here. Does this picture stand for one state, or does this picture stand for two states? It depends how you count. I imagine that Chris Ferrie-- he's a research physicist. He actually works at a quantum computing center in Australia. This is to say that he's a smart person, and I gather that he could do this accurately if he chose to, but this is the baby book. So he wasn't explicit about, oh, there's actually two states and they're equivalent. Does that make sense-- are you following me here with respect to the baby book? AUDIENCE: Yeah. So basically, what you're saying is, if you were to write it out, you would also have one where it would be, like, 1 comma n minus 1 that would be counted separately. RAFAEL JARAMILLO: Right. Let's flip that. So if you want to say that we're counting those, we just double it-- AUDIENCE: OK, that makes sense. RAFAEL JARAMILLO: --in this case. But then the next case down, you can't just times 2 it.
It becomes more complicated. So the counting is not easy sometimes. So you can imagine that if I asked you to sit down and do this bookkeeping correctly, you could. And don't worry, this is not a case that's on the problem set. But you could do it, and other people have done it right. I did it in a MATLAB script that I wrote to produce those plots, for instance. Yeah, good point. Good point. It all comes down to how you think about doing combinatorics. AUDIENCE: My question was basically equivalent so I'm glad that you asked it. Thank you. But then for the slide before that, then-- or maybe it was two slides before, where you're counting states in the very beginning. When you were doing two particles in state 1, two particles in state 2, just like you explained it here, that would mean that if you had, say, like AB state 1 and CD state 2, that would count separately, then, from AB state 2 and CD state 1, right? Those would be two different cases. RAFAEL JARAMILLO: Yes. Yeah, again, another case where I just put ellipses and I wasn't explicit. Yeah. AUDIENCE: OK. So basically, the type of particle matters. We're labeling these particles. It's not just labeling-- RAFAEL JARAMILLO: We're labeling these particles, yes. AUDIENCE: On the r choose 2 on that slide, could you talk a little bit more about how you draw that conclusion? I'm not super familiar with the way that you wrote it out. RAFAEL JARAMILLO: Sure. So let me finish this comment on labeling these particles. This is all classical statistics. It's perfectly fine. It gets you a long way. When you get to quantum statistics, you have to start dealing with indistinguishability. And you have to-- well, this might be much more than you want to know at the moment, but I'll share it because I think it's kind of cool. Imagine that all of the grid squares are states, and I have two particles.
And let's say they're the same isotope of an atom in the same spin state, same quantum numbers. In that case, they're indistinguishable, quantum mechanically. So I want the modulus squared of my system wave function to be unchanged if I exchange these particles, because they're indistinguishable. That still gives me a choice of sign in the wave function, because a negative number squared gives you a positive number. And a positive number squared gives you the same positive number. And that's the difference between bosons and fermions. And it produces very, very different physical behavior, those statistics, quantum statistics. And actually, there's a quantum computing scheme that uses anyons, where the exchange phase can be something other than plus or minus 1. So we're getting somewhere which we don't have time for in this class. But right, OK. So that was a point on distinguishability versus indistinguishability. The other question was about combinatorics. So here, I have r states. And two of them are special. Two of them are special because they have some particles in them. And one of them, I've put n minus 1 particles, and the other, I put one particle. And all the other states are 0's. So one way to think about defining such states is, I'm just picking two boxes out of all these boxes. So imagine again the grid squares here are all the possible boxes. And I just have to pick two. I just have to pick two. So the combinatorial expression r choose 2 does that for me. I'm picking two. All the rest are 0. So that's sort of funny. You think you have to write all these 0's down. Forget about writing all these 0's down. Just choose two that have some particles in them. And, by the way, r choose 2 times 2 is r pick 2, right? So that's the difference between a pick distribution and a choose distribution. But anyway, so let's say I have three boxes with particles in them. Well, I didn't even draw that case, right? Then it's r choose 3.
You choose 3 boxes that have particles, and all the rest you forget about. And in general, a choose b-- sometimes it's written this way-- is a!/(b!(a - b)!). So anyway, if this is unfamiliar to you, the wiki entry is a good place to start if you haven't seen this before. I hope that helps.
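As a quick numerical sketch of the two counting ideas from this lecture-- the r choose 2 bookkeeping and why likelihood follows microstate counting-- here is a short example using Python's standard library. The values r = 10 and N = 1000 are just illustrative choices, not from the lecture:

```python
from math import comb

# "r choose 2": pick which 2 of r boxes hold particles.
r = 10
print(comb(r, 2))      # 45 ways to choose the 2 occupied boxes
# "r pick 2" = 2 * (r choose 2): also decide which box gets n-1 vs 1.
print(comb(r, 2) * 2)  # 90

# Why likelihood follows microstate counting: put N particles at random
# into the left or right half of a room. The macrostate "k on the left"
# has C(N, k) microstates; the balanced macrostate dwarfs the
# all-in-one-corner macrostate, which has exactly 1 microstate.
N = 1000
print(comb(N, N // 2))  # astronomically large (~2.7e299)
print(comb(N, 0))       # 1
```

Taking the ratio of the last two numbers is the "exponent divided by this number" argument: the evenly distributed macrostate is about 10^299 times more likely than the one where every molecule sits in one corner.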
MIT 3.020 Thermodynamics of Materials, Spring 2021. Lecture 11: Phase Coexistence in Unary Systems.
[SQUEAKING] [RUSTLING] [CLICKING] PROFESSOR: OK. So we are going to spend today mostly on the Clausius-Clapeyron equation. And remember we're still in chapter seven of the text. OK. Good. So what is the CC equation? The CC equation is the following. We have a phase diagram here. I'm going to draw a little close-up section of a phase diagram, and this is very generic. And all right. So let's say we have alpha. I like to use colors. Let's make alpha kind of teal, and let's say we have beta. And beta is going to be kind of purple. And we have a coexistence line between them. And at any point on that line, we have a slope. Right? We have rise over run. So here's a little dP, and here's a little dT. OK. What makes these phases stable? Well, over here, mu of alpha is less than mu of beta. Right? That's the definition we developed last time. And over here, mu of beta is less than mu of alpha. But on the coexistence line-- right? Last time, we developed the conditions for two-phase equilibrium. So on that coexistence line, mu of alpha, which is, of course, a function of temperature and pressure, equals mu of beta, which is a function of temperature and pressure. So if I take a little jog, I take a little jog from here. Let's see. We'll start running out of colors pretty soon. Let's take a little jog. Let's walk a little bit. If I'm going to stay on the line, I need d mu of alpha, which is, of course, a function of temperature and pressure, to be equal to d mu of beta as a function of temperature and pressure. Right? I want you to think about this line as a mountain ridge, and you're walking along the mountain ridge. And you want to stay on the ridge because if you fall off the ridge, you fall very far. It hurts. And the ridge is a curvy line. And so if you want to stay on the ridge, you just can't walk in any direction for as long as you like. If you want to stay on the ridge, you can't walk in x and y as much as you like, north, west, east, and south. 
You need to stay on the ridge. So you only have one degree of freedom, right? If you walk a little bit north, you've got to adjust a little bit west or east to stay on the ridge. That's pretty intuitive if you're on a mountain ridge. You don't have to go through all that thinking and think about multivariable calculus. You just don't want to fall off the ridge, but that's what's going on here. We want to stay on the co-existence line. This is the condition to stay on the line. All right? The potentials are equal. And so if I take a little step, I've got to take a step such that the changes of these chemical potentials are also equal so that I stay on the ridge. OK. Questions on this concept because we're going to start evaluating this condition now. So it's going to get kind of mathy. All right. So let's evaluate that condition. That's the condition for the line. All right. So d mu alpha, we know what this is. Who wants to give me an expression for d mu of alpha? It's something times dT plus something times dP. STUDENT: Minus S dT plus V dP. PROFESSOR: Right. You may recall that chemical potential is molar Gibbs free energy. So we're seeing the same differential form here as we did a couple of weeks ago now, when we developed a differential form for d Gibbs. Right? So the only thing I've done here is I'm expressing the molar Gibbs free energy as chemical potential. And I'm adding the phase labels because now I have a heterogeneous system. Multiple phases, so I have to keep track of the phase labels. So thank you for that. I didn't see which rectangle answered that question, but thank you for that. And likewise, we have for beta-- OK. So this is true for alpha and beta phase. And I want the d mu of alpha to be equal to d mu of beta. So I'm going to combine these three lines. All right. I'm going to plug in for d mu alpha. I'm going to plug in for d mu beta. I'm going to see what I get. And so what I get is what? d mu alpha equals minus S alpha dT plus V alpha dP. 
And that's going to be equal to minus S beta dT plus V beta dP. I'm going to collect terms. Collect terms, and I get (S of alpha minus S of beta) dT equals (V of alpha minus V of beta) dP. And this here, this is a transformation quantity. I hope everyone watched the three D's of thermodynamics. And if you did, I hope you found it useful. So this is the transformation quantity. This is delta S, and this is delta v. As we go between phases beta to alpha, these are transformation quantities. And we're going to then divide through. Divide through, and we're going to get an expression for the slope. dP/dT equals delta S over delta v. Right? That's pretty useful. So now we have an expression for the local slope of that coexistence line as a function of these transformation quantities. And we have something else. For an isothermal equilibrium transformation, we know that delta S equals delta H over T. So I can re-express this as dP/dT equals delta H over T delta v. So these are equivalent, and these are called the Clausius-Clapeyron relation or equation. OK. So that's what these are. These are very famous. And so it's a differential equation. It's a differential equation for this line, right? It's not an equation for the line. But it's giving the slope of the line as a function of temperature and pressure, where temperature and pressure parameterize these transformation quantities, delta S and delta v. OK. So this is in general true for any two-phase coexistence line. We're going to analyze it now for the case of vapor pressure because the equation simplifies for vapor pressure. And it's also a very useful result which is used in lots of different fields. But before I move on to a specific case, this should be a good time for questions on the general case. STUDENT: Can you explain when you said that for the isothermal equilibrium transformation why you specified that case? PROFESSOR: Sure. I think that was [INAUDIBLE] but anyway. Thank you.
So when we are having an isothermal transformation, at equilibrium, if alpha and beta can transform to each other at equilibrium, that's almost the meaning of coexistence. So then delta G has to be 0. That, again, was our requirement for equilibrium at constant temperature and pressure. That is, the Gibbs free energies of the two phases are equal or, in other words, the chemical potentials are equal. So delta G equals delta of H minus TS. Right? That's just from the definition. And if it's isothermal, this is delta H minus T delta S because T here is a fixed number for an isothermal transformation. So this has to equal 0 if phases alpha and beta are in equilibrium. So this is useful. And I mentioned in the last class the fact that delta H minus T delta S is 0 for two-phase equilibrium. It's used, for instance, in databases for transformations. You might see temperature, and you might see delta S. You might see delta H, and you might see delta S, or you might see a pair of numbers. But you might not see all three because having all three is redundant because you have to have this condition satisfied. And you can check that. If you open the textbook and you go to phase transformations for the elements, there's a bunch of data here. And if you like, you can go and check-- for example, silver here. Silver melts from FCC to liquid at 1235 Kelvin. I know you can't read the numbers, but I'm just pointing out the structure of the database. And there's delta S for the transformation and the delta H of the transformation. And you can confirm that delta H minus T times delta S is going to be 0 for every single set of these. So De Hoff is nice, and he gives you extra information. Right? But he could, for example, block out the temperature of the transformation and not list the melting temperature. And you say, what on Earth? It's a table of melting temperatures, and he doesn't list the melting temperature? What is he doing? Right?
Well, you can solve for the transformation temperature from delta S and delta H under the condition and the delta G is 0. Now, I don't know if that answers your question. If it's off base, please try again. [CHUCKLES] STUDENT: No That was helpful. Thank you. PROFESSOR: OK. Thank you. All right, so let's move on. Right. So let's use the Clausius-Clapeyron equation. Using the CC equation, vapor pressure. All right. So this is a very useful case, and let's just remember what we're doing here. All right. So I have temperature here. I have pressure here. That's a typical way that vapor pressure curves are plotted. And there's a typical curve, and here's solid, for instance. And here is vapor. I used green for both. That's not very creative. Let me point out that everything we're about to do also applies if that's liquid. So it's a vapor pressure over a condensed phase. It doesn't have to be over the solid phase. It could also be over liquid. So here, vapor condenses. Right? On that side of the curve, vapor condenses. And here, solid sublimes, or liquid evaporates, as the case may be. All right. So that's the meaning of this phase diagram. OK. So on the coexistence line, vapor is saturated. That's an important word. You need to be able to interpret that word when you see it in all sorts of problems. It's saturated-- that's a really important concept-- at its saturation vapor pressure. So these terms, saturation and saturation vapor pressure, you're going to encounter these all over the place, even on your weather forecast. And it's nice to be able to connect it back to the thermodynamics. OK. So we're on the curve. Let's say if excess vapor pressure is added, what will happen? Imagine you have a system. You have these two phases in a box, and you turn up the vapor pressure. What will spontaneously happen? STUDENT: The vapor will condense to decrease the vapor pressure. PROFESSOR: Thank you. 
Will spontaneously condense to-- and I put this in quotes-- try to re-approach, right? To try to re-approach P sat. Likewise, if vapor pressure is reduced-- let's say you hook it up to a vacuum pump-- what will happen? Let's do it. STUDENT: Solid will evaporate. PROFESSOR: Solid will. Thank you. I'm going to use the case of the solid, but you're right. Spontaneously sublime to-- again, in air quotes-- try to re-approach saturation vapor pressure. OK. So it seems that when you're on this coexistence line, if you try to change the intensive parameters of the system, if you try to change the pressure, say, the system will respond to try to counteract your change. It will try to resist whatever you're trying to do. If you try to increase the pressure, it's got a way to try to pull it back down. If you try to decrease the pressure, it's got a way to try to push it back up. All right? So it has a way to resist you. What's the principle behind that? And it's a French guy. Who remembers? There's a general principle which explains this. STUDENT: Le Chatelier's. PROFESSOR: Right. This is a really nice example of Le Chatelier's principle-- that is, nature reacts in such a way to counteract any external impulses, or, in other words, it's stubborn. It will resist you by any means possible. So in this case, you're on the coexistence line. It has a means of resistance, which is this phase transformation. And it will use that phase transformation to try to resist the changes which you imposed. All right. So I hope that's a little bit intuitive. Let's do some math around that. All right. So let's calculate the saturation vapor pressure. How do we do that? We have a differential equation, right? So how do we get an equation from a differential equation? Right? I have to solve it. So we do it by integrating the CC equation. And as we'll see, in general, you can't do this because you might not have closed-form expressions for everything that you need.
But we're going to make some approximations. So in general, we have this. dP delta v equals dT delta H over T. Right? If we're going to integrate this thing, in general, we need what? We need delta H as a function of T and P. And we need delta v as a function of T and P. Right? In general, we're going to need these things. And even if these equations of state exist, there's no guarantee that they're separable or that we know how to solve the equation. Right? This is why we use computers. So in general, these transformation quantities can be-- right? As we saw last time, in general, I'd say this is pressure. This is temperature. Let's say there is a standard T0, P0, like standard temperature and pressure conditions. Right? Standard state. And someone gives us the transformation quantities there, right? And we actually need them over here, right? At some other P and T. Right. So in general, we can integrate. So, for instance, calculating delta H as a function of T and P. We have dH equals cP dT plus V(1 minus T alpha) dP. So delta H at (T, P) equals delta H at, say, (T0, P0)-- some standard data which I found maybe in a database-- plus the integral from T0 to T of d(delta H) at fixed pressure, plus the integral from P0 to P of d(delta H) at fixed temperature. Which equals delta H at (T0, P0), plus the integral from T0 to T of delta cP dT prime-- dummy variable. So there are my heat capacity differences. So here's where that data comes in. Plus the integral from P0 to P of delta[V(1 minus T alpha)] dP prime-- dummy variable-- and, again, a transformation quantity. These are transformation quantities. Right. These characterize the change as you move between phases. Right. So I'm just laying this out in general. Very rare that you actually have to do calculations like this. Let me lay out how to calculate delta v. Calculating delta v, right? That's the other part of the integrand, delta v. So we have dV equals V alpha dT minus V beta dP-- here alpha is the thermal expansion coefficient and beta is the isothermal compressibility. That's the general form.
So delta v at some temperature and pressure equals delta v at a standard temperature and pressure, plus the integral from the standard temperature to my desired temperature of d(delta v) at fixed pressure, plus the integral from the standard pressure to my pressure of interest of d(delta v) at fixed temperature. Which equals delta v at (T0, P0), plus the integral from T0 to T of delta(V alpha) dT prime-- dummy variable-- plus the integral from P0 to P of delta(minus V beta) dP prime-- dummy variable. And again, these are transformation quantities. They're the differences between phases. Transformation quantities. All right. So why am I telling you this? Right? I can hear a line from "Hamilton" in my background. Why am I telling you this? I'm telling you this because, in general, you need a lot of data to do these calculations. You need to know the heat capacities of all the phases and their differences. You need to know the thermal expansion coefficients of all the phases and their differences. For this calculation, you need to know the volume of all the phases. And here we are calculating delta v's. You need to know the volume of all the phases. Right? And if you don't know the volume of all the phases at all temperatures and pressures, you have to know it at some temperature and pressure. Then you need the thermal expansion coefficient of all the phases. And you need the isothermal compressibility of all the phases. Right? So, again, you can see why CALPHAD, or computer calculation of phase diagrams, is so powerful, because it integrates all this stuff for you automatically. Right? So, in general, this requires knowing-- writing is getting sloppy here-- this requires knowing cP as a function of temperature and pressure, v as a function of temperature and pressure, alpha as a function of temperature and pressure, and beta as a function of temperature and pressure for both phases. So let's get phase labels here, right? There's a lot of information that we need to tabulate for both phases, phases alpha and beta. Right?
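As a minimal numerical sketch of this bookkeeping, here is the temperature correction to delta H from a reference value, taking the heat-capacity difference delta cP as a constant and dropping the pressure integral. All numbers are hypothetical, chosen only to show the mechanics:

```python
def delta_H_of_T(T, T0, dH0, dcp):
    """Transformation enthalpy at T from a reference value dH0 at T0,
    per delta_H(T) = delta_H(T0) + integral of delta_cp dT'.
    delta_cp is taken constant; the pressure correction is omitted."""
    return dH0 + dcp * (T - T0)

# Hypothetical data: dH = 40,000 J/mol at 373 K, delta_cp = -45 J/(mol K).
print(delta_H_of_T(473.0, 373.0, 40_000.0, -45.0))  # 35500.0 J/mol
```

With temperature-dependent delta cP you would do the integral properly, and with the pressure term you would add the delta[V(1 minus T alpha)] integral as well; this sketch just shows why a database value at (T0, P0) is not the value you need at (T, P).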
However, for vaporization of a condensed phase over a limited temperature range, we can simplify using three assumptions. One, we're going to say delta H is approximately equal to a constant. That's probably the worst approximation. We don't know if it's good or not. We're just going to go with it. If delta H is a constant, I don't need to do all that heat capacity integrating and so forth. Right? It's just a number. OK. Two, volume of the gas is going to be much, much greater than volume of the solid or volume of the liquid. Right? If I assume the volume of the gas, molar volume, is much, much greater than the solid or the liquid, then delta volume is approximately equal to what? Delta volume is the transformation quantity. I am vaporizing. So delta volume is approximately equal to what? STUDENT: The volume of the gas? PROFESSOR: Right. Exactly. Right? Delta v equals volume gas minus volume solid, right? And if the volume of gas is so much bigger, we can say it's basically equal to the volume of the vapor. When typical condensed phases sublime or evaporate, you have factors of 20,000 or more increase in volume, molar volume. So that means you're making an error one part in 20,000. It's a pretty good approximation. So I would say that this is a good approximation. This one here is kind of an eh approximation. It's OK. It's not bad. And the third approximation is that the gas behaves ideally. So we can use the ideal gas equation of state. And unless you're at high pressures, that's also a pretty good approximation. So what we're going to see next is how these three approximations-- well, what I'm going to do, I'm just going to outline the way that this goes forward because on the P set you need to use this a little bit. So I want you to explore a little bit. But if the volume of the gas equals RT over P, that's PV equals nRT. And that's approximately equal to delta v, and delta H is constant. All right? Let's remember what I'm trying to do here.
Here, I have dP delta v equals dT delta H over T, right? OK? And so what? We have dP RT over P equals dT delta H over T. All right, so this is a constant now. Delta H is our constant. That's our approximation. We have temperature and temperature, so it's separable. I can take this temperature over here. So I'm going to have dT over T squared on the right-hand side with a constant factor. On the left-hand side, I'm going to have dP over P with a constant factor. So this is now a separable and easy differential equation to solve. And as you can see already, you're going to get logarithms from the dP over P term. OK. So I'm going to leave it here for you to finish this calculation. And you'll use the result on the P set. And this is very well described in the textbook. So you can step through. It's a nice calculation to finish. I'm going to move on to Gibbs phase rule. So we're going to leave this here. Questions on saturation vapor pressure? This is a good time for them. I will note that we're going to return to the concept of saturation vapor pressure on Wednesday, when we're going to do an extended example. So this isn't the last you'll see of saturation vapor pressure. And it will come back with a vengeance in a couple of weeks from now, when we do multi-phase, multi-component reacting systems. STUDENT: I just have a quick question. So in order to assume dP over dT equals delta H over T delta V, we said that it had to be isothermal? PROFESSOR: Yes. I'm sorry. You're referring back to delta [INAUDIBLE] equals delta H minus T delta S? STUDENT: Yeah. And then we said when we wrote the second form of the Clausius-Clapeyron equation that it had to be isothermal for that? PROFESSOR: Correct. Right. STUDENT: So then why is dT not 0? PROFESSOR: So good question. This is a consistent point of confusion.
And again, I won't put you on the spot and ask you whether you've watched and spent time thinking about the three D's of thermodynamics video. I don't mean to suggest that's a panacea, but this is a very confusing point. These are transformation quantities. These are functions of temperature and pressure; at any given temperature and pressure, they're numbers. So, for instance, delta S, delta V, delta H-- at any given temperature and pressure, they're numbers. They characterize the change in state variables as you move between phases. Being functions of temperature and pressure, they depend on temperature and pressure. So I want you to think of these capital D deltas as the distances between these surfaces. And I'm sort of drawing surfaces here, but I drew it in the last lecture, and it's really well illustrated in the textbook. So each of these quantities-- delta S, delta V-- is the distance between, let's say, the surface S for phase vapor and S for phase solid, as drawn in the last lecture. And since they're functions of temperature and pressure, they vary with temperature and pressure. So, for example, you have volume. Wow. That's a really fat pen. That's not helpful. Let me find a page where I have some white space. There. I have white space on this page. Let me grab a pen that's not too worn out yet. So let's say I have volume of phase alpha. That, of course, depends on temperature and pressure. Volume of phase beta-- that, of course, depends on pressure and temperature. And so I might have a transformation quantity, the change in volume as I transform from alpha to beta. This equals volume of beta at (T, P) minus volume of alpha at (T, P). And so this explicitly depends on temperature and pressure. Right. That means that even though I can consider an isothermal transformation between alpha and beta, I can also ask how that quantity varies, let's say, with temperature. Right?
That's the slope of that surface for varying temperature with fixed pressure. And I'm hesitating to launch into redrawing all those surfaces that I did the last time because if I do it quickly, it'll be too messy to be useful. And if I take my time, I'll be repeating myself, and we'll run out of time with today's lecture. Another way to think about it is that at any given temperature and pressure, a fixed temperature and a fixed pressure, I have these equilibrium conditions between the two phases. And now I'm going to take a little trip. I'm going to go from one temperature and pressure to another temperature and pressure. And Clausius-Clapeyron is about making sure that I stay on that equilibrium condition. OK. I don't know if that was helpful. Let's move on and then, as time allows, come back to that question at the end of the lecture. But again, transformation quantities are super important. Gibbs phase rule, we're going to go over this a little bit quickly. Gibbs phase rule is the answer to the following question. How many phases can coexist at equilibrium? So Gibbs phase rule is a linear algebra problem. It's like this. Number of degrees of freedom equals number of variables minus number of independent constraints. So hopefully, this is familiar from linear algebra. So let's count variables. I just have to go through this in order for it to make any sense. All right. So let's see. For phase alpha, I have a temperature and a pressure. Sorry. That was drawn for phase beta. I apologize. Temperature and pressure for alpha. Temperature and pressure for phase gamma and so forth. For each of pha phases, I have two variables. That's what I need to set up-- an array of pha phases, each with two variables. So in this approach, the number of variables in my system equals 2 times pha. And I use pha because the textbook uses P, whereas on every other page of the textbook, P means pressure.
So I don't like what they do, so I use pha. All right. OK. Now let's do the number of degrees of freedom: number of variables minus number of independent constraints. Number of constraints-- I'm not going to write this out because it's very well presented in the text. But my constraints are my equilibrium conditions, such as T of alpha equals T of beta equals T of gamma equals T of all the other phases. And, for example, P of alpha equals P of beta equals P of gamma and so forth. And likewise the chemical potentials: mu of alpha equals mu of beta equals mu of gamma and so forth. So I have my variables, and I have my constraints. The variables come from counting. The constraints come from the equilibrium conditions, which we've already derived. And then we skip straight to the answer. The answer is that my number of degrees of freedom equals 2 times pha minus 3 times (pha minus 1), which equals 3 minus pha, the number of phases. This is Gibbs phase rule for unary systems. Right? So this is skipping straight to the answer. It's more thoroughly developed in the text. What does this mean? That's what I want to get to. That's why I'm rushing ahead. A one-phase region: pha equals 1, so the degrees of freedom equal 2. For example, the TP plane-- all right? So this is getting back to what we know to be true, which is that in a unary phase diagram, in general-- I'm going to draw the one for water here because it's pretty familiar. See, this is temperature. This is pressure. This is liquid, solid, and the vapor. In general, in a unary phase diagram, I can sit in a one-phase region. And I can wander around randomly in temperature and pressure and remain in that one-phase region. That's the meaning of having two degrees of freedom. I can wander at random in temperature and pressure independently while remaining in my one-phase region. Now, let's consider a two-phase region. Two-phase region: pha equals 2. So what's my degrees of freedom? Anybody? STUDENT: 1. PROFESSOR: 1, right? So what does that mean?
If I am in a two-phase region, let's say, on that coexistence line, I can move along the line and maintain my two-phase coexistence condition. That's one degree of freedom, right? If I move in temperature, the move in pressure is prescribed. I can't move independently in both pressure and temperature. And it's prescribed by the Clausius-Clapeyron equation. We just derived that. So that's one degree of freedom. It's a line, right? This is a coexistence line. And finally, a three-phase region, number of phases equals 3. So my degrees of freedom equals 0. What's a geometrical object with 0 degrees of freedom? STUDENT: A point. PROFESSOR: Point. So we have in unary phase diagrams so-called triple points. These are points where three phases can exist at equilibrium, can coexist in equilibrium. However, these are special points. If I change temperature or pressure away from those points, I fall out of that three-phase coexistence condition. So, for example, my triple point here of water is-- gosh-- what is it? It's like 10 torr and right around 273 K. I forget exactly the number. If I'm at any temperature and pressure other than that special triple point, I cannot have all three phases coexisting at equilibrium. And it comes out of this relatively simple linear algebra approach. And four-phase region, not allowed. So from thinking about the number of independent variables in a unary system, that is pressure and temperature, and deriving the conditions for equilibrium, which you'll recall came from the entropy maximum condition, in thinking about those things, we have arrived at a pretty strict set of rules. And they're fully consistent with everything we know nature does. So I think that's kind of cool. It's an example of the deductive power of thermodynamics. You start with these abstract principles and some math. And you get very practical real-world predictions that are borne out. What happens? How do I get a four-phase region? Does anyone know?
There are four-phase equilibria in nature. They're allowed, but they're not allowed in unary systems. How do I get a four-phase coexistence? Right. It happens if I somehow increase my number of variables faster than I increase my independent constraints. And that's what happens when you add multiple components to a system. So we're going to revisit Gibbs phase rule when we do multi-component systems, except then it'll be for multiple components. OK. So this is the end of our treatment of unary phase diagrams. In terms of new material, on Wednesday, we're going to study saturation vapor pressure in some detail. I hope this does give you a little bit of a flavor of the pace of this class. There's no way that I can cover all the material and work examples in lecture. And so we rely also on the textbooks, on the readings, so please do keep up.
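The counting we just did can be sketched in a few lines. This is a minimal sketch in Python, not from the lecture; the function name is my own.

```python
# Gibbs phase rule for a unary (one-component) system, as derived above:
# F = 2*pha - 3*(pha - 1) = 3 - pha, where pha is the number of phases.

def unary_degrees_of_freedom(pha: int) -> int:
    """Degrees of freedom for `pha` coexisting phases in a one-component system."""
    if pha < 1:
        raise ValueError("need at least one phase")
    f = 3 - pha
    if f < 0:
        # Four or more phases cannot coexist in a unary system.
        raise ValueError(f"{pha}-phase coexistence is not allowed in a unary system")
    return f

print(unary_degrees_of_freedom(1))  # 2: wander freely in the T-P plane
print(unary_degrees_of_freedom(2))  # 1: a coexistence line
print(unary_degrees_of_freedom(3))  # 0: a triple point
```

With C components the same variables-minus-constraints counting gives F = C - P + 2, which is where the multi-component lectures pick this back up.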
MIT_3020_Thermodynamics_of_Materials_Spring_2021
Lecture_19_Regular_Solution_Models_and_Stability.txt
[SQUEAKING] [RUSTLING] [CLICKING] RAFAEL JARAMILLO: Back to thermodynamics. Happy Friday. We are going to study the simple regular solution model and spontaneous mixing and spontaneous unmixing. So the simple regular model is just a refresher from before the lecture-- sorry-- before the exam, the simple regular model. And by the way, my handwritten lecture notes are up on Canvas. Those will be replaced soon with the digitized notes. Sorry, we're a little bit behind on that. But anyway, this is the simple regular model. So this is generic for any regular model. We assume ideal mixing and just some nonzero-- ideal entropy of mixing and nonzero enthalpy of mixing. And for the simple regular model, we have this form for the enthalpy of mixing-- sorry-- enthalpy of mixing. And this can take us a long way, just this relatively simple form, where the a naught parameter parameterizes the enthalpy of mixing. a naught greater than 0 is endothermic, so we have to put heat in. And a naught less than 0 is exothermic, so we have to pull heat out when we're mixing at a fixed temperature. And what does that actually mean? This is bonds, bonds being made and broken. When you hear endothermic and exothermic, that's what you have to remember. This is energy going into and getting pulled out of chemical bonds. All right. So that was a refresher. So a week ago when we introduced this, there was a question of, where does this come from? Can we justify this? And I previewed this model, which I want to go into in more detail today because it's motivating. It's called the quasi chemical model. I don't quite know why it's called that, but that's what it's called. And it motivates the simple regular solution model, makes it seem less random. Model of molecules, or if you like atoms, on a lattice. So it's a lattice model. And so we're going to have species A and species B. We're going to have a lattice. So it doesn't really matter how I draw this.
I just want to draw something that looks random. Can you hear the birds right out my window? They've been going crazy all morning. It's really nice. Anyway, so here we have atoms on a lattice, and the bonds are the lines connecting them. So this is an AA bond. It's a bond between two A's. This here is an AB bond, and over here is a BB bond. So that's the model. And we're going to add up the bond energy. So we have bonds and the internal energy of those bonds, so bond internal energy. And so we have the AA bond. We're just going to assign an energy EAA. That's the energy per bond. The BB bond, we're going to go EBB. AB is going to have energy EAB, and beyond nearest neighbor, no energy. So we're saying that only nearest neighbor bonds matter. That's the model. So it's very simple. All right. So next, what we do is-- where's the pen that I need? We're going to add up the total internal energy. So U, the total internal energy, equals the number of AA bonds times the energy of AA bonds plus the number of AB bonds times the energy of AB bonds plus the number of BB bonds times the energy of BB bonds. That's it. NIJ equals the number of IJ bonds. And we know that the total number of bonds is the total number of sites times the coordination times 1/2 so that we don't double count. M equals the total site number and Z is the coordination. This is geometry. You can convince yourself of that. All right. We have conservation of mass. What does that mean? ZMXA-- that's the coordination number times the total number of A species, the number of bond ends on A atoms. That equals 2 times the number of AA bonds plus the number of AB bonds. And similarly, ZMXB equals 2 times the number of BB bonds plus the number of AB bonds. So we're just counting atoms here, counting atoms, or I should say molecules to be more general, counting molecules or atoms, and then we solve for U and delta U of mixing. We can get this already. Watch. Delta U of mixing equals U minus the pure contributions. For pure A that's 1/2 MZEAA, and similarly for B.
So now we're imagining lattices here of pure A and pure B. And if you plug in the numbers from above, you find a pretty simple expression. It's NAB times the quantity EAB minus 1/2 of EAA plus EBB. We'll come back to that. All right. But the math works pretty easily. All right. So we have the energy here of mixing, depends on the number of unlike bonds, and this thing, which is the difference between the energy of an unlike bond and the average energy of the like bonds. So that's a little funny. So next what we're going to do is we're going to count up NAB. In general, NAB depends on structure. The structure of the lattice will determine how many of each type of bond there are, but if the alloy is random-- do you remember what random alloy means? Anybody want to volunteer a guess? STUDENT: There's no general consistency for the arrangement of A's and B's. RAFAEL JARAMILLO: Right. So a random alloy is you have some crystal lattice. That's correct, [INAUDIBLE]. You have some crystal lattice, and you have an alloy. So it's a mixture of components, and you distribute the components on the lattice at random. No order. That's opposed to ordered alloys, which there are many of. So random alloys, ordered alloys. So if the alloy's random, as it should be if we're using the ideal entropy of mixing, which has that assumption of randomness in there, going all the way back to the baby book and lecture 1, then we have the probability of an AB bond equals-- what's the likelihood here? What's the likelihood that a given bond is AB? Well, you can have the first atom being A and the second atom being B. They're uncorrelated probabilities, and this is a joint probability because we want A and B. That's an AB bond. And we can also have B and A. That's also an AB bond. So we also have the probability that the first one is B and the second one is A. So this is combinatorics, the stuff that drove me crazy in high school because they didn't teach it well.
So you have 2XaXb. So that's the probability in a random alloy that a given bond is an AB bond. So based on that, you can get NAB equals 1/2 Mz, the total number of bonds, times this probability of AB, which equals MzXaXb. So now we're going to gather terms. Gathering terms, we have delta U of mixing equals alpha 0 XaXb-- sorry-- A0 XaXb, where A0 depends on some things, the total number of sites, the coordination number per site, and this difference between the energy of an unlike bond and the average energy of like bonds. And here's a bit where we make an approximation, which you don't have to worry about too much right now, but we could talk about it if you're curious. Delta U of mixing by definition equals delta H of mixing minus P delta V of mixing, just by definition of enthalpy and energy. And for solid solutions, this can be well approximated by delta H of mixing. And this approximation is related to what you did a couple of p sets ago, which is you estimated the energy changes for changing temperature and changing pressure of solids. And you found that temperature is a much more important knob in terrestrial conditions. So anyway, you don't have to worry about this approximation so much right now. What I want to focus on is the form of this. We got this form. We got A0XaXb. So this pretty simple model of molecules on a lattice gave us an energy of mixing, or if you like, an enthalpy of mixing, that has this simple regular form. And there's some physical interpretation here. If the AB bond is a really low-energy thing relative to the AA and BB bonds, then this will be negative, and you'll have an exothermic process. If the AB bond is a really high-energy thing relative to the cases of the pure materials, then this overall will be positive, and you'll have an endothermic mixing process. So let's plot this. Plotting-- so that was the end of the quasi chemical business. And I think it's useful. Plotting the simple regular model.
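The counting above, both the exact mass balance and the random-alloy estimate of NAB, can be checked numerically. The following is my own sketch, not from the lecture, using a one-dimensional ring lattice with coordination z = 2 so the bookkeeping stays short:

```python
import random

random.seed(0)
M, Z = 10_000, 2                # ring lattice: M sites, coordination z = 2
XA = 0.3                        # target fraction of A
sites = ['A' if random.random() < XA else 'B' for _ in range(M)]
nA = sites.count('A')

# Count each of the M nearest-neighbor bonds on the ring exactly once.
n = {'AA': 0, 'AB': 0, 'BB': 0}
for i in range(M):
    pair = ''.join(sorted(sites[i] + sites[(i + 1) % M]))
    n[pair] += 1

# Exact mass balance from the lecture: Z*M*X_A = 2*N_AA + N_AB
# (every A atom contributes Z bond ends, each landing on an AA or AB bond).
assert Z * nA == 2 * n['AA'] + n['AB']

# Random-alloy estimate: N_AB ~= (1/2)*M*Z * 2*X_A*X_B = M*Z*X_A*X_B
xa = nA / M
print(n['AB'], round(M * Z * xa * (1 - xa)))  # the two agree to within noise
```

The mass balance holds exactly for any configuration, while the MzXaXb estimate only holds on average, which is exactly the random-alloy assumption being made in the lecture.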
Now we're going to step away from this quasi chemical business and just, again, analyze the simple regular model. Plotting the simple regular model. So for the case A0 less than 0, exothermic mixing, we have delta H mix. Delta H mix is a parabola that's only negative. And we have T delta S mix, and we know what this is. This is the ideal solution model with those x log x things, and it's all negative. So this thing looks like the following. The series here is increasing temperature. We have delta H mix minus T delta S mix equals delta G mix. So we can see here the enthalpy favors mixing. The entropy favors mixing, and so the Gibbs free energy is going to? Somebody? STUDENT: Favor mixing as well. RAFAEL JARAMILLO: Favor mixing. All right. Thank you. So the Gibbs free energy is very favorable for mixing. And I have here a series of temperatures. So this is qualitative. I'm just drawing lines on paper, but hopefully, you now see where this is coming from. That's exothermic. What about the endothermic case? So in this case, we have something a little bit different. Delta H of mixing is now positive. We have to put heat in. The system absorbs heat from the environment when it mixes at fixed temperature. So now we have a 0 here, and this thing is an upwards moving parabola. But the ideal entropy of mixing is still the same, so this thing is strictly negative with the x log x's. So we still have this series, and the combination is delta G mix. And now our function is a little more interesting. So what does this look like at low temperature? Somebody, please. What does this look like at very low temperature? STUDENT: Downward-sloping parabola. RAFAEL JARAMILLO: At very low temperature. Imagine temperature is zero, what does it look like? [INTERPOSING VOICES] STUDENT: --positive? RAFAEL JARAMILLO: Yeah, you got the parabola part, that was right, but if you look here at the sign-- if T is negative-- I'm sorry-- if T is 0, then you don't worry about this term at all. It's not here.
So then you just have the upward facing. So it's an upward-facing parabola. At low temperature, you have the upward-facing parabola. It's ugly. What about a very high temperature? What about a really, really high temperature? What does this look like? STUDENT: You would have the negative parabola? RAFAEL JARAMILLO: Not the parabola, but you've got this thing that comes from the x log x. The parabola is parabolic, so that's x squared. But here you've got this downward-facing thing. That's right. So again, at high temperature, the entropy term dominates. At low temperature, the enthalpy term dominates. All right. Remember that. At high temperature, the entropy dominates. At low temperature, the enthalpy dominates. What happens in between? In between is more interesting. So in between, you get a series of curves that interpolate between that low temperature case and the high temperature case. Sometimes I call this the duck phase plot. It kind of looks like Daffy Duck. But anyway, this is-- so now this is a little more interesting. Let me share-- I think this is useful to see plotted-- it looks slightly more accurate than I'm able to do by hand. So right here is the first case. This is exothermic mixing. I chose a physically reasonable number, minus 20,000 joules per mole. And you see the enthalpy term here is strictly negative, a downward-facing parabola. The entropy term is strictly negative, this x log x business, increasingly negative with temperature. And when you add them up, you get this strictly negative thing, which favors mixing. On the other hand, here, in the endothermic mixing case, we have-- as you said, an unfavorable enthalpy of mixing, takes energy to make it happen-- a favorable entropy of mixing. And when you combine them, you get this family of curves. And here, I've chosen physically reasonable numbers, plus 30,000 joules per mole. And you see what happens as I go from 300 Kelvin to 2,100 Kelvin. I traverse this series.
So you've got to get a sense for units and temperature scales if you're going to be professional materials scientists and engineers. And so you see this is a reasonable number. I'll just tell you that now. This is a reasonable number for molecular or atomic systems that don't favor mixing, and now you see the temperature scale at which we often run materials processes. So this is typical furnace temperature. This is like 1,800 degrees. It's an aluminum furnace or maybe a graphite furnace, maybe a blast furnace. All right. So now I want to talk about these curves a little more. Before I move on, questions, please. Questions on how that looks and why it looks the way it looks. We're going to talk about what the implications are in a minute, but-- STUDENT: So which sign of A favors mixing, again? RAFAEL JARAMILLO: So if a naught is positive, it's endothermic, takes heat. So in general, this means the enthalpy will disfavor mixing. It means that mixing is a higher energy process. Mixing is a higher energy state than unmixed, as opposed to exothermic, where mixing is a lower energy state. So it's favorable. And if you remember way back to lecture 1, there are processes that are spontaneous that are driven by lowering the system energy, sometimes at the expense of entropy. There are other processes that are spontaneous, that raise the energy, but they're spontaneous because the entropy also is significantly enhanced. That was the hot and cold pack example. And then as you vary composition and as you vary temperature, you have transitions between those cases. You have spontaneous processes that are driven by entropy and spontaneous processes that are driven by enthalpy. This is a case of a spontaneous process that's driven by both. For ideal mixing, with either no enthalpy of mixing or an exothermic enthalpy, mixing is going to be spontaneous because it lowers the energy while also increasing the entropy. So this is a win-win. But in many interesting cases, you have instead a balance.
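That balance can be put into numbers. A quick sketch of my own (not the lecture's plotting code), using the same illustrative a0 = 30,000 joules per mole:

```python
import math

R = 8.314  # gas constant, J/(mol K)

def dG_mix(x, T, a0):
    """Simple regular solution model:
    dG_mix = a0*x*(1 - x) + R*T*(x*ln(x) + (1 - x)*ln(1 - x))."""
    return a0 * x * (1 - x) + R * T * (x * math.log(x) + (1 - x) * math.log(1 - x))

a0 = 30_000  # J/mol, endothermic mixing
# Low temperature: the enthalpy term dominates, mixing costs free energy.
print(dG_mix(0.5, 300, a0) > 0)    # True
# High temperature: the entropy term dominates, mixing is spontaneous.
print(dG_mix(0.5, 2100, a0) < 0)   # True
```

At x = 0.5 the enthalpy term is a fixed +7,500 J/mol, while the -T delta S term grows linearly with temperature, which is exactly why the family of curves flips sign somewhere between 300 and 2,100 Kelvin.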
Then you have a situation where you have to put energy in, but you get an entropy benefit. And so what happens is in the balance. This goes right back to the hot and cold pack. All right. Allow me to move on. I want to talk about the curvature. We haven't talked about curvature much, the curvature of this thing and solution stability, curvature and stability. So again, use the simple regular model. For the simple regular model, the curvature of delta G mix changes with temperature for the case of endothermic mixing. So I'm just going to repeat what we just drew at low temp, medium temp, and high temperature. So at low temp, we have delta G-- I'll use some colors here-- delta G of mixing is like that. So the curvature, d squared delta G mix by dX2 squared, that curvature is everywhere negative. This is negative curvature everywhere. Another way of putting that is that there are no inflection points. At intermediate temp, medium temperature, we might have something like this. That individual has two inflection points-- here we got a little bit darker. The curvature changes sign. And it does it twice. There are two inflection points. It changes sign twice. And at high temperature, you go back to a situation with no inflection points, except here the sign is flipped. The curvature is now positive everywhere, and again, no inflection points. All right. Why am I telling you this? So I want to draw a little picture here. I want you to consider spontaneous, microscopic fluctuations. For example, let's consider a system of squares and triangles at 50/50 composition. And let's go with the green and orange again. They have nice contrast. So let's imagine at some time-- we barely ever say time in this class, but I just said it-- all right, so sometimes, I'm just going to draw-- this is very conceptual. I'm not actually drawing what a real system looks like, you know that. Let's see.
How did I want this to be? I have a particular way that I want this to look, although the lesson could be taught through lots of different ways of drawing this. 1, 2, 3, 4, 5, 6, 7, 1, 2, 3, 4, 5, 6-- I was going to put another one somewhere. Do I have 7? 1, 2, 3, 4, 5, 6, 7, 1, 2, 3, 4, 5, 6. Let's put another one here. It doesn't quite matter. So just imagine, we have some mixed system. This could be on a lattice. This could be on a fluctuating lattice. This could be in a gas. This could be in a liquid. And now what I want to do is I'm going to imagine a thermal fluctuation that swaps positions through thermal fluctuation. And now we're looking at the system a moment later, and what does it look like? It looks like-- see if I can do this. And I wanted to have this down here and this here and this here, and then I had a triangle up here. And I have a triangle and a triangle and a triangle, and I have another triangle. And I have a triangle, and I have a triangle. All right. What on earth? So you just had this randomly fluctuating system, but the fluctuation is such that it created a cluster of triangles. And you have to imagine that this kind of thing happens all the time, 10 to the 15 times per second in any thermal system, any sort of liquid at room temperature. Not that many times per second in solids, but still many billions of times per second in solids, you have these fluctuations that are happening. And here's the question, will this triangle-rich cluster spontaneously dissolve or grow? That's a question we're going to ask. And to analyze this, we follow the money-- what do we follow? It's like follow the money but for thermodynamics. STUDENT: Follow the Gibbs free energy. RAFAEL JARAMILLO: The Gibbs free energy. Gibbs free energy is like thermo cash. That's it. So we follow the Gibbs free energy. So that's what we're going to do.
We're going to analyze the Gibbs free energy of this little fluctuation, and that will tell us whether that fluctuation will spontaneously grow or spontaneously dissipate. How do we do that? Well, we calculate the free energy of spontaneous composition fluctuations, and then we draw the plot. So this is the way I want you to think about it. We have delta G of mixing, and this x-axis here is X2, and we have some segment of a curve. We're zooming way in right now. We're not looking at the whole solution model. We're just zooming way, way in and looking at a segment of this curve. And so let's imagine that we start here at overall composition X2. So the system is initially uniformly mixed at composition X2. And then what we're going to do is we're going to spontaneously fluctuate into regions. We're going to spontaneously have a little bit of region that's X2 rich and a little region that's X2 poor. And I'm going to call this X2-- r for right, you can call it whatever you want-- and X2 left. Just for today's lecture and drawing, left and right make some sense. So we're going to spontaneously fluctuate into regions with compositions X2 left and X2 right. How much of each material is there? How much of this composition is there and how much of this composition is there? That's determined by the phase fractions. I'll come back to this in a minute. Let me just get this down on paper. The phase fractions, that is, what fraction of the system is at the left composition and what fraction at the right composition, are determined by the lever rule. So we're previewing a little bit here what you're going to be reading and working on in the future. But if these compositions are equally spaced around the central composition, that is, X2r minus X2 is the same as X2 minus X2 left equals, let's just say, some dx, some little differential change. So this is dx, and this is dx, the same spacing. Then you have simply f left equals f right equals 1/2.
So half the system is a little rich in component two, and half the system is a little poor in component two. And what you want to do is calculate the differential change, the change in the overall free energy of mixing. And so what you get is something like the following: the phase fraction of the left stuff times the free energy of mixing evaluated at the composition of the left stuff plus the phase fraction of the right stuff times the free energy of mixing evaluated at the composition of the right stuff minus-- not the right stuff, that's funny-- minus what you had before the spontaneous fluctuation. So this is after fluctuation, and this is before fluctuation. Now, a word before I go on. I'm actually helping you on the p set. So finishing this calculation is one of the problems on the p set. Reading about-- and then proving the lever rule is also on the p set. So this is all stuff for next week. So if some of these concepts are new to you, that's OK. It's coming up. And these concepts are not particularly complicated. You just want to become exposed to them. And the reading for the next week also includes a section of Callister, which I've discussed. There's more pictures. It's a little more entertaining than DeHoff. And so you can get this content from Callister or DeHoff. And I'll make one final note, which is that I have a mini lecture-- one of these-- what are these called? These lectures where I was standing in front of a piece of glass. A light board, a light board video on the lever rule. So all of that's available for you. Anyway, so this is what you do. You calculate this change, and you can show, or you will show, that negative curvature leads to spontaneous unmixing. So that's going to be the condition for those little unmixed clusters, those little fluctuations, to grow in size and for the system to spontaneously, over macroscopic length scales, unmix. And we have a special name for this process; it's called spinodal decomposition.
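The "calculate this change" step can be sketched numerically. This is my own preview of the p-set calculation, with illustrative numbers, using the simple regular model from earlier:

```python
import math

R = 8.314       # J/(mol K)
A0 = 30_000.0   # J/mol, endothermic mixing (illustrative number)

def dG_mix(x, T):
    return A0 * x * (1 - x) + R * T * (x * math.log(x) + (1 - x) * math.log(1 - x))

def fluctuation_cost(x, T, dx=1e-3):
    """Change in free energy when composition x splits into equal fractions
    (f_left = f_right = 1/2, by the lever rule) at x - dx and x + dx."""
    return 0.5 * dG_mix(x - dx, T) + 0.5 * dG_mix(x + dx, T) - dG_mix(x, T)

# Negative curvature (low T at x = 0.5): the fluctuation lowers G, so it grows.
assert fluctuation_cost(0.5, 300) < 0     # spontaneous unmixing
# Positive curvature (high T): the fluctuation raises G, so it dissolves.
assert fluctuation_cost(0.5, 2100) > 0    # the mixture is stable
print("negative curvature -> fluctuations grow")
```

For small dx this cost is just (1/2) times the curvature times dx squared, so its sign is the sign of d squared delta G mix by dX2 squared, which is the result you'll prove on the p set.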
In my experience, and as far as I know, spinodal decomposition is synonymous with spontaneous unmixing. They exactly mean the same thing. And you can similarly show that mixtures are stable for positive curvature. So this is neat because we get to analyze our solution model a little more. We also get to start learning about binary phase diagrams, which is coming up. So we learn some concepts, like the lever rule and phase fractions. And we learn something about stability. Before pausing and stopping for questions, I want to share some pictures about spinodal decomposition. So this is a really important general phenomenon. Here's a general spinodal phase diagram. This is from the book. This is from chapter 10 of the book. So we're not there yet, but here's a delta G of mixing for a system that undergoes spinodal decomposition. Let me grab a laser pointer here. So this delta G of mixing has this feature that its curvature changes sign, and it's got inflection points, rather like what we were showing half an hour ago. And the resulting phase diagram has this two-phase region. And we're not quite there yet analyzing phase diagrams like this, but we're going to get there within a couple of lectures. So a free energy composition diagram like this leads to a phase diagram like this. And here's an example. This is dodecane and ethanol, a system that at high temperature mixes and at low temperature separates. I think Aviva was asking earlier, which one was which, high temperature, low temperature. This is the system, which is an endothermic mixing system. It takes energy to mix. So at low temperature, where enthalpy terms tend to dominate, it prefers to unmix. At high temperature, where entropy terms dominate, it prefers to mix. And unmixed is within this region of composition called the miscibility gap. And you get this phase diagram. This is a very general phenomenon. I pulled four examples here of spinodal systems, and this spinodal decomposition analysis shows up.
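For the simple regular model, the boundary of that two-phase region's unstable core follows directly from setting the curvature to zero. A sketch with my own numbers, not the book's:

```python
R = 8.314  # J/(mol K)

def spinodal_T(x, a0):
    """Temperature at which d2(dG_mix)/dx2 = -2*a0 + R*T/(x*(1 - x)) crosses zero.
    Below this T, the curvature at composition x is negative (spinodal region)."""
    return 2 * a0 * x * (1 - x) / R

a0 = 30_000  # J/mol
Tc = spinodal_T(0.5, a0)         # top of the miscibility gap: T_c = a0/(2R)
print(round(Tc))                 # 1804
print(spinodal_T(0.2, a0) < Tc)  # True: the spinodal is lower away from x = 0.5
```

So with the 30,000 J/mol number from the plots, the two-phase region closes a bit above 1,800 Kelvin, which is consistent with the 300-to-2,100 Kelvin family of curves shown earlier.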
You can't get away from it. Now that you've heard of it, you are not going to be able to get away from it. It is very general, and it's good to be introduced now. Of course, it happens in liquid phases, nucleation and growth. We just showed an example of dodecane and ethanol. Those are two liquids. And these are randomly picked pictures from the literature. So I don't really cite these properly, but there are pictures here of nuclei forming and growing, secondary phase nuclei within a liquid. Before we move on, how can you tell this is a liquid? Anybody? How can you tell this is a liquid system and not a microscopy of a solid system? This goes beyond 3.020, by the way, just if someone has intuition about it. STUDENT: Maybe because there's less order than there would be in a solid [INAUDIBLE]. RAFAEL JARAMILLO: So that's not it. And so that's a really good point. This length scale here is not the length scale of atoms and molecules. You can't see the atoms and molecules on this length scale, although that's a good point. This is a macroscopic length scale that you might see under a microscope. So you don't see atoms and molecules. What I'm looking for here is the fact that they're round. Liquids tend to have isotropic surface energy. Liquid particles tend to be rounded. And droplets of one liquid in another, like your droplets of oil in your salad dressing, tend to be round. So that's a surface effect. We don't have time for surface effects in this class, but it's a good clue. All right. Here's another example, spontaneous unmixing in high entropy alloys. So this is a micrograph now. Still not on the atomic scale, although I guess the total x and y direction here is no more than a couple tens of microns, if I had to guess, but we're still macroscopic. We're not on the atomic level here. But here you see that there's been spontaneous unmixing of this iron, copper, nickel, manganese alloy into iron-rich regions and copper-rich regions.
And here, the boundaries between the regions follow some geometrical order. Now the boundaries seem to line up along some grid, and they don't just curve randomly. And so that's reflecting the fact that these are crystalline materials with an anisotropic surface energy. So this is a counterpoint to the liquid case. And as you can imagine, this pattern formation, that's what it's called often, this pattern formation is very important to controlling the material properties. Here's another case, slightly surprising maybe, where we see spinodal decomposition, and it's in neural activity in the cortex, where you have decomposition into regions of high activity and low activity based on the input. The math that we developed, or started to develop, just a couple of minutes ago has been generalized and generalized again and found to apply to many, many systems, not just materials. So here is neuroscience, and it applies to the structure formation of the universe. Who knows what this picture is? STUDENT: Is that cosmic background radiation? RAFAEL JARAMILLO: Cosmic microwave background across the entire visible universe. And the physicists are very concerned about the fact that the Big Bang started off with spherical symmetry and we ended up with a universe with you and me and MIT being regions of high density and most of space being regions of-- Harvard being a region of low density, for instance. And so we have this-- how did this happen? And the mathematics of that, the physics of that, is analyzed in terms of spinodal decomposition. It's called quantum spinodal decomposition. Fully beyond my understanding. But I do know that if you look at current papers in the cosmology literature, you will see free energy composition diagrams very much like this one. So this is a general phenomenon. It's 10:55. I want to-- let's see-- remind folks that we're shifting towards phase diagrams here. Remember Callister, so a slightly different textbook just for a short time.
It's an easy read. There's also a paper that is focused on teaching spinodal decomposition that I'm going to post on the website, and that will help a little bit with understanding and also with the p set. And there's a white board-- sorry-- a light board video on the lever rule. And we're almost done with the exam grading, and there's a new p set out within minutes.
MIT_3020_Thermodynamics_of_Materials_Spring_2021
Lecture_29_Boltzmann_Distribution.txt
RAFAEL JARAMILLO: Right. So what we're going to do today is we're going to finish our coverage of statistical thermodynamics, and then we're going to do a couple lectures, starting Monday. Friday's a holiday. So we'll do a couple lectures, starting Monday, on reacting systems, and that includes oxidation processes. And that's mainly out of chapter 11, and then that'll be it. Then we're going to have one more hour on social and personal topics, then we're going to have a game show, which I don't know how to run remotely. We're going to try it. And then the last exam, which is noncumulative. So let's pick up where we left off, max entropy and the Boltzmann distribution. And this is continued, continued from the last lecture. So I don't like to do this, where I continue a derivation of something across two lectures, but that's how it broke. So from last time, just at a high level, we decided that we wanted to optimize entropy for an isolated system. And we had the Boltzmann entropy formula, so we could do some math. And we could write down dS equals 0 subject to constraints. And the constraints being conservation of energy and conservation of mass. And we're going to do this using the method of Lagrange multipliers. And we had two of them because we had two constraints, alpha and beta. And we ended up with an expression like this, some stuff-- coefficients times independent variables. And in order to get our max entropy condition-- I should say optimize entropy-- we have to set these coefficients to zero. So it was a very familiar thermodynamic thing that we did, and what? We determined the first multiplier by normalization. So this is where we left off. Where we left off was the following. We had ni over n total. So this is a distribution function of fractional distributions of particles in state i normalized by the total number of particles. And it was like this, this multiplier beta as yet undetermined, energy of state i normalized by Boltzmann's constant.
And that whole thing normalized by Q, which was the partition function. And this was the sum over all possible states of e to the beta epsilon i over Boltzmann's constant. And so this is where we left things. So now what we're going to do is we're going to determine beta. We're going to determine beta by considering a microscopic, reversible process where some of the particles change their state. So we're going to do this. We're going to analyze dS for a process of some state changes. And I'm going to just go ahead and write down the equations that we've been using and developing for this. dS equals minus kB times the sum over i of log of ni over n total, dni. You can pull that from the last two lectures. But now we actually have an expression for this distribution, so we're going to use that expression: the distribution is e to the beta epsilon i over Boltzmann's constant, over Q, and then we have some logs of exponentials and things. So we can simplify this a little bit. And this equals minus beta times the sum over i of epsilon i dni plus kB log of Q times the sum over i of dni. And taking our definitions, this equals minus beta dU plus kB log of Q dn total. So the first thing to note: this is zero by construction. This is the Lagrange multiplier working, because we wanted dS to be 0 under conditions where dU was 0 and dn total was 0. And we see that for this distribution, it works out. So the Lagrange multiplier worked. The method worked. So I'm going to say zero by construction. I really like this orange pen, but life is too short for fading sharpies. So by construction, this is zero. If you'd like, you can convince yourself that if you have a different distribution function here-- let's say a flat distribution function, a distribution function that doesn't depend on energy or has some other functional dependence on energy, I don't know, polynomials, you can put anything in there-- you can show that in that case, dS will not be 0.
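That last claim, that any other distribution at the same U and N has dS not equal to 0, i.e. lower entropy, is easy to check numerically. This is my own sketch, not from the lecture: three levels with energies 0, 1, 2 in units where kB = 1 and kT = 1; the perturbation (+d, -2d, +d) conserves both particle number and energy.

```python
import math

eps = [0.0, 1.0, 2.0]
Q = sum(math.exp(-e) for e in eps)          # partition function at kT = 1
p_boltz = [math.exp(-e) / Q for e in eps]   # Boltzmann occupations n_i / N

def entropy(p):
    """S/(N kB) = -sum of p_i ln p_i."""
    return -sum(pi * math.log(pi) for pi in p)

S0 = entropy(p_boltz)
for d in (0.01, -0.01):
    p = [p_boltz[0] + d, p_boltz[1] - 2 * d, p_boltz[2] + d]
    assert abs(sum(p) - 1.0) < 1e-12                           # N conserved
    u = sum(pi * e for pi, e in zip(p, eps))
    u0 = sum(pi * e for pi, e in zip(p_boltz, eps))
    assert abs(u - u0) < 1e-12                                 # U conserved
    assert entropy(p) < S0    # any allowed deviation lowers the entropy
print("Boltzmann distribution maximizes S at fixed N and U")
```

The same check fails in the other direction for a flat distribution: perturbing it toward the Boltzmann form at fixed U and N raises the entropy, which is the Lagrange multiplier argument seen from the outside.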
So again, the Lagrange multiplier method works. But here's the takeaway. We want to compare this to what we derived two months ago from the combined statement, which is that dS equals 1 over T dU, plus P over T dV, minus mu over T dn. And this is really the important thing here. We are comparing the coefficients. And if you remember-- this is almost general strategy stuff-- if you compare coefficients, you can, by inspection, set like coefficients equal. So what is dS, dU at fixed volume and fixed particle number? Well, from our statistical method, that's minus beta, but from the classical method, that's 1 over temperature. And so we can simply conclude that beta equals minus 1 over temperature. So now with that identified, we have the Boltzmann distribution. Oh, there's more paper. So with beta equals minus 1 over temperature, we have the Boltzmann distribution: the number of particles in state i normalized by the total number of particles equals e to the minus Ei over kBT, normalized by the partition function, where the partition function equals the sum over all states of e to the minus Ei over kBT. And this generally has a falling exponential form with state energy. So on the axes, state energy E of i versus occupation n of i-- a falling exponential function. In words, this is the distribution that maximizes entropy for an isolated system with n total particles distributed over states with fixed-- so unchanging-- energy levels, according to that set of occupation numbers. And what is the state of maximum entropy for an isolated system? We have a name for that. What's the state of max entropy for an isolated system? Think way back to late February even. STUDENT: Maybe equilibrium? RAFAEL JARAMILLO: Yeah, equilibrium. So this is an equilibrium distribution. So we have an equilibrium distribution of single-particle energy levels, or single-particle energies.
And again, you knew this already, but equilibrium does not mean everything is at the lowest energy. All semester, we've been exploring that consequence. So if you were at zero kelvin-- or you're a physicist, which means you have a frigid heart, you're at zero kelvin-- this is your distribution function. Everything's piled up at zero energy. But at any real temperature, you get energetic particles, and you get them with a distribution function. So a couple of observations here. The first observation: the Boltzmann distribution and likelihood. Again, likelihood-- this is a stealth class on statistics. That's what statistical thermodynamics is. It's like a stealth class on statistics. So we'll ask, how much less likely is it to find a particle in a state with energy, let's say, Em-- that's an ugly epsilon-- equals El plus delta e, than in a state with energy El? So that's a typical thing that you might need to analyze. So let's see. nm over nl, that's how you turn the word problem into a math problem. How much less likely is it to find this distribution than this distribution? Within the equilibrium distribution, how much smaller is this number than this number? And you'll find the partition functions cancel out, and you get the following: e to the minus, El plus delta e, over kT, over e to the minus El over kT. And so the e to the minus Els cancel, and you get e to the minus delta e over kT. Right. So this depends on delta epsilon over kT, clearly. It becomes less likely with increasing energy splitting delta e, and it becomes more likely with temperature. All right. So you're going to explore this a little bit on the p-set in the context of semiconductors. Another point: the thermal energy. This quantity is really important. It shows up in the natural sciences over and over and over again. And we're starting to see why. The why is what we've just shown, but the implications are really broad.
You're going to see this all over the place. So let's remind ourselves what the Boltzmann constant is. It's R over NA. So it's the ideal gas constant divided by Avogadro's number. We have that from before. And this is 1.380e-23 joules per kelvin. So it has units of entropy. But that's not a number that I remember. Here's the number I do remember. And if you work on molecular systems or semiconductor systems-- well, gee, a lot of the systems that we work on in DMSE-- this is a useful number to have memorized: Boltzmann's constant in units of eV per kelvin, 8.617e-5. Why on earth would you memorize that? It sets the energy scale for likely fluctuations. And we have-- all these words have meaning, likely fluctuations. Those words carry a lot of weight. So fluctuation: single-particle fluctuations with energy on the order of kBT are likely. That's something you're going to find naturally occurring, as opposed to fluctuations which are much, much larger than the thermal energy. Those are unlikely. We have all sorts of language for this that comes up in all sorts of contexts. For example, these fluctuations might be called thermally activated processes. You need a little bit of thermal energy to activate it. All right. So not only is this a number you should know, but the thermal energy at room temperature is a number you should know. At 298 kelvin, kBT equals-- well, in joules, it's not a number that I know because it's kind of unwieldy, but in eV, it is a number I know. So that's the natural energy scale of molecular processes. So it's 0.0257 electron volts, or 25.7 millielectron volts. So approximately 25 millielectron volts is the thermal energy at room temperature. So if you have a molecular system or an atomic system or an electronic system, processes with that energy are likely to be happening spontaneously at ambient temperature.
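These numbers are easy to check. A quick sketch using the constants quoted above; the 0.5 eV barrier at the end is a hypothetical example energy, not a value from the lecture:

```python
import math

R = 8.314          # ideal gas constant, J/(mol K)
N_A = 6.022e23     # Avogadro's number, 1/mol

kB_J = R / N_A     # Boltzmann constant in J/K  (~1.380e-23)
kB_eV = 8.617e-5   # Boltzmann constant in eV/K (the one worth memorizing)

T = 298.0          # room temperature, K
kT = kB_eV * T     # thermal energy in eV (~0.0257 eV = 25.7 meV)
print(f"kB = {kB_J:.3e} J/K, kT(298 K) = {kT * 1000:.1f} meV")

# Relative likelihood of a fluctuation of size delta_e: exp(-delta_e / kT).
# 0.025 eV ~ kT is routine; 0.5 eV (a hypothetical barrier) is very rare.
for delta_e in (0.025, 0.1, 0.5):   # eV
    print(f"delta_e = {delta_e} eV -> Boltzmann factor "
          f"{math.exp(-delta_e / kT):.2e}")
```

A fluctuation of order kT has a Boltzmann factor near 1/e, while one twenty times larger is suppressed by many orders of magnitude, which is the quantitative content of "thermally activated."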
Processes with much, much higher energy are not likely to be happening spontaneously at ambient temperature. So you can see how powerful this is. For example, it's the foundation for the Arrhenius rate equation. The Arrhenius rate equation-- and I'm going to use the notation from a chemistry class here-- a rate k equals a prefactor times e to the minus activation energy divided by Boltzmann's constant times temperature. All right. Now, this is a rate. This is an activation energy, rate and activation energy. And we are going to say hello to 3.091 here. You may have seen that there. So that's another context where you're going to see these exponentials, and now you see where they come from: they come from this Boltzmann distribution that we just derived. More. Let's talk about-- I have one slide on this, a little dense, but this is to help you with what comes later. Let's talk about the Maxwell-Boltzmann distribution. And I posted the wiki entry for this on the website. So it's a good article to read, and I think that the lab instructors refer to it as well. So we're going to consider particles with kinetic energy. Kinetic energy is the momentum vector modulus squared divided by 2 times the mass. That's the kinetic energy of a particle of mass m. And we're in three-space, so this is px squared plus py squared plus pz squared, divided by 2m. So that's the energy per particle in a state of momentum p. So what we want to do is we want to figure out the distribution of particles as a function of energy. This distribution in energy-- I use f of v for consistency with Wikipedia-- depends on both the Boltzmann distribution-- I had an error in my notes there-- n of i over n total equals Q to the minus 1 times an exponential, e to the minus, momentum of state i modulus squared, over 2mkT. So it depends on that, the Boltzmann distribution, and also it depends on the density of states in momentum.
This concept of density of states is not something that we've covered in this class. If you are in 029, you've discussed this. You're going to see it in the lab, but this really is a preview of what comes in the fall, when density of states becomes second nature for you in the context of quantum mechanics. But density of states means: well, this is the Boltzmann distribution for a given state i with this energy-- how many such states are there? How many such states are there? So the 3D differential is 4 pi p squared dp. We get the 4 pi there because we integrated over the spherical coordinates, 4 pi steradians all around the sphere. And that equals 4 pi m root 2mE dE. So this is a distribution of states in energy E. And you combine these two things-- you combine the density of states and the Boltzmann distribution-- and you get the Maxwell-Boltzmann distribution: the distribution of energetic particles zipping around, kinetic energy included, as a function of energy. And it is 2 root E over pi, times 1 over kT to the 3/2, times e to the minus E over kT. So this is the Maxwell-Boltzmann distribution. And if you plot particle speed versus probability: here's a low temperature, you have something that's pretty sharply spiked; here's an intermediate temperature; and then a high temperature. This is a series of increasing temperatures. And so this has the feature that there are more particles that go faster at higher temperature. You can see that coming from this exponential. And there's also a vanishingly small number of particles at zero speed. And there's going to be some mean speed that characterizes the distribution. And that mean moves with temperature. So I haven't derived this in detail, because we haven't spent time on density of states, but we're just exploring implications of that Boltzmann distribution. There's another implication.
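The energy form of the distribution quoted above can be sanity-checked numerically: it should integrate to 1, and its mean should be the equipartition value (3/2)kT. A sketch in units where kT = 1, using a simple midpoint-rule integration:

```python
import math

def f_E(E, kT=1.0):
    """Maxwell-Boltzmann distribution over energy:
    f(E) = 2 sqrt(E/pi) * (1/kT)^(3/2) * exp(-E/kT)."""
    return 2.0 * math.sqrt(E / math.pi) * kT ** -1.5 * math.exp(-E / kT)

# Midpoint-rule integration over 0..40 kT (the tail beyond that is negligible)
dE = 1e-3
Es = [(i + 0.5) * dE for i in range(int(40 / dE))]
norm = sum(f_E(E) for E in Es) * dE          # should be 1
meanE = sum(E * f_E(E) for E in Es) * dE     # should be 3/2 in units of kT

assert abs(norm - 1.0) < 1e-3    # the distribution is normalized
assert abs(meanE - 1.5) < 1e-3   # <E> = (3/2) kT, equipartition
```

The same kind of check works for the speed form of the distribution; the mean shifting upward with temperature is exactly the behavior of the sketched curves.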
The last thing I want to cover, with only a couple of slides to go, is the concept of ensembles, which is really just 3.030 readiness. We're doing this just to show you some concepts which you're not going to use in this class, but you will use in kinetics and microstructure in the fall. So before we move on to discuss ensembles, I want to pause and take questions on the Boltzmann distribution, or Maxwell-Boltzmann, or anything else-- Arrhenius or anything else related. Who has seen the Arrhenius rate law? There's not a lot of participation on Zoom, so I understand that. I'm guessing that a large number of people raised their hands. And then if I ask you about the Boltzmann distribution, there'd be a small number. And if I ask about the Maxwell-Boltzmann distribution, there'd probably be an even smaller number. But I bet that everyone's seen something of this form somewhere before. So I hope it's a little bit satisfying for you to see where it comes from and to understand the principle behind it: maximum entropy. So let's talk about ensembles. Ensembles are-- what? I looked up the definition; the Cambridge dictionary says: a group of things or people acting together or taken as a whole. So this is a concept, and we're going to use it a lot in statistical thermodynamics, if not necessarily in this class. But like I said, this is 3.030 readiness here. If I didn't tell you about this, I'd get in trouble with Professor Hu. All right. So we're going to start with the microcanonical ensemble. And this is something that you've seen before-- something called the microcanonical ensemble. This is the set of all possible microstates. So this type of stuff that we've been doing: the set of all possible microstates of a system with fixed energy, volume, and particle number. And a system with fixed energy, volume, and particle number-- that's an isolated system.
For an isolated system, equilibrium is the state of max entropy. That's what we were just analyzing. All right. So this thing, which we've been doing for the last 2 and 1/2 lectures, is called the microcanonical ensemble. I just gave a name to it. And so the partition function that we've been analyzing is actually better known as the microcanonical partition function, normally written as Q, as we've been doing. And it is a sum over single-particle energy states-- a sum over all possible single-particle states. And we know that we're at equilibrium, with S maximized, for what? For the Boltzmann distribution. So the probability of finding a particle in state i equals the number of particles in state i in that distribution over the total number of particles, which equals e to the minus Ei over kBT, over Q-- you are getting sick of seeing me write this by now, all right-- the probability of finding a single particle in state i with energy Ei. So this is a repeat of everything we just did. I just gave it a new name. I called it microcanonical. So again, that name is kind of highfalutin, but there you have it. But we can generalize. So now I'm going to tell you about a different type of ensemble. We've been analyzing the microcanonical. Now, I'm simply going to tell you about the existence of a canonical ensemble. And please, don't ask me where these names come from. A canonical ensemble is the set of all possible microstates of a system with fixed volume and n, but not energy. So now we're allowing the system to exchange energy. The system is closed. It's rigid and closed. So volume and mass are conserved. The system is rigid and closed but can exchange energy in the form of heat with the surroundings. And the system energy, which we're now going to give a label to, u sub nu-- the system energy can fluctuate. So I want to draw for you a representation of the canonical ensemble.
So we're going to imagine-- I'll grab a fine-tip marker-- we're going to imagine a system labeled i with energy u sub i, but this thing is in thermal contact with a lot of other systems. So we're going to imagine a whole population, or, if you like, an ensemble of systems, all with fixed volume and fixed particle number but with diathermal walls. So if heat is like sound, you have to imagine this is an apartment building that's really noisy. Here are all your neighbors. So the volume is fixed and the particle number is fixed, but you're exchanging energy through the walls. But this ensemble as a whole-- so I've drawn an ensemble. I've drawn a set or a group considered as a whole. I've drawn a bunch of subsystems now that can exchange energy with each other. They each have the same volume-- well, that's how I've drawn it, but you can imagine they each have the same volume, each have the same particle number-- but the ensemble as a whole is isolated. So all of these subsystems together-- and you have to imagine this being a very, very large number here, not just the 4 by 6 or whatever I've drawn-- are isolated. And so this is an ensemble, and this is an ensemble that has its own partition function. It's known as the canonical partition function. And that's typically written as Z, and this is now a sum-- as before-- a sum over states. But we're not summing over single-particle states. We're summing over system energy states-- a sum over all possible states of the system. So it's distinct from before, when we were summing over single-particle states. And equilibrium-- hold on. When we had equilibrium in an isolated system, it was max entropy. Now, think back two months ago. When we had equilibrium at fixed temperature and fixed pressure, that was minimum Gibbs. But there was another case, which we haven't spent a lot of time on: equilibrium at fixed volume and fixed temperature. Do you remember what thermodynamic potential we use when we have a system at fixed volume and fixed temperature?
STUDENT: Is it Helmholtz? RAFAEL JARAMILLO: Helmholtz. Equilibrium at minimum F-- again, we haven't used that much in this course-- is reached for the distribution: the probability of finding the system in state nu equals e to the minus u nu over kT, over Z. So it has the same functional form as the Boltzmann distribution, except instead of single-particle energies it's the energy of the system as a whole, and instead of maximizing entropy, it minimizes the Helmholtz free energy. So there's something very similar that's happening here-- similar to when we started with entropy, and then we moved to Helmholtz energy, and then we moved to Gibbs energy as we changed the boundary conditions of the system. And I think you'll remember, in problem set 2, we really delved into this. So there's something very similar that happens in statistical thermodynamics, where you start with the microcanonical ensemble and its partition function, and then you go to the canonical ensemble and its partition function, and there's more. There are other ensembles. The next one is called the grand canonical ensemble, and that corresponds to systems that can exchange particles as well as energy with their surroundings. We're not going to bother to introduce that. So this is the Boltzmann distribution, but for the canonical ensemble now. STUDENT: So in this case, what was it that you held constant? RAFAEL JARAMILLO: So this is equilibrium at fixed temperature. When you have an ensemble of systems that can't exchange volume and can't exchange particle number, but can exchange heat energy, they reach equilibrium when their temperatures equalize. That's thermal equilibrium. So that's what we've done here. We've set up a collection, an ensemble of subsystems, that reach thermal equilibrium with each other because they can exchange heat energy. It all works out. You end up getting minimum F for this distribution. This is the probability of finding the system in state nu with energy u of nu. That's what that probability is.
So I want to leave you with some final words on partition functions. We've seen some partition functions, Q and Z. And there are others, which we won't cover in this class. We haven't really covered this; it won't be on the exam. This is just commenting on what comes next-- again, in service to later classes in our curriculum. So the partition functions describe equilibrium and fluctuations. The partition functions really describe the entire thermodynamic properties of a system. So for instance, they are the basis for calculating thermodynamic properties and expectation values-- that is, average quantities. So for example, the mean energy of a system can be shown to be this derivative: minus d log Z, d 1 over kT. That's a little bit formal-- you can't necessarily calculate anything yet-- but in principle, if you know the partition function, you can calculate the mean energy. The heat capacity of a system is 1 over kT squared, d squared log Z, d 1 over kT squared. The entropy of a system can be shown to be the mean energy divided by temperature, plus kB log Z. The Helmholtz energy of a system is minus kBT log Z. And it's not just equilibrium properties; it's properties of fluctuations. So this gets really fun in a later class and has a lot of implications. For example, the root mean squared energy fluctuation of a system is another quantity that can be calculated from partition functions. So I'm just showing you things, but if you have the partition function for a system, you can calculate the entire thermodynamic properties of that system. Another example is in the textbook. If you look in DeHoff, they calculate the partition function for a couple of different systems: a two-level system, and then a system of a monatomic gas. And then by applying formulas such as these, they derive the ideal gas law. So it's the ideal gas law from first principles. It's no longer empirical. So I'm going to leave you with some thoughts.
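A minimal sketch of those formulas in action, for a two-level system like the one just mentioned (these numbers are illustrative, not the textbook's; units with kB = 1 and level splitting ε = 1): compute ln Z, get the mean energy U from the derivative formula U = −d ln Z/d(1/kT) by numerical differentiation, and check that F = −kT ln Z and S = U/T + kB ln Z satisfy F = U − TS.

```python
import math

eps = 1.0   # level splitting: energies 0 and eps, in units where kB = 1

def lnZ(x):
    """Log of the two-level partition function; x = 1/(kB*T)."""
    return math.log(1.0 + math.exp(-x * eps))

T = 0.8
x = 1.0 / T

# Mean energy two ways: closed form, and U = -d lnZ / d(1/kT)
U_closed = eps * math.exp(-x * eps) / (1.0 + math.exp(-x * eps))
h = 1e-6
U_deriv = -(lnZ(x + h) - lnZ(x - h)) / (2 * h)   # central difference
assert abs(U_closed - U_deriv) < 1e-8

# Helmholtz energy and entropy from the partition function,
# plus the thermodynamic consistency check F = U - T*S
F = -T * lnZ(x)
S = U_closed / T + lnZ(x)
assert abs(F - (U_closed - T * S)) < 1e-12
```

The same pattern works for any Z you can write down, which is exactly why the partition function "contains" the full thermodynamics of the system.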
At this point, students often say, ah, this is the real stuff. This is somehow the most fundamental knowledge of thermodynamics-- the partition functions. So why did we mess around with all that classical thermodynamics? If the partition functions are the keys to the kingdom, why not just start there? And the answer is that, although this approach is beautiful and elegant and very powerful in places, it is also extremely limited, because we don't know and can't calculate the partition functions for all but the most toy-like problems. And you can see that. If you take thermodynamics in a physics department, you start from this approach. You start from the statistical thermodynamics approach, because physicists love to write down problems that are intellectually rich but may or may not apply to the real world. We're a little bit more applied. So I'll give you an example. There's a toy model of magnetism known as the Ising model. And the Ising model says that you can have spins on a lattice-- or magnetic moments, I should say-- and they can face up or down. That's it. So you have a one-dimensional chain, and each spin can face up or down. And the energy of the system depends on the orientation of the nearest neighbors. It's a very simple model of a magnet. And so if you take a course on magnetism, or you take a thermodynamics course in a physics department, you'll probably calculate the partition function for the 1D Ising chain. It's not that hard. It's kind of fun. However, if we simply make it a two-dimensional Ising system-- and I'm drawing the direction of the arrows at random here, just showing you-- all right. So we've just gone from one-dimensional, which is definitely an imaginary thing, to two-dimensional, which is still imaginary, but we're getting there. We're at least getting close to real space. Calculating the partition function for this model is one of the grand intellectual achievements of 20th-century physics, and it won Lars Onsager a Nobel Prize.
It's horrendously complicated mathematically. I've never been able to follow even the first page of that. It's not in any textbooks. It's too long. So let's say I want to model a real magnet in three dimensions. Forget it. Unless you're Nobel Prize quality, that's an open problem-- have at it. But basically, the point is that for real systems that we care about as engineers, this is a magical construct. In theory, this exists. In theory, it allows you to calculate all the properties of the system from first principles, but in practice, it just remains an intellectual exercise. In practice, we have to go with a much more practical route, which is the route of classical thermodynamics, with its postulates and observations and data and models.
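To make the "toy model" point concrete: the zero-field 1D Ising chain with free ends has the known closed-form partition function Z = 2 (2 cosh(J/kT))^(N−1), and for small N you can verify it by brute-force enumeration over all 2^N configurations — which is exactly the approach that stops scaling in 2D and 3D. A sketch in units where kB = 1:

```python
import math
from itertools import product

def ising_Z_brute(N, J, kT):
    """Brute-force partition function for an open 1D Ising chain:
    E = -J * sum_i s_i * s_{i+1}, summed over all 2^N spin configurations."""
    Z = 0.0
    for spins in product((-1, 1), repeat=N):
        E = -J * sum(spins[i] * spins[i + 1] for i in range(N - 1))
        Z += math.exp(-E / kT)
    return Z

def ising_Z_exact(N, J, kT):
    """Known closed form for the zero-field open chain:
    Z = 2 * (2 cosh(J/kT))^(N-1)."""
    return 2.0 * (2.0 * math.cosh(J / kT)) ** (N - 1)

for N in (2, 4, 8):
    zb = ising_Z_brute(N, J=1.0, kT=1.5)
    ze = ising_Z_exact(N, J=1.0, kT=1.5)
    assert abs(zb - ze) / ze < 1e-12
```

The brute-force sum grows as 2^N, so even a modest 2D grid is out of reach by enumeration; for 2D you need Onsager's machinery, and for 3D there is no closed form at all.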
MIT 3.020 Thermodynamics of Materials, Spring 2021
Lecture 5: Second Law and Entropy Maximization
[SQUEAKING] [RUSTLING] [CLICKING] RAFAEL JARAMILLO: So we've gone through the first law, and we've gone through heat engines, which means it's time for the second law. So we're ready for the second law. All right, and this is what I call a play in four acts. So I'm going to present this by way of Clausius's theorem. You'll see what that is in a minute. Then, we'll move on to reversible processes and entropy, then reversible and irreversible processes, and finally entropy maximization. So we talked about entropy in the first lecture with the baby book. And I think everybody does have this feeling of entropy as being mixed up and disordered, and that's true. That's true. And so one could state the second law of thermodynamics as something like: the entropy-- the disorder-- in the universe is always increasing. Rather like the statement that the energy of the universe is always constant, that's not obviously useful for us as scientists and engineers. So today, we're going to take a different approach to the second law, which is very classical and deductive. And by the end of today, you will have seen the second law and entropy in this classical and deductive way. You will have also gotten this more hands-on sense for entropy as being disordered and mixed up-- and, hopefully, have a better understanding for having seen it in different ways. So I call this a play in four acts-- Clausius, reversible, irreversible, and entropy maximization. All right, one: Clausius's theorem. So this is Clausius's theorem: the cyclic integral of dQ over T is less than or equal to zero. This should look familiar, because we arrived at this at the end of the Carnot cycle. I didn't call it Clausius's theorem. And I told you that there was going to be something funky about this dQ over T quantity, and it's important. So that's more or less why we did the Carnot cycle: to motivate this. This is a cyclic process-- any cyclic process.
It doesn't have to be a Carnot engine-- it can be anything at all-- any cyclic process where the state variables return to their starting point, which is just rephrasing what it means to be a cyclic process. And so this is super general right now. We have any system with initial temperature, initial volume, pressure, and it's a collection of stuff. And you're going through any process at all that brings you back to the starting point. I like my new pen-- by the way, I bought a bunch of different Sharpies: fat, thin, ultra thin, fine, ultra fine. Sharpie markers-- they have Sharpie brand markers now. I was trying them all. Anyway, so that was Clausius's theorem. So we're going to use it now. Two: reversible processes and entropy. Sorry, give me a minute. Can I ask you guys a question? It's less than or equal to 0. Does anyone remember under what conditions it's equal to 0? We saw a case when it was equal to 0. AUDIENCE: When we took, like, forever. Like, it was always in equilibrium for every step. RAFAEL JARAMILLO: Always in equilibrium, right. That's exactly right. There's another word for that, which is a reversible process. But you're absolutely right: always in equilibrium at every step. Reversible process, you get the equal sign. The less than, if you remember, was the heat engine that was less efficient than ideal. Now, we're going to hypothesize that there exist processes for which every step is reversible. And we're going to consider a cyclic process along two reversible paths, R1 and R2. And this is what I mean. We have some state A. We have some state B. And there are two ways to get from A to B, and both of these paths are reversible. All right, so let's think about the different ways we can do this. We can start at A, and we can go to B. And then, we can go back to A. So I'm going to call this minus R2, because we're going the opposite way along path R2. So this is a clockwise cycle-- going clockwise. So let's calculate this thing.
The cyclic integral of dQ over T equals the integral from A to B of dQ over T along path R1, plus the integral from B to A of dQ over T along path minus R2. I'll rewrite the first term-- I mean, I'll just copy the first term. And by virtue of the fact that path R2 is reversible, I can simply say the second term is the negative of going from A to B along path R2. Going backwards along path R2 is just the opposite of going forwards along path R2. And by Clausius's theorem, we know this is less than or equal to 0. Now, we're going to consider going the other way. We're going to start at A, and we're going to go along path R2 and back along minus path R1. So this is counterclockwise. Let's do the same calculation. The cyclic integral of dQ over T equals the integral from A to B of dQ over T along R2, plus the integral from B to A of dQ over T along path minus R1. And just as before, this is going to be the negative of taking path R1. And by Clausius's theorem, this is also less than or equal to 0. So these are by Clausius. Clausius is a fellow with an S at the end of his name, so that apostrophe is trying to be at the end after the S-- but anyway, Clausius's theorem, I should say. So by Clausius's theorem, we have that both of these are less than or equal to 0. One way we went clockwise. The other way we went counterclockwise. But both were cyclic processes. We started at A. We returned to A. We started at A, and we returned to A. We want to compare clockwise and counterclockwise circuits. Both of these things have to be less than or equal to 0. So what do you observe about these terms? AUDIENCE: It's like you just take the negative of. RAFAEL JARAMILLO: They're just negatives of each other, right. They're negatives of each other. All right, so if something has to be less than or equal to 0-- y is less than or equal to 0, and also minus y is less than or equal to 0-- then what can you say about y? AUDIENCE: It's 0. RAFAEL JARAMILLO: Yeah, then y equals 0. That's the only way you can satisfy that. So that's what's going on here. So that means that term equals 0.
So let's write that down. The integral from A to B of dQ over T along reversible path 1, minus the integral from A to B of dQ over T along reversible path 2, equals 0. That means the integral from A to B of dQ over T along reversible path 1 equals the integral from A to B of dQ over T along reversible path 2-- just rearranging. What do we call a quantity whose change doesn't depend on the path you take? From A to B, we have two paths here-- reversible path 1 and reversible path 2. And this quantity, integrated along the process, doesn't depend on which path we took, R1 or R2. What sort of a function do we call that? AUDIENCE: Path independent. RAFAEL JARAMILLO: Path independent. And for thermo, we have a specific fancy name for path independent. AUDIENCE: State functions. RAFAEL JARAMILLO: State functions, right, exactly. That's great. So we're going to-- and this is historically how this came about, by the way. This is based on a lot of data. Don't worry that you're not as smart as people used to be in the past and somehow this isn't obvious to you. This conclusion, which I'm about to just hand you 19 minutes into the hour, is based on centuries of data-- maybe not centuries, maybe like 50 to 75 years of data-- and people finally hypothesizing the existence of entropy S as a new state function. Why? Because it seems to be something which has an exact differential. In a reversible process-- any reversible process, not any specific reversible process, but any reversible process-- dQ over T was equal to the exact differential of entropy. What that means here is that this is entropy of B minus entropy of A, or delta entropy. So there you have it. This is a very deductive way of postulating that entropy exists, and that it's a state function. We found an integral of an integrand that doesn't depend on the path. That's it. From that, you say: I'm going to postulate that there exists entropy.
Its total differential is this for a reversible process, which means I can calculate its change between states along any reversible path. Let's keep moving. Next, three: reversible and irreversible processes. All right, so we've hypothesized that there are reversible processes. Now, let's hypothesize that there exist processes that are irreversible. And we're going to set up another one of these funny thought experiments. Consider states A and B, connected by both reversible and irreversible processes. So here's state A. Here's state B. And here, I'll draw a nice smooth line. Let's say this is a very slow process. Call that reversible. And I'm going to draw a wiggly line. This is a very fast process. You're not very careful. You carry it out in a rush. I'll call that irreversible. So we have two states connected by a reversible process and an irreversible process. So now, let's go back to Clausius. Let's calculate the cyclic integral of dQ over T. This equals the integral from A to B of dQ over T along the irreversible process, plus the integral from B to A of dQ over T along the reversible process. Now, why do I do it that way? I wanted to go from A to B and back again. Can I go this way? Can I go counterclockwise? AUDIENCE: No, because the one on top is not reversible. RAFAEL JARAMILLO: Yeah, exactly. AUDIENCE: Can't go backwards. RAFAEL JARAMILLO: This is-- I told you you can't run this backwards. I said it was irreversible. So you have to believe me, because I'm in charge here. So you can't run this backwards. So if you're going to get from B to A, it can't be that way. It's got to be this way. That's right. So there we are. All right, so that's the way, and Clausius's theorem says that this is less than or equal to zero. We're running the reversible process backwards to complete the cycle. Now, we're going to use this thing-- which we used before-- that the reverse of the reversible process equals minus the forward process.
And we postulated this is the change of a state function. It's the integral of the exact integrand, the exact differential of the state function as we go from A to B. All right, so we're going to use these. We're going to combine, and we get the following. dQ over T is less than or equal to entropy-- wait, I'm sorry. It shouldn't be the cyclic integral. From A to B, dQ over T is less than or equal-- slow down, right. Third time's going to be the charm here. From A to B, dQ over T is less than or equal to the difference, entropy of B minus entropy of A, which is delta entropy. This is it. This is combining the previous three slides. We're using Clausius's theorem. We have the less than or equal from that. And it's equality for a reversible process. For a reversible process, you get the equality. For an irreversible process, you get the less than. All right, we're still just kind of playing with math. It doesn't feel like we're there yet. You're just-- I'm just showing you some things. We'll get there. All right, so here's there. This is where we're getting-- entropy maximization in an isolated system. And this is where the cosmologists get off, and they say, oh yeah, I told you this was about the universe. But again, it's really not. So in an isolated system-- for any process from A to B, dQ over T is less than or equal to the entropy of B minus the entropy of A. And already, we've come a really long way just to be able to write that down. So I want you to recognize that. Now, we're going to isolate the system-- isolate the system so that dQ equals 0. All I'm really doing here is thermally insulating it. I'm isolating the system. I'm putting some thermal insulation around it. I'm making sure that no heat can flow across the boundary. That's what I'm doing here. And so if dQ equals 0, what is this integral? AUDIENCE: It's just 0. RAFAEL JARAMILLO: 0, right. So I'm going to take that integral to be 0 and rearrange the equation. And I get this very simple and, as we'll see, very profound inequality.
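Written out, the combination described here (again with δQ for the inexact heat differential) is:

```latex
% Clausius's theorem for the cycle A -> B (irreversible) -> A (reversible):
\oint \frac{\delta Q}{T} \le 0
\quad\Longrightarrow\quad
\int_A^B \frac{\delta Q}{T}\bigg|_{\mathrm{irrev}}
  - \int_A^B \frac{\delta Q_{\mathrm{rev}}}{T} \le 0 .

% The reversible integral is the entropy change, so for any process:
\int_A^B \frac{\delta Q}{T} \le S(B) - S(A) = \Delta S ,

% and isolating the system (\delta Q = 0) gives
0 \le \Delta S .
```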
Entropy never decreases for any process in an isolated system. That's what we just found. This inequality orders the states in time. This is the emergence of the arrow of time for thermodynamics. We don't know anything about this system. We don't know a thing about it. We don't know what A and B are. We don't know what it's made of. We don't know if this is the universe or your beaker in your lab. We don't know anything about it. But we do know that no process can decrease its entropy. Because if a process goes from A to B, then the entropy of B has to be at least as large as the entropy of A. That's wacky. This is actually a form of the second law. So there are about two dozen statements of the second law. There's a website devoted to them. It's kind of funny, kind of a goofy website. But this is one way you'll see it written. This is one way you'll see it written. So, all right, so anyway, that's crazy. That's crazy. What was the equality for? When did S of B equal S of A? What kind of a process? AUDIENCE: Reversible. RAFAEL JARAMILLO: Reversible, right. So for a reversible process, reversible, S of A equals S of B. Irreversible-- irreversible was the inequality, the less than. S of A is less than S of B. So a reversible process is one for which things are always in equilibrium, and the entropies are equal. And the entropy remains the same. Any irreversible process will increase the entropy of this system. So now, we get to write another form of the second law. Let's call the combined statement the combined statement of first and second laws. All right, combined statement-- first, we write the first law. The change of energy of a system equals the heat received plus the work done on it plus the change through addition or loss of mass. So this is the first law. And I like to call this bookkeeping. The first law is not that the energy of the universe is constant.
It's that the energy of any system changes only through additions and subtractions of energy that we can keep track of, so first law, bookkeeping-- work done on a system, dW equals minus P dV. And heat received by the system for a reversible process, dQ equals T dS. This comes from two slides ago. We had dS equals dQ over T for reversible heating. So I'm just rearranging. So I'm going to combine these. I take the first law, the bookkeeping. I substitute the mechanical expression for work. I substitute this expression for heat for a reversible process, and I get the following. dU equals T dS minus P dV plus mu dN. And now, this is for a reversible process. Now, we have this-- all state functions, which sometimes is useful for calculating. So this is going to be one of the foundational, most important expressions-- equations-- that we use in 020. AUDIENCE: I have a question about the energy itself. If it's not a reversible process, then we couldn't use T dS for dQ, right? RAFAEL JARAMILLO: If it's not a reversible process, dQ is not going to be equal to this. AUDIENCE: So then how would the internal energy-- how would, like, U, for example, still be a state function if it's dependent on dQ? RAFAEL JARAMILLO: So this expression-- this expression is true always. This top expression is true in general. I think it's easier to understand because it's almost like your bank account-- what goes in, what goes out. If you track them, you'll know the difference; it's a state function. This down here is not always true, and it's the less obvious one. This is only true for a reversible process. For an irreversible process, you're going to get the wrong expression here if you plug this in. We'll actually-- AUDIENCE: I think-- RAFAEL JARAMILLO: --see that an irreversible-- sorry to interrupt you. What we'll see for an irreversible process is that dQ is always less than T dS. But at this level right now-- OK, so I want to now ask you if that made sense or I should continue?
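The assembly just described, written out:

```latex
% First law as bookkeeping (always true):
dU = \delta Q + \delta W + \mu\,dN .

% For a reversible process, substitute
\delta W = -P\,dV, \qquad \delta Q_{\mathrm{rev}} = T\,dS ,

% to obtain the combined statement of the first and second laws:
dU = T\,dS - P\,dV + \mu\,dN .
```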
AUDIENCE: OK, I think my question was, like, since we learned that state variables could only depend on other state variables, it seems like dU is dependent on dQ, which it's not. RAFAEL JARAMILLO: Right, thank you. That is confusing. There's a couple of ways of thinking about that. First of all, entropy is funny. It just is. Second of all, this is not true in general. It's only true for a reversible process. And reversible processes don't exist. They're figures of our-- figments of our imagination. They're the limits as things go arbitrarily slowly. So in that sense, it's funny. Another way to think about this, if you remember from, I think, single var-- single or multi-- is that T here, or more specifically, 1 over T here, is an integrating factor. For those of you for whom that doesn't ring a bell, just forget it. But this is an integrating factor. So you can go back to your calculus textbooks and review what that is, but anyway-- so I'm not going to convince you here because it's just profoundly weird at its core. But I hope I can make you a little bit more comfortable with using it as we go forward. This is very much connected to entropy flow across boundaries, which is sort of a-- it takes a couple of weeks for us to get comfortable with that concept. For a reversible process, entropy flows across boundaries in the form of heat. For an irreversible process, an irreversible process receives less entropy across its boundaries and generates entropy inside of itself to make up for that difference. We'll get there. We'll get there. All right, one thing I want to point out before we move on-- and I have to kind of slip this in around the margins because this is not a calculus class. Calculus is a prereq. But I want to remind you that this is a differential form. And we use a lot of differential forms in this class. This is a differential form of the general type--
dM equals A dX plus B dY plus C dZ, and so forth. We're going to use a lot of these in this class. This is a total differential. These are also total differentials-- dX, dY, dZ. And these A, B, and C are coefficients. I'm going to go slowly through this. I'm not teaching you this because it is a prereq. But I want to go slowly through this because the sooner you recognize this general form in thermo, the more successful you'll be. There are so many shortcuts to useful answers in thermo when you start recognizing this sort of pattern. So, for example, the coefficient A is also the partial derivative of M with respect to X for fixed Y and Z, and likewise for the other coefficients. We'll come back to this over and over, but it cannot be stressed enough. But it's a prereq, so I can move on. Let's finish out what I want to do here. Let's talk about equilibrium. Equilibrium is-- this is going to be a very holistic introduction to equilibrium. It's a state of rest, a state of rest. What's that going to mean for us? The state functions aren't changing. It's also a state of balance. Molecular scale changes or fluctuations average to 0. And you can think back to the baby book where there's particles in the box, and they're all zooming around. So you know that things are happening on the molecular level. But at equilibrium, any observable thing, anything you can measure on a length scale larger than the length scale of molecules, is not changing. So we want equilibrium to be like this. At t equals 0, we prepare a system-- prepare a system and its boundary conditions. And then, we're going to wait for a very long time, as long as necessary. And as t goes to infinity, equilibrium, when all macroscopic changes are finished. You know, Wikipedia has a really good entry. If you look for the Wiki entry-- the definition of thermal equilibrium in Wiki-- I kind of like it. It's got different words all meaning the same thing.
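The point that the coefficients of a total differential are partial derivatives can be checked numerically. Here is a minimal sketch (the function M and every number in it are made up for illustration, not from the lecture), using central finite differences:

```python
# Check that in dM = A dX + B dY + C dZ, the coefficients A, B, C are
# the partial derivatives of M at fixed values of the other variables.
# The function M below is an arbitrary illustration.

def M(x, y, z):
    return x**2 * y + 3.0 * z

h = 1e-6
x0, y0, z0 = 1.0, 2.0, 0.5

# Partial derivatives (dM/dX at fixed Y, Z), etc., by central differences
A = (M(x0 + h, y0, z0) - M(x0 - h, y0, z0)) / (2 * h)   # analytically 2*x0*y0 = 4
B = (M(x0, y0 + h, z0) - M(x0, y0 - h, z0)) / (2 * h)   # analytically x0**2 = 1
C = (M(x0, y0, z0 + h) - M(x0, y0, z0 - h)) / (2 * h)   # analytically 3

# Total differential: for small displacements, dM ~ A dx + B dy + C dz
dx, dy, dz = 1e-4, -2e-4, 5e-5
dM_linear = A * dx + B * dy + C * dz
dM_exact = M(x0 + dx, y0 + dy, z0 + dz) - M(x0, y0, z0)
print(abs(dM_linear - dM_exact) < 1e-7)
```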
All right, so now, one last board and then we'll wrap up. And let me just say this lecture is the most-- to say it's philosophical is a little bit too highfalutin-- but I would say it's the most abstract of this entire semester. So in case you're thinking, what on Earth is going on here? When are we going to talk about materials? I just want to share that with you. So here's the question. We're waiting. What happens while we wait? We've prepared the system at t equals 0, and then we're waiting. What's happening? Answer-- all spontaneous processes. They will happen eventually. So spontaneous means what? Something that will happen on its own. So it will happen while you wait. It's in the meaning of the word. Something spontaneous will happen while you wait. So while you wait, all spontaneous processes will happen. That means that equilibrium is a state in which all possible spontaneous processes have already happened. By the meaning of the word, they've already happened. For an isolated system, we know that any process leaves the entropy unchanged or increases it. This is a ratchet. It can click up one way. It can never go back the other way. It follows then that equilibrium is a state of max entropy. So let me just state that again. For an isolated system, equilibrium is the state of maximum entropy, given the boundary conditions. So this is also the second law. All right, so that's where I want to leave it for the day. We have plenty of time for discussion and questions. I see I've missed some things in the chat, beautiful words from Wiki. So someone's enjoying Wikipedia. Now, all right, so what we've done is we've arrived at something which maybe we would have guessed at after the baby book. Equilibrium is a state of maximum mixed-upness or maximum disorder. Except that we haven't yet introduced entropy as any measure of disorder. Entropy is this funny thing that has to do with heat flow.
And the way that we got this concept of equilibrium is through this concept of spontaneity being used up, there being no more spontaneous things possible. But we got to the same place. So where we are now is we have the first two laws of thermodynamics at a very conceptual level. There's a third law that we don't really need in 020. It's definitely less important than the first two, so we're going to skip that one. And we've done some heat engine stuff. But where we need to go now is figure out how to use these principles for situations that we care about. Because it's kind of hard to isolate a system. It's easy if you're a cosmologist, and you say my system is isolated. I don't have to worry about that. But if you're a material scientist or an engineer, you often have systems which are held at constant temperature or constant pressure, not systems which are isolated. So what we need to do next is go from this principle of the second law to a restatement, a principle of the second law that's useful for conditions of constant pressure and constant temperature. And spoiler alert, instead of a state of max entropy, what we will find is that for systems at fixed pressure and temperature, a state of minimum Gibbs free energy is the equilibrium state. So that's where we're heading. And then, for the rest of the term, it'll be Gibbs, Gibbs, and more Gibbs because that's the material scientist's thermodynamic potential of choice.
MIT_6849_Geometric_Folding_Algorithms_Fall_2012
Lecture_4_Efficient_Origami_Design.txt
PROFESSOR: All right. Let's get started. So we have a fun lecture today about efficient origami design. Last Monday, we did inefficient origami design, but it was universal. We could fold anything. And let's see, Thursday, we talked about some basic foldability, crease patterns, what makes them valid. That'll ground us a little bit today when designing some crease patterns. Although, we're going to stay fairly high level today because there are two big methods I want to talk about. One is the tree method, which has hit it pretty big in practical origami design. A lot of modern complex origami designers use it either in their head or occasionally on a computer. I demoed it quickly last time. So we are going to see in some level of detail how that works. And then, I want to talk about Origamizer, which is one of the latest techniques for designing crazy, arbitrary, three-dimensional shapes that seems to be pretty efficient. Although we don't have a formal sense in which it is efficient, it has some nice properties. And it's pretty cool, and I can also demo it. It's also freely downloadable software for Windows. Good. So just to get you motivated a little bit, I brought a bunch of examples. I'll show you more later. But this is the sort of thing you can do with the tree method. It's not going to be the tree method as I'll present it here. This is a variation on it called box pleating, which you can read about in Origami Design Secrets. And I don't think Jason will talk about that either. But it's a variation on what we'll be talking about. It lets you do crazy things like these two praying mantises, one eating the other. This is a design by Robert Lang. Fairly new. I don't have a year here, but I think it's last year or something. And that's the sort of thing you can do getting all the limbs, all the right proportions, even multiple characters, by representing your model as a stick figure. And that's what the tree method is all about, and doing that efficiently.
So this is a statement last time of the theorem. There's some catches to this. It's an algorithm. Find a folding of the smallest square possible into an origami base with the desired tree as a shadow or as a projection. So you remember this kind of picture. You want to make a lizard. You specify the lengths of each of these limbs and how they're connected together into a tree. And then, you want to build an origami model on top of that, so to speak. So that it looks something like this. And you want to find a square that folds into such a shape. This projection is exactly that tree. Now, I say it's an algorithm, and it finds the smallest square. But to do that, essentially requires exponential time. We'll prove in the next class that this problem, in general, is NP-complete. So it's really hard. But there is an exponential time algorithm, and I didn't say efficient here. It's efficient in terms of design quality or in terms of the algorithm. But you have to pick one of the two. So in TreeMaker the program, there's an efficient algorithm, which finds a reasonably good-sized square. But it's not guaranteed to be optimal. It's just a local optimum. In principle, you could spend exponential time here, so a slow algorithm, and get the smallest square. So it depends. The other catch is this folding. We're still working on proving that this does not actually self-intersect in the folded state. I checked the dates. We've been working on that for six years. But it's closing in. Maybe next year we'll have a draft of this proof. It's quite-- it's many, many pages. Good. So those are the catches. Now, let me tell you about this term uniaxial. Essentially, it just means tree shapes. But I'd like to be a little bit more formal about that. And last time, I showed you the standard origami bases. All of these are uniaxial, I think, except the pinwheel, which we folded. So the pinwheel-- so let me tell you intuitively what uniaxial means.
It means you can take all these flaps of paper and lie them, place them along a line. And the hinges between those flaps are all perpendicular to that line. So this is the axis. Whereas something like this, essentially there are four axes. The flaps are here, or two axes, I guess. But definitely not one. So these cannot be lined up along a line, even if you flap them around some other way. That's the intuitive definition. Multiaxial is not a formally defined thing. But uniaxial we can formally define. And it will capture things like this water bomb base, all the other bases there, as well as bases like this. And it's defined by Robert Lang, I think probably around '94 was the first publication. And it's just a bunch of conditions. And a bunch of them are just technical to make things work out mathematically. First thing I'd like to say is that the entire base-- base just means origami for our purposes. It's sort of a practical distinction, not a mathematical one. --is that everything lies above the floor. So the floor is z equals zero, and we'll just say everything's above that. Second property. Sort of a shadow property. If I look at where the base meets the floor, z equals zero, that's the same thing as if I look at the shadow onto the floor. This is essentially saying that this base does not have any overhang. So if it had, for example, some feature like this that hung over its shadow-- was more-- went out here. The shadow goes out here, but the base does not. That's not allowed. So I want everything-- actually I want things to get smaller as you go up in z. This is a stronger statement of property two. And then, I want to define this notion of flaps. And the basic idea is that you have faces of the crease pattern.
So the faces are just the regions we get out of the creases, all these triangles for example. I can divide them, partition them into groups which I call flaps. So for example, these two guys over here form one flap. They fold together. They're going to be manipulated together. And so in this case, I'll get four flaps. Anything I want to say here? Yeah. Each flap is going to project to a line segment. It's going to be one of the edges of the tree. So then, there's the notion of a hinge crease. And these are just creases shared by two flaps. So they're the creases that separate one flap from another. We will always require that they project to a point. So this is equivalent to saying the hinge crease is vertical. It's perpendicular to the floor. I'm always projecting straight down onto the floor orthographically, just setting z to zero. And so that's saying these are the hinges. They should be vertical. So the projection is a point. And then from those two properties, I can define a graph which I want to be a tree. So each flap I want to make an edge of my graph. And that edge is going to be the line segment that the flap projects to-- each flap projects to. And I'm going to connect those edges together at vertices when the flaps share a hinge crease. All right. That's a graph which you can define. And that graph is a tree. That's the constraint. And I think I have even more. I've got one more property. I think I actually want the projector here. Let's try that. All right. This is a bunch of formalism to state what's pretty intuitive. I want all the flaps of paper to be vertical, so they project to a line segment. When I look from the-- when I look at the projection, I can define a graph where there's an edge for each flap, where it's projecting. And I join those edges together. Here, I'm joining four of them at a vertex. Because if you unfold it, they all share hinge creases. Hinge creases in this case are the perpendicular. These four guys.
So because-- it's hard to manipulate. I've got a flap over here. A flap over here. They share a hinge, so I connect them together in the graph. It's just a formal way to make the graph correct. It may seem tedious, but this definition sidesteps some issues which would occur if you defined it in the more obvious way, which is just take the projection, call it a tree. But I don't want to get into why you need to do it this way exactly. Maybe, we'll see it at some point. Essentially, some flaps can be hidden inside others, so you need this definition for it to really work. And then, there's this extra constraint which is that my base is pointy at the leaves. Leaves are the vertices of the tree that have only one incident edge. And so I want there to be only one point that lives at the leaf. Obviously, elsewhere in the tree, there's a whole bunch of points, a whole vertical segment, that all projects to that point. Here, I just want one. That's important because I want to think about where the leaves are. And the whole idea in the tree method is to think about how to place the leaves on your piece of paper so that this folding exists. So that's what we're going to do. The tree method is kind of surprising in its simplicity. There's a bunch of details to make it work. But the idea is actually very simple. Let's suppose you want to build a uniaxial base. I'll tell you something that must be satisfied by your uniaxial base, a necessary condition. Assuming you're starting from a convex piece of paper, which is the case we usually care about. Actually, we're starting from a square, a rectangle, or something convex. Here's what has to be true. I didn't give it a name, but this graph that's supposed to be a tree, I'm going to call the shadow tree for obvious reasons. And now, I want to take two points in the shadow tree, measure their distance in a tree sense. So I have some tree like this. I have two points like, say, this point and that point.
The distance between them is the distance as measured if you had to walk in the tree, how far it is to go from here to here. And because our tree is a metric tree, because we specified all the edge lengths, we can just add up those lengths, measure them. And that's the distance between two points in the tree. That must be less than or equal to the distance between those two points on the piece of paper. What does that mean? So on a piece of paper that's convex-- so it might not be a square, but a square's an easier picture to draw. The distance between them is that. Pretty simple. So what does this mean? I'm taking this square. Somehow, I'm folding it into a base whose projection is the tree. So I look at these two points, p and q. I fold them somewhere in the 3D picture, which is not drawn up here. Those points-- so maybe there's a p up here and a q up here. I project those points down onto the floor, which is going to fall on the tree by this definition. Call that, let's say, p prime for the projected version of p, q prime. I measure the distance here. That has to be-- the distance between p prime and q prime in the tree should be less than or equal to the distance between p and q in the piece of paper, for every pair of points p and q. That's the condition. It's almost trivial to show, because when I take this segment of paper, I fold the piece of paper. But in particular, I fold p and q somehow. I can't get p and q farther away from each other because folding only makes things closer. There, I'm assuming that the piece of paper is convex. There's no way to fold and stretch pq because that's a segment of paper. It can only contract. I mean, you can fold the segment something like this. Then, the distance between p and q gets smaller than the length of this segment. Because if I took this-- this line segment of paper that got folded. If I project it onto the line here, it's only going to get shorter. So I fold p and q. They get closer in three-space.
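The tree distance being used here is mechanical to compute. A minimal sketch, with an assumed encoding (adjacency list with edge lengths) and an assumed example tree (a three-limbed unit star, not the lecture's figure):

```python
# Distance between two points (nodes) in a metric tree: the sum of
# edge lengths along the unique connecting path, found here by DFS.

def tree_distance(adj, u, v):
    """adj: {node: [(neighbor, length), ...]} describing a metric tree."""
    def dfs(node, parent, dist):
        if node == v:
            return dist
        for nbr, length in adj[node]:
            if nbr != parent:
                found = dfs(nbr, node, dist + length)
                if found is not None:
                    return found
        return None  # v is not in this subtree
    return dfs(u, None, 0.0)

# Star with center 'c' and three unit-length limbs a, b, d
adj = {
    'c': [('a', 1.0), ('b', 1.0), ('d', 1.0)],
    'a': [('c', 1.0)],
    'b': [('c', 1.0)],
    'd': [('c', 1.0)],
}
print(tree_distance(adj, 'a', 'b'))  # 2.0: any two leaves are two unit edges apart
```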
And then, I project them down to the floor. They can also only get closer when I do that. So that's essentially the proof. Do I need to spell that out? So you have the line segment on the paper. You fold it. It gets shorter. You project it onto the floor. It also gets shorter. Therefore, whatever this distance is on the tree has to be less than or equal to the distance you started with. So this may seem kind of trivial. But the surprising thing is that this is really all you need. So this is true between any two points in the shadow tree. In fact, we're going to focus on the leaves. We'll say, all right, so in particular, I've got to place this leaf, and each of these six leaves here, I have to place them somewhere on the piece of paper. I better do it so that that condition is satisfied. I have to place these two leaves on the piece of paper-- let's say this distance is one, and this distance is one. These two leaves have to be placed on the piece of paper such that their distance is at least two. And the distance between these two guys has to be at least two and between these two guys has to be at least two. And same over here. Let's say all the edge lengths are one. And the distance between, say, this leaf and this leaf has to be at least three because the distance in the tree is three. So at the very least, we should place the points on the paper so that those conditions are satisfied, and it turns out, that's enough. As long as you find a placement of the points such that those conditions are satisfied, there will be a folding where those leaves actually come from those points of paper. That's the crazy part. But this idea is actually kind of obvious in some sense. I mean, once you know it, it's really obvious. But what's surprising is that this is all you need to worry about. There's a lot of details that make that work, but you can. So let me just mention one detail, which is the scale factor.
If you fix the size, the edge lengths on the tree, which is the usual, which is one way to think about it, and you start with some square-- like if I start with a one by one square, there's no way I'm going to fold that tree. There's just not enough distance in the square. So what I'd like to do is find the smallest square that can fold into this thing. Or equivalently find-- you can think of scaling the piece of paper, or you can think of scaling the tree with a fixed piece of paper. Doesn't really matter. In general, you get this problem which I'll call scale optimization. This is the hard problem. So let's say-- just defining some variables. So P i-- I'm going to maybe number the leaves or label them somehow, various letters. And then, P i is going to be the point of paper that actually forms that leaf in the folded state. That leaf, which corresponds to a single point of paper, projects to that point. And then, my goal is to maximize some scale factor which I'll call lambda. Subject to a bunch of constraints which are just those constraints, except that I add a scale factor. So for every pair of leaves, i and j, I'm going to measure the distance between those leaves in the tree. This is a tree distance. Compare that to the distance in the piece of paper between those two points, the Euclidean distance. And instead of requiring that this is greater than or equal to this, which is the usual one, I'm going to add in the scale factor, which you can think of as shrinking this or expanding that. It doesn't matter. But I want to-- because here I'm sort of shrinking this amount. I want to maximize that factor, so I shrink it the least possible. You can formulate it this way or maybe a more intuitive way. But this is the standard set up. And this is something-- this is called a nonlinear optimization problem. It's something that lots of people think about. There are heuristics to solve it. You can solve it in exponential time.
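For a fixed placement of the leaf points, the best λ can be read off directly: it is the minimum over leaf pairs of the paper distance divided by the tree distance. A sketch under assumed data (a unit star tree, leaves placed at three corners of a unit square; the name `best_scale` and the encodings are made up for illustration):

```python
import math
from itertools import combinations

def best_scale(points, tree_dist):
    """Largest lambda with lambda * d_tree(i,j) <= ||P_i - P_j||
    for all leaf pairs, given a fixed placement `points`."""
    lam = float('inf')
    for i, j in combinations(points, 2):
        (x1, y1), (x2, y2) = points[i], points[j]
        paper = math.hypot(x2 - x1, y2 - y1)
        lam = min(lam, paper / tree_dist[frozenset((i, j))])
    return lam

# Star tree with three unit limbs: any two leaves are tree-distance 2 apart.
# Place the leaves at three corners of a unit square.
points = {'a': (0.0, 0.0), 'b': (1.0, 0.0), 'd': (0.0, 1.0)}
td = {frozenset(('a', 'b')): 2.0,
      frozenset(('a', 'd')): 2.0,
      frozenset(('b', 'd')): 2.0}
print(best_scale(points, td))  # 0.5: the unit-distance pairs are binding
```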
In general, it's NP-complete, and we'll see next class that actually this problem of origami design is NP-complete. So there's not going to be anything better than heuristics and slow algorithms. So the idea is, you solve that. Now, you have your leaves on your piece of paper somewhere. Now what? Now, you have to figure out how everything folds. That's where we get to some real combinatorics, some discrete geometry. Fun stuff. Yeah. I have one extra motivation here. Origami design is fun, but here's a puzzle you can solve, too. Which we can already see. Margulis napkin problem. The origin of this problem is not entirely clear, but I think it came from Russia originally. And the problem, the puzzle, is usually stated as follows. Prove that if you take a unit square paper-- so it has perimeter four-- then no matter how you fold it, the perimeter always gets smaller. Never bigger than four. We used a very similar thing here. We said, if you have two points, their distance can only get smaller. That's true. The Margulis napkin puzzle is not true. That's the difference. Perimeter is different from distance. And in fact, you can fold a piece of paper to make the perimeter arbitrarily large, which is pretty crazy. And this is something that Robert Lang proved a few years ago. It's sort of easy once you have the fact-- which I haven't quite written down here, but I've been saying. As long as you place your points subject to this property, there is a folding that has that shadow tree. And so the idea with the Margulis napkin problem is let's make a really spiky tree, a star. I want to fold the smallest square possible, so that the projection is this thing. Let's say that it has-- I won't say how many limbs it has. But the idea is, if you're using paper efficiently, in fact, the folding will be very narrow. It'll be a pretty efficient use of paper, hopefully. And so the actual 3D state will just be a little bit taller than that tree. And then, you just squash it.
And the idea is that then the perimeter is really big. You've got a-- the perimeter as you walk around the edges of that tree. So how big a tree can I get? I'd like to somehow place these leaves-- now, what's the constraint on the leaves? Let's say all of these are length one. Then, this says that every pair of leaves must be at distance at least two from each other. So I've got to place these dots in the square so that every pair has distance at least two. This is like saying-- here's my square. --I'd like to place dots so their distance is at least two. That's like saying, if I drew a unit disk around two points-- I've got to remember. You should always draw the disk first and then the center. Much easier. Those disks should not be overlapping. If this is length one, and this is length one, the disks will be overlapping if and only if this distance is smaller than two. I want it always to be greater than or equal to two. So I just have to place a whole bunch of disks in the square so that they're not overlapping. So how big a square do I need to do that? This is a well-studied problem; this is the disk packing problem. A lot of results known about it. It's quite difficult. But we don't need to be super smart here to get a good bound. Let's put a point-- let's put points along a grid. I'm going to regret making such a big grid. Let's say, an n by n grid. And I'm going to set the size of my disks right so that these guys just barely touch. This is actually not a terribly good packing. You should do a triangular grid instead of a square grid. But it'll be good enough asymptotically. You get the idea. If I set the size of my paper to be n by n, I can fit about n squared unit disks in there. An n by n paper holds something like (n plus 1) squared. But let's just say, approximately n squared disks. So that means I can make a star with about n squared limbs. It's insane. It's like super efficient. Each of these little portions of paper ends up being one of these segments.
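The counting in this construction can be sanity-checked with arithmetic. A sketch using the lecture's rough constants (grid spacing 2 so unit disks are disjoint; the assumption that each unit limb contributes about 2 to the folded perimeter is mine, for illustration):

```python
# Perimeter growth in the Margulis napkin construction: place leaf
# points on an n-by-n grid with spacing 2, so unit disks around them
# are disjoint and every pair of leaves is >= 2 apart (the tree
# distance between leaves of a star with unit limbs).

def napkin(n):
    side = 2 * n                 # paper side for an n-by-n grid, spacing 2
    limbs = n * n                # one unit limb per grid point
    paper_perimeter = 4 * side
    star_perimeter = 2 * limbs   # each unit limb contributes ~2 to the boundary
    return paper_perimeter, star_perimeter

for n in (5, 50, 500):
    p, s = napkin(n)
    print(n, p, s, s / p)  # the ratio grows like n/4: unbounded
```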
That's the claim: you could fold that. So once you fold this thing, I have an n by n square. You started with perimeter about 4n. And now, I have perimeter about n squared. That's huge with respect to 4n. So this is much bigger than 4n, for n sufficiently large. AUDIENCE: [INAUDIBLE] about the length flaps. PROFESSOR: Here, I was assuming all the flaps are length one. So the disks are size one, and so it's an n by n square. Clear? So this is more motivation for why this theorem is interesting. It lets you solve this fun math puzzle and show that not only does the perimeter not have to go down, but it can go arbitrarily high. It just takes a lot of folding. So let's say something about how we prove that once you have a valid placement of the points, you can actually fill in the creases and find the folding. Let me bring up an example. So this is actually the example I keep using, where you want to make a lizard or some generic four-legged tail-and-head kind of creature. This is the output from TreeMaker, complete with crease pattern and everything. But here, I've labeled all the-- or actually Robert Lang, I think, has labeled-- this is a figure from our book-- all the vertices of the tree and the shadow. And then, we're labeling where they come from on the piece of paper. So in particular, you see something like a leaf h. And it comes from this one point on the paper. This leaf d comes from this point, and g comes from that point. It's actually kind of similarly oriented to this guy. The interior vertices come from several points. It's a little messy. But one of the things is to try to locate where those points ought to be. So there's this idea of an active path, which is a path in the tree between two leaves. I'll call them shadow leaves to say that they're in the shadow tree. And the length of that path equals the distance in the paper.
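To see the asymptotics concretely, here is a rough back-of-envelope in code. The constants are loose and illustrative only; what matters is that each of the roughly n squared unit flaps contributes about 2 to the folded star's perimeter, while the starting square's perimeter is linear in n:

```python
def napkin_perimeters(n):
    """Rough model of the spiky-star construction: an n-by-n grid
    of leaf points, each growing a flap of length about 1.  The
    starting perimeter is linear in n; the folded star's perimeter
    is about 2 per limb, hence quadratic in n."""
    start = 4 * n            # perimeter of the (roughly) n-by-n square
    folded = 2 * n * n       # ~n^2 limbs, each contributing ~2
    return start, folded

start, folded = napkin_perimeters(100)  # 400 vs. 20000
```

So for large n the folded perimeter dwarfs the starting perimeter 4n, which is exactly the Margulis napkin claim being refuted.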
So in the case of making a star graph, this is exactly when the disks kiss, when they just touch each other on the boundary. So in other words, we have this inequality saying the distance in the paper should be greater than or equal to the distance in the tree. If that inequality is actually an equality, if they're the same thing, then it's kind of critical. I can't get those points any closer in the paper. Those things I call active paths. And that is some of the lines up here. I guess the black dashed line-- actually, a lot of the dashed lines. All of the dashed lines, I think. So for example, d to h, that's a distance between two leaves. And if you measure the distance here, it's two. And this example has been set up so that this is exactly two. So this is tight. I can't move h any closer to d or vice versa. And also from h to a-- a is actually in the middle of the paper and corresponds to that flap. All of those green-- actually, it's just the green dashed lines-- are active. They're kind of critical. And what's nice is that subdivides my piece of paper into a bunch of smaller shapes. So I have a little triangle out here. That turns out to be junk. We're not going to need it because it's sort of outside the diagram. You can fold it underneath. Get rid of it. You've got a quadrilateral here between the green lines. We've got a triangle up here, a triangle at the top, a triangle on the left. All we need to do is fill in those little parts. Fill in that triangle. Fill in that quadrilateral. Of course, in general, there might not be any active paths, and we haven't simplified the diagram at all. But if there are no active paths, you're really probably not very efficient. That means none of these constraints are tight. That means you could increase the scale factor lambda and make a better model. You can increase lambda at least a little bit. If all of these are strictly greater, you can increase lambda until one of them becomes equal.
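This tightness test is easy to state in code. A hedged sketch (names are hypothetical): given a placement, the tree distances, and the scale factor, an active path is a leaf pair where the inequality holds with equality, up to floating-point tolerance:

```python
import itertools
import math

def active_paths(placements, tree_dist, lam, tol=1e-9):
    """Leaf pairs whose paper distance equals lam times their
    shadow-tree distance (within tol) -- the active paths."""
    active = []
    for i, j in itertools.combinations(sorted(placements), 2):
        paper = math.dist(placements[i], placements[j])
        if abs(paper - lam * tree_dist[(i, j)]) <= tol:
            active.append((i, j))
    return active

# Hypothetical 3-limb star (every leaf pair at tree distance 2)
# placed at three corners of the unit square, scale factor 0.5:
placements = {"a": (0.0, 0.0), "b": (1.0, 0.0), "c": (0.0, 1.0)}
tree_dist = {("a", "b"): 2.0, ("a", "c"): 2.0, ("b", "c"): 2.0}
tight = active_paths(placements, tree_dist, 0.5)
# ("a","b") and ("a","c") are tight, so lambda cannot be
# increased; ("b","c") is slack (sqrt(2) > 1).
```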
So you should have at least one active path. And in fact, if you're efficient, you should have lots of active paths. I don't think I need to be too formal about that. But it's true. And here's one thing you can show about active paths. So what would be really nice-- in this example, I have triangles and quadrilaterals. In general, I'm going to have a whole bunch of different shapes. Some of them could even be non-convex, which would be annoying. I would really just like to deal with triangles because I like triangles-- I'm a geometer. And triangles are simple. And it looks like the crease pattern in a triangle is pretty simple. In fact, it's just angular bisectors of the triangle plus a few extra perpendicular folds. So that would be kind of nice if I could get everything to be triangles. To do that, I need lots of active paths. So how can I guarantee that there's lots of active paths? I'm going to wave my hands a little bit about how this is done. But the idea is to augment the tree. So I have some tree that I actually want to make, like the lizard. And I'm going to add some extra stuff. Like maybe I'll add a branch here and a branch here or whatever. Whatever it takes. I've got to do so carefully. So let me say what that means. So I'm going to add extra leaves to the shadow tree. My goal is to make the active paths triangulate the paper without changing the scale factor. So this is kind of a cheat. And most of the time, you don't actually need this cheat. But for proving things, it makes life a little easier. So we want to show that it's enough to place the leaves subject to this constraint. So ideally, we make our tree. But if we make an even more complicated tree, like with these extra little limbs, we can get rid of them at the end. You just fold them over and collapse this flap against an adjacent flap. So if we make our life harder, that's OK, too. If we can fold a more complicated tree, in particular we've folded the tree we wanted.
If we can do that without changing the scale factor, then great. Then, we did what we wanted to do. We folded our piece of paper with the desired scale factor. In reality, we're actually going to move the leaves around a little bit if we have to. We're going to move around the leaves that you already placed in order to make room for the new leaves. But here's the idea. We have these leaves. There's some active paths, these green lines. And we have this quadrilateral in the center. We'd really like to subdivide it. This black line is kind of asking for it. It would be really nice if we could just add in an active path there. And you can do it. Let's see if I can identify what we're talking about here. So a fun thing about active paths: you look at two leaves like g and d here, which correspond to this path g d here. Because it's active, you know this length is exactly the length traced right here. So that means this segment has to be folded right along the tree here. You know that this segment is that. And so in particular, you know where c is on that segment. C actually comes from multiple points in this diagram. But you know that this point right here must fold to c. And you know this point must fold here, and so on. These guys correspond. So that's good. So if I look at this quadrilateral, it corresponds to g to c to d to c to h to c to b to a back to b back to c. And so my guess is, if you add a little limb in here-- I think I can draw on this. That would be nice. Is this going to work? Yes. It's kind of white, but there we go. So great. Draw a fun diagram here. This is how I make my lecture notes, if you're curious. This is a tablet PC. Now, I've got some-- tell me if I make a mistake, those who know what I'm drawing. What the hell is this? I think it goes there. There. There. I'll explain what I'm drawing once I've drawn it. It's easier. Something like that.
This is a bunch of disks and a bunch of other things-- there's only one here-- called rivers. And this is a geometric way to think about the constraints. If you look at this structure-- so I have a disk down here corresponding to d. I have a disk corresponding to h, a disk corresponding to g, a river corresponding to the segment b c. The reason I only have one river is there's only one interior edge in this tree. Everything else is a leaf edge. So leaf edges are going to be disks. All non-leaf edges are going to be rivers. And the structure, the way that those things connect to each other, is the same as the structure in this tree. So you've got the three disks down here, which correspond to these leaf edges. They all touch a common river because all of those edges are incident to that edge in the center. And there's three disks on the top that correspond to the three leaf edges up here. This is really just the same thing. It's saying that if you want to look, say, at the distance between h and a here, the distance between h and a should be length three. And those three lengths are represented by the size of this disk, followed by the width of this river, followed by the size of the a disk. It's saying exactly the same constraints, just represented geometrically. Now, if I'm lucky, these regions actually kiss-- they touch at points. That's when things are active. And you could draw straight across from a to h and never go in these outside regions. If you're not lucky, they won't touch. If they don't touch, make them touch. That's all I want to do. And so I just want to blow up these regions, make them larger, for example, until things touch. When they touch enough, if you do it right, you can actually get them to triangulate. That's my very hand-wavy argument. It's proved formally in the book, and it's a little bit technical. So I think I will move on and tell you what to do with triangles. So suppose you have some triangle. And each of these edges is an active path.
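The disks-and-rivers picture encodes plain path lengths in the shadow tree. As a small sketch (the tree below is a hypothetical six-leaf example shaped like the one on the board, not Lang's actual lizard data), a depth-first search recovers the leaf-to-leaf distance, which equals disk radius + river width + disk radius whenever the path crosses the one interior edge:

```python
def tree_path_length(adj, u, v):
    """Sum of edge lengths along the unique u-v path in a tree,
    found by depth-first search over an adjacency dict."""
    stack = [(u, None, 0.0)]
    while stack:
        node, parent, dist = stack.pop()
        if node == v:
            return dist
        for nbr, w in adj.get(node, []):
            if nbr != parent:
                stack.append((nbr, node, dist + w))
    raise ValueError("no path")

# Hypothetical tree: three unit leaf edges hang off interior vertex
# b, three off interior vertex c, and the single interior edge b-c
# is the one 'river'.
adj = {
    "b": [("d", 1.0), ("g", 1.0), ("h", 1.0), ("c", 1.0)],
    "c": [("b", 1.0), ("p", 1.0), ("q", 1.0), ("r", 1.0)],
    "d": [("b", 1.0)], "g": [("b", 1.0)], "h": [("b", 1.0)],
    "p": [("c", 1.0)], "q": [("c", 1.0)], "r": [("c", 1.0)],
}
d_hp = tree_path_length(adj, "h", "p")  # disk + river + disk = 3
d_dg = tree_path_length(adj, "d", "g")  # disk + disk = 2
```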
So there's some leaf here. We'll call them a, b, and c. And this segment, we know, will map right along the floor to make up that active path in the tree. Like I said, we're going to fold along angular bisectors. You may know the angular bisectors of a triangle meet at a single point. And then, we're going to make some perpendicular folds like that. Where do the perpendicular folds go? Well, they go wherever there's a shadow vertex along this segment. Remember, this edge b c corresponds to some path between b and c in the tree, which looks like whatever. And so for each of these branching points that we visit along that path, we can just measure. As we move along here, we get to some vertex, then another vertex, then another vertex, then c-- except I did it backwards. And so for each of these guys, I know that I need to be able to articulate there. I need a hinge crease. And so I just put in a hinge crease perpendicular to the floor, essentially, because we know this is mapping to the floor. And conveniently, those will all line up. So if I have some vertex here-- let's call it d-- d will be here. But d will also be here. Because if I follow the path from b to a-- a is some other guy, maybe this one-- I also have to go through d. And so these things will conveniently line up perfectly. I'm not going to prove that again. But it's true. And you just get this really nice, simple-to-fold thing. I'll fold one if you haven't seen it. This is a standard rabbit ear molecule in origami. You have a little triangle. You want to make it an ear. You squeeze along the angular bisectors, and it makes a cute rabbit ear. And you can see it also, the crease pattern, in here, like in this triangle in the upper right. You've got the red lines, which are the angular bisectors. And then, you've got all those perpendicular folds. And they go exactly where those letters go. And the triangle at the top is similar.
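The single point where the three angle bisectors meet is the triangle's incenter, and that is where the rabbit-ear creases come together. A quick sketch of computing it (this is the standard incenter formula, nothing TreeMaker-specific): the incenter is the average of the vertices weighted by the opposite side lengths:

```python
import math

def incenter(A, B, C):
    """Incenter of triangle ABC: the meeting point of the three
    angle bisectors, i.e. the side-length-weighted average of
    the vertices."""
    a = math.dist(B, C)  # side opposite A
    b = math.dist(C, A)  # side opposite B
    c = math.dist(A, B)  # side opposite C
    s = a + b + c
    return ((a * A[0] + b * B[0] + c * C[0]) / s,
            (a * A[1] + b * B[1] + c * C[1]) / s)

# 3-4-5 right triangle: the inradius is (3 + 4 - 5) / 2 = 1,
# so the incenter sits at (1, 1).
p = incenter((0.0, 0.0), (4.0, 0.0), (0.0, 3.0))
```

The perpendicular hinge creases of the molecule then drop from this bisector structure down to the triangle's sides at the measured shadow-vertex positions.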
It's a little different because the very top edge of the paper is not actually active. So there's really a special case there. The upper right is also not active. Oh, that's annoying. Yeah. There's a little bit of extra stuff that happens at the boundary of the paper, where you don't have active paths. But as you can see from the crease pattern, it's basically the same. In fact, I could call it the same. It's a little bit less pretty because this is not green. And so you don't actually know that c is here. And you don't know that b is there. But you know about all the other edges. There's just one edge you might not know about. And so you can figure out what the right edge is based on the other two edges of the triangle. That's just a feature. You can triangulate everything except the boundary. You may not be able to get active paths in this step. That kind of does the tree method in a super abbreviated version. I showed you a demo last time, just in case you forgot. You draw your favorite tree. See if I can get it to do the same one. And you optimize, generate a crease pattern. Oh, it's a different one. Fun. There it is. And here, TreeMaker knows how to draw the disks. It doesn't currently know how to draw the rivers because it's kind of tricky to make a snakey path in a computer program. But you see the three disks down here, the three disks up there, and you can imagine the one river in the middle representing the central segment of your tree. And one of the problems on problem set one, which is released, is to just make something using TreeMaker. I would encourage you to start simple unless you know what you're doing. You don't have to use the program. You could do it by hand, placing disks. That's how most origamists actually do it. I'm sure Jason will do it that way. You can use the program, print out a crease pattern, see what it looks like. Next thing.
If you want to do this in reality-- what TreeMaker is doing is not this triangulation. Doing a triangulation is a bit of a pain, though you could keep modifying your tree until it triangulates. The alternative is you just deal with polygons that are bigger than triangles. And there's this thing called the universal molecule, by Robert Lang. Here it is for a quadrilateral. And this works for any convex polygon. Now sometimes, your active paths don't decompose your shape into convex polygons, and this still doesn't work. You still have to do something there. You need to add some extra leaf edges to the tree to just fill things up. But you don't have to go all the way to the point of triangulation. You can stop at the point-- which happens most of the time-- when all of the faces are convex. And then, it's a slightly more general picture of what happens. Intuitively, what you want to do is-- this is the tree you want to make among those leaves. All the boundary edges here are active paths. You have g d h a. Those are active paths, so you know where all of those branching points are in the middle. You'd like to build that. And so what we're going to do is build it bottom up in the literal sense, from z equals zero, increasing z. And what that corresponds to in this picture is shrinking or offsetting these edges inward. So you offset these all by the same amount. That's like traveling up over here. So you see the red lines here correspond to the red cross sections. So I just see what happens in cross section as I shrink things in. And the first thing that happens, at this first critical red drawing, is that the path from d to a becomes critical, becomes active. Before, it was inactive-- that was kind of annoying for me. I wanted it to be triangulated, but it wasn't. The distance from a to d in the piece of paper was bigger than the distance between the leaves in the tree. I wanted them to be equal.
Well, it turns out, if you shrink this thing, eventually they might become equal. And that's what happens. And that's what TreeMaker computes, and what you should do if you're building the universal molecule. If you discover, oh, now a d is active, now I subdivide into two triangles. And then, I do the thing in the two triangles. And generally, you start with some convex polygon. You shrink it. At some point, some diagonal might become active. You split it into two and just keep going in the two pieces. And there's one other thing which can happen, which is what's happening at the end of a triangle. You shrink. And then, two vertices could actually collide with each other. And then, you just think of them as one vertex and keep shrinking. So that's the general universal molecule construction. These are the cross sections from above. You see that as you go up, things are getting smaller. That is one of the properties of a uniaxial base: as you go up, cross sections get tinier. And that gives you the crease pattern. If you follow along where the vertices go during this process, and you draw in all the active paths that you create along the way, that's your crease pattern. So that's how you do it more practically: you use the universal molecule. But to prove it, you don't actually need that. All right. I have now some more real examples by Robert Lang and by Jason Ku. So here is a Roosevelt elk. And Robert is all about getting very realistic form. So all of the branching measurements-- I'm sure if you knew a lot about elks, you could recognize this as a Roosevelt elk, not some other elk. And you can achieve that level of detail and realism using the tree method because you can control all of the relative lengths of those segments and get perfect branching structure and get the right proportions for the legs and tail and so on. And you can go to Robert Lang's webpage, langorigami.com, and print this out.
And try it out if you want. This will fold not this, but the base for that model. And you can see the disks. And you can see some approximation of the rivers here. But they're not quite drawn in in this particular diagram. But a lot of detail. And if you look carefully, you can really read off what the tree is here. You can see how these things are separated, and it will correspond to the branching structure over there. Here's a more complicated one, Scorpion varileg, which you can also fold at life size if you're really crazy. And you can also see from these kinds of diagrams that paper usage is super efficient in these designs. And presumably that's how Robert designed them. The only paper we're wasting, in some sense, is the little regions between the disks and the rivers, which is quite small. Most of the paper is getting absorbed into the flaps. Here's one of the first models by Jason Ku that I saw, the Nazgul from Lord of the Rings. Pretty complicated. So here, the bold lines show you essentially where the disks and the rivers are that have been-- AUDIENCE: Those are actually the hinge creases. PROFESSOR: Oh, those are hinge creases. Yeah. Good. And the top is the actual crease pattern. And it's pretty awesome. You've got a horse and rider out of one square of paper. Here's a shrimp. It's super complicated and super realistic. It looks very shrimpy. I know some people who are freaked out by shrimp. And so this should really elicit that similar response. Or other people get really hungry at this point, I guess. But you can see the tree is pretty dense here, lots of little features getting that branching right. And one last example is this butterfly, which is pretty awesome in its realism. And I guess the tree is a lot simpler here. But there's a lot of extra creases here, you see, just for getting the flaps nice and narrow. So in general, these kinds of constructions will make this guy rather pointy and tall. And you can just squash it back.
It's called a sink fold, and it makes it tinier, like-- you have something like this. The flaps you think are too tall. You just fold here. Which, if you look at the crease pattern, makes just an offset version of the original. And hey, now your flaps are half as tall. And if you're a proper origamist-- I shouldn't do this live-- you change the mountain-valley assignment a little bit, and you sink everything on the inside instead of just folding it over. It's not going to look super pretty. But same tree structure, just the flaps are half as tall. So that's all this pleating here. And I think that's it for my little tour. And Jason Ku is next. Next Monday, we'll be talking more about the artistic side, history of origami design, and what it takes to really make something real by these approaches. That should be lots of fun. I want to move on to other kinds of efficient origami design. Less directly applicable to real origami design, so to speak, at least currently. But mathematically more powerful. Uniaxial bases are nice, but they're not everything you might want to fold. So what if we want to fold other stuff? And to a geometer, the most natural version of folding anything is a polyhedron. You have a bunch of polygons, flat panels in 3D, somehow joined together to make some surface. How do I fold that? And let's start with a super simple example, which is, I want to fold a square into a cube. How big a square do I need to fold a unit cube? Or how big a cube can I fold from a unit square? Either way. And I'm going to make it a one by one square. And I'm going to fold it into a cube of dimension x-- it looks funny, the x cube. That's my motivation. So one thing we can think about is what makes the corners of the cube and how far away they should be. So if I want to fold this cube, I look at, let's say, opposite corners of the cube. They're pretty far away on the cube.
And I know that by folding, I can only make distances smaller. So somehow, if I measure the shortest path on the cube, from this point to this point-- when you unfold this thing, it should be flat. If it unfolds to just those two squares, it's a straight line between the two. And so that goes to the midpoint of this edge and then over there. And you measure that length. And oh, trigonometry. Root five-- that's not what I wanted. So we have x here, 2x here. So this distance is-- yeah, I see. Why is that different from what I have written down? Because that was not the diameter of the cube. I see. AUDIENCE: You want them equidistant. PROFESSOR: No. I do want this, but I think if I go from the center of this square-- this is hard to draw-- to the center of the back square, which is back here, that distance is going to be wrapping around. Which is just going to be like 2x. Is that bigger or smaller than root 5 x? AUDIENCE: It's smaller. PROFESSOR: Smaller. Interesting. One, two, three, four. What did I do wrong? Oh, I see. OK. Here's a fun fact. This is actually the smallest antipodal distance. Let me get this right. So if you take some point on the cube, and you look at the point farthest away from it on the other side of the cube, it will always be at least 2x away. So here, it's bigger. This is probably the diameter. It's bigger than 2x, but it will always be at least 2x away. This is actually the smallest situation you can get. And so I want to think about the point that corresponds to the center of the square. Right? Yes. Now maybe that maps to the center like this. And the antipodal point is 2x away, or maybe it's bigger. But at least I know that this length is greater than or equal to 2x, because the antipodal point has to be made from that. I need to think about all situations because I really want to think about the center of the square. Once that is at least 2x, then I know that the side of the square is at least 2 root 2 x. Yes.
And so I know that this is one. And you work it out. And x is root 2 over 4. Or it's at most that. And so that gives you some bound on what it takes. So this is actually really the only technique we know to prove lower bounds on how big a square you need to make something. It's this kind of distance-decreasing argument. And it turns out you can actually achieve x equals this. So this is what I call a lower bound. It says you can't do any better than this. But there's also a matching upper bound which achieves this-- and I'm not going to draw it perfectly. So there are the six sides of the cube. You've got one, two, three, four, five. And the sixth one is split into quarters. And you can see, you just actually fold here, here, here, and here to get rid of that excess. And it will come together as a cube. You also fold along the edges of the cube. And it perfectly achieves this property: from the center of the paper, you have exactly the distance 2x to the antipodal point, which is the center of the opposite face. Question? AUDIENCE: Sorry. Can you explain where the 2x came from [INAUDIBLE]? PROFESSOR: I waved my hands. So I'm thinking about an arbitrary point on the surface of the cube. Here, it should be clear it's 2x. There's x right here. And there's 1/2 x here. And there's 1/2 x on the back. And I looked at another situation, which is when it was at a corner-- that's bigger than 2x. And I claim if you interpolate in the middle, you'll get something in between 2x and root 5 x. For example, if you take a point here that's closer to the corner, then that point-- you should probably also think about the edge case. But you check all of them, and they're at least 2x. That's what I'm claiming. So I didn't really prove that formally. But the claim is 2x is the smallest antipodal distance you can get. AUDIENCE: What does antipodal mean? PROFESSOR: Antipodal simply means on the other side.
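The bound itself is a couple of lines of arithmetic. This just replays the lecture's argument numerically; it is not a general-purpose tool:

```python
import math

# Lower-bound argument: the center of the unit square maps
# somewhere on the cube, and its antipodal point on the surface is
# at least 2x away.  Folding never increases distances, so the
# farthest point of the square from its center -- a corner, at
# distance sqrt(2)/2 -- must already be at least 2x away:
#     sqrt(2)/2 >= 2x   =>   x <= sqrt(2)/4
half_diag = math.dist((0.5, 0.5), (1.0, 1.0))  # sqrt(2)/2
x_max = half_diag / 2                          # sqrt(2)/4, about 0.3536
```

The Catalano-Johnson and Loeb folding mentioned next shows this value of x is actually achievable, so the bound is tight.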
The antipode-- the opposite pole, like from North Pole to South Pole. AUDIENCE: [INAUDIBLE]. PROFESSOR: Right now, we know we're taking whatever point is the center of the square. It maps somewhere on the cube. I take the antipode from there. I know that has to be at least 2x away. And if you look at the distance map here, the farthest-away point in the square from the center is the corner point. So I know that that distance can only get smaller. And other distances only get smaller. So if I have to make a 2x distance from there, this is my best chance for doing it. And that gives you a lower bound. It doesn't mean it can be achieved. But this shows you that you can achieve it. This is a result by Catalano-Johnson and Loeb in 2001. It's like the only optimality result we have for folding 3D shapes. That's why I mention it. Tons of fun open problems. Like, you don't want to make a cube-- maybe you want to make a triangle. If you want to cover a triangle on both sides, that's probably open. If you want to make a regular tetrahedron, that's probably open. Pretty much any problem you pose here is open. It would make a fun project. You can also think about, instead of starting from a square, you start with a rectangle of some given aspect ratio. What's the biggest cube you can make? That's kind of fun because in the limit, for a super long rectangle, you should do strip wrapping. For a square, we have the right answer. What's the right answer in between? Who knows. The next thing I wanted to talk about, where there's been some recent progress, is checkerboard folding. In lecture one, I showed you this model, which I never go anywhere without: the four by four checkerboard folded from one square of paper, white on one side and red on the other. And so I think this is probably the most efficient way to fold a four by four checkerboard. You start with a square of one size, and you shrink both dimensions by two. And you get a four by four checkerboard.
But we don't know if this is the best way to fold a checkerboard. It would be nice to know. And this has been studied for a while. And this is not the standard method for folding a checkerboard. But it's actually pretty efficient, which is kind of crazy. So you take a square, white on one side and brown on the other. You do this accordion pleat. You get a bunch of nice color reversals, a bunch of squares. And then, you just need to make a square of squares from that. So the general problem is, I want to fold an n by n checkerboard from the smallest possible square. How big does it have to be as a function of n? And the standard approach is-- well, this is the first method that does it for all n in a general, simple way. But the practical foldings people have designed, like the four by four, and there are a bunch of eight by eights out there, are a little more efficient than this. But they have the same asymptotics, which is that the perimeter of the square you start with has to be about twice n squared to make an n by n checkerboard. And the reason is, if you look at the checkerboard pattern, we're trying to get color reversals along all of these lines between the brown and white. And the way we're doing that here is we're taking the boundary of the paper, and we're mapping it along all the color reversals. And if you work out how much color reversal there is in an n by n thing, it's about twice n squared. And so your perimeter has to be at least that large if you're going to cover the color reversals with perimeter. And for a long time, we thought that was the best we could do: to cover color reversals with the perimeter of the paper. Of course, now we know that you can take a square of paper and make the perimeter arbitrarily large. So this was never really a lower bound. We never really knew that you needed 2n squared. The four by four achieves 2n squared.
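The "about twice n squared" count is easy to verify: in an n-by-n checkerboard every interior grid edge separates two opposite-colored unit squares, and there are n(n-1) horizontal plus n(n-1) vertical interior edges. A quick check:

```python
def color_reversal_length(n):
    """Total length of edges between opposite-colored unit squares
    in an n-by-n checkerboard: n*(n-1) horizontal interior edges
    plus n*(n-1) vertical ones, i.e. 2*n*(n-1), about 2*n^2."""
    return 2 * n * (n - 1)

# Covering all of this with the paper's boundary forces perimeter
# roughly 2*n^2; the method described next only visits each square
# once, so it gets away with perimeter roughly n^2.
four_by_four = color_reversal_length(4)    # 24
eight_by_eight = color_reversal_length(8)  # 112
```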
We think it's the best for four by four, but we proved last year-- this is with Marty and [INAUDIBLE] and Robert Lang-- that you can do better and get perimeter about n squared. Now, there are some lower-order terms there, the order n parts. So this is really only practical for large n. I'll elaborate on that a little more in a moment. But here's the idea. Instead of visiting all the boundaries between red and white squares, I just want to visit the squares themselves. So if I could fold a, in this case, rectangular paper into this shape, which has slits down the sides, and it has these flaps hanging out-- now, you've seen how to make flaps super efficiently. You really don't need to shrink the paper by very much to make this pattern. Then, you take these guys-- and everything is white side up. You take these flaps, fold them over. They become brown. And these guys fold over. These fold down. These guys fold up. You can actually make any two-color pixel pattern from this idea. And it will make white squares on top of the brown surface that you folded. So this is the starting point. You just fold everything over. And you get your checkerboard. And now, essentially, you're visiting each square only once instead of the boundary edges of all the squares. And so you end up using only n squared instead of twice n squared. And you can do it if you start from a square also. You just need more flaps. And there's a bunch of tabs sticking up here, and a bunch of tabs sticking up there. You can fold this again super efficiently, using all these standard techniques. And then, you make a checkerboard twice as efficient for large n as we previously thought possible. Now, we still don't know whether this is optimal. We think it is. But we thought so before also. Who knows? So a big open problem is [INAUDIBLE] for anything. In terms of actual values of n, for n bigger than 16, this method is better than the standard approach.
Although if you look just at seamless foldings-- seamless, I didn't mention, but we're going to talk about it more in a moment. When I make a square of a checkerboard, I'd really like this to be a single panel of paper, not divided into little panels. And like in this checkerboard, this white square has a bunch of seams on it. It's made out of three smaller triangles. And that's not so nice. This method is seamless. You get whole panels making each of your squares, so it looks a little prettier. If you look at the best eight by eight seamless folding, this beats the best seamless eight by eight folding. Although it's rather difficult to fold. It hasn't yet been folded. That would be a good project also: build an actual checkerboard with this method. Questions? Now, I want to move to the general case. So I talked a little bit about checkerboards and about cubes. Let's think about arbitrary polyhedra. And this is Origamizer. So Origamizer is actually two things. It's a computer program for Windows that you can download, and it's an algorithm. And they're not quite the same. So there's the original computer program, and Tomohiro Tachi wrote a couple papers about it. That program does not always work. It doesn't make every polyhedron. It needs some finesse to make it work, but it's super efficient. And it's practical. He's made lots of models with it, like the bunny you've seen on the poster. Then there's the algorithm, which we developed together. And it's similar. And we know it always works. But it's a little bit less practical. So theory's always a little behind practice, let's say. So there's a practical thing here. There's also a theoretically guaranteed thing here. They're not quite the same, but they're very similar. I'll show you both, basically. But the idea is a practical algorithm to fold any polyhedron. And practical here is a bit vague. That's not a theorem. We don't know how to define practical in mathematics anyway.
It has some fun features, though, mathematically. One is that it's seamless. So for it to be seamless, I need to assume convex faces. So faces are the sides of the polyhedron. So like in a cube, every face is a square. Those are convex. And provided all the faces are convex-- if they're not, you have to cut them up into convex pieces-- my folding will be seamless in that each face will be covered by an entire piece of paper. There may be other things hidden underneath. But there won't be any visible seams on the top side. So that's nice. It's also watertight. And for this, I have an illustration. This is a feature missed by the strip method. And if you've always felt like the strip method of making anything is cheating, here's a nice formal sense in which it's cheating. We didn't realize it until we started talking about Origamizer, which does not have this cheating sense. So here I'm trying to make a 3D surface, which looks like a saddle surface, by a strip. If I just visited the guys in this nice zigzag order, which I know is possible, I get all these slits down the sides. This thing would not hold water. If you poured water on it, it would fall through all the cracks. And if I fold it right, like in this picture, there should be no seams in here. The boundary of the square is what's drawn in red. So here the boundary of your piece of paper gets mapped all over the place. So there are lots of holes. Here, I want the boundary of the paper to be the same as the boundary of the surface. So the only place for the water to run off is at the edge. I mean, obviously, this thing is not a closed solid. But if you actually made a cube, you're still going to get some edge because the boundary of the paper has to go somewhere. But if you then sewed up the edge, it would totally hold water. So that is the informal version of watertight. The formal version is that the boundary of the paper maps within some tiny distance epsilon of the boundary of the surface, the boundary of the polyhedron.
And here, when I say polyhedron, I really mean something that's topologically a disk. Brief topology. This is a disk. This is a disk. This is not a disk. A cube is not a disk. It's a sphere. This is a disk. A piece of paper is a disk. So really the only things you could fold topologically, in a pure sense, in a watertight sense, are disks. You can't glue things together. That's not in the rules. So I can't make-- I could fold this. But I'd have to have an extra seam somewhere in order to make this thing just be a disk. I could fold a cube. But I have to have some seam somewhere. Here-- the top-- a square gets cut into four pieces in order to make it into a disk. Any higher topology can be cut down and made a disk. So this is still universal. But in terms of the watertightness, you have to think about the disk version. AUDIENCE: Erik? PROFESSOR: Yes. AUDIENCE: Even when you go back and forth with the strip method, you could argue that that was topologically a disk. PROFESSOR: Oh, yeah. Anything, any folding you make is still topologically a disk. This is making a disk. But it doesn't preserve the boundary. AUDIENCE: It what? PROFESSOR: It's not preserving the boundary. So yeah. Any origami, you're still disk-like, and you're watertight for some disk surface. But I want to make this disk surface with that boundary. And watertightness is supposed to match the given boundary. But that boundary must form a disk. That's the point. I can't say, oh, there's no boundary in a cube, so the boundary of the paper goes nowhere. That's not allowed. So you get to specify it, but it has to be a disk. I'm going to wave my hand some more. There's another feature which you can see in this picture. This is a schematic of what Origamizer would produce, which is that there's some extra stuff underneath. It's slightly lighter because it's on the bottom side there. But you can see along every edge-- these are the edges of the actual polyhedron.
And then, there's these extra little tabs, extra flaps on the underside. This is actually necessary. If you want watertightness, you can't fold exactly that polyhedron. You fold a slightly thickened version. But you can keep all those flaps on one side. So if you're making something like a cube, you can put all the garbage on the inside where no one can see it. So there's another feature here: we get a little extra. And that's necessary if you want watertightness. And it's sort of the trick that makes all of this possible and efficient and so on. So the high level idea of Origamizer is we're going to say, there's all these faces that we need to make. So just plop them down on the piece of paper somewhere. And then, fold away the excess. Get rid of it by tucking. And that excess paper is going to get mapped into these little chunks here. And maybe I'll show you a demo. So you take something like-- this is similar to the thing I was showing. And I forgot a mouse. There are all the faces in the plane. And they've conveniently already been arranged. I can't zoom out because I lack a scroll wheel. But there's a square that makes the-- yeah, or a multi-touch trackpad. Sorry. And all the faces are just there. And then there's this extra stuff. And now I say-- I should probably do this, too. Maybe not. Well, all right. And then I say, crease pattern. Boom. That folds away the excess. And then, just the white faces, these guys which correspond to faces over there, are used. And then, you just fold away the extra junk. Easy. You want to make a bunny. This is actually an example where it will not work by itself. Because as I said, this algorithm is not quite guaranteed to work. So I'm going to change the boundary a little bit by cutting to the ears. And so this is still-- it was a disk before. It's a disk after because there's this boundary here. But it turns out, now the algorithm will work, assuming I didn't mess up. It's bouncing around a little bit.
You can see it's pretty efficient here. There are very tiny gaps between the triangles. There's actually a little bit of a violation here. What's happening, which you can see here, is that on the inside is all this extra structure, these flaps. And sometimes they're so big that they actually penetrate the surface. But that can be dealt with by just a little bit of cutting, maybe a little more cutting. Not cutting in the literal sense. But we just subdivided these panels into lots of smaller panels. And now, it is valid. This is not the design that you've seen on the poster. The design on the poster is a little more efficient. I'm not so expert. I'm not so pro that I can make exactly that design. So it's a little inefficient on the sides here. But you can use this tool to make super complicated 3D models. Let me quickly tell you what goes into the algorithm. So the first part is to figure out where all these tucks are going to go. They lie essentially along angular bisectors on one side of every edge. But at the vertices, things are super complicated. And in general, if you have a non-convex surface with tons of material coming together, what I'd like to do is add lots of little flaps on the side. So that when I open it up-- so let me draw you a generic picture. So we have two faces coming together. What I'd like to do is add a flap here and a corresponding one just behind it. So that's sort of a tab. And I can unfold that and think of some other surface that I'm trying to fold. So I really wanted just those two polygons. But now, I've made some other thing which is still a disk. You can add those faces in such a way that you will have, at most, 360 degrees of material everywhere. So even though in the original thing, you had maybe tons of material coming together, which you cannot make with real paper, you add in a bunch of tabs along the edges and a few more at the vertices. You can fix things so that the thing is actually makeable from one sheet of paper.
That's the high level idea. Doing that is a bit detailed. The paper isn't even published. These are some figures from it. But this is how you resolve a vertex with some triangulation stuff. Each of these corresponds to a flap in the original thing. And then, this is where we're a little impractical. It doesn't quite match what we do in practice in the computer program. But the idea is, you imagine you have your faces, which you want to bring together somehow. They're distributed in the piece of paper somewhere. But you'd really like to connect them together when they have matching edges. So this edge might be glued to some edge of some other triangle. And I need to just navigate these little river-like strips to visit one edge from the other. And we proved that if these guys are sufficiently tiny, you can always do that. And of course, in reality, you want to arrange things so you can do it efficiently with little wastage. But we proved at least it's possible with this kind of wiggly path stuff. So now our picture, the thing we're trying to make, looks something like that. Where we have-- originally it was four triangles, the gray triangles. We added in these extra flaps so that it's nice and well-behaved. And each of those flaps we're covering from both sides. And if you think about the red diagram-- I'll get this right. The red diagram corresponds to that. It's just been kind of squashed. So there are four triangles, which correspond to the four top flaps. And there's this outer chunk, which corresponds to that flap. And then, you look at the dual, which is the blue diagram. You take that picture, and that's how you set up the crease pattern essentially. So these are the original four triangles. And then, there's all this stuff that represents the structure of that thing that we want to make. And then, you just have to fill in the creases in the middle. And you do that with something-- this is how you do it guaranteed correct.
And we saw I had to do lots of subdivision here. What I called, accidentally, cutting. But just lots of pleats there. Because, essentially, this is the edge. And we want that edge to lie in a tiny little tab. So it's got to go up and down and up and down and up and down, accordion style. And if you do it right, all those things will collapse down to a little tab attached to that edge, which is also attached to that edge. And they will get joined up. Then, you've got to get rid of all this stuff in the middle. And the rough idea is, if you pack it with-- or I guess you cover it with disks so that everything is again very tiny. And you fold what's called the [INAUDIBLE] with those points. And it works. It's complicated but very cool. And the paper hopefully will be released later this year finally. And that's Origamizer. And that's efficient origami design.
MIT 6.849: Geometric Folding Algorithms, Fall 2012. Lecture 13: Locked Linkages.
PROFESSOR: All right, let's get started. So today we're continuing the theme of locked linkages. Last time we talked about the Carpenter's Rule Theorem, which brought together all of the rigidity theory and tensegrity theory we had built, essentially, and showed that there were no locked 2D chains, paths or cycles, as graphs. And in general you can think in 2D, you can think in 3D, and you can think in 4D and higher. I drew this table way at the beginning of class. And you can think about chains, which are-- these are so-called open chains, like robotic arms. Paths is a graph. And polygons are closed chains. And then another case that's interesting to think about, and there's been a lot of work on, are trees, where you don't have any cycles in your graph, but you could have branching points with degree more than 2. You may recall the Carpenter's Rule Theorem worked whenever you had a graph of maximum degree 2. So it allowed even multiple chains in one graph that could be disconnected. But it forbade any kind of tree branching stuff. So the Carpenter's Rule Theorem is that there's no locked chains in 2D. So we did that. But trees, there are locked trees in 2D. In 3D there are locked chains. And these are two results that we will talk about in today's lecture. And that implies that there are locked trees, sort of obvious. Trees are more general than open chains. And because there's a locked open chain in 3D, there's also a locked tree. In 4D, though, it switches. So none of these things are locked. Of course, there are locked graphs in general. But if you look at trees, or paths, or cycles, none of them can lock. We won't have time to talk about that today, maybe another class. Essentially we're always thinking about an intrinsically one-dimensional structure. These edges, bars, are always one-dimensional. In 2D, where the ambient dimension of the space we're living in is just one more than the dimension of your edges, it's really interesting.
And that's what we saw for chains and what we'll see for trees. In 3D it gets tricky. This mismatch of two in dimension allows you to lock things, kind of like knots. And in 4D there's so much freedom in the way you move that, somehow, nothing can lock, in the same way that there are no knots in 4D. Cool. So that's the reminder summary about locked linkages, what they are, and what's known about them. Now we've seen that chains aren't locked. But before I get to trees I want to talk about algorithms for actually doing this. This is, after all, geometric folding algorithms. We've seen a proof that they exist, that in fact if we have a chain, there's an expansive motion that increases all the distances and therefore doesn't collide with itself, and eventually straightens all the outermost open chains and convexifies all the closed chains that are outermost, not containing anything. But how do we actually find such a motion? And this has been the subject of some study. There are three algorithms for doing it. So we'll call this unfolding 2D chains, unfolding being the goal of straightening or convexifying. And the first algorithm is just to mimic the proof that we saw. I call that proof CDR, because it's Connelly-Demaine-Rote. And if you think about what we proved, we just looked at the sort of infinitesimal case. We said, if we're at some position then at least, for an infinitesimal amount of time, we can increase all the distances. All the struts can increase in length, and the bars stay the same length. That's really only specifying the first derivative of a motion. So if you think of a motion as being some configuration C that's a function of time, and you take the derivative with respect to t, then we computed what that derivative should be. That was the first-order motion. I think we called it d, which is a little confusing. This is the d that we computed. And we said, well, that's an infinitesimal motion of some tensegrity.
We can find it with linear programming. So at any moment in time we can figure out what our derivative should be. This is what's called an ordinary differential equation, for those who have taken 18.03 or whatever. You just solve it. And a very easy way to solve it is, if you imagine configuration space, you start somewhere, you compute which way to go, you move a little bit in that direction. You're only supposed to go for an infinitesimal amount. But you go some positive epsilon. Then recompute your new direction, follow that, recompute your new direction, follow that for a little bit, and keep going. And you're almost tracking the intended curve. It's not quite perfect. But as your step size, this epsilon, goes to 0 you approximate the intended path. And exactly how close you approximate is computed in the textbook. You can see that for details. This is a method called forward Euler. It's like the simplest way to solve an ordinary differential equation. There are much better ways. But it's easiest to think about, and reason about. And so it gives you some approximate solution. Let me show it to you. I'm going to show you three methods. The top one is the one I'm talking about. So we're starting with the same polygon on the left, a simple little v shape. A second method is called pseudo-triangulation. We'll talk about that next. The third method's called energy. That one's the one you've seen. Let's start with the first method, where we take some example like this teeth example, and we, at each step, compute, in some sense, the biggest expansive motion you could do. We go a little bit in that direction. So in these animations, the edge lengths are changing a little bit. And I've chosen the step size so the edge lengths change by, at most, 10%, something like that. And here, on the left side we're zooming, on the right side we're not zooming. These are somewhat older animations. So apologies for the frame rate. What else do I want to say?
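As a concrete illustration, the forward-Euler loop he's describing fits in a few lines. This is a generic sketch, not the actual CDR implementation: the `direction` callback stands in for the linear program that computes the expansive first-order motion d, and the names are mine.

```python
import numpy as np

def forward_euler(c0, direction, steps, eps):
    """Approximately trace dC/dt = direction(C) starting from c0.

    Each iteration recomputes the first-order motion and moves a small
    amount eps along it: the 'compute which way to go, move a little
    bit, recompute' loop from the lecture.
    """
    c = np.asarray(c0, dtype=float)
    path = [c.copy()]
    for _ in range(steps):
        c = c + eps * np.asarray(direction(c), dtype=float)
        path.append(c.copy())
    return path

# Toy direction field (NOT the tensegrity LP): flow toward the origin.
trajectory = forward_euler([1.0], lambda c: -c, steps=10, eps=0.1)
```

As eps goes to 0, the polyline traced by `path` converges to the true solution curve; the textbook quantifies how fast.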
This is an example-- well, notice it's pulling the teeth off sort of from the end. If you want to be expansive you cannot pull the teeth straight apart. We're going to see other ways to do the teeth in a moment. Here's another example. Both of these examples at some point were conjectured to be locked. Before the Carpenter's Rule Theorem was proved, a lot of people were trying to find counterexamples. There aren't any. So this is nice. It preserves five-fold rotational symmetry. And you can do it for multiple shapes as well. So here we have one polygon in the center, four arcs, four paths on the outside. In this case, the simulation stopped when one of them convexified because I didn't implement all of the algorithm. At that point you just have to rigidify that polygon and expand the rest. So that gives you an idea sort of visually what this looks like. Qualitatively, we can say a bunch of cool things about this method. One is that it's the only method that's strictly expansive, strictly meaning not only do the struts not decrease in length, but they also don't stay the same whenever that's possible. They strictly increase. Now, there are some exceptions. If I have a bar, obviously, if I add a strut here it's not going to strictly increase in length. Also, if I have a bunch of bars that are collinear and I add a strut from here to here, that's not going to increase in length. It's not possible. But all other struts will strictly increase in length. You can check. That's what we proved here. So that's nice. Another fun fact, I didn't write it here, but it preserves symmetry if you do it right. You may notice in this example actually, it was originally four-fold rotationally symmetric. It does rotate a little bit. That's from computational error. And you could correct for that. I didn't correct for it. It should, in theory, preserve the four-fold symmetry there. Good. We can compute one step. So how fast can you compute this thing?
We can compute one step in polynomial time. That's finding an infinitesimal motion of a tensegrity. That's linear programming. But how many steps does it take? We don't really know. There is a bound in the textbook. It's not very small. Also, with forward Euler you lose a bit of accuracy. Now you could correct for accuracy, force all the edge lengths to be the same, but then you won't preserve the symmetry. There's a bit of a trade-off here. I don't want to spend too much time on it. This is the first algorithm. Although it's the only one that's strictly expansive, to me it's not the nicest. So I'm going to move on to the next method, which is pointed pseudo-triangulations. This is a method by Ileana Streinu, basically at the same time as the original Carpenter's Rule Theorem. And these are examples of what are called pointed pseudo-triangulations for a given set of points. So in general, a pseudo-triangle is a polygon that has three convex vertices. So a triangle has three convex vertices and that's it. In a pseudo-triangle, I allow a bunch of reflex vertices in between those guys. So this should be a polygon. I've drawn it curved. And if you check in this figure, all of the faces are pseudo-triangles. Most of them here are actually quadrilaterals. But in particular, that's a pseudo-triangle. Sometimes there are actual triangles. Here they're all quads or triangles. And a pointed pseudo-triangulation has the property that at every vertex all of the edges incident to it lie in a half plane. They all lie on that side. In other words, there's an angle that's bigger than 180 degrees. So this was a pseudo-triangle, and this is pointed. And if you check this example, every vertex has an angle that's bigger than 180 degrees. Obviously the ones on the outside have to. They're always convex.
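The pointed condition is easy to test computationally: sort the directions of the incident edges around the vertex and look for an angular gap strictly bigger than 180 degrees. This is a sketch I wrote to illustrate the definition; the function name is mine, not from any pseudo-triangulation library.

```python
import math

def is_pointed(vertex, neighbors):
    """True if all edges from `vertex` to `neighbors` lie in a half-plane,
    i.e. some gap between consecutive edge directions exceeds 180 degrees."""
    angles = sorted(math.atan2(ny - vertex[1], nx - vertex[0])
                    for nx, ny in neighbors)
    # Gaps between consecutive directions, plus the wrap-around gap.
    gaps = [b - a for a, b in zip(angles, angles[1:])]
    gaps.append(2 * math.pi - (angles[-1] - angles[0]))
    return max(gaps) > math.pi

# A degree-2 vertex of a polygon is always pointed...
print(is_pointed((0, 0), [(1, 0), (0, 1)]))                    # True
# ...but a vertex with edges in all four directions is not.
print(is_pointed((0, 0), [(1, 0), (0, 1), (-1, 0), (0, -1)]))  # False
```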
And pointed pseudo-triangulations are-- they're not only supposed to have these two properties, but you're also supposed to have as many edges as possible, subject to having these two properties. If you tried to add any edge to either of these examples, you would violate the pointed property. If I added this edge, then I'd destroy this 180 degree angle. Any edge you consider, either you get a crossing-- which is not allowed-- or one of the vertices loses a 180 degree angle. So this is actually a really powerful concept. And it was introduced by this paper by Streinu in 2000. It has lots of theory built around it by now. I'm not going to talk about that theory. I want to talk about how it applies to the Carpenter's Rule problem. I guess the first thing to say is that if you have, say, a polygon-- maybe I should do a real example, so I'll make it not so giant. OK, you take a polygon. So far it's pointed, right? At every vertex, one of the two sides has an angle bigger than 180. So just add as many edges as you can while still being pointed. Do, do, do. This one, that looks OK. I think that's it. I think this is a pseudo-triangulation. I've added as many edges as I can while still being pointed. It turns out if you start with something that's pointed, you'll be able to keep adding edges, keeping it pointed. And in the end all of the faces will be pseudo-triangles. So mission accomplished. This is a pointed pseudo-triangulation that contains, as a subgraph, my original polygon. So these are the edge lengths I want to preserve. Those should stay rigid. I'm going to be kind of crazy and also preserve the lengths of all of these added edges. Now if you did that, this thing would be rigid, which is not so exciting. All you need to do to fix that is remove one of the added edges that's on the convex hull, on the outer boundary here. Then it will have one degree of freedom. So there will be two ways to move it. One is to expand this distance, the other is to contract it.
And the one that expands that distance will expand all distances. So if you look at some other pair, like this guy, every other pair of distances will expand or stay the same. So this thing is expansive. But not strictly expansive, because we're preserving these lengths. These ones will stay the same. So let me show you some of the things that can happen. So this is sort of at the heart of the algorithm. I'm not going to go into all the details. This can happen like n cubed times, where you say, OK, I've added in all these edges. Here's a nice pseudo-triangle here. It happens to be a triangle with some stuff on the bottom, stuff on the top. As I flex this to expand the distances, this is going to move like that. At some point, those three points will become collinear; got to deal with that. And the way you deal with it is, as you continue moving, it's no longer the case that those two blue edges are the edges in the pseudo-triangulation. Now it's this one red edge. This is called a flip. And you see, we used to have a quadrilateral up here and a triangle down here. Now we have a triangle up here and a quadrilateral over here. That's pretty intuitive. You preserve that it's a pseudo-triangulation as you move this. If you tried to go too far here you would actually get a convex quadrilateral. If this kept going then this would be a convex quadrilateral. That's not a pseudo-triangulation, that's bad. But you can always do these flips to fix it as you go along. And it turns out this will only happen a polynomial number of times. AUDIENCE: [INAUDIBLE]. PROFESSOR: Like shown, is how I flipped it. AUDIENCE: Is it [INAUDIBLE] to a red? PROFESSOR: Yeah, the red edge now goes from here to there. Whereas before there were two segments that went from here to there, now I'm going to go straight from there to there. That's the difference. This is the general picture. You have to believe me, because I want to talk about other things. And, cool.
So here's an example of it actually running. So we have some polygon in the top left. We've deleted the edge that's cyan. So that one's going to expand. And we move a little bit. This is moving a little bit. This is moving a little bit more. At this point, these three points are collinear. Actually, the way it's drawn, it looks like it's a little bit too far. But they should be collinear. That would be a violation, because this would be-- if we go any farther-- that would be a violation, because this would be a convex quadrilateral. So we do a flip here, and we end up with that edge instead. So now everything's a pseudo-triangle, everything's pointed, everything's happy. You keep going, then these three points become collinear. So you end up getting that edge. And you keep going. And all throughout this motion all pairwise distances are not decreasing. Everything's expanding or staying the same. And then the wrap-up looks like this. In the end, you get a convex polygon. Then you're stuck, because there's no edge to remove on the outside. You've got to keep all of those edges. So that is the pseudo-triangulation method in a nutshell. And it's expansive. It has a polynomial in n number of moves. It's not clear how quickly you can compute the moves. I know an exponential time algorithm. There might be a faster one. I don't think this has been resolved. Let me just check my notes here. It says the best algorithm is exponential to compute one move. In these examples, where they're all quadrilaterals or triangles, it's really easy to compute them. In general, if you get complicated things, it's unclear. Pseudo-triangulations, though, have a lot of nice structure. So maybe it's easier than general linkage folding. I have here: implementing this algorithm would be a cool project. Oh, another open problem is how many steps does this algorithm really need? Can you prove a pseudo-polynomial bound? I'm not sure.
But let me tell you about my personal favorite algorithm, the energy method. This is also the most recent, from 2004. The weird thing about this algorithm is that it is not expansive. It's really easy to compute one step of the algorithm. I can do it in quadratic time, even exactly. With each of the other two methods you have to specify some error tolerance epsilon. Here you can do it perfectly if you have exact square roots. And the number of steps is also small. It has a pseudo-polynomial number of steps. Have I talked about pseudo-polynomial yet? Maybe, briefly at some point. Let me tell you what I mean. I want to be polynomial in the number of vertices n and another parameter r. r is going to be, basically, the maximum distance in your initial configuration divided by the minimum distance. Exactly how you define distance, don't worry about it. Not a big deal. So this is a geometric feature of the input. It has nothing to do with n. n could stay fixed and you could change this to whatever you want by making nastier and nastier examples. Nastiness is measured here by how big your linkage is versus how tight things get. So as long as that's reasonable, this is going to be a polynomial number of moves. So this is a good bound. It's better than all the others that have been proved, anyway. It might hold for some of the other methods. With pseudo-triangulations, of course, we have a better bound on the number of moves. But each move is kind of complicated. It's a pseudo-triangulation. So there is a trade-off between complexity of a move and number of moves. But this gets a decent bound for everything. You've seen this method. I showed it in lecture one. But it's interesting to contrast it with the CDR method, which sort of unrolled the teeth from the end. Here, this method basically pulls the teeth right apart. And that's possible because it's not required to be expansive. And we have the double tree. This one looks more or less the same.
It's a little smoother, I would say, than the other method. I don't have it side by side. So it's a little hard to judge. And what's really exciting about this method, to me, is that you can run it for really large examples in like a minute or so. 500 vertices, no big deal. That's because each step is pretty easy to compute, in quadratic time. Whereas with the CDR method you need to solve a convex program, and that's a little costly for large examples. Cool. So that's the energy method. Now I'm going to actually tell you how it works. The big difference is it's not expansive. And the idea is, well, being expansive on every edge, that's kind of hard. How do I figure that out? Oh, it makes my brain hurt. I've got to solve this tensegrity. But if I was just expansive on average, maybe that would be good enough. That's the crazy idea. So we define this energy function that does just that. It's a function on configurations. So I'm going to write E of C. No m in this equation. We're going to sum over all edges vw, and then sum over all vertices u different from v and w, and take 1 over the distance from u to the edge. So E of C is the sum, over all edges vw and all vertices u not equal to v or w, of 1 over the distance from u to the edge vw. This is just saying: sum over all non-incident vertices and edges, take their distance-- there are many ways you could define that; like the minimum distance between them would be fine-- take the reciprocal, add them all up. This is my energy function. Kind of weird. But it's averaging, not distances, but 1 over distances. So why is that so interesting? Because distances are always positive. So I only have to go over here. If I plot distance on the x-axis, and 1 over the distance on the y-axis, the plot of that function looks like this. It has this vertical asymptote at 0. So if the distance were to go to 0, which is bad for me because that's when things start crossing-- they will start touching at d equals 0, and after that they might cross-- the energy shoots to positive infinity.
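In code, this energy is just a double loop over bars and non-incident vertices. The point-to-segment distance helper below is one reasonable choice for "distance from a vertex to an edge"; as he notes, any similar measure would do. A rough sketch, with names of my own choosing:

```python
import math

def point_segment_dist(p, a, b):
    """Euclidean distance from point p to the segment from a to b."""
    (ax, ay), (bx, by), (px, py) = a, b, p
    dx, dy = bx - ax, by - ay
    len2 = dx * dx + dy * dy
    if len2 == 0.0:                      # degenerate zero-length bar
        return math.hypot(px - ax, py - ay)
    # Project p onto the line through a and b, clamped to the segment.
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / len2))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def energy(points, bars):
    """E(C): sum over bars {v, w} and vertices u not on the bar of
    1 / dist(u, bar). Shoots to infinity as any vertex approaches a bar."""
    total = 0.0
    for v, w in bars:
        for u in range(len(points)):
            if u != v and u != w:
                total += 1.0 / point_segment_dist(points[u], points[v], points[w])
    return total

# A two-bar chain where each non-incident vertex is at distance 1 from the bar.
E = energy([(0, 0), (1, 0), (1, 1)], [(0, 1), (1, 2)])   # E == 2.0
```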
So if I take the sum over all these, if any of the distances decrease to 0 then the energy will shoot to infinity. What if my distances increase? Because I know expansive motions exist. That's what we proved last time. I'm not going to compute them. But I know they're out there somewhere. If I had an expansive motion, all of these distances increase. So that means this reciprocal decreases. So I have-- it's hard to imagine-- I have this giant dimensional configuration space. It has d times n dimensions, whatever. A lot of stuff. I'm somewhere. But if I now plotted-- imagine that's in two dimensions, in the plane here, so that's there-- and then in the third dimension I plot what is this energy landscape. For each of these configurations I compute some height. What is my energy? I get some 3D surface here. In general, it's going to be d times n plus 1 dimensions. Here's the configuration space. We plot over that. What I'm saying is, because expansive motions exist, there has to be a motion for every point that decreases energy. Because if you decrease every distance, you also decrease them on average in this sense. If every term decreases, then of course, the sum will. So this means energy decreasing motions exist. Now energy decreasing feels like a good thing. Because I start somewhere, it has some energy, not infinity. If I decrease the energy, it's really hard for my energy to go to plus infinity. It can't happen. So all I need to do is follow any energy decreasing motion. It might be an expansive one, but probably not. We know that there are energy decreasing motions because there are expansive motions. But let's just take any energy decreasing motion, sort of expansive on average. Then it won't self-intersect because if energy decreases it will never get to plus infinity. And therefore none of the distances will go to 0. So what this algorithm does is follow the gradient. So I should say something about what this notation is. 
This is whatever, higher order calculus. You live in some space. You're on like some hill, or whatever. You're on some surface. And you say well I'd really like to go downhill from here. And there might be many downhill options. There might be many uphill options. This is some crazy high dimensional choice. Just take the option that decreases energy the fastest, the most downhill. That is negative gradient of E. Believe me that it exists. It exists because this energy function is smooth, in some sense. You're basically taking first derivatives. That gives you the highest chain-- or you take the place that has the highest change and boom. That gives it to you. It's very easy to compute because this function has quadratically many terms. It takes about n squared time to find it. And that gives you some energy decrease in motion. It's just an easy one to find. And that's what we're animating all the time. So we just find energy decreasing motion, move a little bit in that direction. Not too much, because if you go-- it's again a first derivative. This is only an infinitesimal motion. But we're actually going to move in that direction for a positive amount of time. As long as we don't go too far, and we can find how far by binary search, we won't self-intersect. Because we know locally it's decreasing energy. If we go small enough step it really will decrease energy, and then life is good. So it's really easy to do this. Super simple algorithm. Good. It's non-expansive because we're only decreasing the average, not each of the terms. Do, do, do, do, do. You can prove the number of steps-- this is the part I'm most proud of here-- is pseudo-polynomial, polynomial in n and r. What is the polynomial? n to the 123 times r to the 81. This is the largest polynomial bound I know of that's not dependent on dimension. So I'm very proud. In practice, the number of steps is much, much smaller than this. This is what we could prove in an easy way. 
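The descent loop the lecture describes can be sketched numerically. Everything below is my own hedged reconstruction, not the paper's algorithm: a finite-difference gradient of the energy, projected (to first order) onto edge-length-preserving directions, with a step-halving line search standing in for the binary search on step size.

```python
import numpy as np

def energy(x, edges):
    """Sum of 1/dist(u, edge vw) over all non-incident vertex-edge pairs."""
    P = x.reshape(-1, 2)
    E = 0.0
    for v, w in edges:
        a, d = P[v], P[w] - P[v]
        for u in range(len(P)):
            if u != v and u != w:
                t = np.clip(np.dot(P[u] - a, d) / np.dot(d, d), 0.0, 1.0)
                E += 1.0 / np.linalg.norm(P[u] - (a + t * d))
    return E

def num_grad(f, x, h=1e-6):
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x); e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

def length_jacobian(x, edges):
    """Gradients of the squared edge lengths; its null space consists of the
    first-order length-preserving motions."""
    P = x.reshape(-1, 2)
    J = np.zeros((len(edges), len(x)))
    for k, (v, w) in enumerate(edges):
        d = P[v] - P[w]
        J[k, 2 * v:2 * v + 2] = 2 * d
        J[k, 2 * w:2 * w + 2] = -2 * d
    return J

def descent_step(x, edges, t0=5e-3, halvings=40):
    """One energy-decreasing step that preserves edge lengths to first order."""
    g = num_grad(lambda y: energy(y, edges), x)
    J = length_jacobian(x, edges)
    c, *_ = np.linalg.lstsq(J.T, g, rcond=None)
    d = -(g - J.T @ c)            # negative gradient projected onto null(J)
    n = np.linalg.norm(d)
    if n < 1e-12:
        return x                  # critical point (e.g. already convex)
    d /= n
    E0, t = energy(x, edges), t0
    for _ in range(halvings):     # shrink the step until energy really drops
        if energy(x + t * d, edges) < E0:
            return x + t * d
        t *= 0.5
    return x
```

A bent two-bar chain opened with a few of these steps decreases its energy while keeping both bar lengths essentially fixed; the real method has to be more careful about constraint drift than this sketch is.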
If you want a project, it's a little tedious, but it would be easy, I'm pretty sure. You could decrease this to, I don't know, at least n to the 20 or something by being a little more careful on the analysis. But once we got something that was pseudo-polynomial we were happy. So we could leave others to figure out the right bound. It's a little tricky with these gradient descent algorithms to get good bounds. In practice they work really well, because the gradient is not as nasty as you might imagine it to be in the worst case. But the worst case bound, hey, it's pseudo-polynomial. It's a nice theoretical guarantee. And in practice it also happens to work. Like these examples only take, I don't know, a few hundred steps, 1,000 steps, whatever. AUDIENCE: Where do those numbers come from? PROFESSOR: Where does 123 come from? We really wanted it to be 1, 2, 3. 81 is my birth year. [LAUGHTER] PROFESSOR: That's why I like this number so much. Oh, actually here I have 41 written. I'm pretty sure it's 81 though. I should correct the notes, double check, something. It's really you're just adding up twos and threes a lot, and fours, and things like that a whole bunch of times. And then just luckily it came out to a nice number. As far as I know, not intentional. There are few authors though, so I don't know. Maybe one of them increased a bound to make it a little cooler number in the end. Good. So that's pseudo-polynomial number of steps. It's interesting, each of the steps is actually a very nice motion. It just moves along a straight line in the configuration space. I would be nice to know whether you can actually achieve a polynomial number of steps independent of r. I think the answer is no, it's not possible for some linkages that are really tight. I think you need a dependence, a polynomial dependence, on r-- at least a linear dependence on r. But I don't know how to prove that. It's a nice open problem. 
Another fun problem, these examples with polygons end up with a particular convex shape in the end. Is that shape unique? If I gave you the sequence of edge lengths here, they're all 1, or they're almost 1. Not quite the same. No, sorry, they get tinier as you go to the center. But for that sequence of edge lengths, is there a unique minimum energy configuration? I think so. But I don't know. We know that this method will get to convex, because only at the convex configuration do you no longer have an expansive motion and no longer have a decreasing energy motion, maybe. Cool. Those are the three algorithms. Any questions about them? Before we go to trees, again, tell you a cool application to origami, which is to rigid origami. Remember, rigid origami, we're not allowed to add any creases. And all of the faces between creases have to stay flat. They're like made of-- imagine you have pieces of metal representing the faces. You have piano hinges making the edges. That's rigid origami. That's kind of like a linkage. In particular, when you have a single vertex origami-- say that's five foldable, I don't know. And if you say each of these wedges is a rigid piece of metal, well we already know this kind of set up can be modeled by its boundary. Ignore the interior. Just think of there being hinges of this one dimensional structure like that. So that looks a lot like a linkage. These have to stay rigid. They're not straight lines. It's a little different. It's almost the same. If you think about how you're allowed to fold these things-- really you should think about it here, I guess-- what happens, by a continuous motion what happens is that you're living on a sphere. So you start out on the equator of the sphere. This thing, just plop it down the equator. The boundary lies on the boundary of the sphere. The interior of the paper is inside the sphere, right along the flat part there. As you fold, these points will stay on the sphere. 
And these edge lengths will be preserved; they're arcs on the sphere. They'll be great circular arcs at all times. So folding this thing in three space is really equivalent to folding this linkage on the sphere. And that looks an awful lot like a polygon on a sphere. What we really want is a spherical Carpenter's Rule Theorem. And there is. This is by Streinu and Whiteley. So if you have a closed chain, a polygon, of total length at most 2 pi on a unit sphere then you have a connected configuration space. So in the plane, a closed chain always had a connected configuration space. On the sphere you need that the chain is not too long, because there's only sort of a bounded amount of room on the sphere. The equator has total length 2 pi, perimeter of the equator. For a unit sphere it's 2 pi. So we're just canonically making it a unit sphere. And 2 pi really corresponds to this situation, which is 360 degrees. That's how much total length we have. So it just fits in the equator, life is good. The reason you want it to have length at most 2 pi is because then you actually do have a convex configuration. In the case of 2 pi, it lies along the equator. If it's smaller than 2 pi it'll be like some smaller portion. This would be less than 2 pi on the sphere. If it's greater than 2 pi, you can't draw it convexly on the sphere. You can draw something. Like I could draw something like this, that'll have really long length on the sphere. But there will be no way to convexify it. You get stuck at some point. And as long as you don't have that situation, you can take this-- what this actually implies is that your polygon at all times will lie in a hemisphere. You take that hemisphere and you sort of unroll it, you splay it out, you project it to the plane. You apply the planar Carpenter's Rule Theorem.
You apply the fact that the unrolling operation, or that projection if you will-- I think it's like projection from some point here out to the plane-- that projection preserves infinitesimal flexibility of tensegrities. So it'll still have the expansive motion in the plane. And then you could turn it into an expansive motion on the sphere. And eventually you'll get a convex polygon. So that's sort of how this is proved. Then you can apply it to rigid origami, and say for single vertex origami, it's always rigidly foldable. Any state you want to reach can be reached by continuous motion, without bending any of the faces. So a fun little application. And one of the few things we know about rigid origami, for multiple vertices it gets a lot harder. Not always possible, and we don't have a nice characterization. So finally, let's move on to locked trees in the plane. So these are sort of the classic examples. The top two were in a 1998 paper. This was when I started working on folding stuff, very beginning. A whole bunch of people from a big workshop. First example is this one. And it's a bunch of sort of arms tucked into their armpits, and in a cyclic way. And the dotted circles mean objects in this mirror are closer than they appear. So imagine that all of these points are actually really, really close and tight in here. And this guy's actually really close and tight against this edge. So I've drawn it with lots of slack so you can see the combinatorial structure. But geometrically it's much tighter. Then the intuition is that you can't get any of these arms open unless you could somehow expand one of these wedges. It's like, if you could expand this angle then this guy would have room to come out. But how do I expand that angle? Well I'd have to compress the other angles because of the cyclic picture. And I can't, if I look at some other angle like this one, I can't compress it because the arm is in the way. So I can't open an arm until I've closed some other arm.
And I can't close an arm before I've opened it. And so nothing can happen. That's the intuition behind the proof. The details are very messy, because you have to define open and closed, and what things must happen before what things. This example should look familiar because if you double it, if you replace every edge with two edges, we get one of the examples that was expanding with the five-fold symmetry. So people have sort of known for a while, and then we finally proved, that this is locked. If you double it, people thought well maybe it's still locked. Turns out no, because the center vertex can expand in a polygon, but with a tree it can't. OK, big deal. These examples, this is just yet another way to do that. But what's interesting is if you double some of the edges you get this tree. So you have 1 degree 3 vertex in the center. And you go around, you visit this arm. You go back, you visit this arm. Turns out this really does simulate this tree, because we haven't touched the central degree 3 vertex. And this is nice because it has one degree 3 vertex. Everything else is degree 2 or 1. So in the Carpenter's Rule theorem where we said maximum degree 2, it's really tight. And as soon as you add one vertex of degree 3 you can be locked. That was the point of this example. What other fun things can you do with trees? Well, three years ago in this class we started thinking about other locked trees. How low could you go? How many edges do you need to get locked? This example-- no I guess this example would be minimum. Here I've drawn it with eight petals. You can get away with five petals, or five arms, each of length 3. So that's 15 edges. That was the state of the art. And then we had this crazy idea in a problem session of this locked chain, locked tree. It has two degree 3 vertices. And it kind of just winds in there. We're going to see why these things are locked. But OK, that's kind of neat. 11, that's much better than 15. Can we do better?
And every week we improved it, for a few weeks. This one looks messier, but it has one fewer edge. And then this one came along, and we're like whoa. So symmetric, so beautiful. What's interesting is it doesn't have the cyclic structure. It's almost flat, in fact. You could think of all these guys being in one point. All these guys being in one point. All those guys being in one point. It's like they're, all three, collinear. So it's a very different kind of example. Before we thought locked trees required this kind of cyclic condition. And so, for example, we conjectured-- or I guess Poon, one of the authors here-- conjectured that there would be no way to lock something if all the edges were horizontal and vertical. Because if you have that you couldn't have five things in a circle. Now suddenly we think, oh, this is interesting. Because it has two degree 3 vertices. It's so symmetric. Surely this is the fewest possible edges we could get away with. But no, then we found the eight edge example. And this is kind of funny. It's like instead of being nice and symmetric, essentially, what we're doing is removing this edge. And we want to instead attach it in here. But then we have to futz around with the vertices to make it work out and be a tree. And now has only one degree 3 vertex. So it also has that nice property. It only has eight edges. And this, we believe, is optimal. And we can actually prove something along those lines. So a linear tree is one where all the vertices lie nearly on the line. So this is a linear locked tree. And we can prove that among all linear locked trees, they must have at least eight edges. Locked linear tree has at least eight edges. So at least among linear locked trees that example is optimal. But maybe you could use the second dimension to do something better than eight edges. But I don't think so. That's an open problem. What other good things can we do? You can make it orthogonal. 
You can mimic exactly this structure with an orthogonal structure, all the edges horizontal and vertical. Just expand each of these vertices into very tiny, and if this is drawn really squished, these are super short edges. So they really don't change the motion space hardly at all. You can actually prove that. And this is also locked. That's one of the nice things you do once you get out of the cyclic kind of structure. I think I have something else, no. OK, stay there. Yeah, actually maybe I do want to go there. This is an example, it has a cyclic structure again. Same paper, last year. This is a newer example. And it has the property that all of the edge lengths are the same, and nothing touches. Now you'll appreciate this in a little while, how crazy this is. Because all of the previous examples required things to be very close and tight. And we crucially use very tight proximity in order to prove things are locked. And if you take the example that looks like this, with six arms-- I didn't draw it very well. But if you draw it with six arms, these are like equilateral triangles, the edge lengths will be almost all the same. In fact, if you allow them to touch they will be exactly the same. So there was this open question, well, if I want them to be exactly the same but not touch, can you still lock? We thought no, but in fact you can do it. And this is very tricky to prove locked because we can't make these arbitrarily tight. It really has to look like this, because the edge lengths are all the same. Now this has seven-fold symmetry. Because sixfold they would all touch. It's tricky. Cool. A big open question here, of course, is to characterize locked trees. I think that's quite hard, because in particular, if you're given a tree and you want to know does that go from this configuration to this configuration, that's known to be p space complete, which is really, really hard. But all of those examples are locked. 
So maybe unlocked trees have some special structure that's easy to find. I guess not. But who knows? What I do think has a nice structure are these linear locked trees. They're pretty simple. They're basically one dimensional. And I think we could characterize linear locked trees in polynomial time. Maybe we'll work on that this afternoon. But that is open. All right, next thing I want to talk about is how the heck do you prove that these things are locked? Now historically there have been lots of different proofs. And this tree still does not have a nice proof that it's locked. But all the other trees I can give you very succinct proofs that they're locked. And so I want to tell you how we do that. Because it uses tensegrity theory, our good friend. This is the idea of infinitesimally locked linkages. So ignore this part of the picture for now. If we take some tree, most of the examples I drew these little dotted circles to say, well, these guys are really tight. And the intuition is the tighter you make it, the less that configuration can move. If you look, we're claiming the configuration space is disconnected. But not only that, we have this configuration and we say there's a small ball of motions that you could possibly do. It does wiggle a little bit. And then there's other stuff over here that you can never reach because this can't move very much. Well in fact, if you make these circles tighter and tighter, the claim is that this, in the space of motions, you get less and less freedom. How could we formalize that? As you draw the thing tighter you get less and less motion. Well, let's go to the limit. Suppose we went all the way to the point that these things are touching. Now this is going to be a little tricky mathematically, because we have to remember that this vertex is in this wedge, even though it's actually right on top of this point. So we have to remember, sort of, how things look locally. But geometrically the picture is going to look like that.
If you look from afar, you won't be able to tell that there's three edges along here, because they're right on top of each other. But if you, sort of, imagine zooming in infinitesimally in each of these vertices, it's going to be whatever the heck it is. So you remember the fact that right here there are four vertices. And this is how they're connected to incident edges. There's actually three edges here, and two edges here, and so on. So that's how you describe one of these, we call them, self-touching linkages. Because the edges are touching each other. They're right on top of each other. Now normally, if you wanted to capture this notion of wiggling a little bit, we could define the idea of being locked within epsilon. So a configuration is locked within epsilon if it's impossible to get farther than epsilon in configuration space. So that's this little ball, thinking about radius epsilon here. And if you can't get outside that ball, you have some weird space you can get to. But if it's bounded by a ball of radius epsilon then I say you're locked within epsilon. And if epsilon is small that really means there are other things you can't get to. Well as soon as I get to self-touching I can actually think about being locked within epsilon for epsilon equals 0, also known as being rigid. We've already talked about being locked within 0. If you can't move it all, that's rigid. So this thing could actually be rigid. Whereas all the locked trees, they can never be rigid because I don't want them to really touch. This is an analysis tool. I don't like trees that are self-touching. It's kind of ugly and cheating. And so real trees, non-self-touching touching trees could only be locked within some positive epsilon. But if we consider for the moment the extreme when they're touching then we could hope for rigidity. Rigidity is good because we know how to prove things are rigid. We know how to test things are rigid in two dimensions. It's pretty easy. 
We could test is it infinitesimally rigid? If it's infinitesimally rigid we know it's rigid. It's a little bit trickier because we have to represent the non-crossing constraints. And that's what these purple edges do. So the idea is, well I definitely want to preserve the lengths of these edges. I'm not interested in expansive motions, because that's a subset of possible motions. But I do know that this vertex should move away from this edge, or stay on the edge. It's not allowed to go to the other side of the edge. So I imagine there's a little tiny strut here. It's of infinitesimal length right now. It can get longer, but it has to get longer in that direction. You have to move away-- you have to stay on the right side of this edge. And you have to stay on the left side of this edge. It turns out you really can represent that by a strut. It's now a strut between a vertex and an edge, because this guy can slide along the edge, or move away from it. But it's a strut. And you can think of this as a tensegrity, all the usual tensegrity theory applies. You can define equilibrium stresses, polyhedral liftings, infinitesimal rigidity. And so what's actually drawn here-- you should look at the book for more details. I don't want to go into details on this. But originally the state of the art for proving something like this is locked is you basically give-- you show an equilibrium stress that is positive on all the struts. We know that if you find a stress that's positive on all struts then all the struts in fact must act as bars. What that means is that this length, which is currently 0, must stay 0. Therefore this vertex is actually pinned because it has to both be on this edge and be on this edge. That's what a 0 length bar here and here would mean. It really has to be right here. It can't move at all. And therefore this whole thing is rigid, and nothing moves. Clear?
So we can use all the tensegrity stuff to prove that when this thing is actually self-touching, and all these distances are 0, it is rigid. But so what? Who cares about the self-touching thing being rigid? It's nice, but what I really care about is the non-self-touching configurations are locked within epsilon, for some tiny epsilon. Well good news. Rigidity of the self-touching configuration implies what's called strongly locked, yet another term which I need to define. Strongly locked means that sufficiently small, sufficiently small perturbations of the linkage, or I should say of the linkage configuration, are locked within epsilon for any epsilon. This is exactly the property I wanted to formalize, saying that if you draw the example tighter it can move less. For any epsilon, we want to say you can only move within epsilon, there's some notion of sufficiently small such that if I take this example and imagine currently all the vertices are on top of each other. But now I perturb it a little bit, which involves changing not only the vertex coordinates, but also the edge lengths-- but a tiny amount, some delta that's a function of epsilon. Whatever epsilon you choose, there's a very small disk I can draw like this, such that as long as all the vertices stay within that disk your example will be locked within epsilon. This is great because it lets us analyze rigidity, which is easy for self-touching configurations, which are not interesting in some sense. But we get to conclude something about the perturbations which are not self-touching and therefore nice. And we get the property we want, that you're locked within epsilon for any epsilon you want. You just draw it tighter you'll be locked within a smaller epsilon. So this is pretty cool. And I didn't defined perturbation. But I just mean every vertex stays within a radius delta disk. And sufficiently small here is delta, which is a function of epsilon. So this is pretty cool. 
And it turns out also-- this is proved much later-- these results are like 2002. And then 2006 proved that if you take any self-touching configuration, which is like this, you specify the geometry where things are all on top of each other. And then you'd say what you want every vertex to look like. It turns out there really is a valid perturbation that preserves that combinatorial structure and is arbitrarily small. So here, of course, I've drawn it so it's clear. You can perturb things. And I changed the edge lengths a little bit. But I can actually realize this combinatorial structure. It turns out that's always possible. So you can take any self-touching linkage, you can perturb it so it's not self-touching, and it is arbitrarily locked within epsilon. And the way you prove it, or one way to prove that something is rigid, is to show it's infinitesimally rigid by constructing an equilibrium stress that's positive on all these 0 length struts. As long as it's positive on all those 0 length struts you know that they're effectively bars. Usually once they're bars it's really obvious that the thing is rigid because it pins vertices into corners. Then you know that the whole thing is infinitesimally rigid, therefore it's rigid, therefore it's strongly locked. And you could see some examples of doing that in the book. But this is no longer the state of the art. There are now easier ways to prove that trees are locked-- some trees. It doesn't always work. Of course, it might be infinitesimally flexible. Lots of things could fail. But if it succeeds in proving something is rigid, then you know it's strongly locked and you're happy. So it's a conservative test, you might say. It would be a cool thing to implement. This is not hard, it's just linear programming.
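Searching for that stress certificate is easy to sketch numerically. This is my own hedged illustration: instead of the linear program the lecture mentions, it inspects an SVD basis of the null space of the equilibrium matrix (which suffices when the space of equilibrium stresses is one-dimensional), and the example tensegrity, two collinear bars with a strut across them, is my own, not from the lecture.

```python
import numpy as np

def equilibrium_matrix(points, edges):
    """One column per edge; (A @ omega) stacks the net stress force at each
    vertex, so null vectors of A are exactly the equilibrium stresses."""
    n, dim = points.shape
    A = np.zeros((n * dim, len(edges)))
    for k, (u, v) in enumerate(edges):
        A[dim * u:dim * u + dim, k] = points[u] - points[v]
        A[dim * v:dim * v + dim, k] = points[v] - points[u]
    return A

def stress_positive_on_struts(points, edges, strut_ids, tol=1e-9):
    """Look for an equilibrium stress that is strictly positive on every
    strut -- the certificate that the struts must act as bars.  Only checks
    null-space basis vectors and their negations, which is enough when the
    stress space is one-dimensional."""
    A = equilibrium_matrix(points, edges)
    _, s, Vt = np.linalg.svd(A)
    rank = int((s > tol).sum())
    for w in np.vstack([Vt[rank:], -Vt[rank:]]):
        if all(w[i] > tol for i in strut_ids):
            return w                      # certificate found
    return None

# Toy tensegrity: bars 0-1 and 1-2 along a line, plus a strut 0-2 across them.
pts = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
edges = [(0, 1), (1, 2), (0, 2)]          # the last edge is the strut
w = stress_positive_on_struts(pts, edges, strut_ids=[2])
```

For this toy example a valid stress exists (positive on the strut, negative on the bars), so the strut is forced to act as a bar, which is the shape of the argument used for the self-touching locked trees.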
But there's a cooler way that's even more-- like one where I can really draw the pictures here, you know we had to draw all these diagrams and figure out that stress positive numbers worked on all these edges, and eh, it's tedious. There's a much slicker way. And for whatever reason, they have become known as the rules. There are two of them. Although we've tried to come up with various rule threes, the ones that have been tried and tested and used all over the place are rule one and rule two. So rule one. You'll see why this is interesting in a moment. I have some linkage, self-touching linkage. We're in the same framework. Suppose these two edges have the same length. I'm drawing this one slightly smaller just so I can show you, which is on which side. But suppose they have the same length. And suppose that both of these angles are acute, strictly, less than 90. What do you think happens in this picture? I have this bar floating around. And I have these three bars. What happens to this bar? AUDIENCE: It's confined to be against the other bar. PROFESSOR: It's confined to be right against this edge. It can't move at all until this angle-- both of these, and I guess at least one of them would have to get to 90, then you could try to slide it out. Like if this one goes beyond 90 then you can slide out. As long as they're both less than 90 it's pinned there. So what I'm going to do is redraw this diagram as with two edges there, which I mean, you can just ignore. The point is if there's something attached here and here-- maybe many things, it's a tree, who knows?-- you can just attach them right there. This is a simplification to the linkage. It does not behave the same. But it has the same rigidity. Because if all I care about is rigidity, I care about can I move at all? So in order for this guy to move and all, these guys would first have to move for quite a while, a positive amount of time. I just want to know can I move 0 or more than 0? 
If I can move more than 0 I could move this guy more than 0 without this guy moving at all. So really I don't care about how this guy moves. He's effectively pinned for at least a small amount of time. I really care about can the rest move at all? So you can simplify your linkage like this, and the rigidity will be preserved. This is awesome because it's easy to see when this applies. And you just simplify your linkage until you can just tell whether it's rigid. All right rule two is sort of a special case. It looks like this. So here, this bar and this bar actually share an endpoint. And again, I need that this angle is acute. And I need that these two have the same length. And I can simplify to that, where if anything was attached here it just gets attached there. Let's try it out. Actually, I could use some more boards. So I'm just going to copy the example we had from before. And this will work on basically all the examples I've shown you except the equilateral one. This is the one, two, three, four, five, six, seven, eight bar example. This is the conjectured minimum. Let's prove that it's locked. See any rule one's we could apply? Do you see any edges that are effectively wedged against another edge? Remember, all of these vertices are actually on top of each other, we're thinking about the self-touching version. These guys are actually touching, and these guys are actually touching. AUDIENCE: This rule one [INAUDIBLE]. PROFESSOR: There's a rule one-- AUDIENCE: That one, there. PROFESSOR: --here? Yeah, good. This edge is pinned against this edge. Because look, we have acute angles here, namely 0. There's an acute angle here, that's 0. And this edge, if these guys are on top of each other, and these guys are on top of each other, these two edges have the same length. Therefore these are pinned together. So let me redraw it when they're pinned together. I'm drawing things with curves just so it's easier to draw, but you understand this is also basically flat.
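To make rule one concrete, here's a hedged sketch of checking whether it applies in a given self-touching configuration. The encoding (the free bar coincides with the middle bar of a three-bar wedge path) and the coordinates in the usage example are my own assumptions, not from the lecture.

```python
import math

def ang(u, v, w):
    """Interior angle at v formed by rays v->u and v->w, in radians.
    (Assumes neither ray is degenerate.)"""
    a = (u[0] - v[0], u[1] - v[1])
    b = (w[0] - v[0], w[1] - v[1])
    cos = (a[0] * b[0] + a[1] * b[1]) / (math.hypot(*a) * math.hypot(*b))
    return math.acos(max(-1.0, min(1.0, cos)))

def rule_one_applies(free_bar, wedge, tol=1e-9):
    """free_bar = (a, b): the bar to be pinned, lying on top of the middle
    bar of wedge = (p, q, r, s), a path of three bars.  Rule 1 applies when
    |ab| equals |qr|, the bars coincide in the self-touching limit, and the
    wedge angles at q and r are both strictly acute (an angle of 0 counts)."""
    (a, b), (p, q, r, s) = free_bar, wedge
    same_len = abs(math.dist(a, b) - math.dist(q, r)) < tol
    coincide = math.dist(a, q) < tol and math.dist(b, r) < tol
    acute = (ang(p, q, r) < math.pi / 2 - tol and
             ang(q, r, s) < math.pi / 2 - tol)
    return same_len and coincide and acute
```

When the test passes, the free bar can be merged onto the wedge bar, the simplification the lecture performs on the eight-bar example.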
Again these guys are all on top of each other, these are on top of each other, these are on top of each other. Did I do that right? I think so, yep. Anymore? AUDIENCE: [INAUDIBLE]. PROFESSOR: Yes, on the right side. It's another rule one. Here it's actually pretty symmetric. Is this locked? Is this rigid? It's kind of hard to say. But now it's going to be pretty obvious. Because when these guys join together that means these two vertices really are on top of each other for positive time. Same for these. So a new example looks like two triangles with an edge floating there. Now it's pretty obvious this is rigid. But if you really want to make it obvious you can apply rule two to this guy. And then these guys are pinned together. And then you have two triangles. Two triangles are rigid. I think that's pretty obvious. You could check it, whether they're infinitesimally rigid, whatever you feel like. But because that's rigid, this is rigid, this is rigid, this is rigid. Because these operations preserve rigidity. They don't preserve locked or whatever, but they preserve rigidity. Once you know that this is rigid you know that it's strongly locked. So when you perturb it so these guys are not on top of each other, but they're slightly spread out, it will be locked within epsilon, for any epsilon you want. Now this is super easy. And this is how we were able to iterate through all those locked trees, and say, oh yeah, this is still rigid, so it's still strongly locked. Now this doesn't always work. But it seems pretty good. And one of the conjectures on the table is that for linear trees rules one and two are kind of almost enough. It's not literally true. But it's hopefully mostly true. AUDIENCE: It seems that that argument would mean you wouldn't need one of the ends on-- PROFESSOR: Right, so we could think about this example. And you're asking do you need-- does this need to be that long or could we throw away the last bar? Right, it looks like that's good.
If you throw away this bar it's still the case that this edge you can apply rule one, and say that it's pinned against this edge. And once those are there, this thing basically acts as a single triangle, and life is good. Yeah, so you can remove this one edge. You could not remove both of the edges, at least for rule one to apply, because then this guy's not wedged into anything. He's wedged on this side but he's not wedged on that side. But you can remove this edge. And that's a super easy way to prove that this thing is locked. We didn't know this when we wrote the textbook, otherwise we would have given that proof. There's one based on stresses in the textbook. But here, yeah, you can make very easy judgments like that. Now it doesn't mean that it's not locked when you remove two of the edges. I'm not sure, no I think it's not locked. But just because the rules don't apply doesn't tell you that's not locked. But it at least makes it hard to prove. And it's a good sort of guideline. Questions? Yeah. AUDIENCE: Did the perturbations that are positive and finite have numbers? Like, what's-- it's sort of disturbing that they get very, very small-- what are they? PROFESSOR: So you want to know how big is delta? How big is epsilon? Well it depends, of course, how much motion you want to allow, how small the perturbations have to be. And I don't have a great answer. I do recall that we computed a bound, probably in terms of something like r, the maximum distance divided by the smallest non 0 distance. So it depends how close to tight you are in other places. And it depends on your epsilon. There is an explicit bound. I think it's polynomial on those two things. But I don't quite remember. So you can actually compute how much perturbation will give you locked within epsilon. But it's certainly not clean. It's actually not too hard to prove this statement just using topology. Should I try to remember how to prove it? Basically, yeah, so there's this fun fact. 
Suppose you have some tensegrity. So tensegrities are kind of hard. They say, look, a bar has this length, ain't changing. Let's be a little more flexible. What if you said, oh, this bar can change within epsilon? Because in reality you could probably pull the metal a little bit, just not very much. Struts, it's not supposed to get smaller. Let's say it can get epsilon smaller than it's supposed to. It turns out if you take some tensegrity that's rigid, and then you add this little bit of flexibility so the edges can change in length a tiny amount, then before you made this change your configuration was a point, and there might have been other stuff. But locally you couldn't move at all. If you add this flexibility, the new picture is a point with a tiny ball around it. And whatever, I mean this might change a little bit also. But the point is this point doesn't get much bigger when you add just a little bit of flexibility. This is a fact that was known in rigidity theory. It's called sloppy rigidity. And it's essentially what's going on here-- that we're adding a little bit of perturbation. Before you couldn't move at all. Now you can move a little bit. We just had to generalize from regular tensegrities to these weird tensegrities with sliding struts. And this is kind of intuitive. To really check it you just need to check that the constraints on edges and struts are closed sets, and then they have to behave this way. But you can actually compute how quickly they change. It's just messy. So I'll leave it at that. Hey, there's a little proof addition, glad I still remember it. Other questions? This is the end of 2D trees. And now I want to talk briefly about 3D chains. So we did this one, now I want to do this one. Actually this is one of the oldest results, from 1998. So right around the same time as the locked trees. I may have shown this example last time, or in lecture one. But here it is again. Three bars in the center, and two really long bars in the ends.
We call this knitting needles, because it's like two knitting needles with a short string connecting them, tied in a knot, sort of. Topologically this is not knotted. If you could add extra creases here you could pull that through, no problem. But if this is rigid, like a linkage, then this thing is locked provided each end bar has length strictly greater than the sum of the middle bars. So there's three middle bars here, add up their lengths, it should be shorter than this one and it should be shorter than this one. That's the cutoff. And I believe once it's the other way around, this is not locked. Sadly, rules one and two do not prove this thing is locked. They only work in two dimensions. But for this one example-- and to tell you the truth, this is pretty much the only example of a locked open chain that we have-- there's a really simple, nice proof. Let me draw the picture again. AUDIENCE: Do you need the bottom one? PROFESSOR: You're asking do I need three bars in the center, or could I get away with two? In fact, to draw this in 3D you need three bars. That's maybe not obvious. But if you tried to draw it with just four bars, like this, it's not really possible because these three points are coplanar, as any three points are, and then yeah. You can't get this weaving pattern. This guy's going to be either above or below the plane. And this guy's going to be above or below the plane. And in all cases, you don't get this weaving. So you really need the five. This is the minimum. You can prove that all four-bar 3D chains do not lock. But with five you can do it. Good question. So here's how we're going to prove that this thing is locked. So there are these three edges, they have various lengths, who knows? But add up the lengths, divide by 2, and measure out that far. So I'm going to call that the midpoint of those three segments. I want to center a ball here. It's going to look something like that. It's a 3D ball centered there.
So it's a ball, diameter equal to the sum of the middle bars. So radius is half that. And the center is the midpoint of the sum of the middle bars, whatever. So the radius is half the sum of the middle bars. And this is at half the sum of the middle bars. Therefore these middle bars stay inside the ball. Maybe they touch the boundary. But they're inside or on the boundary of the ball. So that means the middle bars are in the ball. Maybe just barely, but no matter how you move this thing-- I mean if you move this point then the ball moves with it. So no matter how this thing folds, those three bars stay inside the ball. What about these bars? Or what about the endpoints? Well if this point is inside the ball, which it is, and this point is inside the ball, which it is, then this thing is longer than the diameter of the ball. This thing is greater than the sum of the middle bars. The diameter of the ball is the sum of the middle bars. So if I take any point inside the ball and move straight from there by more than the diameter of the ball, I must go outside the ball. So the endpoints are outside the ball. How the heck are you going to untie that knot when all of the interior vertices stay inside of the ball, and these guys stay outside the ball? To formalize how the heck, you can say well if you ignore what's inside the ball-- something's happening there, who knows? But outside it's like there's a ball and there's two sticks coming out of it. So you can just imagine, for example, tying a string between the two ends. And something happens in the inside. But if you think of just from the outside perspective, these sticks just move around. It's really easy to not tangle this cord when these sticks move around. So now, in order for this thing to become unlocked, to straighten out for example, somehow inside you have to do some magic to get rid of this topology. Well what do I mean?
Well if you could do it inside, you could do that motion even when there's a string tied out here. But when I tie the string out here, it is a knot. It's a trefoil knot. There's no way to untie a trefoil knot without crossing. So either there's crossing in here-- which better be-- or there's crossing out here. There can't be crossing out here. It's just two sticks and a string. You can easily arrange the motion of the string to not cross the two sticks. Therefore this thing in fact cannot untie. Therefore this thing is locked, cannot straighten out. So that's locked 3D chains. Let me tell you a bunch of cool open problems. This is really the only good example of a locked 3D chain we have. And it has length ratios, the best you could do is like 1 to 3 plus epsilon. If each of these is length 1, these have to be 3 plus epsilon. Is that the best? Or could it be, for example, that all the edges are between length 1 and 2, and the chain is locked? We don't know. In the extreme case, what if all the edge lengths are the same, all length 1? Is there a locked 3D chain? We now know there's a locked 2D tree, but for chains it's tricky. It's even open if you add thickness. You say, hey, let's think about a 3D chain, all the edge lengths are the same, and you get to specify some radius of the bars. For a while I thought maybe this was locked. I don't think it is. We can unfold. All of these questions are open, and pretty fascinating. Especially because proteins are a lot like equilateral-- like all the edge lengths the same-- 3D chains. It's even open for equilateral 3D trees. So we know 2D equilateral trees can lock. But 3D, it's open. I think that's enough open problems. Lots of cool questions here. All right, I have one more. It's just fun. Even if you have a 3D chain where all the edges are on top of each other on a line segment-- it's like a linear tree but now it's a linear 3D chain-- it's like a bundle of segments. Is that locked? I don't think so.
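The ball argument boils down to a single length comparison: the knitting-needles chain is provably locked whenever each end bar is strictly longer than the total length of the middle bars (the ball's diameter). Here is a minimal sketch of that sufficient condition; the function name and input format are my own, not from the lecture.

```python
def knitting_needles_locked(middle_bars, end_bars):
    """Sufficient condition from the ball argument: the knitting-needles
    chain is locked if each end bar is strictly longer than the sum of
    the middle bar lengths. The middle bars then stay inside a ball of
    that diameter, while both endpoints are forced outside it, so the
    weaving cannot untie."""
    diameter = sum(middle_bars)  # ball diameter = sum of middle bars
    return all(e > diameter for e in end_bars)

# The classic example: three unit middle bars, ends of length 3 + epsilon.
print(knitting_needles_locked([1, 1, 1], [3.01, 3.01]))  # True
print(knitting_needles_locked([1, 1, 1], [2.5, 3.01]))   # False: condition fails
```

Note the strict inequality: at exactly 1-to-3 the endpoints could sit on the ball's boundary and the argument breaks down, which matches the "3 plus epsilon" phrasing above.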
But even that is tricky to come up with an algorithm for. Maybe we'll work on some of these in the problem session. That's it.
MIT 6.849 Geometric Folding Algorithms, Fall 2012
Class 8: Fold and One Cut
PROFESSOR ERIK DEMAINE: All right, so this lecture we talked about fold and one cut, two methods and a little bit about polyhedron flattening, so most of the questions are about fold and one cut. I'll stick to that. First question is, is there actual software for doing fold-and-cut now? And the answer is yes, there's some software. Maybe not the coolest possible yet, but it's getting there. There's two pieces of software. One is from a final project after this lecture was given, in 2010. And another one is a SourceForge project called JOrigami, or J Origami. Both are written in Java, I believe. I have this one. This is the swan, which you've seen before. I have it here. We can try it out. It's not online yet, because it could use some improvements, but it's already pretty cool. So you can take something like the angelfish here, and if you like, it has an editor so you can move your polygon around. And you can say, OK, please give me the straight skeleton, first straight skeleton only, and it will update on the fly, and gives you some nice intuition about how that works. I know that looks weird that it goes up that way, but it is correct, because it's bisecting this edge and some edge, this one I guess. They meet out here, and then you go that way. So you can play with that, and then you can add in the perpendiculars too. Takes a little bit longer, so the refresh may not be as immediate. But it works. It's really complicated behavior in there, some spiraling in, spiraling out, but it's fairly well-behaved. Here's one of the simpler spiraling examples. This one is stable under perturbation more or less. You move the vertices all a little bit. They should continue spiraling. It looks like that one is a little bit degenerate. So we'll go around, and as you go to bigger and bigger pieces of paper, you'll get more and more creases. I should mention, you can add new vertices as well and draw a polygon. You can delete edges.
I'm holding all sorts of crazy modifier keys to make this happen, but it does work. The one other thing you can do is snap to a grid. This is probably hard to see, but there's a square grid underneath. If you hold down Alt, it snaps to a square grid. I have one other example I've prepared. It has save and load, which is pretty cool. And this example is one of the dense instances. Although this one has rational multiples, so it doesn't actually go forever. Because I drew it on the grid so that I could get all the horizontals and verticals. And you can see in particular it stops here, because at some point it gives up in reflecting perpendiculars, says that's enough. It won't draw anymore. So it's fairly robust in that sense. It's using some straight skeleton code they did not write, and it has some issues when you have really degenerate situations, which they tried to mitigate. But occasionally it's not perfect. But there are lots of possible follow-on projects to this work, improving the user interface. Actually putting it on the web for people to play with, I think, would be super cool. Alternatively, could port it to JavaScript-- right now it's in Java-- and make it even more accessible, run on iPhones and things like that. Still, one of the questions I've posed in lecture was, can you make a nice interface that would let you force degeneracies, make like in-- I think the swan has instances of this, where you'd like more than three skeleton edges to come together at a point. Here they almost do. It would be nice to be able to say, and for it to snap to a position where many things come together. Because that, in general, reduces the number of creases substantially. You could also try to compute the folded state. That would be another interesting project based on what we're going to talk about in a little bit, folding the underlying structure of corridors. And then you can compute a crease pattern.
And there you have various choices, and that lets you throw away some of these creases. Like this is much messier than the swan crease pattern that's on my web page, because you don't need all these folds. If you choose the right subset of folds you can save a lot of time. So, still lots of cool projects to do here. It'd be great to get this software online, but that's its current state. Next question is, what about odd degree vertices? This is actually a pretty natural question. Even degree vertices seem nice because you can kind of-- well, it relates to a page that I didn't really talk very much about in the lecture, but it was in the lecture notes, this idea of a side assignment. So in general, if you have something like a swan, there's the inside of the swan, the outside of the swan, and generally you have a bunch of regions. And for each region you'd like to know is it above the cut line or below the cut line? If you imagine the cut line as horizontal. And in general, the side assignment would specify, do I want my region above or below? And you could do whatever you want. Now with even degree vertices, this is great. You could just alternate around and say, above, below, above, below. With odd degree vertices, you can't, so you're going to have two regions that are adjacent to each other which are both above or both below. So what does that mean? It means it's a little bit hard to cut. And there's actually two models of cuts. There's scissor cuts-- which are in particular what the question poser had in mind, and probably what you might have had in mind in general-- where the cut you're going to make separates material from above and below the line. So this is the cut line here. And it'd be really nice if you had material on both sides, because that's usually how scissors work, they tear apart material. An alternative is that you have a mathematical cut. And a mathematical cut can cut right along a crease line. 
So it could be you have two polygons, there's a fold here between them, and you can cut right along that line. Maybe I shouldn't draw scissors. Imagine a laser beam which can zap right along the line, so there's no material on the left side of the line, just material on the right, but yet it separates the two things on the right. So we call that mathematical cut, because it is the natural definition mathematically. You're erasing a line. But practically it's a little hard to do. So scissor cuts are also nice. This is a more restrictive model. This is general model. So what I was talking about in lecture, implicitly use mathematical cuts. And that's when you could make anything with one cut. But, it would be nice to get scissor cuts when possible. And when possible is basically when you have even degree at every vertex. And one fun example of that is checkerboard. This is actually an old magic trick. You have a piece of usually tissue paper, so it's really thin, pre colored as checkered squares, both sides matching. And you can fold this, because every vertex here has degree 4, so even degree, you can assign the black squares to be on one side or the blue squares to be one side, the white squares to be on the other side. And so you could make one cut and simultaneously cut out all the white squares and all the black squares which is kind of cool. This is old, decades old. So in general, you can do something with scissor cuts. Or scissor cuts are going to be possible if and only if you can find a side assignment that sort of alternates between above and below the cut line. Meaning when you have two regions that are adjacent, you don't want them both to be above and below. You'd like one to be above, one to be below. And this is what's called a face 2-coloring in planar graphs. 
You want to color the faces, the regions of your graph, with two colors such that no two adjacent cells have the same color, no two adjacent faces have the same color, and that turns out to be equivalent to having all vertices of even degree. So this is why that question was asking about odd degree, because indeed with scissor cuts, you can't do odd degree. There is a fun fact, though, that relates these two things. So if you have even degree like polygons, which is the typical case, scissor cuts should be possible. Mathematical cuts can do everything. What if I have something that has odd degree vertices, but I still want to use scissor cuts? Well then, it turns out two cuts are enough, pretty much. That is, if you have what I'll call a 2-edge-connected planar graph, it equals the union of two even graphs. I should say even subgraphs. It doesn't much matter. So there's one exception, which is, if you have an edge in your desired set of cuts that partitions the graph into two parts. So if you could delete an edge and disconnect the graph into more parts, there's no hope of decomposing this into even graphs. I think. Should check that. But as long as you do not have such-- these are called bridges, typically. So bridges are forbidden. If you have no bridges, then you have what's called a 2-edge-connected planar graph. And then you can decompose your graph into two parts, each of which is even. So you could fold, make one straight cut, and get one even graph, fold, make one straight cut, get the other even graph. Good luck folding it the second time if it decomposed. But in theory at least, with two scissor cuts, you can make anything that doesn't have bridges. So that's just kind of an aside. Some graph theory that tells you a little bit more about what you can do with two scissor cuts-- pretty much everything. Any more questions about that? So that was odd degree vertices.
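The even-degree test just described is easy to automate: scissor cuts need a face 2-coloring (each region assigned above or below the cut line), and for a planar cut graph that exists exactly when every vertex has even degree. A small sketch, with my own function name and edge-list input format:

```python
from collections import defaultdict

def scissor_cuttable(edges):
    """A planar cut graph admits a face 2-coloring -- and hence a scissor
    cut -- iff every vertex has even degree. Edges are (u, v) pairs."""
    degree = defaultdict(int)
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    return all(d % 2 == 0 for d in degree.values())

# A square: every vertex has degree 2, so scissors are fine.
square = [(0, 1), (1, 2), (2, 3), (3, 0)]
# A square plus one diagonal: two vertices of degree 3, so it needs
# a mathematical cut (or two scissor cuts).
with_diag = square + [(0, 2)]
print(scissor_cuttable(square))     # True
print(scissor_cuttable(with_diag))  # False
```

The diagonal example is exactly the odd-degree situation from the question: the two triangular faces are adjacent, so some pair of neighboring regions must land on the same side of the cut line.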
Next questions are about folding and how exactly-- so I kind of briefly sketched the proof for linear corridors, how you would fold or how you would prove that this thing actually folds. This is the skeleton method first. We'll go to disk-packing afterwards. So how do you convert a set of linear corridors into a tree? Then once you have that correspondence, how does trees folding flat relate to corridors folding flat? And so I redrew this figure to make it a little bit clearer. So this is one of the images in the textbook where I just face colored, in this case with three colors, the corridors. Those are the regions of constant width bounded by perpendiculars. This is for making a turtle. That sort of doesn't really matter. And this is the corresponding tree. And this is the folded state of this blue guy here between B and C. And to really illustrate what's going on here, I folded this thing. It's tricky to fold, let's say. And I added a few paper clips to make it really stay. But if you hold it right, which is a little bit challenging, the projection of this structure is exactly this tree. Maybe let me show you some examples. Out here, for example, this flap is labeled A B, so it corresponds to this flap. And it corresponds to the material here between A and B. And this A part is at the very tip here, and attached to this is this guy. That would be this unlabeled pink edge, which corresponds to this one. It's unlabeled, because it goes off to infinity in principle. So this would just keep going. This one turns around right here. Then there's this edge, this bit of material. Sorry, it's probably easier for you to see from that side. And that corresponds to this BC part, which is exactly this folding. And if I laser cut or even scissor cut down these two lines, and just cut out that little folded part, it would look exactly like this.
So conversely, I was sort of begging the question by assuming I could fold this thing; if you don't know how to fold this thing, you can go in the reverse direction and figure out how to fold it by first modeling these corridors as a tree. How do you do that? It's just like in the tree method, going from the crease pattern from the tree method to the shadow tree. Except there, we were given the shadow tree as input. Here, we have to compute it. But it's like what you solved in the problem set. Is it one or two? For each of these corridors, it has fixed width, so you make an edge of that length. It's not quite drawn to scale here. This has been scaled up by roughly a factor of two. So this length becomes this length. And you label it the same thing. This connected component of perpendiculars, A, goes to that point. The connected component of perpendiculars here, this thing which branches, all of that is B. It's like a hinge in the tree method. So that's going to be a hinge between this flap and two other flaps, whatever touches the B perpendicular. So there's this pink one, and there's the cyan one. Cyan one is here. It's attached to whatever C is attached to. So here's C. It branches off here, bounces around. All that's C, and it's adjacent to the blue thing we just did. The yellow and the pink-- that's this yellow and this pink-- connecting to D and G, and so on through the thing. So it's actually really easy to map from here to the tree, just every corridor maps to an edge. Every connected component of the perpendicular graph maps to a point. And this is the projection. It will always look exactly like this. You'll be able to independently manipulate each of these corridors as a flap. And individually it's pretty easy to fold each corridor. It just looks like this accordion thing. So the faces will be stacked linearly in order from back to front, and be really clean. They'll all line up in this nice vertical strip.
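The corridor-to-tree mapping just described is mechanical: every corridor becomes a tree edge whose length is the corridor's width, and every connected component of perpendiculars becomes a tree node shared by the corridors that touch it. A minimal sketch; the labels and widths are made-up stand-ins loosely mirroring the turtle figure, and the input format is my own:

```python
# Hypothetical input: each corridor is recorded with the two perpendicular
# components (hinge labels) bounding it, plus its constant width.
corridors = [
    ("A", "B", 2.0),  # corridor between perpendicular components A and B
    ("B", "C", 1.5),
    ("B", "D", 1.0),
]

def shadow_tree(corridors):
    """Build the tree as an adjacency map: each perpendicular component is
    a node; each corridor is an edge whose length equals its width."""
    adjacency = {}
    for a, b, width in corridors:
        adjacency.setdefault(a, []).append((b, width))
        adjacency.setdefault(b, []).append((a, width))
    return adjacency

tree = shadow_tree(corridors)
print(tree["B"])  # B is a hinge shared by three corridor flaps
```

In the folded state, each edge of this tree is realized by one accordion-folded corridor, and each node is a hinge where the incident flaps attach.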
So it's easy to prove that this exists-- I'm just sketching the proof now. And then all you have to do is attach them together. So basically first you take this tree view, you fold it flat. Here, I can fold it flat just by collapsing the top and bottom. Then I replace each of these edges with one of these vertical accordion strips. And then I just need to check that at each vertex I can actually attach everything that needs to attach without getting crossings. So that's the one part of the proof I'm not going to show you, because it's kind of tedious. But basically, because the structure was planar, because it came from one sheet, you can show there's not going to be any crossings there. All of these layers, each of these edges expands to be many layers, and the layers will nicely connect together. So that's the sketch of that proof. Any questions? This is for just linear corridors, not circular ones. OK. Cool. Next question is about the bad example that makes a dense set of creases that completely fills the plane, and therefore is completely unfoldable. And the question was, is it really unlikely or is it actually very likely that this happens? And there are two things going on. Here's the example, again, from the textbook. So it's got, the dark blue is the desired cut graph, then the black lines are the straight skeleton, and then the dashed lines are the beginning of the perpendiculars. And the point was to make this corridor width versus this corridor width versus this corridor width versus this corridor width, make those all irrational multiples of each other. And then as this thing spirals around, it never finishes. It never hits itself. And so it just keeps going. I think this is where this one's currently going. And so the question is, well, irrational multiples are actually very common.
If you, for example, randomly perturb all these vertices and you measure the sizes of those corridors, with probability 1, they will be irrational multiples of each other, because irrationals are much more common than rational numbers. And that's true. But the other thing that this example requires is this outer boundary. We need that none of these perpendiculars can escape out to infinity. If they do, they won't stay in there. Eventually, if there's a tiny, tiny gap here, if these guys didn't quite match up, eventually because this thing is dense, it will find that little gap because it has positive length. When it finds the gap, it's going to spiral around and around and around and go out to infinity instead of being trapped inside. So the unlikely thing in this example is actually this outer pink polygon, that the perpendicular coming out this way bounced around, did lots of things, and eventually hit the same vertex. That's actually unlikely. Definitely in this example and in general we claim that we do not get cycles of perpendiculars except in one kind of scenario, which I guess I could draw in the software. Let's try it. So whenever you have a vertex of degree more than two, so let's do something like that. Little too far away. So let me just add a little guy like this. And this, a little closer. It's got a lot of spiraling. Let's make it a little bit cleaner here. It's actually kind of degenerate, but the point is, if-- here I have three cut edges. I've kind of made them roughly equal so this doesn't get huge, but in general, if you have any straight skeleton vertex and you reflect it around these bisectors, you will always come back to where you started. So this requires that each of these angles is strictly convex, so this is why you need at least three cut edges to come together here. But once you have at least three, you'll always cycle around. You'll always come back to where you started. It actually happens again here.
This guy cycles around and comes back. So that's-- well, sorry, that one's actually a little bit special because of what I did with the dragging. In general, it's going to be more like this. It's a little hard to see. What's going on here is that this innermost guy cycles around, indeed, but everyone else spirals around and keeps going. So that's actually the perpendicular. It comes from here and goes that way, and then it spirals around. It hasn't finished yet, but it would actually spiral all the way out to infinity. So the claim is, in general, the-- this is a conjecture. With probability 1, if you randomly perturb the vertices slightly, the only cycles of perpendiculars that you get are around a single vertex. So something like this example, which we saw where there is a straight skeleton edge here, here, and then this guy-- I guess I should draw the skeleton edges. It's a good test of how accurately I drew it, because I know if this is perpendicular, this is perpendicular, and this is perpendicular, this will always come back to its starting point. It's just a property of reflections. Basically, because we satisfy Kawasaki here, this must be true because these are bisectors of these guys. So this has to happen. The claim is, that's the only situation it happens. And if this is all that happens, you can prove there's no density, and in fact, it folds flat. So this is the good case. Unfortunately, when we make real examples, we like degeneracies because they reduce the number of folds. So the theory says, avoid degeneracies. Let's perturb things slightly, then it's guaranteed to fold. But in practice, you want to add degeneracies, just carefully so that you don't get dense behavior like the weird example I showed you. So that's a clarification of why we think this does not happen. Or, sorry, why we think with probability 1 things are good. But sadly we can't prove this. Maybe we'll work on it in the open problem session.
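The dense-spiral phenomenon has the same flavor as an irrational rotation: each trip around, the perpendicular's position shifts by a fixed amount, and if that shift is an irrational fraction of the total length, the positions never repeat and fill the interval densely. Here is a one-dimensional toy model only -- not the actual 2D reflection dynamics -- with made-up function names:

```python
import math

def orbit_positions(offset, steps):
    """Toy model of repeated wrapping: positions n*offset mod 1. For an
    irrational offset the orbit never repeats and becomes dense; for a
    rational offset it closes into a finite cycle."""
    return sorted((n * offset) % 1.0 for n in range(steps))

def largest_gap(points):
    """Largest empty interval between consecutive positions (wrapping
    around the unit circle). Small gap everywhere = nearly dense."""
    pts = points + [points[0] + 1.0]
    return max(b - a for a, b in zip(pts, pts[1:]))

# Irrational offset (widths in irrational ratio): gaps shrink toward zero,
# the analogue of creases filling the plane. Rational offset: the orbit
# closes up and the largest gap stays at 0.25 forever.
print(largest_gap(orbit_positions(math.sqrt(2) - 1, 1000)))
print(largest_gap(orbit_positions(0.25, 1000)))
```

This also illustrates the "tiny gap" remark above: a dense orbit eventually lands in any interval of positive length, which is why a perturbed boundary lets the spiral escape.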
I think it's tractable, I just haven't worked on it in a long time. Questions? OK. So, right. The claim is if you perturb this example, everybody will escape out to infinity like in the spiral. OK. So I think this is the end of the skeleton method, but before we go on, I want to show some examples. So problem set-- we have up to four [INAUDIBLE] later today. One of the questions is design your own folding cut, and draw it. So you probably want to draw it into a program like Inkscape or Adobe Illustrator. It has good snapping, so you can find intersections and things like that, and compute angular bisectors as in ruler and compass. And these are some examples from 2010, same question. There are lots of them, but I chose three that are particularly cool. This one has a line of symmetry, you get a fish bone by [? Ji, ?] who's a Ph.D. student in the media lab. This is by [? Sarah Eisenstadt ?], who's a Ph.D. student in CSAIL working on folding things. Witch's hat is pretty cool, pretty fairly simple. You have to fold your examples. It can't be too complicated. And then Jason [? Ku ?], who we saw the guest lecture by wasn't sufficiently impressed by my jack o' lantern, so he made a really complicated one and folded it. So those are some inspiration points. And I thought I'd show you a magic trick which is one of the sources for inspiration for the fold and one cut problem. So, a piece of paper. So this is a magic trick of unknown origin. It was described by Martin Gardner, I think, probably in the '60s, and then the book appeared in the '90s. And it's a story of two politicians, and let's just be generic. Let's say one politician was very much liked by the people, and the other politician was disliked by the people. Imagine a simple world where it was so simple. And by a freak accident, both politicians die at the same time. And they both happen to be Christians, so they go to the gates of Heaven. That's how the story goes. And arrive at the gates of Heaven. 
I guess Saint Peter's the keeper of the gates, and Saint Peter says, well not just anyone can get into heaven. You have to have a ticket. And the good politician being liked by the people has a ticket, which he folds flat. And the bad politician has no ticket. So the bad politician says, well you know, we've had our disagreements, but maybe you could put in a good word to Saint Peter or do something to help me out. I hear Heaven's a nice place. Maybe we could go there together, resolve our differences, whatever. So the good politician, having a ticket and a pair of scissors like any respecting politician, takes his ticket and makes one complete straight cut. And it's going to get a lot of pieces, so I've got to be, hold this carefully. So, put that down. And he hands all these pieces to the bad politician, says there you go. The bad politician has no idea what they're for, so he hands them to Saint Peter. Saint Peter starts unfolding the pieces, says, I wonder what shapes I get. I hear there's a cool problem about this. So it's a little hard to do without a table. So I'm going to have to use the board. First, we get the letter H. Then we get-- put these down here. Then we get the letter E. OK? Then we get letter L, and the letter L. And if it was on a table, I'd arrange it, and clearly Saint Peter's not happy, and the bad politician gets sent straight to hell. The good politician hung on to one piece cleverly, and his ticket is still more or less intact, and he gets into heaven. That's the magic trick. It's very simple folding. I've known this, memorized this even, for years. And you get exactly those pieces. So pretty cool. Of course, from a mathematical standpoint, it's a little unsatisfying. Because, come on, use three pieces for the H, three pieces for the E. Surely you could do better. And from a rectangle, you can kind of do better. 
It's a little awkward because the pieces are not all the same size, but it is one scissor cut, because all the vertices are even degree. You cut it, and you get all these pieces, which if I were more practiced at this, I could immediately pick out which one's the H. OK, I'll just do them out of order. Here is the cross. Well, kind of a cross. Not quite perfect proportions, but good enough. This is the letter E. This one's actually more impressive when you don't have a table, because you can't tell that they're all different sizes. The letter E. We've got the letter-- these look like L's. And they're small L's, but there you go. They're different orientations. And then I've got the letter H. There you go. OK. So that's our new and improved version using the universality of fold and cut. So it gives you some idea of where fold and cut came from in recent times, which is the magic community. I mentioned Harry Houdini did some tricks, and Gerald [? Low ?], but there's a bunch of these tricks around, and kind of cool. So the checkerboard trick and the previous hell trick. So those are some examples. Now we move on to the disk-packing method. So I have one question about this, which is, how exactly do we go from the disk-packing to-- yeah, cool proof, though, cool proof, bro. So how do we go from the disk-packing to the decomposition into triangles and quadrilaterals? So I thought I'd just review this slide. We start with our graph. We offset it by some tiny epsilon. That's to get things off the line, basically. Then we do this disk-packing. And remember roughly how the disk-packing works. We put some disks at each of the centers. Here, it's a little awkward because of the offset, but you put one at each of these four corners. Also the corners of the piece of paper. Why not? Then you also pack small enough disks along the edge, so that you cover the edge by diameters of the disks. I do that the same on all the sides.
Basically, you try to put a big one in, but if that big one intersects, you decompose it into half. That's how the algorithm works. So now you've covered the vertices and the edges, but you may have big gaps like this, too many sides on this gap. And so then you just greedily put the largest disk you can in those gaps until all you're left with in these yellow gaps are triangles and quadrilaterals, or three-sided gaps and four-sided gaps. And then all we do is draw this red graph by putting a vertex at the center of each disk. That's what the question was about. And then whenever two disks touch-- we call this kissing disks for historic reasons, because they're just barely touching, I guess, at their lips. Got a lot of lips for disks. At least we're in flatland. So then whenever they touch, we draw the edge between the two centers. So that decomposes this piece of paper into parts, and because the gaps are three-sided, those will correspond to triangles, or four-sided, those will correspond to quadrilaterals. So that's all. And then we put rabbit ears in here, and the universal quad molecules in there, and it folds flat. It will align all of the red edges plus these black edges on the outside, and it will align all these inner red edges with these black edges, and then we do the sink folds that we mentioned. And that will get one of these out of the way of the other, and we'll end up aligning just these cut edges. So that's a quick review of that method. So one question is, how do you allocate the disks? Is there sort of a best way? And I guess there could be several measures. Maybe you don't want really tiny disks, because that tends to lead to very tiny folds. But the standard measure here is how many disks do you need, because that will reduce the number of folds. So can you minimize the number of disks, and in general, it's known roughly how many disks you need. And the algorithm I just told you achieves that number of disks. So first I'll give you the number.
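The "put the biggest disk in, or split it in half" rule for covering an edge by disk diameters can be sketched in a few lines of Python. This is a simplified, hypothetical `cover_segment` helper, not the actual algorithm from the lecture: it ignores the disks already placed at the vertices and just greedily covers one segment, halving the radius whenever a full diameter no longer fits.

```python
def cover_segment(length, r_max, eps=1e-9):
    """Greedily cover a segment of the given length by disk diameters.

    Use diameters of the current radius while a whole one fits, then
    halve the radius for the leftover piece, mirroring the 'try to put
    a big one in, but decompose it into half' rule.  Returns the list
    of radii used, largest first.
    """
    radii, r, remaining = [], float(r_max), float(length)
    while remaining > eps:
        while 2 * r > remaining + eps:  # a full diameter no longer fits
            r /= 2
        n = int(remaining // (2 * r))   # how many diameters of this size fit
        radii += [r] * n
        remaining -= n * 2 * r
    return radii

# A length-7 edge with maximum radius 2: one disk of radius 2,
# then one of radius 1, then one of radius 0.5 (diameters 4 + 2 + 1 = 7).
print(cover_segment(7, 2))
```

Note how the long-skinny-paper example from the lecture shows up here: the shorter the leftover piece relative to the biggest disk, the more halving rounds you pay for, which is exactly why the disk count can't be bounded by any function of the number of edges alone.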
Number of disks is proportional to the integral-- I didn't say it was the easy bound, but it is the right answer. OK, there's a lot of notation in here, and this one you should definitely not know unless you've taken a computational geometry class. Even then, you probably wouldn't know it. It's not that common. It mostly comes up in meshing and disk-packing, so it's pretty specific. It's called local feature size. And once I define it, this bound is actually fairly intuitive. Local feature size at X. So we're imagining some kind of polygon here. And at every point of the polygon-- let's look at a vertex here-- but it works for any point along the boundary. Actually maybe not even on the boundary, but we're interested here-- this notation del P is the boundary of the polygon. So this is del P. That just means edges and vertices, all of those points. So we take some point X on the boundary, and we look at the smallest disk centered at that point that touches another feature. So that's the feature size. Features here are edges of the polygon. That's all we have. Now of course, no matter what size disk, you hit this feature and you hit this feature, so those don't count. Those are incident features. So in general, it is the smallest-radius disk that hits a non-incident feature, where the features are edges. So here, as soon as the disk gets this big, it hits this edge. And so the local feature size is this radius. In general, for every point, you can draw some size disk. So over here, it's going to be a little bit bigger. You can maybe get up to here. Actually, I guess it should be non-adjacent. That was going to be a little bit awkward. I also don't want to count this edge, because if I'm right here, the feature size is super tiny, and I don't want LFS ever to be 0. So you also have to skip the adjacent edges, which means the local feature size of this point is going to be more like-- hard to draw circles-- it's going to be more like that distance.
That's when you hit a non-adjacent feature, sorry. So you have to exclude this edge that you're on and the two adjacent ones, otherwise this definition doesn't quite work out. So then you measure this for all points. There's infinitely many, of course, a continuum along the boundary. That's the integral over X in the boundary of P. And you take 1 over the local feature size of X, and you integrate that for all X. It gives you a number, and it turns out, some constant times that number is the right answer. The algorithm I described, which is to put in the biggest disk you can, or divide in half if you can't, will get within a constant factor of the best possible number of disks over all possible disk-packings. So this is pretty much known. You get within a constant factor of the optimal disk-packing if you're counting disks. Not a great number. I can't say it's N disks or N squared disks, because you just can't do that. If you have a piece of paper and some really long-- here's your cut graph. Doesn't quite go to the ends, though-- then you've got to have a tiny disk here, and that's going to require you to have a bunch of disks. Even though this has constant size, you're going to need, I don't know how many disks here. Especially if your paper's really narrow, you're going to need a lot of disks. That's the best you can hope for with disk-packing. I actually brought a little example of something like that where your goal is just to cut out a single segment in the middle of the paper. This is to illustrate odd-degree vertices. These are degree 1 vertices, and you might think, wow, it's impossible for me to cut this thing without, I don't know, just cutting. I could cut with my X-ACTO knife from here to here. If I want to make a complete straight cut, you can fold it like this. This is what the skeleton method would give you. Fold it like this, and then you use your laser to cut right along the line. With scissor cuts, it's not so bad, right? I cut slightly in.
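To make the local feature size bound concrete, here is a small numerical sketch in Python. The function names (`lfs`, `disk_count_bound`) and the sampling-based integration are my own illustration, assuming a simple polygon given as a list of vertices in order; it estimates the integral of 1 over LFS along the boundary, the quantity that, up to a constant, gives the optimal number of disks.

```python
import math

def seg_dist(p, a, b):
    """Distance from point p to the segment ab."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    L2 = dx * dx + dy * dy
    t = 0.0 if L2 == 0 else max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / L2))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def lfs(p, edge_idx, poly):
    """Local feature size at point p on edge edge_idx: the smallest
    distance to a non-adjacent edge.  We skip the edge itself and its
    two neighbors, per the corrected definition in the lecture."""
    n = len(poly)
    skip = {edge_idx, (edge_idx - 1) % n, (edge_idx + 1) % n}
    return min(seg_dist(p, poly[i], poly[(i + 1) % n])
               for i in range(n) if i not in skip)

def disk_count_bound(poly, samples_per_edge=200):
    """Numerically estimate the integral of 1/LFS over the boundary."""
    n, total = len(poly), 0.0
    for i in range(n):
        (ax, ay), (bx, by) = poly[i], poly[(i + 1) % n]
        elen = math.hypot(bx - ax, by - ay)
        for k in range(samples_per_edge):
            t = (k + 0.5) / samples_per_edge
            p = (ax + t * (bx - ax), ay + t * (by - ay))
            total += (elen / samples_per_edge) / lfs(p, i, poly)
    return total

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
# Every boundary point of the unit square sees only the opposite side
# (at distance 1), so the integral is just the perimeter: about 4.
print(disk_count_bound(square))
```

Running this on a long, skinny rectangle instead of the square makes the lecture's point: the short sides see the far side at a large distance and contribute almost nothing, while the long sides contribute their full length, so narrower paper drives the disk count up.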
It's pretty much cutting along the line. Just not quite. I made a tiny mistake. So I cut off a line, a slightly thicker line, and I get my slit in the paper. So mathematical cuts are not really that bad. You're just cutting a slightly larger version of the single slit that you wanted to make. You just have to cut very carefully. OK. That was number of disks. Next question is, how does all this relate to the tree method? This is kind of a neat question. Both methods, the skeleton method and the disk-packing method, relate to the tree method in different ways. I thought I would talk about that. So first, let's compare the disk-packing method, which we just talked about, to the tree method. Both of them use universal molecules. This one only uses universal molecules for triangles and quads. We know in general, the universal molecule works for any convex polygon. This is an example of a pentagon. And it works. It has a gusset to get the tree lengths right. Here we use gussets to kind of get the tree lengths right. As you may recall, we want the perpendiculars to go to the disk kissing points, those tangencies. Because then they align with the others, and then we never get the perpendiculars spiraling or doing other bad things. So the number of creases becomes proportional to the number of disks. The number of disks is this messy thing. So that's that. Let's see, what else? Here, we only use disks. Over here, we could just use disks, but there's also the ability to use rivers, so this is more general in that we can do bigger polygons than just quads, and we can do rivers, not just circles. Here, the input, though, is a tree with specified lengths, and it's actually NP-hard to place the disks correctly. We proved that last time. Over here, the input is a polygon. It's already been embedded on the sheet of paper. There are no really hard decisions here except you have to fill the paper with disks. But that's not so hard. So you can do this in polynomial time.
Polynomial and even, almost, roughly this time, times a log factor. You can find a placement of disks. So that's not too hard, whereas over here, it's NP-hard to place the disks, optimally at least. Finding a placement you could do. So basically the inputs differ, and it's using the same backbone or the same, what do you call it, front end. The same back end, I suppose: after we've decomposed into a bunch of molecules, you just fold the molecules and you're basically done. Here we have to do some sinks at the end, but it's using it for different purposes. Otherwise they're very similar. And indeed, this is inspired by that. OK, so that was the disk-packing method. Then there's the straight skeleton method. These look much more different, but in fact, they are quite related. If you do fold-and-cut on a convex polygon, it gives you roughly the universal molecule, just with no gussets. So here, neither one is more general than the other. The universal molecule is more general in that it has gussets, which lets it control the tree topology you get and control the lengths you get in the tree. Over here, we saw the tree just happens. You can't control the tree. The shadow tree is just a tool to prove that it folds flat, but you can do non-convex polygons, which is much more general than convex polygons. So they each have their own unique claims to fame. A natural question is, can you combine them? And in fact, you can. And this was, you may remember a bunch of lectures ago-- I think class four or so-- I showed this slide of the meat of Origami Design Secrets, volume two, or the second edition, rather. There's tree theory, which we talked about. There was a box-pleating tree theory which we briefly showed. And there's this more general thing called polygon packing, and this is in some sense the fusion of the straight skeleton method of fold-and-cut, basically the straight skeleton, plus tree theory. And this is done jointly between Robert Lang and the Demaines.
And here's an example of what it looks like. It's a little hard to see here, but I will point out, if you look at the blue lines, which are the cut lines-- here, of course, they're the active paths. This guy is non-convex. And if you look at the red stuff in there, it is exactly the straight skeleton of that polygon. So those red lines are the straight skeleton of this blue thing. So there you go. I've got the fold-and-cut straight skeleton being used in that setting. You've got perpendiculars all over the place, which is an annoying feature of the skeleton method. It is inherited by this method. And if you look at something like this rectangle: it's convex, but you have gussets. This is a gusset, this is a gusset, gusset, gusset. It's a little hard to see, but we basically split the rectangle into this rectangle, this square, this rectangle, this square, this rectangle. So this is really the fusion of the two. It's not yet been mathematically formalized, because we're first proving that the tree method works. Then we will finally get to this. But if you want the slightly more informal but practical version, read Origami Design Secrets and it will explain all the different cases and what we believe is a correct algorithm. We just haven't proved all the cases yet. And you can use it to design cool things in the box-pleating setting and get more efficient than if you required rectangles or some convex shape. Cool. So that's that. What else do we have? I have one-- almost done-- one cool open question posed by you is, what if instead of cutting with a straight line, I cut with a constant-curvature line? So a piece of a circular arc. Then, of course, the things I get will have circular arc boundaries, but can I get any circular arc boundary with that fixed curvature? Or are there limitations? And at first I thought the answer is clearly yes, but I do wonder about situations like this. Can you align these two by folding?
I haven't thought about it enough to be totally sure. Whereas if I have things like this, I think this is good. I mean, I could fold here, and in general, I could just kind of pretend there's a segment. Then I solve fold-and-cut on the segment, and that will also align the circular arc. That won't work in general, but I feel comfortable that if they have the same orientation, it looks good. When they flip orientations, I'm not sure. So a neat problem to think about. And one final thing is about flattening. This is about this question, this picture. So we said, oh great, because we can solve fold-and-cut, we can also take the boundary of that polygon and flatten it like this. Of course, the folding of the interior here we're ignoring-- in general, this whole thing goes through three dimensions until we end up here. One dimension up, if we solve the 3D fold-and-cut problem where you have a polyhedron, you fold it through 4D back into 3D, which is flat, so that you align all the boundaries of the faces of that polyhedron. And then just take the boundary of that motion. It's like flattening a polyhedron, like a cereal box or whatever. And indeed, that will find a folded state of a polyhedron that is flat. But of course, you can't really use that motion, because it goes through 4D. This one will go through 3D, so if you want it to flatten this linkage, that's also not valid. So you get a folded state, but you don't get the folding motion, if you could solve 3D fold-and-cut. Our current state is, we don't know how to solve 3D fold-and-cut. We do know how to solve polyhedron flattening. So this relation is just kind of interesting; it doesn't imply anything super useful. An open problem is, we know how to solve polyhedron flattening as a folded state. What we don't know so much about is folding motions. There is a recent paper from just last year which will flatten any convex polyhedron by a continuous motion.
But non convex polyhedron we still don't know. These are roughly hand drawn figures. An interesting project would be to actually animate their motion. I think it would look pretty cool and be much more convincing that it works. I mean, I'm convinced that it works, but it's a lot harder to see visually from these pictures. It'd be really cool to see the motions, the actual continuous animation. And a couple last project ideas. One would be to make a fold-and-cut alphabet, like the maze software that you just used. It would be fun if you just type in a message, and out comes the crease patterns. This is sort of like a special case of a fold-and-cut design tool, but in letters it's a little bit easier. And a different challenge would be if I just want to be able to fold a single letter at a time, can you always do it with simple folds? So can you design an alphabet that is simply fold-and-cuttable. That would be even nicer. And there's some magicians who worked on that, but they couldn't get all the letters, so it would be nice to do with just a few folds, any letter of the alphabet, any digit. This is one we did for our conference in honor of Martin Gardner, gathering for Gardner five, back in 2002. And here's another possible idea for a project, although mostly it's an excuse to show you really cool pictures. This is Peter Callesen, and he takes usually four sheets of paper, cuts them probably an X-ACTO knife or something, and then from that material that he acquires, builds a 3D structure on that sheet. So you see all the material that was used, and these are just amazing. And I don't think you could necessarily do this level of fidelity from a fold-and-cut , but it would be cool to do a fold-and-cut and then build a sculpture out of the slits that you made. And I mean, it's really cool when he can use the cutout parts to be shadows of the things that he builds, especially when those shadows are more complicated than the sheet he actually assembles. 
That's a fairly cool effect of the monsters in the closet. Pandora's box, or box in a box in a box. You can't see that here, but you see it unfolding. The bones inside the hand, the old versus new cities. Using the positive and the negative. There's actually a little boat up there, which you can see. So pretty amazing stuff. And for free versus caged, and so on. Interesting project would be to try to make art out of fold-and-cut, just doing one cut instead of a zillion cuts. That's it. Any more questions? Yeah. AUDIENCE: In the circle-packing method, how do you assign the mountain-valley [INAUDIBLE]? PROFESSOR ERIK DEMAINE: OK, in the circle-packing method, how do you assign mountains and valleys? That was briefly covered in lecture. First you take a spanning tree of all of the molecules, and you basically fold each molecule into your parent in the tree. So each one has a direction, and from that you can extract the mountain-valley assignment. You go into the parent. It's a mountain there, everyone else's valley. Basically you're reversing one perpendicular fold per molecule. I think. Maybe two. With triangles, it's just one. For quads you might be reversing two. So you start with all of the straight skeleton being mountains, all the perpendiculars being valleys, and you have to flip a few to just get them to fit together. Little more complicated than what I said, but I would need a slide to show you the details. It's in the textbook. Anything else? All right, that's fold-and-cut.
MIT 6.849: Geometric Folding Algorithms (Fall 2012), Class 14: Hinged Dissections
PROFESSOR: All right. Lecture 14 was about two main topics, I guess. We had slender adorned chains, the sort of fatter linkages, and then hinged dissection. Most of our time was actually spent with the slender adornments and proving that that works. But most of our questions today are about hinged dissections, because that's kind of the most fun, and there's a lot more to say about them. So the first question is, is there any software for hinged dissections? And the short answer is no, surprisingly. So this would definitely be a cool project possibility. There are a bunch of examples-- let me switch to these on the web-- just sort of random examples, cool dissections people thought were so neat that they wanted to animate them. And so they basically constructed where the coordinates were over time in Mathematica, and then put it on the web as an illustration of that. So this is an equilateral triangle to a hexagon-- a regular hexagon. It's a hinged dissection by Greg Frederickson, and it's drawn by Rick Mabry. Here's another one for an equilateral triangle to a pentagon. Pretty cool. And they're hinged in a tree-like fashion even, which is kind of unusual. Greg Frederickson is one of the masters of hinged dissections, and dissections in general. He's probably the master of dissections in general. And he has three books on different kinds of dissections. This is actually a hinged dissection on the cover here. The purple and pink pieces hinge like this, from the outline of a big star to the interior of a smaller star. And then this star fits nicely inside. This is another hinged dissection. This book is entirely about hinged dissections, although not just the kinds we've seen. There's another kind called twist hinging, and I think this is a twist hinge. The piece flips around the other side.
And then there's the third book about a different kind of hinged dissection that's more of a surface hinged dissection where you've got two-- you've got the front and back of this surface and you fold them with like piano hinges with hinges in the plane. All are very cool books. You should check them out if you want to know more about dissections. They're more about here are cool examples, some design techniques for how to make them. I'll show you one such design technique later on today. but not a ton of theory here, in particular because there wasn't a ton of theory when these books were written. So that's some hinged dissections. And as I said, cool project would be to make a general tool for animating hinged dissections. There's only a handful out there. Greg has digital files of lots of his hinged dissections. He'd probably be willing to share them, though I haven't talked to him about it. If there was a good engine for animating them I think it would be cool. Even cooler would be to implement the slender adorn chain business. Take one of these hinged dissections, maybe they just hinge but sometimes there's collisions. But we already know if you refine these guys to be slender, which you can do-- if they are triangulated you can do it with only losing a factor of three in the number of pieces. It would be cool to implement that. And then you can do the slender adorned folding via CDR, which I have an implementation of, or it's not that hard to build one if you have a LP solver. So various project possibilities. Another cool project would be to just design more hinged dissections. There's still interesting questions. Either use fewer pieces or just make elegant designs. Related to the implementation idea, a particular family of hinged dissections that could be fun to implement are embodied by this alphabet. I showed this in lecture. 
You can take the digit six and convert it into a square, and convert it into an eight, and convert it into a four, and convert it into a nine via these 128 pieces. I didn't talk much about this theorem though, so I thought I'd give you a little sketch of how this works. It's actually very simple to construct the folded states of these hinged dissections, and it could be an interesting thing to implement. And it's also just kind of fun. This is way earlier, 1999, way before we knew that everything was possible. We could at least do all polyominoes of a given size. So let's just think about polyominoes, about polyhexes, polyiamonds, where you have equilateral triangles. And these are called polyabolos for silly reasons, basically, by analogy to a diabolo, which is a juggling device. You can hinge dissect any of them here. You take each square and you cut it into two half-squares, and then you hinge them together like this. This is 1, 2, 3, 4, 5, 6, 7, 8. So this will make any four-square object, any tetris piece. And generally, you take 2n pieces and you can make any n-omino. And the way you prove that that is universal, that it can fold into anything-- it's not so clear from this picture, but it's actually really easy to prove by induction. So the first thing to do in this inductive proof is to check that you can do it for n equals 1. OK. That may sound trivial, but this is actually core. The key property you need in a hinged dissection of a single square into your general family is that there's a hinge visible on every edge of the object. So here, this hinge kind of covers this edge, it covers that edge. So both of these edges have hinges on them, and the other two edges have a hinge on them. They happen to be shared hinges, but that's OK. And for each of these, that's true. The triangle, it's a little more awkward. You actually need two hinges to cover the three sides. But you only need these two pieces. One of them is non-convex.
It may be hard to fold continuously, but you'd refine it if you wanted to do slender adornments. So let's not worry about continuous motion yet. So that's the base case of the induction, how to do it for n equals 1. Now inductively, if I have some shape I want to build, I'll take what I call the dual graph of that shape. So make a vertex for every square, connect them together if they share an edge-- the squares share an edge-- and then look at a spanning tree of that shape. So just cut some of these edges until you have tree connectivity among those squares. Every tree has at least two leaves, except in the fall, but every mathematical tree has at least two leaves. Like this is a leaf; if I cut here, this would also be a leaf. A leaf is a degree-one vertex. So that's a square that only shares one side. So pluck off that leaf, remove that square. The resulting (n minus 1)-square shape can, by assumption, be made by this hinged dissection with two times (n minus 1) pieces. So now we just have to attach this guy on. And here's a figure for that down at the bottom. This is the same thing for triangles, and polyabolos are in the upper right. So you have some existing hinged dissection, you don't really know what it's like, and you want to add this leaf back on so it shares one edge with one guy. Now this guy could be oriented this way, or it could be oriented this way, but it's the same by reflection. So let's say it's oriented this way. We know the square is made up of two half-squares, by induction, and so we know that there's a hinge here. Now this hinge connects to some things, in this case, to some t prime. It could be here, it could be up here. And all we do is stick s on here. Now s can rotate. We have our solution for one guy, and there's two different orientations for him. We're going to choose this orientation because it puts this hinge right there.
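The inductive construction is easy to mechanize. Below is a hypothetical sketch (my own function names, not code from the 1999 paper) that builds the dual graph of a polyomino and a BFS spanning tree; processing the squares in reverse BFS order is exactly the "pluck off a leaf" induction, and the full dissection then uses 2n pieces for n squares.

```python
from collections import deque

def dual_graph(cells):
    """Dual graph of a polyomino: one vertex per unit square, an edge
    between two squares whenever they share a side."""
    cells = set(cells)
    return {c: [(c[0] + dx, c[1] + dy)
                for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                if (c[0] + dx, c[1] + dy) in cells]
            for c in cells}

def build_order(cells, root=None):
    """BFS spanning tree of the dual graph.  The BFS order attaches each
    square after its tree parent; reversing it plucks off leaves one at
    a time, as in the inductive proof."""
    g = dual_graph(cells)
    root = root if root is not None else next(iter(g))
    order, seen, q = [], {root}, deque([root])
    while q:
        c = q.popleft()
        order.append(c)
        for nb in g[c]:
            if nb not in seen:
                seen.add(nb)
                q.append(nb)
    return order

# An L-tromino: three squares, so the dissection would use 2 * 3 pieces.
order = build_order([(0, 0), (1, 0), (0, 1)], root=(1, 0))
print(order)  # BFS order starting from the chosen root square
```

An actual implementation would also carry the two half-square pieces per cell and rewire the hinge cycle at each attachment, but the tree bookkeeping above is the skeleton of the argument.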
And so once we do that, normally this would be a cycle, and this thing would be a cycle through here, but we just redo the hinges in here so that the cycle gets bigger. And the important thing to verify is that the orientations of the triangles are the same, just like the hinged dissection picture I showed. We always go from the base edge to the next base edge, to the next base edge of these right isosceles triangles, and all the triangles are on the outside of the cycle. So we actually construct a cyclic hinged dissection. Then at the end you could break it and make it a path. And this one is even slender. Remember, right triangles are slender, barely. You can look at all the inward normals. They hit the base edge. So this will even move continuously if it's an open chain. For closed chains we don't know. So that's polyominoes. Polyiamonds are similar. Pretty much the same thing. You just-- in this case, you might have hinges on both sides, but you rotate this thing so one of the hinges lines up, and you just reconnect the hinges. And it's not hard to show you can always do that; the hinges will never cross. And this proves that these folded states exist, and then we use the slender stuff to do continuous motions. Actually, when this paper was written we didn't have slender adornments-- back in '99, even in 2005 when the journal version appeared. So it's only now that we know that motions are possible, in this case directly, in this case with some refinement. So I thought that would just be fun to see. You can do some other crazy things. This is a hinged dissection from any four-iamond-- four equilateral triangles joined together-- to any four-omino, a tetris piece. It's essentially a superposition of this idea with-- you see in here, these four lines make the hinged dissection of Dudeney, from 1902, from an equilateral triangle to a square.
And with some extra stuff added in-- this is maybe a foreshadowing of the idea of refinement, although we didn't really realize it at the time. We want to add some hinges so that we have hinges on the midpoints of the edges instead of the corners. That turns out to be a bit more efficient in this case. So we add some hinging, still hingeable individually, but now we have hinges at the corners and so-- at the midpoints, and we'll have the same property over here. And it allows you to hinge these together. Actually, here it looks like some of them are at the corners not the midpoints. So it's a bit messy. In general, we can prove if you have any shape and you want to make poly that shape-- so let's call this shape x, you want to make polyexes-- you can do it as long as the copies of the x are only rotated and they're joined at corresponding edges. So if you check, this guy's just been rotated 180 degrees. In general you can join these things together at matching edges. And the basic technique is just subdivide the thing, triangulate, draw in the dual of the triangulation, and then connect to the midpoints of the edges. And you can show, basically, instead of the hinged dissection going around like this you can just make it go around like this and come back this way. And if you check the sequence of pieces they could visit, it's identical if you go around this way or if you go around this way. And that's enough to show that any folded state is valid With the triangles and the squares we're essentially exploiting the symmetry of these pieces. So you can rotate them to make them compatible. Here they're forced to be compatible by assuming we only join matching edges. So that was the 2D polyform paper. You can see Frederickson was one of the authors. In 3D, here's easy way to generalize that. If you take, for example, a tetrahedron, a regular tetrahedron, you take the centroid and cut everything to the centroid. 
And you end up cutting your tetrahedron, which has four sides, into four of these more slender tetrahedra. And then you take four of them and join them together in this way. You do have to be careful in the way that you join them because, again, on every face we want to have an incident hinge. So we've got to take care in the way that we hinge them together to make sure that that is the case. But it's also cyclically hinged. This gets joined to that. And basically, the same inductive proof works. You just pluck off a leaf, show that you can turn the thing so that one of the hinges aligns with the inductive construction, and then just join the hinges across instead of within the cycles. So pretty easy. What are we talking about? Hinged dissections software, I guess. Those would be fun things to implement. They've never been implemented, and especially, to see them folding. I thought I'd show you a little bit about hinged dissection hardware, different ways you could make them physically real. This is kind of mesoscale, I'd call it. This is a one-centimeter bar, so not super tiny, but I think this could scale down quite a bit. We have a Petri dish here with some liquid in it, if you could read up there. Maybe this is the coolest example. We have a square made up of four pieces, and you add a little bit of salt to that liquid and it pops into the equilateral triangle configuration. So it's sort of spontaneously folding, hinging. Essentially these pieces are slanted a little bit, and one weighting causes them to fold one way, but when you add the salt they end up flopping the other way. You could see they're a little bit inexact because of that, but pretty awesome the kinds of hinged dissections. You can get them all to actuate even without much room to do so. This is done at Harvard, George Whitesides's group, chemistry.
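Going back to the tetrahedron construction for a moment: the claim that cutting a regular tetrahedron to its centroid yields four congruent (and hence equal-volume) pieces is easy to check numerically. Here is a small sketch, using the standard "alternating cube corners" coordinates for the regular tetrahedron as an assumption:

```python
def volume(a, b, c, d):
    """Unsigned volume of tetrahedron abcd via the scalar triple product."""
    def sub(u, v):
        return tuple(ui - vi for ui, vi in zip(u, v))
    (x1, y1, z1), (x2, y2, z2), (x3, y3, z3) = sub(b, a), sub(c, a), sub(d, a)
    det = (x1 * (y2 * z3 - z2 * y3)
           - y1 * (x2 * z3 - z2 * x3)
           + z1 * (x2 * y3 - y2 * x3))
    return abs(det) / 6.0

# A regular tetrahedron (alternating corners of a cube) and its centroid.
V = [(1, 1, 1), (1, -1, -1), (-1, 1, -1), (-1, -1, 1)]
G = tuple(sum(c) / 4 for c in zip(*V))

# Cone each of the four faces to the centroid: the four slender pieces,
# each carrying exactly one quarter of the total volume.
faces = [[V[j] for j in range(4) if j != i] for i in range(4)]
parts = [volume(f[0], f[1], f[2], G) for f in faces]
total = volume(*V)
print(total, parts)
```

Each piece has the original face as its base and the centroid as its apex, which is why the four pieces are congruent and why a hinge can sit on every face of each piece.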
Kind of related-- it's not exactly hinged dissections, but I feel like it's the same spirit-- is this idea of DNA origami, it's called, where you take one big strand of DNA and you force it to fold into a particular shape. Here we're folding it into a happy face. The way that's done is you add in a bunch of little pieces of DNA. So this string, basically, has a-- this DNA strand has a random string written on it basically, and you identify, oh, I want these guys to glue together. So you take this piece of the random string, and this piece of the random string, and you construct a piece of DNA that has both of those, like a little zipper to cause those to zip up. You do that all over the place. And there are now automatic tools to do this; it's really easy to make DNA origami. It basically always works. There's a limit to how big this thing can be, because the main strand here is a single piece of DNA, and those are hard to make super big, at least currently. But you get some really nice happy faces and mass produce them. Hundred-nanometer scale. It's kind of like hinged dissection because that strand of DNA is moving; it's actually more like a fixed-angle chain, kind of like a hinged dissection. And we're essentially using here the universality of hinged dissections of something like polyominoes, though the shapes are a little bit more awkward. And they've made maps of the world. You can do two-color patterns, make snowflakes, the word DNA, and crazy stuff. So it was started by Paul Rothemund, though a lot of people do DNA origami these days. Cool. The next paper I wanted to show you-- this is fairly recent-- is about getting continuous motions, in particular in 3D, of hinged dissection-like things. So here we have a chain of balls. These are more like ball-and-socket joints. So you can maybe see them better here.
There's a member going in from the green guy into the center of the red guy, and there's a slot, and the red guy can fold around the-- or the blue guy can fold around the red guy. And the question is, OK, this is great. You can prove universality, you can make any shape. You just subdivide your dog, or whatever, into two by two by two squarelets and then we know how to connect those together to make a nice Hamiltonian cycle that visits everything. But can you actually fold a chain of balls like this into that dog? And the answer is always yes. Essentially, you feed a big string of these balls into-- that's actually what's happening in this animation here, although it's a little hard to tell-- you're feeding in, say, at one of the legs, one of the extreme points in some direction, this chain of balls. And as they go in they just start tracking along the path. And you just need to check that you can track along the path. As this guy goes into a corner, for example, you can actually navigate the corner while, at all times, staying within the tube. If you can stay within the tube you know you won't collide with the rest of the chain because this tube is non self-intersecting. And so the 2D version is fairly easy. This is just circles. A little trickier to check that it actually is possible, with just one turn, with a U-turn, and with a kind of-- I don't know what you call this, not a U-turn-- where you change in two directions-- two dimensions all at once. All of these are possible with this particular mechanism, whatever mechanism you have. If it can do this then you can make anything. So that's another way to prove motions exist for this kind of polyform special case. Why do we care about this? For building robots. So these are somewhat different mechanisms, but I have two examples built here at the MIT Center for Bits and Atoms over in the Media Lab with Neil Gershenfeld and many, many people. So you get some idea-- this is a fairly small guy. 
I mean, the actual size is about this big. You see some feet in the background to give you some sense of scale. It's not very many pieces, but if you made a really long chain it would really be able to fold into anything you want, just servos to make the turns here. This is a much larger one. The right version is folding. And you get some idea of scale here, this is, when it's fully extended, 144-- should that be feet or inches? It's really big. So a little bit slower, of course, because it has to move a lot more, and it's also quite a bit longer. This is built, in particular, by Skylar Tibbits here. So that's the idea of robots. In general, we like to make robots that can change their shape. We've seen sheet folding robots, but these are more chain folding robots inspired by proteins, and DNA, and things like that, sort of big versions of DNA origami. What's cool about them is that they stay connected throughout the motion. You can keep your wiring, and you can keep your batteries, and whatnot, and your communication channels connected in this kind of scenario. This is, by contrast, to more common approaches to reconfigurable robots. You have individual units and they can attach and detached from each other. You could see like these guys picking up blocks, moving stuff around. It's definitely cool, but in practice it's a lot harder to build these kinds of robots because the attach detach mechanism, it's hard to get them to align perfectly, it's hard to get the electrical connectivity. Every piece has to have a battery instead of like every 10th piece, or one battery to drive everything, or tethering, or whatever. You can do some very cool things and there's a lot of algorithms around for doing this. Daniella Rus, here at MIT, built this robot, and a bunch of others. There's also a very cool theory about these. I've worked on them. You can prove, for example, that all of these models can simulate each other up to constant factors in scale. 
So you can take your favorite robot in a molecube and simulate a crystalline robot, or vice versa. And then there's efficient algorithms to-- these crystalline robots, they can just expand and contract and detection and attach. And you can prove that given two configurations you can change it from one to the other up to some scale factor. You can even do it extremely fast in log n time if all the robots are actuating all at once. Anyway, there's cool stuff about reconfigurable robots, but the hinged dissections offers an alternative where everything stays connected at all times, but closely related. I think that was the hardware story. So we go back to our proof of hinged dissections and why it works. And one of the-- I was kind of surprised I didn't show this in lecture, but I don't remember why I didn't. One missing piece with-- how do you go from a rectangle of one size to a rectangle of another? You may recall, we had a triangle, we triangulated our polygons so we ended up with some arbitrary triangles. Then we cut parallel to the base halfway up. You can put this over here, put this over here, you get a rectangle of some unknown height. And then to make it universal we wanted to convert everything into a rectangle of height epsilon so that then we could just string them together-- obviously, the area has to be preserved here. If we string together all the epsilon height rectangles we've got one super long epsilon height rectangle. And then we overlay the two dissections. This is how we did dissections. But how do you do this step from one rectangle to another? This is a very old dissection, at least 1778. It wasn't published by Montucla but he's credited in this publication, and this is Frederickson's diagram of it. So you take the fatter rectangle and then you take the longer rectangle and you-- first, you make multiple copies of the fat rectangle, just sort of tile strip of the plane to the right. And then you angle the thin rectangle, slightly. 
First of all, you line up these corners so the top left corners line up, and then we want the top right corner of the thin rectangle to lie on this bottom line. Turns out this always works. It's not totally obvious but, essentially, these copies of the rectangle you can kind of fold them up. And when you go off the right edge here, you're essentially coming back on the left edge here. And then you're going this way, and you're going this way, and this little piece is exactly the same as this little piece. And from that you get a dissection. It's not hinged, but you can see that this big rectangle has the tiny piece here, which conveniently fits right over there. It's like a wrap around in the other direction. And then this piece-- well, everything matches up here. The only other weird thing is this bottom-- when you go below the bottom you also wrap around to the top. And just check all the pieces match up, and you've got your dissection. It's kind of crazy. You have to check this works for all parameters, but it does. And in general, of course, if you have a very long rectangle you need many pieces, relative to the fat one, but that's essentially optimal. OK. For fun-- this is a general technique called the piece lie technique, or superposing two tessellation of your shape. You can use that same technique, for example, to get the hinged dissection from a regular square to the equilateral triangle. You just angle it right so that, for example, this midpoint hits this midpoint, and various other alignments happen, like this midpoint falls on that edge. And if you look at it right these cuts give you the four pieces for the square to-- I guess you can see it right here, here are the four pieces of the square. And if you check, everything matches up. You can also make equilateral triangle. In this case, it happens to be hinged. That doesn't always happen. It's a little tricky to tell, maybe, but with practice you can see it. 
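To make the alignment concrete: if the fat rectangle is W by H and the thin one is l by h (equal areas, with l at least H), then tilting the thin strip until its far top corner drops onto the bottom line of the fat strip suggests the relation l·sin(θ) = H. That relation is our reading of the construction, not something stated in the lecture, so treat this Python sketch as a plausibility check only.

```python
import math

def strip_tilt(W, H, l, h, tol=1e-9):
    """Tilt angle for superposing the two rectangle strips, assuming the
    alignment condition l * sin(theta) = H (our reading of the figure:
    top-left corners coincide, and the thin rectangle's far top corner
    lands on the bottom line of the fat strip)."""
    if abs(W * H - l * h) > tol:
        raise ValueError("dissection requires equal areas")
    if l < H:
        raise ValueError("thin rectangle must be longer than the fat one is tall")
    return math.asin(H / l)

# A 6-by-2 fat rectangle and a 12-by-1 thin one (both area 12):
theta = strip_tilt(6, 2, 12, 1)   # asin(1/6), a shallow tilt
```

Note how a very long thin rectangle forces a very shallow tilt, which is exactly why the number of pieces grows with the aspect ratio.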
I mentioned, at some point, that you could take this and turn it into a table that either has four sides or has three sides. One of the annoying things about the table is that you need legs on each of the pieces. So Frederickson was playing around with this fairly recently, in 2008, and he came up with this alternative way of-- essentially the same technique, but you end up with one big piece and lots of smaller pieces. So the idea is you just have a big leg, or a bunch of legs, under one piece of the table. And so this is what the dissection looks like. Unfortunately, it's not hingeable. But if you add in a couple pieces you can make it hingeable. So at this point, the universality result was probably known. This is actually a lot easier than the way we do it, specialized to this kind of scenario. This hinges, I think, something like this-- maybe even an animation of it? Yeah. Drawn by Frederickson. So you could see a careful orchestration here just to make sure that, indeed, you can avoid collision. And so that's the proposed table. No one has built it. Another project would be to build some hinged dissections, for example this one, as real furniture. It would be pretty neat. I have a couple examples here of real furniture built. This is the Dudeney dissection, a four-piece kind of a cabinet. It's got lots of shelves. It looks really practical. And I don't know the bottom. It looks like there's a bunch of wheels down there. Definitely, you have to have a bunch of table legs in this case. But you can really reconfigure it in all sorts of ways. The close up. That looks pretty cool. It's made by D Haus Company. Any German speakers? Anyone know what "haus" means? Same in English, house. So they actually built a house. And I can't tell whether this is a real building or a very good computer rendering. It may be real. AUDIENCE: [INAUDIBLE] PROFESSOR: What's that? AUDIENCE: It looks like a rendering. PROFESSOR: It looks like a rendering. Yeah.
At some point later they have people walking by, but it could be a composition. Anyway, it's an idea of having a house for any season. You can reconfigure it dynamically with these tracks. It's a pretty cool idea. It would be neat to experiment with. Anyway, hinged dissections in practice. It's funny to take a 2D dissection, but, I think, in an architectural setting you can't change where the floor is. So probably, 2D dissection makes sense. There's the real, maybe real version? I don't know. So that was rectangle to rectangle. OK. I'm cheating a little bit. Another question. This is a very specific question, but for step three, which is where we did all the action of rehinging stuff, I said, number of pieces roughly doubles. I meant to say at least roughly doubles. So in the worst case, the point is that can be at least exponential. It definitely can be more because, in general-- remember, it looks something like this-- The point is, you need at least two triangles per edge here because they need to fit together to make these little kites. So you at least double, for every edge that you visit. In the worst case, you visit the whole-- all the edges of the polygon. So you end up doubling everything. But it can be worse because sometimes, if you don't have a lot of room in this corner, you've got to divide into lots of very tiny triangles. I think that probably only happens towards the beginning. After you've cut them small, you won't have to cut them even, even smaller. But I don't know for sure. The point is, it's at least exponential. And this is the more complicated diagram. But I claim that you could get a pseudopolynomial bound. How do you do that? This is a little tricky, but we still have time though. So let me go over the rough idea, also what the claim is. So pseudopolynomial bound. I'm not going to claim this for arbitrary polygons, although I think it's probably true.
What we argue in the paper is that if the vertices of the polygon lie on our grid, then we're OK. It's just a little hard to keep track of otherwise. I will scale things to make this the integer grid. And then the claim is the number of pieces is polynomial in the number of vertices, n and r-- r is usually some ratio of the longest distance to the smallest distance. In this case r is the grid size, like an r by r grid. That's like the size of the overall grid divided by the size of a grid cell. So, basically, the same thing. So, how do we prove this? The general idea-- so we have these messy constructions, and essentially, we're inducting. We're moving one hinge, and then moving the next hinge, and moving the next hinge. And essentially, all of those inductions are nested inside each other. You completely refine to do one thing then you have to refine to do the next one in the existing refinement. So we have a very deep recursion. It's one way to think of it. Order n depth recursion, so we end up with exponential in n. But instead, what we can do is only recurse to constant depth. And if you're just more careful in the overall construction this is possible. How? Let me give you some of the steps. You need more gadgets and you need to follow-- So before, I said, oh, there's some dissection out there, it's known. You triangulate, you convert triangle to square, triangle to rectangle, rectangle to rectangle, then superpose. It does the dissection, then we hinge it arbitrarily, then we fix the hinges one at a time. Here, I want to actually follow those steps and keep it hinged dissection as much as possible. So we're going to triangulate the polygons, but in this case, we're going to subdivide further and also triangulated with all the grid points as vertices. It's little hard to draw, but here's a grid. Let's draw a polygon. Hard to make a very exciting polygon, so few vertices, but maybe something like that. OK. 
If I triangulate this thing and all the interior points-- there aren't very many interior points in this example. Maybe I'll make a slightly different one. There's two interior points. I want to triangulate, with those as vertices of the triangle. So maybe I'll do something like this. A couple different shapes of triangles here, but they all have the same area. This is called Pick's theorem, special case of Pick's theorem. So here, they're all a half-square. Even though this one spans a weird shape, it's one-half square of area. So the nice thing is if I do this in polygon a and in polygon b the triangles-- there's equal number of triangles of the same size because they have matching areas originally. There's probably a way to do this for general polygons. I think this is the only step that requires grids except it's also a lot easier to analyze this bound with grids. So it's, I guess, an open problem to work out without grids. OK. The next thing is we'd really like a chain of triangles. Right now we just have a blob of triangles. And we can chainify the triangles. This is a step that was-- I don't know if I showed the figure last time. This is what we do to slenderfy everything. We have some general hinged dissection. I don't know what it looks like. We just take each of the triangles, subdivide at their incenter, cut, and then you hinge around the outside. And you'll get one-- in this case, one cycle of slender triangles. In this case, all we care about is that it's a chain. So we have some general thing here. We subdivide each of them like this, and then you hinge around. And so now I've got a hinged collection of triangles for a, and I've got this collection of triangles for b. I'm just going to do a to b here. I should probably say that. Two shapes. And conveniently, these triangles will still have matching areas. They're all now 1/6, if we do it right. So we get a chain of area 1/6 triangles. And I have the same number for a and for b. So this is kind of cool.
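The area claim here is exactly Pick's theorem: for a polygon with vertices on the integer grid, area = I + B/2 − 1, where I counts interior lattice points and B counts boundary lattice points. Here is a small self-contained Python check of that identity; the L-shaped polygon is our own example, not one from the lecture.

```python
from math import gcd

def shoelace_area(poly):
    # Standard shoelace formula; poly is a list of (x, y) integer vertices.
    s = 0
    for (x1, y1), (x2, y2) in zip(poly, poly[1:] + poly[:1]):
        s += x1 * y2 - x2 * y1
    return abs(s) / 2

def boundary_points(poly):
    # Lattice points on each edge: gcd(|dx|, |dy|), summed over edges.
    return sum(gcd(abs(x2 - x1), abs(y2 - y1))
               for (x1, y1), (x2, y2) in zip(poly, poly[1:] + poly[:1]))

def on_segment(p, a, b):
    (px, py), (ax, ay), (bx, by) = p, a, b
    if (bx - ax) * (py - ay) != (by - ay) * (px - ax):
        return False  # not collinear with the edge
    return min(ax, bx) <= px <= max(ax, bx) and min(ay, by) <= py <= max(ay, by)

def interior_points(poly):
    # Brute-force count of strictly interior lattice points via ray casting.
    edges = list(zip(poly, poly[1:] + poly[:1]))
    xs = [x for x, _ in poly]
    ys = [y for _, y in poly]
    count = 0
    for x in range(min(xs), max(xs) + 1):
        for y in range(min(ys), max(ys) + 1):
            if any(on_segment((x, y), a, b) for a, b in edges):
                continue
            inside = False
            for (x1, y1), (x2, y2) in edges:
                if (y1 > y) != (y2 > y):
                    xint = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
                    if x < xint:
                        inside = not inside
            count += inside
    return count

L = [(0, 0), (3, 0), (3, 3), (1, 3), (1, 1), (0, 1)]  # an L-shaped hexagon
assert shoelace_area(L) == interior_points(L) + boundary_points(L) / 2 - 1
```

The special case used in the lecture is that a grid triangle with no lattice points other than its three corners has I = 0 and B = 3, hence area exactly 1/2.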
Of course, the triangles could be different shapes, but I basically have a chain of various triangles. They're all the same area-- a little hard to draw-- for a. I have a similar chain for b. And I just need to convert, basically, triangle per triangle from a to b. So now my problem is a lot easier. I have these hinges which I need to preserve. That's a little trickier. This is actually an idea suggested by Eppstein before the universality result. It's like, all we need to do is do triangle to triangle while preserving two hinges. Then we could do anything to anything. So we're following that plan. And now we're going to use all the fancy gadgets we have to do triangle to triangle while preserving these hinges and not blowing up the number of pieces too much. But definitely simpler. We're down to triangle to triangle. Next step. OK. Next problem. Yeah. This is slightly annoying. I said, oh great, these triangles are matching up. But I'm not going to be able to do triangle to triangle and get exactly the hinges I want where I want them, so I'm going to have to end up, for example, moving this hinge to another corner. So we're going to use a new gadget, actually, for fixing which vertices connect to which triangles. This is, maybe, not obvious yet that we need this, but we will. And we're going to use a slightly, a somewhat more efficient version of, essentially, the same idea. So we've got a hinge here, in the middle. And basically, we can't control where the hinge goes, but it's supposed to go to one of the corners. So we're going to reconfigure in this way. So we assume we have some way of doing it. And here's the thing, we assume that maybe this has already happened to a. We don't want to recurse into a because then we get exponential blow up. I'm going to have to do this for every single triangle here. There's n of them. That's a lot. I don't want to get deep recursion, I don't want to get depth n recursion.
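The "subdivide at the incenter" step is easy to compute: the incenter is the side-length-weighted average of the corners, and the three subtriangles it creates always partition the area. A quick Python sketch; the specific triangle is just an example of ours.

```python
import math

def incenter(A, B, C):
    # Incenter = (a*A + b*B + c*C) / (a + b + c), where a, b, c are the side
    # lengths opposite corners A, B, C.
    a, b, c = math.dist(B, C), math.dist(A, C), math.dist(A, B)
    s = a + b + c
    return ((a * A[0] + b * B[0] + c * C[0]) / s,
            (a * A[1] + b * B[1] + c * C[1]) / s)

def area(P, Q, R):
    # Half the absolute cross product of two edge vectors.
    return abs((Q[0] - P[0]) * (R[1] - P[1]) - (R[0] - P[0]) * (Q[1] - P[1])) / 2

# One of the half-square grid triangles (area 1/2):
A, B, C = (0.0, 0.0), (1.0, 0.0), (0.0, 1.0)
I = incenter(A, B, C)
subs = [area(A, B, I), area(B, C, I), area(C, A, I)]  # the three slender pieces
```

One caveat: subdividing at the incenter gives three equal-area pieces only for special triangles; the 1/6 figure in the lecture would follow from an equal-area subdivision (for instance, at the centroid). What always holds, and what the chain construction needs, is that the three pieces partition the original area.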
But if I cut up in this way, in fact, I only need to cut up b. And if b hasn't been touched yet this is OK. And then I'll do it the next way, and the next triangle, next triangle, and they won't interact. That's the good news. So how do we do it? Well, we cut up a little, oh, what do we call it, kite fan, I believe, here. Here there's two kites, and we get these triangles to match these two, these triangles to match these two. We cut up this little piece along the side. And either the green stays in here-- green is attached to the pink or magenta. So if we keep the green in here, the triangle stays there. If we pull everything out-- and there's a little hole made here to make that more plausible. But in reality, we have to subdivide to get slender. So if we instead reconfigure the green to lie along the edge, and the blue can turn around here and fit inside because it has exactly the same shape, these two chains are identical, it can also fit in here. And then we've moved the magenta over to that side. So that's cool. That works, and it doesn't touch a. So it's a slight variation of what we had before. And it's good. So, that's pseudopolynomial, and they don't interact. And so we can move these things however we need to according to what step four produces for us. So this is maybe slightly out of order. I could have called that step four, and this step three. Get to the more exciting part. Finally, we do triangle to triangle. This is a little crazy. I'm going to give you three constructions that give us what we want. And then I'm going to claim I can overlay them. This is what we can't do with hinged dissections, but I'm going to do it anyway. So bear with me. The final gadget will say how to overlay them. But let's start with the relatively simple goal of triangle to rectangle. This I already showed you. And the nice thing about triangle to rectangle, this three-piece dissection, is you can hinge it here and here and it works just fine.
So that's already a hinged dissection. That's the easy step. Then we want to take that rectangle and convert it into a tiny-- or not tiny, same area, but an epsilon-height rectangle. Because remember, we have two triangles, they're different shapes so they have different heights. This one will end up being half the height, but it won't match what we'd get for this triangle. So I'm going to do steps a and b for each of the triangles. And then I have two epsilon-height rectangles. And then the challenge is to convert one into the other. This is a challenge because they have hinges on them. So with dissections you just overlay these two cut ups. But hinged dissections, there's hinges you have to preserve, we can't do that. OK. First part is step b, which I showed you already, going from one rectangle to another. Here's another diagram of it. It turns out it's almost hinged. You can, essentially, just flop back and forth and back and forth, except at the end you might be in trouble. So there's one step here, and depending on parity exactly this piece of the rectangle is hinged here. But I really want to be hinged here, so I'm just going to move it over here. I have tools for moving hinges around. So it turns out, you have to check that this is safe. But you just do one hinge moving, and then you're OK. So in this case-- this should actually go a little bit deeper-- the bottom figure shows when you go too deep you can cut, cut-- and this is just like the previous diagram of triangle to rectangle. You do that at the bottom you'll be fine. There's a couple different cases in exactly the parity and how you end up. Three cases I guess. But in all cases the rest can be hinged. You just need this one step in the middle to fix it. So most of it is just swinging back and forth. So it's almost hinged, which is good news because we have tools to make almost hinged things actually hinged. So that's cool. So basically, we've covered a and b at this point. 
But the last part is c, or how do we superpose all these things? And this is using another gadget called pseudocuts. And essentially, you have some nice hinged dissection already, and you want to add a cut and a hinge. So just imagine cutting all the way through here and adding a hinge, I guess, on the yellow side here. And somehow, I want this thing to fold in all the ways it used to be able to fold. So it could fold into a. But then I also want it to be able to fold at this hinge, and eventually fold into b. And it's complicated, but again, the same idea. So we've got these yellow guys, which normally live in here, and so yellows is yellow. These are triangles. These are triangles minus triangles, so they're like little quads. They have holes just the right size for the yellow. These guys have holes just the right side-- I'm sorry, how does it go? OK. I see. It's purple, then blue, then yellow, I believe. So the yellow fits into the blue-- anyway. Whatever works. These guys nest together. And when they nest together they fill these little holes. And then there's matching patterns out here. So they all fit. How does it go? Actually, sorry, I think they're all triangles. This just looks multicolored. So it looks like purple here is going into the cyan one at the next level. The yellow guys are going into the purple. I see. So there's a triangle and a quad here. Lovely. And then these guys stretch across. Definitely a little more complicated. And you lose a factor of two, or whatever, but if you apply these pseudocuts in the right order-- and these are fairly simple cuttings that we have to do. We know that these cuts are mostly a striping. So if you just apply them in order you don't get blow up. I'll just wave my hands at that. It's a little hard to draw the picture, obviously, but that's how it goes. And that's pseudopolynomial hinged dissection. This is why-- it was intentional I didn't cover it in lecture because it's pretty complicated. There wasn't time. 
Any questions? Last topic is higher dimensions. Can we get a brief overview of 3D dissections? So this is more a dissection question than a hinging question, although, of course you could ask, does all this work for hinged dissections? Pseudopolynomial, we don't necessarily know. For straight up proving that hinged dissections exist, the claim is-- it hasn't been written up formally yet-- the same techniques work. You can take any dissection and convert it into a hinged dissection. But in 3D, it turns out, dissections, by themselves, are not so simple; there are a lot of open problems. Some nice things are known. So let me tell you about 3D dissection. If I want to convert one polyhedron p into another polyhedron q, obviously, the volumes must be the same assuming we're doing a reasonable cutting and not some crazy axiom of choice thing. So volumes have to match, just like for polygons the areas have to match. But that turns out to be not enough. And this goes back to a Hilbert problem. So you may have heard of David Hilbert. He wrote this paper of like 23 open problems at the turn of the previous century, 1900. This is problem three. It wasn't directly about hinged dissections, or about dissections rather. A little bit convoluted-- it's about some certain axioms and proving certain things. But in particular, he was asking, are there two tetrahedra of equal base and altitude, so equal volume, which can in no way be split up into congruent tetrahedra? So there's no way to dissect one into the other. If that's true it would show that certain axioms are necessary in certain proofs. And it turns out it is true. There are tetrahedra of equal volume where you cannot do this. And that-- I don't have a slide for it-- but this was proved by a guy named Dehn. And he came up with something called the-- well, that we now call the Dehn invariant. He didn't call it that himself. And these things must also match.
It's called invariant meaning that no matter how you cut the things up and reassemble the Dehn invariant doesn't change. And so if you have any hope of going from p to q, those two things must match. And then, Sydler-- so this was 1901, Dehn proved that this was a necessary condition. So like a year after that appeared. In 1965, a little bit later, Sydler proved that this is all that's necessary. So these are sufficient conditions. If p and q have the same volume and the same Dehn invariant, then there is actually a dissection. And he proved it somewhat algebraically, somewhat constructively, I'm not sure exactly. There's a simpler proof by Jessen in 1968. And he proved that, actually, in 4D the same is true. In 4D you need the volumes to match and the Dehn invariants to match, and that's enough. In 5D and higher no one knows what it takes for a dissection. Pretty weird. It could be interesting to study these more carefully. Let me tell you briefly about Dehn invariants. A little awkward unless you're familiar with tensor product space. How many people know about tensor product space? A few. OK. If you've done quantum stuff, I guess, it's more common. I'm not familiar with tensor product space, but here we go. Tensor product space. But I can read Wikipedia with the best of them. It's a fairly simple notion, it just has somewhat weird notation. You can do things like take something x and write tensor product with y. And what this means is, basically, don't mess with this product. OK. It's a product. Really, this is two things, x and y. They're not interchangeable, they're in completely different worlds, different units, whatever. You can't like multiply them. They just hang out side by side. You also can't flip them around. It's not commutative. OK. Fine. But some things hold. Like if you take, I don't know, z and add it to this product, you do have distributivity, so you can get x-- is this-- no, this doesn't look very correct.
If I have this then you can multiply that out. So you get x tensored with y plus x tensored with z. So that holds. It also holds on the left. And the other thing is that constants come out. So if we have c times x, tensored with y, this is the same thing as c times the quantity x tensored with y. So in the end I'm going to have a bunch of these pairs, these tensor pairs. And I'm also able to add them together. And nothing happens when you add them together, they just hang out. So in general-- you could also have a constant factor-- so you have a linear combination of pairs, basically. Why am I doing this? Because here's the Dehn invariant. Dehn invariant says, look, with polyhedra you've got two things-- it's going to be the x and the y over there-- you've got edge links and you've got dihedral angles. So look at every edge. Here's an edge of my polyhedron here. It has some length, which I'll call l of e. And there's some angle here, which I'll call theta of e. Add those up over every edge. So the Dehn invariant is going to be the sum over all edges of the length tensored with the angle. AUDIENCE: Isn't an angle a function of two [? of these? ?] PROFESSOR: Angle is the angle between these two planes. So that's a dihedral angle. Yep. So for every edge there's one dihedral angle. Just sort of the interior angle there at the edge. So this is kind of what's going on. And these things have to match. Now it's a little more complicated. Sorry, it's not really just the angle. Essentially, if you add rational multiples of pi nothing happens. So you actually take this weird group, all rationals times pi-- All this means is if you have two angles whose difference is a rational multiple of pi, then those two angles are considered the same. So what this is really saying is I only care about the angle modulo rational multiples of pi, roughly. You add pi over 2, that doesn't change anything. Why this thing?
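The two rules just stated-- scalars pull out, and distributivity merges terms sharing a factor-- are all you may use to simplify a formal sum of pairs. Here is a toy Python model where the left factor is a number and the right factor is an opaque symbol, which is exactly the shape the Dehn invariant needs (lengths tensored with angles):

```python
from collections import defaultdict

def normalize(terms):
    """terms: list of (c, x, y) meaning c * (x tensor y), with x a number and
    y an opaque symbol. Scalars pull out, c*(x (x) y) = (c*x) (x) y, and
    distributivity merges terms with the same y: x (x) y + x' (x) y = (x+x') (x) y."""
    acc = defaultdict(float)
    for c, x, y in terms:
        acc[y] += c * x
    return {y: x for y, x in acc.items() if abs(x) > 1e-12}

# Terms over the same right factor merge, but nothing lets us merge across
# different right factors -- they just hang out side by side:
merged = normalize([(1, 2.0, "theta"), (1, 3.0, "theta")])  # {'theta': 5.0}
kept = normalize([(1, 2.0, "theta"), (1, 2.0, "phi")])      # both terms survive
```

Splitting an edge of length 5 into lengths 2 and 3 at the same dihedral angle is exactly the `merged` example: the invariant cannot tell the difference.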
Well if I take an edge, and for example, I cut it in half anywhere, I could cut it at an irrational fraction, or whatever, I will get two lengths but they'll be tensored with the same angle. I didn't change the angle. And so by distributivity-- once you get things inside the same place. So in this case, we'll get two lengths that add up. They match. OK. So as long as you have matching angles you can add the lengths together. That's what distributivity tells you. Similarly, if I tried to cut this angle in some piece, it could be an irrational ratio between the two pieces, they will have the same edge length. And when I have matching edge lengths I can use distributivity and add the angles back together. So basically, when you dissect, this thing will not change. It's a little more awkward when I cut here because this was originally an angle of pi, and then I cut it into some pieces. And this is where you need the rational multiples of pi not mattering. But eventually you can prove Dehn invariant is invariant. The harder proof, you can prove that it's also sufficient if you have the matching volumes. It was recently proved, like a few years ago, 2008, that whether the Dehn invariant of one polyhedron and another match is decidable. So there is an algorithm to tell whether two polyhedra have this same invariant. Decidable is a pretty weak statement. Natural open problem is, is there a good algorithm to do it? We don't know. If it does match, is there a good algorithm to find the dissection? We don't know. These may be easy if you really understand the proofs deeply. But at the time no one cared about algorithms. At this point, we need to go back and really understand how to actually do 3D dissections so that we could then do 3D hinged dissections. That's it. Don't forget, origami convention is on Saturday. Should be fun.
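As a numerical illustration of why the cube and the regular tetrahedron have different Dehn invariants: the cube's dihedral angles are all pi/2, a rational multiple of pi, so every term vanishes, while the regular tetrahedron's dihedral angle arccos(1/3) is known to be an irrational multiple of pi, so its terms survive. The rational-multiple test below only checks small denominators numerically-- deciding rational multiples of pi exactly is the hard part mentioned above-- so this is strictly a sketch:

```python
from math import acos, pi
from fractions import Fraction

def survives(theta, max_den=100, tol=1e-9):
    """Crude test: does theta look like a rational multiple of pi with a
    small denominator? Such angles contribute nothing to the Dehn invariant."""
    frac = Fraction(theta / pi).limit_denominator(max_den)
    return abs(float(frac) * pi - theta) > tol

def dehn_invariant(edges):
    """edges: list of (length, dihedral_angle) pairs, one per edge.
    Returns the surviving formal terms as {angle: total_length}."""
    terms = {}
    for length, theta in edges:
        if survives(theta):
            terms[theta] = terms.get(theta, 0.0) + length
    return terms

cube = dehn_invariant([(1.0, pi / 2)] * 12)     # 12 edges at pi/2 -> all vanish
tet = dehn_invariant([(1.0, acos(1 / 3))] * 6)  # 6 edges at an irrational angle
```

Since the cube's invariant is zero and the regular tetrahedron's is not, equal volumes alone cannot give a dissection between them-- Dehn's answer to Hilbert's third problem.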
MIT_6849_Geometric_Folding_Algorithms_Fall_2012
Lecture_16_Vertex_Orthogonal_Unfolding.txt
PROFESSOR: All right. Let's get started. So we are continuing the theme of unfolding polyhedra, and the general picture we are thinking about in terms of edge unfolding versus general unfolding which are these two pictures. Top is an edge unfolding of the cube. Bottom is a general unfolding of the cube. If you have a complex polyhedron, we found general unfoldings. There are like four of them. They work. We proved one of them. If you want an edge unfolding, that's the centuries-old open problem. For non-convex polyhedra, we know this is too much to hope for. Even if it's topologically convex, there's not always an edge unfolding. That was the tetrahedral Witch's Hat. But for general unfolding, we don't know. So today's lecture is actually mostly about these two open problems and different variations of it that we know how to solve, special cases, and changes in the model. At the end, there'll also be some stuff about the reverse direction folding, but mostly, it will be about unfolding. So there's a third kind of unfolding which we call vertex unfolding, and it's kind of like a hinged dissection. So instead of normally in unfolding, something like a cross, it's a nice, connected polygon. The faces are joined along edges. But what if I allowed disconnecting along edges, but I just wanted things to stay connected along vertices, like at a hinge. So it still should be one piece in the sense that this is still connected, and in this case, I still want an edge unfolding. You're only allowed to cut on edges. And in fact, we will cut on all the edges because that'll be the most flexible. Every edge gets cut, but you're going to leave intact certain vertices to make one, quote, "piece" at vertices. So we still want one piece. So this is vertex unfolding, and it's always possible. We've even implemented this algorithm here. A bunch of random points on a sphere. Take the convex hull, and then, take a vertex unfolding. 
So you get this nice chain of triangles, don't intersect each other, and this will fold up into that 3D polyhedron on the left. Here's some bigger examples, hundreds of vertices. Amazing. All right. So how do we do this? All right. Let me state a theorem. Every connected triangulated manifold-- this is a very general result. It actually holds in any dimension. We're going to think about two-dimensional surfaces in 3D, but it could be D dimensional surfaces in D plus 1 dimensions. --has a vertex unfolding. So this works both for convex and for non-convex. The only catch is that every face has to be a triangle. It's an open problem for something like a cube where you actually have quadrilateral faces. So the way we prove this is to construct what we call a facet-path. This is a path that alternates between visiting faces, triangles, and vertices. So the idea is you have some-- say we're doing a tetrahedron or something. You start at a facet. Then, you go to one of its vertices. Then, you go to one of the facets that shares that vertex. Then, you go to one of the vertices. Then, you go to a facet. Then, you go to a vertex. Then, you go to a facet. This is a facet-path. It should visit every facet, every triangle, exactly once. Vertices you can visit multiple times, although I think it's a bad idea to visit the same vertex twice in a row. Other than that, you can visit a vertex more than once. Maybe I would visit it again coming up here. If there was another triangle like this, I could go up to this vertex again and over to the triangle. So the claim is facet-paths always exist for any triangulated, connected surface. Once you have this facet-path, you're basically golden because you can lay it out without overlap. And that's the next picture. So if you have a triangle and you have some corners of it that are hinged to adjacent triangles, you can rotate that triangle 'til it fits in a vertical slab. And the hinges are on the ends of the slab.
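As an aside, the facet-path condition just defined is easy to state as a check. A minimal sketch in Python (the helper name and the tetrahedron example are mine, not from the lecture):

```python
def is_facet_path(path, faces):
    """Check a facet-path: even positions are facets, odd positions are the
    linking vertices, every facet appears exactly once, and each linking
    vertex lies on both of its neighboring facets."""
    facets = path[0::2]      # facets at positions 0, 2, 4, ...
    vertices = path[1::2]    # linking vertices in between
    if sorted(facets) != sorted(faces):   # every facet exactly once
        return False
    for i, v in enumerate(vertices):
        if v not in facets[i] or v not in facets[i + 1]:
            return False
    return True

tetra_faces = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]
good = [(0, 1, 2), 0, (0, 1, 3), 1, (1, 2, 3), 2, (0, 2, 3)]
print(is_facet_path(good, tetra_faces))  # True
```

A path fails either by skipping or repeating a facet, or by linking two facets through a vertex that one of them doesn't contain.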
And so each triangle lives in its own slab. Slabs don't intersect. No intersection. The hard part is really getting this path. Once you have that, you can just lay it out. It's a little bit nontrivial with obtuse triangles. They don't necessarily just lie along the horizontal line. You have to set things up so that those guys are on the boundary, but you can always rotate it so that that's possible. So the real question is, how do we construct a facet-path? All right. So I have a bunch of triangles. In general, they make some surface. And maybe I should draw a running example over here. Let's think about a triangulated cube, something simple. Imagine there's more triangles in the back. It's a little hard to think about the 3D picture. I would really like to two dimensionalify it, and because this theorem works, not only for a polyhedron-- it works for any manifold, any sort of locally, two dimensional thing-- it would be nice if I could just sort of cut this open and think about a disc instead of a sphere, topologically. And I can. I mean this theorem's supposed to work for discs just as well. It should work for anything that's connected. So ideally, I cut it all apart into lots of little triangles, but it has to stay connected. So I'll unfold it. Why not? So I can add some cuts until I get down to a spanning tree of the faces, a spanning tree of the dual graph. OK. For whatever reason, this is the triangulation I chose, and this is the unfolding I chose. But just keep cutting edges until cutting an edge would cause it to disconnect. So the maximal set of cuts, there are many ways to do it. When I unfold, it might overlap because we don't know whether edge unfoldings exist, but that's fine. You can think of it as a two dimensional picture, but it might need a few layers. We're not going to actually fold this thing, but we're going to cut this up into a facet-path. This could actually be our input. 
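The cutting step just described, cut edges until cutting one more would disconnect the surface, is the same as keeping a spanning tree of the dual graph. A rough sketch, assuming faces are given as vertex triples (the function name and representation are mine):

```python
from collections import deque

def dual_spanning_tree(faces):
    """Build the dual graph of a triangulation (two faces are adjacent when
    they share an edge) and return a BFS spanning tree of it.  Surface edges
    dual to the non-tree edges are the ones that get cut."""
    edge_to_faces = {}
    for i, (a, b, c) in enumerate(faces):
        for e in ((a, b), (b, c), (c, a)):
            edge_to_faces.setdefault(frozenset(e), []).append(i)
    adj = {i: [] for i in range(len(faces))}
    for fs in edge_to_faces.values():
        if len(fs) == 2:               # interior edge: joins two faces
            adj[fs[0]].append(fs[1])
            adj[fs[1]].append(fs[0])
    tree, seen, q = [], {0}, deque([0])  # BFS from face 0
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                tree.append((u, v))
                q.append(v)
    return tree

tetra = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]
print(dual_spanning_tree(tetra))   # three tree edges spanning the four faces
```

Everything outside the tree is cut, so the faces stay connected but flatten into a (possibly overlapping) disc, which is all the facet-path construction needs.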
Instead of being given a polyhedron, we're given a disc like this, a triangulated disc, and we have to deal with it. Somehow we've got to construct a facet-path here, visiting every triangle exactly once and passing through vertices. AUDIENCE: So if you have overlap, it doesn't really matter. PROFESSOR: Yeah, overlap doesn't really matter. We're going to cut it up, and then, we're going to splay out the triangles. AUDIENCE: Distances don't matter. Right? PROFESSOR: No, uh. AUDIENCE: If you're trying to just find a path-- PROFESSOR: Right. From the perspective of facet-path, distances don't matter. It's just topology. Yeah. You could think of it as a circle with some decomposition into triangles, if you like, but that's maybe harder to think about. I got to think about it this way. It's useful because then, we'll get to Mickey Mouses, as we'll see. So cut edges until you can't anymore, so until cutting would disconnect. So this means what you're left with will be sort of a tree of faces. There'll be no cycles because if there was a cycle, you could cut one of the edges, and it wouldn't fall apart. So obviously, there's a tree here. Now, trees have leaves. That's our favorite lemma lately. They have at least two leaves. In the case of a triangulated polygon, we usually call them ears. So here's a fun term. So let's say color the ears. These are leaves in that tree, which we call the dual tree. So in reality, we're thinking about a dual graph which looks something like this, where there's a vertex for every triangle and edges for every edge connecting triangles, and there's leaves of that tree. But in the original thing, we think of this triangle as being an ear, and this triangle is an ear. And this triangle is an ear, and this triangle is an ear. I like ears because they're kind of on the boundary, on the surface, so they're just triangles that are adjacent to only one other triangle. Now, the next step is to color what are called the second-level ears.
I'll call them that: among the remaining triangles, the ones that would then become ears if you removed those first-level ears. All right. Now, I wish I had another color. Yeah, that's kind of yellowish. So this would become an ear. This would become an ear. This would become an ear, and this would become an ear. And I'm going to stop there, just two levels. What could I get in this process? I mean what it looks like I'm getting is I get an ear and then, a second-level ear. It could be a little more general than that. Maybe, for example, if I had a triangle like this, both of these would be first-level ears, and then, this would become a second-level ear. Turns out that's all that can happen. Because these are triangles, second-level ear, at this point, it's only adjacent to one other thing. So what was it adjacent to before? Well, maybe, one ear, certainly not zero because it wasn't a first-level ear. So at least one ear, maybe two ears. That's it. AUDIENCE: Unless there's only four pieces left. Right? PROFESSOR: Sorry, what do you mean? AUDIENCE: You could have three ears. PROFESSOR: You could have three ears. Yeah, so here I have two. I mean the first-level ear and a second-level, or it could be a first-level-- Oh, you're saying at the end. Right. It could be like this. That would only happen at the very end. But yeah, this could be first-level ears, and this is the second-level ear. Good point. Most of the time, we will either get something like this, the rest of the polygons over here-- so this is a first-level ear. This is a second-level ear. --or I get-- this is the Mickey Mouse picture. Probably not allowed to say Mickey Mouse. It's under copyright, but there you go. So two first-level ears, second-level ear. Boom. This is, most of the time, what you'll get in the base case of this induction. I'm going to pluck off these ears and keep making the thing smaller. At the end, there are a few cases to think about. I have them drawn here somewhere, base cases. Nothing. Yeah, well.
This won't work out if you have one triangle, if you have zero triangles, which is this empty picture, or if you just have two maybe. Well, I guess-- Yeah, because then, these are both first-level ears. It doesn't look quite the same. Or maybe just a Mickey Mouse because those are all-- Well, it probably works. Anyway, these are the sort of cases you worry about. But for these cases, I just need to check that I can find a facet-path. So for example, this one, I just visit the triangle. This one-- I don't know-- I do it like that. In fact, I can make it a cycle if I want to go crazy. This one-- I don't know-- something like this. That'd be one way to do it. (SINGING) Doo, doo, doo. Hm. Got to be a little careful. [INAUDIBLE] Something like that. All right. But really, I care about these two cases. So what I'm going to do for each of those-- I guess this is still step four-- is I actually want to make cycles for these guys. I care about that. Am I doing something wrong? Oh, no. Over there. Good. So in both of these cases, I can make cycles. In this one, in the base cases, not always. Like this guy, hard to make a cycle. But for the two general cases, if I find two ears or three ears, I can just draw that in. So I'll do that for this example, though, by now, it's gotten a little messy, so let me redraw it. So we have-- All right. So I'm going to connect these guys and connect these guys. This case, I only get the two-ear case, but I think we'll get another case shortly. OK? So obviously, I haven't finished it, and these are disconnected. There's lots of things left to do. But then, the idea is repeat. So imagine those guys as being done. I'm left with these four triangles, actually, a little boring because I don't get the Mickey Mouse case. But then, this will be an ear. This will be a second-level ear. And so I'll end up doing this. And this will be an ear, and this will be an ear, second-level here. I mean this is actually the base case, if you will. 
So in general, I just pluck off two or three triangles, repeat until I get one of the base cases. Now, what I have is not connected, but it's a bunch of cycles. But there's one cycle here, one cycle there, one cycle there. You could actually connect some of them together, like these two. You could go around like this and then, go like that, so that could be a bigger cycle. But these cycles are not even attached to these cycles, so it's kind of a problem. That's step five is we're going to fix all the problems. That'd be a great title. Connect cycles together. We're going to do that by local switches, so here's the general picture. Suppose you have two adjacent triangles and two completely separate paths. So it's a cycle, so some cycle over here. And if this guy shared a vertex, then I could actually connect them together. So suppose it doesn't share a vertex. That means it has to use these two, and then, it-- something like that. What I want to do is remove this edge and remove this edge and, instead, add this edge and that edge. So that's a local change, and now, it will be one big cycle. We've probably seen this trick once or twice before, I think in the Mountain Valley assignment stuff. So here I have, for example, these two triangles, which are adjacent, but the paths don't meet. So I'm just going to erase this edge, put in that one, erase this edge, put in that one. Lo and behold, I have a bigger cycle now, like that. Still not everything, so like these two triangles don't touch. So erase that edge. Erase that edge. Put in that one. Put in that one. I'm preserving, at all times, that I'm a facet path, and I'm merging components. So by the end, I'll have one big component. Now, I haven't necessarily described this as a big cycle, but as I've kind of been hinting, what I do now is take an Euler tour around this thing. We've seen Euler tours. All the vertices in this graph will have even degree. 
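An Euler tour on such an even-degree graph can be sketched with Hierholzer's algorithm; this encoding is mine, and it ignores the geometric non-crossing refinement the construction ultimately needs:

```python
def euler_tour(adj):
    """Hierholzer's algorithm: on a connected graph where every vertex has
    even degree, return a closed tour using each edge exactly once.
    `adj` maps each vertex to a list of its neighbors (an edge appears in
    both endpoints' lists)."""
    adj = {u: list(vs) for u, vs in adj.items()}  # work on a copy
    stack, tour = [next(iter(adj))], []
    while stack:
        u = stack[-1]
        if adj[u]:
            v = adj[u].pop()
            adj[v].remove(u)   # consume the edge from both sides
            stack.append(v)
        else:
            tour.append(stack.pop())
    return tour[::-1]

# two triangles sharing vertex 0 -- every vertex has even degree
g = {0: [1, 2, 3, 4], 1: [0, 2], 2: [0, 1], 3: [0, 4], 4: [0, 3]}
print(euler_tour(g))   # a closed tour through all 6 edges, e.g. [0, 4, 3, 0, 2, 1, 0]
```

The tour starts and ends at the same vertex and walks every edge once, which is exactly the property the facet-path extraction uses.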
And so I can take an Euler tour, and really, I should take what's called a non-crossing Euler tour. Point is you've got a bunch of cycles that touch. You just walk around the outside. That will visit all the cycles. It'll visit all these edges exactly once, and that will actually be a facet-path. And we're done. Now, I have a facet-path for a triangulated cube. So I can splay out those triangles, get a non-overlapping chain like this. Actually, it's probably the top one, something like that, might be. I may not have matched exactly what's in the textbook. Cool. One slight detail is I was sort of waving my hands, assuming there was actually a cycle here. I really only need a path. In fact, there won't always be a cycle because at the very end, you're not able to construct a cycle. So that will actually cause you to make-- if I do something like this, so at least it can connect to other things. That will cause you to make two vertices of odd degree. But that's OK because there's still an Euler path that starts at one of the vertices of odd degree, visits all the other edges, and then, comes to the other vertex of odd degree. And I just need a path. You can actually characterize when you get a cycle. It's when the original thing is not two-colorable, I think. Anyway, that's vertex unfolding. It's kind of easy. I think we solved most of it in like an afternoon. We had this idea. It was cool, and then, we solved it kind of quickly. That's how it often goes. Once you have a cool problem, it falls quickly. There's lots of interesting open problems remaining about vertex unfolding, and they seem a lot harder. So for example, what if I have non-triangulated polyhedra? And a natural version is to think about what we were originally trying to attack, convex polyhedra. This turned out to not require convexity. But what about convex polyhedra, not triangulated? Is there always a vertex unfolding? We don't know about edge unfoldings. And the answer is no. Well, no that's not right.
Sorry. The answer is we don't know. What's annoying about this example is that there's no facet-path. So there are two things that could go wrong. One is that the facet-path doesn't exist, and that can happen in this more general scenario. And the other is that when you lay it out, it overlaps. Both of these things could go wrong in the general convex case. So in this example, this is a truncated cube. There's eight triangles and only six octagons. And if you look, once you're at a triangle, you have to go to an octagon because there's no adjacent triangles. So at best, you could alternate triangle-octagon-triangle-octagon if you want to pack all those triangles in, but you run out of octagons to get there. So there's no facet-path, but maybe, you don't need facet-path. You could make a tree, just a little trickier. Obviously, if you triangulate that surface, you're done. So the analog of general unfolding with vertex unfolding is trivial. You just triangulate, and then, you use the edge case. But if you really only want to cut along edges, this example can be done. I mean it has an edge unfolding, so if it has an edge unfolding, it definitely has a vertex unfolding, but we don't know how to prove it. The other thing that could go wrong is once you have octagons, even if you could find a way to lay them out-- that's not a very good octagon-- you can't-- let's say if your two hinges were here and here, if that's where you attach two adjacent pieces, you can't fit that in a vertical strip. There's no way to turn it so it fits in a vertical strip. So also this layout problem doesn't work if you have something more than triangles. So some pretty fascinating questions. It's even open for non-convex. If I take non-convex polyhedra, and I want a vertex unfolding-- let's say all the faces are convex. At the very least, you need to forbid faces having holes. 
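The truncated-cube obstruction above is just pigeonhole arithmetic: no two triangles touch, so consecutive triangles in any facet-path must be separated by at least one octagon. A one-line check of the counts:

```python
# Truncated cube: 8 triangles, 6 octagons.  Packing all 8 triangles into a
# facet-path needs an octagon between every consecutive pair -- 7 of them --
# but only 6 octagons exist, so no facet-path is possible.
triangles, octagons = 8, 6
separators_needed = triangles - 1
print(separators_needed, ">", octagons)  # 7 > 6: the path runs out of octagons
```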
If you remember this example, the box on a box, this also doesn't have a vertex unfolding because still, this guy has to fit in that little square hole. But as long as the faces don't have holes-- let's say the faces are convex-- would be a nice version, like the Witch's Hat, the spiked tetrahedron. That probably has a vertex unfolding. AUDIENCE: If you triangulate that, [INAUDIBLE]. PROFESSOR: If you triangulate this, it'll definitely have a-- if you triangulate anything, it'll have a vertex unfolding. Yeah. All right. That's all I want to say about vertex unfolding, and that's sort of addressing edge unfolding of convex polyhedra, or actually, both of these. Now, I'd like to go to sort of the real problem. This is one of my favorite problems, I think. A bunch of us posed it in '98, which was right when I was-- or '99, right when I was starting out. See, it's tantalizing, in some ways more natural, because it's nice to allow cuts anywhere. And it could potentially work for everything. I conjecture every non-convex polyhedron without boundary can be generally unfolded, and there's some really good evidence for this as of a couple years ago. That's what I want to talk about, which is orthogonal polyhedra. This is one of my favorite unfolding results. It's by Mirela Damian, Robin Flatland, and Joe O'Rourke. They've done a lot of work in this unfolding area. An orthogonal polyhedron is one where all the faces are perpendicular to one of the three coordinate axes. So for example, here is an orthogonal polyhedron I drew this morning. If you want to draw orthogonal polyhedra, Google SketchUp makes it really easy, and you can add shadows and texture. So there are three kinds of faces. There's the ones perpendicular to x, like these guys. There's the ones perpendicular to y. That's all the yellow faces. And there's the ones perpendicular to z. That's these top guys. So we call them x-faces, y-faces, z-faces.
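Classifying a face by axis is immediate from coordinates: every vertex of an x-face shares the same x-coordinate, and likewise for y and z. A small sketch (the function name and the unit-cube examples are mine):

```python
def face_axis(face):
    """Classify an axis-aligned face by the coordinate axis its normal is
    parallel to: all of its vertices share that one coordinate."""
    xs, ys, zs = zip(*face)
    if len(set(xs)) == 1:
        return 'x'
    if len(set(ys)) == 1:
        return 'y'
    if len(set(zs)) == 1:
        return 'z'
    raise ValueError("not an axis-aligned face")

# unit-cube faces: y-faces are the ones the lecture colors yellow;
# the x- and z-faces are what join up into bands
top   = [(0, 0, 1), (1, 0, 1), (1, 1, 1), (0, 1, 1)]   # z-face
side  = [(0, 0, 0), (0, 1, 0), (0, 1, 1), (0, 0, 1)]   # x-face
front = [(0, 0, 0), (1, 0, 0), (1, 0, 1), (0, 0, 1)]   # y-face
print(face_axis(top), face_axis(side), face_axis(front))  # z x y
```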
What we're going to do-- so the theorem is these have general unfoldings. That's pretty awesome because every polyhedron is approximately orthogonal if you can voxelize it. So this is really a lot of stuff. I would love to generalize this approach to arbitrary polyhedra, but that's the big open question. So what do we do? Well, we're going to single out, from this color coding, the y-faces. Just color them yellow. Then, there's all the other faces. Well, they form bands. They're cycles. They go around in a loop. A lot of them here, I've just drawn as rectangular loops, but in general, all those wooden faces, the x and z-faces, will form a bunch of loops. And then, there's the y-faces which we're going to have to deal with, but sort of ignore them for a while. I should have drawn it with them erased. It would look cool. But if you think about just those bands, this is sort of how they're connected together, this tree. There's a big one out here, and then, it has two children. There's this one and this one. And then, this child has one child hanging off of it, and this one has two children hanging off of it. So that's these two guys. In general, this guy might have some children hanging off the back side as well. I'm not going to try to represent that in this picture. There's just a bunch of children. There will be some front children and some back children. You pick some root arbitrarily, and then, you have children going off of there. Now, if your orthogonal polyhedron has genus 0-- it's topologically a sphere-- this will be a tree. If it's like a doughnut, it will have a cycle. So this theorem only applies for genus zero. So we're going to exploit that that dual drawing, how the bands are connected together, is like a tree. So why don't I write down, a band is a cycle of x and z-faces, and they are connected together in a tree. Now, the rough idea is we're going to take a depth-first traversal of this tree.
We're going to start at the root, work our way down and come back, work our way down, and something like that. It's not going to be quite so simple. The challenge, I guess you could say, is avoiding overlap. OK? If you wanted to unfold a band, obviously, a band can just unfold straight. It's like a nice, long strip. So each band, individually, is fine. It's how do you piece those bands together and then, have room for the yellow faces to attach on the sides, no overlap? But it can be done with the awesome, crazy idea that we'll get to shortly. It's going to start out kind of innocent, but the general approach is always proceed rightward in the unfolding. So the unfolding will look something like this. OK, whatever. Always going to the right. We start here, and we might go up and down, but we never go left. And then, that's going to be all the band faces. All the band stuff will be connected like that, and then, there's going to be yellow faces that can just hang off the sides. So these are the y-faces. As long as I get the band to do this, y-faces can hang up and down. It's not going to intersect anybody. Pretty clear? So it's clear if we could do this, we can get non-selfintersection. The amazing thing is that this is possible. What this is essentially a limitation on is how far you could turn your bearing. So you start going right. You can turn right, but I could not turn right again. I have to turn left next. I can actually do two left turns in a row, as long as I was initially going down. I can turn left twice, then, I could alternate left-right. That's always good. Or I could turn right twice, but only if I'm initially going up. That's the rules. As long as I adhere to those rules, I'm fine. Now, we're going to heavily exploit that we can do general unfolding, that we can subdivide those strips into lots of little pieces. We're going to subdivide into a lot of little pieces, an exponential number of pieces, so this is kind of hard core. So here is one example. 
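As an aside, the turn rules just stated can be sanity-checked by tracking the heading as a unit vector and verifying it never points leftward. This toy encoding is mine, not from the lecture:

```python
# Left turn rotates the heading counterclockwise, right turn clockwise.
TURN = {'L': {(1, 0): (0, 1), (0, 1): (-1, 0), (-1, 0): (0, -1), (0, -1): (1, 0)},
        'R': {(1, 0): (0, -1), (0, -1): (-1, 0), (-1, 0): (0, 1), (0, 1): (1, 0)}}

def never_goes_left(turns, start=(1, 0)):
    """Follow a turn sequence from a starting heading and report whether
    the path ever faces left (negative x), which the unfolding forbids."""
    h = start
    for t in turns:
        h = TURN[t][h]
        if h == (-1, 0):       # facing left: the invariant is broken
            return False
    return True

print(never_goes_left("RLRL"))        # True: alternating right/left is fine
print(never_goes_left("RR"))          # False: two rights while going right
print(never_goes_left("RR", (0, 1)))  # True: two rights are OK if going up
print(never_goes_left("LL", (0, -1))) # True: two lefts are OK if going down
```

This matches the rules above: from rightward you may turn right once but must alternate, while a double-left (from down) or double-right (from up) sweeps through rightward rather than leftward.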
This is a leaf. So trees have leaves, and at the end, we're going to have to visit a leaf. So this is one box. There's this funny view, so you can see like a mirror on the bottom and a mirror on the right and on the side. So this is if you had one box, here's what you would do. And we're actually assuming-- notice this side does not get covered. The idea is that side doesn't exist. That's attached to the rest of the polyhedron, so that's where our parent lives. And our parent tells us you have to start at s, and it says you better finish at t. And I want the property that if initially, I think, initially, you're going right. No, it looks like initially, you're going up. It matters. You can do stuff, and then, at the end, you should still be facing up. So you have to visit all these faces, but not turn in total. And normally, that would be hard to do. If you just tried to visit one face at a time, you can't do that, but if you visit faces multiple times and kind of weave around in a clever way, you can do it. So maybe I'll point with this thing. Yeah. So I start at s. I go up. I turn right. Now, I better turn left. I go down over here, up there. I turn left. Then, I turn right. So if you follow along here, I just turn right here. So now, I go down here. And that is a left turn because it's on the bottom, a little hard to think about. So I turn left here, and then, I go, turn right. And then, I go down, turn left and then, right. This is confusing that I'm upside down. And I come to t, and lo and behold, I'm facing up again. In fact, I basically just zigzagged. This would also work if it was rotated 90 degrees. I'd initially be going right and, then, down and right and down and right and down. So this can kind of go in a couple of different orientations. It's really powerful. It also works if t is on the other side of s. You could do sort of the mirror image traversal. Now, obviously, I didn't cover the entire surface.
I'm leaving room for later, but if this was all I was going to do, I would actually sort of fill out all those strips, just kind of extend them. It just makes this kind of fatter. So this got a little bigger. I've got the first half and then, the second half. This is really glued up there. But you can imagine once you have these sort of paths that visit everything, you just fatten out, and it's no big deal. And also, in this case here, we're imagining-- oh, this is actually two of them. Fun. Two of these strips joined together. And so there's a few side faces. They just attach up and down. They're not going to intersect anything because this is not actually below this. This is way over here. So that's the leaves, and I still haven't gotten to the exciting part. So imagine you have a band. Just going to represent that by this big rectangle, and it has a bunch of children. Remember, it can have front children. This is down in the y-coordinate. And it could have back children up in the y-coordinate. And suppose I'm actually attached to some parent, let's say, as a down neighbor, and I start at some s. Let's say I'm told you have to start here. You have to finish here, and you should preserve orientation because I don't want to have to think about whether it turned, don't want to have to depend on that. So you can do it. Initially, you must be facing up because immediately I'm going to make two right turns. And I could handle two right turns, as long as the next thing I did was a left. So I come into this thing saying, look, you're facing-- this is facing up. Now, you're facing down. You better turn left next, and by the end, I still want to be facing down. OK. Now, I make two left turns. Now, I'm facing up again. I tell this guy, you better turn right next, and preserve. At the end, you should be facing up again. Now, I make two right turns. Now, I'm facing down, and so on and so on. 
And here, I'm wrapping around to here because this is actually a band that cycles around. Then, I go in here. It's the same thing. It's just a little hard to see because I'm drawing it on a flat surface. But if it was on a ring, it would be much clearer. Just going left and right and left and right, alternating direction, so that I preserve my orientation. At some point, I get to here. I loop around. I make a little wiggle at some point, and then, I can visit all the top neighbors. You just have to slightly switch your orientation, but again, preserve that you're doing left-left-right-right, left-left-right-right, left-left-right-right. Then, you will preserve your orientation. You tell each of the children which way you're initially going, and they can deal with it. It's basically telling you whether s and t is like this or the other way around. OK? So now, we end up way up there where the lavender edge is at t10. Now what? We want to come back here, and I'm not allowed to sort of intersect myself. That would be the paper going into two parts of this unfolding, so that's not good. But I have all this space, so the natural thing is to just wander from there back down to here, using up the space. So it's going to look like this. Everything that you did, you just undo. Now, where this is painful is not only do I have to undo it in this diagram, but I have to recursively undo everything I did in here, everything I did in there, and everything I did in here. So this recursive thing from this structure ends up getting doubled. At the parent structure, it will also get doubled. At every level of the tree, you're going to double what was below you, so that's why you get exponential, in general. If your tree is ugly like something like this, you'll start with something nice and small down here, maybe constant number of terms. Then, you'll double. Then, you'll double again. Then, you'll double again. Then, you'll double again, and it will be exponential.
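That doubling can be written as a small recurrence. A hedged toy model (the function names and the cost model, one piece per band plus twice each child, are mine): a path of bands blows up exponentially, while a balanced tree of the same depth stays polynomial in the number of bands.

```python
def pieces(children):
    """Toy cost model: a band's unfolding traverses each child's unfolding
    twice (once going out, once coming back), so child costs are doubled."""
    return 1 + 2 * sum(pieces(c) for c in children)

def path(depth):        # a chain of bands: each band has one child
    return [] if depth == 0 else [path(depth - 1)]

def balanced(depth):    # a complete binary tree of bands
    return [] if depth == 0 else [balanced(depth - 1), balanced(depth - 1)]

print(pieces(path(10)))      # 2047 = 2**11 - 1, for only 11 bands: exponential
print(pieces(balanced(10)))  # 1398101, roughly n**2 for n = 2047 bands
```

In the path case the recurrence is T(d) = 2T(d-1) + 1, exponential in the number of bands; in the balanced case it is T(n) = 4T(n/2) + 1, which is about n squared, matching the "n or maybe n squared" estimate below.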
So if this is n, the number of things you're doing here is 2^Θ(n). On the other hand, if your tree happens to be nice and balanced, doubling is not so bad because here you'll have constant. This will double everything below. This'll double everything below, but there's only log n levels. So is that linear? It should be about linear. It's certainly 2^Θ(log n), and it matters what this constant is. I think it's n or maybe n squared, but not too bad. So if you're lucky and just the structure of your bands is balanced, it's good. In general, though, it's going to be exponential. Open problem. Can you deal with these situations and sort of balance them and make them only make a polynomial number of cuts? Certainly be nice. Exponential number of cuts is a lot, but it works. You can unfold every orthogonal polyhedron this way. I would love to see an implementation of this algorithm. You could only do it in a computer because you'd be splicing into all these little things, and it would fall apart. Jason? AUDIENCE: You've been making these, I guess, gadgets [INAUDIBLE] voxel would attach just by a side. You could imagine it attaching at a corner attaching to multiple sides. PROFESSOR: So you're worried about-- not quite sure. What we think about is one band, which looks something like this, attaching to another band. That will always happen on a face. I'm not quite sure what you're imagining. Maybe something like that where they share a partial face here? AUDIENCE: Yeah, but it could also be inset into the [INAUDIBLE]. PROFESSOR: If it's inset, I'm cutting with every-- I maybe didn't mention that-- through every vertex, I'm going to slice with a y-plane. So that will cut into lots of little strips, and then, there's no sort of overlap with the strips. I'm subdividing into little substrips. So that sort of deals with that issue if this was moved that way. Then, there will be three strips, one over here, one where they're overlapping, and one on the right. Yeah.
Good question. I forgot to mention the subdivision at the beginning. Speaking of subdivision at the beginning, this leads to another notion which we call grid unfolding. So the grid of an orthogonal polyhedron is what you get when you subdivide by extending every face into a plane and cutting with that plane. So that's sort of what I was doing here when they're overlapping. Even in this picture, there's like a face here, this vertical face, and so I'd end up slicing through this thing with a band in that direction. And I don't know, this vertex you'll end up slicing through this guy. So hopefully, you can imagine that. With every vertex, slice x, y, and z. That's another way to do it. And that subdivides in sort of a nice grid where every face will now be a rectangle, and rectangles always meet whole edge to whole edge. So it's a nice simplification. What I'm proposing is add those edges to your polyhedron. It's kind of like assuming that you started with unit cubes and built something but kept all the edges of all the cubes. I want an edge unfolding of that. Do those exist? These are what we call grid unfoldings. So grid unfolding is an edge unfolding of the grid. This only makes sense for orthogonal polyhedra. Open question. Do grid unfoldings always exist? It's essentially the orthogonal analog of this question, edge-unfolding a convex polyhedron. If you want to go to orthogonal, which is like in between convex and general non-convex, maybe you could hope for grid unfoldings. Edge unfoldings obviously don't work. We had lots of examples where those fail, like the cube with little bites taken out of the edges. But grid unfolding, you get lots of subdivision. It might be easy. Well, it's not easy. I would guess, actually, it's not possible. The next best thing you could hope for is to refine. So you take each of the grid rectangles and divide it into a k-by-k subgrid. So ideally, k is one, and you're not subdividing at all.
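The grid construction itself, slicing through every vertex with axis-perpendicular planes so every face becomes rectangles meeting whole-edge-to-whole-edge, can be sketched for a single face (the function name and example are mine):

```python
from itertools import product

def grid_cells(face_rect, xs, ys):
    """Slice an axis-aligned rectangular face by the coordinate planes
    through the given vertex coordinates, returning the grid rectangles.
    face_rect = (x0, x1, y0, y1); xs, ys are the cutting coordinates."""
    x0, x1, y0, y1 = face_rect
    xcuts = sorted({x0, x1} | {x for x in xs if x0 < x < x1})
    ycuts = sorted({y for y in ys if y0 < y < y1} | {y0, y1})
    return [(a, b, c, d)
            for (a, b), (c, d) in product(zip(xcuts, xcuts[1:]),
                                          zip(ycuts, ycuts[1:]))]

# a 2x1 face sliced by a vertex plane at x = 1 becomes two unit squares
print(grid_cells((0, 2, 0, 1), xs=[1], ys=[]))
# [(0, 1, 0, 1), (1, 2, 0, 1)]
```

Cuts outside the face are discarded, so each face is refined only by the planes that actually pass through it.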
But maybe, you take every rectangle, divide it in half. Maybe that's enough to then be edge unfoldable. That would be sort of a refined level two grid-like unfolding. There are a ton of results about this. They're all partial. Obviously, in the general situation-- I mean this algorithm we just covered-- you can achieve a refinement of only 2^Θ(n), exponential. When can you do better? Ideally, you get 1 by 1, but maybe, you can do something. What? OK. Interesting. One thing you could do, with merely 5 by 4 refinement, is something called Manhattan Towers. Let me show you a picture of Manhattan Tower. No, that's not a picture of Manhattan Tower. These are more crazy examples of what it's like to visit. This is not-- this probably is complete, but here, there isn't too much doubling because there's only a single child, more or less, everywhere. But the unfolding looks something like that. Here is Manhattan Tower. So it has a connected base on the x-y plane. And I want to consider z-slices as I go up in z-coordinate, and I want those z-slices to get smaller and smaller, always contained in what I had before. So I never have overhang. That's a Manhattan Tower. And in that case, 5 by 4 refinement is enough to unfold these things. So that's pretty good, still not perfect, but pretty good. And this is by the same authors, like a year before the general result. Let's see. I think maybe I have a movie. Yeah. So this algorithm has been implemented, at least in some simple examples. And it kind of nicely unrolls. You can see a 5 by 4 refinement in that little staircase. It's, again, to make everything keep going to the right, but here, they find a clever way to visit all the faces without having to revisit, basically, at all, just visiting each face a constant number of times. And then, we can zoom out, and you get the unfolding. So it looks very similar in spirit. Of course, the details are how you do all that visiting.
I'm not going to cover that here, but you get substantially less refinement for that special case. Yeah. Another case looks like this. Boom! AUDIENCE: Woah. PROFESSOR: Isn't that cool? I'll play it again. This is just slightly more special. So again-- I have three of them. They're so much fun. It's like exploding a city. Boom! So here, the floor is a rectangle. That's the only additional requirement, and again, as you slice upwards, things only get smaller. Here's a bigger one. Boom! So exciting. I could do this all day. So I'm not going to describe how this works, but you could almost reconstruct it from these diagrams. This is what we call an orthogonal terrain. This result by Joe O'Rourke. Here, you don't need any refinement, grid unfolding, one by one. That's pretty sweet. All right. Next one is what we call orthostacks. These are like a stack of a bunch of orthogonal polygons, a bunch of bands, basically. This is where the idea of bands came from. This is from an old paper in 1998, from the beginning. So it's just I have a band, and then, I stack another band on top. So each z cross-section is connected. So that's a little different. With towers, I could have multiple towers here. I really only want one tower built slab by slab. These things we don't know how to grid unfold. That's an open problem, but if you refine just in z by a factor of 2, that's enough to unfold. So 1 by 2 refinement is enough for orthostacks. Now, you could go a little crazier and allow vertex unfolding of orthostacks with some grid refinement, and in that case, you don't need any refinement. So grid vertex unfolding orthostacks was the title of paper. These guys, John Iacono and Stefan Langerman. And it looks something like this. He uses vertex unfolding to fix the direction. Here you were going up, but you really wanted to go right. So it makes things easier. 
And in fact, the other guys, Damian, Flatland, and O'Rourke-- It's got to be awesome doing geometry and your last name is Flatland, unless she does a lot of three dimensional stuff. Irony. You can do vertex grid unfolding of any orthogonal polyhedron is what I have written here. I haven't actually read that. I should read that paper. What else do I have? Orthotubes. Orthotubes, this is again in the old paper. An orthotube is just sort of a thickness-one orthogonal tube. It could even be closed, I think, in a loop, but here I've shown it open. And here, grid unfolding is enough. You just do all the grid refinement. You could even just do it locally. Technically, there's a slice here that might slice over here, but you don't have to worry about that. You just subdivide into a bunch of boxes, and this can be unfolded in a sort of zigzag fashion. So 1 by 1. You can generalize this to trees also, though it's not totally known. As long as you have a tree of cubes with fairly long connectors in between the branch points-- those are called well-separated orthotrees-- grid unfolding works. If you don't have that condition, so you just have cubes connected in some treelike fashion, it's open whether you can do a grid unfolding. We've worked on that in the past. My conjecture is, in general, you need omega(n) refinement. My belief is that this kind of exponential blow up is necessary in general, but that with a balanced tree you can do it. So this would give like linear refinement. I think that's necessary, but we can't even prove that you need 2 by 1 refinement in any example. We don't have anything where grid unfolding is definitely impossible. It's quite a sad state, I guess. It's very hard to prove that there aren't unfoldings, except by exhaustive enumeration, and that's hard to do because it's slow. We've come up with lots of candidate examples, but eventually, we unfold them all. I have a bunch of other open problems. This was genus 0.
Interesting question is can you do genus higher than 0? Orthogonal polyhedra. I would guess so, but I'm not sure. I think the biggest question is, can you make this non-orthogonal? But then, the bands get messy. Haven't been able to do that. All right. I think those were the main problems. Any questions? I'm going to take a break from unfolding now and switch the other direction of folding. So with folding, we're imagining we're given some polygon, and we'd like to make a polyhedron out of it. It's exactly the reverse of what we've been thinking about. When is this possible? Now, the rules of the game here are different from origami. With origami, we showed from any shape, you can make anything if you scale it down. But here, I really want exact coverage. Every point here, or let's say every little patch here, should be covered by exactly one layer over here, not allowing multiple layers. So this means not everything is possible. I also care about scale factor, but-- This one, of course, you can crease like this, and more importantly, you glue these edges together. You glue these edges together. The opposite of cutting is gluing. We'll be more formal about defining gluing, I think, next lecture. This is just a sort of prelude. But you end up gluing-- I want to make something, let's say-- in fact, we're always going to talk about folding convex polyhedra. There's very little work on the non-convex case, though, there was actually a recent result. I'll mention that next lecture. If you want to make something convex, and therefore, sphere-like, you have to get rid of all the boundary, so you've got a glue every edge. You don't have to glue whole edges to whole edges. Maybe you just glue part of an edge to another part of an edge, but somehow, boundary has to get glued up. And if you want something sphere-like, in fact, those gluings have to be non-crossing. I have to be able to draw a picture like this. Question is when do these gluings make a polyhedron? 
That is the question we will be answering next class. But the first question is, suppose I gave you one of these pictures. I give you a polygon, and I give you a gluing. What this tells you is sort of how to locally walk around. So if I'm here, I could walk over here. I could walk over here, teleport over there, walk over here, teleport over here. Whatever. OK? The gluing tells you locally what the surface looks like, even though you don't know what it looks like yet in 3D. In particular, you can compute shortest paths here. I could compute the shortest path from this point to this point; you might think it's a straight line. But no, it's not. I don't think. Or maybe it is. Let's see. A little tricky. AUDIENCE: Should be diagonal with the bottom square. PROFESSOR: Diagonal with the bottom square because these guys are both in the bottom. Very good. So this point is the same as this point, or no, this point. Because these get zipped together. This point is the same as this point, so in fact, that diagonal is the shortest path between those two points. So you have to think about it for a while, but it turns out, in polynomial time, you can do that. That's cool. What I want to show now is: suppose you could make a convex polyhedron in this way. I claim you can only make one, never more than one from the same gluing. So I've defined locally what this thing is. It's like a piece of paper. I can mangle it around. If I want to make something convex, there's only one thing it could possibly make. Finding out what that one thing is, is quite a challenge, but at least, we can prove that it's unique. Sorry about the screeching. This is Cauchy's rigidity theorem. I think we mentioned it in a previous lecture when we were talking about rigidity in three dimensions and like why domes stand up. But now, we're going to use it. We're actually going to prove this theorem, and we're going to use it to study these kinds of gluings and say there's, at most, one way to do this.
There's a lot of ways to state this theorem, but one way is to say, suppose you have two convex polyhedra, and suppose they came from the same sort of intrinsic geometry. So there's the geometry of the faces, and there's how they're connected together. So here, initially, I drew a tree of how the faces were connected together, and then, I drew some other connections like this. So the dual here is, of course, an octahedron. But if you have two convex polyhedra and they are combinatorially equivalent-- they have the same way that things are connected together, the same graph-- and they have congruent faces, so the geometries match also, then they are actually the same thing, the same polyhedron. AUDIENCE: Is that something that-- PROFESSOR: Up to rotation and translation. AUDIENCE: Is it the same set of congruent faces? PROFESSOR: Oh, yeah. So when I say congruent faces, I mean according to this equivalence. If you take a face on one that has a corresponding face on the other, those two faces should be congruent, not just any pair, and they're not all the same. Different faces can be different, but they're identical in pairs. So I'm just saying basically, if you have this picture, there's only one realization. So this picture defines what the geometry of the faces is and how they are connected together. And so if you had two convex polyhedra with that same underlying diagram, they have to be the same polyhedron. That's what we're claiming. That's what we're going to prove. Is this one any better? Yeah, it's better. This is an old theorem. You may have heard of Cauchy, famous French mathematician. Cauchy-Schwarz inequality, all those good things. You don't need to know those. He proved a lot of things. This theorem he didn't actually prove. He wrote a paper about it or, I think, partly a letter, partly a paper in 1813. The proof was wrong, and it was fixed in 1934, over 100 years later, by Steinitz.
So sometimes it's called the Cauchy-Steinitz rigidity theorem, although, usually, Cauchy. There's a lemma in here that's often attributed to both of them. So it's sort of a proof by contradiction. We want to prove uniqueness. So we're supposing, well, maybe there's two polyhedra, p and p prime, and they're combinatorially equivalent and have matching congruent faces. What I want to do is look at corresponding vertices, let's say a vertex v in p and a vertex v prime in p prime. So they're corresponding in the combinatorial equivalence. And then, I want to slice the polyhedra, p and p prime, with an epsilon sphere, an epsilon-radius sphere centered at v and v prime. Quick mention, this is not true if you allow non-convex realizations. You may have seen that example before. These have exactly the same combinatorial structure, same geometry on each face, but one's non-convex, and they're different. But as long as they're convex, they're going to be the same. In a convex situation, here's what the slice looks like. So here's a little vertex, degree like 5, and I slice the polyhedron with this tiny sphere centered at v, not the interior, just the boundary of the sphere. What I get are these arcs. They'll be great circular arcs. Here we can see all five of them. They're great circular arcs. They're on the sphere. I get a polygon. I get a convex polygon on the sphere. Convex spherical polygons, convex because the polyhedra are convex. I actually get two of them. Right? One for p, one for p prime. Polygons. What do I know about those polygons? I know their edge lengths, because the length of an edge here is equal to-- if this is a unit sphere, if I rescale the epsilon to be 1, that edge length is actually this angle. That angle is an angle of the face at the vertex. So I know all the angles. I know all the geometries of the faces. I know they're congruent. So I know what these edge lengths are on the sphere. What I don't know are these angles of the polygon.
I know the lengths of the polygon, but I don't know the angles of the polygon, because that's essentially the dihedral angle of that edge. And the worry is, well, maybe all the edge lengths match. We could do this at every vertex, but maybe the dihedral angles are different. Maybe the convex polyhedron could flex. That's why this is about rigidity. Maybe it's flexible, and then, maybe there are two different states. All the faces are the same, and therefore, all those edge lengths are the same, but the angles might differ. If p and p prime are supposed to be different, then there must be two angles that differ. So I want to look at those angles, and I want to label each vertex of a spherical polygon. So I have n of these spherical polygons for p and n of them for p prime. Let's look at them in p. I'm going to label it plus if the angle there in p is bigger than the angle in p prime. Remember, I have a correspondence between everything in p and everything in p prime. Minus, if the angle is less, and 0, if the angles are equal. So what I want to show is actually all the angles are equal. Therefore, the polyhedra will be identical. If all the faces are the same and all the angles at which you join them are the same and they're connected in the same way, they are the same polyhedron. There's no flexibility there, but it could be there are pluses and minuses. But if there's going to be a problem with this theorem, there have to be pluses and minuses. That's the proof by contradiction. So let's look at one of them. Let's look at a vertex that has some pluses or minuses. So we have-- it's a spherical polygon. I'm going to draw it more like a polygon, maybe some pluses, some zeroes, some minuses, whatever. First question is, could it be all pluses and zeroes? Is that possible? No. It doesn't look good. What does that mean? It would mean this is like a linkage. We're used to linkages. It just happens to be on a sphere. Forget that this is on a sphere. Think of it as almost flat.
What that would mean is there's some other way to draw this thing. Basically, there's a way to flex this linkage so that all of these angles increase and this one stays the same. How could I get a polygon where all the angles increase and still be convex? Ain't possible. Why is it not possible? I think we've used this fact a couple lectures ago. It's not possible by something called the Cauchy Arm Lemma. This is the part that Cauchy got wrong, so it's also called the Cauchy-Steinitz Arm Lemma. And here's the thing. You have a convex chain-- an open chain; here there's a missing bar. Suppose that even when you add the bar, it's a convex polygon. So basically, I take a convex polygon and remove an edge. This is what we call a convex chain. And if you open all the angles, increasing all the angles in a convex chain, then this distance increases. It's pretty intuitive. I open all these angles. So I put plus. Some of them could stay the same, but then, this distance will increase, as long as you stay convex. I should mention that. But in this situation, we know that both the initial position in p and its target position in p prime are convex. OK. Let's just take that lemma as given. This is the one I don't like. Then, I know, in particular, I can't have all pluses, because then, if I just pretend one of these edges wasn't here, I know that that distance must increase. But it can't increase. It's supposed to stay the same. The edge lengths are fixed. So they can't be all pluses and zeroes, and for all minuses and zeroes, the same is true, just viewing p prime as p and p as p prime. They can't be all pluses. They can't be all minuses. So if there's anything in there other than zeroes, there has to be at least one plus, at least one minus. In particular, there have to be at least two alternations. An alternation is either going from plus to minus or from minus to plus. So it could be something like plus, plus, plus, minus, minus, minus, minus, plus, plus. Whatever. OK?
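The planar analogue of the Cauchy Arm Lemma is easy to check numerically. Below is a sketch (the flat version, not the spherical one the proof actually uses): lay out a convex open chain from its edge lengths and interior angles, open every angle, and watch the missing-bar distance grow. The function name and the particular angles are my own choices for illustration.

```python
import math

def chain_endpoint_dist(lengths, interior_angles_deg):
    """Lay out an open chain in the plane with the given edge lengths and
    interior angles at internal vertices; return the endpoint-to-endpoint distance."""
    x = y = heading = 0.0
    pts = [(x, y)]
    for i, length in enumerate(lengths):
        x += length * math.cos(heading)
        y += length * math.sin(heading)
        pts.append((x, y))
        if i < len(interior_angles_deg):
            # an interior angle theta means the chain turns by (180 - theta) degrees
            heading += math.radians(180.0 - interior_angles_deg[i])
    return math.dist(pts[0], pts[-1])

lengths = [1.0, 1.0, 1.0, 1.0]
d0 = chain_endpoint_dist(lengths, [120.0, 120.0, 120.0])  # part of a regular hexagon
d1 = chain_endpoint_dist(lengths, [150.0, 140.0, 160.0])  # every angle opened, still convex
assert d1 > d0  # the missing-bar distance grows, as the lemma predicts
```

Opening the angles toward 180 degrees straightens the chain, so the endpoints spread apart; that is exactly the step the proof uses to rule out all-plus labelings.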
Maybe that's your polygon, and that's your labeling. Is that possible? So once you have at least one plus and one minus, you have to have these two switches at least. Could this happen? AUDIENCE: No. PROFESSOR: No. Right. Because you pick some chord. What do I do here? Just subdivide. All right. Whatever. Pick some chord like this one. I'm not sure it matters. Maybe here? Whatever. These angles down here are decreasing. Therefore, this distance decreases. The angles up here are all increasing. Therefore, this distance increases. Can't have both. So this is also not possible. So in fact, you have to have at least four alternations. It's always even. And so it has to be at least a bunch of pluses, then a bunch of minuses, then a bunch of pluses, then a bunch of minuses. And so we're counting these transitions from plus to minus. This is a lemma that we used when we were locking trees, I think. We had a convex polygon. I said at least two of the angles have to decrease, or at least two of them have to increase, one way or the other. That's because there have to be at least two groups of pluses, not only at least two pluses and at least two minuses. You might think, well, what happens if there's only three vertices? Then, you can't have these four alternations. Well, yeah, you can't have those four alternations, because if you have a triangle, even on the sphere, triangles are rigid. So if I had a degree 3 vertex, I would know that locally that thing is rigid. It can't flex at all. We're only interested in cases where it might flex locally at a vertex, like the pentagon, like a quadrilateral. All right. So what? This was true at every vertex that was not entirely zero. So if we look at-- let me write this down. I really just care about the non-zero edges, the ones that are plus or minus. So I'll look at the subgraph of plus and minus edges. The number of alternations is at least 4 times the number of vertices. I'm going to denote the number of vertices by a capital V. OK?
That's page one. We're going to use a trick. It has many names, I suppose. I usually call it double counting in combinatorics, where you have one quantity, namely, the number of alternations. We're going to count it in two different ways. We'll get two different answers, but we know they must end up being the same. And then, we'll get a contradiction. So what's the second way of counting? Well, we counted local to a vertex. The other natural way to count angles is by looking at the faces. There are also faces. It's sort of the dual perspective. Every face has a bunch of angles, has some degree or whatever. They're really kind of the same thing. Oh, here was Cauchy's Arm Lemma. Beautiful. If you look at the alternations as you walk around a vertex versus as you walk around a face, you'll end up with the same count. Right? Here was an alternation from plus to minus. What's interesting here-- before, we were thinking of labeling the vertices of the spherical polygon, but in fact, whatever this edge does local to this vertex, it does the same thing local to that vertex. So really the labels are on the edges of the graph. They could be zero, plus, or minus. And if I have an alternation from plus to minus, viewed from the vertex, it's also an alternation as I walk around the face. So instead of counting by walking around all the vertices-- which we just did, and I got at least four at every vertex-- let's do it from the perspective of the faces. And we're in this weird subgraph of plus and minus edges, so assume there are no zeroes. All right. If I have a face of 2k or 2k plus 1 edges, then it will have, at most, 2k alternations. So I already have a lower bound on the number of alternations. I'm going to try and prove an upper bound, sandwich it between, and show that, actually, the upper bound is smaller than the lower bound, and that's a contradiction. So this is kind of obvious. Right?
If you have 2 k vertices, no more than 2 k alternations, slight, the place where we're making a little improvement is for the odd case. We know because a number of alterations has to be even you can't get up to 2 k plus 1 alternations. Has to be even because for every plus to minus, there has to be a matching minus to plus because it's cyclic. So even here, you can only get 2 k alternations. That helps us a little bit because now, we can talk about number of alternation is at most two times the number of triangles, f sub 3 is going to be the number of faces of degree 3, plus 4 times the number of quadrilaterals plus 4 times the number of pentagons plus 6 times the number of hexagons plus-- why'd I write seven? I wrote seven. I'm being sloppy. At that point, I don't care. 7 f 7, 8 f 8, 9 f 9-- I'm not going to try to be clever from 6 on, but I'm going to be clever at 5 and 3. Why 5 and 3? As you may remember from way back when, you have a planar graph-- because polyhedra are planar graphs. They're convex. The average degree is 5? Slightly under 6, 4, 3, 2, 1? One of those numbers. Let's see. Should be like 3 n minus 6 edges, so that should be 3. Yeah, three. So most of the faces are going to have low degree. That's the points. So 3 and 5 really matter, but out here, it doesn't matter so much. This is kind of a magical proof. It shouldn't be intuitive where it came from, but it's really beautiful. You'll see as it all comes together. Fun. What do I do next? Right. I want to relate-- I have a vertex count here. I have a face count here, and I know Euler's formula. Hopefully, we know it. It's a cool formula. V minus E plus F is 2. This is the number of vertices, number of edges, number faces is two. [? For ?] connected, planar graphs. There are other versions when you have multiple components or when you have tori, genus, whatever, but for convex polyhedra, this is true. So this conveniently relates vertices to faces, but it involves edges. 
So somehow, I have to bring edges into the mix. All right. Well, edges. I want to count the number of edges in terms of the number of faces. I could do it in terms of vertices or faces. The number of edges is half the sum of the degrees of the vertices. Remember, that's the handshaking lemma from way back when. It's also half the sum of the degrees of the faces. If I look at every face and I count its edges, I will end up counting every edge twice, once from each side. So E is half the sum of the degrees of the faces. What is the degree of the degree-3 faces? 3. What is the degree of the degree-4 faces? 4. And so on. So E is half of 3 f3 plus 4 f4 plus 5 f5 plus 6 f6, and so on. Exactly. So now, things are starting to look similar, and I want to get some cancellation going on. Use my cheat sheet here. I'm going to rewrite Euler's formula as V equals 2 plus E minus F. Hopefully, that's the same. I put E over here. I put F over there. We'll get this. OK. So now, I have E minus F. E is this, and F is the sum of f3 plus f4 plus f5 plus f6, and so on. So V is 2 plus half of 1 f3 plus 2 f4 plus 3 f5 plus 4 f6, and so on. All I did here was-- because there's a half out front, subtracting F decreases each coefficient by 2, nothing surprising. Whew! So I took E, I subtracted F, just took away one of each. Now, I have this formula. That's V. Now, I also know that 4V is, at most, the number of alternations. Hm. So I could get a formula for 4V here. Right? 4V is going to be 8 plus-- just double these numbers-- 2 f3 plus 4 f4 plus 6 f5 plus 8 f6, and so on. Yes. OK. Now, these coefficients look very similar to these guys. But here it gets bigger: 8, and then it's going to go 10, instead of 6 and 7. We also have a plus 8.
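The two counting tools just used, Euler's formula V - E + F = 2 and the handshaking count E = half the sum of the face degrees, can be sanity-checked on familiar solids. A minimal sketch (helper name is my own):

```python
# Check E = half the sum of face degrees, then recover V from Euler's formula.
def counts_from_faces(face_degrees):
    F = len(face_degrees)
    E = sum(face_degrees) // 2   # each edge is counted once from each side
    V = 2 + E - F                # Euler: V - E + F = 2, rearranged
    return V, E, F

assert counts_from_faces([4] * 6) == (8, 12, 6)     # cube
assert counts_from_faces([5] * 12) == (20, 30, 12)  # dodecahedron
```

The rearranged form V = 2 + E - F is exactly the substitution made on the board.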
We don't know whether there are any faces of degree 6 or more, so we can't rely on that, but we have the plus 8. So somehow, 4V, which is, at most, the number of alternations-- that's the reverse of what we said before-- 4V is, at most, this number, and yet, it's also equal to this number. It can't be both. This number's at least 8 larger than that number. It could be even larger, but at least we have the contradiction. Done. This works as long as there's at least one face, meaning there's at least one plus or minus, because we're only looking at the subgraph of plus and minus edges. And that is Cauchy's rigidity theorem. Let me quickly tell you, in our situation here, we don't actually necessarily know where the creases are. We just know how things are glued together. Even in that situation, you could sort of figure out where the creases must be, because as I said, you can compute shortest paths once you have the gluing. So you compute the shortest paths between all pairs of vertices, something like this picture, except you don't know what it looks like in 3D. You can still compute the shortest paths. You know every edge must be a shortest path. So the edges are some subset of these guys. And so you've got lots of little convex polygons here. We know it must make a convex polyhedron. If it made two, Cauchy's rigidity theorem would tell you that they're the same. So even once you fix the gluing, you know that there's a unique convex realization, and there will be a unique set of edges from the shortest paths that actually realize it. The next class will be all about how to actually find those gluings and know that they actually will fold into some convex shape and how to find that convex shape, but that's it for today.
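The whole double count can be checked mechanically. This sketch assumes each list of face degrees comes from some connected planar (sphere-like) graph; it recomputes the board identity 4V = 8 + sum of 2(d - 2) over faces, and confirms that the per-face upper bound, 2 times floor(d/2) per face, always falls at least 8 short of the per-vertex lower bound 4V. Function and variable names are my own.

```python
def alternation_bounds(face_degrees):
    """Given the face degrees of the plus/minus subgraph (assumed to come from
    some connected planar graph), return (lower, upper) bounds on alternations."""
    F = len(face_degrees)
    E = sum(face_degrees) // 2          # handshaking over faces
    V = 2 + E - F                       # Euler's formula, rearranged
    lower = 4 * V                        # at least 4 alternations around every vertex
    upper = sum(2 * (d // 2) for d in face_degrees)  # at most 2*floor(d/2) per face
    # The identity derived at the board: 4V = 8 + sum of 2*(d - 2) over faces.
    assert 4 * V == 8 + sum(2 * (d - 2) for d in face_degrees)
    return lower, upper

for degrees in ([3, 3, 3, 3], [4] * 6, [3, 4, 5, 6, 5, 3]):
    lower, upper = alternation_bounds(degrees)
    assert lower >= upper + 8  # so "alternations >= 4V" can never hold: contradiction
```

Since the lower bound always exceeds the upper bound by at least 8, no nonempty plus/minus subgraph can exist, which is the contradiction finishing the proof.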
MIT 6.849: Geometric Folding Algorithms, Fall 2012. Class 13: Locked Linkages.
PROFESSOR: All right. So, lecture 13 was about a lot of things. We had algorithms for the carpenter's rule theorem. We had three of those. Let's see, there's CDR, there's Streinu's pseudo-triangulations, which we're going to talk about a bunch today. There's the energy method, which we'll talk about a little bit. Then there was locked trees. I'll give you-- there's one new result in locked trees I'll tell you about. And then we had the last topic, which was-- cheat-- oh, right. The last topic for today is four dimensions, which wasn't covered in class. And what we did talk about in class is 3D. Essentially no updates there, although open problems are still open. Fairly challenging. But 4D we didn't get to talk about at all. And that's not too hard, how to unfold any 4D chain. So that'll be fun. Also 5D chains we'll get-- and 6D and 7D, but not eight. Any dimension. First question for today is why do we care so much about expansiveness? And this is mostly-- mostly, I want to reference next lecture, which will use expansiveness. But, indeed, the original point of expansiveness was just a convenient way to ensure non-self-intersection. Essentially, before the idea of expansiveness, we had this idea of a really complicated chain-- that's not very complicated-- but for the carpenter's rule problem, we had this issue: well, how do we distinguish between a linkage that is locked within epsilon? However you draw it, if it's not self-touching, it's going to jiggle a little bit. Versus something that can eventually unfold. And there was no good way to distinguish between just jiggling a little bit and unfolding all the way. Which was frustrating, and expansiveness gave us a way to guarantee that. Not only did it avoid self-intersection, which is nice. If you expand, you don't self-intersect. But it also gave a way to show that things would actually be rigid if it was impossible to do. And so you couldn't move at all.
That reduced it to a rigidity theory problem or, really, a tensegrity theory problem. Which was kind of what this problem needed. That's why it was open for 25 years before we could prove it with tensegrities. So, that's one answer. But, in fact, now that we know expansive motions exist, they're really handy. They let us prove that the energy method worked, for one. And next class we're going to use them to add thickness to our edges, add polygons or regions attached to every edge. And if you have an expansive motion, it turns out under certain conditions, this thing will still work. Whereas with non-expansive motions, like those produced by the energy method, it won't work in general. So initially it was convenience, but it turns out to be handy mathematically. So, we've seen one example, just proving that the energy method works. That's actually our next topic. We will see another next lecture. But a few people asked about the proof of the energy method, so I wanted to review it. How do we know we don't get stuck in a local minimum? So this is an obvious question to ask if you're familiar with gradient descent. If not, here's a kind of picture. Imagine you have some energy landscape. In reality, this is a surface drawn over n times d dimensions, but I will assume n times d equals 1 because I can draw it. So here's configurations. And then on the x-axis, or in general all the other axes. And then the y-axis, which will always be one thing, is the energy function. So sometimes the energy is high. At some points it's going to actually shoot off to infinity. That's where you'd self-intersect. Remember, the energy function was the sum, over all edges {v, w} and over all vertices u, of 1 over the distance between u and the edge {v, w}. So when a distance goes to 0, the energy shoots to infinity. Those are the points we want to avoid. So those are bad things. But if we start at some non-self-intersecting configuration, we'll have some finite energy.
And the idea is, well, if we followed the negative gradient, that will decrease energy as fast as possible. And from here it's going to basically follow this trajectory. And it would get stuck in this little well. This is called a local minimum. But this local minimum might be lower. And so the worry is that it depends where you start. If I start over here, I'll fall down there. But if I start in this kind of well, I'll end up at this kind of accumulation point. And what I really want is maybe some kind of global energy minimum. That's the usual issue with gradient descent. That's why people don't like to use gradient descent sometimes. Because it gets stuck. Hey, Jason. However, with this particular setting we actually proved that the energy method does not get stuck in a local minimum. And I'll just review that proof because it went by quickly in lecture. So the idea in our setting is we know the carpenter's rule theorem. The carpenter's rule theorem tells us that any configuration where we're not done-- and I'll start just by thinking about a single chain, maybe an open chain or a closed chain-- any chain that is not already straight or convex, not only can be unfolded, but has an expansive motion to unfolding. Expansive motion tells us that if we look at any distance between a vertex and an edge, this increases. That's not how we define expansive. We define it between every pair of vertices. But it turns out if you have-- this is actually something we needed for non-self-intersection. If you have a vertex and an edge, and you know that-- well, this edge stays fixed length because it's a bar. We know that these distances increase. And this distance increases by expansiveness. Then, in fact, this point must go away from this bar. You can draw an ellipse. If you held the sum of these two distances fixed, it would move along an ellipse. But you've got to go outside that ellipse.
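That gradient-descent picture can be made concrete in a toy form. The sketch below is an assumption-laden simplification, not the actual algorithm: a 4-bar unit-length open chain parametrized by its three joint angles (so the bar-length constraints hold automatically, unlike the full n-times-d-dimensional flow), the repulsive energy summed over non-incident vertex-bar pairs as in the lecture's definition, and one forward-difference gradient step. The step size and starting angles are arbitrary choices.

```python
import math

def seg_dist(p, a, b):
    """Distance from point p to the segment from a to b."""
    (ax, ay), (bx, by), (px, py) = a, b, p
    dx, dy = bx - ax, by - ay
    t = ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)
    t = max(0.0, min(1.0, t))  # clamp the projection onto the segment
    return math.dist(p, (ax + t * dx, ay + t * dy))

def chain_points(angles_deg):
    """Unit-length open chain driven by its interior joint angles (degrees)."""
    x = y = heading = 0.0
    pts = [(0.0, 0.0)]
    for i in range(len(angles_deg) + 1):
        x += math.cos(heading)
        y += math.sin(heading)
        pts.append((x, y))
        if i < len(angles_deg):
            heading += math.radians(180.0 - angles_deg[i])
    return pts

def energy(angles_deg):
    """Sum of 1/dist(vertex, bar) over all non-incident vertex-bar pairs."""
    pts = chain_points(angles_deg)
    total = 0.0
    for i in range(len(pts) - 1):        # bar from pts[i] to pts[i+1]
        for j, p in enumerate(pts):
            if j not in (i, i + 1):      # skip the bar's own endpoints
                total += 1.0 / seg_dist(p, pts[i], pts[i + 1])
    return total

# One forward-difference gradient-descent step in joint-angle space.
angles = [100.0, 100.0, 100.0]
h, step = 1e-6, 0.1
grad = []
for k in range(len(angles)):
    bumped = list(angles)
    bumped[k] += h
    grad.append((energy(bumped) - energy(angles)) / h)
new_angles = [a - step * g for a, g in zip(angles, grad)]
assert energy(new_angles) < energy(angles)  # the downhill step strictly decreases energy
```

The downhill step exists exactly because the curled chain is not yet straight; the proof's point is that the only configurations with no such step are the straight or convex ones.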
This guy's got to get farther away from-- every vertex has to get farther away from every bar if the pairwise vertex-vertex distances increase. So if you take this expansive motion, you know that it is an energy-decreasing motion. So what that tells you is that any configuration that's not already straight or convex has an energy-decreasing motion. So this tells you, you never get stuck in a local minimum. Because what's a local minimum in one dimension? It's a place where you cannot go down from here. And those are the places we're worried about. And the claim is the only places where you don't have a downhill motion are places where you don't have expansive motions, and therefore the configuration must already be straight or convex. And then you're done. So you don't have to worry about getting stuck with the energy method. There is one tricky part, which is, if you have a linkage that consists of multiple pieces-- maybe you have some open chains, some closed chains, you can have some guy stuck in here. They won't necessarily straighten out, but-- there's a couple kinds of things you could do. You could unfold this chain, you could unfold this chain, or you could just let them fly away from each other. All of those things will decrease energy. And so the worry here would be that these two linkages fly apart and never really unfold locally. That would be bad for us because we're always decreasing energy. There's always a decreasing flow, but we may not actually finish. We might not finally make it. Our goal in this case, remember, is to make it outer convex. Meaning that all the outermost chains are straight, and all the outermost polygons become convex. So we want to argue this. And I'm just going to sketch the proof here. Essentially, when you have multiple components, once they're very, very, very far away, their contribution to energy, just by all the pairwise distances across between two components, will be very, very tiny.
And the claim is actually the energy method won't get you all the way to an outer-convex configuration, but it will get you within some Epsilon of outer convex. You get to choose whatever Epsilon you want. And so when these guys are far enough away, their contribution to energy is much less than Epsilon. Once they're, like, n squared divided by Epsilon away or something, the contribution between them will be far less than Epsilon. So, what really matters in energy is all the distances inside the linkages, inside the components. And we're following the gradient motion, which decreases energy as fast as possible. And so you can prove that it's better-- if you want to decrease energy as fast as possible, it's better at some point to unfold the pieces. Not to just let them continue flying apart. So I'm waving my hands a little bit there. But if you know that you're at least Epsilon away from being done, then the energy will be fairly high within the components. And therefore it's worth unfolding the components. But it's especially clear for one component. All right. Any questions about that? That's why the energy method works. The next set of topics is related to pointed pseudo-triangulations, which are indeed very cool. There's dozens of papers about them. They've spawned their own little sub-field of computational geometry. We touched on them a little bit in lecture, but there's a lot more to say about it. In particular, why they even work for the carpenter's rule problem. I didn't make that clear, but I would like to. So a couple things about them. They were actually originally invented-- or first used, I guess-- in 1994. At this point, they were called geodesic triangulations for a particular reason. And so for the carpenter's rule problem, we had a polygon and then we pseudo-triangulated the inside and the outside of the polygon. Here we're just pseudo-triangulating the inside. In that setting, they're called geodesic triangulations.
And it's for a particular data structures problem, actually. This is sometimes covered in 6851, which is Advanced Data Structures. You can check out online. The goal is to decompose this polygon in a kind of balanced way. So first you choose a pseudo-triangle that's kind of most in the middle of the polygon. Then you recurse on the sides. Actually number one is this one. And if you do this, it turns out if you want to walk from any point in this polygon to any other point in the polygon, you only visit log n pseudo-triangles. Just a nice, small number. And this is a particular problem called ray shooting, where you imagine you have a laser to shoot in some direction. The polygon-- you want to know when it exits the polygon, when it hits the boundary and where it hits. One way to do that is to walk through the pieces here, the pseudo-triangles. And if you only have to walk through log n of them, it turns out walking through a single triangle only takes log n time. Total amount of time is log squared n. So this is a pretty good way to do ray shooting in polygons. Which is nice. So that's where they originally come from but they weren't used very much until later. So let me tell you some nice properties about them. Pseudo-triangulation has 2n minus 3 edges and n minus 2 pseudo-triangles. Always. I should say, if you draw a pointed pseudo-triangulation, remember pointed means at every vertex of the pseudo-triangulation. You have some angle which is-- you have a reflex angle bigger than 180. This is the pointed property. Pseudo-triangles-- all the faces look like this. They only have three convex vertices. All the other vertices are reflex. So this is pseudo-triangle. Pointed pseudo-triangulation has both of these properties. And it's pretty easy to prove this by induction. I won't go through the proof here. It's interesting in the way that it is different from triangulation. So n here is the number of points. 
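Pointedness, as just defined, is a purely local angular condition, and it's easy to test per vertex. A small sketch (the helper and its name are mine, not from the lecture):

```python
import math

def is_pointed(vertex, neighbors):
    """True if some angular gap between consecutive incident edge
    directions at `vertex` exceeds 180 degrees (a reflex angle)."""
    angles = sorted(math.atan2(y - vertex[1], x - vertex[0])
                    for (x, y) in neighbors)
    gaps = [b - a for a, b in zip(angles, angles[1:])]
    gaps.append(2 * math.pi - (angles[-1] - angles[0]))  # wrap-around gap
    return max(gaps) > math.pi

# Edges to the east and north leave a 270-degree gap: pointed.
assert is_pointed((0, 0), [(1, 0), (0, 1)])
# Edges in all four cardinal directions: every gap is 90 degrees.
assert not is_pointed((0, 0), [(1, 0), (0, 1), (-1, 0), (0, -1)])
```

Here I use a strict inequality, matching the lecture's "reflex angle bigger than 180"; a vertex whose incident edges are exactly collinear does not count as pointed under that reading.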
And I'm thinking about pointed pseudo-triangulations of the points. Meaning, I take some points and I add edges until I can't anymore. Subject to the pointed property. So I add this edge, I'm still pointed. I add this edge, I'm still pointed. I add this edge, I'm still pointed. Can't add this edge. Can't really add that edge. Let's see if I can add this one and this one and this one. Any more? This one, and now I'm done. I can tell that I'm done because at this point, all of the faces are pseudo-triangles. They each have three convex vertices. And as soon as you have that all the faces are pseudo-triangles, we can show that you can't add any more edges while still being pointed. I'm still pointed because I only added edges that preserve pointedness. And this is a sort of greedy algorithm to construct a pointed pseudo-triangulation. It always works. And we can count the number of edges. I won't bother, but it will always be 2n minus 3. By contrast, if you look at regular triangulations-- so a triangulation would go all the way to the point where every face is a triangle, like this. And in that case they have more edges, obviously. In general they will have-- here there are two more edges because there are two interior vertices. So, in general, it would be 2n minus 3 plus i edges for a triangulation, where i is the number of interior points. And it will have n minus 2 plus i faces, triangles. Pseudo-triangulations are quite minimal in the number of edges. In particular, they are minimally generically rigid. This is the number of edges I want for a nice rigid structure. 2n minus 3 is what I should have. And indeed, pointed pseudo-triangulations are minimally generically rigid. That's the next thing. Minimally generically rigid. This was the Laman condition. We saw a couple characterizations, but in particular, you can fairly easily prove that they satisfy the Laman condition. Why? Well, first thing to check is they have 2n minus 3 edges. That, I just claim.
Second thing to check is that if I look at any subset of the vertices, k of the vertices, they should only have at most 2k minus 3 edges among them. It should induce at most 2k minus 3 edges. Well, let's look at any k vertices. What are the edges that they induce? Well, the thing I know about them is that they will induce a pointed set of edges. Because this structure without the blue edges is pointed. If I remove edges from it, it will still be pointed, [CHUCKLE] right? If you have a big angle, you'll continue to have a big angle if you remove edges from it. No matter which subset of edges I look at-- in particular, the induced subset-- it will still be pointed. If I have a pointed set of edges on some k vertices, I can use that greedy algorithm to finish the pseudo-triangulation. As long as you're pointed, you can add edges until you can't add edges, subject to the pointed constraint. At all times preserving pointedness. This is what I did to generate this pseudo-triangulation in the first place. If I take some subset of k vertices, I can do it again on those k vertices. When I'm finished, I will have a pseudo-triangulation. And that has 2n minus 3 edges-- in this case, 2k minus 3 edges. So I started with some pointed set of edges. I added stuff. I ended up with 2k minus 3. That means I started with, at most, 2k minus 3 edges. And so that's how you prove the Laman condition. It's kind of trivial, because pointedness is an inherited property. So if you believe in 2n minus 3, then you believe minimally generically rigid. Cool. Even cooler is that the converse is roughly true. In some sense, all Laman graphs can be drawn as a pseudo-triangulation. There's one catch, which is that you need planarity. Any planar Laman graph has a pseudo-triangulation as a realization. Realization was just a way to draw the graph in 2D. Now, pseudo-triangulations are planar, meaning none of the edges cross each other.
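As an aside, the Laman condition just verified — exactly 2n − 3 edges in total, and every subset of k ≥ 2 vertices inducing at most 2k − 3 edges — can be checked by brute force on small graphs. A sketch (exponential in n, so only for toy examples):

```python
from itertools import combinations

def is_laman(n, edges):
    """Brute-force Laman check: 2n-3 edges total, and every k-vertex
    subset (k >= 2) induces at most 2k-3 edges."""
    if len(edges) != 2 * n - 3:
        return False
    for k in range(2, n + 1):
        for subset in combinations(range(n), k):
            s = set(subset)
            induced = sum(1 for (u, v) in edges if u in s and v in s)
            if induced > 2 * k - 3:
                return False
    return True

# A triangle is minimally generically rigid: 3 = 2*3 - 3 edges.
assert is_laman(3, [(0, 1), (1, 2), (0, 2)])
# K4 has 6 > 2*4 - 3 = 5 edges: rigid, but not minimally so.
assert not is_laman(4, [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)])
# K4 minus one edge has exactly 5 edges and passes every subset count.
assert is_laman(4, [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3)])
```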
So you definitely need a graph that can be drawn in the plane without crossings. That's the planarity constraint. But if you have a planar minimally generically rigid graph, you can always draw it as a pseudo-triangulation. So it's kind of a converse of this theorem. And it's a sense in which pseudo-triangulations are universal for planar minimal generic rigidity. Which is pretty neat. I will not prove that theorem, but I will give you a visual sense of what the proof looks like. It's based on the Henneberg construction. So we know minimally generically rigid graphs can always be built up in a Henneberg way, either by adding new degree-two vertices, or by adding new degree-three vertices and removing one of the existing edges. And the claim is, you take any Henneberg construction. You can do it-- you can implement it in a pseudo-triangulation. So you start up with your single edge, that's a pseudo-triangulation. You keep adding things. Preserving the fact that at all times it is a pointed pseudo-triangulation. It's tricky, but for example, if you know that you want to add this red vertex next to these two, you find a place that will guarantee that this is a pseudo-triangle. This is a pseudo-triangle. It will be pointed for free because it's just a degree-two vertex. So the challenge is preserving pseudo-triangles. And same for adding the degree-three vertex. This is harder, of course, but you need to-- you find a little patch here where, if you choose the point in there, you get pseudo-triangles on all three sides. And also, that removing the edge still preserves pseudo-triangulation. It's definitely not trivial to do this, but it can be done. And it's kind of neat that it can be done for any Henneberg construction. The last thing I wanted to tell you about pseudo-triangulations is why they work for the carpenter's rule theorem. Why do they give expansive motions? Well, they're rigid of course. They don't give you any motions.
But what we claimed is that if you remove a convex hull edge-- so if we take a pointed pseudo-triangulation, and then we remove a convex hull edge, we of course get something that's flexible. Because we were minimally generically rigid, so if you remove an edge, you will now be generically flexible. It turns out, you're not only generically flexible, you are actually flexible for at least a little bit of time. So pointed pseudo-triangulation minus convex hull edge. It's not only flexible, but it flexes expansively. Meaning, if you look at all pairwise distances, they either stay the same-- when they're connected by a bar-- or they will increase. And this is what we wanted for carpenter's rule. You could do that motion for a little while. Then you might have to switch to a different pseudo-triangulation according to a flip, which I won't talk about. We already did in lecture. But at all times the claim is there is an expansive pseudo-triangulation motion by removing a single edge. So why does this hold? This, it turns out, you can prove using things similar to what we know from our proof of the carpenter's rule theorem that we covered. So it's kind of neat to see the connection between the two. So, why is this true? Well, we already argued that the thing is flexible without the expansiveness constraint. So we want to add the expansiveness constraint. So just like before, we're going to add all pairwise struts. So between all pairs of vertices-- unless they're already connected by a bar-- we add a strut between them. Meaning, that distance could only increase. Now we have a tensegrity. We want to argue that tensegrity is infinitesimally flexible. So, we're going to do that using the duality just like CDR, just like the proof of the carpenter's rule theorem we saw. We say, OK, we want to prove this thing is flexible by duality. That's equivalent to proving something about the equilibrium stresses.
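Before following the stress argument, the expansiveness condition itself can be checked numerically on a tiny example, since the sign of d/dt |p_i − p_j| is the sign of (p_i − p_j)·(v_i − v_j). The chain and motion below are my own illustration, not from the lecture: a 4-vertex open chain whose two end joints open toward a straight configuration.

```python
import math

# A symmetric open chain p0-p1-p2-p3 with both end bars bent 45 degrees up.
s = math.sqrt(2) / 2
p = [(-s, s), (0.0, 0.0), (1.0, 0.0), (1 + s, s)]
bars = [(0, 1), (1, 2), (2, 3)]
struts = [(0, 2), (0, 3), (1, 3)]  # all remaining vertex pairs

# Velocities: hold p1, p2 fixed; rotate p0 about p1 and p3 about p2
# so that both end angles open toward a straight chain.
v = [(-s, -s), (0.0, 0.0), (0.0, 0.0), (s, -s)]

def length_derivative(i, j):
    """Sign of d/dt |p_i - p_j| is the sign of (p_i - p_j) . (v_i - v_j)."""
    dx, dy = p[i][0] - p[j][0], p[i][1] - p[j][1]
    du, dw = v[i][0] - v[j][0], v[i][1] - v[j][1]
    return dx * du + dy * dw

assert all(abs(length_derivative(i, j)) < 1e-12 for i, j in bars)  # bars stay rigid
assert all(length_derivative(i, j) > 0 for i, j in struts)         # strictly expansive
```

So this particular velocity assignment preserves every bar length to first order while strictly increasing every non-bar pairwise distance — exactly the expansiveness property being claimed for pointed-pseudo-triangulation-minus-an-edge motions.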
Now in the case of CDR and a single chain, what we needed to say was the equilibrium stresses were all zero. And then we said, OK, in order to prove the equilibrium stresses are all zero, we use the Maxwell-Cremona theorem. That's equivalent to saying all polyhedral liftings are flat. And then we proved that by looking at the maximum z-coordinate. In this case, we can't argue that all the stresses are zero. It's not true. But if you recall the duality claim, I said if you have a tensegrity, the tensegrity is rigid if and only if the underlying linkage is rigid-- like when you replace struts with bars-- which is going to be true here. And there is a stress, an equilibrium stress, that is non-zero on every strut. Equilibrium stress being non-zero on every strut means that all the struts basically have to stay fixed length. They become bars and then you're done. I want to prove the opposite-- I want to prove it's flexible-- so I need to find a strut where there's no stress. If I find a strut where there's no stress, then basically that strut can expand and there's a motion. That is roughly true. [CHUCKLE] It's a little bit more complicated than that, but I'm just going to claim it suffices to prove there's a strut that has zero in every equilibrium stress. Which strut am I going to choose? I am going to choose the one that comes from this convex hull edge, e. So I removed a convex hull edge as a bar. It's going to get replaced, when I add all pairwise struts, by a strut. So I've got the vertices on the corner here. One of these, I replaced with a strut. The other guys are whatever they are. This is a pseudo-triangulation. Not a very interesting triangulation in this case. I'm going to add a vertex here. Now it's a pseudo-triangulation. So this guy got replaced with a strut, for example. I claim this guy is zero in every stress. Why is it zero in every stress?
Well, I claim, we've essentially proved this, in any equilibrium stress non-zero stresses must be on or interior to convex polygons of bars. OK, first believe this claim. The only place I have non-zero stress is when I'm interior or on the boundary of a convex polygon of bars. And then, so there might be some non-zero stresses in here. These edges might be non-zero stressed. Intuitively, what's happening here, these guys are going to serve as mountains in the lifting. And there's a hole. So you go deep into the hole there. Claim is that is the picture. So you've got some convex polygons. There's stuff in here that's-- could be bad. Could be stressed. But the stuff outside the convex polygons, whatever they are, have to be flat. Meaning they have zero stress. I'm jumping back and forth between the lifting and the stresses by Maxwell-Cremona. Those are the same thing. Now look at edge e. Edge e is a strut. The other edges here that I've drawn, those are the bars. I haven't drawn all the other struts. I should really use a color for this guy. So this red edge-- there's lots of other red edges I'm not drawing between all pairs of vertices, but the only bars are the white edges. And those white edges-- well there's some convex polygons, here's a convex polygon, here's a convex polygon. But none of the convex polygons enclose e. Because e's on the convex hull. E's on the outside here. So it can't be stressed by this claim. And so if you believe this claim, then you believe e has zero stress because it's not interior to any convex bar polygon. Because it's a strut, not a bar. OK. Now let's prove the claim. It follows from the argument we gave in, was it lecture 12? Previous one? Where we proved the carpenter's rule theorem. But it's not obvious that it follows. Let me explain why. I want to look at the maximum z-coordinate. 
I want to look at the region of all the points in the plane that, in the polyhedral lifting given by Maxwell-Cremona's theorem, are at maximum z. That is some region. It could be two-dimensional. It could be one-dimensional. Can't be three-dimensional. [CHUCKLE] It's part of the plane. So this region may have some one-dimensional parts. It may have some two-dimensional parts, like this whole part. All of this stuff might lift to maximum z. It could be-- have some non-convex parts, like this. Whatever. This is kind of generically what it might look like. I guess it could have cycles with not-filled interior. All these things are plausible, except we had this lemma. This is the key lemma, the heart of the proof of the carpenter's rule theorem. If you look at m and you look at a vertex, v, of the boundary of m. I'll use this del notation to mean-- ignore the points in here. I want points along the boundary here. And in particular I want to look at these vertices in this drawing. Look at such a vertex, v, and suppose that v is pointed. We didn't use this terminology because we didn't have it at the time. But maybe v looks like this. And there is an angle here that's reflex. That's a pointed vertex. Now I'm looking here just at the bars, ignoring the struts. So suppose vertex v has a reflex angle among just the bars. Then the claim is locally, this entire region must be in m. Must be locally at the maximum z-coordinate. How did we prove that? Well, the claim is this had to be flat because there are no mountains here. So if you think of a reflex region and v is at the maximum z-coordinate, right? It's on the boundary of the maximum z-coordinate so it has maximum possible z. So there's no way you can go up. These edges could be mountains. But all the other edges which are not drawn here, these blue-- there are some other edges. Those are the struts coming to v.
They all have to lift to valleys because struts can only carry positive stress. Positive stresses correspond to valleys. These guys are all valleys and this is at maximum z. You really can't use those valleys, right? [CHUCKLE] You're up here at maximum z. If you use valleys, you go to higher z, which is not allowed. So these actually all have to be flat out here. Which means locally, all this stuff is at maximum z. That was the proof we had a couple lectures ago. Now, once you know that that's at maximum z, you know that a lot of these things are impossible. Because look, here's a reflex angle among bars. There might be more bars here, but we know that this is pointed. Whatever's missing here, there's some reflex angle. We're assuming we have a pointed pseudo-triangulation. And then we remove an edge. It's still pointed. So that means all of this has to be in m. Is it in this picture? No, contradiction. So you cannot have one-dimensional parts because wherever you have a one-dimensional part, there's a reflex angle that's not in m. So that's bad. In fact, they all have to be two-dimensional parts like this. Can it be a two-dimensional part like this? No, because here's a reflex angle that is not in m. Contradiction. So the only situation you can have is a convex polygon with the interior all in m. Sorry, wrong-- the reverse. The exterior should all be in m. Because, you have all these reflex angles. That has to contain the pointed part. And locally, by this lemma, it has to be in m. So all this stuff has to locally be in m. The only way is for all of that to be in m. Inside, we don't know. Could be stresses there, and this could be a hole where you go deeper. All of this is at the maximum z-coordinate. The boundary, here. The inside, who knows. Could go down. So you can have stresses interior to convex polygons of bars, but not exterior to the convex polygon. M is either everything, and there's no boundary, and then there's no stress.
Or it's not everything, and then it will have to have stresses only interior to convex polygons. And that's what we're claiming, is that stresses can only be interior to convex bar polygons. And in particular then, e is not stressed. The end. This is definitely harder than the case we talked about for the carpenter's rule theorem. When you have a single chain, you're just trying to prove there's an expansive motion. It's a lot easier than this because m is much simpler. It could basically only be a path or cycle. You only have a single chain. Here m could be more complicated, but in the end, it's actually pretty darn simple. Any questions about that? Cool. Perfectly clear? [CHUCKLE] Definitely a bit complicated, but nice. Now, it turns out pseudo-triangulations are really at the core of expansive motions. I'll just mention one more theorem about them. So you may recall I mentioned, if you look at the space of infinitesimal motions of a linkage or a tensegrity, they form a cone. It lives in some high-dimensional space, but I'll try to draw a cone here. These guys go off to infinity. I'm drawing it in three dimensions, I guess. You have some place here-- this is the all-zero motion, where nothing moves. And these are potential different motions-- each of these points corresponds to an assignment of velocity vectors where you move and you preserve all the constraints. If I take such a motion, I can scale it up. It's still a motion. Motions form a cone. It's called a convex cone. Every motion can be scaled up or down all the way to zero. So if we look at the cone of expansive motions, it's also a cone. It's a cone of motions of a particular tensegrity where we add all pairwise struts. Then, the pseudo-triangulations correspond to these edges of the cone. These edges, they're really rays. They go off to infinity. These things are called extreme rays. Meaning, they're extreme in a particular direction.
Like if I want the motion that is the most this way, then it will be this ray. They're kind of the corners of this polyhedron. The edges of the polyhedron, if you will. In higher dimensions these are like edges, but they're called extreme rays in the case of a cone. The extreme rays in the cone of expansive motions equal pointed pseudo-triangulations minus one edge. I think I have a slide about it. Yes. So this is a paper by Rote, Santos, and Streinu. This is just a particular example for five points. These are all the different pseudo-triangulations minus an edge. In some cases, like here, you get two triangles. So that becomes rigid. So they fill in the whole clique to indicate that's a rigid component. But the claim is for five points. These are the edges of the expansive cone. Each of these guys has an expansive motion. Like this guy would rotate. Pretty simple. This one would open up a little. Like that. In general, all these guys have an expansive motion. Those are characterizing the cone. Those are kind of the extreme rays in this fairly high-dimensional cone. It has something like 10 dimensions-- 5 times 2. And you can prove that if you're interested. Read the paper. So, in fact, pseudo-triangulations are really core to expansive motions. If you wrote down the linear program whose feasible region is the expansive motions, then this cone is essentially the polyhedron in the linear program. If you know linear programming. And you follow the simplex method. The simplex method is all about finding where the edges are. And here they are. So you would actually, if you implemented the linear program, ran it on a simplex solver, you would get pointed pseudo-triangulations automatically. They would just pop out. That's actually how they were originally discovered-- and this is why they were discovered that way. End of pseudo-triangulations. All right. The next question is, have any of the open problems been solved?
[CHUCKLE] I get this almost every lecture and usually the answer is, no. So I don't tell you anything. But this time, there is a nice result which is related to this picture. These are slides from the lecture. We had this linear locked tree, which is minimal among all linear locked trees. Minimum number of edges. We have this equilateral locked tree. All the edges were the same length. It was not strongly locked. There were some positive distances here, but it is locked. I didn't mention this question, but an obvious open question is, can you get both at the same time? Is there a linear equilateral locked tree? If there were, it'd probably be a lot easier to prove locked than this mess. So it'd be nice to get both but, in fact, you cannot get both. There's this paper by a bunch of people in the open problem session from two years ago. It finally got published last year. And it proves a bunch of things, but one of the things it proves is that there's no equilateral locked tree that is also linear-- that also lives in a single dimension. And the way it proves that, essentially, is you take any linear equilateral tree. What does it look like? Well, it lives along a line here. It's segmented into equal-length chunks. So from a high level, it looks like this. Now in reality, there could be lots of edges along each of these segments. And there's interesting things happening inside one of these vertices. Could be something like this. That's one way to do a tree-like structure in there. It could have parts like this that go from side to side without touching these guys. Who knows what's happening here, but from a high-level perspective, it's a bunch of segments in a path. The idea is this is basically just looking at five of the vertices out of all n of them. But if we just look at those five vertices, everything looks good. This is in canonical form. Canonical form for a tree is where all the edges point to the right. Something like this is nice and canonical.
So right now this is canonical. What we do is look inside the path here. Look for a break point where things are not canonical. And then we fix it. So here's just an example of that. Here, this is what it looks like currently at a high level. Here's what it looks like in reality. So imagine-- so the tree is doing this stuff locally at all these vertices. Right now, we're coalescing this all into one big mega-vertex. And then it looks like this. But suppose now we realize, oh actually there's kind of a split here. There's stuff over to the right of this. There's stuff over to the right of this. They're not directly connected. Then what we'll do is pull it apart. Treat these as two separate vertices. Increase the number of vertices in this picture by one. So we end up splitting w here, according to whatever is actually happening in the linkage. And then we say, oh gosh, this is not really canonical, right? This edge is pointing to the left. These guys are pointing to the right, still. But this one's pointing to the left. So it looks like this picture got a rightward edge and then a leftward edge. We fix it by just rotating this edge 180 degrees, keeping at all times this thing pointing to the right. So this comes along for the ride. And then we will end up with this picture. Now we're canonical again with one more vertex. We'll repeat. Eventually we'll be canonical with all the vertices. And we're done. So if you take any two configurations, you canonicalize both of them. You will end up with essentially the same picture. It doesn't matter which root you choose. But it's already known that it doesn't matter which root you choose. You can change the root just by flopping the tree with one motion or a very small number of motions. So if you have two configurations, you canonicalize both of them. Then there's a motion between them and you get a motion from one configuration to the other. Therefore, there are no locked trees that are equilateral and linear.
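The canonicalization loop just described — find the first non-canonical edge, rotate it 180 degrees carrying everything beyond it along — can be mimicked for a single chain on a line, ignoring the self-intersection bookkeeping that the real proof has to handle (that simplification is mine):

```python
def canonicalize(dirs):
    """dirs[i] is +1 or -1: the direction of edge i along the line.
    Rotating edge k by 180 degrees, carrying the rest of the chain
    rigidly along, flips the directions of edges k, k+1, ..., n-1.
    Repeat until every edge points right (+1)."""
    dirs = list(dirs)
    moves = 0
    while -1 in dirs:
        k = dirs.index(-1)                 # first non-canonical edge
        dirs[k:] = [-d for d in dirs[k:]]  # one 180-degree rotation
        moves += 1
    return dirs, moves

dirs, moves = canonicalize([1, -1, 1, -1, -1])
assert dirs == [1, 1, 1, 1, 1]
assert moves <= 5
```

After each rotation the first leftward edge moves strictly to the right (everything before it was already canonical and stays that way), which is why the loop terminates in at most n moves.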
Kind of nice. The last thing I want to talk about is 4D. A couple people asked about 4D. Why is it so different from 3D? The essential reason is we have one-dimensional bars, but four dimensions of motion. That's a gap of three dimensions. Three dimensions is a lot. When you have one-dimensional chains in 3D, you have a gap of two dimensions. Two dimensions turns out to not be a lot. [CHUCKLE] So why is that? Let me prove it to you. Here is actually an animation of the motion, just for fun. This is in the textbook. The top row is if you are zoomed out from the very beginning. It's this mess and you basically pull out a string. It actually wiggles around a lot. Here's what it looks like zoomed in. This is the original mess. And you end up flopping this over here, then flopping it over there. And it's hard to see because this is a two-dimensional projection of a four-dimensional thing. But eventually you pull open the whole thing. So how does this algorithm work? It's actually very simple. For open chains, it's very simple. Closed chains, a little more complicated. Trees, also very simple. So unfolding 4D open chains. This is by Roxana Cocan and Joseph O'Rourke. Back before the carpenter's rule theorem. This publication was later but the original version was in 1998, I think, or '99. So what do we do? Well, what I do is look at the end bar. So let's say this is the end of the chain. From here it goes in some direction, and stuff. What I'd like to do is rotate this bar so that it extends this bar. So I have a 180-degree angle here. That's my goal. If I can do that, I can fuse that vertex, never rotate it again. Treat this as a longer edge. Therefore the remaining thing has n minus 1 edges if I had n edges originally. So I can just apply induction. In other words, just repeat that. Eventually the whole thing will be straight. So the challenge is, can I make a move-- can I fold this last bar to extend the next-- the second bar? Two issues.
Two problems that could happen here. One is that there's no continuous motion to get there. The other problem is that even instantaneously, if I just picked it up and dropped it into the right location, it might intersect the rest of the chain. Maybe the chain here comes and intersects that ray. If it intersects that ray, I can just jiggle the whole chain a little bit, and it won't. [CHUCKLE] Is that clear? Let's see, you've got these one-dimensional bars, and this is a one-dimensional target ray. So if a bar happens to be on this ray, just wiggle it a tiny bit. It's going to come off of that place. This will actually even work in three dimensions. This step. There's an even easier way to do it, which is just to rotate this bar. If you rotate this bar, you can basically point the ray wherever you want. And if you look at where that bar is pointing, it's kind of hard to draw the picture but you just have to miss all of these one-dimensional things. So it's very easy to miss them. So let's just assume for now-- we'll kind of see this again in a moment. Assume that this ray happens to be clear. There may be things that come very close to it. Maybe they go on the backside and then come on the front. But they never actually touch the ray. Now think about where this bar can go relative to this vertex. If I draw a little sphere here, I'm going to draw it initially in three dimensions but really it's in four dimensions. So I have a little sphere. Where this bar currently is corresponds to a single point on the sphere. I'm currently here. Where I want to be corresponds to another point on the sphere. My goal is to find a path on the sphere that gets to the x. Gets to the buried treasure. This is essentially treating this as a whole ray. So I don't even care that it's short. If I just imagine it going off to infinity, can I go from position a to position b? Now in three dimensions, this is a two-dimensional sphere.
And if you look at the obstacles, the things I have to avoid, those correspond to one-dimensional bars here. Which, if you project them onto the sphere, correspond to great circular arcs. So the worry would be-- let me draw them in red, the obstacles. The things that will kill you if you hit them. The worry would be you have what we call a cage, something like this, of one-dimensional obstacles on the sphere that block this x-point. Block the target from the source. And in 3D this-- if you draw what happens in the knitting needles, this is what happens. You can't move the last link to extend the previous one because one-dimensional barriers are a problem on a two-dimensional sphere because 2 minus 1 equals 1. But on a three-dimensional sphere, if you draw one-dimensional obstacles, they have basically no effect on the sphere. They do not disconnect the surface of a three-dimensional sphere. Can you see that? [CHUCKLE] The analogy-- if you just keep the difference in dimensions the same-- so we have a three-dimensional sphere versus a one-dimensional obstacle. Let's go down a dimension. You have a two-dimensional sphere and a zero-dimensional obstacle. What's a zero-dimensional obstacle? A point. So imagine you have all these red points which you can't touch. And then you have the initial position and the target position, b. How do you get there? No problem. There's tons of paths. It's basically the same in four dimensions, just harder to see. You have a three-sphere, which is the boundary of a four-ball. And you have these one-dimensional arcs on it. And they just don't block your way at all because they're so low-dimensional. So that's intuitively why 4D is easy, 5D is also easy, 6D, and so on. Because the obstacles are so tiny. The obstacles remain one-dimensional on this high-dimensional sphere. You can just fix one link at a time. Eventually the whole thing will be straight. Ta-da, 4D! Any final questions? Yes? 
AUDIENCE: Is the situation of folding a surface in four dimensions roughly analogous with folding [INAUDIBLE]? PROFESSOR: Folding a two-dimensional surface in four dimensions? Definitely, this argument breaks down. And so you can get locked things, there. If you'd asked earlier I could show one example we have of a kind of locked surface. I don't know if there's actually a locked 2D surface in 4D known, though. I suspect there is one. That could be an interesting question to work on. Think about that. All right, see you Thursday.
MIT 6.849: Geometric Folding Algorithms, Fall 2012
Class 10: Kempe's Universality Theorem
PROFESSOR: All right, so lecture 10 was about two main things, I guess. We had the conversion from folding states to folding motions, talked briefly about that. And then the bulk of the class was about Kempe and Kempe's universality theorem and the beginning of linkages. So let's start with an open problem about converting folded states to folding motions. This is a nice question. So suppose you have a sheet of paper like this one, but it has a hole in the middle. And then you construct some folded state of that piece of paper with a hole in it. And now you'd like to actually get there by continuous folding motion. So the question is why doesn't that work? What we know is that the same proof technique doesn't work. We don't necessarily know that it's impossible. That would be a nice problem to solve, actually. So what difference does the hole make? So this is the method we saw before. You imagine having some folded state, say, from a flat piece of paper to a crane. You roll up that piece of paper to a tiny triangle that maps to a nice almost flat portion of the crane. Then you basically play that motion backwards, but on the surface of the crane instead of on the flat sheet. And that always works for simple polygons, polygons without holes. If you have a hole in here, you could imagine just filling the hole. That's what the question suggested. Just fill the hole, do this thing, and then remember that the hole actually wasn't there. Erase it again. That should give you a motion. Erasing the hole is fine. The trouble is this part. So if I define a folded state of a piece of paper with a hole, it doesn't tell you where-- let's say there's a little hole here-- this mapping won't tell you where that hole goes in 3D. You have no idea. And in fact, it may be impossible to map the hole anywhere in this folded state that's valid. When you tear a piece of paper, new foldings become possible that were not otherwise possible. What's an example of that? 
When I do a big tear like this, now I can pull these points of paper apart. And it's impossible to fill this hole in in 3D. It's possible to fill the hole in here. It's just suturing. But when I separate things, that means the original sheet could not fold into this state. So you get new folded states when you have holes, that you cannot just patch the hole and hope to find a place that it folds over here. There are other issues. You could maybe patch it in with some stretchy material or something. I have one example in the notes where you-- suppose you have a tube of paper. So this is not a flat example, but it's an interesting example anyway. So let's say the outside of this tube is purple, and the inside is white. And one thing you can do with the tube is turn it inside out. So you can make the inside purple and the outside white. So this is possible with a tube of paper. If you imagine this as being a hole and the bottom side is also being a hole-- both of these are open right now-- then you could also imagine filling them in and getting a cube of paper. So in this case you'd get a cube that's entirely purple. This would be bad, because a cube cannot be turned inside out without self intersection. So this is an example where there is a folding motion without the holes filled in. There is not a folding motion when you fill the holes in. So I don't know what that says exactly about the problem, but it's some intuition why this is tricky business. That's for a polyhedron, of course. If you're just trying to take polygon with holes and fill it in somehow, I mean, maybe there's a way, but certainly the obvious way does not work. What else did have in my notes here? All right. Any questions about that open problem? Next question is about, it's a neat idea. We talked about linkages that have joints like this one where you must stay connected. And then we briefly also talked about a different kind of joint where this vertex was pinned right along another edge. 
And we showed you could simulate that by just making a zero area triangle here. So you can force these edges to come right at this point of that bar. Well, different idea is what if you allow this point to be able to slide along that bar? Is that some new kind of linkage that's more powerful or something? Turns out, no, it's not more powerful. You can simulate that too. So I thought I'd show that. I had to think about it. It's kind of fun. This would have made a good problem set problem, but I decided to cover it. So remember our good friend the Peaucellier linkage, which looks like this. So these guys are equidistant, and this is a rhombus. All the edge lengths are equal. Then this vertex lies along a straight line. And it has a limit. Let's say it can go this high and this low, something like that. We've seen that in animation before. So what I'm going to do is imagine this as being my bar, and this as my flexible point that can move along that bar. So here's the existing bar. And if I want to add a point that can slide along the bar, I'm just going to attach this construction to this bar, and this is where things get a little bit messy. These are the guys that are normally rigidly on the ground, but instead of them being on the ground, I'm going to attach them here. So I'm going to attach this to that, this to that, this to that, this to that. And because there's two connections, this is at the intersection of two-- I mean, this is a rigid triangle. So these guys can't move anymore. But, well, they move relative to this edge. So however this edge moves in the plane or in space, whatever, I guess in the plane here, this guy will be forced to track along the bar. So if you don't worry about intersections, which we're not in this lecture, this construction you could attach to any bar and make a point that can slide along the bar. Kind of fun. So you see the power of Peaucellier linkages. You can do all sorts of fun constructions like this.
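Underneath all of these constructions is the straight-line property itself, which is circle inversion in disguise: with the fixed pivot at O, arm length L, and rhombus side r, the input joint B and the traced joint D are collinear with O and satisfy |OB| times |OD| = L^2 - r^2. A quick numerical sketch of that fact (the lengths are made up, and D is computed from the inversion identity rather than by solving for all the joints):

```python
import math

def peaucellier_output(B, L=5.0, r=2.0):
    """Map the input joint B to the traced point D of a Peaucellier
    cell pinned at the origin O: D lies on ray OB with
    |OB| * |OD| = L**2 - r**2 (circle inversion)."""
    bx, by = B
    d2 = bx * bx + by * by            # |OB|^2
    k = L * L - r * r                 # the inversion constant
    return (k * bx / d2, k * by / d2)

# Drive B around a circle that passes through O (a crank of radius c
# centered at (c, 0)); the inverted point D then traces a vertical line.
c = 1.5
xs = []
for i in range(1, 12):
    t = 2 * math.pi * i / 13          # angles chosen so B never hits O
    B = (c + c * math.cos(t), c * math.sin(t))
    D = peaucellier_output(B)
    xs.append(D[0])

# Every traced x-coordinate equals (L^2 - r^2) / (2c): a straight line.
assert all(abs(x - (5.0**2 - 2.0**2) / (2 * c)) < 1e-9 for x in xs)
```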
You could make it just occupy a little portion of the bar, whatever you want. Just build the appropriate Peaucellier linkage. OK. So we're into linkages. Next we're into Kempe's universality theorem, or sort of Kempe's universality theorem that he almost proved. So one question just to review, there is this parallelogram and contra parallelogram. Pretty much all of his constructions other than Peaucellier at the end are a mix of these two gadgets. And there's this issue that you could flip one into the other. Here we had, this was the translator gadget. If you have two parallelograms, you can collapse one of them, say, and then flip it out to be a contra parallelogram. If you don't do any bracing, this can happen. And this is bad. You can actually see a few ways in which this is bad. One is here, we have the parallelogram-- or, the point of this translator gadget was to preserve this angle alpha, the green angle here. It's supposed to be the same as the green angle here. But if you do this flip, it won't be anymore. It's the angle between this and horizontal. Right now this edge is almost horizontal. So the new angle's almost 0. Here it's not. Here's an example with contra parallelograms. This is our angle trisector. If you line up this big angle with something, you get the thirds of it, or vice versa. We actually wanted to use it to triple angles. This is Kempe's original drawing. And if, say, this outermost contra parallelogram flipped open and became a parallelogram like this blue one, then you'd be in big trouble. These two angles no longer equal this third angle. So you'd no longer be tripling or trisecting. So that's why it's bad. Next question is, how did you fix it again? I mean, the parallelogram was easy. I don't think I need to review that using the construction we already talked about. Let me tell you a little bit about the contra parallelogram bracing, although it's very messy to prove that this works, so I don't want to spend too much time on it.
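One ingredient of that bracing is easy to check numerically before diving in: in a contra parallelogram, the midpoints of the four edges are collinear, while in the parallelogram state they are not. A sketch with made-up link lengths; the crossed shape is found by circle-circle intersection, discarding the root that gives the ordinary parallelogram:

```python
import math

def circle_intersections(c0, r0, c1, r1):
    """Both intersection points of two circles (assumed to meet)."""
    dx, dy = c1[0] - c0[0], c1[1] - c0[1]
    d = math.hypot(dx, dy)
    a = (r0 * r0 - r1 * r1 + d * d) / (2 * d)
    h = math.sqrt(max(r0 * r0 - a * a, 0.0))
    mx, my = c0[0] + a * dx / d, c0[1] + a * dy / d
    return ((mx + h * dy / d, my - h * dx / d),
            (mx - h * dy / d, my + h * dx / d))

def collinear(p, q, r, eps=1e-9):
    return abs((q[0]-p[0]) * (r[1]-p[1]) - (q[1]-p[1]) * (r[0]-p[0])) < eps

L, s = 4.0, 1.5                       # long and short side lengths (made up)
for theta in (0.3, 0.8, 1.4, 2.0):    # several configurations of the flex
    A, B = (0.0, 0.0), (L, 0.0)
    D = (s * math.cos(theta), s * math.sin(theta))
    # C must satisfy |BC| = s and |DC| = L.  One root is the parallelogram
    # C = B + D; the other root is the crossed, contra-parallelogram shape.
    par = (B[0] + D[0], B[1] + D[1])
    C = max(circle_intersections(B, s, D, L),
            key=lambda p: math.hypot(p[0] - par[0], p[1] - par[1]))
    mids = [((P[0]+Q[0]) / 2, (P[1]+Q[1]) / 2)
            for P, Q in ((A, B), (B, C), (C, D), (D, A))]
    assert collinear(mids[0], mids[1], mids[2])   # all four midpoints...
    assert collinear(mids[0], mids[1], mids[3])   # ...lie on one line
```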
The idea again was to take the midpoints of the four edges of the contra parallelogram. And first you prove those always remain collinear in the contra parallelogram state. They are not collinear in the parallelogram state, and that's kind of what's good about it. Then you find a magic point out here off the board called X, and X is going to be on this perpendicular bisector of PR. It's also the perpendicular bisector of SQ. Turns out this distance always equals this distance by the symmetry of contra parallelogram. That's actually really easy to see, because you have opposite edge lengths being equal. You get that symmetry. So it turns out that has to be a fairly specific point for this to work. All right. So what do we do next? And then we add these four bars. And the harder part of the claim is that this thing still moves with an appropriately chosen X. I kind of don't want to get into that too much. The easier part to see is that you can no longer-- if X is sufficiently far down there, this is no longer possible. So let's prove that first. So let's see. If you look at these bars, the bars PX and RX have the same length. That means however you fold this thing, X must be on the perpendicular bisector of PR. Here, that's fine. Over here, the perpendicular bisector would be, I guess, something like this, I guess. All right? So the perpendicular bisector of PR is some ray like that. OK. And simultaneously for the same reason, X must be on the perpendicular bisector of S and Q. S and Q are these opposite midpoints, and so you've got to be in some kind of perpendicular bisector here. If you have to be on both of those lines, that means in fact you must be at the center of this parallelogram. Or, I don't really need this as a center. It's some point inside the parallelogram. That's really bad for X. If these links are really long, say, longer than the perimeter of this linkage, then X has to be outside, because, yeah.
If it has to be far from S, R, Q, and P, you can't be inside the polygon. OK? So provided these lengths are sufficiently long, say longer than the perimeter, there's no way X is inside, and yet in the parallelogram state it has to be inside. And so the parallelogram state is impossible. So that's the easy part of the proof. The tricky part is to get this thing to still fold when this is in the contra parallelogram state, that X is still OK here. And I'll just mention to convince you that it's tricky-- you have to set the length of the XS bar so that, in squares, XS bar squared equals XP bar squared plus 1/4 AB squared minus AD squared. And I won't go into the proof. There's some details in the notes here. But what this says is that we're talking about XS versus XP. The other two are symmetric, so they have to be equal. So XS has to be a bit bigger than XP, and this says how much bigger. The formula says how much bigger. They can both still be very large, so we can still get the part that we need, but we need that they're actually related in this way for the whole thing to hold together. And I will leave it at that. Sorry, it's a little unsatisfying, but the details are just not that exciting. If you're interested in them, you can read Tim Abbott's master's thesis, which is on my web page. Cool. I wanted to briefly remind you about some of the project ideas for Kempe before we go to generalizations of Kempe. So one of them is to implement Kempe. It's never been implemented, as far as I know, in general form. And it would be interesting to actually see it happen in action, some version of it. There's a lot of different versions, but ideally with bracing, I guess. Another fun sort of more design project would be to design an alphabet and be able to make every letter of the alphabet with some linkage. That doesn't have to follow Kempe, but it would be in the spirit of signing your name. And I have here one example that's on the web.
I'll show you the web page of making the letter C. Here's what it looks like in action. So it's just a four bar linkage, pretty simple. I mean, it's three bars plus this closing bar. And then you look at the midpoint of this edge, and it happens to trace out this kind of letter C. So if you had a pen there, that's what it would make. You could imagine just a whole bunch of these in sequence and get another kind of mathematical font which would be fun to have, so. A lot more, 25 open problems left to go. I think I could do a circle. I can do an O, so. 24, and these [? fall ?] fast. Another direction would be to build some kind of sculpture inspired by Kempe. This is one by Arthur Ganson called Faster! And if you've been to the MIT Museum, you may have seen it. Sometimes it's out, although I've never seen it running. But there's a video of it online if you want to check it out. So this is a device. It's a kind of push cart. It's a sculpture push cart. You have to run with it, and as you run, the wheels power these gears, and the pen there with the hand signs faster, as in you should push it faster. It's pretty crazy. Now, this could be done with Kempe, but in this case it's not. It's done with these weirdly shaped gears. And those weirdly shaped gears control the different axes of the pen. X, Y, Z, in and out. You can see it actually lifts up to do the exclamation point. So that's one sculpture inspired by Kempe. In general, there are lots of ideas for making sculptures out of linkages. Arthur Ganson is particularly cool in making linkage-like sculptures that move kinetically. If you haven't been to the MIT Museum to see his stuff, you really should. It's super cool. All right. So that's that. Next we go on to generalizations of Kempe. So there are a few questions about this. One was higher dimensions, how exactly does that work? Particularly D equals 3. If you want to follow a surface, does that mean the linkage now has two degrees of freedom?
The answer is yes. You get to choose. If you want to trace out a surface, you'll have to have two degrees of freedom. So it's not just turning a circular crank. It's more like a spherical crank, which is indeed what my hand can do relative to my elbow: move along a sphere, well, maybe a half sphere or something. So that's what's possible there. If you want to trace out a 3D curve, then you only have one degree of freedom, of course. I thought I'd briefly tell you a little bit about how 3D works, or show you one of the constructions, which is the Peaucellier linkage. So here is the 2D Peaucellier linkage. In 3D, this won't work. Well, it won't work in the sense that this guy is rather unconstrained. But if we add, let's see, it's a little hard to see. Imagine a plane here, a vertical plane through these points. So if this is the board plane, I want to choose a point that's roughly here out of the plane. I'm going to draw that here. And then connect it up the same as before. So it's connected to here. It's connected to here. And it's connected to there. So it's just a third point just like these two. These two are symmetric, so this one's also symmetric. And all these lengths are equal if you put it at the right z-coordinate. Then this is like a 3D Peaucellier. I don't think Peaucellier invented it. Probably we did, but the result is that this point will lie on a plane out here. So that's cool. It's like the higher dimensional version of Peaucellier. Now if you actually want this point to move along a line, what would you do? AUDIENCE: Intersect two planes. PROFESSOR: Intersect two planes, exactly. Take two planes, intersect them. It is a line. There's the line of intersection. Generically, you get a line. Unless they're the same plane, you always get a line. So if you take two Peaucellier linkages, you overlap them at this one point. Then this point, on the one hand, with these two points pinned, will have to lie on one plane.
On the other hand, it'll have to lie on another plane from the other Peaucellier linkage, so you can force it to stay on an actual line. And both of these gadgets are useful. Sometimes you want things to stay in planes. Sometimes you want things to stay on lines. And once you have this construction, in fact, you could build the old Kempe construction, and just put in a ton of 3D Peaucellier linkages to force everything to stay in the XY plane. And then anything you could do in two dimensions you can now do in three dimensions. So that's observation one. You can do Kempe in the Z equals 0 plane, the XY plane. And that's the basic idea for 3D. You do all the stuff we're used to doing in there, all the angle doubling, angle addition, all these things, and then you just have to translate. You have some points in 3D. You need to measure their coordinates or the angles they form with other edges in 3D. You just map all those things into the XY plane, do your computation on the XY plane, and then map them back. I won't talk about that mapping, but it's not too hard. And once you can map anything you want into the XY plane, you could do your computation, map it back, and force your points in three dimensions to have whatever properties you need. So you can write down any polynomial now in X, Y, and Z and set that equal to 0 just like before. Do all the trig expansions you did before, and you can force a point to trace exactly that curve in 3D. So that's just a sketch of how 3D works. Skipping some details because they are messy. The idea is very simple. 3D Peaucellier. All right. Next question is related to properties mentioned in the lecture notes. So there's a couple versions of this question. They're asking the same things at different levels of detail. So what about curves not represented by a polynomial? I read this like, well, that's not possible. Everything you make out of linkages has to be represented by a polynomial.
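The two-plane step is ordinary linear algebra: each 3D Peaucellier cell pins the shared point to a plane n . x = c, and two such planes generically meet in a line whose direction is the cross product of the two normals. A sketch with made-up plane equations:

```python
# Two Peaucellier-constrained planes n . x = c intersect in a line whose
# direction is the cross product of the plane normals.  The plane
# equations below are illustrative, not from the lecture.
n1, c1 = (1.0, 2.0, -1.0), 3.0
n2, c2 = (0.0, 1.0, 1.0), 1.0

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

d = cross(n1, n2)                       # direction of the intersection line
# One particular point: solve the 2x2 system at z = 0 (which works here
# because the line's direction has a nonzero z-component).
det = n1[0]*n2[1] - n1[1]*n2[0]
p = ((c1*n2[1] - c2*n1[1]) / det, (n1[0]*c2 - n2[0]*c1) / det, 0.0)

for t in (-2.0, 0.0, 1.5):              # every point p + t*d lies on both planes
    q = tuple(p[i] + t * d[i] for i in range(3))
    assert abs(sum(a * b for a, b in zip(n1, q)) - c1) < 1e-9
    assert abs(sum(a * b for a, b in zip(n2, q)) - c2) < 1e-9
```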
That's true, but what about piecewise polynomials? That is possible. So you can do piecewise cubic splines. If you've ever drawn a curve in a vector drawing program, you've used splines. So those are splines, and they're made up of little polynomial pieces, like maybe a piece of a parabola here, and then you design it to transition, say, C2 into another parabola, or then into some hyperbola, whatever. So these kinds of general curves, you can do great things with splines. Pretty much every curve you've seen on a computer is a spline. And this is much better than the Weierstrass approximation theorem which we talked about before, which says you design one polynomial that approximates an entire curve, like your signature. But when it does that, it'll be like this. It's a very ugly-- if you use various pieces of polynomials, say all cubic polynomials, you can get some really nice looking curves. So you can really reproduce your signature. And there is a theorem mentioned in the notes that says you can trace any semi algebraic set. So I want to define semi algebraic, because this is actually closely related to splines. It's a little more general. So what's a semi algebraic set? So here's an example of a semi algebraic set. You have some polynomial, let's say on XYZ, and you want to say this is greater than or equal to 0. So it's a little different. Before, we could set polynomials equal to 0. Now I can set them greater than or equal to 0. So that's the semi part of-- if we just have this, this is an algebraic set, essentially. Semi algebraic, you can have half spaces in some sense. And then you can also take unions and intersections to form a semi algebraic set. Also complements, but it doesn't matter. So what does this mean? It means I can take all the stuff on one side of a polynomial, and then-- well, on one polynomial. Sorry, that's what I should do. So here's, let's say, my parabola.
I can take this region, and then I could say, OK, well, let's take-- this gets messy to do-- I could take also all the stuff outside this polynomial. So that's some bigger region here. I could take the union of those. I could clip off parts. Basically I can construct a spline in particular, but in general I can do lots of different things by unions and intersections of these polynomial half spaces. So this lets you construct splines. It lets you piece together components, because, for example, I could take this curve, and then intersect the side that's greater than or equal to 0 with the side that's less than or equal to 0, and I get exactly just this curve. And then I can, for example, cut to the left of this line. And then I'll have this curve, but only if it stops here. I could do the same thing with this piece, end up with this piece, and then take the union of those two pieces. So I can construct a spline. I can have regions, of course. Infinite area, finite area, whatever. So semi algebraic sets are very general. It's fairly easy to see that this is the most you could hope for, because in general, you look at a linkage, it's defined by polynomial equations. You say, well the squared distance from P to Q equals L squared for various things. So you only have polynomial equations to work with. Because you have flexibility, you get inequations. Because we can do things like this. I mean, this distance is now less than or equal to the sum of these two lines. So semi algebraic sets are the best you could hope for, a bunch of polynomial inequalities. And in fact, every such semi algebraic set is possible. So how do you prove that? It's actually really easy. We've essentially already done P of XYZ is greater or equal to 0, because we saw in Kempe how to set something equal to 0. To do that, we used the Peaucellier linkage, which I will draw for the Nth time, because I need to modify it. So here's a Peaucellier linkage. It forces this point to lie on a straight line.
If I add a joint here, and basically let this length get smaller if it wants to-- it can do things like this now-- I should draw it more like that. Maybe I should draw it to scale, so that's maybe something like this. This point can now move anywhere on the segment. Then this guy will end up being able to make, well not exactly this half plane, but a region of it. A big enough region if you set it up right. And as you may recall, the x-coordinate here was the sum of all my trig terms. And I wanted that equal to 0 for this to happen, but if I wanted to make it greater than or equal to 0, then I just needed to be to the right of that vertical line. And so that lets me take any polynomial and set it greater or equal to 0. So that's basically Kempe, except I used this modified Peaucellier that lets things go to the right a little bit. OK. Intersections are also easy. If I want to take the intersection of two sets, I just overlay those linkages and let them share the same point. I call this P that we're constraining. So that will apply multiple constraints to that same point, and so that is the intersection of those two semi algebraic sets. The one tricky part is unions, and this is what the question is asking about here. Intersection gadgets are clear, but what's the union gadget? Union gadget turns out to also be possible using Kempe. Kind of surprising. Let me show you how. So suppose you have linkage one constraining some point, I'll call it P1, here. And you have another linkage two, constraining some point P2. And what you'd like to do is build a new linkage that has some point P that either follows P1 or follows P2, so it takes the union of those two sets. So whatever L1 constructs for P1, whatever L2 constructs for P2, you want to be able to trace P1 or trace P2. And what we're going to do is build another box here, another linkage, which is going to be-- I won't write L. Got some more room here. It's going to be a Kempe construction for this polynomial.
So this polynomial, it's a little different from what we've seen. And over here it's going to be point P. P here is X comma Y. P2 is X2 comma Y2, and so on. P1, if you want, as X1 comma Y1. So this is an equation involving three points. In the past, we've only had polynomial equations involving one point. This is another generalization of Kempe which I may or may not have mentioned, but it's really easy. We had, what was it, a rhombus to represent a single point. You just have three of them, and now you've got three points. You could do the same trig identities, and so on, to expand out. You get, you might not call it a polynomial, you might call it a multinomial. Well, I guess it was already a multinomial in X and Y. Now it's a polynomial in X, Y, X1, Y1, X2, Y2. And that's exactly what we have here, various powers of those six variables. You expand them out just in the way you did before with trig, and you just need to be able to add up angles now, not just alpha and beta, but now there's six possible angles representing each of the x- and y-coordinates. So I claim I can build this thing with Kempe. OK. And if I build this thing, basically I force X, Y to be either X1, Y1 or X2, Y2, and that lets you take the union of linkages L1 and L2. The new point P can either live at P1 or can live at P2. Questions about that? Yeah. AUDIENCE: Should some of those be pluses? PROFESSOR: Should some of those be pluses? AUDIENCE: [INAUDIBLE] looks like X equals X1 is enough to make that whole thing 0. PROFESSOR: Yeah. I was curious about this. Do you see a way to fix this? The worry here is X could be X1, and Y could be Y2 [INAUDIBLE]. So you have-- AUDIENCE: X minus X1 squared plus Y minus Y1 squared all [INAUDIBLE] X minus X2 squared plus Y minus Y2 squared. PROFESSOR: Yes. Plus-- this is extremely ugly. OK, let me maybe rewrite it. Thanks. Yeah, as I was writing this I was like-- AUDIENCE: Squared in the wrong place. PROFESSOR: Squared plus Y minus Y1 squared.
And then this thing times yeah, thanks. Same thing with 2's. Equals 0. So the product being equal to 0 means one of the two terms better equal 0. And in this case, if this equals 0, because of the squares it forces it to be non-negative, which means the only time the sum is equal to 0 is when both of the terms are equal to 0, which means X equals X1 and Y equals Y1. So this plays the role of and here, and the product plays the role of or. So either X, Y equals X1, Y1, or X, Y equals X2, Y2. Thank you. Good fix. Just make a quick note of that. Do I have a pen? I won't. OK, so that is how you do the union of two linkages: it's just Kempe again. It's kind of cool. Any questions about that? Once you have unions, and we've already done intersections and half spaces, you can make any semi algebraic set. So what I think would be cool in particular is to implement Kempe for splines, say, quadratic splines. Because Kempe for quadratic polynomials is going to be pretty reasonable. You have to implement this union gadget to piece together the pieces, which is kind of messy, but in principle, you can piece together a bunch of polynomials and see what a spline looks like. You had a question. AUDIENCE: So if the curves you're taking the union of don't intersect, to get from one point to the other-- PROFESSOR: OK. Yeah. So you're asking about sort of continuous crankability of Kempe constructions. And indeed, if you had two sets that were disjoint, there's no way to continuously go from one spot to the other. What this is saying is that the overall trace of this point-- if you look at all possible configurations of the linkage, and then just see where P goes, it will trace out both of those connected components. On the other hand, if these do overlap, if there is a common point between these two linkages, then this will allow you to transition from one to the other.
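The fixed polynomial is easy to sanity-check numerically: each sum of squares plays the role of "and" (it vanishes only when both coordinates match), and the product plays the role of "or". The broken version below is one plausible reading of the unfixed board formula, included to show exactly the failure the audience pointed out, vanishing when X matches X1 but Y matches Y2:

```python
# Union gadget polynomial after the in-class fix:
#   ((x-x1)^2 + (y-y1)^2) * ((x-x2)^2 + (y-y2)^2) = 0
def union_poly(x, y, x1, y1, x2, y2):
    return ((x - x1)**2 + (y - y1)**2) * ((x - x2)**2 + (y - y2)**2)

# A naive product of plain differences (a hypothetical reading of the
# unfixed version), which wrongly vanishes on mixed coordinates.
def broken_poly(x, y, x1, y1, x2, y2):
    return (x - x1) * (y - y1) * (x - x2) * (y - y2)

P1, P2 = (1.0, 2.0), (4.0, -1.0)
assert union_poly(*P1, *P1, *P2) == 0         # P = P1 satisfies it
assert union_poly(*P2, *P1, *P2) == 0         # P = P2 satisfies it
assert union_poly(1.0, -1.0, *P1, *P2) > 0    # mixed coordinates do not...
assert broken_poly(1.0, -1.0, *P1, *P2) == 0  # ...but they fool the broken one
```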
So right, if you're building a spline, presumably you know that they're connected, and you want to be careful of the way you union them together to make them possible by continuous motion. I think that's always possible. But you do have to be careful. Other questions? All right. I want to get to the hypar gluing, but there's one more topic some people asked about, which are these origami axioms. I mentioned them very briefly at the end of lecture 10, and they sounded amazingly powerful. They let you solve all these things. The setting here is something called ruler and compass constructions. How many people here have heard of ruler or straight edge and compass? Almost everyone. A compass is this gadget like this. You can draw circles with it. And there's a standard mathematical formulation of a straight edge and compass which is, if I have two points, I can draw a straight line through them. If I have two points, I can draw a circle through them like that. And if I have lines and circles, I can take their intersections. And if that's all you're allowed to do, then you can prove that if you look at the coordinates of all the points you make-- and let's say you start with one point which I'll call 0 comma 0, another point, 1 comma 0. So I have the numbers 0 and 1. Then the numbers you can make, the coordinates you can make, are everything you could make from 0 and 1 by plus, minus, times, divide, and square root. You could do all those operations, and that's all you could do. And so what that means basically, is you could solve quadratic polynomials, but nothing more. And that's an old result from the 1800s and implies things like you cannot trisect an angle. You can bisect an angle because that only involves quadratic stuff. You can't trisect an angle, for example, 60 degrees, because that involves solving a cubic, which is not possible by straight edge and compass. You cannot compute the cube root of 2. It's the cube doubling problem, so all these great things.
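That characterization is visible right in the coordinate formulas: intersecting two circles takes nothing beyond the field operations and square roots, so every straightedge-and-compass coordinate stays inside iterated quadratic extensions of the rationals. A sketch (the helper returns one of the two intersection points; the sign choice is illustrative):

```python
import math

# Intersecting two circles uses only +, -, *, / and square roots,
# which is why ruler-and-compass numbers are exactly those reachable
# from 0 and 1 by the field operations and sqrt.
def circle_circle(c0, r0, c1, r1):
    dx, dy = c1[0] - c0[0], c1[1] - c0[1]
    d = math.sqrt(dx * dx + dy * dy)        # one square root
    a = (r0 * r0 - r1 * r1 + d * d) / (2 * d)   # field operations only
    h = math.sqrt(r0 * r0 - a * a)          # one more square root
    return (c0[0] + (a * dx - h * dy) / d,
            c0[1] + (a * dy + h * dx) / d)

# Unit circles about (0,0) and (1,0) meet at the equilateral-triangle
# apex (1/2, sqrt(3)/2): a classic compass construction.
x, y = circle_circle((0.0, 0.0), 1.0, (1.0, 0.0), 1.0)
assert abs(x - 0.5) < 1e-12 and abs(y - math.sqrt(3) / 2) < 1e-12
```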
Then came along Huzita in 1989, and at the same time, Jacques Justin in 1989, who you remember did Kawasaki's theorem and Maekawa's theorem. He also did this, all independently. So Huzita suggested these axioms for folding. If I have two points, I can fold a crease along those two points, that line. If I have two points, I can fold a point onto a point. That constructs a perpendicular bisector. If I have two lines, I can fold one line onto the other. That is the angular bisector. If I have a point and a line, I can fold the line onto itself, which forces the crease to be perpendicular and pass through this point. You see these in lots of origami diagrams. They let you find interesting lines. If I have two points on a line, I can fold this point onto this line while also folding through this point. That's a tangent of a parabola if you look at it correctly. And if I have two points and two lines, I can fold this point onto this line while simultaneously folding this point onto this line. There are actually four or eight different ways to do it in general, but you can find them all by just manipulating the paper. That's the claim. There's one other that Huzita missed, Justin saw, called, these days, Hatori's axiom, where you fold a point onto a line. And not shown here, I believe, is another point that you must fold through. No, that looks like this axiom. So I've forgotten what the difference is for Hatori. Oh, not drawn here is, there's an edge in the bottom. And you also want to fold the line onto itself, which means you have to be perpendicular to this black line down here. So that's sort of all you could imagine if you have these sort of points onto lines, lines onto points, lines onto lines. Axioms, if you enumerate them all, this is all of them for a single fold. And with these axioms you could prove you can solve any cubic polynomials. So basically you can also take cube roots. And so in particular, you can do things like trisect an angle.
It's what's shown up here. Fairly small sequence. This was discovered in the 1970s. You can trisect any angle, divide it into thirds just by these kinds of folds. And the tricky operation here is folding two points onto two lines simultaneously. That's the third degree operation. Everything else you can do with ruler and compass. And you can also do things like double a cube. This is computing a cube root of 2 ratio. First you divide your thing into thirds. This is easy to do, because we can divide by three. Then you fold these two points onto these two lines. And it turns out the ratio here between this y-coordinate and this y-coordinate is cube root of 2. A over B is cube root of 2. This is by Peter Messer in 1985. So that's kind of cool. But that's all you can do with single folds. The most you hope for is solving cubic equations. You can't quintisect an angle. You can't divide an angle into fifths. You can't compute the fifth root of 2, I assume. You can't do lots of things. But it's at least more powerful than straight edge and compass, which is cool. So then this was just for single fold operations. You could look at two fold operations, three fold operations where you make two folds and simultaneously align lots of things. With two folds you can-- oh, sorry, before I get to that. There is software called ReferenceFinder by Robert Lang, if you're curious about how to construct various things. So here it's plugged in, I want to compute a third. And it just enumerates all possible things you could do with five or six folds. And if it finds an exact solution, it will put it at the top. It also finds approximate solutions which are practically useful. So this is a sequence of operations that from a square, you could find this point at 0 comma third, which is nice. When you have two folds, you can quintisect an angle. These are Robert Lang's diagrams for quintisecting a given angle. You can see at the very end here we have an angle and it's divided evenly into fifths.
It's a bit complicated, and at some point it involves a two fold operation. It says here. Here's where it all happens. You fold here, and simultaneously you fold here. And you have to align all these points and lines and things. And that's cool. I think with three folds, they can solve any quintic equation. That's Alperin and Lang. And then the culmination, which I mentioned briefly at the end of lecture 10, is if you allow n folds simultaneously, then you can solve a degree-n polynomial [INAUDIBLE] order n. First you set up your piece of paper into all these independently manipulatable limbs. You mark off these coordinates, which are the lengths of the bars in your Kempe construction. And then you say, well, you've got to fold so that all these points align. That will construct a linkage state, and if you set Kempe up right, there will only be one state, which is the solution to your polynomial. So that's one way to do it. There are actually other ways to do it. Alperin and Lang have another solution. It's a little simpler, but this is fun because it uses Kempe. And that is a brief story of origami axioms. Any questions about that? There's a small chapter in the book. It's called Geometric Construction. I didn't prove anything here, because it's a little bit tedious to prove. You can solve any cubic, you can solve all these things. That's the most you could solve. But all these things have been completely characterized. I guess there isn't a complete characterization of, say, two fold axioms, exactly what you can make. That's still open. But the point is, as you add more folds, you get more power. Eventually you get all polynomials, which is the most you could hope for, for any kind of geometric construction. If there are no more questions, we resume our task of building something out of hyperbolic paraboloids. Remember this is the hat construction.
If you have a square in your polyhedron, you take four hyperbolic paraboloids and join them together like this picture. And then this is going to represent one edge. Those two edges of the hypar are going to represent one edge of the square. And I was suggesting we make this shape, which we have enough hypars for. We just need to do some taping. This is the truncated tetrahedron. It's got four triangles, four hexagons. So we've already made-- last time we made the four triangles. These are the three hats, and they can be joined together to form a tetrahedron by themselves. We've got enough hypars to make the four hexagons, and then we just need to tape them together, and we will get the truncated tetrahedron. Who would like to help? Come on up. AUDIENCE: [INAUDIBLE]. AUDIENCE: [INAUDIBLE]. AUDIENCE: Yep. PROFESSOR: It will be something like this. AUDIENCE: OK. PROFESSOR: I think we need a whole other six, but. AUDIENCE: [INAUDIBLE] Is there another roll of tape hidden somewhere? [INAUDIBLE] STUDENT: And the other main thing is [INAUDIBLE]. STUDENT: [INAUDIBLE] these two on a triangle here. It's like 36. Sort of.
MIT 6.849: Geometric Folding Algorithms, Fall 2012
Class 6: Architectural Origami
PROFESSOR: All right, so this lecture we talked about lots of cool origami design software, all written by Tomohiro Tachi. We had Origamizer, Freeform Origami Designer, Rigid Origami Simulator. And then he talked about a bunch of other things that he's built using that kind of technology, like cylinder folding and so on. And I got a bunch of questions. So first is an exercise. If you haven't picked up handouts, grab them. Let's fold something from Origamizer. And I made pretty much the simplest possible thing you can make, which is four squares coming together. And so this is the crease pattern, and we're going to fold it. You can start folding now if you like. So I'll do it too. So the first step in folding a crease pattern like this is to precrease all the folds. The easiest way to do that is to precrease them all mountain. And these black guys, we're actually going to fold-- I'm going to make a rectangular region here, because I didn't trim the paper to the square. So you've got to fold along these two black lines and all of these guys. So it's a good exercise in folding crease patterns, if you didn't already do one in problem set two, which is due today. When you're mountain folding, you can pretty easily guide along a straight line. So you just fold, unfold, fold, unfold. Don't worry about the mountain valley pattern at this point. It's really hard to precrease a valley. When you're precreasing the non-horizontal and vertical folds, make sure you only crease the part that's marked. Don't go too far. How we doing? How many people have precreased? No one, all right. I win. It helps that I've already made a couple of these. This is what it's going to look like when it's folded. On the back, you'll have the four rectangles. Beautiful. And then on the front where all the color is, you see all the crease pattern. And so we're going to-- as Origamizer always does, it adds these tabs, the tuck parts, along each of the edges.
And at the vertex you've got a slightly more complicated thing. So that's what we're going to be making. This is an example of the tried and tested advanced origami folding of almost anything. First precrease all the creases, then fold all the creases. Then you have your model, pretty much. But the precreasing is usually the really tedious part. When we're feeling high tech, we'll precrease with a sign cutter, which is a computer-controlled robotic knife blade. We'll score along the creases. Or a laser cutter, which can burn halfway through the paper. Or pick your favorite tool. The fanciest is a computer-controlled ball burnisher. Once you've precreased everything mountain, you want to reverse in particular the black lines. Those are really the only important ones to reverse, I find, in this model. So those are all going to be valley. You want to invert them to be valleys. Once you've got those guys, you can collapse your flaps. Here's the fun part. Ideally, you can collapse maybe all four of them at once. Let's see. It's been hours since I've folded one of these. I've forgotten how it's done. You get this kind of floppy version. These polygons aren't flat like they're supposed to be. And this is where we have too much material at the vertex here. And so there's these tucks, which are just crimps, as we've been calling them. And the blue here is mountain, the way I've set it up. So you just do all of those crimps. Mountain valley. Mountain valley. It's kind of like little simple folds but in this three-dimensional state. And when you put them all in, it just makes exactly the desired form. That's because this has been designed to have exactly the right angles after you do the crimps. And there's only one-- if your paper was rigid, there'd be only one possible shape this could take on. And there it is. This is supposed to be-- usually, you fold it this way so that all the crease lines that you print are on the backside, so you can't see them.
So then you have your perfect model out here. Not so perfect. You'd just be exposing the origami side with the tucks hidden on the backside. Usually, you also want to hide the tucks. But of course, if you want to get it the other way around, you just fold the blue valleys. The red's mountains. But I find it's a little easier to fold this way, where you can see what you're doing. You can see all the crease lines you're making. I can see the advanced folders are already going on to the next one. This one is six squares, all coming together. When folded, it looks something like this. But note that the crease pattern as I've drawn it kind of requires you to cut out this outer shape, because I haven't drawn the creases that are outside there, to make it easier to fold. So if you're folding ahead, ideally, you cut along all the black lines on the outside. Those guys. And then when you want to put in these tucks-- you see how these are lined up-- you just want to crimp that. All right. You can keep folding at home. I want to show you a little bit about the process of making that crease pattern. It's pretty easy once you know how, but if you haven't done it before, the first time is a little bit challenging. I use a 3D-- first you draw your 3D squares that you want to make. I use a program called Rhinoceros 3D, which has a pretty cheap academic license, and it's commonly used around here in architecture. You get a top-down view and a perspective view, and all these great things. In this case, I just wanted to draw four squares in a plane, so it was pretty simple. I should have the file here. Four squares. So it looks like this. I've got my four squares like that. Very exciting. Here's what it looks like in three dimensions. Wow. Then you export that into DXF format. Or sorry, into OBJ format. So you save as OBJ. You've got a zillion options. You want a polygon mesh. Polylines. UNIX. That works for me, though some variations may also work. Then you've got your OBJ file.
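The OBJ file in question is plain text: `v x y z` lines for vertices and 1-indexed `f` lines for faces. If you'd rather skip the modeling program, you can emit the four-squares demo model directly; a sketch, assuming Origamizer accepts quad faces (which this demo model suggests):

```python
def four_squares_obj():
    """Wavefront OBJ text for a 2x2 patch of unit squares in the
    z = 0 plane: a 3x3 vertex grid plus four quad faces."""
    lines = []
    for j in range(3):
        for i in range(3):
            lines.append(f"v {i} {j} 0")
    for j in range(2):
        for i in range(2):
            k = j * 3 + i + 1                # OBJ vertex indices are 1-based
            lines.append(f"f {k} {k + 1} {k + 4} {k + 3}")
    return "\n".join(lines)

print(four_squares_obj())   # 9 vertex lines, then faces like "f 1 2 5 4"
```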
You run Origamizer, which looks like this. And you can drag in your new OBJ file. Here it is. Exciting model here. This is in three dimensions. And then you say develop. Actually, first you should probably do angle condition, and then develop. Now you've got something like a crease pattern. It's actually just placing the squares, and you can change how the squares are placed here. I spread them out a little bit so that this tuck was not super tiny. This model isn't very constrained. And then you say crease pattern generation, and you get your crease pattern. And you can adjust how spread out you want these, how big you want your tucks to be. I made it nice and square, and something like that size. Then when you save, you get a crease pattern in DXF format. And then hopefully your drawing program can edit DXF. I think I opened it in Rhino, then exported to Adobe Illustrator, and then opened it in Illustrator. I removed all this stuff on the outside, because I just wanted this square boundary. But you can do whatever you like. So that's Origamizer in action. Wow, 900 frames a second. That's fast. Of course, if you do more complicated models, it can be a little more involved. We've seen the bunny. What haven't we seen? A mask. Never done this one. You should see develop. Boom. There it goes. And spreading them out, trying to solve all the constraints, at some point it will converge. In the lower left, you can see its current error value. When that gets down to zero you have a perfect crease pattern. Except for these green regions. The green regions mean that the tucks in the 3D model, some of the-- it's a little hard to turn around. Some of the tucks may be intersecting. So if we look closely we can probably find some tucks that are crossing each other. And if you want to deal with that in the software-- not just somehow fiddle around with it with origami-- there's a tool which is split extra wide tucks.
If you look at one of these, the green thing is the edge-tucking molecule. If you look at that, it will subdivide into two edge-tucking molecules. Now they're half this tall. They don't go as deep into this model. And they're less likely to intersect. As long as you've got a green thing, there's potential intersection. When you're done, this is probably a valid crease pattern, at this point. A little bit of green. Hopefully they're OK. You can keep splitting if it continues to be a problem. It just adds more and more creases. So that's how to use Origamizer, if you haven't used it already. And go back to slides. And the slide progression of that. Cool. So the next question is about-- essentially, it's a question about what makes a convex vertex versus a concave vertex. Concave is a little bit ambiguous, so usually we say non-convex, to mean the opposite of convex. So I'll use non-convex. Essentially, there are two or three kinds of vertices, depending on how you count. We've got something like this vertex of a tetrahedron. This would be convex, meaning that if you look at the sum of the angles of material at that vertex, that sum of angles is less than 360. You could also have a flat vertex, where it's equal to 360. That's what we just made. I've got four squares coming together, four 90-degree angles. Sum of those angles is 360. Or you could have a non-convex vertex. Non-convex, it's bigger than 360. And that's a little harder to draw. So I made what I call the canonical-- it's a nice clean orthogonal, meaning all the faces are horizontal, vertical, or the other way. Non-convex vertex. This has six 90-degree angles coming together. Six times 90 is bigger than 360. It's 540. So this is, of course, inspired by the video game Q'bert. I played it back in the day. And when you put it into Origamizer, it gives you some kind of layout like this. Then you ask for the creases. And boom, you've got it.
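The convex / flat / non-convex trichotomy is just a comparison of the total angle of material at the vertex against 360 degrees. A one-function sketch:

```python
def classify_vertex(sector_angles):
    """Classify a polyhedron vertex by its total material angle:
    less than 360 is convex, exactly 360 is flat, and more than
    360 is non-convex (like the Q'bert corner)."""
    total = sum(sector_angles)
    if total < 360:
        return "convex"
    return "flat" if total == 360 else "non-convex"

print(classify_vertex([60, 60, 60]))   # tetrahedron corner: convex
print(classify_vertex([90] * 4))       # four squares: flat
print(classify_vertex([90] * 6))       # six squares, total 540: non-convex
```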
And the thing that I printed out had this removed, which requires you to cut here, unfortunately. I also made the squares go all the way to the tip. Place them differently, and you end up with this crease pattern. And this is a little trickier, because you've got some extra tucks in here. They're quite small. And depending on how accurate you want to be, it's a little hard to fold it in exactly the right shape. Looks pretty good. It's got some-- little bit messy here in the center. If I use better paper, it'll be a little easier. So that's a non-convex vertex. And in some sense, the point of Origamizer was to deal with non-convex vertices, not just convex ones. Convex ones, you can kind of wrap around the paper, and just tuck away the extra material. Non-convex, you really have to tuck away material in a clever way in order to get all of these guys to come together. Because normally, on a sheet of paper, everything looks flat. Everything should add up to 360. But if you hide away material, you can get more corners to come together, and that's what lets you get non-convex vertices. So that's where that came from. You can't just take a convex vertex and flip it inside, because intrinsically, on the surface, it'll still look like a convex vertex, even if it's popped inside out. Some of the angles won't change. Still be less than 360. Cool. Next thing I wanted to show is Freeform Origami. In particular, there's a bunch of different modes in Freeform Origami, and they weren't really showing much in the videos. So I'm going to show you a little bit about how it works. So you download Freeform Origami. All this software is Windows only at the moment. So then you open your favorite model. It can be a 3D model or a 2D model. Miura-ori is a really nice example to work with. This is just straight lines in one direction, and then a zigzag in the other direction. I've got your 3D view on the left and right.
Now these views are not enabled, because I haven't turned on a lot of constraints. Now, as you see, there's a lot of different constraints I can turn on or off. In this case, I will turn on developable, which means that each of these vertices in this 3D model is flat, according to this model, so you want to constrain some of the angles to add up to 360. That means that it came from a sheet of paper. That makes it a folding. So this is different from the target in Origamizer, where it's just a 3D model. And now you can see up here the crease pattern, which will actually fold into that. Because it's developable, you can just locally unfold it, and you'll get a picture like that. The other thing I want to turn on is flat foldability. This is Kawasaki's condition. So it's going to enforce that this angle plus this angle equals 180. Or the sum of the odds equals the sum of the evens. When you add that constraint you guarantee a flat folding, and then this picture is the shadow pattern, if you make that flat folding, and just draw them on top of each other. OK, so those are my constraints, and that turns on all of my views. Now I can do-- currently, I am in simulation mode. This means it's acting like a physical piece of paper. So when I drag on a corner, it'll try to fold that up, or unfold it. But this stuff on the right, the crease pattern, is not changing. So this model, because it has a lot of boundary edges, it has a bunch of degrees of freedom. So I was like number of degrees-- number of boundary edges minus 3 is the number of degrees of freedom, in this general picture. They're crushed. So that's the idea. You can also hold down spacebar, and it'll just try to fold everything, kind of uniformly. Or you can hit B, and it'll unfold everything uniformly. So this is all, again, not changing the crease pattern up here. If I move away from simulation mode, if I turn this check box off, now I'm allowing the crease pattern up here to vary.
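The two constraints being toggled here are easy to state per interior vertex: developability says the sector angles sum to 360 degrees, and Kawasaki's condition says the alternating sum is zero (the odds sum to 180, the evens sum to 180). A quick checker, a sketch; the example vertex angles are mine, chosen Miura-style as two supplementary pairs:

```python
import math

def developable(angles, tol=1e-9):
    """Vertex came from flat paper: sector angles sum to 360 degrees."""
    return math.isclose(sum(angles), 360.0, abs_tol=tol)

def kawasaki(angles, tol=1e-9):
    """Locally flat-foldable: alternating sum of sector angles is zero
    (equivalently, odd-indexed and even-indexed sectors each total 180)."""
    alt = sum(a if i % 2 == 0 else -a for i, a in enumerate(angles))
    return math.isclose(alt, 0.0, abs_tol=tol)

vertex = [80.0, 100.0, 100.0, 80.0]   # Miura-style: 80 - 100 + 100 - 80 = 0
print(developable(vertex), kawasaki(vertex))  # True True
print(kawasaki([100.0, 80.0, 100.0, 80.0]))   # False: 100 - 80 + 100 - 80 = 40
```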
So if you watch this upper right corner, as I drag on this guy, crease pattern changes. It's now allowing the whole thing to be flexible. And I can do things like, oh, maybe I want to make this really high up here. And this is stuff you could not do with Miura-ori. We're changing the Miura-ori pattern. Zoom out over here. See what's going on. Maybe I want to bring these guys up as well. I can't make any 3D shape, because I am constrained by-- a little too exciting. You can always hit Control-Z to undo. Sometimes it's hard to satisfy all the constraints that I give it. We can do things like snap there. And wow, cool. So you have to be a little careful. This requires some finesse. Because the constraints are not always satisfiable. But this, whatever I'm making, at all times will come from one piece of paper-- and you can print out this crease pattern-- and it will be flat foldable. And the cool theorem by Tomohiro is that if you have a valid 3D state, like the one on the left, and you know it's flat foldable, and it came from a sheet of paper, then it will actually be rigidly foldable. And so we can unfold this thing. Whoa. Or fold it, in theory. I see. The problem is I should first turn on simulation mode. I don't want the pattern to change. Then I let it fold, or unfold, and then it will be well behaved. This is guaranteed to work. When I don't have simulation mode on, anything could happen. So it could explode. But that's how Freeform Origami works. So this question here was-- yeah, if you pull on a point when you're in simulation mode, you won't change the crease pattern. But if you turn off simulation mode, which is called design mode, then you can really change the pattern, and get it to fold into something that you want. And here's an example of something designed with this method. And then we waterjet cut it with little tabs. And this only folds once. You can't unfold it, or else the tabs will break. But it's pretty cool.
And you can just print out these-- this is made from one sheet of steel and folded by hand. This was made back when Tomohiro was visiting for that guest lecture. So first we made a paper model, made sure it looked good. And this one will fold rigidly. And we made another version, which I couldn't find. It was metal, but [INAUDIBLE] ridges folds rigidly, like the videos that he showed. AUDIENCE: Erik, what is the name of the program you're using? PROFESSOR: This is called Freeform Origami. Or maybe Freeform Origami Designer. All of these, if you search for Tomohiro Tachi software. It's also linked in some of these slides. You will find all three of these programs. I haven't yet shown Rigid Origami Simulator. Because it's, in some sense, subsumed by Freeform Origami, because Freeform Origami can also do the folding with keeping all the panels rigid. But they have some differences, which I might talk about now. Next question is, on the slides, Tomohiro showed there were tons of equations. He didn't talk about any of them, and some people really wanted to know about these great equations or the conditions. What are the constraints that go on in Origamizer, Rigid Origami Simulator, and Freeform Origami. And there are a bunch. And I don't want to go into them in lots of detail, because it can get complicated. But I'll give you a high-level picture of what's going on. So first one, this is Rigid Origami Simulator, which I didn't show you. But basically, you take in a crease pattern. You can hit spacebar to make everything fold. You can hit B to make everything unfold. And it keeps all the panels rigid. That's its goal. And there's essentially-- this software is written in a way that the geometry of each of these faces is determined by the original crease pattern. So you don't-- that's just given to you. And the only thing really that's free are the bend angles at each crease. So it parameterizes this 3D model by the bend angles.
And when you parameterize by bend angles, there's one key constraint you need, which is that if you walk around a vertex and you say, OK, I bend by this. And then I bend by this, and bend, bend, bend. I should end up back where I started. Otherwise, there'll be a tear in the paper, at the corner. So if you're going to parameterize by bend angles, you have a cyclic constraint around each vertex. And that is the one constraint you have. This was originally described by Belcastro and Hull. And so around a vertex, basically, every time you have a face of paper, you turn by that amount. There's matrix B. It's a rotation. Then you rotate around that crease by however much the crease angle is. And then you rotate around the face, and you rotate, rotate, rotate. You take the composition of all these rotations. That should end up with the trivial rotation, which is do nothing. Otherwise, there would be a tear here. So this is a constraint on the angles. It's a somewhat complicated constraint. It involves sines and cosines of the angles. But otherwise, if you ignore the sine, cosine stuff, this is actually linear. This is a bunch of matrices, rotation matrices. You're just composing them. So it's relatively clean. And then you get your folding motion. A little tricky to do by hand, but very easy on a computer to solve that linear system. OK, next we have Freeform Origami, which I just showed you. This has two constraints. Or there are two constraints that I turned on. There are, in general, more that you could turn on. One of them is developability. So here, we want to start from a piece of paper. And so we want the sum of the angles to be 360. So that is just a sum constraint. The other condition we want is flat foldability, which is the Kawasaki condition. If you satisfy both of these, we know that you'll be rigidly foldable, and that's kind of what Freeform Origami is about. You can turn them off. You can turn on other constraints as well.
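The Belcastro-Hull closure condition can be checked numerically: alternately rotate in-plane by each sector angle and about each crease by its fold angle, and the composition around the vertex must be the identity rotation. A sketch with numpy, with the crease modeled as the local x-axis; the test case is just the flat, unfolded state:

```python
import numpy as np

def rot_z(a):
    """In-plane turn by sector angle a (radians)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def rot_x(r):
    """Fold by angle r about the crease, modeled as the x-axis."""
    c, s = np.cos(r), np.sin(r)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

def vertex_loop(alphas, rhos):
    """Compose turn-then-fold rotations around one vertex. If the
    result is not the identity, the paper tears at that vertex."""
    M = np.eye(3)
    for a, r in zip(alphas, rhos):
        M = M @ rot_z(a) @ rot_x(r)
    return M

# Flat unfolded vertex: four 90-degree sectors, all fold angles zero.
flat = vertex_loop([np.pi / 2] * 4, [0.0] * 4)
print(np.allclose(flat, np.eye(3)))  # True: the loop closes
```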
There are a bunch in there, but those are kind of the core two that you typically want to use. And so it's always solving these constraints. So those two systems have relatively simple constraint systems, although Freeform Origami has a lot of extra bells and whistles. So you could do cool design. You can try to force two vertices to come together, and so on. You can try to make mountains be folded as mountains, and valleys folded as valleys. You can constrain which way creases go. Those are inequality constraints. The last one I want to talk about is Origamizer. This has a lot of constraints, and this is where it's probably more insightful to go through them. So remember we're trying to place these polygons into the plane so that these edge-tucking molecules are very simple. They're just a single crease. So that's our-- first we're going to just sort of parameterize how things are set up. Suppose you've got two faces, which share an edge in the polyhedron, the thing you're trying to make. We want to place those two faces somewhere in the piece of paper. And there's a rotation. So here, we've separated this edge from this edge. And if we extend those lines, they form some angle. We're going to call that angle theta ij. That's one of our variables that we get to play with. The other thing is how distant are they. There's wij here, and wji here. And just for that parameterization to make sense, you've got to satisfy a couple of conditions, that if you look at theta ji versus ij, it's negated. And if you look at the w's, you can take the sine of half the angle theta, and that tells you how much this w differs from this w. So these are two relatively simple constraints. Then, like in the previous two-- like in Rigid Origami Simulator, you have to have closure around a vertex. If we're placing these two parameters, w and theta, denote how this guy's placed relative to this guy.
And then you can-- if you look around a vertex where all these faces meet, there's the way this is parameterized with respect to this, and this to this, and this to this. Those should be consistent. And in terms of the thetas, it means that you should do one full turn around the vertex. You've got these theta i's. Then you've got these alpha i's, which are the angles of the face. Then you turn by theta. Turn by alpha. Theta, alpha, blah, blah, blah. In the end, you should get 360. And the equation's written this way because these are the variables that you want to constrain. These quantities are all known. You know all the alphas ahead of time. Those are the angles of your surface. So this is a linear constraint on the thetas. There's also a similar constraint on the w's. This is a little bit messier. It involves rotations, involving these angles and this other angle, capital theta, which is the sum of thetas and alphas. But it's essentially saying the same thing, that this closed loop is actually a polygon. It should come back to where it started. So if you do this walk, you end up back at your origin, 0, 0. The next constraint is the convexity of the piece of paper. So you're trying to-- you want the polygons on the outside to form a nice convex polygon, because you can always fold a convex polygon from a square. And so this is just a very simple constraint that, at the boundary, you have these-- the thetas should be greater than or equal to 180. That's very simple. The next ones get a little bit more technical, to make the molecules guaranteed to work. And so, in particular, for an edge-tucking molecule, we want this to be a nice convex polygon. And so this is actually fairly easy to constrain: all these angles should be in the right range. You don't want any giant angle. You don't want these to basically flip open to be more than 180. That would be bad. The vertex-tucking molecule is a little trickier. There are two main constraints we need.
One is that the thing that you fold, which is kind of floppy and has too much material-- you want it to have too much material, not too little material. You want each of these angles in the tabs to be greater than or equal to the desired angle over here, so that you can just add in a tuck, like these guys. Add in one of these little pleats to reduce the angle to whatever you need. If it's too small, no matter how much you fold it, it'll stay too small. So it's like the guy who keeps cutting the board and he says, "I keep cutting it, but it's still too short." So you want it to be too long initially, so you can cut it to just the right length. The angle to just the right length. This involves all these angles, which I don't want to define, but you can compute what the angle is here. It's easy to compute what the target angle is. You just measure it on the 3D model after you compute the tuck proxy. And so you're constraining the thetas, or constraining this phi value. All right, so then the other constraint is this tuck depth condition, which is the non-intersection part. You want these tucks to not hit each other. They're not so deep that they penetrate each other. And I don't want to go into the details of how that's specified, but it's another constraint. Now overall, these constraints are fairly complicated and non-linear. But Origamizer solves them approximately. And if you let it converge, and if it says it's got zero error, it has solved them. But it can take a while. So one of the questions was, can we do an example by hand to solve all of these systems? And the short answer is no. You really need a computer to solve something like this. At least I would. The solution method is essentially Newton's method, which you may have seen in some context.
But this is a high-dimensional version of Newton's method to solve non-linear systems, and it involves the Jacobian-- I'll just wave my hands-- which is partial derivatives with respect to all the different parameters you have. These are vectors, so this is a big matrix. And then you do a sequence of iterations using this method, which is a little easier to see in this picture. Essentially there are two things going on. So you're reacting to-- suppose you have a valid solution right now. Then someone drags on a vertex. When they drag on a vertex, let's say they drag it along a straight line. That's a linear motion of a vertex. And that will start violating constraints. If you go in that direction, probably not very good for all these constraints. In Freeform Origami, the edge lengths should all stay the same, if you're in simulation mode. So as you drag crazy, you're invalid. So the first thing you do is project. And this is what I call an Euler step. You project that direction to be a direction that is perpendicular to all of your constraints, which means that it preserves all the constraints to the first order. And that's, I think, this first red step. Sorry. In general, these green steps would be if you just preserve things to the first order. But if you keep following motions that are kind of correct-- they're correct to the first order-- you'll eventually drift away from correctness. And so you have to correct with the second derivative-- and that's these yellow steps-- to try to get back to a solution. So as you're dragging, first, you correct to be correct to the first order. You make a step in that direction. Then you do a sequence of second order steps to get closer and closer to where things are actually correct. If that made sense, great. If not, you should take a course on numerical methods in computer science. A little beyond what we can do here. And so I'm just going to leave it at that. Cool.
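The high-dimensional Newton iteration being hand-waved at is x ← x − J(x)⁻¹F(x), where F stacks the constraint residuals and J is its Jacobian. A generic sketch with numpy, using a finite-difference Jacobian and a least-squares step so underdetermined systems also work; the toy constraint system at the end is mine, not one of Tachi's:

```python
import numpy as np

def newton(F, x0, tol=1e-12, max_iter=50, h=1e-7):
    """Solve F(x) = 0 by Newton's method. The Jacobian is built
    column by column from finite differences, and the step uses
    least squares so F may have fewer equations than unknowns."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        f = np.asarray(F(x), dtype=float)
        if np.linalg.norm(f) < tol:
            break
        J = np.empty((f.size, x.size))
        for j in range(x.size):
            e = np.zeros_like(x)
            e[j] = h
            J[:, j] = (np.asarray(F(x + e), dtype=float) - f) / h
        x = x - np.linalg.lstsq(J, f, rcond=None)[0]
    return x

# Toy constraint system: unit circle intersected with the line y = x.
F = lambda x: np.array([x[0] ** 2 + x[1] ** 2 - 1.0, x[0] - x[1]])
root = newton(F, [1.0, 0.0])
print(root)  # converges to (sqrt(2)/2, sqrt(2)/2)
```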
Couple other questions about things Tomohiro said. So he said, it seems you don't need to worry about NP completeness of flat foldability. That's actually something we'll be covering in the next lecture. So if you don't know what that means yet, don't worry. We'll be talking about it. But it means, at the high level, it says it's computationally intractable to make things fold flat. And yet, he's solving it. Why is that OK? There's a couple things going on. In some sense here, we don't care about true flat foldability. Sometimes, he'd like to fold all the way to the flat state for compactness, and so on. That would be nice. But in particular, he just wants local flat foldability. He knows that if you have Kawasaki's condition, then you guarantee a rigid motion to fold for a little bit of time, and you can prove that. And so if you're just trying to get rigidly foldable things, it's enough to have local flat foldability, which we do know how to solve in linear time. And that's the Kawasaki condition, and that's what he's solving. And so, essentially, whatever he makes will fold for at least a little bit of time. And if he's lucky, it'll fold all the way to flat. Sometimes not. Sometimes might get collision in between. So you always get something that folds. And then if it doesn't fold all the way, you can try tweaking it until it does. So that's the high level version. But you can, in some sense, sidestep NP completeness here. I think there's still some interesting open problems. In this setting, it seems like, say, Freeform Origami Designer. It seems like you really-- yeah. I have to leave it at that. I don't know exactly how to formulate the open problem here. But I think there are interesting questions about proving NP completeness doesn't matter as much here. OK, another cool question. This is getting a bit higher level. This is rather tedious to fold by hand, as you've now learned, especially if you're going to make something like a bunny.
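The linear-time local check mentioned here is Kawasaki's condition at each interior vertex. A minimal sketch for a single vertex, assuming you already have the sector angles in cyclic order (the representation is illustrative):

```python
def kawasaki_flat_foldable(angles, tol=1e-9):
    """Kawasaki's condition at a single interior vertex.

    `angles` are the sector angles (in degrees) between consecutive
    creases, in order around the vertex, summing to 360. The vertex
    is locally flat-foldable iff there is an even number of creases
    and the alternating sum of the angles is zero (equivalently, the
    odd and even sectors each sum to 180). Checking every vertex of
    a crease pattern this way takes linear time overall.
    """
    if len(angles) % 2 != 0:
        return False  # an odd number of creases can never fold flat
    alternating = sum(a if i % 2 == 0 else -a
                      for i, a in enumerate(angles))
    return abs(alternating) < tol

flat = kawasaki_flat_foldable([90, 90, 90, 90])        # map-fold vertex: True
not_flat = kawasaki_flat_foldable([100, 80, 100, 80])  # sums to 360 but False
```

Note the second example: the sectors cover the full 360 degrees, yet the vertex still fails Kawasaki, so summing to 360 alone is not enough.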
Can we make a machine to do this? And so I wanted to show you a couple examples of machines for folding that sidestep folding by hand. This is an origami robot made at CMU by Devin Balkcom. He was a Ph.D. student at the time. And he's taking a piece of paper. It's a robot. It's open loop. It has no feedback, has no sensors, or anything. It is preprogrammed like an assembly machine to fold. Essentially, it can do simple folds. So it's got a little suction guy, to move things around, crease. Eventually it accumulates error if it does a ton of steps, so you'd need a closed-loop system with a camera or something to fix that. But it actually does a pretty decent job. This is real time. In this case, I think it's finished. One more fold. Crunch. It's pretty impressive what it can do. But it can really only do simple folds. It's going to have an issue if things really unfold a lot. It might accidentally hit something. And this should be a samurai hat. Tweaking it a little bit by hand. Wow, it looks like a tetrahedron. OK, so that was one example. Here's a more modern example. This was done at Harvard just last year. And this is a process involving laser-cutting individual components, aligning them with these tabs. Sorry, these pins. Assembling them together to make hinges. So they use laser cutting to get two-dimensional surfaces, and they use folding to make 3D shapes. Kind of like pop-up cards. This is what a typical hinge looks like. They've got all the different materials here to attach different parts. And these piezoelectric folding actuators. This is their overall design. They're trying to make a robotic bee. And this is what the final created thing looks like. It's mostly carbon fiber. And then these are the piezoelectric actuators. So this is the thing they want to make. They build a scaffold around it that causes the whole thing to fold into its desired 3D shape. So they're taking flat parts.
And they want to do things like take this flat part and raise it. So what do they do? They add two hinges to make this part move along this straight up and down motion. And then each of-- that's just a scaffold. Each of the gray parts they actually want to build. They add the appropriate hinges to cause it to fold in exactly the way they like. So here, for example, the wing is staying vertical. This part-- it keeps moving around on me-- is turning 90 degrees. You do that with all the parts. You get them all to fold like that. Here's a prototype in real life. And then here's the final version. This is actually in real time, so it folds really fast. Zoom. And then you've got your assembled thing. One more. And then they add this particular metal that fuses the hinges together, so that they will no longer unfold. So that's what it looks like locked. It's all done in an automatic process. And then you laser cut all of the scaffold away, and you've got your finish thing. A sense of scale, this is super, super tiny. It's tedious to fold these by hand. And in this way, they can mass produce them. Here's what it looks like when you connect a battery. Either it will fold at 1 Hertz or at 30 Hertz, which you can barely see, because it's a 30 Hertz video. So you get your robotic bee. It's not yet controllable. It doesn't have a battery attached, but it's extremely lightweight, and very powerful. This is a 3D printed prototype they made first. And you can use it to mass produce your objects. Essentially, automatic procedure. And it's all by layering up flat layers, and then getting it to fold into 3D things. And so you could imagine something like this to execute some complicated foldings, although that's future work. This is, in some sense, a fairly simple thing to build. And we're working on making more complicated things. So that was some robotic folding for you. Next question is, any cool open problems here? So I have two related to rigid origami. 
One of them is, if I give you a crease pattern, tell me whether it is rigidly foldable at least a little bit or to the first order or something. So I'll just give you something, like this will fold rigidly. I want to say yes or no, does this fold? Seems like a pretty natural question. And indeed, if all the vertices are degree four like this, only four edges coming together, we can solve it efficiently. But given a more complicated general pattern, characterize when that is possible. We don't have a good algorithm for that. I don't know if there is one. The more general question is-- that's kind of an analysis question. The design problem is, I want to design cool rigid origami. And we've seen bunches of examples of rigid origami. Here's a new one I wanted to show. The Hexa Pot. I believe this is rigid origami; there's a Kickstarter for this. And here is one of them. It folds nice and flat. And it has this 3D state, where you can boil water on your camping stove. And they have a video of cooking noodles. It cooks noodles. It cooks pasta. It cooks sausages. Anything you could imagine, you can cook in here, as long as it fits in this space. It's waterproof, obviously. We saw the telescope lens. We saw this origami stent. How are these designed? Inspiration. Some human had a cool idea, tried it out, proved that it actually folded rigidly. Great. But can we come up with algorithms to design things like this? Could you close the door? Here's another just one-off example. You may have seen these. These are called shopping bags, and they're usually paper shopping bags. They're usually folded along this crease pattern. It turns out that's not possible if all the panels are rigid. This thing cannot fold at all. It's rigid, if the panels are made of steel. And it's actually fairly easy to see that, if you look at this corner, these are four 90-degree angles coming together.
And if you look at four 90-degree angles, two straight lines, like in a map, you could fold one of them. But only when you get all the way to 180 can you fold the other way. So right now, this guy is folded 90 degrees. This can't fold at all, which means this fold angle is zero. And we know from Tomohiro's lecture that a degree four vertex has one degree of freedom. So if this is zero, they all have to be zero. And so the whole thing is rigid. Of course, if you add extra creases-- this is done with Devin Balkcom and Marty. So the same robotic folding guy. Here's a visual proof of what goes wrong. You end up with a tear here. You can fold everything except one of those guys. If you add extra creases, you can kind of roll the lip of the bag down, and repeat that until it's really short. And then once it's below this height of one to two, you can just crush it like a garment box. And so you can do that. You can actually fold this thing flat, and you can undo it and unfold it. An interesting open question is, these paper bags are manufactured in their flat state. If I give you a flat paper bag, can you open it by adding creases? I don't think we know the answer to that. But we conjecture the answer is yes. There are a bunch of different designs out there. This is done with-- it's hard to read. But this is with [INAUDIBLE] in particular, who did the origami stent. It's kind of a twisting box. Works up to a cubical box. And he just had a paper with Woo this year on a more practical folding. So when we roll the lip, we get a lot of layers of material. This one works for a fairly tall bag. I forget exactly how tall. Maybe three to one. And it has a fairly small number of layers. They even built one out of sheet metal to prove this is a practical way to make rigid shopping bags. And the last question here is, could you make one crease pattern that folds into two different shapes? Could you make an Origamizer that at one point will make one shape?
And then you super-impose another crease pattern, ideally sharing lots of creases, to make a different shape? And the answer is, watch the next lecture. Yes, we will be talking about universal hinge patterns, where you take a different subset of the creases. You can fold anything, provided it's made up of little cubes. And that's one answer to that question. Any other questions? Yes. AUDIENCE: Erik, going back to the rigid foldability, you do understand rigid foldability of a single vertex, right? It's just a global [INAUDIBLE]. PROFESSOR: Right. Rigid foldability of the single vertex is easy. Almost anything is rigidly foldable. But yeah, its general crease pattern [INAUDIBLE]. AUDIENCE: So it's very similar to the flat foldability [INAUDIBLE]. PROFESSOR: Yeah, it's like flat foldability, except for flat foldability, we know that testing a single vertex is easy. Testing a whole crease pattern is NP hard. What we'd like to prove is either NP hardness, or get an algorithm for rigid foldability. AUDIENCE: There's no such result [INAUDIBLE]. PROFESSOR: Right, there's no such result for rigid foldability yet. Other questions? All right. Enjoy folding.
MIT_6849_Geometric_Folding_Algorithms_Fall_2012
Class_5_Tessellations_Modulars.txt
PROFESSOR: All right. So today we resume efficient origami design. And we had our guest lecture from Jason Ku, which was definitely a different style of lecture. More survey, lots of different artwork. And it had some practical hands-on experience with TreeMaker, which you're welcome to do more of on your problem set. And so there weren't a lot of questions because it was not a very technical lecture, so I thought I'd show you some more examples of artistic origami, things not covered by Jason, and some other different types of origami. So we start with a bunch of models by Jason because he didn't show his own models, so I thought it'd be fun. We've seen a bunch already in this class, but this is a really nice F16 that he designed. And these are all done with the tree method of origami design. Another lobster. We saw Robert Lang's lobster before. This one's different. This is a version of the crab that he showed. So the one you saw was the very preliminary, very rough folding. But with some refinement, especially in the shaping stage, it looks pretty nice. Even on the back side you get some nice features. We have a little rabbit. This is kind of in the traditional style that he showed, where you've got sharp crease lines that really define the form. I assume that's what he was going for here. This is a non-tree-method design. This is using what's called box pleating. We've heard about box pleating. And it means you have horizontal, vertical, and 45 degree diagonal folds. But you can use it just to shape box-like shapes. It originally was used by Moser to make a train out of one rectangle of paper. But here we've got a pretty nice sports car convertible, even with a color reversal. So it's pretty cool. This is one of my favorite designs of Jason's. Bicycle, one square of paper, color reversal. Really thin features. Probably lots of layers up there, but pretty awesome. This is using the tree method.
Obviously, the paper is not connected with a hole there, so there's some part here that's attached just by folding to another part. Yeah, questions? AUDIENCE: How big is that? PROFESSOR: I'm trying to remember. I think the bicycle's about that big. Anyone remember? It's been a while. So presumably he started from a piece of paper maybe twice the size or so. Looks big here. And this is a really complicated butterfly, very exact features, very cool. These are all from his website if you want to check out that. I'm just giving a selection. Some of them have crease patterns, and you can very clearly see the different parts of the model, and the rivers, and so on. Others do not. This is one of-- we're going back in time, so this is when Jason was just starting at MIT as an undergrad, I believe. This is the dog of someone who works at the admissions office. It's very cool. And this is one of his earliest models, 2004. I think it's pretty elegant, on the ice skate with color reversal. So that was Jason, for fun. One question we had is what about origami from other materials, not just paper? And we've seen a few examples of that, but I thought it'd be a fun theme. And we'll come back to this a couple times today. This is-- I don't know if you call dollar bills paper-- but there is this whole style of dollar bill origami, as my t-shirt last class indicated. And this is one of the more famous dollar bill folders. And he has hundreds and hundreds of designs. One of his latest is the alien face hugger for Prometheus and so on. So there's a ton of stuff done. There's the particular proportion of the rectangle of a dollar bill. And it's also just plentifully available. The US is one of the cheapest currencies to do bill folding because it has one of the lowest value bills. So there's that. These are all folded from toilet paper rolls, so moving up to cardboard. This definitely is pretty different in the way it acts relative to standard paper.
And there's this guy who makes these incredible masks. Very impressive. And I'm guessing crayon or some kind of rubbed color. So that's pretty awesome. Here's something called Hydro-Fold. This just came out this year by this guy Christophe Guberan, where he's got an inkjet printer. He's filled it with a particular kind of ink that he custom makes. And as it comes out of the printer, it folds itself. It's been printed on both sides. So one side you get mountains, the other side you get also mountains, but relative to that, it's valleys. So there's some fun thing happening as the liquid dries out that causes the paper to curve. You can't get 180 degree folds, but you can get some pretty nice creases. I don't know exactly how accelerated that is, but he's hopefully visiting MIT later on and we'll find out more. So it's using regular paper, but a different folding style, a different material for folding. You can also take casts of existing paper models. So Robert Lang has done a bunch of these with a guy named Kevin Box, where they take a paper model and cast or partially cast it. In this case, in bronze. In these cases, stainless steel. So these are two. This is the traditional origami crane and a Robert Lang complex crane. And for fun, the crease pattern for those two looks like this. And this is I think mostly on a 22.5 degree grid system. May actually be-- you can see here there's a river that's not orthogonal. So it's not intended to be box pleated. So that gives you these 22.5 degrees. There's some other features out here, but most of it is this 22.5 degree system. So as you might guess from now, there's some questions about this. You don't necessarily entirely use the tree method. You use a mix of different things. In particular, there's a technique called grafting where you can combine two models. If you're interested in that, check out Origami Design Secrets.
And for things like the dragon where you have this textured pattern-- which we'll get to, it's called a tessellation-- and you want to combine that with doing tree method stuff, you can do that. But it's not necessarily mathematically formal how to do that. It's just people figure it out by trial and error. There are probably interesting open problems there; they haven't been formalized. Here's another cardboard design. This is by our friend Tomohiro Tachi. That's him. So this was initially a bed. And you fold it up. And you need a pillow, of course. It turns into a chair. So that's pretty awesome. So that's one of the great things about using non-paper: you get a lot more structural integrity and support. And that leads us into steel, which also makes for stronger models. And this is another design by Tomohiro. We made it here at MIT using a waterjet cutter in CSAIL. And it makes a pretty nice table. This is based on a curved crease design which was initially drafted on paper, and then in plastic. And then when it seemed to be working pretty well, we waterjet cut this steel and these perforation lines. And then many hours of painful bending or difficult bending later, some hammering and so on, we got it to fold into a pretty nice shape. So that's one example. I have another example. This is out of much thinner steel. And this happens to be laser cut using a newer laser cutter in the Center for Bits and Atoms in the Media Lab building. So a little bit of a cheat. This is not from a square of paper. It's been cut a little bit smaller. I need chalk. Jason. So take a square of paper. You can cut out-- these are 22.5 degree angles. You can cut out material like this from your square and still make a good crane. But it substantially reduces the number of layers you get, especially at the corners. And so we exploited that because this is pretty thick material. And this is just the Center for Bits and Atoms logo. But pretty cool. You can make a crane.
And we added these crease lines to get the nice bow of the crane. So pretty nice. This is made by Kenny Cheung, who just graduated, PhD. So that was some metal. Next topic is tessellations. So this is a particular style of origami. It goes back-- probably the earliest tessellation folder is Ron Resch. The early history's a little hard to know for sure. Ron Resch was an artist starting in the '60s. He died just a few years ago. We've met him. Pretty crazy guy. Did a lot of cool origami foldings early in the day. There's a patent that describes this particular folding. And what makes a tessellation is essentially a repeated pattern of some sort. It could be periodic. It could be aperiodic. You've probably heard of tessellations like the square grid or some kind of mesh of two dimensions. Origami tessellations are in some sense trying to represent such a tessellation. Here you've got the triangular grid, if you look closely, after folding. But also if you look at the crease pattern itself, it is a tessellation. It's going to be a repeated pattern of polygons. So you've got sort of two levels of tessellation going on. It's like a double rainbow or something. And so there are lots of examples of this. Here's some kind of traditional flat origami tessellations. Some of these are more traditional than others. You've got some very simple-- well, not simple, but beautiful repeating patterns. Octagons and squares here. You can count them. And this is still periodic. Then we get to some less periodic stuff. And so there are techniques for designing these kinds of tessellations. If you start with a regular 2D tessellation, there's a transformation from that tessellation into a crease pattern, which then makes things like this. You can see here there's sort of clear edges here. And that represents the tessellation it's based on. It's just been kind of shrunk a little bit. Each of these is a pleat. There's a mountain and a valley crease. And so on all of these, I believe, that's the style.
You've got essentially a twist fold at each of the vertices. And you've got a pleat along each of the edges. And if you want to play with these, there's software called Tess, freely available online. And I'll show it to you. And it lets you design things like this, following a particular algorithm. So you start with some geometry. And I don't really know these by heart. So it has a fixed set of geometries that you can play with. We'll try this one. And you get a regular 2D tessellation of polygons. And then you increase the-- then I hit Show Creases. And it's applying a particular algorithm, which is essentially-- it's maybe more dramatic if I increase this value or change it dynamically. It's rotating each of the polygons, so a twisting. Sorry, that's negative. As it rotates them, you get-- let me show you. It'd be nice if this were color-coded, but it's not. So these two squares are two original squares of the tessellation. They've been twisted. And then these edges which used to be-- so they're shrunk and twisted. And then these edges used to be attached. We're now going to put in a little parallelogram there. And you just do that everywhere. And this is a crease pattern. It will fold flat. Doesn't work for all tessellations. And there's a paper characterizing which tessellations it works for. They're called spider webs. But it's a very simple algorithm and it's led to tons of tessellations over the years. And you can export this to PDF, print it out, and fold it. It obviously takes a little while. One of the fun surprises of this algorithm-- this is made by Alex Bateman, and this was just sort of a surprise by accident. I think there's a slider at the top, the pleat angle slider. And by accident, he didn't require it to be positive. And he realized that if you made it negative-- whoa, that's a little too negative-- you actually get the folded state.
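The shrink-and-rotate step that Tess applies to each face can be sketched directly: scale the face about its centroid and rotate it by the pleat angle; the gaps left along formerly shared edges become the pleats' parallelograms, and the gaps at vertices become twists. This is an illustrative sketch of the geometric operation, assuming faces are given as vertex lists; it is not Tess's actual code.

```python
import math

def shrink_rotate(polygon, angle_deg, ratio):
    """Scale `polygon` about its centroid by `ratio` and rotate it
    by `angle_deg`. Applying this to every face of a tiling yields
    the twisted faces of a twist-fold crease pattern; the leftover
    gaps supply the pleats and twists.
    """
    cx = sum(x for x, _ in polygon) / len(polygon)
    cy = sum(y for _, y in polygon) / len(polygon)
    t = math.radians(angle_deg)
    c, s = math.cos(t), math.sin(t)
    out = []
    for x, y in polygon:
        dx, dy = x - cx, y - cy
        out.append((cx + ratio * (c * dx - s * dy),
                    cy + ratio * (s * dx + c * dy)))
    return out

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
twisted = shrink_rotate(square, 15, 0.7)  # one face of a square twist pattern
```

Each twisted face keeps its centroid, so neighboring faces separate symmetrically, which is what makes the parallelogram pleats line up.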
This is what that crease pattern will look like after you fold it flat, because it's essentially reflecting across each crease. So this is with all the layers stacked up. So you get sort of an x-ray view. But it gives you a sense of-- it's hard to see the thickness here, so we actually wrote a little thing here-- which is a little bit slow, we'll see if it works-- called Light Pattern. And it's just measuring how many layers are stacked up at each point, and it will hopefully give you a shaded pattern so that if you held it up to light, you can see where the dark spots are going to be, where the bright spots are going to be. So the idea is this will help you figure out whether something's going to be interesting or not interesting ahead of time. Then you can go fold once you've set the parameters exactly like you like. I've just shown one of the parameters there. There's another one, pleat ratio. So this is cool. I think an interesting project would be to extend this tool. It's open source. Lots of interesting things to do with it. Add more tessellations. Improve the interface. Maybe try to show 3D visualization as it folds. There are existing 3D origami tools, which we'll see in the very next lecture, Rigid Origami Simulator, that might make that not too hard actually. It'd be cool to try. Putting it on the web I think would be interesting. Port it to JavaScript or something. Because I think there's really cool tessellation here. Not many people have actually used the software because it's a little awkward and, as you can see, Light Pattern doesn't always work. But I think that's just because this tessellation's a little too big. All right. So that was Tess. And that style of tessellation. You can see that you can do some really cool things. This is what a light pattern looks like. So you get the different shades of gray. 50 shades of gray? Then there are more three dimensional tessellations. So this is in a different style.
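The "reflecting across each crease" remark is quite literal: a flat folding maps each face of the crease pattern by a composition of line reflections, one per crease crossed, which is why negating the pleat angle produces the x-ray view of the folded state. Here's a minimal sketch of a single reflection; the function name is illustrative.

```python
def reflect(point, a, b):
    """Reflect `point` across the line through a and b. Flat
    folding maps each face by composing such reflections, one per
    crease between that face and a reference face.
    """
    px, py = point
    ax, ay = a
    dx, dy = b[0] - ax, b[1] - ay
    # Project (point - a) onto the line direction to find the
    # foot of the perpendicular, then mirror through it.
    t = ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)
    fx, fy = ax + t * dx, ay + t * dy
    return (2 * fx - px, 2 * fy - py)

# Reflect (2, 3) across the x-axis, then across the line x = y,
# as if crossing two creases.
p = reflect((2, 3), (0, 0), (1, 0))   # -> (2, -3)
q = reflect(p, (0, 0), (1, 1))        # -> (-3, 2)
```

A Light-Pattern-style layer count then amounts to mapping every face into the folded plane this way and counting how many face images cover each point.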
And this is folding a very simple origami base called water bomb. And the resulting thing is not flat, but it's very simple crease pattern and pretty cool three dimensional result. This is not captured by Tess. And that would be a different style project to generalize to 3D tessellations. That'd be very cool. Here's that same tessellation, I think, or a very similar one, but made out of stainless steel. So you can see there's big cuts here. So this is probably made on a waterjet cutter. And then you leave little tabs. So you wear gloves so you can fold this by hand. Probably not easy, but possible. Here's some more back to paper, some more 3D tessellations. And if you're interested in playing with tessellations, you could try Tess. Or there's this really good book came out recently by Eric Gjerde, Origami Tessellations. And this is actually one of the models that's described in here. Unlike traditional origami, there's no sequence of steps. All of these are based on here's a crease pattern, fold along all the lines, and then collapse all the lines simultaneously. Like a lot of mathematical origami design, but there's great stuff in here. Really cool tessellations and some of the best photographs of tessellations. So definitely check out that book if you want to do tessellations. This is the crease pattern. Give you an idea for this guy. It's also periodic. This is triangular twists. You can kind of recognize that, but it's very cool. More alternate materials. This is polypropylene. And there's this great Flickr site, polyscene by Polly Verity. And tons of examples of foldings by polypropylene. So it's a kind of plastic. It gets scored by a machine and then folded by hand. And so really striking results. You get this nice semi-transparency. It works really well with tessellations. Here's some recent ones we just found making things out of mirror and plywood and copper as like the surface material, and then polyester and fabric, or polyester and Tyvek. 
Tyvek is like those envelopes, plasticy envelopes that you can't really stretch or tear. Really great stuff. And you can buy it in sheets. So that's sort of the base layer that's holding everything together. At the creases here, you can see through to the fabric material. And then this is plywood on the surface. So these are all different tessellations, kind of tessellations. These have been wrapped around to make vessels or to make-- they call it a shoulder cape. Looks like a set of armor. But really cool stuff when you work with other materials. It'd be a great project in this class, I think, to try some of these techniques. Combining some basic foldable sheet material with some richer material, you can make some really cool stuff. Once you have a computer model of it, you can-- and we'll see in the next lecture different computer tools for doing that-- then actually building them I think is really striking. Back to paper, although this barely looks like paper. These are some really cool kind of traditional style tessellations, but folded in a very unusual and beautiful way by Joel Cooper who's one of the leading tessellation folders in a certain sense. He's best known for tessellations like this, however. So these are all based on a regular triangular grid, but not quite identical. It's definitely not periodic here. Going for human forms. He has whole busts and heads. And these are really striking. They're not designed particularly algorithmically. My understanding is he comes up with little gadgets for certain features like cheeks and so on, and he starts composing them in ways that seem to work. And he has a collection of different pieces that work together well. And he can get really intricate, really beautiful 3D surfaces out of that. So this is kind of begging to be studied mathematically in some way, but pretty challenging. This is an interesting tessellation style by Goran Konjevod. 
He was a co-author on the "Folding a Better Checkerboard" paper that I talked about. And the crease pattern here is extremely boring. It's a square grid. But the mountain-valley assignment is not quite trivial. And because of the thickness of the material, it actually gets this curving behavior. So this thing is technically, mathematically, flat. It's like this really boring pleated square. But the way it goes is you sort of take a square and you pleat the edge and then you pleat the edge and you pleat the edge. So you do mountain valley, mountain valley. And here you're alternating between this side and this side and this side and this side. And that gives you this kind of corner. But because the material has nonzero thickness, you get these really cool curves. And when you change which order you fold the pleats in, you can really control a lot of this surface. It's kind of magical. He has a bunch of designs like this. You can check out his images on the web if you want to see more, and diagrams. And I think this is our last tessellation example. So here, the goal is to make a US flag. And there's a video of this being made, but it's just fold along the lines and then collapse. You're using a tessellation element to get the stars in the flag. And this is what the crease pattern looks like. So you've got a nice tessellation here and then sort of a simpler tessellation out here, which is just some pleats. And getting those pleats to resolve to the outside. This is by Robert Lang. Very cool. So next, I want to transition to kind of modular origami where you use multiple parts. But before we get there, this is I guess the oldest recorded example of a picture of origami. So this is from 1734. This is a reference. This is the actual object-- I believe, a newspaper article. And it's a little rough to see here, but there's an origami crane and a bunch of other classic origami things like the water bomb. So the assumption is that by 1734, origami was well-known.
All the classic models were out there. We don't know how far back it goes. It could be as early as when paper was invented which was like 50 AD. Somewhere between 50 and 1734, origami really hit it big. That's the big range. But I wanted to show this because of the cranes. And one way to combine multiple parts together is to combine multiple cranes together. And there's this whole world, hiden senbazuru, which is connected cranes. And orikata means you're cutting in addition to folding. So this is a rectangle of paper. It's been split along two lines and then folded into three cranes. So that's pretty cool. And there's much more intricate ones where you take a square of paper or a rectangle paper, do lots of cuts, subdivide your thing into a bunch of squares. Each square gets folded into a crane. The tips of the cranes stay connected at these tabs. And the challenge when you're folding these is to not tear at the tabs. But then you'll get these really cool folds. This is an old book from 1797, not much later than that last reference. We have a copy of this book if you're interested in checking it out. Lots of different designs. There have been some recent works in making really nice. These are spheres out of connected cranes by Linda Tomoko. And here's one out of silver foil. So really cool connected cranes. So that's a traditional origami style. I want to transition to modular origami where you combine lots of identical parts, but now they're actually disconnected. And this is a very simple unit. I think it's just water bomb based. And then they nest into each other. You've probably seen these kind of swans, modular swans. I think they're a very old tradition. Possibly China? I'm not sure exactly. So a kind of traditional model. But you get a lot of geometric models like this. So these are examples of different units. You take typically a square of paper. You do maybe 10 or 20 folds and you get a unit. And then you combine a bunch of these units together. 
So one of the classic units is called a Sonobe unit. These use Sonobe units sort of backwards, but you can get these kinds of cool polyhedra. Robert Neale, he's a magician and an origami designer, has some units. This one's called the penultimate unit. And so you can see each of these green strips is one unit-- blue strip, pink strip. There's a lot of units in here. 90 in total. Typically, one per edge of the polyhedron, sometimes two per edge. And they lock together in certain ways to really hold these nice shapes here. Tom Hull folds a lot of modular origami. And one of his units is called a PHiZZ unit. I think it can make anything as long as you have three units coming together at each vertex. So as long as every vertex has degree three, you can kind of make your polyhedron. I guess the lengths also have to be the same or else you have to adjust the units to be different. So each of the units here is identical, except for different color patterns. Here's a big example of a PHiZZ unit construction. So this is 270 units. Takes a long time to fold probably and even more time to weave them together. Usually putting the last piece in is the hardest. Here's some more examples by Tom Hull. He has another unit called the hybrid unit. And this is what three of them look like woven together. So this paper is probably red on one side, black on the other. And there's one unit that comes here, wraps around the tetrahedron, and two more. And you combine them and you can make all these different regular solids. And you get these spiky tetrahedra on each of the faces, which is pretty cool. So this is like an icosahedron, a regular 20-sided die, on the inside here, but then each of them has a spike. And here's a big one he made. This is actually one of my favorite polyhedra, the rhombicosidodecahedron. It's got all the polygons-- triangles, squares, pentagons. It's obvious, right?
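Since PHiZZ models use one unit per edge, you can estimate how many units a model needs from Euler's formula for a sphere-like polyhedron (V - E + F = 2). This is just a back-of-the-envelope sketch; the function name is mine, not from the lecture:

```python
def phizz_units(vertices, faces):
    """One PHiZZ unit per edge; Euler's formula V - E + F = 2 gives E = V + F - 2."""
    return vertices + faces - 2

# Dodecahedron: 20 vertices (each of degree three, so PHiZZ-foldable),
# 12 pentagonal faces -> 30 edges, i.e. 30 units.
print(phizz_units(20, 12))  # 30

# Truncated icosahedron (soccer ball): 60 degree-three vertices,
# 32 faces -> 90 edges, i.e. 90 units.
print(phizz_units(60, 32))  # 90
```

Both examples satisfy the degree-three-vertex condition mentioned above, which is what makes them candidates for the PHiZZ unit in the first place.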
And one of the challenges here is getting the color patterns to be nice and symmetric and even. And Tom Hull is one of the experts in that. He's a mathematician, but also an origamist. And then he started combining the two because of problems like this. Next we get to polypolyhedra. This is the idea of taking multiple polyhedra and weaving them together and then making that out of origami. And this is one of the most famous designs in this family called FIT, or Five Intersecting Tetrahedra, designed by Tom Hull. This is a photograph of one of which I am the proud owner. It was folded by Vanessa Gould, who directed Between the Folds, which is the documentary you all heard about when Jason mentioned it. And it's available free streaming on Netflix, so you should all watch it. Or we could have a showing here. Actually, how many people are interested? Haven't seen the movie or would like to see it again related to this class some evening? OK. That's maybe enough to do a showing. Anyway, she folded this. Cool. And then Robert Lang enumerated all possible polypolyhedra that are symmetric in a certain sense. And these are two examples that he thought were so cool that he made them out of paper. Most of them just exist as virtual designs. People have been folding them, but there's hundreds if not thousands in his list. So if you're interested, check out his website on polypolyhedra. These are, again, modular. And finally, we come to modules of cubes. And this is why you have business cards. And I thought we could play with this. This is a life-size chair made from a particular unit, which is out of business cards, folding these individual cubes and then sticking them together in a particular way. Unfortunately, the material's not strong enough to actually support much weight. So you can't sit on this chair, but it looks just like a real chair. It's very cool. You can make any set of cubes you like and interlock them together.
One of the craziest experimenters with this cube module is Jeannine Mosely, who's an MIT alum and lives in the area. And she became really famous for making this Menger Sponge out of 66,000 business cards. It took something like five years to make this. She made a lot of the units herself. And so this is trying to represent a particular fractal, which is pretty cool. You start by taking a cube and then drilling holes through each of the sides in the center third. So this is one iteration. You just drill through that hole, that hole, same on each side. Remove that material. That leaves you with-- how many cubes? Eight cubes on top. Eight cubes on the bottom. Four cubes in the middle, which is 20. For each of the 20 cubes, you recurse. So for each of those 20 cubes, you drill holes from all the sides. And after two iterations, you have this structure. After three iterations, you have this structure. After infinitely many iterations-- well, no, this is not infinite. But this is actually the same number of iterations as that. So in principle, you keep going. But at any fixed point, you can treat the smallest little unit that hasn't been recursed as one of these cubes, build that, and then assemble them together. It's challenging. With the business cards, you could not go to the next level-- not just because it would take forever, but also because it would collapse under its own weight. So there's a trade-off there. That was 66,000 business cards, five years. I thought, man, that was a big project. But then Jeannine says, what else can we make? And she got more volunteers for these future projects, so they were made a lot faster. This is a cool fractal. Not quite as many, 50,000 business cards. And this is a fractal that she designed. Kind of complementary. You take a cube and subdivide it into three by three by three, and then remove all the corner cubes, and then recurse.
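The cube counts for both fractals are easy to check. Here's a quick sketch (function names are mine), taking the constructions exactly as described: a Menger step keeps 20 of the 27 sub-cubes, and the snowflake step keeps 27 minus the 8 corners, i.e. 19:

```python
def menger_cubes(level):
    """Menger sponge: each iteration replaces a cube with 20 sub-cubes
    (8 on top + 8 on the bottom + 4 in the middle)."""
    return 20 ** level

def snowflake_cubes(level):
    """Subdivide 3x3x3 = 27 and remove the 8 corner cubes, leaving 19;
    recurse on each of the survivors."""
    return 19 ** level

# The level-3 sponge has 20^3 = 8000 smallest cubes; at six cards per
# cube (the unit described later), that's 48,000 cards before any of
# the optional surface-covering cards are added.
print(menger_cubes(3))      # 8000
print(menger_cubes(3) * 6)  # 48000
```

This also makes the "could not go to the next level" point concrete: level 4 would need 160,000 cubes, a 20x jump.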
And she calls it the Mosely Snowflake because if you look at it from the corner, you get this nice Koch snowflake outline. And this is the real one from the same view. It's a little big, so it's hard to see it all in one shot. And so that's pretty awesome. And then her most recent project was 100,000 business cards. This is I guess the world record for origami made from business cards. And this is a model of Union Station in Worcester, Massachusetts. Hundreds of volunteers here to make this. This was done for a First Night celebration a year or so ago. Pretty amazing. And you can see, you can really sculpt with these cube units, do lots of cool stuff. And there's a few extra details on the surface there. So I thought we would make something. So these are diagrams you can start working from, or I can tell you about how they work. Each cube is made from six identical business cards. I have here my own business cards from when I first arrived, old classic. So you start by taking two of your business cards. You have to decide whether you want the white face up on your cube, to make it nice and clean, or you want the pattern side up. Whichever one you want to expose, you keep that on the outside and you bring the two cards together. So in this case, I'm going to make the pattern side out. And you want to align these approximately evenly. You want them as perpendicular as possible and then roughly evenly spaced. And then you just mountain fold both sides. So you want mountain folds on the side that you care about. And that gives you a nice square. Now I've got two nice squares folded like this. Repeat three times, you get six units. Four and six. OK, once you've got the six units, you want to combine them together. This is where it gets fun. And it's helpful to look at this diagram down here. These are some diagrams by Ned Batchelder. And so this idea of making cubes has been around. I think it was Jeannine's idea to combine them together. So this is what one cube looks like.
Why don't I fold one of them. Basically, you want the tabs going on the outside. And you need to alternate so they lock together. And you need to alternate between oriented horizontal and oriented vertical. So they recommend starting by making a corner, three of them like that, and then filling around the outside. And then as usual, putting in the last piece is the hardest. So I've got to get-- I want all the tabs on the outside like that. And I probably should've mentioned-- fold the creases really hard. You can do that to a certain extent afterwards, make it nice and cubey. But in this case, I got my cube out of those six units. It's got my name right in the center. So you can design business cards specifically for this purpose. I accidentally did. And that's how you make one cube. Once you've got two cubes, you can lock them together by just twisting them 90 degrees relative to each other and just sliding the tabs in. This is also like doing that last move. So this tab's got to go in here between these two tabs. It wouldn't hold together very well if it wasn't hard to put together. So once you've got them together, you've got two cubes. Now if you want, for a finishing touch, you can also make another unit and cover the surfaces. So all of Jeannine Mosely's examples are done this way where at the end-- I haven't tried this lately-- you stick on a business card just on the surface so it interlocks here and then interlocks over here. Ho boy, this is challenging. And then you get a full square business card on the outside. And you can use this if you have different colored business cards or you want a nice, clean white surface, no seams. So you have these tabs right now, but once you add something like this, you have a nice seamless square on the outside. So you use up more business cards, but it can make for a nicer surface. So any questions about making these? I thought we would make some and then build something.
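For planning a build, the card arithmetic from the description above is simple: six structural cards per cube, plus one extra card per exposed face if you add the seamless covering. A quick sketch (the helper name is mine):

```python
def cards_needed(num_cubes, exposed_faces=0):
    """Six cards per cube, plus one covering card per exposed face.
    exposed_faces depends on how the cubes in your sculpture touch."""
    return 6 * num_cubes + exposed_faces

print(cards_needed(1))                   # 6: one bare cube
print(cards_needed(1, exposed_faces=6))  # 12: one fully covered cube
print(cards_needed(10))                  # 60: ten bare cubes
```

Two joined cubes still need six cards each, since joining only slides existing tabs together rather than sharing units.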
But for that, I need suggestions on what to build. Oh, an MIT. I like that. Let's make an MIT. So let's design. So by MIT, do you mean MIT logo or like an M? 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 cubes. Easy. Exploding cubes. I wonder if you can use these to make pinatas? MIT. We could also just make a row at the bottom. One cube higher. Four more minutes. AUDIENCE: We could do Minecraft origami. AUDIENCE: Ohh. AUDIENCE: Oh, yes! AUDIENCE: That's a good idea. PROFESSOR: Minecraft is a good source.