Stanford_CS221_Artificial_Intelligence_Principles_and_Techniques_Autumn_2021
Logic_1_Overview_Logic_Based_Models_Stanford_CS221_AI_Autumn_2021.txt
OK. Hi, everyone. So this week, we are going to be talking about logic. So this is our last set of modules, and we're going to switch from variable-based models and start talking about logic. So let's start with a question. If x1 plus x2 is equal to 10 and x1 minus x2 is equal to 4, what is x1? OK? So think about this for a few seconds. So the way you would go about this is you'll probably use the thing that you've used in algebra and basically cancel out the x2's, and you would have 2x1 equal to 14. Divide 14 by 2, and you'll end up getting 7, right? Another way of looking at this problem is you can think of this as a factor graph. This is actually a factor graph, and you have these constraints. And one way of solving this is to go and do backtracking search and then actually try to figure out the satisfying assignment there. But the problem there is that might not be the most efficient way of doing it. And the trick that you've learned in algebra is probably a more efficient way of dealing with this question. So that is kind of like a motivation for why we are talking about logic. Could we do logical inferences in a way that makes our lives much easier and allows us to talk about expressions much more concisely, and allows us to move around symbols and come up with decisions, come up with solutions, based on these sorts of logical inferences? So keep this example in your mind throughout the lecture, because we are using similar types of ideas when we talk about logic and doing inference in logic. OK? So if you remember our course plan, right, we started with machine learning and then we talked about, in general, reflex-based models, where we have a low level of intelligence. And we started adding to these levels of intelligence, thinking about state-based models and then thinking about variable-based models. And finally, we are at logic. 
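The elimination trick from the opening question can be sketched in a few lines of Python (the variable names here are just for illustration):

```python
# Solve x1 + x2 = 10 and x1 - x2 = 4 by elimination:
# adding the two equations cancels x2, giving 2*x1 = 14.
s, d = 10, 4            # the given sum and difference
x1 = (s + d) // 2       # (x1 + x2) + (x1 - x2) = 2*x1
x2 = (s - d) // 2       # (x1 + x2) - (x1 - x2) = 2*x2
print(x1, x2)           # 7 3
```

Backtracking search over all assignments would eventually find the same answer, but the algebraic move gets there in constant time.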
When we are talking about a higher level of intelligence and expressivity when we are thinking about AI, AI systems. And just taking a step back and thinking about the paradigms that we have used in this class, we started thinking about learning, and modeling, and inference. So the idea was we have some sort of data. From that data, we are going to learn a model. We are going to learn this representation. And then we're going to be able to do inference on that model. So once we have a model, once we have an MDP, once we have a search problem, we can basically ask a question, and that's inference, right? You can basically ask questions and infer an answer, and that allows us to think about different types of inference algorithms, OK? So examples of that, as we talked about, search problems. So when we have a search problem, like the objective inference problems that we're thinking about was finding a minimum cost path. Or we were talking about MDPs. So in MDPs, for example, or games, you are thinking about maximum value policies. Or we looked at CSPs or Bayesian networks, where we are looking at basically what is the probability of some query conditioned on some sort of evidence. So these are some examples of inference questions, inference problems that you have looked at throughout the different lectures and modules that we have seen. So in modeling something, when you think about modeling paradigms that we had, when we had state-based models. We thought about search problems, MDPs and games. And we basically thought about these in terms of states, actions, and costs. Right, so those were kind of the main core elements that would come in into our modeling when we were thinking about state-based models. And applications of that were things of the form of route finding or playing games. And then when we started thinking about variable-based models, we started defining this idea of variables and factors and constraints between them. And then we talked about CSPs. 
We talked about Bayesian networks, Markov networks. And applications of that were things that was easier to think about in terms of variables. So we talked about scheduling or tracking or medical diagnosis, where we have dependencies between these different variables. And that was in variable-based models. So in this week, what we want to do is we want to talk about logic-based models. And in logic-based models, similar to state-based models and variable-based models, we are going to define a few different types of logic that we are going to be using. So specifically, we are going to be talking about propositional logic and first order logic. And we are going to think in terms of logical formulas, and in manipulating these logical formulas and how we can infer new formulas from them. So specifically, how we think about inference rules. And what are some applications of logic, where logic shows up in a variety of applications starting from theorem proving, hardware and software verification, and also in general, reasoning. It's a core element of reasoning in artificial intelligence. So historically, if you think about logic, kind of like, back when old AI was very highly dependent on logic. So logic was dominant in AI before 1990s. So the same sort of excitement, the same sort of hype that is around deep learning today, that same sort of hype was around logic before 1990s. And that was kind of like the core of AI. People were thinking, logic is going to give us really understanding of artificial intelligence and developing artificial intelligence that could really achieve things that humans can. But that didn't really pan out. And the reason it didn't pan out was logic had a few problems. So the first problem was logic was deterministic, right, and it couldn't really handle uncertainty. 
And that gave rise to things of the form of probabilistic inference, and in general, understanding probabilities and adding uncertainty on top of logic, or developing models that can capture uncertainty beyond logic. The second problem with logic-based models was that they were very rule-based, and they wouldn't allow fine-tuning based on data. So because of that, they were very brittle. So if I have new data that comes in and tells me something else, then that rule-based model is not going to be able to capture that, and it's really hard to incorporate information coming from new data. And again, that gives rise to machine learning and this idea of data-driven models and looking at data and being able to learn new models and being able to do inference from that perspective. OK? So problems one and two are weaknesses of logic. But in general, logic has a strength that some of the state-of-the-art models today don't really have. And I think there is really an opportunity here to combine ideas from logic with more modern machine learning systems, more modern AI systems. And the strength of logic is expressivity. So we're going to be talking about this throughout this week in general. But the nice thing that logic gives us is it provides a compact way of expressing models, expressing representations, that you wouldn't normally be able to get. So this compact representation can be really powerful, because you could manipulate that compact representation, and that could allow us to infer new ideas, infer new rules, and so on. And in general, that expressivity is a big strength of logic and the reason that it is still around and there is still excitement around using it. All right. So let me motivate logic with an example. So we've looked at this example-- I think Percy showed this example during the first lecture. 
Where our goal is we want to have a smart personal assistant. So let's say we are sitting on the beach, and this is after the class and we are on vacation and after COVID. And we are sitting on the beach, and we have a personal assistant. And what we want to do is you want to ask our personal assistant. Maybe it's Siri or maybe there's something fancier than Siri. And we want to ask our personal assistant a set of questions. Or maybe we want to tell it some information. Maybe we want to inform it about something or ask questions from it. OK? And let's say we use natural language. So let's start with natural language as a medium for talking to this personal assistant. OK, so let's look at an example here. So this was the system. So let's say that this is my system, and I tell my system, all students like CS221. OK. So I'm telling it some information. And then my personal assistant says, I learned something. OK. Then I can say, Bob does not like CS221. OK. And then it would be like, I learned something. And now based on this knowledge that it has, let's call that knowledge base, so based on this knowledge base that it has, I can ask this personal assistant questions. I can ask, is Bob a student? What should it answer? So if it actually does inference right, right, it should answer no. Right, if it has a set of formulas and based on those it can infer and it can reason, then it should actually be able to answer that question. Underneath here, there are a bunch of formulas and there are a bunch of inference rules. We are going to be talking about that, but you could take a look at that and see what are the formulas that it has access to and what are the things that it is inferring. It is inferring that Bob is not a student here based on the things that I've told it. But this is an environment that we are going to be talking about throughout the lectures and other modules this week. All right, so let's go back here. OK. 
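One way to see why the assistant should answer no is brute-force model checking over the two unknown facts about Bob. This is a sketch of the underlying idea, not the assistant's actual algorithm:

```python
# Enumerate the possible worlds (Student(Bob), Likes(Bob, CS221)) and keep
# only those consistent with what we told the assistant.
consistent = [
    (student, likes)
    for student in (False, True)
    for likes in (False, True)
    if ((not student) or likes)   # "all students like CS221", applied to Bob
    and likes is False            # "Bob does not like CS221"
]
# In every world that remains, Bob is not a student.
print(all(student is False for student, likes in consistent))  # True
```

Here the implication Student(Bob) → Likes(Bob, CS221) is encoded as `(not student) or likes`; the only surviving world has both facts false, so the assistant can safely conclude Bob is not a student.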
So in general, when we think about having this personal assistant where we are using natural language or where we are using logic, the idea is it should be able to digest heterogeneous information. And it should also be able to reason deeply about that information. Right, it can't have just shallow knowledge of that information. It should be able to do inference. It should be able to connect these different pieces of information and make logical statements based on that, make logical moves based on that. OK? So why should we use natural language? That's a good question. Why natural language, rather than anything else? OK. So natural language is kind of nice. We all speak in natural language. I'm talking to you guys in natural language. It's a very nice medium to use when we talk to a personal assistant or when we want to basically express what we would like to say. So it's a very rich medium for expressing what we want. And because it is rich, we can say things like, I don't know, a dime is better than a nickel. And we can say things like, a nickel is better than a penny. And based on that, we should be able to make logical statements and say, therefore, a dime is better than a penny, which makes sense. But the problem with natural language is it's also a little bit slippery. I can start with something that says a penny is better than nothing, and that's OK. And then I would have another statement that just says, nothing is better than world peace, and that's perfectly fine. And putting these two together, I can come up with a logical kind of statement based on what I've seen, that a penny is better than world peace. Which sounds a little bit weird and not correct and not the thing that I actually wanted. OK. 
So even though natural language is pretty rich, when we are thinking about logical statements and making logical inference here and following inference rules, it feels like natural language is a little bit slippery, and you might want to have access to some other type of language. So this language, when we talk about language, it doesn't need to be natural language. Right, language is just a mechanism for expressing things. It's just a way of expressing, OK? So natural language is an example of a language that allows us to express things. It's kind of informal. We also have programming languages. Those are kind of formal. We have Python or C++. In addition to these, we can have logical languages. And the nice thing about logical languages is that they're formal, and we can think about the relationship between them and formal connections between them. But the other nice thing about logical language is they're actually closer to natural language than, let's say, programming language, just because they're declarative. So there is actually a connection between natural language and logical languages. In one of the later modules, we're actually going to talk about how we can write expressions in first order logic if we have a natural language statement. So in this lecture this week, we are going to be talking about two types of logical languages-- propositional logic and first order logic. All right, so what is the goal of a logical language? So the goal here is to be able to represent knowledge, right. You want to be able to represent knowledge about the world. But that is not the only goal. In addition to that, you want to be able to reason about that knowledge. Right, it's not just about representing, it's about how we can move, how we can make logical statements and how we can run inference rules, and how we can make new statements and reason about them. 
So an example is if I tell you guys it's raining and it is wet, you should be able to reason about that and you should be able to figure out that, well, it is raining, then. Right, if I'm telling you it's raining and wet, then both of those statements are definitely true, and you should be able to reason about that. And that is the goal of a logical language. And when we think about logic, we have three main ingredients for logic. I'm going to go into these details a little bit more in our first module. But let me just give you a quick overview. So we're going to have syntax. And syntax basically tells us what the symbols of the language are. So basically, it defines a set of valid formulas. So syntax here, for example, in a logical formula in propositional logic, could be rain and wet. OK, so when I write rain and wet here, this "and" in syntax land doesn't have any meaning. It's just a symbol. It's just like this shape. OK? And rain and wet, they don't have any meanings. They're just symbols. OK, so when you're talking about syntax, you're really talking about the symbols that are the building blocks of the language. But symbols alone, like syntax alone, are not going to be able to define a language. In addition to syntax, what we need is semantics. We actually need to give meaning to this syntax. So for each one of these formulas, we need to be able to specify a meaning. And meaning has a very precise meaning to it. So meaning corresponds to an assignment, a configuration of the world, a setting in the world, that corresponds to that formula, that corresponds to that syntactic formula. So for example, in the case of rain and wet, it actually corresponds to a specific meaning where rain takes value 1 and wet takes value 1. And this is a specific model, a specific world where we live in. And in this world, rain and wet has this particular meaning. OK? So the first main ingredients of logic are syntax and semantics. 
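The rain-and-wet example can be made concrete by enumerating the possible worlds; this is just an illustrative sketch of what "meaning" will look like formally:

```python
# Semantics: the meaning of a formula is the set of worlds (assignments)
# in which it evaluates to true. The formula here is Rain AND Wet.
worlds = [{"rain": r, "wet": w} for r in (0, 1) for w in (0, 1)]
models = [w for w in worlds if w["rain"] and w["wet"]]
print(models)  # [{'rain': 1, 'wet': 1}]
```

Out of the four possible worlds, only the one where rain is 1 and wet is 1 satisfies the formula, which is exactly the "specific meaning" described above.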
And once we have syntax and semantics, then we can talk about inference rules. So we can actually talk about, what can we infer now that we have a set of formulas, now that we have a set of knowledge about the world? So given that we have a formula f, could we infer, could we derive, a new formula g? Could we figure out if g is true or not based on f? For example, if you tell me rain and wet as a formula, from that, I can derive that rain is also true. Right, because it's got to be both rain and wet, so from that, I should be able to derive rain. And that's what we are going to spend quite a bit of time this week on. What are the inference rules that we can play around with, and how do they apply to different types of logic? OK, so three ingredients-- syntax, semantics, and inference rules. All right, so let me just make a bigger point about this difference between syntax and semantics, because the difference might be a little subtle. So again, if you think about syntax, syntax is talking about the valid expressions that are in your language. It's basically talking about the symbols, right? The things that are valid to say. The symbols that are valid to write in this language. Semantics is about what the expressions mean. So let me give you an example here. So let's say that you are looking at 2 plus 3 versus 3 plus 2, OK? 2 plus 3 and 3 plus 2 have different syntax. They're not the same. They don't look the same. If I have no idea what 2 means or plus means or 3 means, right, 2 plus 3 has nothing to do with 3 plus 2. They have very different syntax. But they have the same semantics. Right, if I know what plus means and what 2 means and what 3 means, and if I know that 2 plus 3 is 5 and 3 plus 2 is 5, then they have the same meaning. They have the same semantics. So different syntax, but the same semantics. On the other hand, we can have settings where we have the same syntax. Things look the same, but they have different meanings. 
For example, you can look at 3 over 2 in Python 2.7 versus Python 3. And in that case, it looks the same, 3 looks the same. Divide looks the same, 2 looks the same. So syntactically, these two are exactly the same thing. But semantically, they have different meanings. They actually correspond to different values, right, when you're doing this in Python 2.7 or Python 3. OK? So again, we have two expressions that have the same syntax in this case, but they have different meanings and different semantics. So syntax and semantics are two different things. Both of them are needed to define a logical language. And I want to kind of end with this view and this diagram that I'm going to come back to and explain it in a bit more detail in kind of future modules. So the idea is we have two worlds here. On the left, we have syntax, syntax land. And on the right, we have semantics land. OK? So in syntax land, we have formula. So I'm going to use these rectangles to kind of represent formulas. These are different formulas that I can write, like rain and wet. And each one of these formulas have a meaning in the semantics land. OK, so each formula has a meaning in the semantics land. And that's called models here. And what our goal is throughout the lecture is to come up with inference. Like first, define syntax and semantics for different types of logics. And then after that, come up with inference rules that allow us to manipulate these formulas, these compact formulas that are kind of nice and expressive, and manipulate them and come up with new formulas, infer new formulas, that have meanings that are also entailing the meanings of our current formulas. So more on this later. If this is confusing, I will talk about this in more details in a few modules. OK. Just to give you a quick overview of different types of logics we will be talking about. There are different types of logics. In this order, they're increasing-- in the order of increasing expressivity. 
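Both directions of the distinction can be checked directly in Python 3 (the integer-division behavior of Python 2.7 mentioned above is historical, so it appears only as a comment):

```python
# Different syntax, same semantics: the strings differ, the values agree.
assert "2 + 3" != "3 + 2"                # different syntax
assert eval("2 + 3") == eval("3 + 2")    # same semantics: both are 5

# Same syntax, different semantics: in Python 3, 3 / 2 is true division;
# in Python 2.7 the very same expression performed integer division (giving 1).
assert 3 / 2 == 1.5    # Python 3 semantics of the expression "3 / 2"
assert 3 // 2 == 1     # Python 3's spelling of the old integer division
```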
And in this week, we are going to be talking about the bolded ones. So we will be talking about propositional logic. And specifically, propositional logic, a subset of it, that only has these things that are called Horn clauses. So we'll talk about what those are. And we will also talk about first order logic. So first order logic only with Horn clauses and just generally, first order logic. There are other types of logic that we're not discussing in this class-- second order logic, temporal logic. And they're actually quite useful in a variety of fields, in programming languages and robotics and formal methods. And, yeah, if you're interested in any of these, we can chat about it offline. One other point I want to make is as we increase the level of expressivity of logic here, like as you go down in this list, expressivities of these logics is going higher and higher. But the thing that you're losing on is computational efficiency. So if you want to run inference rules, it's going to become much more difficult, if you're running it on first order logic as opposed to propositional logic. So there is a tradeoff between computational efficiency and expressivity of that logical language. All right, so with that, this is the roadmap for this week's lectures. So we're going to start with kind of modeling. And by modeling here, what I mean is defining the syntax and semantics of logic. So we're going to talk about propositional logic and the syntax of that. Then we are going to talk about the semantics of propositional logic. At that point, we're going to be switching to inference and discussing inference rules. In general, we are going to be talking about two main inference rules-- modus ponens is one, and the other one is resolution. So we're going to talk about under propositional logic, how do we do modus ponens, how do we do resolution? At that point, we're going to switch back to a higher level of expressivity in terms of our models. 
We are going to talk about first order logic after that. Again, syntax and semantics of first order logic. And then after that, we are going to talk about modus ponens. Again, an inference rule for first order logic. And we have an optional module at the end, which is about resolution for first order logic. This gets a little bit hairy. But if you're interested, you can kind of look at this and how resolution gets applied to first order logic. And then we're not talking about learning during this week. In general, learning more recently has been applied to kind of logical formulas, specifically in the area of formal methods. People have been thinking about learning logical formulas from data, from demonstrations. But that's outside the scope of this class, so we will not be talking about that.
Stanford_CS221_Artificial_Intelligence_Principles_and_Techniques_Autumn_2021
Logic_7_First_Order_Logic_Stanford_CS221_AI_Autumn_2021.txt
OK. So in this module, we would like to talk about first order logic. So far we have been talking about propositional logic. We have talked about the syntax of propositional logic, its semantics, and we've also talked about a few different inference rules. So we've talked about modus ponens and resolution. OK. And now we want to extend our logic and make it a little bit fancier, make it a little bit more complicated, and think about first order logic. So the first question to ask is, why do we even want to do that? Why is propositional logic not enough? Like, if you remember, we talked about resolution, and resolution was taking an exponential amount of time to be solved, and it seemed pretty powerful. Like, if you can do that in propositional logic, it seems pretty useful. It seems pretty powerful. So what are some of the limitations of propositional logic, OK? So let me show you that in one example. So imagine we start with a sentence that says, Alice and Bob both know arithmetic. So if I want to write this in propositional logic, one way of doing that is that I can have a set of propositional symbols-- one being Alice knows arithmetic. And I can have another propositional symbol for Bob knows arithmetic, and these can take true or false values. And one way of writing this is by writing this particular formula where I can write Alice knows arithmetic and Bob knows arithmetic. OK. So this seems a little weird, right. This seems like something is wrong here. So what is wrong here? So if I try to extend this and write something that's slightly more complicated, this type of writing symbols and ANDing them and so on just doesn't scale. It's not expressive enough. Let's say I write all students know arithmetic. If I'm writing all students know arithmetic, then I'm going to list all student names, and then have a single propositional symbol for each of the students knowing arithmetic and AND them together. And that just doesn't scale. 
If I have a lot of students-- if you're in a class like 221, that wouldn't really scale, right? I need to write, AliceIsStudent implies AliceKnowsArithmetic. BobIsStudent implies BobKnowsArithmetic. And each one of these is going to be a symbol that takes, like, true or false by itself, and this is going to blow up fairly quickly. Even worse, I can have a situation where I write a statement that says every even integer greater than 2 is the sum of two primes. OK. So this is actually Goldbach's conjecture. OK. So if I want to write this in logic, well, I'm kind of screwed. Right. Like, I can't write this in propositional logic, because it's talking about every even integer, and there's an infinite number of them, so I'm not going to be able to write that in propositional logic. So what can we do? It looks like if I'm using propositional logic, it's very clunky, and there are a lot of propositional symbols going on, and it just wouldn't scale. But if you think about it, when you're thinking about these statements, there are some objects here, and some relationships-- some predicates between these objects. And maybe we can use that structure. There's quite a bit of structure here, right? Like Alice being a student, or Bob being a student-- like, being a student is a predicate on top of this object, the object being Alice or the object being Bob. So maybe we can use that structure, and instead of defining a single propositional symbol for everything, maybe we can talk about objects and predicates instead. OK. So what that means is here, for example-- like in this other example, Alice knowing arithmetic. Alice, you can think of as an object. Arithmetic, you can think of it as an object, and knowing is a predicate on top of Alice and arithmetic. 
And maybe we can think about that structure, and in general this other view that there are some objects and some predicates on top of them, and think of that as a way of writing a new type of logic. In addition to that, like, for that example, I was talking about every integer having a property, then for those types of specifications, we need to think about quantifiers, right? We need to think about ways of saying for all or ways of saying there exist. So we need to have a way of representing these quantifiers, and to represent the quantifier, we need to have a variable, right? So when I say for all students, I need to have a variable x that corresponds to every single student. So in addition to these objects and predicates, we need to have quantifiers and variables, and then we need to use quantifiers and variables to represent our statements, OK? Here, let me give you an example. So what I want to do in this module today is I want to talk about the syntax of first order logic, and I want to then talk about semantics of first order logic, and in the next module, we'll be talking about inference rules. But I'm not going to go into as much-- like, I'm not going to do justice here. So I'm not going to go into as much details and syntax and semantics the same way that we did in propositional logic. So it's a little bit more high level here. OK. So let me just give you a couple of examples here. So if I'm saying Alice and Bob both know arithmetic, in first order logic ideally, I would want to be able to write something of this form. Like, something that says, hey predicate of knowing over Alice and arithmetic over these objects Alice and arithmetic should be true, and the same predicate of knowing over Bob and arithmetic, again arithmetic is the same symbol, should be true. OK. So I want to be able to capture that structure of objects and predicates in first order logic. 
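The objects-and-predicates idea can be sketched with a relation over a finite set; the names here are toy data, not anything from the lecture's actual system:

```python
# Instead of one propositional symbol per statement (AliceKnowsArithmetic,
# BobKnowsArithmetic, ...), factor the statement into objects and a predicate.
knows = {("alice", "arithmetic"), ("bob", "arithmetic")}  # a toy relation

def Knows(person, subject):
    """Predicate: is (person, subject) in the Knows relation?"""
    return (person, subject) in knows

# "Alice and Bob both know arithmetic" is now two applications of one predicate.
print(Knows("alice", "arithmetic") and Knows("bob", "arithmetic"))  # True
```

Adding a new person means adding one pair to the relation, not minting a brand-new propositional symbol, which is the scaling win first order logic is after.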
The other thing is, going back to this other statement, all students knowing arithmetic-- I should be able to have quantifiers and variables. So ideally if I want to write this in first order logic, I should write something of this form: for all x, x being a student, the predicate of student over the variable x should imply x knowing arithmetic. Again, knowing is the same predicate. OK. So these are actually just examples of first order logic. But how do we do this? How do we get to these statements? For that we need to define the syntax of first order logic. So let's get into the syntax of first order logic. All right. So let me go to my notebook. So we're going to talk about first order logic and its syntax. And when you are defining the syntax of first order logic, we have two types of things going on. We have terms and we have formulas. If you remember propositional logic, we only had formulas. Here we first need to define a set of terms, and these terms are expressions that are referring to objects, OK? So they are expressions that are referring to objects. OK. So what are terms? So the first thing that we consider as a term is a constant symbol, OK? So Alice, for example, or math, or arithmetic, or whatever. These are constant symbols, OK? So a constant symbol is a term. In addition to constant symbols, we need to have variables. Like as I was talking about earlier, if you want to be able to talk about quantifiers, those quantifiers need to be defined over variables. So a variable is, if I say for all x, x does something, that x is a variable, OK? And in addition to that, we can have functions here. And these functions are defined on some of these terms. So functions can be defined on terms, and they also give us terms. So for example, I can look at a function like summing over x and 3. So if I'm looking at sum of 3 and x, x is a variable, 3 here is a constant symbol. 
Summing over that is a function, and that also gives me a term, OK? All right. So that is terms. So now I can talk about formulas. OK. So now that I've defined terms, I can talk about formulas. The most basic form of a formula is an atomic formula. This is actually very similar to our propositional symbols in propositional logic. So this is kind of like the basis of it. In propositional logic, we had these propositional symbols like p, and we could define negation on top of that. But p was this propositional symbol. Here, the atomic formula is kind of like the basis of this logic. OK. So what is an atomic formula? An atomic formula is a predicate applied to terms, OK? So for example, Bob knowing arithmetic is an atomic formula. I can write that as knows is a predicate applied to the constant symbol Bob and the constant symbol arithmetic, OK? And this whole thing is an atomic formula, OK? So once we have the atomic formulas, then what we can do is we can do operations on top of these, right? We could do the same things that we did in propositional logic. We can have logical connectives applied to these formulas. So these logical connectives are things like negation, or and/or, implication, bi-directional implication. We can just apply the same sort of thing, similar to propositional logic here. And in addition to that, we are going to define quantifiers. So quantifiers are going to be defined on top of all of these. And these quantifiers are things like for all, or there exists. OK? All right. So this defines the syntax of our first order logic. We have terms, we have formulas, and atomic formulas are basically going to be predicates on top of terms. And once you have the atomic formulas, you can play around with them using the logical connectives, or using the quantifiers, OK? Let's go back here. 
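One lightweight way to picture this grammar is to build terms and formulas as nested tuples. This is purely illustrative, not a standard representation:

```python
# Terms: constant symbols, variables, and function applications on terms.
alice      = ("const", "alice")
arithmetic = ("const", "arithmetic")
x          = ("var", "x")
sum_3_x    = ("func", "sum", ("const", 3), x)   # the term Sum(3, x)

# An atomic formula is a predicate applied to terms.
knows_alice = ("pred", "knows", alice, arithmetic)

# Connectives and quantifiers then wrap formulas, not terms:
# the formula for all x, Student(x) -> Knows(x, arithmetic).
student_implies_knows = ("forall", x,
                         ("implies", ("pred", "student", x),
                                     ("pred", "knows", x, arithmetic)))
print(knows_alice[0], student_implies_knows[0])  # pred forall
```

The layering matches the grammar above: terms name objects, atomic formulas apply predicates to terms, and connectives and quantifiers build bigger formulas out of smaller ones.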
So a quick recap of that-- constant symbols like arithmetic, variables just like x, functions like sum of 3 and x. And then we have formulas, which refer to truth values. And atomic formulas are predicates applied to terms. Connectives connect them. For example, you might say x is a student implies x knows arithmetic, so this implication is connecting this predicate on top of the variable to this predicate on top of the variable and the symbol. And then in addition to that, once we have variables, we can have quantifiers. So you can say for all x, x is a student implies x knows arithmetic, OK? All right. So that summarizes the syntax of first order logic. One quick note on quantifiers is, if you think about quantifiers, quantifiers are just slightly more complicated versions of ands and ors. So if you think about the for all quantifier, the universal quantification, you can think of it literally as a conjunction, OK? So when I say for all x, P of x, that's very similar to saying P of a and P of b and P of c and so on. OK. And this for all is kind of like being treated as ANDs between all the possible things that this x can take, OK? Similarly, if you talk about existential quantification, there exists-- that is kind of like an or. You can think of it as a disjunction. If I say there exists x, P of x, it's very similar to saying P of a or P of b or P of c and so on. And if I had a finite number of them, then I can actually enumerate all of that. I can actually unroll this and enumerate all of them. OK. So then, if for all and there exists are kind of like AND and OR, then I can apply De Morgan's Law. So what that means is if I have a negation outside of one of these quantifiers, if I say negation of for all x, P of x, that is equivalent to saying there exists x, negation of P of x. 
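Over a finite domain you can check both claims mechanically; the domain and predicate here are toy examples:

```python
# For all ~ a big AND, there exists ~ a big OR, so De Morgan carries over.
domain = [1, 2, 3, 4]
P = lambda n: n % 2 == 0   # illustrative predicate: "n is even"

forall_P = all(P(n) for n in domain)   # forall x P(x), unrolled as a conjunction
exists_P = any(P(n) for n in domain)   # exists x P(x), unrolled as a disjunction

# not(forall x P(x)) == exists x not(P(x))
assert (not forall_P) == any(not P(n) for n in domain)
# not(exists x P(x)) == forall x not(P(x))
assert (not exists_P) == all(not P(n) for n in domain)
print(forall_P, exists_P)  # False True
```

Python's `all` and `any` are exactly the unrolled conjunction and disjunction, which is why the quantified De Morgan identities hold line for line.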
Why? Because the ands are going to be flipped to become ors, so for all becomes there exists, and the negation moves inside, just like De Morgan's law applied to ands and ors, OK? All right. Another point I want to make here: when you say for all x, there exists y, we can't flip the order. Again, just like ands and ors, you can't really flip this order. If I say for all x there exists y such that x knows y, that's pretty different from saying there exists y such that for all x, x knows y. So we can't simply flip their order. Do not do that. OK. So now that we know the syntax of first order logic, let's talk about how we can start from natural language and write first order logic. If you think about universal quantification, for all, the way we usually refer to that in natural language is with the word every, OK? So if I say every student knows arithmetic, I would use the for all quantifier. We would say for all x, because that corresponds to every student. So the question to ask is: is this the right way of writing this natural language statement? Every student knows arithmetic. Would I write for all x, x is a student, and in addition to that, x knows arithmetic? This doesn't actually correspond to the sentence. There is something a little bit subtle going on here. When you say every student knows arithmetic, we are conditioning knowing arithmetic on being a student. But that candidate statement is not doing that conditioning. That statement is saying everyone is a student, and everyone knows arithmetic, and that's not right. Not everyone knows arithmetic; every student knows arithmetic. So because there is that conditioning implied in the natural language, the correct way of writing this is by using an implication.
So if I want to write out the statement every student knows arithmetic, I would write: for all x, x is a student implies x knows arithmetic. So conditional on that person being a student, x knows it. That is an implication. We're going to have a couple of these examples in the logic assignment. I think it's a good rule of thumb to pair for all with implies every time you see every. This is not always true, but in general, if you see every student or every person does whatever, it's usually of the form for all plus implication. OK. How about there exists? Let's say we have: some student knows arithmetic. When we talk about some student, we have to use an existential quantifier, and the correct way of writing this is to say there exists an x such that x is a student and x knows arithmetic. So an and is sufficient here. So every time you see some, it usually corresponds to there exists and an and, and every time you see every, it usually corresponds to for all and an implication, OK? All right. So note that different connectives go with for all and there exists when you start from natural language. OK. Let's look at a few examples and see if we can write them in first order logic. The first example is: there is some course that every student has taken. How do we write this? There is some course, so there exists a y such that y is a course, OK? And there is something true about this course; that's the and part. What is true about this course? Every student has taken it. So how do we write every student has taken it? For all x, x is a student implies that x has taken the course y, OK? All right, so that's the first example.
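The every-means-implication point can be checked mechanically. Here is a small Python sketch over a made-up two-person world where "every student knows arithmetic" is intuitively true: the and-translation wrongly comes out false, while the implication-translation comes out true:

```python
domain = ['alice', 'bob']
student = {'alice': True, 'bob': False}           # bob is not a student
knows_arithmetic = {'alice': True, 'bob': False}  # only alice knows arithmetic

# Wrong translation: for all x, Student(x) AND Knows(x, arithmetic)
wrong = all(student[x] and knows_arithmetic[x] for x in domain)

# Correct translation: for all x, Student(x) IMPLIES Knows(x, arithmetic),
# writing p implies q as (not p) or q
right = all((not student[x]) or knows_arithmetic[x] for x in domain)

print(wrong, right)  # the and-version fails because of bob
```

Bob falsifies the conjunction version only because he is not a student, which is precisely the conditioning the implication captures.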
Let's look at the second example. You've seen this one earlier in this module: every even integer greater than 2 is the sum of two primes. How do we write this? It says every even integer, so if I see every, I expect to see a for all x and some implication that comes later. So what is that x? For all x, x is an even integer and x is greater than 2. That implies that that integer is going to be the sum of two primes. So how do I write that? I'm going to use there exists y and there exists z to correspond to those two primes. So I can say there exists y and there exists z such that y is a prime and z is a prime, and the sum of y and z is equal to x, that integer. OK. All right. Let's look at another example. If a student takes a course and the course covers a concept, then the student knows the concept. If is kind of like every. Remember, when we saw every, we said for all plus implies; if is basically the same thing. If a student basically means for every student. So: for all x, for all y, for all z, and these for alls are for all students, for all courses, and for all concepts: x is a student, and x takes y, and y is a course, and y covers z. If you wanted to be pedantic, we should also have written and z is a concept, but I'm skipping that. Then what does that tell me? Then the student knows the concept. The thing that comes after the comma in the English sentence is the thing that comes after this implication, right?
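The "every even integer greater than 2 is the sum of two primes" formula can be checked mechanically for small integers. This is a quick Python sketch of the there-exists-y, there-exists-z part (the function names are mine, and of course checking small cases proves nothing in general):

```python
def is_prime(n):
    """Trial-division primality test, fine for small n."""
    return n >= 2 and all(n % d != 0 for d in range(2, int(n ** 0.5) + 1))

def sum_of_two_primes(x):
    """There exists y, there exists z: Prime(y) and Prime(z) and y + z = x."""
    return any(is_prime(y) and is_prime(x - y) for y in range(2, x - 1))

# For all x: (EvenInt(x) and x > 2) implies SumOfTwoPrimes(x), checked up to 200
assert all(sum_of_two_primes(x) for x in range(4, 201, 2))
```

Note how each quantifier in the formula became a loop over a finite range: for all became `all`, there exists became `any`.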
Then the student x knows the concept z. OK. All right. So that was going from natural language to first order logic, and we were able to describe the syntax of first order logic using terms and then formulas defined over terms. So now let's talk about the semantics of first order logic. How do we define the meaning, the semantics, of first order logic? If you remember, how did we define semantics for propositional logic? We defined it using this idea of models, models representing a particular situation in the world, OK? In propositional logic, a model w was a world that mapped propositional symbols to truth values; it was a truth assignment to propositional symbols. Going back to my original example, Alice knows arithmetic and Bob knows arithmetic: if I were to write that in propositional logic, I would have these propositional symbols, Alice knows arithmetic and Bob knows arithmetic, and a model would assign 1 or 0 to each one of these propositional symbols. That was propositional logic. How do we think about it in first order logic? OK. The way to think about this in first order logic is by having a graph representation for every model, built from the unary or binary predicates that are defined over the terms we have been talking about. So a model w can be represented by a graph: w could be a graph where we have these different nodes, and each node corresponds to an object, OK? An object is going to be represented by a node, and then you label each node with constant symbols. So o1 is an object and it's labeled by Alice, o2 might be an object that corresponds to both Bob and Robert, and o3 is a node corresponding to the object for arithmetic.
Then what we can do is have directed edges that represent binary predicates. So Alice knowing arithmetic corresponds to a directed edge for the predicate Knows applied to Alice and arithmetic. And for unary predicates, you just have the predicate written on top of the node, like Alice being a student, OK? All right. So that defines a model here. A model in first order logic basically has two components. First, it has constant symbols assigned to objects: the model maps Alice to node o1, Bob to node o2, and arithmetic to node o3. Second, we have predicate symbols, which give us these tuples: w of Knows, that predicate, gives us the tuples o1 knows o3 and o2 knows o3. The tuples can be binary or unary, depending on the predicate, OK? All right. So the way we are defining a model is a little more complex than the way we defined a model in propositional logic. So there are a few other restrictions we are putting on models to make our lives a little easier. Say we have a statement that John and Bob are students. How do I write this in first order logic? I can say John is a student and Bob is a student, so Student is a predicate applied to John and to Bob. If I think about a model corresponding to that, I can have a node o1 corresponding to John, with the predicate Student on top of this node, and a node o2 corresponding to Bob, with Student on top of it. But that's just one option, right? One could have other models that represent this. I could have a single node and say, well, this person's name is both John and Bob; maybe John and Bob are the same person, and we're talking about that one person being a student. So one other option is w2.
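One way to see the graph picture concretely is to encode a model as plain Python data: constants map to objects (nodes), and each predicate is a set of object tuples, the labeled nodes and directed edges. The names below are just the lecture's example, and the `holds` helper is my own illustrative addition:

```python
# Model w: a one-to-one map from constant symbols to objects,
# plus the extension (set of tuples) of each predicate.
constants = {'alice': 'o1', 'bob': 'o2', 'arithmetic': 'o3'}

predicates = {
    'knows':   {('o1', 'o3'), ('o2', 'o3')},  # directed edges: who knows what
    'student': {('o1',), ('o2',)},            # unary predicate: node labels
}

def holds(pred, *args):
    """Is the atomic formula pred(args...) true in this model?"""
    return tuple(constants[a] for a in args) in predicates[pred]

assert holds('knows', 'alice', 'arithmetic')
assert not holds('knows', 'arithmetic', 'alice')  # edges are directed
```

The unique-names and domain-closure assumptions discussed next amount to requiring that `constants` be a bijection between symbols and objects.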
One way of representing this model is w2, where I just have one node. Or maybe I have three nodes; maybe I have this other unnamed node that doesn't have anyone assigned to it, and that's w3. So the restrictions we are putting in here are basically trying to make sure that w2 and w3 don't happen. First, we put in the unique names assumption, which says each object has at most one constant symbol. This rules out w2: you can't have both John and Bob associated with a single object. We can have at most one constant symbol per object. And in addition to that, we have another assumption, domain closure, which says each object has at least one constant symbol. So we can't have an object like o2 here that doesn't have any symbol assigned to it. This rules out w3. Together, these ensure that constant symbols and objects are in one to one correspondence: if I have a constant symbol, there is an object for it, and if I have an object, there is one single constant symbol assigned to it. OK. So why am I doing this? What does this buy me? This one to one mapping between constant symbols and objects, which these two assumptions give me, allows me to do an operation called propositionalization. And what that buys me is the ability to use inference rules from propositional logic. The whole reason I'm doing this is so I can use ideas from propositional logic, like resolution or modus ponens, when it comes to inference in first order logic. OK. So if you think about it, with this restriction, first order logic is not anything fancy. It's just syntactic sugar on top of propositional logic. It helps us write things a little more concisely and have an easier time writing things out.
But at the end of the day, it's the same sort of logic that goes on behind everything, OK? So for example, if we have this knowledge base in first order logic, we might say: Alice is a student, and Bob is a student, and every student is a person, so for all x, x is a student implies x is a person, and some students are creative, so there exists an x such that x is a student and x is creative. OK. So this is my knowledge base in first order logic. Can I write this exact knowledge base in propositional logic? I can actually do that. Based on the assumption that constant symbols are in one to one correspondence with objects, I can simply write this in propositional logic. I can say StudentAlice and StudentBob, both propositional symbols, and take their and. Then StudentAlice implies PersonAlice, where PersonAlice is another propositional symbol, and StudentBob implies PersonBob. And StudentAlice and CreativeAlice, or StudentBob and CreativeBob, is what I get from that last expression.
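Under the unique-names and domain-closure assumptions, that unrolling can be done mechanically. Here is a small sketch that propositionalizes the lecture's knowledge base over the constants Alice and Bob; encoding propositional symbols as strings is just an illustrative choice:

```python
constants = ['Alice', 'Bob']

# Facts: Student(Alice) and Student(Bob) become two propositional symbols.
facts = [f'Student({c})' for c in constants]

# For all x: Student(x) -> Person(x) unrolls to one implication per constant.
universal = [f'Student({c}) -> Person({c})' for c in constants]

# There exists x: Student(x) and Creative(x) unrolls to a disjunction.
existential = ' or '.join(f'(Student({c}) and Creative({c}))'
                          for c in constants)

print(facts)
print(universal)
print(existential)
```

Each quantified first-order formula becomes a finite propositional formula, which is why propositional inference rules then apply directly.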
Stanford CS221: Artificial Intelligence, Principles and Techniques (Autumn 2021)
Logic 6: Propositional Resolution
So in this module, we are going to be talking about resolution, which is an inference rule. So far we've been talking about propositional logic, its syntax and semantics, and we discussed one inference rule, specifically modus ponens. The idea of an inference rule is: can we do manipulation of syntax, in syntactic land, over formulas, in order to derive, in order to prove, a new formula? And the question to ask is whether that inference rule, for a given logic and set of logical formulas, is sound and complete. What we have seen is that if I apply just modus ponens on propositional logic, I get soundness but I don't get completeness. What that means is that there are formulas that are entailed, that are true, that I'm not going to be able to derive if I apply modus ponens on propositional logic. So we talked about two ways of solving that, and we discussed the first way: instead of looking at all of propositional logic, let's look at a subset of it, propositional logic with only Horn clauses. We defined Horn clauses during the last module, and in that case, if I apply modus ponens, I get soundness and completeness; everything is great. The other option is: what if I don't want to limit my propositional logic? What if I want to look at all of propositional logic and make my inference rule a little fancier, a little more powerful? So in this module, we are going to be talking about a new inference rule, specifically called resolution, as a way of getting both soundness and completeness. All right. To start, I want to write out a few things that we're all aware of, just to get on the same page. If we have p implies q, what is that equivalent to? That is equivalent to negation of p or q.
Let's just write out some of these equivalences here. If I have the negation of (p and q), what is that equivalent to? Well, I can apply De Morgan's law, and that gives me negation of p or negation of q. And if I have the negation of (p or q), that is going to be equal to negation of p and negation of q. All right, so these are a few equivalences that we all agree on; if you look at the truth tables of these, you're going to get these equivalences. And the reason I'm writing these out is that, in general, I would like to write everything in the form of disjunctions and conjunctions. OK, so let me define a few other things here. I'm going to define a literal as a propositional symbol p, or the negation of a propositional symbol, negation of p, OK? So a literal is just p or negation of p, where p is a propositional symbol. Based on that, one can define a clause to be a disjunction of literals. We talked about Horn clauses during the last module, but we never defined what a clause is. A clause is just an or over a bunch of literals; so I can have a clause like p1 or negation of p2 or p3. This is a clause, because it's just an or of a bunch of literals, OK? So then the question is: what is a Horn clause? We defined Horn clauses last lecture, but we can think about them a little differently here. A Horn clause is a clause, a disjunction of a bunch of literals, with at most one positive literal. I'm going to refer to p as a positive literal and negation of p as a negative literal. And a Horn clause basically says you have at most one positive literal in your clause.
For example, the clause I've written here is not a Horn clause, because it has two positive literals, p1 and p3. But I can have another clause, p1 or negation of p2 or negation of p3, and this is a Horn clause because it has at most one positive literal, namely p1. This is just another way of looking at Horn clauses. So going back: we have A implies C, and we can write it as negation of A or C. We have A and B implies C; what is that equal to? It's the negation of the first part or C, and using De Morgan's law, that gives me negation of A or negation of B or C. Notice that this is now a clause, and in fact a Horn clause. So again, summarizing what I have defined so far: a literal is a propositional symbol, either positive or negative, p or negation of p; a clause is just a disjunction of literals; and a Horn clause is just a clause with at most one positive literal, OK? All right. So now, when I'm thinking about modus ponens, I can actually write it out with clauses. Remember, I have A, and A implies C, and that gets me C; that is what modus ponens tells me. Instead of A implies C, I can write it as a clause: negation of A or C, OK? And intuitively, what is really happening is that you're canceling out A and negation of A; that's why we are getting C, OK? The reason I'm rewriting modus ponens like this is that it helps us think about the more general resolution rule that I'll be talking about in a few slides. The idea of resolution is that I don't want to limit myself to specific types of clauses; I can talk about general clauses, which are disjunctions of positive or negative literals.
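The literal / clause / Horn-clause definitions are easy to encode. A sketch, representing a literal as a (symbol, is_positive) pair and a clause as a set of literals; this encoding is my own choice, not from the lecture:

```python
# A literal is (symbol, positive?); a clause is a set of literals (a disjunction).
def is_horn(clause):
    """A Horn clause has at most one positive literal."""
    return sum(1 for _, positive in clause if positive) <= 1

c1 = {('p1', True), ('p2', False), ('p3', True)}   # p1 or not p2 or p3
c2 = {('p1', True), ('p2', False), ('p3', False)}  # p1 or not p2 or not p3

assert not is_horn(c1)  # two positive literals (p1, p3): not Horn
assert is_horn(c2)      # one positive literal (p1): Horn
```

These are exactly the two example clauses from the lecture, with the same verdicts.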
And the idea of resolution is: if you have a bunch of clauses, you have an inference rule that cancels out complementary positive and negative literals. Here is an example. If it is raining or snowing, that's part of your knowledge base, and if it is not snowing or there is traffic, one can infer that it is raining or there is traffic. Let's think intuitively about why we can infer this, OK? If it is snowing, then from the second clause there has got to be traffic; that's how I get traffic. And if it is not snowing, then from the first clause there has got to be rain, because it's either snowing or raining; that's how I get rain. So intuitively, that is why you're getting rain or traffic, and in some sense you can think of snow and negation of snow canceling each other out, because whether it is snowing or not, we are going to get traffic or rain out of it. And this is the resolution inference rule applied to one example, OK? One can think about this much more generally: take a clause f1 or ... or fn or p, and another clause negation of p or g1 or ... or gm. The inference rule says that from these two premises we can conclude a new clause that cancels out p and negation of p, namely f1 or ... or fn or g1 or ... or gm, OK? This is called resolution. All right. So is resolution sound? That's a very good question to ask; in general we want it to be sound because we want to derive things that are actually true. So, remembering this example, is it true that I can derive rain or traffic here, OK? How do I check that? Well, to check soundness I need to get to the models, the meanings, of each one of these formulas, and I need to check entailment. So let's check that on this example. If I have rain or snow, what is the set of models of rain or snow?
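The general rule has a direct implementation. A sketch, with clauses as sets of string literals where a leading '~' marks negation (an encoding I'm choosing for illustration):

```python
def complement(lit):
    """~p for p, and p for ~p."""
    return lit[1:] if lit.startswith('~') else '~' + lit

def resolvents(c1, c2):
    """All clauses obtained by cancelling one complementary pair of literals."""
    return [(c1 - {lit}) | (c2 - {complement(lit)})
            for lit in c1 if complement(lit) in c2]

# {Rain, Snow} and {~Snow, Traffic} resolve on Snow to give {Rain, Traffic}
assert resolvents({'Rain', 'Snow'}, {'~Snow', 'Traffic'}) == [{'Rain', 'Traffic'}]
```

Modus ponens is the special case where one clause is a single positive literal: resolving {A} with {~A, C} yields {C}.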
So I have my truth table here; it's going to be a little larger because I have snow, rain, and traffic, so I need to look at 0-1 values for all of them. Rain or snow corresponds to these shaded areas; that's the models of rain or snow. Then I have the models of not snow or traffic, which corresponds to these other shaded areas. And remember, as I add more formulas to my knowledge base, I'm shrinking its set of models; I'm adding more constraints, so I'm shrinking the models. That is why the models of these two formulas together is the intersection of their models, this darker red area, OK? So if I'm checking whether resolution is sound, I should be checking entailment, and what that means is checking whether the models of what is in my knowledge base are a subset of the models of the new formula I'm trying to derive. What's the new formula? Resolution tells me I can derive rain or traffic, and if I look at the models of rain or traffic, I get this green area. So the question is: is the shaded dark red area a subset of the green area? In this case it is, so resolution is actually sound. In terms of the models, the semantics, we are getting soundness; we are ensuring that we derive only true things by applying resolution, OK? So resolution is sound. Now, as you've seen, resolution only works on clauses, these disjunctions of literals, and the question is: can I apply resolution to all of propositional logic? The answer is yes. It turns out that resolution working only on clauses is actually enough.
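The truth-table argument can be replayed by brute force: enumerate all 8 assignments to rain, snow, and traffic, and check that every model of the knowledge base is also a model of the conclusion. A sketch:

```python
from itertools import product

def models_of(formula):
    """All (rain, snow, traffic) assignments satisfying the formula."""
    return {w for w in product([False, True], repeat=3) if formula(*w)}

# KB: (Rain or Snow) and (not Snow or Traffic) -- the dark red area
kb = models_of(lambda rain, snow, traffic:
               (rain or snow) and (not snow or traffic))

# Conclusion from resolution: Rain or Traffic -- the green area
conclusion = models_of(lambda rain, snow, traffic: rain or traffic)

# Soundness of this resolution step: KB models are a subset of conclusion models.
assert kb <= conclusion
```

This is exactly the subset check in the picture, done over all eight rows of the truth table instead of by shading.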
And the reason that is enough is that you can take any propositional formula and write it as a conjunction of a bunch of clauses; that's called conjunctive normal form, OK? A conjunctive normal form, or CNF, formula is a conjunction of clauses, OK? An example of that: if you have a clause A or B or negation of C, and another clause negation of B or D, then the and of these two clauses is in conjunctive normal form, OK? You can think of this as equivalent to having a knowledge base where each formula is a clause: when you have a bunch of formulas in your knowledge base, you're really taking the and of those formulas. So a knowledge base is a conjunction of a bunch of formulas, each written as a clause, OK? All right. Then the claim is that every formula in propositional logic can be converted into conjunctive normal form, into a new CNF formula whose models are exactly equal to the models of the old formula. So how can we do that? There's actually a simple recipe for converting every formula to conjunctive normal form. Let's look at an example. Say we have a formula: summer implies snow, and the whole thing implies bizarre, OK? Here I don't have any ands or ors; I have these implications, so I need to get rid of them. How can I do that? I can remove an implication and write it in the form I talked about earlier: negation of the first term, or the second term. So the outer implication becomes the negation of this whole first term, or the second term; and the inner implication I can remove in the same way, writing it as negation of summer or snow, OK?
So now what I'm going to do is push the negation inside using De Morgan's law: the or becomes an and, with the negation on each part. I have a double negation on summer, so I get rid of that double negation and make it positive. Now I have a bunch of literals, positive or negative, and I only have ands and ors. But this is actually not yet in conjunctive normal form, because conjunctive normal form means an and of a bunch of ors, and this is the opposite: an or of an and. But you can distribute the or over the and, and if you do that, you end up with two clauses: summer or bizarre, and negation of snow or bizarre, OK? So you end up in CNF; any formula you give me, I can put into CNF. The general recipe is: if you have implications or bi-directional implications, replace them. A bi-directional implication becomes the and of two implications, and if you see an implication, write it as a negation and an or. If you have any negations, move them inside using De Morgan's law; if you have double negations, remove them; and at the end, distribute or over and wherever you have something of that form, and you'll end up with a conjunctive normal form. That is the general recipe for converting any propositional logic formula to a CNF one, OK? And why are we writing things in CNF? Because the resolution rule works only on clauses, which means it only works on formulas in CNF form. So what's the idea of the resolution algorithm? Why are we running resolution? The reason is that, in general, you might be asking me whether some formula f is true or not. We care about having that assistant that we can ask things of, or tell things to. And what does that do?
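The recipe can be sketched as three recursive passes over a tiny formula AST (nested tuples of my own devising): eliminate implications, push negations in with De Morgan, then distribute or over and. Applied to (summer implies snow) implies bizarre, it yields the two clauses from the lecture:

```python
def elim_implies(f):
    """Rewrite p -> q as (not p) or q, recursively."""
    if isinstance(f, str):
        return f
    if f[0] == '->':
        return ('or', ('not', elim_implies(f[1])), elim_implies(f[2]))
    if f[0] == 'not':
        return ('not', elim_implies(f[1]))
    return (f[0], elim_implies(f[1]), elim_implies(f[2]))

def push_not(f):
    """Move negations inward (De Morgan) and drop double negations."""
    if isinstance(f, str):
        return f
    if f[0] == 'not':
        g = f[1]
        if isinstance(g, str):
            return f
        if g[0] == 'not':
            return push_not(g[1])
        flip = 'or' if g[0] == 'and' else 'and'
        return (flip, push_not(('not', g[1])), push_not(('not', g[2])))
    return (f[0], push_not(f[1]), push_not(f[2]))

def distribute(f):
    """Distribute or over and until the formula is an and of clauses."""
    if isinstance(f, str) or f[0] == 'not':
        return f
    a, b = distribute(f[1]), distribute(f[2])
    if f[0] == 'and':
        return ('and', a, b)
    for x, y in ((a, b), (b, a)):          # an 'and' on either side of the 'or'
        if isinstance(x, tuple) and x[0] == 'and':
            return distribute(('and', ('or', x[1], y), ('or', x[2], y)))
    return ('or', a, b)

f = ('->', ('->', 'summer', 'snow'), 'bizarre')
cnf = distribute(push_not(elim_implies(f)))
# (summer or bizarre) and ((not snow) or bizarre)
assert cnf == ('and', ('or', 'summer', 'bizarre'),
                      ('or', ('not', 'snow'), 'bizarre'))
```

This sketch omits bi-directional implication, which the recipe would first expand into two implications; it handles only the connectives in the example.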
That checks things like entailment. If you want to check whether a knowledge base entails a new formula f, that's the same thing as checking whether the knowledge base contradicts negation of f, which is the same as checking whether the knowledge base with negation of f added is unsatisfiable, OK? So how do we run the resolution-based algorithm? If you ask me whether f is entailed, I add negation of f to my knowledge base, and then I convert all my formulas to CNF form; we know we can do that. Once I have everything in CNF form, I repeatedly apply resolution until everything has converged, and then I return entailment if and only if I derive false, OK? That is how we run resolution to answer a question about entailment. Let's look at an example. Say I have a knowledge base with a bunch of things in it; they're not in CNF form or clause form or anything, but I have a bunch of formulas. And you're asking me if this knowledge base entails a new formula, and that new formula is C. How do I check that using resolution? I add the negation of C to my knowledge base, and I convert everything to CNF form using that recipe: removing implications, pushing negations in, and distributing ors over ands. Once I do that, everything is in clause form; I have clauses and literals. So this is my knowledge base, everything in clause form, in CNF form, and then I repeatedly apply resolution. How do I apply resolution? Let's start with these two: I have A, and I have negation of A or B or C. In some sense, A and negation of A get canceled out, so I can add B or C to my knowledge base using resolution, OK?
I have negation of B in my knowledge base, so negation of B and B cancel out and I can add C to my knowledge base. And I've added negation of C to my knowledge base, so negation of C and C cancel out and I get false, OK? So after repeatedly applying resolution here, I am getting false, meaning that when I added the negation of the formula, I was able to get a contradiction. And what that means is that the knowledge base actually entails the formula, the formula being C in this case. So the knowledge base entails C; I can derive C. All right, so a good question to ask is: what is the time complexity of these algorithms? If you remember modus ponens, the idea was that at every step we add at most one propositional symbol to our knowledge base, and if you're adding one propositional symbol at a time, with N symbols there are at most N things to go over. So modus ponens is a linear time algorithm; it's pretty simple, and it converges fairly quickly because there are only N things to go over. But with resolution, we are adding whole clauses back to our knowledge base, and in the worst case we end up adding disjunctions over all subsets of the symbols. What that means is we may have to go over all of them, and that takes exponential time. So running resolution, in terms of time complexity, takes exponential time. And it's actually not surprising that it takes exponential time: if you think about what resolution is doing, it's trying to solve a satisfiability problem. You have clauses and you want to check satisfiability; you're doing model checking, and satisfiability is known to be NP-complete.
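The whole procedure, negate the query, work over clauses, saturate with resolution, and report entailment iff the empty clause (false) appears, can be sketched as follows. Clauses are sets of string literals with '~' marking negation (an encoding chosen for illustration), and the resolvent helper is included so the block stands alone:

```python
def complement(lit):
    return lit[1:] if lit.startswith('~') else '~' + lit

def resolvents(c1, c2):
    """All clauses obtained by cancelling one complementary literal pair."""
    return [(c1 - {lit}) | (c2 - {complement(lit)})
            for lit in c1 if complement(lit) in c2]

def entails(kb, query_literal):
    """Does the clause set kb entail the literal? Add its negation, saturate."""
    clauses = {frozenset(c) for c in kb} | {frozenset({complement(query_literal)})}
    while True:
        new = set()
        for c1 in clauses:
            for c2 in clauses:
                if c1 == c2:
                    continue
                for r in resolvents(c1, c2):
                    if not r:            # derived the empty clause: false
                        return True
                    new.add(frozenset(r))
        if new <= clauses:               # converged without deriving false
            return False
        clauses |= new

# Lecture example after CNF conversion: A, (~A or B or C), ~B; query: C
kb = [{'A'}, {'~A', 'B', 'C'}, {'~B'}]
assert entails(kb, 'C')
```

The saturation loop is what makes the worst case exponential: each round can add new clauses, and there are exponentially many possible clauses over N symbols.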
So it's not surprising that running resolution until convergence takes exponential time. There are really some trade-offs here. If you restrict yourself to Horn clauses, you can use modus ponens; the nice thing about it is that it runs in linear time, but it is less expressive: you're not able to represent everything in propositional logic, only Horn clauses. Still, Horn clauses turn out to be useful for many applications, especially some applications in programming languages, and in those applications it does make sense to use modus ponens because it's faster, it takes linear time. On the other hand, if you really care about all of propositional logic, about dealing with any type of clause, then you have to use resolution, and the problem with resolution is that it's solving an NP-complete problem, so it takes exponential time.
Robotics, by Prof. D. K. Pratihar
Lecture 43: Biped Walking (Contd.)
Now, we are going to discuss how to tackle the scenario of the double support phase. During the double support phase, both feet are on the ground. Now, let us see how to carry out the analysis during the double support phase. During this phase, this particular foot is on the ground, and this other foot is also on the ground, and this is the staircase; the staircase is denoted by this. Now, as I discussed, L_1 and m_1 are the length and mass of the first link, that is, the foot. Similarly, we have L_2 and m_2, the length and mass of the second link, L_3 and m_3 for the third, L_4 and m_4 for the fourth link, m_5 and L_5 for the fifth, m_6 and L_6 for the sixth one, and L_7 and m_7 are the length and mass of the other foot. Now, tackling this double support phase is a bit difficult, and what we do is the following. The trunk mass m_4 has a significant influence on the dynamic balance margin. Whenever this biped robot is walking on a flat surface, we can distribute this m_4 into two equal parts; and the moment it is negotiating this type of staircase, we again divide m_4 into two parts, but the two parts will not be equal. Now, why do we need this? For the purpose of analysis of this double support phase, we will have to assume that it consists of two single support phases. And we have already seen how to carry out the analysis for the single support phase.
So, this particular DSP (double support phase) is assumed to consist of two single support phases (SSPs), and for each of these SSPs we try to carry out the dynamic analysis and find out that particular ZMP point. Now, let us see how to carry out this analysis. As I told, the trunk mass m_4 has a significant influence on the balance. So, what I do is: we take the projection of the trunk mass on the ground, and we try to find out the distance from the edge of one foot to this projected point; similarly, we find the distance of the edge of the other foot from the projected point of the trunk mass. Suppose these are denoted by capital X_1 and capital X_2. Now, if I know X_1 and X_2, I can distribute m_4 into two parts. Suppose X_1 is equal to X_2; that means the biped robot is walking on the plain surface. In that case, we can find m_41: m_41 is equal to m_4 X_2 divided by (X_1 plus X_2), and similarly, m_42 is nothing but m_4 X_1 divided by (X_1 plus X_2). Now, if X_1 equals X_2, then definitely m_41 becomes equal to m_42; that means, whenever it is walking on the plain surface, m_41 equals m_42. But, the moment it is negotiating the staircase, crossing a ditch, or walking on some uneven terrain, m_41 and m_42 are not equal. Supposing that it is negotiating the staircase: in that case, m_41 and m_42 could be, say, 40 percent and 60 percent of m_4, or they could be 30 percent and 70 percent of m_4.
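The mass-splitting rule above can be written as a tiny helper. The formulas are exactly the ones just stated; the numerical values of m_4, X_1, and X_2 below are purely illustrative.

```python
def split_trunk_mass(m4, x1, x2):
    """Split the trunk mass m4 between the two assumed single support phases.

    m41 = m4 * X2 / (X1 + X2),  m42 = m4 * X1 / (X1 + X2)
    The mass nearer a foot (smaller distance) receives the larger share.
    """
    m41 = m4 * x2 / (x1 + x2)
    m42 = m4 * x1 / (x1 + x2)
    return m41, m42

# Plain surface: X1 == X2, so the split is equal.
print(split_trunk_mass(10.0, 0.2, 0.2))   # (5.0, 5.0)
# Staircase: X1 != X2, giving an unequal (here 40/60) split.
print(split_trunk_mass(10.0, 0.3, 0.2))   # (4.0, 6.0)
```

Note that m41 + m42 always equals m4, so no mass is lost in the decomposition.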
Now, once I have got the numerical values for m_41 and m_42, very easily we can consider this double support phase as nothing but a combination of two single support phases. For example, one phase will be something like this: this is one single support phase, with one mass here, another mass here, another here, and another here. So, this is nothing but a single support phase; similarly, on the right-hand side, I can consider another single support phase. And, once I have got a single support phase, by following the same principle, I can find out what should be the ZMP point. For example, if I concentrate on this particular single support phase, I can find out the ZMP, and this ZMP is denoted here. Similarly, for this double support phase, there is another single support phase, which is nothing but this: this is one link, this is another link, another link, another link, and this is the ZMP point, this is your X_ZMP. Now, remember, associated with this X_ZMP there is a vector, and I can also find out what should be its magnitude and what should be its direction. And, we assume that whenever the biped robot is walking on the ground or negotiating some staircase, there should be some ground reaction force, and this ground reaction force is going to act through this particular ZMP. Due to this ground reaction force only, we are able to walk. So, this is the point through which the ground reaction force will act, and as I told, this is nothing but a vector. So, this indicates the ground reaction force, and it is passing through the ZMP.
Now, if I extend this particular straight line, I will be getting something like this, and if I extend this vector, I will be getting something like this, and these two straight lines are going to intersect at this particular point. And, we take the projection of this intersection point on the ground, and that indicates the system ZMP, that is, X_ZMP,system. Now, once again, let me concentrate on the single support phase and the double support phase and how to maintain the balance. During the single support phase, supposing that this is the ground foot: if the ZMP point is lying within this particular ground foot region, then only the dynamic balance is maintained; but if it goes outside, the balance is going to be lost. Now, consider a double support phase something like this: this is one ground foot, and this is another ground foot. Here, the safe region is denoted by this; so, if I just draw it, this is nothing but the safe region. Now, the system ZMP should fall within this safe region; then only the dynamic balance will be maintained, otherwise it is going to lose the balance and fall. Now, in a particular walking cycle, whether in the single support phase or in the double support phase, the balance has to be maintained; at the same time, during the transition between the single support phase and the double support phase, the balance has to be maintained, then only it will be able to keep its balance over the whole walking cycle. So, this is the way the biped robot maintains balance during walking, and this is how to determine the ZMP during the double support phase, which I have already discussed a little bit.
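The balance condition just described can be sketched as a simple 1-D check along the walking direction: the robot is dynamically balanced when the (system) ZMP falls inside the support region, which is the ground foot in single support and the region spanned by both feet in double support. This is a deliberate simplification (the real safe region is a 2-D area), and all coordinates are illustrative.

```python
def is_balanced(x_zmp, support_intervals):
    """1-D balance check: support_intervals is a list of (x_min, x_max)
    extents, one per foot on the ground. The safe region is taken as the
    span from the rearmost to the foremost foot edge."""
    lo = min(a for a, _ in support_intervals)
    hi = max(b for _, b in support_intervals)
    return lo <= x_zmp <= hi

# Single support phase: one foot from x = 0.0 to x = 0.25 m.
print(is_balanced(0.10, [(0.0, 0.25)]))               # True
# Double support phase: safe region spans both feet.
print(is_balanced(0.40, [(0.0, 0.25), (0.5, 0.75)]))  # True  (between the feet)
print(is_balanced(0.90, [(0.0, 0.25), (0.5, 0.75)]))  # False (outside: it falls)
```

In 2-D, the same idea uses the convex hull of the foot contact areas instead of an interval.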
Now, here I am just going to show you some stick diagrams: a biped robot having 7 degrees of freedom is negotiating the staircase during the single support phase, so one foot is on the ground and the other foot is in the air. This is another stick diagram, where the same 7-degrees-of-freedom biped robot is negotiating the staircase, and here both the feet are on the ground. Now, I am going to show you one real experiment on a very simple biped model, and I am going to discuss the different components of the biped robot which we have in our lab. This is nothing but a very simple biped robot, and we can see that here we have got 2 servo motors: this is one servo motor, and this is another servo motor; it is a very simple model. Now, with the help of this particular servo motor, we can control the movement in the forward and backward directions. Similarly, with the help of this other servo motor and these two tilt rods, we can lift and place the foot of this particular biped robot. And here we have got one microcontroller, with the help of which we can run the preprogrammed motion. Now, if you see, the area of this particular foot is much larger compared to the overall dimensions of this very simple setup. The purpose, which I have already discussed, is to get a larger safe region to maintain its dynamic balance during the double support phase or the single support phase, and that is why the dimensions of this particular foot have been kept somewhat large, even compared to the overall dimensions of the setup. Now, with the help of this very simple biped model, we are going to show you how to generate the forward and backward movement.
Or, how it can walk on the plain surface in the forward and backward directions. So, now we are going to show you that particular experiment. Now, it is showing the forward movement of the biped robot, and now it is moving in the backward direction. Thank you.
Robotics_by_Prof_D_K_Pratihar
Lecture_07_Introduction_to_Robots_and_Robotics_Contd.txt
Now, I am going to start with the end-effector, which is generally used in robots. If we see the robot's end-effector, we generally use two types of end-effectors, which I am going to discuss, but the basic purpose of using an end-effector is to grip or hold some parts, materials, or tools, just to perform some specific tasks. And this particular end-effector is generally attached to the wrist joint, that is, the last joint. So, there we attach the end-effector. Now, the end-effector could be of two types: one is called the gripper, and the other is some specific or specialized tool. The grippers are used, for example, when we have to grip some part or tool: we may have to do some machining, for example, drilling or grinding or milling, and we will have to use some special type of gripper so that we can grasp that particular object. Sometimes, as we mentioned, the robots are used to perform some specific tasks, for example, spray painting or welding. Now, if we want to do spray painting, the spray gun has to be attached to the wrist joint; similarly, if we want to do some sort of welding, then the welding electrode has to be attached to the wrist joint, and that is also an end-effector, but this particular end-effector is in the form of a tool. So, the end-effectors could be grippers, or they could be some specialized tools. Now, I am just going to look at the different types of grippers which we generally use, so I am going to classify the grippers. The first classification is: single gripper and double gripper. As I mentioned, if this is the wrist joint, here in this particular wrist joint we connect that particular gripper. Now, I can connect only one gripper just to serve a specific purpose, or, depending on the requirement, I can attach two independent grippers.
So, if I use two independent grippers, that becomes the double gripper, and if I use only one such gripping device, that is called the single gripper. So, this is the difference between single gripper and double gripper. Next comes the concept of the internal gripper and external gripper. Let me take one example. Supposing that I have got a steel pipe, for example, a water pipe or oil pipe. Now, I want to grip this particular pipe, and there are two possible ways in which I can do it. I can use this type of gripping pad inside: one gripper is here and another gripper is here, and since this is a hollow pipe, I can grip it from the inside with the help of these two fingers. Or, there is another possibility: the same pipe I can grip externally, so I can put one gripping pad here and another gripping pad here, and I can grip this particular hollow pipe from the outside. If I use the first type of gripper, that is called the internal gripper; if I use the second type, with the two fingers outside, that is called the external gripper. So, this is the difference between the internal gripper and the external gripper. The next classification is the soft gripper versus the hard gripper. Let me take one example. Supposing that I just grip this marker like this: there is a possibility that I can grip it with the help of these two fingers, and although this is not a perfect point contact, it is almost similar to a point contact, and with the help of these two fingers I can grip it. And there is another way of gripping: I can also grip it like this, and if I grip it like this, this is nothing but a perfect area contact.
So, if I use a point contact like this, that is nothing but the hard gripper, and if I use this type of area contact, that is called the soft gripper. For the hard gripper, we maintain point contact just to grip that particular object, and for the soft gripper, we use area contact to grip that particular object. Now, here, in this particular sketch, I am just going to introduce another concept, which is also very important from the gripper design point of view, and that is nothing but the difference between force closure and form closure. So, this is actually the force closure, and this is nothing but the form closure. The concept of force closure and form closure is very important. Let me take one very simple example. Supposing that I am going to write something on the board with the help of a white chalk, and the chalk has a circular cross-section. Now, to grip that particular chalk so that I can write in a very nice way, what I will have to do is put some force here with my fingers; then only can I grip this particular chalk, because the chalk has a circular cross-section, and if I do not grip it properly, if I do not put force, there is a possibility that I will not be able to write or draw the picture which I am planning to. The same is true here: if I do not grip it properly, I will not be able to write anything. Are you getting my point? Now, here is another example. Supposing that the object has a cross-section which is nothing but a square, in place of a circular one. Now, this object has a square cross-section, and I just try to grip it.
Here, gripping will be much easier compared to the circular object, because here I have got one finger here, another finger here, another finger here, and another finger here, and this particular geometry, the corners, is going to help us in gripping, which is absent in the circular case. So, supposing that both the chalks have the same mass, the same weight, and I have one chalk with a circular cross-section and one chalk with a square cross-section: handling the chalk with the square cross-section will be much easier, and that is why, in fact, some universities, particularly foreign universities, take the help of square chalks; but here, we take the help of this type of circular cross-section chalk. So, the difference between the concepts of force closure and form closure is as follows: here, to grip this particular object, we will have to take the help of some force, and here, this particular geometry is going to help us in gripping this particular object. So, this is the concept of form closure, and this is the concept of force closure. If we want to design and develop some sort of gripper, we will have to see the nature of the object and the cross-section of the object which we are going to grip, and accordingly, depending on the requirement, we will have to design that particular gripper. So, this is the concept of force closure and form closure. Now, another classification is the active gripper versus the passive gripper. By active gripper, we mean a gripper having some sensor, and by passive gripper, we mean a gripper without a sensor. Now, let me take the example of our own hand as a gripper: for example, with the help of these fingers, I am just going to grip something.
Now, for gripping, definitely I will have to put some force, but at the same time, this particular skin which I have on the finger is going to help us a little bit, and this particular sensor, the skin, is a touch sensor, which I will be discussing in detail after some time. So, this particular skin, that is, the touch sensor, is going to help me in gripping, besides that particular force which I am putting, and that is nothing but an example of an active gripper. On the other hand, we have got the concept of the passive gripper, where we do not use any such sensor. I will be discussing the working principle of the passive gripper in much more detail after some time. Now, I am just going to discuss the working principles of a few very simple mechanical grippers, which are very easy to understand. In a mechanical gripper, what we do is, we try to design some fingers, and we operate these fingers with the help of some mechanisms. These are purely mechanical; these particular grippers are very simple to design, and they are less costly and less versatile. So, I am just going to discuss a few available, very simple mechanical designs for the gripper. The first one is a gripper with linkage actuation. Let us try to understand the working principle of this particular very simple gripper. Here we have got two gripping pads; these two are the gripping pads or gripping jaws, and with the help of these gripping pads, I am just going to grip some object here. Now, I will have to grip, and I will have to un-grip also; how to do it? The mechanism is very simple: here we have got one piston-cylinder arrangement, and this particular piston can slide inside the cylinder.
Now, the moment this particular piston moves towards this solid arrow, that means in this particular direction: here I have got a rotary joint. What will happen is, the moment it slides in this direction, this angle theta is going to be reduced, and as theta decreases, these two points are going to come closer to each other; and here we have got the support, and due to that, these two gripping jaws are going to move away from each other, and it is going to un-grip. On the other hand, if this particular piston is moving towards this dotted arrow, what will happen is, theta is going to increase, and these two points are going to move away from each other, and consequently these two gripping jaws are going to come closer to each other, and it is going to grip this particular object. So, this is a simple gripper designed with the help of a linkage or mechanism. Now, I am going to discuss the working principle of another very simple gripper. Here, once again, this is the gripping pad or gripping jaw with the help of which I am going to grip this particular object; the object is here, say. Now, once again we use the principle of the cylinder-piston mechanism, so this particular piston can slide. Supposing that it is sliding in the direction shown by the solid arrow: here we have got one mechanism, which is called the swing-block mechanism. The swing-block mechanism is very simple: supposing that I have got this type of cylinder sort of thing, so this is the cylinder, and here you will find that one link is connected here and another link is connected there, and here you will find one circular groove, and through this particular groove another link will pass.
Now, the moment this particular piston slides in this particular direction, what will happen is, these two swing-blocks will try to come closer to each other, and if they come closer to each other, it is going to grip this particular object. And the reverse is the situation if it is sliding in the other direction: if it is sliding in that direction, these two swing-blocks are going to move away from each other, and consequently there will be un-gripping of this particular object, shown by the dotted one. So, this is the way we can grip and un-grip with the help of this type of very simple mechanical gripper. Now, I am going to discuss the working principle of another very simple mechanical gripper. Here, this is nothing but one gripping pad, this is another gripping pad, and I am going to grip this particular object. Now, here we have got one pinion; a pinion is nothing but a small gear. So, I have got a pinion here, and I have got a rack here: this is nothing but the rack-and-pinion mechanism. This particular pinion can rotate either clockwise or anticlockwise, and here the upper rack is connected to this gripping pad, and the lower rack is connected to the other gripping pad in this particular fashion. Now, supposing that this particular pinion, which is connected to the motor, is rotating in the clockwise sense: the upper rack is going to slide like this, and the lower rack is going to slide like this; that means one jaw is going to move this way, the other is going to come this way, and it is going to grip this particular object. Now, reverse the situation: this particular pinion rotates in the anticlockwise sense.
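The rack-and-pinion kinematics just described can be captured in one line of arithmetic: when the pinion rotates by an angle theta (in radians), each rack, and hence each jaw, slides by s = r * theta, where r is the pinion pitch radius; since the two racks move in opposite directions, the jaw gap changes by 2s. The numbers below are illustrative, not from the lecture.

```python
import math

def jaw_gap_change(pinion_radius, theta_rad):
    """Change in the gap between the two gripping jaws of a rack-and-pinion
    gripper for a pinion rotation of theta_rad radians."""
    s = pinion_radius * theta_rad   # travel of each rack along its slide
    return 2.0 * s                  # racks move oppositely: gap changes by 2s

# Example: 10 mm pitch radius, 30 degree rotation.
print(jaw_gap_change(0.010, math.radians(30)))  # ~0.0105 m
```

The sign of theta (clockwise vs. anticlockwise) decides whether the jaws close to grip or open to un-grip.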
So, if it is rotating in the anticlockwise sense, this upper rack is going to slide along this particular direction, and this lower rack is going to slide in that direction; consequently, one jaw slides this way, the other slides that way, and it is going to un-grip that particular object. So, this is the way we can grip and un-grip this particular object with the help of this type of very simple mechanical gripper. Then comes the working principle of another very popular mechanical gripper, namely one using the mechanism of cam and follower. By using the cam-and-follower mechanism, we can design this particular gripper. Let us see how. Once again, we have got the piston and cylinder arrangement, and this is one gripping pad, this is another gripping pad; and here we have got that particular cam profile, a fairly complicated cam profile, and we have got the roller follower. So, we have a cam-and-follower arrangement. Now, supposing that this particular piston is sliding in the direction shown by the solid arrow: the cam is going to slide along with it. Maybe initially the roller was here; now this particular cam will slide in this direction, so ultimately the roller comes up to here. That means these two rollers will come very close to each other, and if they come closer to each other, these two gripping pads are going to come closer to each other, and it is going to grip this particular object; and to reverse the situation, the piston slides towards the dotted direction.
So, if it slides along this direction, what will happen is, initially the roller was here, but after some time the roller could be here; that means the distance between the rollers is going to increase, that means these two rollers are separated out, and due to that, these two gripping pads are also going to be separated out, and it is going to un-grip that particular object. So, this is the way this cam-actuated gripper works. These are all very simple mechanical designs of grippers. Then comes the vacuum gripper, which is very frequently used nowadays, particularly for handling flat-plate sorts of objects. Let me take a very simple example. Supposing that the robot is going to do some sort of pick-and-place type of operation: it is going to pick one steel plate and place it somewhere else. Now, how to grip it? Supposing that I have got a steel plate here, say with a thickness of 20 millimeters, a length of about 1 meter, and a breadth of about 0.5 meter. This type of steel plate the robot is going to grip and place elsewhere. How to do it? What we do is, we take the help of this type of vacuum gripper: we put one vacuum gripper here and another vacuum gripper here, and we will be able to grip this particular steel plate. Now, let us see the working principle of this type of gripper. First, let us see that particular picture, the way it is used, and then I will come back. So, this is the way: this is the object to be gripped, I am using these two vacuum grippers, and this is connected to the robotic wrist. This particular cup is nothing but the vacuum gripper. Now, let us see how it works.
Now, to explain the working principle, I will have to go back to this particular design. Here, the flowing fluid is air; you can see that air is coming in and going out. Now, we know that air is a compressible fluid, but for simplicity, let me assume that air is an incompressible fluid, that is, the density rho is constant. This is an assumption, because we know that air is a compressible fluid, but for the purpose of analysis, let me assume that rho is kept constant, that is, we treat it as an incompressible fluid. Now, the way it works is as follows: here we have got a strainer, which is just going to separate out the dirt particles. Inside, we have got one plate, and on this particular plate I have got some small drilled holes, some small openings; this is known as the orifice plate. Now, the air will be forced to pass through this orifice plate, and the moment it passes through this small area, what will happen to its velocity? The velocity is going to increase, based on the continuity equation, because according to the continuity equation, the volume rate of flow, that is, A into V, area into velocity, should remain constant. Here, the area is decreasing, so the velocity is bound to increase. Now, if the velocity increases, what will happen to the pressure? According to Bernoulli's equation, the pressure is going to be reduced. So, as velocity increases, the pressure is reduced: here, the velocity will be more, but the pressure will be lower, and then this air will pass through the venturi. Now, in the venturi, once again there will be some change in the cross-sectional area, and due to this sudden change of the cross-sectional area, what will happen?
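The continuity and Bernoulli reasoning above can be sketched numerically under the same incompressibility assumption (constant rho): continuity (A1·v1 = A2·v2) gives the velocity at the orifice, and Bernoulli (p + ½·rho·v² constant, neglecting height change) gives the pressure drop. All the numbers here are illustrative.

```python
RHO_AIR = 1.2  # kg/m^3; constant-density assumption, as in the lecture

def orifice_state(p1, v1, a1, a2, rho=RHO_AIR):
    """Velocity and pressure after a contraction from area a1 to area a2."""
    v2 = v1 * a1 / a2                       # continuity: smaller area -> higher velocity
    p2 = p1 + 0.5 * rho * (v1**2 - v2**2)   # Bernoulli: higher velocity -> lower pressure
    return v2, p2

# Illustrative numbers: atmospheric inlet pressure, area halved at the orifice.
v2, p2 = orifice_state(p1=101325.0, v1=20.0, a1=2e-4, a2=1e-4)
print(v2)              # 40.0 (m/s): velocity doubles when the area halves
print(p2 < 101325.0)   # True: pressure falls below the inlet pressure
```

Chaining a second contraction (the venturi) drops the pressure further, which is what pulls the cup region below atmospheric.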
Once again, the velocity is going to increase, but the pressure will be further decreased, and this particular region is connected to this part, so here the pressure becomes below atmospheric. So, inside the cup, the pressure is below atmospheric. Now, if you see this application, the way we are using it: inside the cup we have got pressure below atmospheric, and outside we have got atmospheric pressure, and due to this pressure difference, this particular elastic cup, or vacuum gripper, is going to grip the object. So, this is the way, due to this pressure difference, it is able to grip that particular steel plate. Now, how to un-grip? To un-grip it, what we will have to do is stop the air flow. Now, if you stop the air flow, what will happen? This particular region is connected to atmospheric air, so inside the cup the pressure will be atmospheric, and outside it is also atmospheric; there is no pressure difference, and then, due to its self-weight, this particular steel plate is going to be un-gripped. So, this is the way we grip and un-grip if we use some sort of vacuum gripper. Now, this vacuum gripper is developed in the form of an elastic cup, and this elastic cup is made of an elastic material like rubber or soft plastic. If the object is hard, we generally use a soft elastic material, and if the object is soft, we use some sort of harder material for the vacuum gripper. Now, to generate the vacuum, we can use a venturi and orifice, the way I discussed, or we can also use some sort of vacuum pump. By using the vacuum pump also, we can create that particular vacuum inside the vacuum gripper or elastic cup, and using this we can grip and un-grip.
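The gripping condition implied above can be stated as a rough force balance: the pressure difference across the cups, times their total area, must at least support the plate's weight (with some safety margin). This is a back-of-the-envelope sketch; the pressure difference, cup area, and safety factor are all assumed values, and the plate mass matches a 1 m × 0.5 m × 20 mm steel plate at roughly 7850 kg/m³.

```python
G = 9.81  # m/s^2

def can_hold(delta_p, cup_area, n_cups, plate_mass, safety_factor=2.0):
    """True if the vacuum cups can lift the plate with the given margin.

    delta_p:    pressure difference across each cup (Pa, below atmospheric)
    cup_area:   effective sealed area of one cup (m^2)
    """
    holding_force = delta_p * cup_area * n_cups   # N, from the pressure difference
    required = safety_factor * plate_mass * G     # N, weight with safety margin
    return holding_force >= required

# Two cups of 0.02 m^2 each, 50 kPa below atmospheric, ~78.5 kg steel plate.
print(can_hold(delta_p=50e3, cup_area=0.02, n_cups=2, plate_mass=78.5))  # True
# A weaker vacuum with smaller cups fails the same check.
print(can_hold(delta_p=30e3, cup_area=0.01, n_cups=2, plate_mass=78.5))  # False
```

Stopping the air flow drives delta_p to zero, so the holding force vanishes and the plate releases under its own weight, exactly as described.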
So, this particular part will be connected to the robot's wrist, and the robot is going to grip and un-grip that type of object, like steel plates. Thank you.
Robotics_by_Prof_D_K_Pratihar
Lecture_01_Introduction_to_Robots_and_Robotics.txt
Let us start with the course on Robotics. The first topic is an Introduction to Robots and Robotics. Now, before we start learning robotics, a few questions may come to our mind; these are as follows: What is a robot? What is robotics? Why should we study robotics? What is the motivation behind robotics? How can we instruct a robot to perform a particular task? What are the different types of robots we generally use? What are the possible applications of robots? Can a human being be replaced by a robot? And so on; there are many other such questions. Now, what I am going to do is give an answer to the first few questions; but the last one, that is, can a human being be replaced by a robot, I will try to answer towards the end of this particular course. Let me start with the first one, that is, what is a robot? I am just going to define the term: robot. The term: robot has come from the Czech word: robota, which means the forced or slave laborer. This is just like a servant: we are going to give some tasks to the robot, and it is going to perform those tasks just like a servant. Now, the term robot was introduced in the year 1921 by Karel Capek. Karel Capek was a Czech playwright; he wrote one drama, and the name of the drama was: Rossum's Universal Robots (R.U.R.). And in that particular drama, he introduced the term: robota, that is, the robot. But the way he described the robot is as follows: the robot was look-wise similar to a human being. Nowadays, however, we use quite a few robots which do not look like human beings. So, this is the way the term robot was introduced in the year 1921, but during that time, there was not even a single robot in the world. Now, if you see the literature, the term robot has been defined in a number of ways.
For example, according to the Oxford English Dictionary, a robot is a machine capable of carrying out a complex series of actions automatically, especially one programmable by a computer; so, this is nothing but an automatic machine. Then, according to ISO, that is, the International Organization for Standardization, the robot has been defined as follows: the robot is an automatically controlled, reprogrammable, multifunctional manipulator, programmable in three or more axes, which can be either fixed in place or mobile, for use in industrial automation applications. Now, as I mentioned, a robot is nothing but an automatically controlled machine. And it is reprogrammable; that means the same robot can perform a variety of tasks, and to perform this variety of tasks, we will have to change its program. And it is multifunctional; that means the same robot, the same manipulator, can perform different types of machining operations, it can do some sort of pick-and-place type of operation, and so on. Now, here we are using the term: manipulator. By manipulator, we mean a robot with a fixed base. This manipulator could be either a serial manipulator or a parallel manipulator; these things I will be discussing in detail after some time. Now, another very popular definition is given by RIA, that is, the Robot Institute of America. They define a robot as follows: it is a reprogrammable, multifunctional manipulator designed to move materials, parts, tools, or specialized devices through variable programmed motions for the performance of a variety of tasks. Now, these terms I have already defined: by manipulator we mean a robot with a fixed base, and that is nothing but a mechanical hand; that means the human hand we are going to model, design, and develop in the form of an artificial hand, and that is nothing but the manipulator, and it is reprogrammable and multifunctional.
Now, in terms of re-programmability, if we compare a robot with an NC or CNC machine: in a CNC machine, that is, a computerized numerical control machine, we can perform a variety of tasks by changing the program. Similarly, in robots, the same robot I can use to serve a variety of purposes, simply by changing the program. But, here, there is a basic difference between the level of re-programmability which can be achieved by a robot and that which can be achieved by a CNC machine. Now, it is important to note that the level of re-programmability which can be achieved by a robot is more compared to that of the CNC machine. And, that is why, a CNC machine is not a robot. I have put one note here, that a CNC machine is actually not a robot. Now, next, I am just going to define what we mean by robotics. Now, robotics is the science which deals with the issues related to the design, development and applications of robots to perform a variety of tasks. The term robotics was coined by Isaac Asimov in the year 1942. Isaac Asimov wrote one story; the name of this story was Runaround, and in that particular story, he used the term robotics first. But, once again, let me mention that during that time, that is, during 1942, there was not even a single robot in this world. Now, here, in robotics, we use the fundamentals of different subjects, for example, physics, mathematics, mechanical engineering, electrical and electronics engineering, computer science. And, that is why, it is a bit difficult to become a true roboticist, because if we want to become a true expert of robotics, we will have to know the fundamentals of all these basic subjects; robotics is actually a multi-disciplinary subject. Now, I am just going to define one concept, which I have already mentioned a little bit: in robotics, we try to copy 3 Hs.
Now, these 3 Hs are nothing but the Hand, Head, and Heart; that means, we try to copy the hand of a human being in an artificial way, in the form of one manipulator, that is, the mechanical hand. We try to copy the head of a human being, that is, nothing but the intelligence. And, we also try to copy the heart of a human being; not the mechanical heart, but the emotion of a human being. And, that is why, in future, the robot will be intelligent and, at the same time, emotional too. Now, if we consider human beings, we are intelligent, we are emotional, and in robotics, we try to copy everything from the human being. So, in future, we are trying to design and develop intelligent and emotional robots. Now, the next is: what is the motivation behind robotics? Why should we study robotics? What is the reason? Now, if you see today's market, it is dynamic and competitive. And, if you want to be in competition, and if you want to be in business, you will have to fulfill at least three requirements. Now, these requirements are as follows: you will have to produce goods at low cost; at the same time, the productivity has to be high; and the quality of the product has to be good. Now, you see the three objectives: reduced production cost, increased productivity, and improved product quality. Now, it is a bit difficult to achieve all these three things at a time, and some of them are actually conflicting. Now, if you want to achieve all three, there is only one solution, and that is nothing but automation. So, you will have to go for automation, if you want to achieve all three requirements. Now, if I proceed further, let me tell you something regarding the different types of production methods which we generally use. Now, if you see the production methods, the purpose of production is actually to convert the raw materials into the finished product. Now, this production could be of three types.
For example, we can have piece production, then there could be batch production, then there could be mass production. Now, in piece production, we have got several designs, and each design we will have to manufacture in small numbers. Now, in batch production, we have got a few designs, and each design we produce a few in number. Now, in mass production, we have got only one design, and that particular product is to be produced in large numbers. Now, we can automate this particular batch production and mass production; of course, for piece production, automation is not possible, so there is no automation for piece production. But, for batch production, we can go for automation, and for mass production, we go for automation. For mass production, we generally go for fixed automation or hard automation. For batch production, we generally go for flexible automation. Now, robotics is an example of this flexible automation. And, that is why, for batch production, particularly in the manufacturing unit, we will have to go for robots, if we want to survive in this competitive market; and that is why robotics and robots have become so popular in manufacturing units. But, nowadays, not only in manufacturing units, the robots are used in different areas. For example, robots are nowadays used in space science, used in medical science; robots are also used for sea-bed mining, in agriculture, fire-fighting, and so on. So, there are various applications of robots nowadays. Now, here, all such things I have noted: automation can help to fulfill the above requirements, and robotics is an example of flexible automation, and that is why we should study robotics. Now, I am just going to concentrate on a brief history of robotics. Now, if you see the NC machine, that is, the numerical controlled machine, that was developed first in the year 1950, but the robot came after that.
So, the first robot was developed in the year 1954. In 1954, the first patent on the manipulator was filed by George Devol, and he is known as the father of the robot. In 1956, Joseph Engelberger started the first robotics company, and the name of the company is Unimation. So, Unimation is the first robotics company, which was started in the year 1956. Then, in the year 1962, General Motors used a manipulator; the name of the manipulator is Unimate, and this particular robot was used in a die-casting application. Now, next, in the year 1967, General Electric made one 4-legged robot, a 4-legged vehicle, and they demonstrated it and it worked well. Then, in the year 1969, SAM was built by NASA, USA; SAM was the name of that particular robot, which was built by NASA. Then, Shakey, an intelligent robot, was manufactured by the Stanford Research Institute, SRI. In fact, Shakey is the first intelligent mobile robot, and that was developed in the year 1969. In 1970, Victor Scheinman demonstrated a manipulator known as the Stanford Arm; and then, Lunokhod 1 was another robot, that was sent to the moon by the USSR; then ODEX 1, another robot, was built by Odetics. Then, in the year 1973, Richard Hohn of Cincinnati Milacron Corporation manufactured one robot; the name of the robot was T^3, The Tomorrow Tool. Then, in the year 1975, Raibert at Carnegie Mellon University, USA, built one one-legged hopping machine, and that is the first dynamically stable machine. Raibert, in fact, is known as the father of multi-legged robots. In the year 1978, Unimation, the first robotics company, could develop the PUMA, that is, the Programmable Universal Machine for Assembly. And, this is actually a manipulator whose current version is having 6 degrees of freedom, and it is very frequently used in various industries. Then, in the year 1983, Odetics, a robotics company, introduced a unique experimental six-legged device.
In the year 1986, Adaptive Suspension Vehicle, in short, ASV was developed by Ohio State University, USA. In 1997, NASA, USA, developed the intelligent robots like Pathfinder and Sojourner, and they sent them to the Mars, but that particular mission was a failure. And, that particular failure was due to some sorts of mismatch of the specifications. Next, in the year 2000, Honda could develop one Humanoid robot, Asimo robot. So, Asimo Humanoid robot was developed by Honda, in 2000. Then, comes in 2004, the surface of the Mars was explored by Spirit, and Opportunity, and this particular mission was successful. And, you might be knowing, what happened in 2012, the Curiosity, one intelligent autonomous robot, was sent to the Mars by the NASA, USA, and this particular mission was successful. Then, all of you might be knowing, what happened in the year 2015, Sophia, that is one intelligent and a little bit emotional humanoid robot, was built by Hanson Robotics, Hong Kong, and this is actually, as on today, the most sophisticated intelligent humanoid robot. And, a few weeks ago, this particular robot was brought to IIT, Bombay, and there she could talk, she could communicate with other people, and some of you might have seen in paper or a television. So, that particular very sophisticated intelligent humanoid robot is Sophia. So, these are in sort the brief history of the robotics. Now, the purpose behind giving this brief history of this robotics is just to tell you that we started a bit late in India. The study on robotics, we started around 1979, 80. So, we started a little bit late, although the first manipulator, the first patent was filed in the year 1954. Now, I am just going to concentrate on a particular robotic system. So, what are the different components of a typical robotic system? Now, here, in this particular schematic view, you can see that that this particular thing is nothing but a robot. 
So, this is actually the robot, and this is the manipulator, this is a serial manipulator. And, this is the drive unit for this serial manipulator. And, this is the controller or the director for this particular manipulator. Now, as I told that this is a serial manipulator, and by manipulator, we mean a robot with fixed base. So, here, the base of this particular robot is fixed. So, it is a fixed base, we have got one link here, another link here, another link here, and these links are used just to transmit the mechanical power. And, in between the two links, we have got the joints, so we have got a few joints. For example, say if I consider that this is the base of this particular manipulator, and this is the next link, so in between these two, you have got a joint here. Similarly, in between this link, and that particular link, we have got a joint here. Similarly, here in between this link and that link, we have got a joint here, between these and these we have got another joint here. So, in between the two links, so we have got a particular the joint. Now, if you see the robotic joint, the robotic joint could be basically of two types, it could be either the linear joint, or there could be rotary joints. So, the linear joint, it could be either prismatic joint or sliding joint. Now, here I am just going to draw a rough sketch for these prismatic and sliding joints. Now, if I just draw this particular prismatic joint, supposing that I have got a block like this. So, if I consider a block like this. Now, here, I can insert one this type of key. Now, if I insert this particular key here. So, this particular joint will be nothing but a prismatic joint, and this is a linear joint. So, this particular part, say part A can be just moved in the linear direction here, and this is an example of the prismatic joint. Now, similarly, I am just going to take the example of one sliding joint, now supposing that I have got a block like this. Say, I have got a block like this. 
And, here, I will have to insert one pin, that pin could be something like this. Say, I will have to insert a pin something like this here, and this particular pin can be inserted here and there will be only the linear movement, and this is the example of one sliding joint. So, these are all linear joints. Now, next come to the rotary joint. Now, here so, if you see the rotary joint, it could be of two types basically, we could have the revolute joint, and there could be twisting joint. Now, both are the rotary joints, but basically there is a difference between these revolute joint, and twisting joint. Now, to find out the difference between the revolute joint, and twisting joint; I am just going to take one example here. Now, let me take one example. So, this is the fixed base, and this is the link, and in between I have got a joint here. Now, with the help of this particular joint, so this particular link can be rotated something like this. So, it can be rotated something like this. Now, if this is the output link and this side is input. The axis of the output link is nothing but this about which I am taking the rotation. And, this particular axis is coinciding with the axis of the output link. This is the output link. This is the axis of the output link, and I am taking this rotation about this particular axis. So, this particular rotary joint is nothing but, the twisting joint denoted by T. Now, let me take another example, say this is one link and this is another link. So, here, I am just going to take the rotation, the rotation about this particular axis. Now, here if this is the input side, that is the output side. So, the axis of the output link is something like this, and the axis about which I am taking the rotation is this, and they are at 90 degrees. So, if this is output, the axis of the output link, and the axis about which I am taking the rotation, they are at 90 degrees. 
So, that type of rotary joint is known as the revolute joint, so this is nothing but a revolute joint denoted by R. So, basically once again let me repeat that we use two types of joints, namely linear joint, and rotary joint. And, once again, there are two types of linear joint, the prismatic joint and sliding joint, and two types of rotary joint we use, one is the revolute joint, another is the twisting joint. Thank you.
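The joint classification just described can be sketched in code. The joint codes and the one-degree-of-freedom-per-joint counting below are assumptions for illustration, not notation from the lecture:

```python
# Hypothetical sketch: describing a serial manipulator by its joint sequence.
# Codes assumed here: P = prismatic, S = sliding (both linear),
# R = revolute, T = twisting (both rotary). Each single-axis joint
# contributes one degree of freedom.

LINEAR = {"P", "S"}
ROTARY = {"R", "T"}

def joint_kind(code):
    """Classify one joint code as 'linear' or 'rotary'."""
    if code in LINEAR:
        return "linear"
    if code in ROTARY:
        return "rotary"
    raise ValueError("unknown joint code: " + code)

def degrees_of_freedom(joint_string):
    """One degree of freedom per single-axis joint."""
    for code in joint_string:
        joint_kind(code)  # validate every code
    return len(joint_string)
```

For example, an arm with a twisting joint at the base followed by two revolute joints ("TRR") would have 3 degrees of freedom.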
Robotics_by_Prof_D_K_Pratihar
Lecture_45_Summary_Contd.txt
Now, then we concentrated on the control scheme; that means, once you have determined the expression for the joint torque, then how to achieve that? How can the motor supply that amount of torque within a particular cycle time? Now, each joint is generally driven by a DC motor, which is equipped with one controller; say a PD controller, PID controller or PI controller. And, with the help of that, this particular motor will be able to generate the required torque. Now, in the partitioned control scheme, what we do is, the total torque tau is divided into 2 parts: tau = alpha tau prime + beta. Now, if I just write down the expression for the joint torque, that is, tau = D(theta) theta double dot + h(theta, theta dot) + C(theta); so, this is the inertia term, the Coriolis/centrifugal term and the gravity term. The alpha is nothing but D(theta), that is, the inertia term; and beta takes care of the Coriolis/centrifugal and gravity terms, that is, beta = h(theta, theta dot) + C(theta). Then, how to determine this particular tau prime? To determine tau prime, what we do is, we either take the help of a PD controller, that is, a Proportional-Derivative controller, or we take the help of a PID controller, that is, a Proportional-Integral-Derivative controller. Now, supposing that I am using a PD controller; so, tau prime will be theta_d double dot, that is, the desired acceleration, plus K_P, the proportional gain, multiplied by the error E, plus K_D, the derivative gain, multiplied by E dot. And, if I use a PID controller, I will have to add K_I, that is, the integral gain, multiplied by the integral of E dt; E is nothing but the error and E dot is the rate of change of error, ok.
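As a minimal sketch of this partitioned control law for a single joint (the gains and model values below are illustrative assumptions, not lecture values):

```python
# Minimal 1-DOF sketch of the partitioned control scheme:
# tau = alpha * tau' + beta, where alpha plays the role of the inertia
# term D(theta) and beta collects the Coriolis/centrifugal and gravity
# terms. All numeric values below are illustrative.

def tau_prime_pd(theta_d_ddot, error, error_dot, Kp, Kd):
    """Servo part with a PD controller: desired acceleration plus PD correction."""
    return theta_d_ddot + Kp * error + Kd * error_dot

def partitioned_torque(alpha, beta, tau_prime):
    """Model-based part: scale by the inertia term and add the bias term."""
    return alpha * tau_prime + beta

# Example: zero desired acceleration, small position and velocity errors.
tp = tau_prime_pd(0.0, error=0.1, error_dot=-0.05, Kp=100.0, Kd=20.0)  # approximately 9.0
tau = partitioned_torque(alpha=2.0, beta=0.5, tau_prime=tp)            # approximately 18.5
```

The outer PD loop shapes the error dynamics, while alpha and beta cancel the manipulator's own dynamics, which is why this is also called computed-torque control.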
So, using this particular principle and using the partitioned control scheme and using the closed loop control system, the motor will be able to generate that particular the desired torque; so, these things I have discussed in much more details. Now, comes your, if you want to make the robot intelligent; we will have to take the help of some sensors because we human being, we use a large number of sensors like eyes, ears, nose, skin and all such things. Sometimes, we collect information with the help of multiple sensors and there will be multi-sensor data fusion also. Here, also in robotics, we will have to use the different types of sensors and if you want to purchase a sensor, so we will have to prepare the specification; so, we will have to mention, what is the resolution of a sensor, what is the repeatability of a sensor, what is the range of a sensor, and so on. Then, comes your; we discussed the different types of sensors, we generally used in robots, for example, we have got internal sensors, we have got external sensors. Internal sensors are generally used to operate the drive units and external sensors are generally used to collect information of the environment; we have got some sort of touch sensor, we have got non-contact sensors, ok; in fact, we have got different types of sensors. Now, we discussed, in detail, the principles of touch sensor like limit switch, different types of position sensor like potentiometer; how does it work. Then, LVDT, that is, Linear Variable Differential Transformer and it can measure the linear displacement and for measuring the rotary displacement; we will have to use RVDT; that is RVDT is a Rotary Variable Differential Transformer. We can use some sort of optical encoders; now optical encoders could be either the absolute optical encoder or there could be incremental optical encoder. 
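For the absolute optical encoder just mentioned, the resolution scales with the number of concentric rings; a small sketch, assuming one photo-detector per ring:

```python
# Sketch for an absolute optical encoder: with n concentric rings
# (one photo-detector per ring), the disc pattern encodes 2**n distinct
# angular positions per revolution, i.e. a resolution of 1 in 2**n.

def encoder_positions(n_rings):
    """Distinguishable positions per revolution for n rings."""
    return 2 ** n_rings

def angular_resolution_deg(n_rings):
    """Smallest distinguishable rotation, in degrees."""
    return 360.0 / encoder_positions(n_rings)
```

For example, 8 rings give 256 positions, about 1.4 degrees per step; each extra ring halves the step size.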
The absolute optical encoder is more accurate and more costly, because there we use a large number of photo-detectors, and here the resolution which we get is nothing but 1 in 2 raised to the power n; n is nothing but the number of concentric rings. And, then comes the force or moment sensor; now, these force or moment sensors are generally used to find out the components of the force and the components of the moment. Supposing that one robot is doing some sort of manipulation task, that is, say it is doing some sort of pick and place type of operation; and while doing this pick and place type of operation, the gripper is going to grip the object, carry it and place it somewhere. So, the robotic joints are subjected to some amount of forces, moments, torques, and to determine that, we can take the help of this type of force or moment sensor. The working principle of these force and moment sensors I have discussed in much more detail. Now, then comes the range sensor; now, in this range sensor, we use the principle of triangulation. And, using the principle of triangulation, we can determine the distance between the object and this particular sensor. We can use light as a source or sound as a source in this type of range sensor to determine the distance between the object and the sensor. Then, comes the proximity sensor; we have got a few proximity sensors which are very popular. For example, we have got the inductive sensor; then we have got the Hall-effect sensor; these are suitable only for magnetic materials. And, the Hall-effect sensor works based on the concept of the Lorentz force, that is, F is nothing but q multiplied by V cross B; V is nothing but the velocity with which a charge q is moving in a magnetic field of strength B, and then it will be subjected to a force, that is called the Lorentz force.
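The Lorentz force above, F = q (V x B), can be computed directly; a small sketch with illustrative values, representing vectors as 3-tuples:

```python
# Sketch of the magnetic part of the Lorentz force used by the Hall-effect
# sensor: F = q * (V x B). All numeric values are illustrative.

def cross(a, b):
    """Cross product of two 3-vectors given as tuples."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def lorentz_force(q, v, B):
    """Force on a charge q moving with velocity v in magnetic field B."""
    fx, fy, fz = cross(v, B)
    return (q * fx, q * fy, q * fz)
```

A charge moving along x in a field along y is pushed along z; a charge moving parallel to the field feels no force, which is exactly what the Hall-effect geometry exploits.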
So, using the principle of the Lorentz force, this Hall-effect sensor works; then, using the principle of the law of magnetic induction, that is, the rate of change of magnetic flux is proportional to the induced voltage or the induced current, this inductive sensor works; these are suitable for magnetic materials. Then, comes the capacitive sensor; it is suitable both for magnetic as well as non-magnetic materials. So, these are some of the sensors very frequently used in robots. And, there is another sensor that is also very frequently used, that is called a passive sensor; in this passive sensor, actually, we do not use any feedback, and the Remote Center Compliance, that is, RCC, is a typical example of this particular passive sensor. Next, we started with topic 7, that is, Robot Vision. So, in place of a sensor, if the robot is using some camera, then how can it collect information of the environment? So, that particular principle I have discussed in much more detail. The steps of robot vision are as follows: for example, we capture the image or the photograph with the help of a camera; generally, we use some sort of CCD camera. Then, we go for some sort of sampling, that is, analog to digital conversion, and for this particular sampling, we take the help of some sort of electron beam scanner. And, I have already discussed that we do this scanning along the x direction and y direction, and on the electron beam scanner, there are some photo-sites. And, when we are doing the scanning, some amount of charge will be accumulated on the photo-sites; and the amount of charge accumulated is proportional to the light intensity. So, by using that particular information, we can do some sort of sampling, that is called analog to digital conversion.
We use some sort of digitizer here, and once we have expressed that particular image, say a black and white image, in the form of a matrix of numerical values; that means, the image I am just going to represent with the help of one matrix of numbers, and this is what is known as frame grabbing. Now, after this particular frame grabbing, I will be getting that particular image in matrix form, but that may not be correct; there could be some noise, there could be some lost information. So, we will have to go for preprocessing; there are different methods of preprocessing which I discussed, for example, the masking operation, then neighborhood averaging or median filtering. So, these are all preprocessing methods; once we have got the preprocessed data, next we take the help of thresholding, just to find out the difference between the object and the background. And, to find out the boundary, we take the help of edge detection techniques; these are nothing but gradient operators, and we use first-degree gradients and also the Laplacian, which is a second-degree gradient. Once you have got that particular boundary, we try to express the boundary of the object in a mathematical way, that is, we use some sort of boundary descriptor, so that we can do some sort of further processing. And, once you have got that, we will have to identify the object; as I discussed, we try to find out the compactness of the different objects. Now, by compactness, we mean nothing but the perimeter squared divided by the area. And, by knowing this particular compactness, we try to distinguish the different objects. So, this is actually how to collect information of the environment; the next is robot motion planning. The aim of robot motion planning is to determine the course of action.
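The compactness measure above (perimeter squared divided by area) can be checked on simple shapes. This sketch uses analytic perimeters and areas rather than counting boundary pixels, which is an assumption for illustration:

```python
import math

# Compactness = perimeter^2 / area. A circle gives the minimum possible
# value, 4*pi; elongated shapes score higher, which is why the measure
# helps to tell objects apart.

def compactness(perimeter, area):
    return perimeter ** 2 / area

def circle_compactness(radius):
    return compactness(2.0 * math.pi * radius, math.pi * radius ** 2)

def rectangle_compactness(width, height):
    return compactness(2.0 * (width + height), width * height)
```

A circle of any radius gives 4*pi (about 12.57), a square gives 16, and a 4-by-1 rectangle gives 25, so the elongated rectangle is clearly less compact.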
Now, this robot motion planning could be either gross motion planning or free space motion planning. Now, here, we concentrate only on the gross or the free space motion planning. We solved the find path problem using the different methods, graph-based methods like the visibility graph, which was proposed long back in the year 1969 and in fact, this is the first approach of robot motion planning, that is the visibility graph. Then, came the concept of Voronoi diagram, we tried to find out the locus of the points, which are equidistant from 2 of the boundaries and that could be the feasible path for the robot. Then, we discussed the cell decomposition; so before we go for the cell decomposition, what we do is; so we try to find out the feasible and the infeasible zones. Now, if I have got a physical robot and an obstacle; we try to convert it into a point robot and a grown obstacle and we will be getting some sort of your the feasible and infeasible zones. So, the feasible zone that is divided into a large number of small segments and we try to find out, what should be the midpoint for each of the feasible regions. And, then, we try to connect all the feasible points by the straight line and that will be nothing, but the collision-free path for the robot. Then, we discussed the principle of tangent graph technique; so, we consider the bounding circle for the obstacle and we try to move, the robot will try to move along the tangent and the circular arc to reach that goal and that is the principle of tangent graph technique. Then, we discussed, in details, the dynamic motion planning problem. Now, here, in dynamic motion planning problem, the robot is moving, at the same time the obstacles are also moving and to solve that particular problem, we took the help of a few approach; one is called the path velocity decomposition. And, as I mentioned that this is the first approach proposed to solve the dynamic motion planning problem. 
This path velocity decomposition consists of 2 sub-problems; one is called the path planning problem and another is called the velocity planning problem. Then, comes the accessibility graph; now, this accessibility graph is nothing but a modified version of the visibility graph. Now, here, the obstacles are moving; so, at time t equals to t_1, I will be getting one visibility graph and one collision-free path; at time t equals to t_2, another visibility graph. So, the visibility graph is going to vary with time, and that is nothing but the concept of the accessibility graph. Then, comes the relative velocity scheme; so, in the dynamic motion planning problem, the robot is moving and at the same time the obstacle is moving. So, here, what we do is, we try to find out the relative velocity of the robot with respect to the obstacle, as if we consider the obstacles to be stationary, and considering that velocity, we try to find out the collision-free path. Then, comes incremental planning; so, at time t equals to t_1, say, the dynamic motion planning problem will become the find-path problem. So, we make a plan considering the problem as a find-path problem; then, the robot is going to start moving; the moment it faces a problem, and if there is a chance of collision, the robot is going to stop, it is going to re-plan, and once again it will try to find out the collision-free direction; this is the principle of incremental planning. Then, comes the artificial potential field method; here, the robot is going to move under the combined action of an attractive force or attractive potential of the goal, and a repulsive force or repulsive potential of the obstacle. And, due to the combined effect of these attractive and repulsive forces, the robot is going to move towards the goal. So, this is actually the principle of the artificial potential field method; then, comes the reactive control scheme.
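The artificial potential field method just described can be sketched in two dimensions. The gains, the influence radius and the step size below are illustrative assumptions:

```python
import math

# 2-D sketch of the artificial potential field method: the robot takes
# gradient-descent steps under an attractive force toward the goal plus a
# repulsive force that acts only within an influence radius of the obstacle.

def attractive_force(pos, goal, k_att=1.0):
    """Pulls the robot straight toward the goal."""
    return (k_att * (goal[0] - pos[0]), k_att * (goal[1] - pos[1]))

def repulsive_force(pos, obstacle, k_rep=1.0, influence=2.0):
    """Pushes the robot away when closer than the influence radius."""
    dx, dy = pos[0] - obstacle[0], pos[1] - obstacle[1]
    d = math.hypot(dx, dy)
    if d >= influence or d == 0.0:
        return (0.0, 0.0)
    mag = k_rep * (1.0 / d - 1.0 / influence) / d ** 2
    return (mag * dx, mag * dy)

def step(pos, goal, obstacle, dt=0.1):
    """One step of motion under the combined forces."""
    fa = attractive_force(pos, goal)
    fr = repulsive_force(pos, obstacle)
    return (pos[0] + dt * (fa[0] + fr[0]), pos[1] + dt * (fa[1] + fr[1]))
```

With the obstacle far away the robot moves straight toward the goal; near the obstacle the repulsive term bends the path away. (The known weakness of this scheme, local minima where the two forces cancel, is visible in such a sketch too.)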
So, here, each of the robotic action is divided into a large number of the primitive robotic tasks and each of these primitive robotic task is controlled at a particular layer of the control scheme. Now, supposing that a particular task has been divided into certain primitive behaviors to design the control scheme, there should be 10 layers and over and above, there will be one centralized computer and which is going to control all the 10 layers. So, this is actually the scheme for the reactive control scheme and based on this reactive control scheme, one field of robotic research started that is called the behavior-based robotics. But, as I told that behavior-based robotics has got a few drawbacks and that is why, currently we are working on evolutionary robotics. And, evolutionary robotics is going to actually overcome the problem faced by the behavior-based robotics, but as it is out of scope of this particular course, I did not discuss the principle of the evolutionary robotics. Now, then comes your, we started discussing on intelligent robot like how can a robot take the decision as the situation demands and that we implemented with the help of the wheeled robot. Now, I am just going to discuss, in short, the way this particular thing was implemented, it is very simple. So, we have got a field and in the field, we have got a robot and some moving obstacles, at the top, we put the camera that is the overhead camera; so, with the help of this overhead camera, we take the snap at a regular interval and that particular picture collected with the help of camera goes to the a computer through BNC cable. And, in the computer, we have got the my vision board, that is nothing, but the image processing hardware and there, we carry out some sort of image processing within a fraction of second. 
And, based on that image processing, we try to find out, what is the position of the robot, what is the position of the obstacle, which one is the most critical obstacle, and so on. And, based on the critical obstacle, we use the motion planning algorithm just to find out the angle of deviation and acceleration or the speed so that it can avoid collision. Now, with the help of motion planner whatever decision we have got; now we will have to implement. To implement that, we need that we will have to give some instructions to the controller of the motor because the motor is connected to the wheel of the robot; now what we do is, we took the help of some sort of wireless communication through radio frequency module. And, with the help of this radio frequency module, the information related to how much RPM is required on the left side wheel, how much RPM is required on the right hand wheel that information we are going to pass to the controller of the motor. And, the controller of the motor is going to generate that particular RPM so, that this particular wheeled robot can take the left turn or the right turn or it will be able to move in the forward and backward direction, as the situation demands. So, this is the way, actually we could make the robot intelligent and we did real experiment and in this course, we showed some video also for that real experiment. Then, comes the biped walking. So, we concentrated on the biped walking, we consider how to find out the power consumption for the biped walking; how to maintain the balance of this particular, the dynamic balance of the biped walking and I did not discuss the mathematical derivation, in details, like how to derive the power, expression for the power and as I told that is available in the textbook of the course. 
So, those who are interested can see that in this particular textbook. And, after that, with the help of a small model of a biped robot, we showed some movement, some forward and backward movement of that particular biped robot. And, the video of the real experiment has also been shown here, and it has been demonstrated. Now, we have come to the end of this particular course on robotics, and I am sure all of you have enjoyed this course and have learnt a lot through this particular course, but this is simply the beginning. So, if you want to learn robotics and if you want to be a true roboticist, these are the fundamentals you will have to understand, all the 4 modules of robotics which I have discussed. Those things we will have to understand; we will have to start learning, and that will give you the initial momentum, so that you can learn this particular subject in depth in future. And, regarding robotics, once again I should mention that this is the future; so, we will have to go for this type of multidisciplinary field in future, and using robotics, we can solve different types of real-world problems. So, I think there is a very good future for robotics. I thank you all and I wish you all the best. Thank you.
Robotics by Prof. D. K. Pratihar
Lecture 02: Introduction to Robots and Robotics (Contd.)
So, we are discussing the different components of a robotic system. And, we have seen that the robot has a base, links and joints. And, next, we have got some sort of end-effector or gripper. Now, the purpose of using the end-effector is to grip that particular object which I am going to manipulate. And, this particular end-effector or gripper will be connected here, but it is not shown in this particular sketch; so, the gripper will be connected here. I will be discussing this gripper or end-effector in much more detail after some time. The wrist joint: this particular joint is nothing but the wrist joint, which connects this particular end-effector with the last link of the robot. Next is the drive system or the actuator. Regarding this drive system or actuator, let me see what type of drive system we have in our body. We human-beings depend on mechanical drive systems, that is, with the help of our muscles, and we also take the help of a hydraulic system, that is, with the help of blood, and the blood is pumped with the help of the heart. Now, all such things have been copied in robotics also. And, in robotics, we have got the pure mechanical drive. Now, mechanical drive means in the form of gears and pinions, chain drives, belt drives and so on. Now, supposing that the load requirement is more or the power requirement is more, what we will have to do is take the help of some sort of hydraulic drive. We also take the help of pneumatic drives, using some sort of compressed air, and we use electrical drives also. And, sometimes we combine them, that means, use electro-hydraulic or electro-pneumatic drives. So, different types of drive units we generally use here, in the robots. Next is the controller. Now, this controller is nothing but the brain of the robot.
So, just like our head, this controller contains the brain or the intelligence. And, in the controller, there will be software and hardware also. Now, all such components actually constitute one robotic system. Now, if I want to make it intelligent, the robot will have to collect information of the environment with the help of some sensors. And, that is why we use some sensors along with the robot, just to collect information of the environment and operate the drive units, so that we can make this particular robot intelligent. This has once again been copied from the human-being, because we have got a few senses: we have got eyes, ears, nose, skin and all such things. We collect information with the help of these senses and take the decision in our head; the same thing is done by an intelligent robot. The information collected with the help of these particular sensors will be processed in the controller, then the decision will be taken, and that particular decision will be executed. This is the way one intelligent robot will be working. All such things will be discussed in much more detail after some time. So, the different types of sensors we generally use in the robots will be discussed in detail after some time. In fact, in robots, we use both internal as well as external sensors. Internal sensors are used to operate the drive units. For example, we have got the position sensor, velocity sensor, acceleration sensor, force or moment sensor, and so on. On the other hand, we have got a few external sensors, which are used to collect information of the environment. For example, we have got some sort of range sensor, proximity sensor, and all such things will be discussed after some time in much more detail. Now, I am just going to see what are the different areas in robotics. As I told, in robotics, there are four distinct modules.
These modules are coming under the purview of different disciplines. For example, we have got kinematics, dynamics and sensing, which are coming under the umbrella of mechanical engineering. Now, in kinematics, what we do is, we try to consider the relative motion of the different joints and links, but we generally do not try to find out the reason behind this particular movement or relative movement, and that is actually done in dynamics. So, in dynamics, we try to find out how much force is required, if it is a linear joint, and how much moment or torque is required, if it is a rotary joint. So, all such things are mathematically determined in dynamics. And, in sensing, we try to collect information of the environment with the help of sensors. So, all such things are coming under the umbrella of mechanical engineering. And, then comes the motion planning. So, if you want to make these particular robots intelligent, what you will have to do is, you will have to make some planning. We will have to plan the course of action, and we take the help of a few motion planning algorithms. Now, if you see the literature, a huge literature is available on robot motion planning using both traditional as well as soft computing-based approaches. So, here, in this course, I will concentrate only on the traditional approaches of motion planning, and these things will be discussed in much more detail after some time. But, the purpose of using motion planning is to decide the course of action depending on the input situation. So, what should be the output and how to decide that, that is the purpose of the motion planning. Now, then comes artificial intelligence. So, as we told, we try to copy the human brain in the artificial way, using the principle of artificial intelligence.
Now, to design and develop a suitable brain for the robots, we will have to model the human brain, the human intelligence, in the artificial way, using the principle of artificial intelligence. And, once again, this artificial intelligence is a very big area of research. And, distinctly, there are two groups of algorithms: one is called the traditional AI techniques, and we have got the non-traditional AI techniques, that is, computational intelligence, that is, artificial intelligence using the principle of soft computing. So, using this principle, we can make a plan depending on the requirement, so that the robot can be made intelligent and it can take the decision and execute that particular task depending on the requirement. Now, all such things are coming under the umbrella of computer science. Now, next comes the control scheme. Now, supposing that, to perform a particular task, my motion planning algorithm has given some decision, which has to be executed. So, how to execute it? What you do is, at each of the robotic joints, we use some motor. Generally, we use a DC motor, and to control these motors, there should be a controller. So, definitely the robot should have one control architecture, and one control scheme has to be used to control this particular robot in a very efficient way. And, these control schemes and their hardware implementations are coming under the purview of electrical engineering and electronics engineering. Now, to develop the robots, we will have to have very good knowledge of general science like physics and mathematics, because we will have to use the principles of physics and mathematics very frequently to design and develop the robots. Particularly, if I want to design from the kinematics point of view or the dynamics point of view, a lot of mathematics and a lot of physics I will have to use.
Similarly, if I want to design and develop suitable sensors for these particular robots, we will have to use the basic principles of physics; all such things I will be discussing after some time in much more detail. Now, that means, if somebody wants to become a true expert of robotics, he or she should have at least some basic fundamental knowledge of all these pillars. And, that is why robotics is a little bit difficult, and it is a bit difficult to become a true roboticist. Now, I am just going to concentrate once again on the different types of joints which are generally used in robots. I have already mentioned that basically we use two types of joints: one is called the linear joint, and another is called the rotary joint. Once again, of the linear joints we have got basically two types: one is called the sliding joint, another is called the prismatic joint. Similarly, among the rotary joints, we have got either the revolute joint or the twisting joint. Now, each of these particular joints has a degree of freedom or connectivity. So, by connectivity we mean how many rigid links can be connected to one fixed link through that particular joint. Supposing that this is the input link, and I have got one output link here; now, if I want to join this output link with the input link, I will have to put one joint. Now, if I can join only one output link to the fixed input link with the help of that particular joint, this particular joint has one connectivity or one degree of freedom. Now, here, I am just going to discuss in detail a joint like the revolute joint. Now, this revolute joint, as I told, has got only one connectivity and one degree of freedom, because I can connect only one output link with the input with the help of this particular joint. Now, one very simple example of this type of joint, that is, the revolute joint, is this particular joint.
Here, you can see that if this is the input side and that is the output link, this particular output link can be connected to the input link with the help of this particular joint. So, this joint is nothing but similar to a revolute joint. Now, here, in this particular sketch, if you see, this is the input link, denoted by ij, and this is the output link, denoted by jk. And, in between the input link ij and the output link jk, we have got a joint, and that is nothing but the j-th joint. Now, here, if you see, this is the axis about which I am taking the rotation, and the axis of the output link is nothing but this, and they are at 90 degrees. That is why this is a typical example of the revolute joint. Now, let us see how it works. To explain its working principle: this is the input link and that is the output link. Here, I have got a fixed offset that is denoted by d_j; this is a fixed quantity. And, so, this particular output link can only rotate with respect to the input in this particular direction, OK? This rotation is given by theta_j. This theta_j is nothing but a variable. So, here, theta_j is known as the joint angle, and this joint angle is nothing but the variable for this particular revolute joint. As I told, this particular joint is a revolute joint, so this particular angle is going to vary, and this angle is nothing but the joint angle. So, this is the variable for this revolute joint. Now, then comes the prismatic joint. Now, before I go for this particular prismatic joint, let me just tell you one more thing. Now, here, I did not discuss the twisting joint, which is another type of rotary joint. Now, I am just going to show you one very practical example, with the help of which you can find out the difference between the revolute joint and the twisting joint.
Let us concentrate on the joint which you have at this particular neck, say our neck. Now, with the help of this particular joint, I can rotate my head in two different ways. For example, say this is nothing but my fixed link. And, supposing that, I am just going to rotate the head, so this is actually the axis about which I am rotating my head, and this is something like this. So, I am just rotating my head something like this. And, here in this particular rotation, this is the axis about which I am rotating my head, OK, so this is one rotation. Now, I am just going to rotate my head in another way. So, this indicates the axis about which I am taking the rotation. So, here, in this particular rotation, so I am able to rotate my head like this. So, if I rotate my head like this, my output axis of the head is here, and this is the axis about which I am taking the rotation, so this angle is nothing but 90 degrees. So, the moment I am rotating my head like this, this particular joint is nothing but the revolute joint. But, the moment I am rotating my head like this, this particular joint will act like the twisting joint, so this is the practical example, just to find out the difference between the revolute joint and the twisting joint. Now, here if I just draw one revolute joint and the twisting joint on the same manipulator, I can draw very easily. Say, this is one manipulator with one twisting joint, and one revolute joint. So, let me try to prepare one sketch. So, this is actually a robot with fixed base. This is a very simple robot. So, with respect to the fixed base, I have got a joint here, and it can rotate something like this, so this joint is nothing but a twisting joint; it is a rotary joint. But, here, I have got another joint with the help of which I can rotate, and this is nothing but a revolute joint. So, this is the difference between the twisting joint, and revolute joint. 
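The role of the joint angle theta_j as the single variable of a revolute joint can be sketched numerically: varying theta_j sweeps the output-link tip around the joint axis while the link length stays fixed. The link length and function name here are illustrative assumptions, not values from the lecture.

```python
import math

def revolute_tip(theta_j_deg, link_length=1.0):
    """Position of the output-link tip in the plane of rotation when
    the revolute joint is set to the joint angle theta_j (degrees).
    The single degree of freedom is theta_j; link_length is fixed."""
    theta = math.radians(theta_j_deg)
    return link_length * math.cos(theta), link_length * math.sin(theta)

# Rotating the joint by a quarter turn moves the tip from the x-axis
# onto the y-axis; its distance from the joint never changes.
x, y = revolute_tip(90.0)
```

Whatever value theta_j takes, the tip stays at distance link_length from the joint, which is exactly the one-degree-of-freedom behaviour described above.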
Now, I am just going to start with the prismatic joint, which is nothing but a linear joint. And, this particular prismatic joint has only one connectivity or one degree of freedom. Now, here on this particular sketch, this is the input link ij, and the output link is nothing but the link jk. And, here, we have got the j-th joint. Now, you can see that this particular theta_j, that is, the joint angle, is kept fixed. Now, this block can move up and down, so it can slide; it will have only the linear movement, only one connectivity, only one degree of freedom. So, this particular prismatic joint, which is denoted by P, will have only one connectivity or one degree of freedom. Now, then comes the cylindrical joint. This particular cylindrical joint has, in fact, two degrees of freedom. So, the cylindrical joint has got two degrees of freedom; this is the input link ij, and the output link is jk. Now, this particular block can slide up and down, and at the same time, it can also rotate something like this. This is called the cylindrical joint. So, here, we have got one linear joint and one rotary joint in combination, and it has got two degrees of freedom. And, that is why both theta_j as well as this particular d_j are kept as the variables. So, this is a typical example of the cylindrical joint, which has got two degrees of freedom or two connectivities. Now, then comes the concept of the Hooke joint or the universal joint. So, this Hooke joint or universal joint is actually a combination of two rotary joints; in fact, here we are going to combine two revolute joints. And, this particular Hooke joint or universal joint has got two degrees of freedom. Now, let us try to understand the principle of this Hooke joint.
Now, here, the input link is ij, and the output link is nothing but kl. Now, here you can see that we have got one revolute joint here. So, this is the axis for the first revolute joint, and this is the axis for the second revolute joint, OK? Now, I am just going to show you the physical concept of this Hooke joint or universal joint. Now, let me consider that this is a revolute joint. So, this is the axis about which I am taking the rotation, and with respect to this particular axis, the joint can be rotated something like this. So, it has got some theta variation here. Similarly, with respect to this axis, I can have another revolute joint like this, and with respect to this axis, I can rotate something like this. So, this is another revolute joint. So, if I connect these two revolute joints, then it will form the Hooke joint or the universal joint. Supposing that this is actually my input side, and this is my output side; so, with respect to the input, the output will have two degrees of freedom. And, this type of joint is generally not used in the serial manipulator, but it is used in the parallel manipulator. Now, the concepts of serial manipulator and parallel manipulator I have not yet discussed, but I am just going to give a very rough sketch of a serial manipulator and a very rough sketch of a parallel manipulator, just to find out their differences. Now, if you see the manipulator which I just drew a few minutes ago, that is nothing but a serial manipulator. Now, in a serial manipulator, all the links and all the joints are in series. For example, the same picture I can consider for the serial manipulator, so this is nothing but a serial manipulator. So, this is another joint. And, here, I can put one linear joint also. So, let me put one linear joint here.
So, there is a twisting joint here, denoted by T; there is a revolute joint here, denoted by R; and there is a sliding joint here, denoted by S, that is, a linear joint. This is known as a TRS manipulator; this is nothing but a serial manipulator. And, now, I am just going to draw a rough sketch of one parallel manipulator. A very simple parallel manipulator, a very simple sketch, a very simple design, I am just going to make. So, this is the parallel manipulator, and this is the top plate of the parallel manipulator. And, here, we have got a few joints. For example, I can put one revolute joint here, I can put one revolute joint here, and I can put one linear joint, say one prismatic joint, here. Similarly, at each of these particular legs, I have got one revolute joint here, I have got one linear joint here, and this is a revolute joint, OK? Similarly, I have got a revolute joint here, another revolute joint here, and one prismatic joint or linear joint here; so, this is nothing but a parallel manipulator. Now, the reason why I am just trying to find out the difference between the serial and parallel manipulators is as follows. These types of joints are generally not used in the serial manipulator, but in some of the complicated parallel manipulators, we use this type of Hooke joint. For example, in place of this rotary joint, I can put one Hooke joint here, one here, one here and one here. So, these types of joints are generally used in the parallel manipulator, but not in the serial manipulator. Those things I will be discussing after some time in much more detail. Now, then comes the concept of the ball and socket joint or the spherical joint. Now, this ball and socket joint or spherical joint has actually three degrees of freedom, and all three are rotations.
Now, here, what we do is, the input link and output link are connected, so that one understands this arrangement. So, the input link, that is, link ij, has the coordinates X_1, Y_1 and Z_1, and the input link is connected here; so, this is actually connected to the input link, OK? And, the output link is jk, whose coordinates are nothing but X_2, Y_2 and Z_2; that is connected to this part, and this is nothing but the output link, the link jk. Now, starting from the input link ij, if I want to go to this link jk, how many rotations are required? Now, if I start from X_1, Y_1 and Z_1, and if I can reach X_2, Y_2, Z_2 through some minimum number of rotations, that will be the degrees of freedom or connectivity of this type of ball and socket joint or spherical joint. Now, let us try to find out what should be the degrees of freedom of this ball and socket joint or spherical joint. And, before I do that, let me once again mention that this type of joint is generally used only in the parallel manipulator, but not in the serial manipulator. Now, let us try to understand why it has three degrees of freedom. Now, X_1, Y_1, Z_1 are initially coinciding with the universal coordinate system X, Y and Z. And, here, I am just going to give some rotation about Z by an angle alpha in the anti-clockwise sense. Now, if I give rotation about Z, my Z will remain the same; so, this will become Z_1'. But, X_1 will become X_1', which will be different from X_1, and Y_1 will become Y_1', which will be different from Y_1; Z_1', however, will remain the same as Z_1. Now, I am just going to give some rotation about X by an angle beta in the anti-clockwise sense.
So, if I give rotation about X, the original X or the universal X, by an angle beta, I will be getting a change in all of X, Y and Z; that means, X_1'' will be different from X_1', Y_1'' will be different from Y_1', and Z_1'' will be different from Z_1'. And, now, I am just going to give rotation about the universal Y by an angle gamma in the anti-clockwise sense. So, all three will change; that means, X_1''' will be different from X_1'', Y_1''' will be different from Y_1'', and Z_1''' will be different from Z_1''. And, now, X_1''' is nothing but X_2, Y_1''' is nothing but Y_2, and Z_1''' is nothing but Z_2. Now, here, this X_2 is nothing but this particular X_2, this Y_2 is nothing but this particular Y_2, and this Z_2 is nothing but this particular Z_2; that means, starting from the input link, that is, X_1, Y_1 and Z_1, I am able to reach the output link, that is, X_2, Y_2, Z_2. And, to reach that, I need to take the help of three rotations, and all three rotations are taken with respect to the universal coordinate system; that means, I need three rotations. Thus, this ball and socket type of joint, or the spherical joint, has 3 degrees of freedom or a mobility level of 3. Thank you.
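The three rotations described above (about Z by alpha, then about the universal X by beta, then about the universal Y by gamma) can be composed as rotation matrices. Because every rotation is taken about the fixed universal axes, each new matrix premultiplies the accumulated one. The angles and the plain-Python matrix helpers are illustrative; no values here come from the lecture.

```python
import math

def rot_z(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def rot_x(a):
    c, s = math.cos(a), math.sin(a)
    return [[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]]

def rot_y(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def spherical_joint_rotation(alpha, beta, gamma):
    """Net rotation of the output link: about universal Z by alpha,
    then universal X by beta, then universal Y by gamma.
    Fixed-axis rotations compose by premultiplication."""
    R = rot_z(alpha)          # first rotation
    R = matmul(rot_x(beta), R)   # second, about the fixed X
    R = matmul(rot_y(gamma), R)  # third, about the fixed Y
    return R

# With beta = gamma = 0 the result reduces to a pure Z rotation.
R = spherical_joint_rotation(math.pi / 2, 0.0, 0.0)
```

The three independent angles alpha, beta, gamma are exactly the three degrees of freedom of the ball and socket joint derived above.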
Lecture 31: Sensors
We are going to discuss another topic, that is, topic 6, on Sensors. Now, let us see how to design and develop the sensors, how to use the sensors, what are the different types of sensors used, and how we can collect information with the help of sensors. Now, we human-beings use different types of sensors: we have got the eyes, ears, nose, skin. In fact, we use multiple sensors to collect information of the environment. And, the data collected with the help of these multiple sensors are actually processed in our brain, and with the help of this particular processing, we can collect information of the environment. Similarly, if you want to make a robot intelligent, we should put in a few sensors, and these sensors will help the robot to collect information. Now, here, let me define that a sensor is nothing but a transducer, and we generally use a sensor to take some measurement of a physical parameter or physical variable. And, if you want to use this sensor as a measuring device, definitely there must be some calibration. And, by calibration, we mean comparison with some known data. Now, through comparison with the known data, we will be able to calibrate a particular measuring device or a particular sensor. Let me take a very simple example. Supposing that I will have to draw a straight line of, say, 10 millimeters. So, starting from here, I am going to draw; supposing that this is my 10 millimeter straight line, OK. Now, if I am told: can you not draw another straight line which is 20 millimeters long? So, what I will do is, if this is 10, my eyes are going to compare with the previous one, and this might be 20. So, this will be 20 millimeters. That means, my eyes are following some sort of calibration; if this is 10 millimeters, this will become the 20 millimeters, just double of that.
So, our eyes, while taking this particular information, are following some calibration, some calibration scale, and the same is true for any such sensor. Now, you might be knowing that we use different types of sensors; we use different sensors to take some measurement. For example, to measure the joint torques, we use a sensor; to measure force, we can use some sensor, but we will have to calibrate it. Now, this calibration is a must for any such sensor. Now, here, if you want to make the robot intelligent, as I have already discussed, sensors or cameras are to be used to collect information of the environment, and then only we can do some processing to take some decision in a very intelligent way; so, this particular calibration is a must for any such sensor. Now, if you see the literature, we have got different types of sensors. For example, we can classify the sensors as internal sensors and external sensors. Now, these internal sensors are nothing but the sensors which are used to operate the drive units. For example, we have got some position sensors, then we have got the velocity sensors, acceleration sensors, then force or moment sensors; these are all internal sensors. And, on the other hand, we have got a few other sensors, which are used to collect information of the environment, and those are known as the external sensors. For example, we can use some sort of proximity sensor, acoustic sensor, then comes the visual sensor, temperature sensor; these are all external sensors. Now, here, if you see, in our human body, we have got a few internal as well as a few external sensors. For example, whenever we try to collect information of the environment, we try to use our eyes.
So, with the help of the eyes, we collect information of the environment. But, supposing that we are getting some pain in the muscle of the leg; now, how can we feel that there is some pain? So, to feel that particular pain in the muscle, we use some other types of sensors, and those are known as the internal sensors. So, in our human body, we use internal as well as external sensors, and the same is true in robots. In robots also, we use a few internal sensors and a few external sensors. Now, the working principles of the different internal and external sensors used in robots I am just going to discuss, one after another, in detail. Now, if you see the literature, the sensors are also classified in different ways. For example, the sensors could be named as contact sensors or non-contact sensors. So, by contact sensors, we mean that there is a physical contact between the sensor and the particular object whose distance I am going to measure. And, if there is no such contact, no physical contact between the sensor and the object, it is called a non-contact sensor. Now, this contact sensor can be further classified into two subgroups. One is called the touch sensor or tactile sensor or the binary sensor. So, we have got the touch sensor or tactile sensor or binary sensor; it is almost similar to our skin, and skin is nothing but our touch sensor. So, in robots also, we use some touch sensor or binary sensor; for example, the micro-switch or the limit switch, which are generally used in robots, are nothing but tactile sensors or touch sensors. Now, this touch sensor is simply going to tell that the robotic finger has touched a particular object, but it is not going to measure the force required to grip that particular object, or how much torque is required to manipulate that particular object. So, it only indicates whether the contact has been made or not. Now, let me take a very simple example.
Now, in all the water tanks or the oil tanks, we use a valve that is called the float valve. Now, what is the function of this float valve? If this is the tank, the moment the water height reaches a particular level, the peak level, this particular float valve will be activated, and it is going to indicate that we should stop the pump. And, the pump will be stopped, and the water supply to this particular water tank will be stopped. So, this indicates the highest permissible limit for the water. And, this float valve is an example of the limit switch, or the touch sensor or tactile sensor. Now, this is also known as the binary sensor; for example, the micro-switch or the limit switch, which is generally used in a robotic hand. I am just going to take one example; in the next slide, I will show you that we can use some sort of micro-switch or limit switch along with the robotic hand. Now, this is also called the binary sensor because it generates 1s and 0s. The moment it touches, it will generate 1, and otherwise it will generate 0. So, it is binary because sometimes it generates 1, sometimes it generates 0; if there is a contact, it will generate 1, and if there is no contact, it will generate 0. So, it is some sort of 1 0 0 1, something like this, and that is why this is also known as the binary sensor. Now, I am just going to concentrate on the force sensor or the analogue sensor. Now, as I told, with the help of this force sensor or analogue sensor, we are actually going to measure the force or the torque, that is, the force or the torque which is required to grip this particular object, and generally we use some sort of force sensor or analogue sensor for that. And, here, in the force sensor or analogue sensor, we use some sort of strain gauges.
So, I am just going to discuss the working principle of this particular strain gauge in detail. And, as I told, we have got a few non-contact sensors; for example, we have got the proximity sensor, the range sensor, the visual sensor, the acoustic sensor. So, these are all non-contact sensors, and I am just going to discuss, in detail, the working principles of these sensors generally used in robots. Now, before I go for the discussion of the working principles of different sensors, let me concentrate a little bit on the different characteristics of sensors: if you want to prepare the specification of a sensor, what information is to be provided, and what numerical values are to be provided. For example, the range, response, accuracy, sensitivity, repeatability, resolution, all such things we will have to mention. While preparing the specification of the robot, similar types of information we also provided. And, here, the range of the sensor, that means, the maximum and the minimum values that can be measured with the help of this particular sensor, has to be mentioned. Then comes the response; the response should be as quick as possible. Then, accuracy is nothing but the deviation from the exact quantity, so that we will have to mention. Sensitivity, we know by definition, is nothing but the ratio of the change in output to the change in input; so, that is nothing but the sensitivity. So, this particular sensitivity we will have to mention, how much sensitivity you need; and if this particular sensor has constant sensitivity, then it is called linear. So, the sensor is called linear if it has constant sensitivity; so, by linearity, we mean constant sensitivity. Then comes the repeatability. Now, by repeatability: supposing that, with the help of the same sensor, I am just going to measure the same thing for, say, 10 times or 20 times.
Now, if I measure the same quantity 10 or 20 times, there is no guarantee that all 20 times I will get exactly the same numerical value; the deviation from reading to reading is nothing but the repeatability. While preparing the specification of a sensor, we have to mention how much repeatability we want. Then, resolution is nothing but the least count of the measuring device, and for a sensor this least count has to be known. Let me take a very simple example to make resolution clear. Suppose I am using a sensor in which an electrical signal generates some angular displacement. An electrical impulse cannot be a fraction; it could be 1, 2, 3 and so on, so the angular displacement generated per electrical impulse is nothing but the least count, or resolution, of that sensor. These are the pieces of information to be provided in order to prepare the specification of a sensor. Now, I am going to discuss the working principles of a few sensors one after another. The touch sensor I have already discussed: it is used just to indicate whether contact has been made or not, and generally we do not use this sensor to determine how much the contact force is; the examples are the micro-switch, the limit switch and so on. Here I take one typical example of a micro-switch, which is nothing but a touch sensor used in a robot gripper. Suppose this is a very simple gripper having two fingers, and here we put a micro-switch or limit switch.
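The characteristics just listed can be turned into simple calculations. The following is only a minimal sketch, not taken from the lecture; the function names and the sample calibration data are my own assumptions:

```python
def sensitivity(inputs, outputs):
    # Sensitivity = change in output per change in input, averaged over
    # consecutive calibration points.  Constant slope everywhere would mean
    # a linear sensor in the lecture's sense.
    slopes = [(outputs[i + 1] - outputs[i]) / (inputs[i + 1] - inputs[i])
              for i in range(len(inputs) - 1)]
    return sum(slopes) / len(slopes)

def repeatability(readings):
    # Spread (max - min) over repeated measurements of the same quantity.
    return max(readings) - min(readings)

def resolution(full_scale, n_steps):
    # Least count: the smallest step the sensor can distinguish, e.g. the
    # angular displacement per electrical impulse in the lecture's example.
    return full_scale / n_steps
```

For the lecture's impulse example, resolution(360, n) gives the angular least count of a sensor that divides one revolution into n impulses.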
Now, with the help of this micro-switch or limit switch, we can indicate whether contact has been made between the object and the robotic finger; supposing I have got the object here, the switch tells us whether the finger has touched it. To serve that purpose we use the micro-switch or limit switch, and that is nothing but a touch sensor. The same is true of our skin: with the help of the skin we can feel the presence of an object even without using our eyes; we can feel the shape, size and, more or less, the structure of that object, because the skin is nothing but a touch sensor, and with its help we can find out the possible shape and size of the object we are going to grip. Now, I am going to discuss one position sensor which is very frequently used in robotics, and which we also use in school- and college-level laboratory classes: the potentiometer. The potentiometer is, as I told you, a very well-known position sensor, and it could be either a linear potentiometer or an angular potentiometer. With the help of a linear potentiometer we can measure the linear displacement d, and with the help of an angular potentiometer we can measure the angular displacement theta. The working principle of the potentiometer is very simple. Here we have a source of voltage, that is, the input voltage V_in, for example a battery connected to the circuit; so, suppose we have the battery here, then I know the input voltage.
So, what I do is this: we know the total resistance of this wire, which has a special type of winding; capital R is the total resistance, and we have a reference, meaning we are going to measure the angular displacement with respect to that reference. The angular displacement is measured with the help of a pointer, or wiper: with respect to the reference, the wiper undergoes some angular displacement. How to measure this? The method is very simple. One terminal of a voltmeter is connected to the wiper and the other is grounded, so with the voltmeter we can measure the output voltage. We know the total resistance of the wire, we know the input voltage, and we can measure the output voltage with a voltmeter or multimeter; from this, approximately, I can find out the angular displacement. How? It is very simple: the current is nothing but V_in divided by R, and the same current also flows through the tapped portion of the winding, so it equals V_out divided by small r, where small r is the resistance of the winding starting from the reference up to the position of the pointer, or wiper. So, V_in is known, capital R is known, and V_out can be measured.
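The relation just described can be put into a small function. This is only a sketch under the assumption of a uniform winding, so that the wiper angle is proportional to the tapped resistance r; theta_max (the full travel of the wiper) is an assumed parameter, not a value given in the lecture:

```python
def pot_angle(v_in, v_out, theta_max):
    # From I = V_in / R = V_out / r we get r / R = V_out / V_in.
    # For a uniform winding, theta / theta_max = r / R, hence:
    return theta_max * v_out / v_in
```

For example, with V_in = 10 V, V_out = 5 V and a 300-degree wiper travel, the wiper sits at 150 degrees.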
So, r can be determined, and if I know the value of r and the nature of the winding of the electrical wire, I can find out, approximately, the angular displacement theta. So, theta can be measured with the help of the angular potentiometer; this is its working principle, it is very simple, and all of us have used it. But this angular potentiometer has one demerit, or drawback: we all know that the resistance of a wire depends on temperature. The moment we pass some current through the electric wire, due to the heating effect of the current, that is, the I-squared-r effect, some heat is generated in the wire, its temperature increases, and as the temperature increases the resistance of the wire changes; if the resistance changes, we will not get a very accurate measurement with this angular potentiometer. That is why, if we use the angular potentiometer at a stretch for a long time, initially we may get accurate measurements, but with time, after perhaps half an hour or one hour, there is a possibility of erroneous results. So, this is the drawback of the angular potentiometer; but its working principle is very simple, and it is in fact one of the most popular position sensors used nowadays. Then comes another position sensor, called the optical encoder, which is also very popular. We have two types of optical encoder: the absolute optical encoder and the incremental optical encoder, and in robotics this type of optical encoder is very frequently used as a feedback device.
For example, if it is a servo-controlled robot, there must be a provision for a feedback device to measure the angular displacement, and we can take the help of this type of optical encoder as that feedback device. Now, let us see how it works; let me first explain the principle of the absolute optical encoder. The absolute optical encoder consists of a number of concentric rings placed one after another. Suppose this is the output shaft of an electric motor, and I want to measure the rotation, that is, the angular displacement of this shaft. What I do is mount the absolute optical encoder on it; the encoder is nothing but a collection of concentric rings, and on these rings there are markings: a dark zone and a light zone. Through the dark zone the light will not pass, and through the light zone the light will pass. As the shaft rotates, the optical encoder mounted on it also rotates. On one side we have a photo source, and on the other side a photo detector. During rotation, whenever a light zone of the rotating disc comes in front of the light source, the light passes through and activates the photo detector. The same thing I am going to discuss in more detail with the help of this sketch. As I told you, the encoder consists of a large number of concentric rings; here, for simplicity, I am going to consider only 4 concentric rings.
Now, suppose this is the diameter of the shaft whose angular displacement, or rotation, I am going to measure. Surrounding it we consider one concentric ring, then another, then another; so I am considering 4 concentric rings here. Concentrate on the first ring: here, this part is made black, so no light will pass through it, and this is the white portion, through which the light will pass. Next, concentrate on the second ring: starting from here up to this point it is made black, so no light will pass; then it is white once again; then this part is made black; and then there is a white portion. Then concentrate on the third ring: here the pattern alternates more quickly, black part, white part, black, white, black, white, black, white and so on. And on the outermost ring we have the finest alternation: black part, white part, black, white, black, white, and so on. This is the type of marking we have, considering only 4 concentric rings here. The outermost ring indicates 2 raised to the power 0, the next one 2 raised to the power 1, the next 2 raised to the power 2, and this one indicates 2 raised to the power 3, and so on. Now, look at the other view; in this view the shaft is not drawn properly here.
So, suppose this is the diameter of the shaft on which the concentric rings are mounted one after another; the outermost ring indicates 2 raised to the power 0 and the innermost indicates 2 raised to the power 3. Here is another view: on this side we have the light source, and the disc is rotating; it is mounted on the shaft, so as the shaft rotates the optical encoder also rotates, and the light source is switched on. The moment a dark zone comes in front, there is no signal; the moment a light zone comes, the light passes through and activates the photo detector. So, depending on the relative position of the dark and light zones, sometimes the light will pass and sometimes it will not, and accordingly the encoder generates a pattern of 1s and 0s. For example, this is the reference line; initially the reference line is here. The reference line is fixed and the optical encoder rotates; but here, on the screen, I cannot rotate the encoder, so what I can do is consider the encoder as fixed and rotate the reference in the reverse direction. This is equivalent to the true situation, in which the optical encoder rotates while the reference is kept fixed. Now, suppose the reference is here: then on all four rings it falls in a dark zone.
So, it is going to generate four 0s, corresponding to 2 raised to the power 0, 2 raised to the power 1, 2 raised to the power 2 and 2 raised to the power 3. The moment the reference comes here, the light passes through the outermost ring only, which corresponds to 2 raised to the power 0, so it generates a 1 there; but corresponding to 2 raised to the power 1, 2 raised to the power 2 and 2 raised to the power 3 there will be 0s. Similarly, the moment the reference is here, the light passes through all four rings, so it generates four 1s. Now, if the encoder generates four 0s, the decoded value is 0, because 0 multiplied by 2 raised to the power 0, plus 0, plus 0, plus 0, is 0. The decoded value for 1 0 0 0 is 1 multiplied by 2 raised to the power 0, that is 1, plus 0, plus 0, plus 0, so 1. And corresponding to 1 1 1 1 the decoded value is 1 multiplied by 2 raised to the power 0, plus 1 multiplied by 2 raised to the power 1, plus 1 multiplied by 2 raised to the power 2, plus 1 multiplied by 2 raised to the power 3; so 8 plus 4 is 12, plus 2 is 14, plus 1 is 15, and the decoded value is 15. So, corresponding to the rotation of the optical encoder, and depending on the angular position with respect to the fixed reference, a binary pattern is generated; I can decode it and obtain the decoded value corresponding to that rotation. Whatever I have discussed, the same thing is written here.
So, corresponding to 0 0 0 0 I get 0, and this is the way I get the decoded values. Now, if I use 4 concentric rings, how many divisions do we get? Only 16 divisions, because 2 raised to the power 4 is 16. That means the whole rotation of the shaft, that is, 360 degrees for one revolution, is divided equally into 16 parts, so 360 divided by 16 is the resolution of this optical encoder if I use only 4 concentric rings. Similarly, if I use n concentric rings, the resolution is 1 part in 2 raised to the power n; this is the resolution of the optical encoder. For example, if I take n equal to 10, the resolution is 1 divided by 2 raised to the power 10, that is, 1 divided by 1024, approximately 1 divided by 1000. That means one complete revolution of 360 degrees is divided into about 1000 equal parts, which is about 0.36 degree. This 0.36 degree is the resolution of the optical encoder if we use 10 concentric rings. This is the way, with the help of the absolute optical encoder, we can measure the angular displacement of a rotating shaft. This is the working principle of the absolute optical encoder, which is very frequently used in robots as a feedback device. Thank you.
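The decoding and the resolution formula above can be sketched as follows; the names and the bit ordering are my own choices, with bits[0] taken as the outermost ring (weight 2 to the power 0), matching the weights assigned in the lecture:

```python
def decode(bits):
    # Binary-weighted sum of the ring readings: 1 where light passes,
    # 0 where the dark zone blocks it.  bits[0] is the outermost ring.
    return sum(b * 2 ** i for i, b in enumerate(bits))

def encoder_resolution(n_rings):
    # One revolution (360 degrees) divided into 2**n equal sectors.
    return 360.0 / 2 ** n_rings
```

Here decode([0, 0, 0, 0]) gives 0 and decode([1, 1, 1, 1]) gives 15, as in the lecture, while encoder_resolution(10) gives 360/1024, about 0.35 degree; the lecture's 0.36 degree comes from the rounder approximation 1024 is about 1000.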
Robotics_by_Prof_D_K_Pratihar
Lecture_10_Introduction_to_Robots_and_Robotics_Contd.txt
Now, I am going to start with a numerical example based on the economic analysis which I have already discussed; a case-study sort of problem. The problem is as follows. The costs and savings associated with a robot installation are given below: the cost of the robot including accessories is rupees 12 lakhs, and the installation cost is rupees 3 lakhs. The maintenance and operating cost is rupees 20 per hour, the labour saving is rupees 100 per hour, and the material saving is rupees 15 per hour. The shop runs 24 hours a day, that is, three shifts of 8 hours each, and the number of effective work days in a year is 200. The tax rate of the company is 30 percent, and the techno-economic life of the robot is expected to be 6 years. We have to determine the payback period of this robot and the rate of return on investment, and ultimately take the decision whether we should go for purchasing this robot by taking a loan from the bank. Now, the capital investment, denoted by F, is nothing but the cost of the robot including accessories plus the installation cost: rupees 12 lakhs plus 3 lakhs, a total of rupees 15 lakhs. The shop runs 3 shifts, that is, 24 hours a day, for 200 working days in a year, so the total running hours of the robot per year is 24 multiplied by 200, that is, 4800 hours. Now, the saving per year, denoted by B, is the labour saving plus the material saving; the labour saving is rupees 100 per hour.
So, B is rupees 100 multiplied by 4800 (the total number of hours), plus the material saving of rupees 15 per hour multiplied by 4800 hours, and if we add them up we get rupees 5,52,000. This is the total saving per year from using the robot. Then, the maintenance and operating cost per year, denoted by C, is rupees 20 per hour multiplied by 4800 hours, which comes to rupees 96,000, and the techno-economic life of the robot is given as 6 years. Now, the depreciation of the robot per year: for simplicity we consider constant depreciation, and while calculating this depreciation we should not include the installation cost, because by definition depreciation is the falling value of an asset, so we have to consider the cost of the robot with accessories only. That is why, for calculating the depreciation, we take rupees 12 lakhs divided by 6, but not rupees 15 lakhs divided by 6. So, the constant depreciation per year comes to rupees 2 lakhs, and as I told you, this is almost similar to the standard deduction we use whenever we calculate our income tax. Now, the net saving is nothing but the saving of rupees 5,52,000 minus the operating cost of rupees 96,000 minus the depreciation of rupees 2 lakhs, and if you calculate it, the net saving comes to rupees 2,56,000. Now, a certain percentage of the net saving will be paid as tax, and here the tax rate is 30 percent; so the tax to be paid to the government, denoted by G, is 30 percent of the net saving, which comes to rupees 76,800.
So, this is the tax to be paid to the government. Now, the payback period, denoted by E, is nothing but the capital investment F divided by (B minus C minus G). We know the numerical values of B, C, G and F, and if we insert the values and calculate, we get about 3.9 years, which is found to be less than the techno-economic life of 6 years. Next, we find the rate of return on investment. As 30 percent of the net saving has been paid as tax, the remaining amount, the net saving after payment of tax, is denoted by I; so I is 70 percent of rupees 2,56,000, that is, rupees 1,79,200. The rate of return on investment, denoted by H, is I divided by F, the capital investment, multiplied by 100 percent, and this comes to 11.95 percent, while the rate of bank interest is around 10 percent. So, the rate of return on investment is more than the rate of bank interest, and moreover the payback period is less than the techno-economic life; both are favourable. So, we should purchase this robot by taking a loan from the bank; this is the decision we can take through this economic analysis. So, this is the way it helps us decide whether we should take a loan from the bank to purchase a particular robot. A similar type of analysis can be carried out for other machines also, for example conventional machines for your own manufacturing unit: if you want to purchase some machines, the same type of analysis can be carried out. So, this is the way we carry out the economic analysis for robots. Thank you.
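The whole calculation can be reproduced in a few lines. The figures are exactly those of the example (in rupees); the variable names follow the lecture's symbols:

```python
robot_cost = 12_00_000          # robot including accessories
install    =  3_00_000          # installation cost
F = robot_cost + install        # capital investment = 15,00,000

hours = 24 * 200                # 4800 running hours per year
B = (100 + 15) * hours          # labour + material saving = 5,52,000
C = 20 * hours                  # maintenance and operating cost = 96,000
D = robot_cost / 6              # constant depreciation = 2,00,000 (no installation cost)

net = B - C - D                 # net saving = 2,56,000
G = 0.30 * net                  # tax at 30 percent = 76,800
E = F / (B - C - G)             # payback period, about 3.96 years
I_after_tax = 0.70 * net        # saving after tax = 1,79,200
H = 100 * I_after_tax / F       # rate of return, about 11.95 percent
```

E comes out to about 3.96 years (the lecture quotes 3.9) and H to 11.95 percent, confirming that both decision criteria are favourable.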
Lectura_44_Summary.txt
Now, I am going to summarize this course on Robotics. In this course there are 10 topics, and all 10 topics have been taught; I am going to summarize them topic-wise. The first topic was the introduction to robots and robotics. We started with the very definitions of the terms robot and robotics. To recapitulate, by a robot we mean an automatic machine which can perform a variety of tasks, and the term robot was introduced in the year 1921 by Karel Capek. Robotics is a science which deals with the issues related to the design, manufacturing and applications of robots, and the term robotics was coined in the year 1942 by Isaac Asimov. Now, in robotics we copy everything from the human being: we copy the head, heart and hand of a human being in an artificial way, and that is popularly known as the 3 Hs of robotics. Why should we study robotics? We have seen that today's market is dynamic and competitive, and if we want to survive we have to produce goods at low cost; at the same time the quality has to be good and the productivity has to be high. To meet all these requirements we have to go for automation, and robotics is a flexible automation; that is why modern industries should go for robotics. We discussed, a little bit, the brief history of robotics: the first patent on a robot was filed in the year 1954 by George Devol, who is known as the father of the robot, and after that different universities, particularly different US universities, NASA and the USSR started manufacturing different types of robots. For example, the Stanford Research Institute developed robots, Carnegie Mellon University (CMU) developed some robots, Ohio State University developed some robots, and NASA developed a few robots.
All of us know that NASA sent some intelligent robots to Mars, like Spirit, Opportunity and Curiosity. Now, as I told you, the most sophisticated robot as of today might be Sophia, which was developed in the year 2015 by Hanson Robotics, Hong Kong, and Honda has also designed and developed sophisticated humanoid robots; this shows the brief history of robotics. Now, the different components of a robot: in a robot we have a few links, and two links are joined by a joint. Joints could be of 2 types, linear joints or rotary joints; linear joints could be either the prismatic joint or the sliding joint, and rotary joints could be either the revolute joint or the twisting joint. Of course, we also have a few special types of joints, like the Hooke joint and the ball-and-socket joint, which are also used in robots. Now, in a robot there will be a controller, or director, drive units, links, joints, and some sort of gripper or end-effector, and if we want to make it intelligent, the robot should be equipped with some sort of sensors; these are the different components of robots. The different types of robotic joints I have already discussed. Now, the degrees of freedom of a robotic system: before going to the degrees of freedom of the system, the first thing we have to find out is the degrees of freedom, or connectivity, of the different robotic joints. For example, the prismatic joint has 1 degree of freedom, or one connectivity; the sliding joint has 1 linear degree of freedom; the revolute joint has 1 degree of freedom; and the twisting joint has 1 degree of freedom.
Then, the cylindrical joint has 2 degrees of freedom, the Hooke joint, or universal joint, has 2 degrees of freedom, and the spherical joint, or ball-and-socket joint, has 3 degrees of freedom. Now, if I know the connectivity, or degrees of freedom, of each robotic joint, I can also find out the degrees of freedom of the robotic system, and for that we use Gruebler's criterion, which I have already discussed in detail. An ideal planar robot should have 3 degrees of freedom; an ideal spatial robot should have 6 degrees of freedom. There are a few special robots having more than 6 degrees of freedom, which are called redundant robots, and a few special robots having less than 6 degrees of freedom, which are called under-actuated robots. Then comes the classification: robots have been classified in a number of ways. For example, a robot could be either a point-to-point robot or a continuous-path robot; robots can be classified as servo-controlled or non-servo-controlled; another classification is based on the coordinate system, like the Cartesian coordinate robot, the cylindrical coordinate robot, the spherical coordinate robot, and the revolute, or articulated, coordinate robot. Another classification is based on mobility: robots with a fixed base are called manipulators, and manipulators could be either serial manipulators, in which the links are in series, or parallel manipulators, in which the links are in parallel. Regarding mobile robots, they could be wheeled robots, multi-legged robots or tracked vehicles. So, these are, in short, the classifications of robots.
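The lecture cites Gruebler's criterion without restating it here; as a hedged sketch, the standard Kutzbach-Gruebler form of the mobility formula can be written as follows (the function name and argument layout are my own):

```python
def dof(n_links, joint_freedoms, planar=False):
    # Kutzbach-Gruebler mobility: F = lam * (n - 1 - j) + sum(f_i),
    # where n is the number of links (including the fixed base), j the
    # number of joints, f_i the connectivity of joint i, and lam is 3 for
    # planar mechanisms and 6 for spatial ones.
    lam = 3 if planar else 6
    j = len(joint_freedoms)
    return lam * (n_links - 1 - j) + sum(joint_freedoms)
```

A spatial serial arm with 7 links (including the base) joined by 6 revolute joints gives dof(7, [1] * 6) = 6, the ideal spatial robot of the lecture; a planar four-bar linkage gives dof(4, [1] * 4, planar=True) = 1.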
Then, we concentrated on workspace analysis, and defined terms like the reachable workspace and the dexterous workspace. In short, the reachable workspace is the volume of space which can be reached with at least one configuration of the robot, and the dexterous workspace is the volume of space which can be reached with different combinations of the joint angles. We discussed how to determine the workspace for the different types of joints, and then we discussed the terms resolution, accuracy and repeatability. Resolution is nothing but the least count of a robot, and this resolution could be either the programming resolution or the control resolution. By accuracy we mean the precision with which the end-effector of the robot can reach the computed point. And by repeatability we mean that if the same robot is run a large number of times, there is no guarantee that it reaches the same point every time, and there could be some deviation; that deviation is nothing but the repeatability. We discussed, in brief, the various applications of robots. Robots are used in manufacturing units; robots are used in medical science, for example in telesurgery, orthotic devices, prosthetic devices, and as very small multi-legged robots in the form of capsules; robots are also used as a helping hand for doctors. Robots are used in sea-bed mining, to find valuable stones and to do underwater repairing and maintenance jobs with underwater robots. Robots are used in space, to collect information about Mars or about space. Nowadays, robots are being used even in agriculture, for example to spray pesticides or liquid fertilizer, for cleaning, or for picking fruits. So, there are a large number of applications of robots nowadays.
Then, we concentrated on the different types of end-effectors, or grippers, used in robots. We use different types of mechanical grippers: grippers designed and developed using some sort of mechanism, like the piston-and-cylinder mechanism, gear mechanisms, or the cam-and-follower mechanism. Then we discussed the principles of the vacuum gripper and the magnetic gripper used in robots, and we also discussed some passive grippers, like the remote center compliance device. In fact, we have many different types of grippers and end-effectors, and we discussed all such things in detail. Then the teaching methods were discussed in detail. Basically, we have 2 types of teaching methods for providing instruction to robots: the online method and the offline method. By online methods we mean those methods where, while giving instruction, we have to use the robot itself; for offline teaching we do not use the robot, but take the help of some sort of programming language. Online teaching could be either manual teaching or lead-through teaching; manual teaching is suitable for point-to-point tasks and lead-through teaching is suitable for continuous-path tasks. Then, we prepared the specification of a robot: if I want to purchase a robot, how to prepare the specification and what information is to be given, we discussed in detail. Then, we carried out some economic analysis; through this analysis we tried to take the decision whether we should purchase a robot by taking a loan from the bank, and here we defined 2 terms: the payback period of a robot and the rate of return on investment.
So, if the payback period is found to be less than the techno-economic life of the robot, and the rate of return on investment is found to be more than the rate of bank interest, then only do we go for purchasing the robot. All such things I discussed in the first topic, that is, introduction to robots and robotics. Then, I concentrated on topic 2, that is, robot kinematics. Now, the purpose of kinematics is to study the motion or the movement of the different links and joints without considering the reason behind that particular movement, that is, the force or the torque. Now, here, how to express the position and orientation of a 3D object in 3D space, we discussed in detail. For example, the position can be expressed either in the Cartesian coordinate system, or in the cylindrical coordinate system, or in the spherical coordinate system. Similarly, the orientation can be represented in three ways: one is the Cartesian representation, or we can use the roll, pitch and yaw system, or we can use some sort of Euler angle representation for the orientation. Next, we concentrated on how to derive the homogeneous transformation matrix, which is a 4 x 4 matrix, and this particular homogeneous transformation matrix carries information of the position and orientation. For example, a typical homogeneous transformation matrix looks like

  [ r_11  r_12  r_13  p_x ]
  [ r_21  r_22  r_23  p_y ]
  [ r_31  r_32  r_33  p_z ]
  [  0     0     0     1  ]

Now, here, the 3 x 3 block of r_ij terms carries information of the orientation, and the last column, (p_x, p_y, p_z), is nothing but the position vector; so, this is a typical 4 x 4 homogeneous transformation matrix.
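The structure of the homogeneous transformation matrix described above can be sketched in code. The following is an illustrative Python sketch, not from the lecture: it builds the 4 x 4 matrix for a rotation about the z-axis together with a position vector (px, py, pz), and applies it to a point. The function names are my own.

```python
import math

def rot_z_homogeneous(theta, px, py, pz):
    """4x4 homogeneous transformation: rotation by theta (rad) about z,
    plus a translation (px, py, pz).  The top-left 3x3 block carries the
    orientation; the last column carries the position vector."""
    c, s = math.cos(theta), math.sin(theta)
    return [
        [c,  -s,  0.0, px],
        [s,   c,  0.0, py],
        [0.0, 0.0, 1.0, pz],
        [0.0, 0.0, 0.0, 1.0],
    ]

def transform_point(T, x, y, z):
    """Apply homogeneous transform T to the point (x, y, z):
    append 1 to make it homogeneous, multiply, keep the first 3 rows."""
    v = (x, y, z, 1.0)
    return tuple(sum(T[i][j] * v[j] for j in range(4)) for i in range(3))

# Rotate (1, 0, 0) by 90 degrees about z, then shift by (1, 2, 0):
T = rot_z_homogeneous(math.pi / 2, 1.0, 2.0, 0.0)
print(transform_point(T, 1.0, 0.0, 0.0))  # approx (1.0, 3.0, 0.0)
```

Chaining such matrices (one per joint, via the Denavit-Hartenberg frames) is what gives the end-effector pose in forward kinematics.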
Then, we discussed the Denavit-Hartenberg notation; like how to assign the coordinate system at the different robotic joints so, that we can carry out the kinematic analysis. We concentrate on the problem of forward kinematics; that means, if I know the length of the links and the joint angles, then how to determine the position and orientation of the end-effector with respect to the base coordinate system. Then, we concentrate on the inverse kinematics. Now, here the positional orientations of the end-effector are known and we will have to find out the joint angles provided the length of the links are known; so, this is the problem of inverse kinematics. And, once we have completed this inverse kinematics; we started with another topic that is called the trajectory planning. The purpose of trajectory planning is to fit a trajectory so, that we can ensure the smooth variation at the different robotic joints. Now, this trajectory planning problem can be solved either in Cartesian system or in joint space, but if I solve trajectory planning in Cartesian coordinate system, then I will have to carry out the inverse kinematics online. And, that is why, we try to follow the joint space scheme of trajectory planning; that means, in the space of theta or the joint angle. Now to fit a smooth curve so that it can ensure the smooth variation of this particular; the joint angle, we take the help of some trajectory function. For example, we take the help of polynomial trajectory; we generally consider cubic polynomial or higher order polynomial like fifth order polynomial and the coefficients of the polynomials are determined with the help of some boundary conditions or the known conditions. 
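The cubic-polynomial trajectory mentioned above can be illustrated with a small sketch. Assuming a rest-to-rest motion (zero joint velocity at both ends, a common choice but my assumption here), the four boundary conditions fix the four coefficients of theta(t) = a0 + a1 t + a2 t^2 + a3 t^3 in closed form; the function names are mine, not the lecturer's.

```python
def cubic_coefficients(theta0, thetaf, T):
    """Coefficients (a0, a1, a2, a3) of theta(t) = a0 + a1*t + a2*t^2 + a3*t^3
    satisfying theta(0) = theta0, theta(T) = thetaf, and zero velocity at
    both ends.  Solving the four boundary conditions gives the closed form:
        a0 = theta0, a1 = 0, a2 = 3*d/T^2, a3 = -2*d/T^3,  d = thetaf - theta0."""
    d = thetaf - theta0
    return (theta0, 0.0, 3.0 * d / T**2, -2.0 * d / T**3)

def theta_at(coeffs, t):
    """Evaluate the joint angle at time t."""
    a0, a1, a2, a3 = coeffs
    return a0 + a1 * t + a2 * t**2 + a3 * t**3

# Move a joint from 0 to 90 degrees in 2 seconds:
coeffs = cubic_coefficients(0.0, 90.0, 2.0)
print(theta_at(coeffs, 1.0))  # 45.0 degrees at the midpoint
```

Sampling theta_at over [0, T] gives the smooth joint-angle curve the trajectory planner feeds to each joint controller.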
Then, we concentrated on the linear trajectory; but we cannot use a pure linear trajectory function, because there will be infinite acceleration and infinite deceleration at the ends if I use the linear trajectory function, and that is why we use 2 parabolic blends at the two ends of the linear trajectory function. Then, we concentrated on the Jacobian matrix; now, this particular Jacobian matrix is used to relate the Cartesian velocity with the joint velocity. For example, if V is the Cartesian velocity, then V = J(theta) theta^dot, where J(theta) is nothing but the Jacobian matrix. Moreover, with the help of this Jacobian matrix, we studied the singularity of a manipulator; and by a singular configuration, we mean a configuration where the manipulator is going to lose one or more degrees of freedom. Now, with the help of this Jacobian matrix, we can also carry out this particular singularity checking. So, all such things, in fact, we have discussed in much more detail. Then, we concentrated on robot dynamics, and truly speaking, there we concentrated only on the inverse dynamics. And, by inverse dynamics, we mean that all the joint angle values and their velocities and accelerations are known; for example, theta_1 up to theta_6, then theta_1^dot up to theta_6^dot, then theta_1^double dot up to theta_6^double dot. So, these are all given, and I will have to find out all the torque values, like tau_1, tau_2 up to tau_6. So, this particular problem is nothing but the inverse dynamics problem. Now, to solve this inverse dynamics problem, what we did is, we took the help of the Lagrange-Euler formulation. And, according to the Lagrange-Euler formulation, we tried to find out the Lagrangian of the robotic system, which is nothing but the difference between the kinetic energy and potential energy.
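The relation V = J(theta) theta^dot and the singularity check can be made concrete for the simplest case, a planar two-link (2R) arm. The Jacobian below is the standard one for that arm, but the example itself is my own sketch, not from the lecture; the singularity test just checks whether det J vanishes.

```python
import math

def jacobian_2r(l1, l2, th1, th2):
    """Jacobian of a planar 2R arm mapping (th1_dot, th2_dot) to the
    tip velocity (vx, vy), derived from the forward kinematics
    x = l1*c1 + l2*c12, y = l1*s1 + l2*s12."""
    s1, c1 = math.sin(th1), math.cos(th1)
    s12, c12 = math.sin(th1 + th2), math.cos(th1 + th2)
    return [[-l1 * s1 - l2 * s12, -l2 * s12],
            [ l1 * c1 + l2 * c12,  l2 * c12]]

def is_singular(J, tol=1e-9):
    """Singular configuration: det J = 0 (here det J = l1*l2*sin(th2)),
    meaning the arm loses one or more degrees of freedom of tip motion."""
    det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
    return abs(det) < tol

J = jacobian_2r(1.0, 1.0, 0.3, 0.0)  # elbow fully stretched out
print(is_singular(J))                # True: the arm cannot move radially
```

At th2 = 0 or pi the two links are collinear, det J = l1*l2*sin(th2) = 0, and the tip can no longer be commanded in the radial direction, which is exactly the loss of a degree of freedom described above.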
So, to derive that particular expression, what we did is, before we determined the Lagrangian for the whole robot, we concentrated on a small point, or a differential mass, lying on a robotic link, and tried to find out its kinetic energy and potential energy. Then, we found these for the whole link, and we considered all the links just to find out what should be the kinetic energy for the whole robot and what should be the potential energy for the whole robot, and from these we found the Lagrangian. And, once we have got this particular Lagrangian, then we use the Lagrange-Euler formulation, which is nothing but

  tau_i = d/dt ( partial L / partial q_i^dot ) - ( partial L / partial q_i ),

where q_i is theta_i for a rotary joint. So, this is the way we can find out the joint torque; and if there is a linear joint, then in place of this particular tau_i, I will be getting the force, that is, F_i, and of course, this particular q_i will be replaced by d, that is, the offset; so, q_i^dot will be replaced by d^dot and q_i by d, if it is a linear joint and if I want to find out the force. So, using this, we tried to find out what should be the joint force or the joint torque in robot dynamics. Thank you.
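As a minimal worked instance of the Lagrange-Euler recipe, consider a single rigid link pivoted at one end and swinging in a vertical plane (my own example, not from the lecture). With the inertia about the pivot I = m*l^2/3 and the center of mass at l/2, carrying out the Lagrange-Euler steps reduces to tau = I*theta_dd + m*g*(l/2)*cos(theta), with theta measured from the horizontal; the Coriolis/centrifugal term vanishes for a single degree of freedom.

```python
import math

def link_torque(m, l, theta, theta_dd, g=9.81):
    """Inverse dynamics of a single uniform link pivoted at one end,
    from the Lagrange-Euler formulation:
        tau = I * theta_dd + m*g*(l/2)*cos(theta),   I = m*l^2/3.
    theta is measured from the horizontal; theta_dd is the desired
    angular acceleration."""
    I = m * l**2 / 3.0
    return I * theta_dd + m * g * (l / 2.0) * math.cos(theta)

# Just holding a 2 kg, 1 m link horizontal (at rest) needs the full
# gravity torque m*g*(l/2):
print(link_torque(m=2.0, l=1.0, theta=0.0, theta_dd=0.0))  # 9.81 N*m
```

For a 6-joint robot the same procedure produces the full D, h and C terms of the torque expression, but the bookkeeping is done per link and summed, as described above.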
Robotics by Prof. D. K. Pratihar, Lecture 32: Sensors (Contd.)
Now, we have already discussed the working principle of the absolute optical encoder. Now, this absolute optical encoder, which I have already discussed, is actually very precise, but the problem is that the number of photo-detectors should be equal to the number of concentric rings. So, if I use 10 concentric rings, I will have to use 10 photo-detectors, which are very costly, and that is why absolute optical encoders are very costly. And, in place of the absolute optical encoder, we use the incremental optical encoder. Now, here, in the incremental optical encoder, we use only 2 photo-detectors and there is only one coded disc. So, we do not use a large number of coded discs here, and we do not use a large number of photo-detectors here. Now, let us try to understand the working principle of this particular incremental optical encoder. Now, in this incremental optical encoder, as I told, we have got only one coded disc. So, this is the shaft whose rotation I am going to measure, and on this shaft we mount only one coded disc. And, on this particular coded disc, we have got alternating black zones and white zones. Now, through a black zone, no light is going to pass; and through a white zone, the light is going to pass. Now, here, the principle is slightly different, different in the sense that we have got only 2 photo-detectors, and these 2 photo-detectors are kept fixed; so, their positions are kept fixed. Now, what I am doing is, here we put one photo-detector, that is A, and another, that is B, and their positions are kept fixed. And, this particular shaft is rotating; suppose this incremental optical encoder, which is mounted on the shaft, is rotating in the clockwise sense.
So, if it rotates in the clockwise sense, then photo-detector A will enter the black zone first; and after that, photo-detector B is going to enter the black zone. Now, if you see the plot, for each photo-detector the trace alternates between the light zone and the dark zone: light zone, dark zone, light zone, dark zone, and so on. Now, here, what happens is, the moment the disc is rotating in the clockwise sense, A is going to face the black portion first, and then B is going to face it. So, A will be in the light zone for a smaller duration compared to B; B will be in the light zone for a slightly longer time. Now, here, you can see that for A, the light zone lasts only up to this point, whereas B is in the light zone up to this slightly later point; that means A has entered the black zone first. So, this is the starting of the black zone: A has entered the black zone first, and after that, B has entered the black zone. So, once again, let me repeat: A will be in the light zone only up to this point and B will be in the light zone up to this later point; that means A will enter the black zone first, and B will enter the black zone after some time. So, this is the type of signal you will be getting corresponding to photo-detector A and photo-detector B. And, now, if I see this particular signal, what we can do is count the number of light zones and the number of dark zones in each trace.
Similarly, here also, I am just going to count the number of light zone and number of dark zone. So, by counting the number of light zone and dark zone, in fact, we can find out, how much is the angular displacement, ok. So, the angular displacement can be determined by counting the number of the dark zone, and this particular the light zone. Now, actually the next thing is, approximately we can find out how much is the angular displacement of this particular the incremental optical encoder, or the shaft whose rotation I am going to measure. Now, here another information we are going to get, that is, it can indicate the direction of rotation. For example, if you see this particular signal once again, here A enters the dark zone first, B enters the dark zone after sometime, that means, this is rotating in the clockwise sense. Now, the reverse will be the situation, if it is rotating in the anticlockwise sense. So, you will be getting the different type of signals here, ok. So, let me once again repeat. A will enter the dark zone first, and B will enter the dark zone after sometime. It indicates that this particular shaft is rotating in the clockwise sense, ok. So, this is the way actually it can find out, how much is the angular displacement, and what is the direction of movement of that particular shaft. And, as I told that here, we use only one coded wheel; here there is only one coded wheel and only 2 photo-detectors, so it is less costly. And of course, it will be less accurate compared to this absolute optical encoder. But, as it is less costly, it is very frequently used as feedback device in robot; and this is used very frequently as a position sensor. Now, this is actually the working principle of incremental optical encoder. Now, I am just going to discuss the working principle of another very popular position sensor, that is known as LVDT, that is, Linear Variable Differential Transformer. 
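The counting-and-direction logic described above is what a quadrature decoder does with the two photo-detector signals. Below is a hedged Python sketch of one common decoding scheme; the sampling format and the sign convention (positive count when A leads B, i.e. clockwise in the lecture's setup) are my own assumptions, not from the lecture.

```python
def decode_quadrature(samples):
    """Count transitions of an incremental encoder's two photo-detector
    signals.  `samples` is a list of (A, B) logic levels taken at a fixed
    rate.  Each valid transition of the 2-bit state moves the count by +1
    (A enters the new zone first: clockwise here) or -1 (B enters first:
    anticlockwise).  Transitions where both bits change at once are
    treated as noise and ignored."""
    order = [(0, 0), (1, 0), (1, 1), (0, 1)]  # one full cycle, A leading
    count = 0
    for prev, cur in zip(samples, samples[1:]):
        if prev == cur:
            continue  # no edge between these two samples
        i, j = order.index(prev), order.index(cur)
        if (i + 1) % 4 == j:
            count += 1   # A leads B: clockwise step
        elif (i - 1) % 4 == j:
            count -= 1   # B leads A: anticlockwise step
    return count

# One full clockwise cycle of the disc pattern gives +4 counts:
cw = [(0, 0), (1, 0), (1, 1), (0, 1), (0, 0)]
print(decode_quadrature(cw))  # 4
```

Multiplying the count by the angular pitch of one black/white segment (divided by 4, since there are 4 edges per segment pair) then gives the angular displacement, and the sign gives the direction of rotation, exactly as described above.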
Now, this LVDT stands for linear variable differential transformer, and this is used to measure the linear displacement, that is, d, ok. Now, similarly, we have got RVDT, that is called Rotary Variable Differential Transformer. And, this particular RVDT, that is rotary variable differential transformer is used to measure the angular displacement, that is nothing but theta, ok. Now, let us try to understand the working principle of this particular LVDT, that is linear variable differential transformer. Now, construction wise, it is very simple, we have got one fixed casing. So, this is nothing but the fixed casing. And, we have got one moving magnetic core, this is actually the moving part, that is the magnetic core. Now, this particular magnetic core, it can slide along these two directions, that means, it can slide towards this or it can slide towards this, ok. And, we have got the fixed casing here. Now, in between the fixed casing and the moving magnetic core, so here we put one the primary coil, that is, L_P, and two pairs of secondary coil, that is, L_s1 and L_s2, ok. Like if we just draw, here we have got actually the primary coil. Now, if I just draw one very rough sketch sort of thing, for example, say this is the magnetic core, say, if this is the magnetic core, now here surrounding this actually we have got this primary coil; so we have got this particular primary coil, and here, we have got two such secondary coils, ok. Now, let us try to understand the working principle of this, and how can it measure the displacement, that is, the linear displacement with the help of the fixed casing. Let us try to understand the working principle. Now, to understand the working principle, actually what we do is, we try to see its equivalent electrical circuit first, ok. Now, this is nothing but the equivalent electrical circuit. So, this equivalent electrical circuit corresponding to that LVDT, this is the magnetic core. 
Now, here, in this particular sketch, the magnetic core can move up and down. And, here, we have got the primary coil, and we put the input voltage, that is, V_in, through the primary coil. And, we have got the secondary coils, the first secondary coil and the second secondary coil. And, here, in between these two points, we try to measure how much is the output voltage, that is, V_out. Now, let us see how we can measure this particular displacement or movement by measuring the output voltage. Now, this V_out is actually nothing but V_L_s2 minus V_L_s1. Let me explain what this V_L_s1 and V_L_s2 are. To explain this, let us go back to the previous picture first. Now, if you see the previous picture, supposing that this particular magnetic core is sliding towards my right, then the magnetic core will be closer to L_s2 compared to L_s1. So, the coupling between L_s2 and the magnetic core will be stronger compared to the coupling between the magnetic core and L_s1. So, here, the magnetic linking will be stronger; and due to this stronger linking, the induced voltage in L_s2 will be more compared to that in L_s1. That means, if I draw the same situation, with the core more towards L_s2, this particular coupling is stronger compared to the other coupling; that means, the induced voltage in L_s2, that is, V_L_s2, will be more compared to V_L_s1, and I will be getting a positive V_out. So, I will be getting a positive output voltage if it is moving downwards; and the reverse is the situation if it is moving upwards.
So, in that case, V_L_s1 will be more compared to V_L_s2, and V_out will become equal to some negative value. Now, here, this shows the calibration curve; this point corresponds to the null position. Now, this R indicates that if the core is moving towards the right, towards L_s2, I will be getting some positive V_out; and if it is sliding towards L_s1, I will be getting a negative V_out. So, this particular plot is the output voltage versus the position of the magnetic core with respect to the fixed casing; this is the calibration curve. And, once this particular calibration curve is pre-determined, then by measuring this V_out, we can find out what is the position of the magnetic core, or what is the displacement of the magnetic core with respect to the fixed casing. So, we can measure how much is the linear displacement of the magnetic core with respect to the fixed casing. So, this is the way this particular LVDT works. And, as I told, this is used to measure the linear displacement; and for measuring the angular displacement, we will have to go for the RVDT, that is, the Rotary Variable Differential Transformer. Now, this particular LVDT is used in robots; it is also used very frequently in different machine tools, for example, lathes, milling machines and drilling machines. So, this is the working principle of the LVDT. Now, we are going to discuss the working principle of another sensor, that is called the force or moment sensor. Now, the purpose of this force or moment sensor is to determine how much force or moment is acting at a robotic joint. Let me take a very simple example. Supposing that this is my wrist joint; if I consider this serial manipulator, this is my wrist joint.
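The LVDT read-out described above is a simple difference-and-scale computation. The sketch below is illustrative only: the sensitivity figure (volts per mm, the slope of the linear portion of the calibration curve) is an assumed value, not one from the lecture.

```python
def lvdt_displacement(v_ls1, v_ls2, sensitivity):
    """Estimate core displacement from the two secondary-coil voltages.
    V_out = V_Ls2 - V_Ls1 is zero at the null position, positive when the
    core moves towards L_s2, negative towards L_s1.  Within the linear
    range of the calibration curve:
        displacement = V_out / sensitivity
    where `sensitivity` (V per mm, assumed here) is the slope read off the
    pre-determined calibration curve."""
    v_out = v_ls2 - v_ls1
    return v_out / sensitivity

# Core shifted towards L_s2, so L_s2's induced voltage is the larger one:
print(lvdt_displacement(v_ls1=1.0, v_ls2=1.5, sensitivity=0.25))  # 2.0 mm
```

The sign of the result gives the direction of the core's motion, and its magnitude the linear displacement, exactly as the calibration curve discussion above describes.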
And, with the help of this wrist joint, this particular end-effector is connected. Now, suppose I am writing something with the help of this marker. The moment I am going to do some manipulation task with the help of these fingers, this particular joint is subjected to some amount of moment, some amount of torque. Now, if I want to measure this moment or torque at the wrist joint, how to measure it? To measure this moment or force, we put this type of force or moment sensor. Now, what I do here is, at the wrist end, we put this particular rim portion; so, this rim is connected to the wrist end. And, here, we have got one square block sort of thing, a cuboid, and this is called the hub. And, this hub is connected to the end-effector, or the fingers with the help of which I am doing that particular manipulation while writing. So, once again, let me repeat: the hub is connected to the end-effector, and this particular rim is connected to the wrist end. And, what is our aim? Our aim is to determine what should be the joint moment or the joint torque or the force. Now, let us see how to determine that. Now, let me first explain the construction details of this particular force sensor. Construction-wise, as I told, the rim is connected to the wrist end, and the hub is connected to the end-effector. Now, in between the rim and the hub, we have got some deflection bars. So, here, we have got one deflection bar; similarly, I have got another deflection bar here, another deflection bar here, and another deflection bar here. Now, if I see these deflection bars, these are bars having a square cross-section, made of an elastic material.
So, by elastic material, I mean, for example, some sort of steel working within its elastic zone; it has not reached the plastic zone. So, this deflection bar is actually made of steel and has a square cross-section. And, if you see, on each deflection bar we have got some strain gauges. In fact, on each deflection bar, we put two pairs of strain gauges. For example, here we put one pair, so this constitutes one pair of strain gauges; and there is another pair of strain gauges placed like this. So, this is the first pair and this is the second pair of strain gauges. So, on each of these deflection bars, we have got two pairs of strain gauges; and as I have got four such deflection bars, I have got 8 pairs of strain gauges in total. And, with the help of these strain gauges, we are going to measure how much is the deflection of the deflection bar; and if I know the deflection, then let us see whether from there I can find out how much is the load acting on the deflection bar. Now, let us see how to determine this. Now, while this particular end-effector is doing some sort of manipulation job, for example, it is handling some weights, or it is doing some sort of pick and place type of operation, what will happen is, each of these deflection bars will be subjected to some amount of force.
How to determine that? Now, if I concentrate on a particular deflection bar, it is almost similar to the situation where I have got a cantilever beam: this is the fixed end, and at the free end some concentrated load is acting. So, on this particular cantilever beam, there will be some deflection; and if there is some deflection due to the load, this deflection, delta, can be measured. Now, let us see how to measure this particular delta deflection with the help of the strain gauges. The strain gauges are mounted here; and with this particular strain gauge, we connect a potentiometer circuit, which I have already discussed. With the help of the potentiometer, what do we do? We measure the output voltage, and by measuring the output voltage, we can measure how much is the deflection. So, we use a potentiometer, for example, some sort of linear potentiometer, to find out how much is the deflection. And, the output of the potentiometer, that is nothing but the voltage, I can measure with the help of one voltmeter or multimeter. Let me repeat: for example, we have got the deflection bar; on each deflection bar, we put two pairs of strain gauges; now, each pair of strain gauges is connected to a potentiometer circuit; and, on the output side of the potentiometer, we can measure the output voltage, and that particular output voltage is proportional to the deflection or the displacement. And, that particular deflection is nothing but this delta. And, if I know this delta, and this is a cantilever beam, and supposing that I know the length of this particular beam, or the length of this particular deflection bar, is L.
I know the cross-section, I know the material properties, so very easily I can write down that this particular delta is nothing but delta = P L^3 / (3 E I); this is the standard formula. Now, this is valid if and only if this particular bar is working within its elastic limit. Now, here, delta is known; E is the Young's modulus, that is, the modulus of elasticity, which you know for the material; I is the moment of inertia, and as I know the cross-section and the dimensions of this particular deflection beam, I can find out the moment of inertia; L is the length of this deflection bar. So, all the things are known except this particular P, so P can be determined. So, I can find out how much is the load coming at this particular point to make that particular deflection possible. So, I can find out what should be the load acting on each of these deflection bars. And, those particular loads are nothing but the raw readings for these strain gauges; and those raw readings are nothing but the W values. Now, these particular raw readings are W_1, W_2 up to W_8, because we have got 8 pairs of strain gauges; so, each pair is going to supply one W value, like W_1, W_2 up to W_8. And, our aim is to determine F_x, F_y, F_z; the moment about x, the moment about y, and the moment about z. Now, these particular W values we can determine with the help of the strain gauges and the potentiometer circuits. And, our aim is to determine F_x, F_y, F_z; M_x, M_y, M_z; but in between there will be some calibration matrix, which is nothing but C_M. So, F is nothing but C_M multiplied by W, and this particular C_M is nothing but the calibration matrix. Now, here, in matrix form, I have shown the calibration matrix.
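Backing out the load from delta = P L^3 / (3 E I) is a one-line rearrangement. The sketch below is illustrative; the numerical values (a small steel bar with a 5 mm square section) are my own assumed figures, not from the lecture.

```python
def load_from_deflection(delta, L, E, b, h):
    """Tip load P on a cantilever deflection bar from its measured tip
    deflection, by rearranging delta = P*L**3 / (3*E*I):
        P = 3*E*I*delta / L**3.
    Rectangular cross-section b x h: I = b*h**3/12.
    Valid only while the bar stays within its elastic limit."""
    I = b * h**3 / 12.0
    return 3.0 * E * I * delta / L**3

# Assumed bar: steel (E = 210 GPa), 5 mm x 5 mm section, 50 mm long,
# measured tip deflection 0.01 mm:
P = load_from_deflection(delta=1e-5, L=0.05, E=210e9, b=0.005, h=0.005)
print(round(P, 3))  # about 2.625 N
```

Each of the eight strain-gauge pairs yields one such load value, and these are the raw W readings fed into the calibration matrix discussed next.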
I am just going to discuss, now let us see the dimensions of this particular matrix, here, there are 6 such values, so it is 6 cross 1 matrix. And, here we have got 8 such numerical values W values, so it is 8 cross 1 matrix. Now, to make this particular multiplication possible, so this particular matrix has to be 6 cross 8 ok. So, this particular calibration matrix C_M matrix is nothing but 6 cross 8 matrix. Now, how to determine that, so that I am going to discuss now, if I just concentrate on the previous thing, for example say, this particular thing, if I concentrate, and our aim is to determine your F_x, this is the x direction, so our aim is to determine F_x; this is the Y direction, F_y and F_z, then moment about X; then comes your moment about Y; and moment about Z; I will have to find out. And, these are all raw readings of the strain gauges, that is W_1, W_2 then comes your W_3, W_4 then 5, 6 then comes your 7, 8. So, these are all raw readings of the strain gauges, and I have already discussed how to get these particular the raw readings. Now, with the help of those raw readings actually, how to determine this particular F_x. F_x is the force along the X direction. Now, here W_1 and W_2 are at 90 degree with F_x, so along F_x, there will have no contribution, no component. Similarly, W_5 and W_6 will have no component along X direction. But, W_3 and W_7 will have some contribution towards X. So, there is a possibility here W_3 will come, and W_7 will come, and of course, there will be some calibration terms, which I am going to discuss after sometime, ok, there will be some calibration terms here. Next, comes is your F_y that is in this particular direction. So, 3 and 4 will have no contribution; 7 and 8 will have no contribution; but 1 and your 5 will have some contribution, so I am writing W_1 plus W_5, and here I am just going to write something, after sometime ok, so this is actually your F_y. 
Then, F_z if I want to find out, so this is the F_z direction, so these W_2 and 6 will have some contribution; so W_2 plus something into W_6 plus, so W_2, W_6 then comes your W_4 and W_8, so W_4 plus W_8 will have some contribution then comes your moment about X. Now, let us try to understand, this is the X direction, so this W_1, W_2, 5 and 6 will have no contribution towards M_X ok. Now, let us see whether 4 and 8 will have some contribution towards M_X or not. Now, W_4 is acting in this particular direction, it is acting in this particular direction ok, so definitely so these two will have some contribution towards M_X. So, W_4 and 8, so W_4 and W_8 will have some contribution. Then comes your moment about Y, so this is the Y direction this is the moment about Y. So, let us see the moment about Y, the 2 and 6 the W_2 and W_6 will have some contribution towards moment about Y. So, W_2 and W_6 will have some contribution then comes your moment about Z. Now, this moment about Z, M_Z, so this is the Z direction. So, this 1 and 5 will have some contribution, then comes your, next is 1 and 5 will have some contribution. And, next is your 3 and 7, 3 and 7 will have some contribution, so 3 and 7 will have some contribution, because, this is your Z direction. So, 3, 7 will have some contributions towards M_Z, ok. So, this is the way actually we can find out F_X, F_Y, F_Z; M_X, M_Y, M_Z. Of course, I have not put the calibration terms. The calibration terms actually, I am just going to show it here. If you see, the calibration terms have been written here. Whatever I discuss that F_X depends on W_3 and W_7 multiplied by this calibration matrix, so W_3 multiplied by calibration matrix C_13 plus W_7 multiplied by C_17. Similarly, for F_Y, these are the calibration matrix; F_Z these are the calibration matrix; M_X these are the calibration matrix; M_Y depends on your C_52 and C_56, these are the calibration matrix; and M_Z depends on these calibration terms, ok. 
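The mapping from the eight raw readings to the six force and moment components is just the matrix product F = C_M W. Below is a sketch of that multiplication; the calibration matrix shown is hypothetical, encoding only the sparsity pattern discussed above (e.g. F_x depending on W_3 and W_7) with unit entries, whereas the real C_ij terms must come from experimental calibration.

```python
def wrist_forces(C, W):
    """F = C_M * W : map the 8 raw strain-gauge readings W (8-vector) to
    the six components (F_x, F_y, F_z, M_x, M_y, M_z) via the 6x8
    calibration matrix C."""
    return [sum(C[i][j] * W[j] for j in range(8)) for i in range(6)]

# Hypothetical calibration matrix: only the F_x row is filled in, using
# the lecture's sparsity pattern (F_x depends on W_3 and W_7, 1-based)
# with unit calibration terms in place of the real C_13, C_17.
C = [[0.0] * 8 for _ in range(6)]
C[0][2] = C[0][6] = 1.0

W = [0.0] * 8
W[2], W[6] = 2.0, 3.0          # raw readings from gauge pairs 3 and 7

print(wrist_forces(C, W)[0])   # F_x = 5.0 with these unit entries
```

With the experimentally determined 6 x 8 calibration matrix in place of this toy one, the same multiplication returns the full wrench at the wrist.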
So, we can find out how much is the force acting, with its three components, and what is the moment: the moment about X, the moment about Y, and the moment about Z. Now, here, if you want to use this type of force or moment sensor, some precautions are to be taken. For example, the strain gauges are to be properly mounted on the deflection bars. So, on the deflection bars, we are going to mount the strain gauges, and we will have to mount them properly, otherwise we may not get the proper reading; there should not be any gap, and the strain gauges are to be correctly mounted. So, this is one precaution. Another is, the deflection bar should work within its elastic limit; otherwise, that particular formula for the deflection, that is, delta = P L^3 / (3 E I), will not be applicable, because it holds only within the elastic limit. So, these are the precautions to be taken. Thank you.
Robotics by Prof. D. K. Pratihar, Lecture 30: Control Scheme
We are going to start with a new topic, that is, topic 5; it is on Control Schemes. Now, we have seen, starting from the kinematics, how to derive the expression for the joint torque; and at each of the robotic joints, we put a motor, and this particular motor is going to supply that particular torque. We generally use a DC motor, and this particular DC motor is connected at each of the robotic joints. And, here, the generated torque is proportional to the armature current. Now, if the motor torque is denoted by tau_m and the armature current is I_A, then tau_m is proportional to I_A, and this can be written as tau_m = K_m multiplied by I_A. Now, this particular K_m is nothing but the constant of proportionality, and this is also known as the motor constant. So, tau_m is nothing but K_m multiplied by I_A. Now, here, what we will have to do is, at each of the robotic joints, we will have to generate this particular tau, that is, the torque, as a function of time. For example, if I plot, for a particular robotic joint, the joint torque as a function of time, supposing that this particular distribution is something like this, a very random distribution I have considered. And, at the same time, we will have to ensure the joint angle, that is, theta, as a function of time; so, there must be some continuous curve, something like this; and at the same time, the first time derivative of theta, that is, theta^dot, as a function of time, and theta^double dot, that is, the acceleration, as a function of time; some such distributions we will have to find out. Then, how to ensure that this particular DC motor is going to generate this amount of torque with time?
This is what is required if I want to create some angular displacement at the robotic joint within a particular cycle time: we have to generate theta as a function of time, theta^dot as a function of time, and theta^double dot as a function of time. Now I am going to discuss how to ensure that the DC motor generates this torque within the cycle time. To see how to generate this torque, let me once again go back to the expression for the joint torque, which I have already discussed while discussing the dynamics: tau = D(theta) theta^double dot + h(theta, theta^dot) + C(theta). As I have already discussed, the first is the inertia term, the second the Coriolis and centrifugal term, and the third the gravity term; a friction term F(theta, theta^dot) can also be added, which I am omitting here for simplicity. This torque has to be generated by the motor, and to generate it we have to take the help of a control scheme. Now, in the literature there are a few very popular control schemes, and out of all of them the most important one is known as the partitioned control scheme. In the partitioned control scheme, the torque to be generated, tau, is divided into two parts: one part is alpha multiplied by tau prime, and the other is beta.
Now, in the expression for the joint torque, the terms h(theta, theta^dot) + C(theta) + F(theta, theta^dot) taken together are called beta, and alpha is nothing but D(theta), the inertia term. This tau prime has to be generated with the help of a controller; each motor has its own inbuilt controller. Let us try to explain how the controller generates the required tau prime. If I consider a PD control law, tau prime can be written as theta_d^double dot + K_P E + K_D E^dot. PD control law means Proportional-Derivative control law. Here I am using two symbols: K_P is the proportional gain value, and K_D is the derivative gain value. I am also using E, the error, which is the difference between the desired theta_d and the theta actually produced by the motor, that is, E = theta_d minus theta. E^dot is the rate of change of this error, or equivalently the difference between the desired and the actually obtained angular velocities, E^dot = theta_d^dot minus theta^dot. And theta_d^double dot is the desired acceleration.
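The PD law and the partitioned split described above can be written out directly. The gain values in the usage line are made-up placeholders, not values from the lecture:

```python
def pd_tau_prime(theta_d_ddot, theta_d, theta, theta_d_dot, theta_dot, K_P, K_D):
    """PD control law: tau' = theta_d_ddot + K_P*E + K_D*E_dot."""
    E = theta_d - theta              # position error
    E_dot = theta_d_dot - theta_dot  # velocity error
    return theta_d_ddot + K_P * E + K_D * E_dot

def partitioned_torque(alpha, beta, tau_prime):
    """Partitioned control scheme: tau = alpha * tau' + beta."""
    return alpha * tau_prime + beta

# Placeholder numbers: desired angle 1.0 rad, measured 0.8 rad,
# desired velocity 0, measured 0.1 rad/s, gains K_P=25, K_D=10.
tp = pd_tau_prime(0.0, 1.0, 0.8, 0.0, 0.1, K_P=25.0, K_D=10.0)
```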
Theta_d^double dot, once again, is the desired acceleration. Now, before seeing how to generate tau prime using this PD control law, let me tell you that if I use a PID controller in place of the PD controller, then tau prime becomes theta_d^double dot + K_P E + K_I integral of E dt + K_D E^dot. PID stands for Proportional-Integral-Derivative control law; here we are adding one extra term, K_I multiplied by the integral of E dt, where K_I is nothing but the integral gain value. The values of K_P, K_I and K_D can, in fact, be determined mathematically, and there is one well-known method, called the Ziegler-Nichols rule. Using the Ziegler-Nichols rule, we can find out what the numerical values of K_P, K_I and K_D should be, and once I have determined those values, they are kept constant and not altered. Once we have got the gains, we can implement tau prime. Now, let us see how to implement it; this is the block diagram of the control architecture. Here D(theta), as I mentioned, is alpha, and according to the partitioned control rule tau = alpha tau prime + beta, where beta is h(theta, theta^dot) + C(theta) + F(theta, theta^dot).
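A discrete-time version of the PID law above can be sketched as follows. The gains, time step, and the rectangular (Euler) approximation of the integral term are my own illustrative choices, not something fixed by the lecture:

```python
class PIDLaw:
    """tau' = theta_d_ddot + K_P*E + K_I*integral(E dt) + K_D*E_dot,
    with the integral approximated by a running rectangular sum."""

    def __init__(self, K_P, K_I, K_D, dt):
        self.K_P, self.K_I, self.K_D, self.dt = K_P, K_I, K_D, dt
        self.integral = 0.0  # running approximation of integral E dt

    def tau_prime(self, theta_d_ddot, E, E_dot):
        self.integral += E * self.dt  # rectangular-rule update
        return (theta_d_ddot + self.K_P * E
                + self.K_I * self.integral + self.K_D * E_dot)

# Placeholder gains: K_P=10, K_I=2, K_D=1, sampled every 0.1 s.
pid = PIDLaw(K_P=10.0, K_I=2.0, K_D=1.0, dt=0.1)
```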
So, this is alpha and this is beta; now I have to generate tau prime. The way it is done is as follows: we take the help of some sort of closed-loop control system. Initially there could be some error, but this error will be compensated. Here theta_d^double dot is the desired acceleration, theta_d^dot the desired velocity, and theta_d the desired angular displacement, and with the help of these we are going to generate tau prime. Suppose this is the summing junction; how do we determine tau prime according to the PD control rule? As I have discussed, tau prime is nothing but theta_d^double dot plus K_P multiplied by E plus K_D multiplied by E^dot. At the summing junction, you can see, I am putting three plus signs: theta_d^double dot comes in from one, K_P multiplied by the error from another, and K_D multiplied by E^dot from the third; those things are summed up, and the sum is nothing but tau prime. Once we have got tau prime, we multiply it by alpha and add beta, so we get the complete torque tau = alpha tau prime + beta. Now, here we have also got a load, meaning the mechanical load: for a robot, the load to be carried to generate that angular displacement.
And suppose I am using a DC motor; the DC motor generates this torque, which means it generates theta, theta^dot and so on. The moment I switch the motor on, it will try to generate this torque, and this is how the torque is realized in the form of theta, theta^dot and all such things. Using some sensor we can measure theta and theta^dot; how to measure them I will be discussing after some time. Supposing we are able to measure theta and theta^dot: the measured theta is brought back to the summing junction for comparison with theta_d, the desired theta, and we get the error E, which is multiplied by K_P and added. Similarly, whatever theta^dot we measure is brought to the summing junction and compared with theta_d^dot; we get E^dot, which is multiplied by K_D and also summed up, and we get tau prime. This process will go on and on. In the first cycle we may not get the accurate theta and theta^dot, but as I told you, we are going to use a closed-loop control system, so there will be error compensation, and at the robotic joint we will generate theta and theta^dot accurately as functions of time. That is why, as I mentioned, we will get theta as a function of time, theta^dot as a function of time and, of course, theta^double dot as a function of time; some distribution will be obtained for each. So this joint torque will be realized in the form of theta, theta^dot and theta^double dot.
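The closed loop described above can be simulated for a single joint. The sketch below assumes a one-link model with constant inertia D and a gravity term m·g·l·cos(theta); the gains, parameters, and the simple Euler integrator are all my own illustrative choices. It only shows that the tracking error dies out over the cycles; it is not the lecture's implementation:

```python
import math

def simulate_joint(theta0=0.0, theta_d=1.0, D=2.0, m_g_l=5.0,
                   K_P=100.0, K_D=20.0, dt=0.001, steps=5000):
    """Partitioned (computed-torque) PD control of one joint.

    dynamics : tau = D*theta_ddot + C(theta), with C(theta) = m_g_l*cos(theta)
    control  : tau = alpha*tau' + beta, alpha = D, beta = C(theta),
               tau' = theta_d_ddot + K_P*E + K_D*E_dot
    For a fixed set-point the desired velocity and acceleration are zero.
    """
    theta, theta_dot = theta0, 0.0
    for _ in range(steps):
        E = theta_d - theta
        E_dot = 0.0 - theta_dot
        tau_prime = K_P * E + K_D * E_dot   # theta_d_ddot = 0 for a set-point
        C = m_g_l * math.cos(theta)         # gravity term (beta)
        tau = D * tau_prime + C             # alpha*tau' + beta
        theta_ddot = (tau - C) / D          # plant responds per the dynamics
        theta_dot += theta_ddot * dt        # Euler integration
        theta += theta_dot * dt
    return theta
```

After enough cycles the joint angle settles at the desired value, because the closed loop keeps compensating the remaining error.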
So, this is the way we can generate the desired motion at the robotic joint with the help of a DC motor. But the DC motor has some loss, so whenever we try to calculate the power rating of the motor we are going to put in, that loss has to be considered. Now, if I know the torque history, and if I know theta^dot as a function of time, we can very easily find out what the power rating should be, and if I know the power rating, we can prepare the specification of the motor to be put at the robotic joint. But as I told you, while preparing the specification we have to consider two things: one is the torque required, and another is the loss of torque. The loss of torque is generally calculated as K tau squared, where tau is the torque and K is a constant; for a DC motor we generally consider a small value, around 0.025. So to whatever torque is required to generate theta^dot, theta^double dot and all such things, we add this loss K tau squared, and then we decide what the power rating of the motor should be. This is the way we control the different joints of a robot with the help of DC motors. For example, consider the way we control PUMA. Here I am just going to discuss briefly the control architecture for PUMA, that is, the Programmable Universal Machine for Assembly. There are 6 joints, all rotary, and at each joint we have got this type of control system. So let us see the control architecture for each joint; this is the block diagram of the control architecture for a particular joint.
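The power-rating calculation described above can be sketched as below. Taking the required power as the mechanical power tau × theta^dot plus the loss K tau² is my reading of the lecture's recipe, and the numbers in the usage line are illustrative placeholders:

```python
def motor_power(tau, theta_dot, K=0.025):
    """Required motor power: mechanical power tau*theta_dot (W)
    plus the loss term K*tau**2, with K a small motor constant."""
    return tau * theta_dot + K * tau ** 2

# Illustrative: 10 N*m at 2 rad/s gives 20 W mechanical
# plus 0.025 * 10**2 = 2.5 W of loss, i.e. 22.5 W in total.
P = motor_power(10.0, 2.0)
```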
Now, here you can see that we have got the 6503 microprocessor. To control a particular joint we have got a motor, and to control it we use this type of control architecture: the microprocessor, then digital-to-analog conversion, then the current amplifier, because the armature current is going to enter and we will get the joint torque. This torque will be generated and realized in the form of theta as a function of time, that is, the joint angle. Here I am using one encoder; this optical encoder is nothing but the feedback device. With the help of the optical encoder we can determine the joint angle, and this is compared with the desired value. If there is any error, that error will be amplified, we will get some armature current, the torque will be generated once again, and the error will be compensated. So at each robotic joint we will get very accurate movement with the help of the motor. Now, for PUMA there are six motors, and this is the control scheme for one joint; similarly, for the second joint I have got another such control scheme, and likewise for the third, fourth, fifth and sixth joints. The movements of all the joints will be controlled by one centralized computer, that is, the master control computer. So to control PUMA we have got one controller, or director, and one master control computer, with the help of which the movements of all the joints can be controlled. This is the way we control PUMA.
Now, let me summarize, in short, what we have discussed so far: starting from the kinematics, we have discussed how to carry out the dynamic analysis, and we have seen how to generate the torque with the help of, say, a DC motor. The motor is equipped with a controller, generally a PID, PI or PD controller, and once we know the gain values of the controller, we can control the movement at the different robotic joints. So by now the robot is ready, and we have already discussed how to teach a robot: if I just want to give a task to the robot, we are in a position to give it, and the robot will try to follow and perform that task. Till now, whatever we have discussed is this, but one thing we have not discussed: can a robot take decisions? How can you make the robot capable of taking decisions? That means, how to make a robot intelligent, how to make a robot autonomous, and what do we mean by robot intelligence? Those things we have not yet discussed in this course, and we are gradually moving towards how to make the robot intelligent. Before I go for those intelligence issues in robotics, which form the fourth and last module of robotics, let me tell you what is meant by an intelligent robot. By an intelligent robot, what is meant is a robot that is able to take decisions as the situation demands; that means, in a varying situation, in a varying environment, an intelligent robot should be able to take decisions. There is another term, the autonomous robot: an autonomous robot is a robot which has got the permission to perform as the situation demands.
So, if the robot has the ability to perform in varying situations, it may be called an intelligent robot. However, only if it also has the permission to perform in an intelligent way, that is, to take decisions in varying situations, is it called an autonomous robot. All autonomous robots should be intelligent robots, but all intelligent robots may not be autonomous. Now let me take a very simple example just to find out the difference between an intelligent system and an autonomous system. Suppose that under one university there are 10 engineering colleges; these engineering colleges have to follow the rules and regulations of that university. At each of the engineering colleges there could be intelligent people, faculty members and students, but they are unable to take any decision: they have to depend on the university rules. So they are intelligent, but they are not autonomous. On the other hand, institutes like IIT and NIT are intelligent and at the same time autonomous: they have the capability to take decisions, and they also have the permission to take them; they are intelligent and they are autonomous. In robotics we call such a system an intelligent and autonomous robot. Now, how to design and develop an intelligent and autonomous robot? Those issues will be discussed in detail, one after another. As I told you, we copy everything from the human being in robotics, and we also try to copy the intelligence: the way we collect information, the way we take decisions, the way we implement all such decisions, all such things we are going to copy in an artificial way in the intelligent and autonomous robot. Now, we collect information with the help of our senses, but the robot does not have any such sensor.
So we will have to put some sensors, for example some cameras, and with the help of these sensors and cameras the robot will be able to collect information about the environment. Once it has got that information, it will try to do the analysis, using some sort of motion planning algorithm, which I am going to discuss, to decide what the course of action should be. Depending on the course of action, there will be movement of the wheels of the robot, or movement of the different links or limbs. Now, to produce that movement, we take the help of motors and controllers, just to get the motion implemented. So, to make the robot intelligent, all such issues will have to be discussed one after another, and we are going to discuss, in detail, how to make the robot intelligent. Thank you.
Robotics_by_Prof_D_K_Pratihar
Lecture_08_Introduction_to_Robots_and_Robotics_Contd.txt
We are discussing the working principles of different types of end-effectors used in robots. Now, I am going to start with the working principle of a magnetic gripper. The magnetic gripper is suitable for magnetic materials: for example, for a component made of steel it is going to work, but it will not work for stainless steel, because stainless steel is not magnetic. Here we can use both a permanent magnet and an electromagnet. If I use a permanent magnet, the mechanism is as follows: this is the permanent magnet, and it will be connected to the robotic end-effector. Now, a magnetic gripper has a few advantages: it can grip objects of various sizes, and the pick-up time is less. On the other hand, it has the drawback that it is subject to residual magnetism. Suppose I am using a permanent magnet of this type, connected through the wrist end of the manipulator, and this is the steel plate which I am going to grip. The moment I bring the permanent magnet very close to the steel plate, the magnetic lines of force pass through the plate, and due to this the steel plate is gripped by the permanent magnet. Now, if I want to un-grip, that is, remove the steel plate from the magnetic gripper, I will have to use one stripping device, which is nothing but a steel pin. The way it works is as follows: on this permanent magnet I have got one circular hole here and another circular hole there; if I want to un-grip, the steel pin is inserted through these two circular holes.
The moment we insert the steel pin, some of the magnetic lines of force pass through the pin, and consequently the magnetic field passing through the steel plate becomes weaker; due to this weakness of the field and the self-weight of the plate, the steel plate is separated from the permanent magnet. This is the way we un-grip the steel plate from the permanent magnet. Now, in place of a permanent magnet, if I use an electromagnet, gripping works the same way, but if I want to un-grip, what I will have to do is reverse the polarity: if I reverse the polarity of the electromagnet, the steel plate is released. This is the way a magnetic gripper works; its working principle is very simple, and it is very frequently used for magnetic materials, but it will not work for non-magnetic materials. Next is the adhesive gripper. The adhesive gripper is suitable only for light objects, and here we use some sort of adhesive material just to grip the object. This is almost similar to the way a frog catches its prey: it puts some adhesive material on its tongue, the tongue is thrown towards the insect, and the insect is caught with the help of the adhesive. So the adhesive gripper, as I told you, is suitable only for very light material. Then comes the universal gripper. Our hand is a true example of a universal gripper, because with the help of our hand we can grip different types of objects; our gripper is robust and flexible, and it can grip a number of objects of different shapes and sizes.
And that is why it is a very sophisticated one, and our hand is known as a universal gripper. So, now, I am going to start with the working principle of the passive gripper. A passive gripper is used where there is no sensor; I have already mentioned that by passive grippers we mean those grippers in which we do not use any sensor. Before I proceed with the working principle, let us try to understand why we go for this type of gripper. Let me take one very simple example. Suppose I want to populate a printed circuit board (PCB); on the printed circuit board there are some small circular holes, and what we have to do is, depending on the requirement of the electrical or electronic circuit, insert small elements like resistors, capacitors and so on into these holes, just to design and develop that printed circuit board. Now, if I give this task to the manipulator, to the robot, at the end-effector we will have to put a special type of gripper, which is nothing but the passive gripper, if I want to insert small items like resistors and capacitors into these holes. The problem we are going to face is that it is a bit difficult to insert a peg into a hole; this peg-in-hole problem is actually very popular in robotics. This schematic view shows that I have got a steel plate, and on the steel plate we have got one circular hole, into which we have to insert the peg. Suppose this is the center line of the hole, and I have got the peg, which is gripped with the help of the gripper; now the robot is going to put the peg into the hole.
Now, if we want to put the peg into the hole, there is a possibility that this part of the peg is going to collide here, and due to this it will not be possible to insert the peg into the hole; the peg will be obstructed here, and this is what is known as the lateral error. To remove this lateral error, what we do is put some chamfering: I have got this type of plate, and I put a chamfer here. If I put this chamfer and try to insert the peg with the help of the robot, there is a possibility that the lateral error will be solved, but it is going to create another problem: the peg may take a position something like this, creating another error, called the angular error. So by introducing the chamfer, there is a possibility we can solve the lateral error, but we are going to create the angular error. How do we solve both errors, so that I can insert the peg into the hole? To do that, we take the help of one passive gripper, very popularly known as the remote center compliance, that is, RCC. Construction-wise it is very simple: this part is connected to the wrist end of the robot; here I have got one steel frame, another frame here, and another steel plate of small thickness, and these plates are connected with the help of four links like this; and here we have got two fingers.
So, this is finger 1 and finger 2, and with the help of these two fingers we grip the peg; this is the peg, which I will have to insert into the hole. Now, this assembly is connected to the wrist end, as I told you. The peg will be brought very near to the hole; suppose the hole is here. What happens is that the peg has some sort of oscillatory movement like this, and due to this oscillation, by trial and error, the peg is inserted into the hole with the help of the RCC, which is nothing but a passive gripper. Now, the RCC will work provided we put some chamfering on the plate, otherwise it may not work, and the angle of chamfer has to be less than 45 degrees, otherwise there could be some angular error. Moreover, the RCC can work in the vertical direction, but it will not work in the horizontal direction. Still, this gripper is very popular for solving how to insert small electronic items into a printed circuit board. So, this is the way the passive gripper works, and this is all about the different types of end-effectors generally used in robots. Now, here I just want to mention that depending on the requirement, depending on the task, we will have to design a special type of gripper, a special type of end-effector. The working principles of the few grippers and end-effectors I discussed are actually very simple designs, but depending on the nature of the task, we will have to design the most suitable gripper; that is why we look at the task and try to design the end-effector or gripper accordingly. Now, I am going to start with the teaching methods, that is, how to give instructions to a robot.
Now, suppose I have got one robot, say one serial manipulator, and I just want to give the instruction: start from a particular point, say the tip of this marker, and reach this other point, the tip of this finger, through a number of intermediate points. How do we give this type of command, this type of instruction, to the robot? Here, the purpose of teaching, as I told you, is to provide the necessary instructions to the robot. Now, these teaching methods are broadly classified into two groups: we have got online methods and we have got offline methods. By online methods we mean those methods where, while giving the instruction, we use the robot itself; that means we are going to teach the robot, but while giving the instruction, or while teaching it, we will have to use that particular robot. On the other hand, if I do not use the robot while teaching, that method is known as an offline method, and in offline methods we will have to take the help of some sort of programming language. Now, let me first concentrate on the online methods. These online methods are once again classified into two subgroups: one is called manual teaching, the other is called lead-through teaching. Let me try to discuss the working principle of manual teaching first. Suppose I am going to use one serial manipulator having, say, 6 degrees of freedom, like PUMA, and I am going to do some sort of drilling operation on a steel plate: suppose this is the plate, and on this plate I want to drill here, at location 1.
So, what I will have to do is as follows: the twist drill bit has to be gripped by the gripper of the manipulator, and the center of the hole and the tip of the twist drill bit should coincide. Now, this is in 3D; the object has x, y and z axes. So how do we reach this particular point, a 3D point in 3D space, with the help of a manipulator having 6 degrees of freedom? To reach a point in 3D space with a manipulator having 6 degrees of freedom, there could be several combinations of the theta values, that is, several combinations of theta 1, theta 2, up to theta 6, with the help of which I can reach this point, say point 1, and out of all the possible combinations of theta values, if I know at least one, my purpose will be served. Now, how do we collect this information? To collect it, we can take the help of manual teaching, which is suitable for point-to-point tasks, and this is nothing but a point-to-point task. There are several methods of manual teaching; for example, we can take the help of a control handle or joystick. With the help of the control handle or joystick, through some trial and error, the tip of the twist drill bit will be able to reach the center of the hole. The moment it reaches the center of the hole, we store all the theta values with the help of the optical encoders, which are mounted at each of the robotic joints. So we measure all the theta values corresponding to the hole which is to be drilled on this plate.
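The point that several joint-angle combinations can reach the same point can be illustrated with a planar 2-link arm. This is my own toy example, not from the lecture: the same tip position admits an elbow-down and an elbow-up solution.

```python
import math

def two_link_ik(x, y, l1, l2):
    """Inverse kinematics of a planar 2R arm with link lengths l1, l2.
    Returns both (theta1, theta2) solutions, elbow-down and elbow-up,
    that place the tip at (x, y) (assumes the point is reachable)."""
    c2 = (x**2 + y**2 - l1**2 - l2**2) / (2 * l1 * l2)
    s2 = math.sqrt(1 - c2**2)   # the +/- signs give the two branches
    solutions = []
    for s in (s2, -s2):
        theta2 = math.atan2(s, c2)
        theta1 = math.atan2(y, x) - math.atan2(l2 * s, l1 + l2 * c2)
        solutions.append((theta1, theta2))
    return solutions

def forward(theta1, theta2, l1, l2):
    """Forward kinematics of the same arm, used to check a solution."""
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y
```

Both returned joint-angle sets place the tip at the same point, which is exactly why storing any one measured set of theta values from teaching is enough.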
So, what I do then is replace this plate by a second one, and we make the drilled hole at exactly the same location; once it is done on the second plate, we go for the third plate, and so on: for a large number of plates, we can make these drilled holes at exactly the same locations. Now, to collect this information on theta 1, theta 2, up to theta 6, we take the help of the control handle, which is nothing but manual teaching. Then come the push buttons. We have already discussed that for each robotic manipulator there is a director, or controller. On the body of the director or controller there will be a few push buttons, and with the help of these push buttons we can control the movement of the tip of the manipulator either in the Cartesian coordinate system, that is, x, y and z, or in joint space, that is, in terms of theta 1, theta 2, up to theta 6. So we can increase and decrease the values of theta 1, theta 2, up to theta 6, or the numerical values of the x, y and z movements, and through some trial and error, the moment the tip reaches the point, we store all the theta values with the help of the optical encoders. This is how to use the push buttons. Next, I am going to discuss how to use a teach-pendant for manual teaching. The teach-pendant is nothing but a remote controller for the robot; just like the remote controller used for a TV, it is look-wise almost similar, but slightly larger in size. Now, the teach-pendant can be operated either in the Cartesian coordinate system, or world coordinate system, that is, in x, y and z; it can also be operated in the joint scheme, that is, in terms of theta 1, theta 2, up to theta 6; it can be operated in the tool coordinate system also, and so on.
So, we will have to select a particular operating mode or coordinate system, and then, by using this particular teach-pendant manually, we can control the movement of the different joints. The moment the tip of the cutting tool reaches the center of the hole, what I do is: we store all such theta values with the help of the optical encoders, and the same set of theta values we use for a large number of plates. Now, this is the method of how to use the teach-pendant to carry out the manual teaching. Now, I am just going to discuss the principle of another online method, that is called the lead-through teaching. This lead-through teaching is suitable for some sort of continuous-path task, which I have already discussed, and for this particular continuous-path task, the tool should be in touch with the job continuously. Now, let me take the same example which I took for this particular continuous-path task, supposing that this is actually a profile which I will have to cut on one side of a steel plate. The way it has to be done is as follows: we use some sort of milling cutter, and this milling cutter should rotate, and it should be able to trace this particular complicated profile. Now, this is in 3D. If I consider the 2D view, supposing that in x and y this type of profile I will have to cut, how to cut this type of profile? To cut this type of profile, what we do is: we divide this profile into a large number of small segments, and the more the number of segments, the better will be the precision. Supposing that we are going to divide it into, say, one thousand segments; now, for each of these one thousand and one points, we cannot find out so easily the sets of theta values like theta 1, theta 2, up to theta 6, and moreover, once we have got them somehow, we will have to store these particular sets of theta values.
So, it requires a huge amount of memory, but the more difficult thing is how to determine one thousand and one sets of such theta values. Mathematically, it becomes very difficult to determine these one thousand and one sets of theta values, and that is why we will have to use some other practical method to collect this particular information. Now, one method could be like this: if this is the cutter, this particular cutter is gripped by the gripper or the end-effector, and this particular cutting tool is going to trace this complicated profile. Now, we can try: if this is the cutter, I can just grip it and try to trace this complicated profile which I am going to cut. But if I want to trace this complicated profile manually, it becomes very difficult, because at each of the robotic joints there are some motors, there are some brakes, there could be a chain drive, gear drive, belt drive, and so on. So, it becomes very difficult if I just grip it and try to move it according to my choice; it is very difficult to move manually. Then, how to trace this particular complicated profile? If I can trace this complicated profile which I am going to cut, and, while tracing, at regular intervals I can store the theta values with the help of the optical encoders, I will be able to collect all the sets of theta values. And, once we have collected those sets of theta values, we try to fit some smooth curve, which I will be discussing after some time, just to ensure the smooth variation of theta 1, theta 2, up to theta 6; and once we have got that smooth variation, I can operate and run that particular robot. But here, the problem is that we will not be able to trace this complicated profile directly. Now, one method has been suggested: we are going to use a second manipulator, that is called the robot simulator. Now, this robot simulator is actually not a simulation package.
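The record-and-smooth scheme just described (store theta values from the encoders at regular intervals while the profile is traced, then recover a smooth variation of each joint angle) can be sketched in a few lines of code. This is an illustrative sketch only: the sampled values below are made-up numbers standing in for encoder readings of a single hypothetical joint, and simple piecewise-linear interpolation stands in for the curve fitting discussed later in the lecture.

```python
import math

# Hypothetical encoder readings for one joint (theta, in radians),
# stored at regular time intervals while the profile is traced by hand.
samples_t = [0.0, 1.0, 2.0, 3.0, 4.0]      # time of each stored sample
samples_theta = [0.0, 0.3, 0.7, 0.9, 1.0]  # recorded theta values

def theta_at(t):
    """Piecewise-linear interpolation between the stored encoder samples."""
    if t <= samples_t[0]:
        return samples_theta[0]
    if t >= samples_t[-1]:
        return samples_theta[-1]
    for i in range(len(samples_t) - 1):
        t0, t1 = samples_t[i], samples_t[i + 1]
        if t0 <= t <= t1:
            w = (t - t0) / (t1 - t0)
            return (1 - w) * samples_theta[i] + w * samples_theta[i + 1]

# Replaying the taught joint trajectory at a finer time resolution:
replay = [theta_at(0.5 * k) for k in range(9)]
```

In a real system each of the six joints would get its own sampled sequence, and a smoother fit (for example, cubic splines) would normally replace the linear interpolation so that joint velocities vary continuously.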
So, this is actually a physical robot, and this particular robot is kinematically equivalent to the main robot which I am going to teach. By kinematic equivalence, we mean that both the main robot and this particular robot simulator have the same type of joints and the same type of links, but this robot simulator could be in one-to-one scale with the main robot, or it could be a scaled-up or scaled-down version. Now, in this robot simulator, there is no motor and there is no drive unit, but at each of the joints we have got an optical encoder. Since we have the optical encoder at each joint, but there is no drive unit, no gears, no brakes, nothing, I can just grip this particular end-effector, or the cutter which is connected to the end-effector, and I can trace the complicated profile which I am going to cut; and, while tracing, at regular intervals, with the help of the optical encoders, I am just going to store the theta values. Now, this is actually known as the lead-through teaching. This robot simulator is actually the master robot, and the main robot, which I am going to control, is called the slave robot; this is, in fact, the working principle of master and slave robots. So, this is the lead-through teaching, and both lead-through teaching and manual teaching come under the umbrella of the online methods. Now, I am just going to concentrate on the offline method, and here, in the offline method, we will have to use some sort of programming language, just like a computer program. For example, we can use a language like VAL programming for the PUMA series robots.
So, this particular example I am just going to take for the PUMA series robot using VAL, that is, Versatile Assembly Language or Variable Assembly Language, and this is suitable only for PUMA, that is, the Programmable Universal Machine for Assembly. Now, in this VAL programming, we take the help of a few commands from the BASIC language, that is, Beginner's All-purpose Symbolic Instruction Code. Here, we will see that some of the codes are exactly the same, but we add a few extra commands also in this particular VAL. Now, before I write one program with the help of this VAL programming, I just want to define the task which I am going to give to the robot, and that particular task is nothing but the pick and place type of operation. Let me discuss this pick and place type of operation first; then I will be discussing how to write down the VAL program to solve the problem. Now, let us try to define the problem which I am going to solve. Supposing that I have got a table sort of thing, and on this particular table I have got two bins or buckets: so, this is bin – 1 and this is bin – 2, lying on the top of the table. Now, what I do is: I have got a serial manipulator sort of thing, a very simple manipulator, and here we have got this particular gripper or the end-effector. Now, this manipulator has one base coordinate system, like an x, y and z coordinate system, and this particular table has another coordinate system, here, like x, y and z, ok?
So, if I want to give instruction to the robot that you just go to the bin – 1 and collect a particular job and place it to the bin – 2, their particular coordinate systems are to be known to one another. Supposing that, on this particular bin, I have got an object, a 3D object and we know how to represent the position and orientation of this particular 3D object. So, here, to represent the position and orientation, we need actually six information, three for the position and three for the orientation. Now, let me take a very simple example supposing that this is the 3D object. Now, if I want to represent its position and orientation, I need three information for the position and three for the rotation, that is, the orientation. So, I need six information. Supposing that the position and orientation of the object lying on this particular bin – 1 are known and the position and orientation of this particular bin, that is, the bin – 2 are also known, and all such information I have stored at the top of this particular program. So, here, at the top of the program, I will have to write down the position and orientation of the bin – 1, position and orientation of the bin or the bucket – 2, and the position and orientation of this particular item, the 3D object. Now, once I know all such things, let us see how to write down, this particular the VAL commands. Now, we are going to discuss, how to teach a robot practically. Now, here, the robot which we are going to consider is the PUMA, that is, programmable universal machine for assembly. Now, this PUMA is nothing but a serial manipulator having 6 degrees of freedom, there are six joints. All six joints are rotary joints and out of six, we have got three revolute joints and three twisting joints. Now, here so, this is actually the PUMA. Now, this is a robot with fixed base. So, this is the fixed base, the first joint, that is, nothing but the twisting joint. The second joint is the revolute joint. 
The third joint is another revolute joint. The fourth joint is a twisting joint. The fifth joint is another revolute joint and here we have got one twisting joint. So, we have got six joints, each joint is having one degree of freedom. So, this serial manipulator is having 6 degrees of freedom. Now, if I see the different components. So, this is the body of the robot. This is the controller or the director of this particular robot. Now, this is equipped with one display, but that display has become out of order. That is why, this particular display is used as the display for the controller or the director of this robot. As we have already discussed that the robot can be taught using either online or offline method. Now, out of these online methods, we have got the manual teaching and lead-through teaching. Now, here, I am just going to show, how to control or how to teach this particular robot using a manual teaching method, that is, with the help of one teach-pendant. Now, this particular teach-pendant is nothing but a remote controller for this particular robot. Now, with the help of this particular teach-pendant, the robot can be controlled either in world coordinate system or the Cartesian coordinate system or we can control it in joint space like in terms of the theta space or in tool coordinate system. Now, these teach-pendant is used for the manual teaching. Now, regarding this offline teaching method, as I have already mentioned that we use some programming language. Now, for this particular PUMA series robot, the programming language, which we are using to teach this particular robot is VAL, that is, versatile assembly language or variable assembly language. Now, this versatile assembly language we can use to write down the program to solve some practical problems. 
Now, here, I am just going to discuss two practical problems, and to solve these two practical problems, we are going to write down the VAL commands and we are going to control this particular manipulator. Now, let us first concentrate on the first task. The task is: we will give commands with the help of the VAL programming so that the robot will first go to its HOME position; then, from the HOME position, it will directly reach a particular predefined point, say point A; and after that, from point A, it will once again go back to HOME. Now, to solve this particular problem, we are going to show you the VAL program first, and then, with the help of this VAL program, we are going to teach this particular robot. Now, this shows the VAL commands. With the help of these VAL commands, what we can do is: the HOME is already defined, and we are going to define a particular point, say point 1, point 2, point 3 or point 4, and by using these VAL commands, we can give the command that the robot goes to HOME, then from HOME it goes to point A, and once again it comes back to HOME. Now, we are going to execute this particular program to control the robot. The second task is related to the pick and place type of operation. So, at location 1, there is an object. The task of the robot is to pick that particular object, carry it to another location, and place it there; this is very popularly known as the pick and place type of operation. Now, we are going to show you how to use the VAL programming, so that the robot can perform this particular task. Now, this is the VAL program. So, what we have done here is: we have defined the different points.
So, the coordinates of the different points we have already saved in the program, and then we just give the commands like: MOVE to this particular point, MOVE to that particular point, and so on. So, this is the way we can write down this particular VAL program to solve the pick and place type of operation.
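The transcript does not reproduce the actual VAL listing, but the structure of such a pick-and-place program (taught points saved at the top, followed by a sequence of move/grip/release commands) can be mimicked with a tiny interpreter. Everything below is an illustrative assumption: the point names HOME, BIN1 and BIN2, the coordinates, and the command names are invented for this sketch and are not VAL syntax.

```python
# Taught points: predefined (x, y, z) locations saved at the top of the
# program, analogous to the positions stored in the lecture's VAL program.
points = {
    "HOME": (0.0, 0.0, 0.5),
    "BIN1": (0.3, 0.2, 0.1),   # where the object initially lies
    "BIN2": (0.3, -0.2, 0.1),  # where the object must be placed
}

# The pick-and-place sequence: go home, pick at BIN1, place at BIN2, go home.
program = [
    ("MOVE", "HOME"),
    ("MOVE", "BIN1"),
    ("CLOSE", None),   # close the gripper to pick the object
    ("MOVE", "BIN2"),
    ("OPEN", None),    # open the gripper to place the object
    ("MOVE", "HOME"),
]

def run(program, points):
    """Execute the command list, returning a trace of (cmd, position, holding)."""
    position, holding, trace = points["HOME"], False, []
    for cmd, arg in program:
        if cmd == "MOVE":
            position = points[arg]
        elif cmd == "CLOSE":
            holding = True
        elif cmd == "OPEN":
            holding = False
        trace.append((cmd, position, holding))
    return trace

trace = run(program, points)
```

The point of the sketch is the program shape, not the commands themselves: a real VAL program would likewise define its locations first and then issue motion and gripper commands against those names.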
Robotics_by_Prof_D_K_Pratihar
Lecture_13_Robot_Kinematics_Contd.txt
So, we have seen how to determine the 3 cross 3 matrix corresponding to rotation about Z by an angle theta. So, this is the matrix. Similarly, we can find out the rotation about X by an angle theta, and that is nothing but 1, 0, 0; 0, cos theta, minus sine theta; 0, sine theta, cos theta; then, rotation about Y by an angle theta in the anticlockwise sense is cos theta, 0, sine theta; 0, 1, 0; minus sine theta, 0, cos theta. Now, I am just going to discuss the properties of a rotation matrix. The first one: each row or column of a rotation matrix is a unit vector. That means, if I concentrate on this particular rotation matrix and consider the first row, that is, cos theta, 0, sine theta, then its magnitude, that is, the square root of cos squared theta plus 0 plus sine squared theta, is equal to 1. Similarly, if I concentrate on a particular column, say cos theta, 0, minus sine theta, once again the square root of cos squared theta plus 0 plus sine squared theta becomes equal to 1; this is what we mean by the first property. The next is that the inner product or dot product of each row of a rotation matrix with every other row becomes equal to 0. So, for example, if I concentrate on the first row, that is, cos theta, 0, sine theta, and the second row, that is, 0, 1, 0, and I try to find out their inner product, it is nothing but cos theta multiplied by 0 plus 0 multiplied by 1 plus sine theta multiplied by 0, and this becomes equal to 0. Similarly, I can consider two columns of the rotation matrix, like the first column, that is, cos theta, 0, minus sine theta, and the second column, that is, 0, 1, 0.
And, if we try to find out their inner product, it becomes cos theta multiplied by 0 plus 0 multiplied by 1 minus sine theta multiplied by 0, and this is, once again, equal to 0. So, this is the way we can check the second property of this particular rotation matrix, that is, the inner product of each row (or column) of a rotation matrix with each other row (or column) becomes equal to 0. The next property is that rotation matrices are not commutative in nature; that means, the result depends on the sequence. So, the sequence is very important while writing down the rotation matrices. For example, the rotation about X by an angle theta_1 followed by the rotation about Y by an angle theta_2 is not equal to the rotation about Y by an angle theta_2 followed by the rotation about X by an angle theta_1. Now, if I calculate the left-hand side and the right-hand side separately and compare, we will see that they are not equal. That means the rotation matrix depends on the sequence along which we write down these rotation matrices, and they are not commutative in nature. The next property is that the inverse of a rotation matrix is nothing but its transpose. So, if it is a pure rotation matrix and you want to find out its inverse, it is very easy: we find out the transpose of that particular rotation matrix, and that will be equal to its inverse. For example, Rot(X, theta) inverse is nothing but the transpose of the rotation matrix Rot(X, theta). But this is true only when it is a pure rotation matrix; if there is any translation term, this particular condition will not hold good. Now, here, I have written another thing, that is, the transformation matrix of B with respect to A is nothing but the inverse of the transformation matrix of A with respect to B, ok?
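All four properties stated above can be checked numerically. The sketch below builds the same Rot(X, theta) and Rot(Y, theta) matrices given in the lecture, using plain Python lists; the angles 0.3 and 0.7 rad are arbitrary test values.

```python
import math

def rot_x(t):
    """Rotation about X by angle t (radians), as a 3x3 nested list."""
    c, s = math.cos(t), math.sin(t)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def rot_y(t):
    """Rotation about Y by angle t (radians)."""
    c, s = math.cos(t), math.sin(t)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def transpose(a):
    return [list(row) for row in zip(*a)]

R = rot_y(0.7)

# Property 1: each row is a unit vector.
row_norms = [math.sqrt(sum(x * x for x in row)) for row in R]

# Property 2: distinct rows are mutually orthogonal (dot product 0).
dot_01 = sum(R[0][k] * R[1][k] for k in range(3))

# Property 3: rotations do not commute — Rot(X)*Rot(Y) != Rot(Y)*Rot(X).
A = matmul(rot_x(0.3), rot_y(0.7))
B = matmul(rot_y(0.7), rot_x(0.3))

# Property 4: the inverse of a pure rotation is its transpose (R * R^T = I).
I = matmul(R, transpose(R))
```

Running the checks confirms the rows have unit norm, distinct rows are orthogonal, A and B differ, and R times its transpose is the identity.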
Now, this is very much essential because say we want to find out a particular joint with respect to the previous and the reverse, that is, this particular joint with respect to the next and for that what you need is, this particular inverse we will have to find out and T_B with respect to A is nothing but T_A with respect to B inverse. So, this particular condition we can use. Now, I am just going to solve one numerical example based on the theory, which I have already discussed. Now, for this numerical example, the statement is as follows: a frame B. So, B is nothing but a body coordinate frame is rotated about X_U, that is, the X axis of the universal coordinate system by 45 degrees. So, this is positive 45 degrees; that means, the anticlockwise direction and translated along X_U, Y_U and Z_U by 1, 2, and 3 units, respectively. Let the position of a point Q in B is given and this is given by 3, 2, 1 transpose, then how to determine the position of that same point with respect to the universal coordinate system, that is, Q with respect to U. So, here, the way we will have to find out it, is very simple. So, this Q with respect to U, this is nothing but the rotation of B with respect to U multiplied by Q with respect to B, and of course, this is a vector so, we will have to put this vector sign. So, if I can find out this rotation of B with respect to U and I know this Q with respect to B so, very easily, I can find out what is Q with respect to U. Now, how to determine this rotation matrix, that is, R_B with respect to U. So, once again, let me read that there is a rotation about X_U by an angle 45 degree in the anticlockwise sense and there are translation along X_U, Y_U and Z_U by 1, 2 and 3 units, respectively. So, by using that particular information, what I will have to do is so: I will have to find out what should be this particular R_B with respect to U. 
Now, this R_B with respect to U if you see the rotation term, the rotation terms is nothing but 1, 0, 0; 0, cos theta, minus sine theta; 0, sine theta, cos theta; now, here, theta is nothing but 45 degrees. So, this is actually nothing but this particular rotation about X_U by an angle 45 degrees and these are nothing but the translation terms. So, I will be getting the transformation matrix corresponding to that particular rotation and that is nothing but this 4 cross 4. In fact, although I have written there it is R_B with respect to U. Truly speaking, corresponding to that it should be your T_B with respect to U. So, if I write T_B with respect to U, then it will carry actually the 4 cross 4 matrix and this is multiplied by 3 2 1 1 and these (3 2 1) is nothing but the position terms, that is, Q with respect to B. So, Q with respect to B and if I know actually, we will have to find out Q with respect to U and truly speaking, this is nothing but T_B with respect to U multiplied by actually Q with respect to B. Now, Q with respect to B is this much, and this T_B with respect to U is nothing but this much and if I just multiply, this is a 4 cross 4 matrix and this is a 4 cross 1 matrix, and if I multiply then, I will be getting this particular the 4 cross 1 matrix and this is nothing but Q with respect to U. So, very easily, we can find out, what is this particular the Q with respect to U. So, Q with respect to U, you can find out. Now, let us try to concentrate on this particular rotation matrix, that is called the composite rotation matrix. Now, I have already discussed that while writing this particular rotation matrix, the sequence is very important and we will have to follow a particular sequence, otherwise altogether you will be getting the different the final results and that is why, to write down this particular rotation matrix all of us, who work on robotics, follow one rule that is called the composite rotation matrix rule. 
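The numerical example worked just above (frame B rotated 45 degrees about X_U and translated by 1, 2 and 3 units, with Q given as (3, 2, 1) in B) can be verified directly by writing out the 4 cross 4 transformation matrix and multiplying. This is a minimal plain-Python sketch of that computation.

```python
import math

theta = math.radians(45)
c, s = math.cos(theta), math.sin(theta)

# T of B with respect to U: rotation of 45 deg about X_U,
# plus translation (1, 2, 3) along X_U, Y_U, Z_U.
T = [[1, 0,  0, 1],
     [0, c, -s, 2],
     [0, s,  c, 3],
     [0, 0,  0, 1]]

# Point Q with respect to B, in homogeneous coordinates.
q_B = [3, 2, 1, 1]

# Q with respect to U = T * Q with respect to B.
q_U = [sum(T[i][k] * q_B[k] for k in range(4)) for i in range(4)]
# q_U ≈ [4.0, 2.7071, 5.1213, 1.0]
```

So Q with respect to U comes out as approximately (4, 2.707, 5.121): x picks up only the translation, while y and z mix the rotated components of (2, 1) with the translations 2 and 3.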
Now, the rule is as follows: the composite rotation matrix representing a rotation of alpha angle about Z axis followed by a rotation of beta angle about Y axis followed by a rotation of gamma angle about X axis is written in this particular the format. The rule is as follows: whatever I state first; that means, the rotation about Z by an angle alpha, that particular thing will be written at the last, that is rotation about Z by an angle alpha will be written at the end followed by the rotation of beta about Y axis; so rotation of beta about Y axis is followed by the rotation of a gamma angle about X axis. So, ROT X comma gamma is written first. Now, for each of these, actually we can find out like what should be the 3 cross 3 matrix. So, this is a 3 cross 3 matrix, this is also a 3 cross 3 matrix and this is also a 3 cross 3 matrix. Now, if I multiply then finally, we can find out one 3 cross 3 matrix and that is nothing but the composite rotation matrix. So, we can find out the 3 cross 3 matrix, that is nothing but ROT_composite, that is the composite rotation matrix. Now, this particular rule is actually followed by the whole robotics community, just to write down, whenever there is rotation term. Now, here, I am just going to take some examples and I am going to show you that this particular rule is correct. So, indirectly, I am just going to prove that this particular rule that is the rule, for the composite rotation matrix is correct. So, here, till now, whatever we have seen is: we have expressed position in terms of the Cartesian coordinate system and we have seen that, in terms of Cartesian coordinate system, the position can be expressed in terms of a vector and that is nothing but a 3 cross 1 matrix. For example, say the position vector is nothing but say q_x, q_y, q_z and once again, let me repeat that this is nothing but a position vector in matrix form, this is nothing but a 3 cross 1 matrix. 
Now, the same position can also be expressed in other coordinate system now that I am going to discuss. Now, here, I am just going to see how to represent the position in cylindrical coordinate system. Now, in cylindrical coordinate system, the position of a particular point in 3D can be expressed with the help of actually two translations and one rotation in a particular the sequence. Now, here, supposing that the problem is as follows: so I have got the universal coordinate system and in this particular universal coordinate system, this is nothing but X_U, Y_U, and Z_U. Now, supposing that I have represented a particular point, that is nothing but Q with respect to U. So, in Cartesian coordinate system if I want to represent this, it is very easy. So, if I know the translation along X direction, if I know the translation along Y direction and if I know the translation along Z direction. So, very easily, I can find out this Q with respect to U in Cartesian coordinate system. Now, here, my aim is to reach the same point using the cylindrical coordinate system; so how to reach, we take the help of these steps. So, in step one, we start from the origin O. So, this is the origin O and then, we translate by r units along X_U. So, from here, we translate by r unit along X_U. So, I am here the next is the rotation in a anticlockwise sense about Z_U axis by an angle theta. So, I am here now, I am taking rotation about this particular Z_U by an angle theta in the anticlockwise sense. So, I am here so, this is my position. Now, the next is translate along Z_U axis by Z unit. So, starting from here, we translate along this particular Z direction by this small z unit, and I am just going to reach the same point, that is nothing but Q with respect to U. 
Now, this particular sequence, if I just take the help of the rule for the composite rotation matrix, I can find out the final transformation matrix, or, if I want, I can find out the corresponding rotation matrix also; but here, I am interested mostly in position. So, what I can do is: I can find out the transformation matrix. So, T_composite, that is, the transformation matrix corresponding to this particular composite: whatever I stated first will go to the last, that is, translation along X_U by r units, followed by rotation about Z_U by theta, followed by translation along Z_U by z. Now, corresponding to each of these particular operations, we can write down the 4 cross 4 matrix. For example, corresponding to this rotation about Z_U by an angle theta, I can write down very easily this particular transformation matrix, that is, 4 cross 4: this will be cos theta, minus sine theta, 0; then comes sine theta, cos theta, 0; then 0, 0, 1; and as this is a pure rotation, the translation terms will be 0, so 0, 0, 0, and of course, I have got the fourth row, that is, 0, 0, 0, 1. So, this is nothing but the rotation about Z_U by an angle theta in 4 cross 4 matrix form. Similarly, I can also find out the translation along X_U by r. Now, this is a pure translation, so the rotation terms will be nothing but a 3 cross 3 identity matrix. So, very easily, I can write down this particular thing; for example, Trans X_U comma r can be written in 4 cross 4 matrix form as follows: 1, 0, 0; 0, 1, 0; 0, 0, 1 — this is nothing but the rotation term, an identity matrix; then, along X_U, I have got the translation r, along Y it is 0, along Z it is 0, and the fourth row is 0, 0, 0, 1. So, similarly, I can also write down the 4 cross 4 matrix corresponding to Trans Z_U comma z.
Now, I am getting 3 matrices each having 4 cross 4 dimensions and if I just multiply, then finally, I will be getting this particular 4 cross 4 matrix. Now, in this 4 cross 4 matrix, these particular terms are going to represent the position terms. For example, say in Cartesian whatever was q_x that is nothing but r cos theta, whatever was q_y that is nothing but r sine theta. Similarly, your q z is nothing but is equal to z. So, this particular relationship between the Cartesian coordinate system and this particular the cylindrical coordinate system, I can establish very easily using the rule of the composite rotation matrix. Now, all of us, we know that the relationship between the Cartesian coordinate system and the cylindrical coordinate system is nothing but this, that is, q_x equals to r cos theta, q_y equals to r sine theta. So, these are actually very well-known relationships between the Cartesian coordinate system and the cylindrical coordinate system. So, the rule for composite rotation matrix, whatever we have used, that is correct. So, this is the way, actually, indirectly, we can prove the correctness of this particular rule for composite rotation matrix. So, now, I am just going to discuss another coordinate system and that is called the spherical coordinate system. Now, here, our aim is to reach the same point, which I represented in Cartesian coordinate system, that is, this particular Q with respect to U. With the help of Cartesian coordinate system, that is, q_x, q_y and q_z, the same point, I want to reach with the help of this spherical coordinate system. So, in spherical coordinate system, in fact, there is one translation and there are two rotations in a particular the sequence. So, let us try to check that particular sequence. So, in step – 1: starting from the origin O, we translate along Z_U axis by r. So, this is nothing but the origin of this particular coordinate system. 
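The cylindrical-coordinate result above can be verified numerically: per the composite rule, the first-stated operation (translate along X_U by r) is written last, so T_composite = Trans(Z_U, z) * Rot(Z_U, theta) * Trans(X_U, r), and its translation column should be exactly (r cos theta, r sin theta, z). The helper functions below are a plain-Python sketch with arbitrary test values of r, theta and z.

```python
import math

def rot_z(t):
    """4x4 homogeneous matrix: pure rotation about Z by t radians."""
    c, s = math.cos(t), math.sin(t)
    return [[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]

def trans(dx, dy, dz):
    """4x4 homogeneous matrix: pure translation."""
    return [[1, 0, 0, dx], [0, 1, 0, dy], [0, 0, 1, dz], [0, 0, 0, 1]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

r, theta, z = 2.0, math.radians(30), 1.5

# First-stated operation (Trans along X_U by r) goes last in the product.
T = matmul(matmul(trans(0, 0, z), rot_z(theta)), trans(r, 0, 0))

position = (T[0][3], T[1][3], T[2][3])
# position equals (r*cos(theta), r*sin(theta), z)
```

This reproduces q_x = r cos theta, q_y = r sin theta, q_z = z, which is the standard cylindrical-to-Cartesian relationship, confirming the composite rule.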
So, starting from here, along this particular Z_U, we translate by r units, I am here, this is by r units. Now, the next is, rotate in anticlockwise sense about Y_U by an angle alpha. So, this is nothing but my Y_U axis so, about Y_U, I just rotate by alpha in the anticlockwise sense. So, whatever was here, this particular thing will be rotated something like this. So, let me repeat, supposing that this is along this particular Z_U and here, say I have got, Y_U axis. So, I am just rotating about this particular Y_U. Now, it will be rotated something like this and after that, we take another rotation, that is, rotation in anticlockwise sense about Z_U by an angle beta. So, now, I am rotating about Z_U by an angle beta so, there is a possibility that I am going to reach this particular point, that is, nothing but Q with respect to U. So, what we do, let me repeat again, I am just going to translate it, translate along the Z by r units, after that I am just going to rotate about Y_U by an angle alpha and after that we are going to rotate about Z_U. So, this is nothing but Z_U, so rotate by an angle beta. And, then, I will be able to reach this particular point. So, with the help of one translation and two rotations I am just going to reach this particular point. Now, all such translation and rotations, if I just write with the help of this composite rotation matrix, I will be getting this particular form, that is, your that transformation matrix corresponding to this composite. Now, whatever I stated first, will go to the last. So, translation along this particular Z_U by r unit, so I will have to write at the end, because that I stated first, followed by the rotation about Y_U by alpha, followed by rotation about Z_U by an angle beta. And, as I discussed this particular 4 cross 4 matrix, I can write down, and all of us we know how to multiply like these two matrices and so, it is having the dimension of 4 cross 4, this is once again like the 4 cross 4. 
So, these 4 cross 4 matrices we can multiply and then, you will be getting one 4 cross 4 matrix, and that particular 4 cross 4 matrix, we can multiply with this, then, I will be getting this final 4 cross 4 matrix. Now, in this particular 4 cross 4 matrix, actually, the position terms are denoted by these three, that means, in Cartesian, what is q_x that is nothing but r sine alpha cos beta and q_y is nothing but your r sine alpha sine beta and q_z is nothing but is your r cos alpha. Now, here, if I know, in this Cartesian coordinate system, the same point I can also represent in the spherical coordinate system; that means, this is the known relationship between the Cartesian and the spherical coordinate systems. So, once again, actually, I have a re-derived. So, this particular relationship between your the Cartesian and the spherical coordinate systems is actually known to us and this is another indirect proof for this particular rule of composite rotation matrix. That means, the position of a point with respect to the universal coordinate system can be expressed in Cartesian coordinate system in cylindrical coordinate system, and in spherical coordinate system. And, that is why, the same robot can be actually controlled either in Cartesian coordinate system or cylindrical coordinate system or in spherical coordinate system. Now, here, actually, I am just going to express and discuss how to represent the orientation other than the Cartesian coordinate system. So, the position we have seen that we can represent in other coordinate system. Now, I am just going to show like, how to represent the orientation with respect to the other coordinate system. For example, in terms of Cartesian, we have already discussed that we take the help of like 3 cross 3 matrix to represent the orientation. 
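The spherical case can be checked the same way: the first-stated operation (translate along Z_U by r) is written last, so T_composite = Rot(Z_U, beta) * Rot(Y_U, alpha) * Trans(Z_U, r), and the translation column should come out as (r sin alpha cos beta, r sin alpha sin beta, r cos alpha). Again a plain-Python sketch with arbitrary test angles.

```python
import math

def rot_z(t):
    c, s = math.cos(t), math.sin(t)
    return [[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]

def rot_y(t):
    c, s = math.cos(t), math.sin(t)
    return [[c, 0, s, 0], [0, 1, 0, 0], [-s, 0, c, 0], [0, 0, 0, 1]]

def trans(dx, dy, dz):
    return [[1, 0, 0, dx], [0, 1, 0, dy], [0, 0, 1, dz], [0, 0, 0, 1]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

r, alpha, beta = 2.0, math.radians(40), math.radians(25)

# First-stated operation (Trans along Z_U by r) goes last in the product.
T = matmul(matmul(rot_z(beta), rot_y(alpha)), trans(0, 0, r))

position = (T[0][3], T[1][3], T[2][3])
# position equals (r*sin(alpha)*cos(beta), r*sin(alpha)*sin(beta), r*cos(alpha))
```

This reproduces the well-known spherical-to-Cartesian relationship, which is the indirect proof of the composite rotation matrix rule described in the lecture.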
Now, if we remember, we have seen that, with the help of the normal vector, sliding vector and approach vector, actually, we can represent this particular orientation: n_x, n_y, n_z; s_x, s_y, s_z; a_x, a_y and a_z. So, this is nothing but the 3 cross 3 rotation matrix in Cartesian to represent the orientation. This I have already discussed in detail. Now, this n stands for the normal vector, s stands for the sliding vector and a stands for the approach vector. Now, if I just draw once again the same picture, like one end-effector with two fingers for a particular manipulator, very easily, actually, I can represent these normal, sliding and approach vectors. So, if this is the end-effector with two fingers, the normal vector is nothing but this, the sliding vector is nothing but this, and the approach vector is nothing but this. This is how to represent the orientation in terms of the Cartesian coordinate system. So, now, we will be discussing how to represent the same orientation in other coordinate systems. Thank you.
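As a quick sanity check on this representation, a small sketch (my own, not from the lecture): the three columns n, s, a of a valid orientation matrix must be orthonormal, with n cross s equal to a:

```python
import numpy as np

# A sample orientation: the end-effector rotated by an angle theta about Z.
# n, s, a are written as the three columns of the 3x3 rotation matrix.
theta = 0.7
n = np.array([np.cos(theta), np.sin(theta), 0.0])    # normal vector
s = np.array([-np.sin(theta), np.cos(theta), 0.0])   # sliding vector
a = np.array([0.0, 0.0, 1.0])                        # approach vector
R = np.column_stack([n, s, a])

# A valid orientation matrix is orthonormal (R^T R = I) and right-handed.
assert np.allclose(R.T @ R, np.eye(3))
assert np.allclose(np.cross(n, s), a)
```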
Robotics_by_Prof_D_K_Pratihar
Lecture_41_Intelligent_Robot.txt
Now, we are going to start with another topic and that is on Intelligent Robots; we will try to see how to design and develop an intelligent robot. Now, we call a robot an intelligent one, if it can take decisions as the situation demands. Now, let us see how to design and develop these intelligent robots. Now, this intelligent robot should have an adaptive motion planner; the reason behind this adaptive motion planner I have already discussed, that it should be able to take decisions in a varying situation. And, moreover, there should be an adaptive controller; now, while discussing the control scheme of a robot, we have discussed that each of the motors is equipped with one controller and, if I use the PID controller, the gain values are to be determined. Now, if I want to make it intelligent, we will have to find out one controller which is also adaptive; that means, it can tune the gain values in an adaptive way, as the situation demands. Now, if we want to design and develop this intelligent robot, we will have to merge the principles of artificial intelligence, that is, AI, with robotics; and, if we can merge the principles of AI or computational intelligence, that is, CI, into robotics, we will be able to make the robots intelligent. Now, here, I am just going to take one example; the example of the soccer playing robots, that means, the football playing robots. Now, before we go for these football playing robots, I am just going to discuss a little bit the way one expert system, named Deep Blue, could defeat the World Chess Champion, Garry Kasparov. So, we know that in the year 1997, one expert system, the name of the expert system was Deep Blue, could defeat Garry Kasparov, the world chess champion, and this particular expert system could defeat Kasparov using the principles of artificial intelligence.
Now, here, this chess playing is a very simple task compared to the soccer playing, because in chess playing the environment is static; we know what is happening, we know the positions of the different players. But, here, in the soccer playing, that is, the football playing task, the field is dynamic. So, the field or the scenario is going to vary with time; and, if the scenario varies with time, that means we have a dynamic environment. So, how to tackle it? Let us see; this particular problem is much more difficult. Actually, what we do in soccer playing robots is the following: there are 2 teams and each team consists of 11 players; among the team mates, there will be some sort of cooperation and, between the two teams, there will be some sort of competition. And, using the principles of this cooperation, competition and updating, these particular robots are going to play. Now, each of the robots has its own goal and that is nothing but a function of the main goal of the team, that is, how to win that particular game or how to score the goal. Now, each of the robots, as I told, has its own goal and that particular goal is a function of the main goal. And, each of these particular robots is intelligent and is called an agent, and that is why this soccer playing team is nothing but a Multi-Agent System, that is, MAS. Now, here, in this particular multi-agent system, each of the robots is an intelligent agent, and they are going to perform in the optimal sense, so that the team can ultimately win the game. Now, here, as I told, we need an adaptive motion planner and an adaptive controller; that means, if we want to design and develop an intelligent and autonomous robot, we will have to use the adaptive motion planner and the adaptive controller.
And, that is why, the soccer playing robots have become quite popular, and the main purpose of these soccer playing robots is how to design and develop the intelligence of the robots, so that these particular robots can work in a multi-agent system and, ultimately, the team of robots can win the game by scoring goals. Now, here, the ultimate goal of RoboCup was set as follows: by mid 21st century, a team of autonomous humanoid robots should beat the human World Cup champion team under the official regulations of FIFA. So, that particular goal was set by the investigators or the researchers working on this multi-agent system of robotics. Now, to reach that particular goal, many people are working throughout the world; some problems have been solved, but still there are many open research issues, which are to be solved in a very efficient way, so that a team of autonomous humanoid robots will be able to beat the World Cup champion football team, according to the regulations of FIFA. So, this is a very complicated task and, to reach this particular target, we will have to make improvements in different areas. Now, let us see a simplified version of that, like how to implement or how to design and develop one intelligent and autonomous robot. Now, here, I am just going to take one scenario; this is a very simple scenario, but this scenario can be made much more complicated also. So, I am just going to concentrate on this simple scenario. Now, supposing that this is the starting point for a particular robot and this is the goal; and, here, we are going to consider a 2-wheeled, one-castor robot. So, this is the left wheel, this is the right wheel, and we have got a free wheel here, a support sort of thing, and that is nothing but the castor. Now, this point indicates the CG of this particular robot.
Now, the physical robot I am just going to show, and I am going to explain it after some time in much more detail. But, before that, let me explain the problem, which is going to be solved with the help of this particular robot. Now, at time t equals to t_1, supposing that this is the CG of this particular robot, and G is the goal; and we have got a few moving obstacles, like O_1, O_2, O_3, O_4 and O_5. Now, for simplicity, I am just going to consider only 5 obstacles. Now, each of these obstacles is moving with a velocity or speed along a particular direction. For example, O_1 is moving in this particular direction with some speed; similarly, O_2 is moving along this direction with some speed, O_3 is moving along this direction, O_4 is moving along this direction and O_5 is moving along this direction. Now, here, to solve this particular motion planning problem, our aim is to find out the collision-free and time-optimal path for this particular mobile robot. Now, to solve this problem, we take the help of the concepts of the distance step and the time step. Now, during a particular time step, the robot is going to move through a particular distance, and that is nothing but the distance step. Now, at time t equals to t_1, supposing that this is the CG of the robot and these are all predicted positions of the obstacles; now, here, if we want to find out what should be the collision-free path for a particular time step, the first thing we will have to do is to find out which one is the most critical obstacle. Now, here, there are 5 obstacles, and we consider the distance between the present position of the robot and the position of each of the different obstacles.
And, we try to find out what should be the distance between the robot and each obstacle. So, here, there are 5 obstacles; so, I will be getting 5 distance values and, out of those 5 distance values, we will try to find out which one is the minimum. Supposing that the distance values are d_1, d_2, d_3, d_4 and d_5; d_1 is the distance between obstacle 1 and the CG of this particular robot, so this is nothing but d_1; similarly, I will be getting d_1, d_2 up to d_5; we compare all the d values and we try to find out the minimum. Now, supposing that, out of all such d values, if you concentrate on this particular scenario, this d_4 is found to be the minimum; that means, this particular obstacle, that is, the obstacle O_4, is physically found to be the nearest to this particular robot. But, this particular obstacle O_4 is moving in this direction; on the other hand, O_3, another obstacle, is moving towards this particular robot. Now, if I compare the distances here, d_4 will be less than d_3 but, as obstacle O_3 is moving towards the robot, it will be considered the most critical obstacle, not O_4; that means, O_3 is considered as the most critical obstacle, because it is moving towards that particular robot. Now, if I just select this as the most critical obstacle, the distance between the present position of the robot and the obstacle, that is, C_1 O_3, is nothing but the distance input for the motion planner. Similarly, the angle between the goal, the present position of the robot and the obstacle O_3, that is, the angle G C_1 O_3, is another input for the motion planner. Now, here, for this motion planner, I have got two inputs; one is the distance, another is the angle. And, to avoid collision, now, we will have to find out what should be the angle of deviation.
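The selection of the most critical obstacle described above can be sketched as follows. This is an illustrative reconstruction, not the lecture's actual code; implementing the "moving towards the robot" test with a velocity dot product is my assumption about how the heuristic could be coded:

```python
import numpy as np

def most_critical_obstacle(robot, obstacles, velocities):
    """Among the obstacles, prefer the nearest one that is moving towards
    the robot; if none is approaching, fall back to the nearest overall.
    (A sketch of the lecture's heuristic, not the experiment's code.)"""
    d = [np.linalg.norm(np.asarray(o) - robot) for o in obstacles]
    approaching = []
    for i, (o, v) in enumerate(zip(obstacles, velocities)):
        to_robot = robot - np.asarray(o)
        # Positive dot product => obstacle velocity points towards the robot.
        if np.dot(v, to_robot) > 0:
            approaching.append(i)
    candidates = approaching if approaching else range(len(obstacles))
    return min(candidates, key=lambda i: d[i])

robot = np.array([0.0, 0.0])
obstacles = [np.array([1.0, 0.0]), np.array([0.0, 2.0])]
velocities = [np.array([1.0, 0.0]),    # O_1: nearer, but moving away
              np.array([0.0, -1.0])]   # O_2: farther, moving towards the robot
print(most_critical_obstacle(robot, obstacles, velocities))  # -> 1
```

Just as in the lecture's scenario, the farther obstacle wins because it is approaching the robot.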
Now, here, this angle G C_1 C_2 is nothing but the deviation angle. That means, to avoid collision with this particular moving obstacle, the robot is going to follow this particular path, or the robot is going to deviate from its original path, just to avoid the collision with the moving obstacle. So, one output of the motion planner will be this particular deviation angle, that is, G C_1 C_2; and we can also consider another output of the motion planner, that is, the speed or the acceleration of this robot. So, if I just try to find out what should be the inputs and outputs of the motion planner, and if I just draw the block diagram of this particular motion planner, there are 2 inputs: one is the distance input and another is the angle input. And, there are 2 outputs: one is nothing but the deviation angle, and another could be the speed or the acceleration of this particular robot. So, if I know the acceleration, we can also find out the speed. So, there are two inputs, distance and angle, and there are two outputs, that is, deviation and acceleration of this particular robot. And, using this motion planning algorithm, we can find out what should be the angle of deviation and what should be the acceleration, so that the robot can avoid collision with this particular moving obstacle. Now, let us see how to implement it in the real experiment. Now, here, we can consider that this is nothing but an optimization problem, and our aim is to minimize the travelling time; that means, the robot should be able to reach the goal, starting from the initial position, in minimum time. And, at the same time, the path should be collision-free; so, there should not be any collision between the robot and the moving obstacles.
And, moreover, the kinematic and dynamic constraints of this particular robot are to be fulfilled. Now, regarding these particular kinematic and dynamic constraints, before I discuss them a little bit, I will have to show the physical model of this particular robot, and now, I am just going to show you the physical model of this robot. Now, this is nothing but a two-wheeled, one-castor robot; now, we can see that we have got one wheel here, we have got another wheel here, and we have got one support, and that is nothing but the castor. So, this is a two-wheeled, one-castor robot, and this is nothing but a two-wheeled, one-castor differential drive robot. Now, for each of these particular wheels, we have got a separate motor. So, this motor is connected to the wheel and, of course, for each of these motors, there must be a controller. And, here, we use some sort of PID controller just to control this particular motor, so that we can generate some speed at the two wheels and, accordingly, we will be getting the movement of this particular robot. Now, we will be discussing, in detail, how to make it intelligent. But, before that, let me tell you about these kinematic and dynamic constraints; so, the kinematic constraints for this particular robot could be of two types. For example, there could be non-holonomic constraints, and there could be holonomic constraints. So, by non-holonomic constraints, we mean those constraints which are dependent on velocity. So, these non-holonomic constraints are dependent on the velocity of the robot, and the holonomic constraints are independent of the velocity. So, these constraints are to be fulfilled; otherwise, we cannot generate the movement of this particular robot, particularly whenever it is taking a turn. Now, if I just concentrate once again on the physical model, we can see what happens, supposing the robot is going to take a turn.
Now, if it is going to take a turn on the left side, then, on the right-hand side, actually, the RPM or the speed of this particular wheel should be more compared to that of the other side; then only it can take a turn. And, while taking this particular turn, these particular constraints, like the non-holonomic and holonomic constraints, are coming into the picture. And, moreover, each of these particular motors is controlled by the controller, and this motor is going to generate the torque required just to give some rotation to the wheels. Now, to determine the power rating of this particular motor, we always try to find out how much is the torque requirement of these particular wheels. So, these dynamic constraints are going to tell us how to decide the power rating of the motor, so that this particular motor will be able to provide the necessary torque, so that it can generate the required RPM at this particular wheel. So, these kinematic and dynamic constraints are to be fulfilled and, at the same time, the path has to be collision-free. Now, let us see how to carry out this particular real experiment. Now, before I go for this particular real experiment, the motion planning algorithm, which I am going to use for this experiment, is nothing but the potential field method, the principle of which I have already discussed in much more detail; and, as I told, this particular potential field method is going to work based on the concept of the attractive potential and the repulsive potential. Now, let me repeat that this attractive potential, U_attractive (X), is nothing but half zeta_attractive, that is nothing but a constant value, multiplied by d_goal (X) square, and this d_goal (X) is nothing but the distance between the goal and the CG of the robot.
And, this goal is going to attract that particular robot; and, on the other hand, we have got one repulsive potential, which is nothing but U_repulsive (X); this I have discussed in much more detail, and here I am just going to consider this particular expression for the repulsive potential: half zeta_repulsive multiplied by, 1 divided by d_obstacle (X) minus 1 divided by d_obstacle_0, squared. Now, this particular d_obstacle (X) is nothing but the distance between the obstacle and this particular robot. And, d_obstacle_0 is the following: supposing that I have got an obstacle here and, surrounding that obstacle, we consider one circle, and that is called the circle of influence. Now, if this is the center of the circle, then this radius is nothing but d_obstacle_0. And, for this d_obstacle (X): supposing that the robot is here, then this particular distance is nothing but d_obstacle (X). Now, using this, actually, the obstacle is going to put some sort of repulsive force on the robot. And, using these attractive and repulsive forces, attractive and repulsive potentials, the robot is going to move towards the target or the goal. And, ultimately, the robot is going to reach the goal, and this robot will be under the combined action of the attractive and repulsive potentials or the attractive and repulsive forces. And, I have already mentioned that we consider that there are two inputs for this motion planner, that is, distance and angle. And, there are two outputs, that is, nothing but the angle of deviation and the acceleration or speed of this particular robot. Now, this I have already discussed in much more detail, so I am not going to spend much time on this. And, now, I am just going to show you the robot, the physical model, which I just showed a few minutes ago. Now, this is actually the photograph of the same robot; now, here, you can see that these are the wheels, and the castor we cannot see.
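The attractive and repulsive potentials above yield forces as negative gradients of the potentials. A minimal sketch (my own, assuming unit gain constants zeta; not the experiment's actual code):

```python
import numpy as np

def attractive_force(x, goal, zeta_att=1.0):
    """Force from U_att = 0.5 * zeta_att * d_goal(x)^2.
    This is -grad(U_att): it pulls the robot towards the goal."""
    return -zeta_att * (x - goal)

def repulsive_force(x, obstacle, d0, zeta_rep=1.0):
    """Force from U_rep = 0.5 * zeta_rep * (1/d - 1/d0)^2 inside the
    circle of influence of radius d0; zero outside it."""
    diff = x - obstacle
    d = np.linalg.norm(diff)
    if d >= d0:
        return np.zeros_like(x)
    # -grad(U_rep): pushes the robot away from the obstacle.
    return zeta_rep * (1.0 / d - 1.0 / d0) * (1.0 / d**2) * (diff / d)

x = np.array([0.0, 0.0])
goal = np.array([5.0, 0.0])
obstacle = np.array([1.0, 0.5])
total_force = attractive_force(x, goal) + repulsive_force(x, obstacle, d0=2.0)
```

The robot moves under this combined force; once it leaves an obstacle's circle of influence, only the goal's attraction remains.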
Now, this is nothing but the antenna used for the radio frequency module. Now, this robot is wireless; so, we will have to use some sort of RF module, or radio frequency module, which I am going to discuss in detail; and, with the help of this particular antenna, we are going to send signals to the controller for this particular motor. Now, as I told, to control the movement of this particular wheel, we have got a motor. The motors are not visible; they are inside, and each of these motors is connected to the controller; and this robot is having a micro-controller, so that a little bit of calculation, a little bit of decision, it can take with the help of that particular motion planning algorithm. Now, let us see how it works. Now, here, actually, we are going to take the help of one camera. So, this is nothing but a CCD camera, and this is the stand of the camera. So, here, the field is within the view of that particular camera, and this is the field on which we are going to carry out this particular experiment. So, on this particular field, we have got the said robot, say, the robot is denoted by R, and we have got some obstacle, say, it is denoted by O. Now, let us see how this particular robot can take the decision just to avoid collision with these particular moving obstacles. Now, here, we can see that this is nothing but the My-Vision board; so, this is the My-Vision board. And, this particular hardware is for carrying out the image processing online, within a fraction of a second. Now, this particular My-Vision board, we will have to put in the CPU or the computer, and I am just going to tell you the method through which we carry out this particular experiment. So, this My-Vision board has to be put in this particular CPU.
Now, let us see how we can send this particular information of the environment, so that we can carry out this particular experiment; and this is actually the whole view of this particular robot. Now, let us see how we can implement this particular principle of motion planning to make it intelligent. Now, this shows actually the experimental set-up we developed; now, this is nothing but the field. So, this is nothing but the field in the real experiment; the same field I am just drawing here and, as I told, we have got this particular robot. So, this is nothing but the robot, and we have got the obstacle here. Now, let us see how we can take the decision to avoid collision with this particular moving obstacle. Now, here, we use one camera that is called the overhead camera; I can also use an onboard camera; that means, the camera can be mounted on the body of this particular robot. Now, here, we are just going to put one overhead camera, and this camera has to be calibrated. So, the camera calibration is the first task; actually, the quality of the image collected with the help of the camera depends on a number of parameters. For example, it depends on what should be the focal length of the lens; it depends on some scaling factors and some other factors; those factors are to be determined with the help of the calibration. So, this particular camera has to be calibrated first and, supposing that this camera has been calibrated, now, with the help of the camera, we can collect information about this particular environment; that means, we can take snaps of this particular environment at a regular interval. Now, depending on the speed of this particular camera, we can take snaps of this particular environment at a regular interval.
And, this particular information, that is, the information of the environment or this particular image, will pass through the BNC cable and, through this BNC cable, the information of the image or the environment will go to the CPU or this particular computer. Now, here, inside this particular CPU, we have put that My-Vision board, or the image processing software. So, the information of the image will enter this My-Vision board, and there will be some sort of image processing. And, through this image processing, the principle of which I have already discussed in much more detail, we can find out the information of this particular environment. That means, I can find out the distance between the robot and this particular obstacle, and the angle through which this obstacle is moving towards the robot; and these two are nothing but the inputs of the motion planner. And, once we have got the inputs of the motion planner, now the motion planning algorithm, that is, the potential field method, is going to determine what should be the outputs; that means, what should be the deviation and what should be the acceleration or the speed of the robot. And, using this particular deviation angle and the speed of the robot, through some small programming, we can find out what should be the RPMs of the two wheels. Now, once we have calculated these particular RPM values, what we do is: this particular information will pass through this particular radio-frequency module, that is nothing but the RF board or the RF module. And, through this RF module, we are going to pass the information to this particular robot. And, on this particular robot, we have got the antenna; so, through this wireless communication, the information regarding the RPM or the speed requirement of the two wheels will be passed to the controllers of the motors, which are connected to the two wheels of the robot.
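The "small programming" that converts the planner's output into the RPMs of the two wheels can be sketched with standard differential-drive inverse kinematics. This is a generic sketch; the wheel radius and axle length below are hypothetical values, and the lecture does not give the exact conversion it used:

```python
import numpy as np

def wheel_rpms(v, omega, wheel_radius, axle_length):
    """Differential-drive inverse kinematics: convert a desired forward
    speed v (m/s) and turning rate omega (rad/s, anticlockwise positive)
    of the robot into the RPMs of the left and right wheels."""
    v_left = v - omega * axle_length / 2.0
    v_right = v + omega * axle_length / 2.0
    to_rpm = 60.0 / (2.0 * np.pi * wheel_radius)
    return v_left * to_rpm, v_right * to_rpm

# Pure forward motion: both wheels turn at the same RPM.
left, right = wheel_rpms(v=0.5, omega=0.0, wheel_radius=0.05, axle_length=0.3)
```

Consistent with the lecture's remark on turning: for a left turn (omega positive), the right wheel gets the higher RPM.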
And, now, with the help of this controller, like the PID controller or PI controller, this particular motor is going to rotate the wheel, and we will be getting this particular movement at the two wheels of the robot. And, consequently, the robot will be able to move in the forward direction or in the backward direction; it will be able to take some turn, either clockwise or anti-clockwise. So, this shows actually the photograph of this experimental set-up; this is nothing but the overhead camera. And, this is the field, and we can see that we have got this particular robot here and we have got the obstacle here. And, this is the CPU, which we use for carrying out this experiment, and this is the display of that particular computer. And, let us see how we can carry out this particular experiment. Now, to carry out the experiment, as I told, these are the different steps, which are to be followed; for example, the first step is, we will have to calibrate this particular camera. Then, we can go for some sort of online image processing, which I have already discussed; then, we will have to activate the motion planning approach; that means, here, we are going to use the potential field approach. Then, there must be some wireless communication through the radio frequency module, so that we can pass the information of the required RPMs at the two wheels to the respective controllers of the motors; and, with the help of this particular controller, the motor is going to generate that particular required motion or the required torque. And, then, we will be able to get some movement of the robot. Now, I am just going to show you one video, just to show you this particular experiment, the way we carried out this particular experiment. Now, here, I just want to acknowledge that we got one DST project, that is, a Department of Science and Technology, Government of India project.
With the help of this particular project, actually, we conducted this particular experiment, and the video of this particular experiment I am going to show you. And, here, one SRF, Dr. Nirmal Baran Hui, worked on this particular project to develop this particular experimental set-up. So, we are just going to show the video of our experiment, like how we could develop the intelligence of the robot; that means, how we could design and develop an intelligent wheeled robot, the mobile robot. So, I am just going to show you that particular video. Now, this is the robot, which I actually showed you, and here, the robot is now moving in the forward direction. And, it will be able to move in the forward and the backward directions. And, now, I am just going to show you the way it can avoid collision with the two static obstacles. Now, this particular robot, the wheeled robot, can move in the forward direction and backward direction. Now, I am just going to show you how we can avoid collision with one static obstacle; so, this is nothing but the static obstacle. So, with the help of this particular robot, with the help of the motion planning algorithm, we will be able to find out the collision-free path. Now, this is another example of how to avoid collision with the two static obstacles. So, the robot is going to avoid collision with both the static obstacles, and it is going to reach the goal. Now, I am just going to show you how to use the potential field method. Now, you can see that the robot has become almost stationary and, after it has crossed the obstacle, the moving obstacle, now it is moving with high speed to reach that particular goal. Now, this is the way, with the help of the motion planning algorithm and with the help of this controller, we can incorporate intelligence into this particular robot.
Now, here, although we used some sort of motion planning algorithm to make it intelligent, sometimes we take the help of some sort of reactive control along with some sort of motion planning algorithm, just to incorporate intelligence into this particular robot. Now, the principle which we have used here to make this particular wheeled robot an intelligent one, more or less the similar type of principle we can use to make different types of mobile robots intelligent. For example, we can make, say, multi-legged robots, like 4-legged robots, 6-legged robots or 2-legged robots, intelligent ones, using more or less the similar type of principle. Now, here, if I just consider a legged robot, besides this motion planning, we will have to consider the gait planning also. Now, we are not going to discuss the gait planning and other things in detail. Now, if you are really interested, you can have a look into the textbook, that is, the Fundamentals of Robotics, written by me. Now, that particular book is the textbook for this particular course, as I mentioned earlier. So, we will have to concentrate on that particular textbook just to collect more information. Thank you.
Robotics_by_Prof_D_K_Pratihar
Lecture_26_Robot_Dynamics_Contd.txt
So, this is the expression of the kinetic energy of the differential mass, that is, dk_i. Now, once we have got it, I can find out the kinetic energy of the i-th link, that is, k_i, and that is nothing but the integration of dk_i, and that is equal to half trace of the same expression; the only thing I have done is I have put one integration sign here. So, this can be written as half trace of summation a equals to 1 to i, b equals to 1 to i, U_ia, then this particular expression, which is nothing but J_i, and this J_i is nothing but the moment of inertia, which I have already derived, then U_ib transpose, q_a dot q_b dot, where the moment of inertia J_i is nothing but this particular expression. Now, the total kinetic energy for the whole manipulator is: K is nothing but summation i equals to 1 to n of k_i, that is, summation i equals to 1 to n of half trace of summation a equals to 1 to i, b equals to 1 to i, U_ia J_i U_ib transpose, q_a dot q_b dot, something like this. So, this is the expression for the kinetic energy for the whole manipulator. Now, the kinetic energy for this particular manipulator having n links is nothing but this particular expression. So, this can be rearranged, and it can be rewritten in a slightly different way. So, here, we can write down: K is nothing but half summation i equals to 1 to n, summation a equals to 1 to i, summation b equals to 1 to i, trace of U_ia J_i U_ib transpose, q_a dot q_b dot. So, this is the way actually we can find out the expression for the total kinetic energy for the whole robot. Now, we are going to find out the potential energy of the manipulator. How to determine the potential energy? Now, the potential energy P_i is nothing but minus m_i g bar r_i with respect to 0.
Now, this particular g is nothing but the acceleration due to gravity and, truly speaking, this is a vector having components g_x, g_y, g_z. And, here, I have put this 0 very purposefully, which I am going to explain. Now, this g_x, g_y, g_z are nothing but the three components of the acceleration due to gravity. Now, at a particular place, this g_x and g_y are negligible, and that is why we generally consider only g_z, which is acting vertically downward. Now, if it is acting vertically downward, we will have to do the sign correction accordingly. And, this T_i with respect to 0 is the transformation matrix; and this T_i with respect to 0, multiplied by this r_i with respect to i, is nothing but the position of this particular link, that is, r_i with respect to 0. So, I can find out the expression of this particular potential energy. Now, if you see the dimension of this T_i with respect to 0, this is nothing but a 4 cross 4 matrix. If I see r_i with respect to i, this is nothing but a 4 cross 1 matrix, because x_i, y_i, z_i, and then I put a 1 here; so, this is nothing but a 4 cross 1. So, if I multiply 4 cross 4 and 4 cross 1, I will be getting a 4 cross 1 matrix. And, this 4 cross 1 matrix I will have to pre-multiply by a 1 cross 4 matrix; then only I will be able to do this particular multiplication and, that is why, after g_x, g_y and g_z, I put one 0 here, just to make a 1 cross 4 matrix, so that I can multiply it with this 4 cross 1 matrix. Now, if you multiply, then we will be getting the expression for this particular potential energy. Now, P is nothing but summation i equals to 1 to n of P_i, that is, summation i equals to 1 to n of minus m_i g bar T_i with respect to 0, r_i with respect to i. So, this particular Lagrangian is nothing but the kinetic energy minus the potential energy. So, we know the expression for the kinetic energy, and this is the expression for the potential energy.
So, I can write down kinetic energy minus potential energy. So, this is the expression for the whole Lagrangian for the robotic system. And now, what we do is, we go back to the Lagrangian equation: d/dt of the partial derivative of L with respect to theta_i dot, minus the partial derivative of L with respect to theta_i, is nothing but tau_i. So, this particular expression I am going to use. Now, here, in place of theta_i, we can put this particular q_i. So, this is actually the same expression which I showed; let me go back to that particular expression once again. So, this is the expression which I am going to use. And now, if I substitute this particular Lagrangian, find out the partial derivative of L with respect to q dot, take d/dt of that, and separately find out the partial derivative of the Lagrangian with respect to q, then we will be able to find out the final expression for this particular joint torque, which is nothing but this. Now, in this particular expression for the joint torque, we have got three distinct components. The first is the inertia term, which depends on the mass distribution of the links; the second is the Coriolis and centrifugal term; and the third is nothing but the gravity term. Now, the inertia term D_ic is nothing but summation j equals the maximum of i and c, up to n, of trace of U_jc J_j U_ji transpose, where i, c vary from 1, 2 up to n. So, this is nothing but the inertia tensor, from which we can find out this D_ic. Then, the Coriolis and centrifugal term is h_icd, that is, summation j equals the maximum among i, c, d, up to n, of trace of U_jcd J_j U_ji transpose, where i, c, d vary from 1, 2 up to n.
And the gravity term C_i is equal to summation j equals i to n of minus m_j g bar U_ji r_j with respect to j, where i varies from 1 to n. Now, using these particular expressions, we can find out the expression for the joint torque or the force. So, by using this particular expression, we will try to derive the expression for the joint torque or the force which is required at the different joints. And we will take one numerical example, and with the help of this particular numerical example, I am going to find out the big expression for this particular joint torque. Thank you.
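Putting the three terms together, the joint torque has the form τ_i = Σ_c D_ic q̈_c + Σ_c Σ_d h_icd q̇_c q̇_d + C_i. A minimal sketch of that assembly, assuming the coefficient arrays D, h and C have already been computed (all numbers in the example are made up, not derived from a real arm):

```python
def joint_torques(D, h, C, qdd, qd):
    """tau_i = sum_c D[i][c] qdd[c] + sum_c sum_d h[i][c][d] qd[c] qd[d] + C[i].

    D: inertia matrix, h: Coriolis/centrifugal coefficients, C: gravity terms,
    qdd: joint accelerations, qd: joint velocities.
    """
    n = len(C)
    tau = []
    for i in range(n):
        inertia = sum(D[i][c] * qdd[c] for c in range(n))
        coriolis = sum(h[i][c][d] * qd[c] * qd[d]
                       for c in range(n) for d in range(n))
        tau.append(inertia + coriolis + C[i])
    return tau
```

For a hypothetical single joint with D = [[2]], h = [[[0.5]]], C = [3], q̈ = 1 and q̇ = 2, this gives τ = 2·1 + 0.5·4 + 3 = 7.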
Robotics by Prof. D. K. Pratihar
Lecture 34: Robot Vision
We are going to start with a new topic, that is, topic 7, on Robot Vision. Now, this robot vision is also known as computer vision, and in robot vision we use the principles of digital image processing. Now, the aim of robot vision, or computer vision, is to help the robot collect information about its environment. Now, let us see how a robot can collect information about the environment with the help of a camera. Now, before I discuss further, let us see the way we human beings collect information about the environment with the help of our eyes. So, with the help of our eyes, we take a photograph or snap of the environment, there is a lot of processing in our brain, and consequently we can identify that this is object a and this is object b present in a particular scenario, or in a particular image or photograph. Now, exactly this particular principle we are going to copy in an artificial way in robot vision, or computer vision. Now, once again, let me repeat: the purpose of robot vision is to identify and interpret the different objects present in a particular image or photograph. Now, let us see how to carry out this digital image processing, or robot vision, or computer vision. Now, the purpose, as I told, is to extract, characterize and interpret objects present in a scenario or a photograph with the help of a camera. Generally, we use some sort of CCD camera, that is, the charge-coupled device camera. So, we capture the image with the help of the CCD camera. So, this is actually step one of computer vision or robot vision, and once we have got this particular image of the environment collected with the help of the camera, it looks like this. Supposing that this is the computer screen.
So, this is the positive Y direction and this is the positive X direction. Now, here, what I do is, with the help of the camera, the image of the environment whose photograph we have taken is transferred to the computer, and on the computer screen we will be able to see this type of image. Now, what I do is, this particular computer screen is divided into a large number of small segments: along the Y direction, we take M number of divisions; along the X direction, we take N number of divisions; and this is nothing but the origin, that is, (0, 0). Now, this computer screen is divided into M cross N such small subdivisions; for example, I can take M equals N equals 512 or 256 or 128 or 64 or 32. Now, if I take M equals N equals 512, that means here there are 512 divisions and here, along this particular X direction, there will also be 512 divisions. That means this particular area is divided into 512 multiplied by 512 small image elements, and this image element, for example, this is one small image element, is known as the image element, or the picture element, or the pixel, or in some of the literature it is also known as pel. So, we have got so many such pixels, and now we will have to concentrate on these particular pixels. Now, supposing that with the help of the camera we have got this type of image; for example, this is one image which I have got on this particular computer screen, collected with the help of the camera and transferred to the display of this computer. Now, if this is a black and white picture, the difference between the black and white is actually the amount of light intensity; for example, if I consider that this is the black object, the light intensity is less, and on the white portion, the light intensity will be more.
Now, depending on this particular light intensity and the difference in light intensity, we can identify the black and white for example, say if we take the photograph, the black and white photograph of a human being, the head portion or the hair portion will be black and the face will be slightly white-ish, if I compare the light intensity values of the hair and the face, the light intensity value of the face will be more compared to that of the hair part, that is the black hair part. So, this is the way actually, we can find out the difference between the black object and white object due to the difference in light intensity. Now, supposing that we have got this particular picture and now, actually we are going to find out, what should be the light intensity at each of these particular pixels. Now to do that actually, what I do is, we try to take the help of step 2. Now, this particular step two is nothing, but actually as follows: we take the help of one electron beam scanner and, we do the scanning along the Y direction and this particular the X direction just to collect the light intensity values at each of these pixels. Now, let us say how to do it, now what I do is, say we are just doing scanning in the positive Y direction, supposing that I want to find out what should be the light intensity. So, at this particular pixel, what I do is, we try to actually do the scanning along this particular the Y direction and, how to do this particular scanning, we take the help of one electron beam scanner. Now, this electron beam scanner is something like this, now here on this electron beam scanner there are some photo-sites. So, we have got a large number of photo-sites here for example, if there are 512 divisions so, I can consider the 512 photo-sites. 
And this particular electron beam scanner is put, say, just below that, and we do the scanning. The moment we do this particular scanning along this particular Y direction, what will happen is, due to the variation of the light intensity, different amounts of electrical charge will be accumulated on these photo-sites; for example, say here I have got a photo-site, I have got another photo-site here, another photo-site here. So, these are all photo-sites, and now I am doing the scanning in this particular direction. Now, if the light intensity is more, more charge will be accumulated in the photo-site; on the other hand, if the light intensity is less, that means I am passing through the black region, less charge will be accumulated in this particular photo-site. Now, what I can do is, we can measure how much charge is accumulated at each of the photo-sites. And here, we prepare one plot, and this particular plot is nothing but light intensity versus the Y direction; now, here, each of these points indicates the pixels. So, starting from here, pixel-wise I can plot, along this particular Y, the variation of these particular light intensity values, and this particular information is nothing but analogue information. Now, this is what is happening along the Y direction. And supposing that I am concentrating here; so, corresponding to this particular pixel, I am getting that this is the amount of the light intensity, and supposing that it is denoted by, say, L_Y. The same thing we do along this particular X direction. So, what I do is, we try to do the scanning in this particular direction, in the positive X direction, and once again we will pass through the same pixel.
And, exactly in the same way, if I plot, supposing that this is the light intensity and this is the X direction, once again I have got all such pixels. So, for each of these particular pixels, I will be able to find out the analogue plot of the light intensity. For example, say I am moving along this particular positive X direction; so, there is every possibility that I will be getting some sort of profile of light intensity like this. And once again, if I concentrate on the same pixel, supposing that I am here, I will try to find out the light intensity value corresponding to that particular pixel, and supposing that that particular numerical value is nothing but L_X. Now, corresponding to this particular pixel, I have got this L_Y and this particular L_X. So, after that, what I do is, we try to find out the square root of L_X square plus L_Y square. So, I will be getting some numerical value, some real value, and we try to find out the nearest integer. Now, this nearest integer will be the light intensity value corresponding to this particular pixel. The same process we follow for each of these pixels, so we can find out the light intensity value corresponding to each of these pixels. Now, here, corresponding to this particular image, I have got some sort of light intensity values. Now, the step three, whatever I mentioned, is nothing but this: the image is stored as an array of pixels. And at each pixel, we try to mention the light intensity value, and this particular process of storing an image, or a photograph, with the help of some numerical values of light intensity is what is known as frame grabbing.
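The combination step just described — take the two scan readings L_X and L_Y for a pixel, form the square root of L_X square plus L_Y square, and round to the nearest integer — fits in a couple of lines. This is a sketch of the stated rule only, not of any particular scanner hardware:

```python
import math

def pixel_intensity(l_x, l_y):
    """Combine the X-scan and Y-scan readings into an integer intensity:
    nearest integer to sqrt(l_x^2 + l_y^2), as described in the lecture."""
    return round(math.sqrt(l_x ** 2 + l_y ** 2))
```

For example, readings of 3.0 and 4.0 give sqrt(9 + 16) = 5.0, so the stored intensity is the integer 5.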
In fact, unless we do the frame grabbing, we will not be able to carry out any calculation with the help of this computer. Now, because the computer does not know anything except numbers, what I will have to do is, corresponding to that particular image, I will have to find out the corresponding matrix of light intensity values. And this particular process is known as frame grabbing. And once that particular frame grabbing is done, we are in a position to represent this particular image in the form of this type of matrix. Now, here, if you see, for example, we consider that there are capital M number of divisions along this particular positive Y direction, and along this, the positive X direction, we consider capital N number of divisions. Now, here, this f (0, 0) is nothing but the light intensity corresponding to the pixel whose coordinate is (0, 0). Similarly, this f (N minus 1, M minus 1) is nothing but the light intensity value corresponding to the pixel whose coordinate is (N minus 1, M minus 1). So, for each of these pixels, we can find out the light intensity values, and these values are nothing but integer values. Now, here I have written: f (x, y) indicates the light intensity of the image at the point (x, y). So, similarly, this f (1, 1) is the light intensity value at the point whose coordinate is (1, 1). So, this is the way we can represent one image with the help of a matrix of numerical values, and these numerical values are nothing but the light intensity values in integer form. Now, let us see how to proceed further. Now, here, what we will have to do is, if you see this particular matrix of light intensity values, this particular matrix may not be very accurate or correct, and the reason is very simple.
The quality of this particular image, or of the light intensity values, depends on a number of parameters; for example, it depends on the level of illumination at which we are collecting that particular picture, it depends on the angle at which I am collecting that particular picture, and it depends on the expertise of the operator. So, the data which you have got corresponding to this particular image may not be very accurate. So, there could be some noise, there could be some sort of imprecision, there could be some sort of uncertainty, and that is why we take the help of one step, that is called preprocessing. So, we try to do some sort of preprocessing, and if you do this preprocessing, we can remove this particular noise from the matrix. So, the purpose of preprocessing is to remove noise from these particular light intensity values; or, sometimes there is a possibility that some part of the information will be lost from this particular picture, and we try to restore that particular information. So, if you want to reduce the noise in this particular data, or if you want to restore some sort of lost information, you will have to take the help of some sort of preprocessing. Now, if you see the literature, in fact, we have got different methods for this particular preprocessing. Now, here I am just going to discuss the principle of each of these particular preprocessing methods. So, this is actually the thing which we will be getting: this particular matrix corresponds to the image. Now, let us see how to do this particular preprocessing just to reduce the noise in the data. So, methods of preprocessing: as I told, there are several methods, and out of all such methods, I am just going to discuss a few very popular methods; for example, masking is a very popular method for preprocessing. Now, the method of masking is very simple. What I do is, supposing that we have this f (x, y).
So, this is nothing but the light intensity value at the pixel whose coordinate is (x, y). And on this particular light intensity value, we use one operator, that is, O. So, this operator O is going to work on this f (x, y), that is, the light intensity value, which is nothing but the input intensity, and we are going to find out the preprocessed intensity, that is, P (x, y). So, our aim is to determine this particular P (x, y), that is, the preprocessed data. Now, let us see how to determine this particular P (x, y). Now, here, I am just going to concentrate on a particular pixel and its neighborhood. Now, as I told, f (X, Y) indicates the light intensity value at the pixel whose coordinate is (X, Y). Now, this is the positive direction of Y, and this is the positive direction of X. So, starting from here, if I move along this particular direction, Y is going to increase, so this will be (X, Y plus 1). Similarly, here, this will be (X, Y minus 1), because this is in the negative direction of Y. Now, similarly, starting from here, if I just go down, this particular X is going to increase, because this is the positive direction of X. So, this will become f (X plus 1, Y), and the coordinate of this particular pixel will be (X minus 1, Y). Similarly, I can also find out the coordinate of this. Now, I concentrate on this particular pixel, denoted by Q, whose coordinate is (X, Y) and whose light intensity is f (X, Y). Now, this particular pixel has got two horizontal neighbors, it has got two vertical neighbors, and it has got four diagonal neighbors. So, once again, let me repeat: for a particular pixel, there are two horizontal neighbors, two vertical neighbors, and four diagonal neighbors. So, we will have to concentrate on these particular horizontal, vertical and diagonal neighbors.
Now, let us see how to carry out this particular preprocessing. Now, here, in masking, what we do is, we take the help of a mask, and this particular mask is nothing but a template. So, by mask we mean a template, and with the help of this particular template, we can do this masking operation, or this particular preprocessing. Now, this shows a typical 3 cross 3 mask, and these W values are nothing but the coefficients of the mask, for example, W_1, W_2, W_3 up to W_9. So, this is a 3 cross 3 mask, so there are 9 such W values, and these are nothing but the coefficients of this particular mask. Now, here you can see that I have put plus 8 in the center, and around it I have put minus 1, minus 1, minus 1, minus 1, minus 1, minus 1, minus 1, minus 1. So, if we add all such minus 1 values, I will be getting 3 plus 2 plus 3, that is, 8. So, I have got plus 8 minus 8, that is equal to 0. So, the sum of all these particular coefficient values will be equal to 0. So, the mask coefficient values are selected in such a way that the sum of these particular mask coefficient values becomes equal to 0. So, as I told, this is one typical 3 cross 3 mask, which is very frequently used for preprocessing. Now, let us see how to implement it. So, this is, say, one image, and these are the light intensity values at the different pixels, and our aim is to find out what should be the preprocessed value corresponding to this particular f (X, Y). And if I want to find out the corresponding preprocessed value for this particular f (X, Y), what we do is, we take the help of one template, or mask; let us consider one 3 cross 3 mask, or 3 cross 3 template. And, as I discussed, the coefficients are W_1, W_2, W_3, then W_4, W_5, W_6, then W_7, W_8, W_9.
So, these are nothing but the mask coefficients; now, how to find out the preprocessed value corresponding to this? The method is very simple. What we do is, this is actually the corresponding preprocessed value I will have to find out, and you concentrate on the mask center; supposing that this is the mask center. So, this particular template, or mask, you bring it here, and this particular mask center is going to coincide with this particular pixel. So, what I am going to do is, I am just going to put this particular mask here, something like this, and here I am just going to write down all such mask coefficients: W_1, W_2, W_3, then W_4, W_5, W_6, then W_7, W_8 and W_9. And after that, what we do is, we multiply this particular W_1 with f of (X minus 1, Y minus 1), plus W_2 multiplied by f of (X minus 1, Y), plus W_3 multiplied by this particular light intensity value, plus W_4 multiplied by this light intensity value, plus W_5 multiplied by this, plus W_6 multiplied by this, plus W_7 multiplied by this f, plus W_8 multiplied by this f, plus W_9 multiplied by this f, and we sum them up. And then we will be getting some numerical value, and that particular numerical value is nothing but the preprocessed value corresponding to this particular light intensity value. So, this is the way we can find out the preprocessed value corresponding to that particular pixel. Now, the same thing, whatever I discussed, I have written here. So, P (x, y) is nothing but the operator O acting on f (x, y), and if we remember, it is W_1 multiplied by this f, W_2 multiplied by this f, and so on; whatever I discussed, the same thing I have written here. So, this is the way we can find out the preprocessed value corresponding to that particular masking.
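The weighted sum at an interior pixel can be sketched as follows. The mask is the plus-8/minus-1 template from the lecture; the image values in the example are made up, and the function name is mine. Because the coefficients sum to zero, a uniform neighborhood gives a preprocessed value of 0, which is exactly why the coefficients are chosen that way.

```python
def masked_value(img, x, y, W):
    """P(x, y) = sum of W[r][c] * f(x-1+r, y-1+c) for a 3x3 mask
    centred on the interior pixel (x, y); x indexes rows, y columns,
    matching the lecture's convention that X increases downward."""
    return sum(W[r][c] * img[x - 1 + r][y - 1 + c]
               for r in range(3) for c in range(3))

# The 3x3 mask from the lecture: +8 in the centre, -1 around it.
LAPLACIAN = [[-1, -1, -1],
             [-1,  8, -1],
             [-1, -1, -1]]
```

On a uniform 3x3 patch of intensity 5 the result is 0, while a patch whose centre is brighter than its surroundings (9 against 5) gives 8·9 − 8·5 = 32, so the mask responds to local intensity changes.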
Now, here I am going to take another very small example: how to determine the preprocessed value corresponding to this particular light intensity value. So, we have already seen how to determine the preprocessed value here, but let us discuss how to determine the preprocessed value corresponding to this particular pixel, whose light intensity value is nothing but f of (X minus 1, Y minus 1). Now, as I told, we will have to take the help of the mask, that is, the 3 cross 3 matrix. So, these are W_1, W_2, W_3, W_4, W_5, W_6, W_7, W_8 and W_9. So, our aim is to determine what should be the preprocessed value corresponding to this particular pixel. So, what we do is, we concentrate on this particular mask center. So, once again, this is the mask center, and this particular mask center is made coincident with this particular pixel; that means, W_5 will come here, and this will be the W_8 position, sort of thing. So, I will have to put the mask something like this. So, this is the way I can put this particular mask; here, this particular W_5 will be here, so I will have to do something like this. And now we can see what we have got: we put this particular mask here. So, this is the mask which I am going to put. So, this corresponds to W_1, this is W_2, this is W_3, this is W_4, and here you have got W_5, and we have got W_6 here, then comes W_7 here, then comes W_8 here, and this is W_9, and our aim is to find out the preprocessed value corresponding to this. Now, if this is the scenario, the contribution of these particular W_1, W_2, W_3, W_4, and W_7 will be equal to 0 here, because they fall outside the image. So, now we will have to concentrate only on these particular four: 1, 2, 3, 4. So, only on these four we will have to concentrate.
Now, if we concentrate only on these particular the four, I will be able to find out the preprocessed value is nothing, but W_5 multiplied by f of (x minus 1, y minus 1). This particular thing plus W_6 multiplied by f of (x minus 1, y) plus W_8 multiplied by f of (x, y minus 1) plus W_9 multiplied by f of (x comma y). So, we can find out the pre-processed value corresponding to this particular pixel, the same procedure actually, I can follow at each of the pixels, just to find out the preprocessed value corresponding to that particular the pixel. Thank you.
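The border behavior just described, where mask coefficients that fall outside the image contribute 0, amounts to zero-padding. A sketch of a full preprocessing pass under that convention (the function name and example numbers are mine, not the lecture's):

```python
def mask_image(img, W):
    """Apply a 3x3 mask W to every pixel of img, with coefficients that
    fall outside the image contributing 0, as in the corner-pixel example."""
    rows, cols = len(img), len(img[0])
    out = [[0] * cols for _ in range(rows)]
    for x in range(rows):
        for y in range(cols):
            total = 0
            for r in range(3):
                for c in range(3):
                    xx, yy = x - 1 + r, y - 1 + c
                    if 0 <= xx < rows and 0 <= yy < cols:
                        total += W[r][c] * img[xx][yy]
            out[x][y] = total
    return out
```

On a uniform image the interior response is 0, while at a corner pixel only the four surviving coefficients (the centre plus 8 and three of the minus 1s) contribute, giving (8 − 3) times the intensity.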
Robotics by Prof. D. K. Pratihar
Lecture 16: Robot Kinematics (Contd.)
Now, I am going to discuss once again how to determine this particular T_i with respect to i minus 1, that is, the transformation matrix of frame i with respect to frame i minus 1. Now, I draw the same thing once again: for example, say this is my Z axis and this is my X axis. So, this is Z_i, this is X_i, and supposing that this is my Z_i minus 1, and let me consider this is X_i minus 1. So, starting from here, I will have to reach this; how to do it? So, what I do is, first we take the help of T_A with respect to i minus 1. So, what do I do? We rotate about Z by angle theta. So, here, this is nothing but a_i; I am just going to draw one line parallel to that, and this particular angle is nothing but theta_i. So, we will start from this Z_i minus 1 and X_i minus 1, and the first is T_A with respect to i minus 1. So, what we do is, we draw this, which is nothing but X_A, and this, which is nothing but Z_A, and that is nothing but rotation about Z by theta_i, ok? So, as if I am taking a rotation about Z by an angle theta_i, this X_i minus 1 will take the position of X_A, and this particular Z_A will remain the same along this particular Z_i minus 1. Now, I am going to do one thing, that is, T_B with respect to A. So, from here, I will have to reach this X_B, and this will become Z_B, and this distance is nothing but the offset, that is, d_i. Now, this is nothing but translation along Z by d_i. So, X_A will become X_B and Z_A will become Z_B. Now, I am going to take T_C with respect to B. So, if I draw one parallel line here, this particular angle is nothing but alpha. So, this is the alpha angle. So, what I will have to do is, I am going to take a rotation about X by an angle alpha. So, what will happen to my X_C? X_C will remain the same as X_B, but Z_C will be different from Z_B. So, this will become Z_C, and now I am translating along this particular X by a_i, that is, T_i with respect to C.
So, I am translating along X by a_i. So, from here, I am going to reach this particular Z_i and X_i. This is the way I can find out what T_i with respect to i minus 1 is, ok? So, this is the way I can reach this T_i with respect to i minus 1. Now, if you see this particular sequence: rotation about Z by theta_i and translation along Z by d_i is one thing; then rotation about X by alpha and translation along X by a_i is another. Now, this rotation about Z followed by translation along Z by d_i is nothing but screw Z. Now, this is very simple: for example, say this is the Z axis. So, I am rotating about Z, and if I have got a threaded part here, the screwed part is going to have some linear displacement along this particular Z direction. So, I am rotating about Z and there is translation along Z. So, this is nothing but the screw principle, and that is why the rotation about Z and translation along Z is nothing but screw Z; then a rotation about X and translation along X is nothing but screw X. So, we will have to follow this particular screw rule. Now, you see, I know the expression for rotation about Z by theta_i, I know the expression for translation along Z by d_i, then rotation about X by alpha_i and translation along X by a_i. So, I will be getting some matrices, and if I multiply these matrices, then I will be getting this type of 4 × 4 matrix. Corresponding to each of these particular transformations, I can find out a 4 × 4 matrix. So, there are 4 such matrices, each having 4 × 4 dimensions, and if I multiply, then I will be getting finally this particular 4 × 4 matrix. Now, here, c theta_i is the short form of cos theta_i, s theta_i of sine theta_i, c alpha_i of cos alpha_i, and so on, and d_i is nothing but the offset. So, this is nothing but T_i with respect to i minus 1. So, this is the final matrix which we are going to get.
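The four-matrix product (screw Z followed by screw X) can be verified numerically against the entry-by-entry closed form. This is a sketch; the helper names are mine, not the lecture's, and the angles in the usage note are arbitrary test values.

```python
import math

def matmul(A, B):
    return [[sum(A[r][k] * B[k][c] for k in range(4)) for c in range(4)]
            for r in range(4)]

def rot_z(t):   # rotation about Z by angle t
    c, s = math.cos(t), math.sin(t)
    return [[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]

def trans_z(d):  # translation along Z by d
    return [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, d], [0, 0, 0, 1]]

def rot_x(a):   # rotation about X by angle a
    c, s = math.cos(a), math.sin(a)
    return [[1, 0, 0, 0], [0, c, -s, 0], [0, s, c, 0], [0, 0, 0, 1]]

def trans_x(a):  # translation along X by a
    return [[1, 0, 0, a], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]

def dh_transform(theta, d, alpha, a):
    """T_i^{i-1} = RotZ(theta) TransZ(d) RotX(alpha) TransX(a): screw Z, screw X."""
    return matmul(matmul(rot_z(theta), trans_z(d)),
                  matmul(rot_x(alpha), trans_x(a)))

def dh_closed_form(theta, d, alpha, a):
    """The standard 4x4 D-H matrix, written out entry by entry."""
    ct, st = math.cos(theta), math.sin(theta)
    ca, sa = math.cos(alpha), math.sin(alpha)
    return [[ct, -st * ca,  st * sa, a * ct],
            [st,  ct * ca, -ct * sa, a * st],
            [0.0, sa, ca, d],
            [0.0, 0.0, 0.0, 1.0]]
```

For any choice of theta, d, alpha and a, the two functions agree to floating-point precision, confirming that multiplying the four elementary matrices in the screw-Z, screw-X order reproduces the final matrix quoted above.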
So, I think up to this, it is clear to all of you, but here I have got one query: with the rule which we have followed to derive this particular expression, are we not violating the rule for the composite rotation matrix? According to the rule for the composite rotation matrix, whatever we state first should go to the end, ok? But here, we stated first the rotation about Z by theta_i, and that I have written at the beginning, not at the end. So, my question is, are we violating the rule for the composite rotation matrix? The answer is no. Now, the reason why that particular answer is no is as follows: if you follow the rule for the composite rotation matrix, what we would be doing is nothing but T_i minus 1 with respect to i. If you want to find out that, with respect to i, whatever I stated first I will have to write at the end: the rotation about Z by theta_i, then the translation along Z by d_i, then the rotation about X by alpha_i, then the translation along X by a_i. So, according to the rule for the composite rotation matrix, this should be the sequence: whatever I stated first should go to the end, followed by this, followed by this, followed by this, and truly speaking, this is nothing but T_i minus 1 with respect to i. But what we are trying to find out is just the reverse, that is, T_i with respect to i minus 1, and all of you know that this particular T_i with respect to i minus 1 is nothing but the inverse of T_i minus 1 with respect to i, ok? So, we are not violating the rule for the composite rotation matrix; according to that rule, we get T_i minus 1 with respect to i, and as I told, what we want is the inverse of that, and if we try to find out the inverse of this particular matrix, we will be getting it.
So, this is the way, actually, we can find out the final matrix by using that particular rule for the composite rotation matrix and using the Denavit-Hartenberg notation, we can find out the expression for this transformation matrix. Now I am just going to solve one example, one very practical example by using the rules, which I have already discussed. So, how to assign the coordinate system and how to carry out the kinematic analysis? Now, here, for simplicity, I am just going to consider a very simple problem, a problem of 2 degrees of freedom serial manipulator. So, in this 2 degrees of freedom serial manipulator, here we have got 2 joints, this is joint 1 and this is joint 2, the link 1 is having the length L_1, the link 2 is having the length L_2 and this is, say, the wrist joint or say, approximately this is the end-effector. And, here, the joint angles are nothing but theta_1 and here with respect to the previous. So, this is nothing but theta_2. So, theta_1, theta_2 are nothing but the joint angles, I have already defined the joint angles. Now, let us try to assign the coordinate system first, according to the D-H parameters setting rule or the Denavit-Hartenberg notations, now, let us try to concentrate on the first joint. So, as I told that we have got two joints here, two motors here, one motor is here, another motor is here, here, there is no motor. Now, here, how to assign the coordinate system? The first thing we will have to see is, we will have to see the reference coordinate system. So, this is nothing but the reference coordinate system, we can see that the Z × X is nothing but Y and this is actually the reference coordinate system. So, with respect to this, by following these, I will have to assign the coordinate systems according to the rule. So, this particular joint is a revolute joint and this is a rotary joint, this is also a revolute joint, this is a rotary joint. 
Now, this is in Cartesian X and Y and this particular end-effector is having the coordinate (q_X, q_Y) and you forget about Z, as this is in two dimensions. Now, here, actually what we do is, we will have to find out first the Z. So, Z is the axis about which I am taking the rotation. So, Z will be what? Z will be perpendicular to this board. So, Z will be perpendicular to the board and here, also the Z will be perpendicular to the board and X is what? The Z is here at the first joint and the Z is here at the second joint, they are parallel. So, their mutually perpendicular direction is this direction. So, X should be along that particular direction. So, Z is perpendicular to the board away from the board and X is in this particular direction and Z × X, so, this will be my Y_0 direction and Z is perpendicular to the board, which is not shown here and it is coming out of the board. Similarly, here, the Z is perpendicular to the board coming out of the board, and this will be my X direction and this will be my Y direction and what we do is, whatever coordinate system we assign here, the same thing we copy at the last joint, although here there is no motor. So, in place of X_1, I will have to write X_2, and in place of Y_1, I will have to write Y_2. So, this X_2 and Y_2 are nothing but copies of this X_1 and Y_1. So, this is an extra coordinate system, we are adding at the end, although there is no such motor here. There are two motors, as I told here, I have got motor_1 and here, I have got motor_2, just to create the rotary movement. So, this is how to assign the coordinate system, according to the D-H parameters setting rule. Now, once we have actually assigned this particular coordinate system, now I can prepare the D-H parameters table, this is called the D-H parameters’ table. Now, here, what we write is your frame_1, frame_2.
Whenever we write frame 1, what you will have to do is, one with respect to 0 and whenever we consider 2, that is 2 with respect to the previous, that is, 1. So, this particular sequence is very important, for example, first we consider theta_i next we consider d_i, if you remember the screw Z rule. Now, screw Z, there is a rotation about Z by an angle theta_i, there is translation along Z by d_i. So, I am following this particular screw Z, then I am going to follow the screw X, that is the rotation about X and then translation along X. So, this is nothing but screw X. So, screw Z screw X. So, that particular rule, we will have to follow, while writing down the link and joint parameters in the D-H parameters’ table. In some of the textbooks, they do not follow this particular rule, they write in a slightly different fashion like they first write alpha, theta, then d_i, a_i something like that, but if you write in that particular fashion, whenever we are going to write down the forward kinematic equation. So, you will have to make the correction. But, if you follow this particular sequence like the screw Z and screw X, this particular sequence, you need not make any change and directly, you can write down the forward kinematic equation, that I am going to show. Now, how to determine these numerical values or how to find out these variables; now, as I told 1 means what? 1 means 1 with respect to 0, what is theta_i? This is a rotary joint, this is a revolute joint and for this particular revolute joint for example, this type of revolute joint, sort of thing. So, this particular angle is the variable, ok? So, this theta_i has to be a variable. So, at this particular joint, this is my theta_i and that is variable, then what is d_i? By definition, if you remember, d is the distance between two X, measured along Z. Now, this is X_0 direction and this is X_1. So, if I extend X_1 they are going to intersect. 
So, X_0 and X_1 are going to intersect, then the distance between X_0 and X_1 is 0. So, here, I have put 0. Now, then comes alpha, alpha is the angle between two Zs, if you remember. So, this is actually my Z_0, which is perpendicular to the board, Z_1 is perpendicular to the board and they are parallel. So, their included angle is 0. Now, next is your a_i. So, a_i is the distance between two Zs. So, here, I have got one Z, here I have got one Z and along X, I can find out the mutual perpendicular distance and that is nothing but the length of the link. So, a_i is nothing but the length of the link. Next, we can find out frame_2, that is, 2 with respect to 1. So, once again, I have got a rotary joint here, a revolute joint here. So, the variable is theta_i; next is d, that is, the distance between two Xs. So, X_1 and X_2 are in the same line. So, the distance between them is 0, the next is your alpha. Hypothetically, we have assumed that this is your Z_2 and this is Z_1 and they are parallel. So, the angle is 0, then comes the length of the link. So, this is your Z_1 and Z_2, they are parallel. So, this particular L_2 is nothing but the length of the link. So, we can find out all the entries of the D-H parameters’ table. And, once you have found out the entries for that, now, very easily we can find out, what should be the kinematic equation. So, what I am going to do is: I am trying to find out the kinematic equation; the purpose of the kinematic equation is to represent the position and orientation of the end-effector with respect to the base coordinate system. And, to get it actually, what I will have to do is: I will have to take the help of this type of transformation matrix, that is, T_2 with respect to base. So, this is T_1 with respect to the base. So, T_2 with respect to base is nothing but T_1 with respect to base multiplied by T_2 with respect to 1.
Now, how to find out T_1 with respect to base; now, to write down T_1 with respect to base, you concentrate here. And, you just move along this particular direction, without making any change. So, T_1 with respect to base, the first is theta_1, that is your rotation about Z by theta_1. Next come 0, 0, for which I am not going to write anything, and the last one is translation, Trans along X by L_1. So, this is what we mean by T_1 with respect to base. Similarly, we can also write down, that is, T_2 with respect to 1. So, I will have to concentrate here, and this is nothing but rotation about Z by an angle theta_2, then comes your translation along X by L_2 and all such things, actually, I am just going to consider next. So, this T_1 with respect to base is nothing but rotation about Z by theta_1, translation along X by L_1. So, I know the 4 × 4 matrix corresponding to this, I know the 4 × 4 matrix corresponding to this and if I just multiply, then I will be getting this particular 4 × 4 matrix for this T_1 with respect to base. Now, similarly, actually what we can do is: we can find out this T_2 with respect to 1, that is, your rotation about Z by an angle theta_2 and translation along X by L_2. So, I know the 4 × 4 matrix here, and I know the 4 × 4 matrix here, and I can multiply for getting the final matrix. And, once we have got this particular final matrix, now, I am in a position to find out, what is T_2 with respect to base, that is, nothing but T_1 with respect to base multiplied by T_2 with respect to 1. Now, if I just multiply, so, this T_1 with respect to base is nothing but a 4 × 4 matrix, and this T_2 with respect to 1 is nothing but another 4 × 4 matrix, then I will be getting actually this final 4 × 4 matrix and here, actually this carries information of the position, ok? So, the position information is given by this particular information and here, c_1 means cos theta_1 and c_12 means cos of theta_1 plus theta_2.
Similarly, s_12 is nothing but sine of theta_1 plus theta_2; now, if you just compare whatever position information we are getting. So, if I just compare with our general knowledge of trigonometry, we can find out that this particular expression is correct. For example, say, if you see this particular 2 degrees of freedom serial manipulator. So, this is theta_1, the length of the link is your L_1 and the second link is L_2, and with respect to this particular link, the joint angle is theta_2. Now, with respect to X, the total angle with respect to X is nothing but theta_1 plus theta_2. So, this is your theta_1 plus theta_2; now, very easily, using the principle of trigonometry, we can find out the general expression: q_X is nothing but L_1 cos theta_1 plus L_2 cos of theta_1 plus theta_2. Similarly, we can find out this q_Y is nothing but L_1 sine theta_1 plus L_2 sine of theta_1 plus theta_2; this we can find out using the principle of trigonometry. Now, the same thing, we are getting after carrying out this particular analysis. So, we get the same expression, that is, your L_1 cos theta_1 plus L_2 cos of theta_1 plus theta_2, and that is nothing but q_X. L_1 sine theta_1 plus L_2 sine of theta_1 plus theta_2 is q_Y and q_Z is equal to 0. The same expression we are getting. Now, this particular problem is actually known as the forward kinematics problem. Now, in the forward kinematics problem, actually what we do is: our aim is to determine the position and orientation of the end-effector of the robot, provided the lengths of the links are known and the joint angles are known. So, that is actually the problem of the forward kinematics. Once again, let me repeat: supposing that the lengths of the links, say L_1 and L_2, are known, the joint angles theta_1, theta_2 are known. So, these are known and if these values are known, can I not find out the position and orientation of the end-effector with respect to the base coordinate system of the robot?
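The agreement between the matrix product and the trigonometric expressions can be checked numerically. The sketch below, with names of my own choosing, chains the two planar link transforms (each is Rot(Z, theta) followed by Trans(X, L), since d_i = 0 and alpha_i = 0 for this arm) and recovers the end-effector position:

```python
import math

def link_transform(theta, L):
    # Rot(Z, theta) followed by Trans(X, L), as a 3x3 planar homogeneous
    # matrix (d_i = 0 and alpha_i = 0 for every link of this arm).
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, L * c], [s, c, L * s], [0, 0, 1]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def forward_kinematics(theta1, theta2, L1, L2):
    # T_2 with respect to base = (T_1 w.r.t. base) * (T_2 w.r.t. 1)
    T = matmul(link_transform(theta1, L1), link_transform(theta2, L2))
    return T[0][2], T[1][2]   # (q_X, q_Y)
```

For any choice of the joint angles and link lengths, the returned position equals L_1 cos theta_1 + L_2 cos(theta_1 + theta_2) and L_1 sin theta_1 + L_2 sin(theta_1 + theta_2), matching the lecture's closed-form result.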
If I take the physical example, if this is the end-effector and this is my base coordinate system, can I not find out the position and orientation of this particular end-effector with respect to the base coordinate system? So, this particular problem is the problem of the forward kinematics. So, the forward kinematics problem we can solve very easily using this principle of Denavit-Hartenberg notation and then, this frame transformation. So, very easily, we can solve the problem, that is, the forward kinematics problem. Thank you.
Robotics by Prof. D. K. Pratihar
Lecture 03: Introduction to Robots and Robotics (Contd.)
Now, let us see how to represent the different types of joints used in robot with the help of a few symbols, so that we can represent the whole manipulator with the help of these symbols. This revolute joint, we have already discussed, that is denoted by R. And, this particular symbol is also used to represent the revolute joint. This is another symbol, which is also used to represent the revolute joint. Now, then comes the prismatic joint, that is denoted by P, we use either this particular symbol to represent the prismatic joint or that particular symbol to represent the prismatic joint. Now, then comes the cylindrical joint, that is denoted by C. And here, we can use this particular symbol to represent the cylindrical joint. Then comes the spherical joint, which is having 3 degrees of freedom, is represented using this particular symbol S prime, and we can also use this particular symbol to represent the spherical joint. The Hooke joint having 2 degrees of freedom is denoted by U. And, this particular symbol is also used to represent the Hooke joint. Now, then comes the twisting joint, this is also a rotary joint, that is denoted by T. And, this particular symbol is also used to represent the twisting joint. Now, with the help of these symbols, we can represent the manipulator. So, before we start doing kinematic analysis, what we do is, we try to represent the whole robot or the whole manipulator with the help of some symbols used for the joints. Now, let me take one example here, and to take the example actually, I will have to go back to the previous slides, where we consider a robotic system. Now, here, in this particular robotic system is one serial manipulator. And, the same serial manipulator I just want to represent with the help of the symbols that means, I want to prepare the kinematic diagram of this manipulator or the robot. Now, let us see how to prepare the kinematic diagram. To prepare the kinematic diagram, what we do is, we start from the base. 
So, here, I have got a fixed base, so this is denoted by fixed base. And I have got one twisting joint here, so here I have got a twisting joint. So, let me draw one twisting joint with the help of symbol, so this is the symbol for the twisting joint said T. Next joint is here, that is a revolute joint. Now, remember here, there is no joint actually and this is rigidly connected. Here, there is no joint, the rotary joint is here, that is a revolute joint. So, to represent the revolute joint we take the help of this type of symbol, this is the symbol for the revolute joint. Next joint is here, so this is another revolute joint. So, I am just going to use the symbol for the revolute joint. The next is the twisting joint, and this twisting joint is the wrist joint, so this is a twisting joint. So, I will have to draw one twisting joint here, so this is a twisting joint. And after that, actually I am just going to connect one end-effector or the gripper here. And, this particular joint will be nothing but a revolute joint. So, there will be a revolute joint here, and after that there will be the end-effector. So, this is actually the symbol for the end-effector. So, this is the last one is nothing but a revolute joint. So, let me repeat. So, this is the twisting joint, a revolute joint, revolute joint, revolute joint, then we have got the twisting joint. And here, so we have got the revolute joint with the help of which I connect the end-effector. This is what, we mean by the kinematic diagram of this particular robot, OK? So, this is known as the kinematic diagram for this particular robot. In robotics actually as we mentioned little bit, that there are four modules. And all such modules are actually explained one after another, and these are all dependent also. So, gradually I will be discussing all such things. But, the starting point is the kinematic diagram, based on the kinematic diagram. 
So, I am just going to carry out the kinematic analysis, that is, kinematics, based on kinematics; I will be discussing dynamics, based on dynamics; I will be discussing the control. And, once that particular robot is made ready, after that I will try to incorporate intelligence to make it intelligent and autonomous. So, all such things actually, I am just going to discuss one after another. So, let us try to come back to the original discussion, where we stopped. In fact, this is the place. So, we have seen how to prepare the kinematic diagram, and the purpose of kinematic diagram, as I told, that just to represent (with the help of a few symbols), that complicated robotic system. So, this is the purpose of making kinematic diagram. Now, once you have studied the degrees of freedom or connectivity of the different types of robotic joints, I am in a position to discuss about the degrees of freedom of a robotic system. Now, the degrees of freedom of robotic system is defined as the minimum number of independent parameters, variables, or coordinates needed to describe a robotic system completely, and that is nothing but the degrees of freedom of a robotic system. Now, before I discuss, the degrees of freedom of a robotic system a few preliminaries, which all of you know, I am just going to recapitulate. For example, say a point in 2-D plane has got 2 degrees of freedom. For example, say I have got a 2-D plane like this, say x and y. And if I want to represent a particular point, I need only two information, one is this x information, another is y information. And, supposing that it is having the coordinate (x, y), so I need only two information. So, a point on 2-D has got only 2 degrees of freedom. Similarly, if I consider a point in 3-D, for example, if I add one more dimension here, say z, that is, x, y and z, so what I need is, the z information also to represent. 
So, x, y and z information, so all 3 information actually I will have to find out, so, this is one information, this is another information, this is another information. So, in place of x, y, now I need x, y and z. If I consider the 3 dimensions, that means, a point in 3-D space has got 3 degrees of freedom. So, I think this is clear to all of you. Now, a rigid body in 3-D space has got 6 degrees of freedom. So, how to explain that a rigid body in 3-D space has got 6 degrees of freedom? Let me take a very simple example. Supposing that I am once again considering X, Y and Z, so X, Y and Z, in the 3-D space. And, I have got one 3-D object, a very simple 3-D object like this. So, this is the 3-D object, which I have. Now, if I want to represent this particular 3-D object in this 3-D space, how to represent it? To represent this 3-D body in 3-D space, actually what we do is, we first try to find out the mass-center. Now, supposing that the mass-center of this particular 3-D object is this, and it is having the coordinate, say x, y and z. So, to represent the position of this particular mass-center, I need three information: x, y and z. And, now, this particular 3-D object can have different orientations also, so this is one orientation. Similarly, there could be some other orientations also, this could be another orientation. Now, to represent the orientation, once again I need to take the help of rotation about X, rotation about Z, rotation about Y. So, I need three more information. So, three information for position, and three information for orientation or the rotation, that is why, a 3-D object in 3-D space has got 6 degrees of freedom, OK? Now, if I want to manipulate this particular 3-D object in 3-D space. For example, say one serial manipulator is going to come here, just to grip this particular object. Supposing that, it is going to grip it like this.
Say, I have got a gripper here, and with the help of this gripper, say I am just going to grip it. Now, with the help of this particular gripper, if I want to grip this particular object, what I will have to do is: this particular gripper should be able to grip this particular 3-D object in different orientations, and different positions, that means, if I want to grip with the help of a serial manipulator. So, this serial manipulator should ideally have 6 degrees of freedom. And, that is why, most of the industrial robots are having 6 degrees of freedom. Ideally, one industrial spatial manipulator should have 6 degrees of freedom. For example, if I take the example of PUMA, Programmable Universal Machine for Assembly, it should have 6 degrees of freedom, ideally speaking. And, that is why, actually I have mentioned here, for an ideal spatial manipulator, there should be 6 degrees of freedom. For a planar manipulator, which is working on a 2-D plane, it should ideally have 3 degrees of freedom. So, by definition, a spatial ideal manipulator should have 6 degrees of freedom, and a planar manipulator should have 3 degrees of freedom. Now, comes the concept of redundant manipulator. Now, remember, sometimes to serve a specific purpose, we need to use some sort of redundant manipulator. And, this redundant manipulator, if it is a spatial one, it should have more than 6 degrees of freedom, like 7 degrees of freedom, 8 degrees of freedom. If it is a planar manipulator, it should have more than 3 degrees of freedom; say 4 degrees of freedom, 5 degrees of freedom, and so on. And, as I told, these types of redundant manipulators are used just to serve some specific purposes. Let me take one very simple example. This is a very practical example. Supposing that, say I am just going to do some sort of welding with the help of a serial manipulator at a place, which is very difficult to reach. Let me take a very hypothetical example.
Say this is the place, where I will have to do this particular welding, and this place is so remote, that it is not so easy to reach that particular place. And, supposing that, this is the geometry, and it is such a constrained scenario. And, at this particular position, say I will have to do this particular welding with the help of a serial manipulator. The base of the serial manipulator is here, OK? Now, if I want to do the welding here, with the help of a serial manipulator, the welding torch has to be gripped by the end-effector of this particular serial manipulator. And, to reach that particular point the base is here. So, I need to use a number of links, a number of joints, so might be one joint, one link, another joint, another joint, another joint, another joint, another joint, another joint, and another joint, and might be then only I will be able to reach this particular position. Now, if I use this type of serial manipulator, which is an open-loop chain. So, how many revolute joints we are using: one revolute, another revolute, another revolute, 4th revolute, 5th revolute, 6th revolute, 7th revolute, 8th revolute joints here. So, I am using 8 revolute joints here, and that means, this particular manipulator should have more than 6. Now, how to determine the degrees of freedom, I am just going to discuss after some time, but, here actually we need to note that the number of the degrees of freedom is more than 6. This is a typical example of the redundant manipulator. Now, similarly, sometimes we use actually some sort of manipulator, which is under-actuated. Now, by under-actuated manipulator, we mean that this is either a spatial manipulator with less than 6 degrees of freedom or a planar manipulator with less than 3 degrees of freedom. Now, here, if I use a spatial manipulator with less than 6 degrees of freedom or a planar manipulator with less than 3 degrees of freedom, that is called the under-actuated manipulator.
Let me take one example. Supposing that, one manipulator is working in 3-D space, and it is doing some sort of pick and place type of operation. So much accuracy is not actually required. And, here, we can even use one manipulator, having say 5 degrees of freedom. For example, say we have got one manipulator, whose name is Minimover. So, Minimover is actually a manipulator having 5 degrees of freedom, and that is a spatial manipulator, so that is nothing but an under-actuated manipulator, OK? Now, I am just going to take another very practical example, just to find out the difference between the redundant manipulator, and this under-actuated manipulator. Let me take one task, a very simple task. Supposing that, I have got one board, the white board. Now, on this particular white board, say I have written something, I want to clean it with the help of a duster. Now, what are the different ways I can clean this particular board? Now, this particular board is in 2-D. So, this is say the X direction, this is your Y direction, and Z is perpendicular to the board. Now, if I want to clean this particular board, I can use the duster in different ways, let me take one possibility. For example, say I can use one duster in this particular direction, and this particular direction, only in two directions. So, I will move the duster along X, I will move the duster along Y, I can clean the board, so this is one way of cleaning the board. Another way of cleaning the board should be as follows: I can move along X, I can move along Y, and I can also rotate about this particular Z, Z is perpendicular to the board, so this is another way of cleaning the board. Now, I am just going to show another method to clean the board. So, I will move along X, I will move along Y, I will move along this particular Z direction, opposite to the Z, OK, and at the same time, I will just rotate about Z. Are you getting my point?
So, for the same task of board cleaning, so what I can do is: I can use three types of serial manipulator. Now, if I use this particular manipulator, it is having 2 degrees of freedom. If I use this particular manipulator, it is having 3 degrees of freedom. If I use this particular manipulator, it is having 4 degrees of freedom, OK? Now, this is the 2-D plane. So, ideally speaking, if it is the ideal one, it should have 3 degrees of freedom. So, if I use this manipulator with 3 degrees of freedom, that is an ideal planar manipulator for cleaning this particular board; but if I use this particular manipulator, this will be an under-actuated planar manipulator for cleaning the board. And, if I use this particular manipulator having 4 degrees of freedom, that will be one redundant planar manipulator used for cleaning the board. I hope, the difference is clear between the ideal manipulator, redundant manipulator, and under-actuated manipulator. Now, I am just going to discuss the mobility or the degrees of freedom. So, how to mathematically calculate the mobility or degrees of freedom of a spatial manipulator. So, I am just going to start with the spatial manipulator, which is working in 3-D space. Now, let us consider a manipulator with n rigid moving links and m joints. So, there are small n number of rigid links, and I have got small m joints. Now, as I discussed, each rigid body in 3-D space has got 6 degrees of freedom. So, I have got n such rigid links. So, I have got 6n total degrees of freedom. Now, C_i is the connectivity of the i-th joint. Connectivity of the joint, I have already discussed, i varies from 1 up to m. Now, a particular joint, say the i-th joint, if it is having the connectivity C_i, it is going to put constraints, that is nothing but 6 minus C_i, once again. So, C_i is the connectivity of the i-th joint. And this particular i-th joint is going to put constraints, that is nothing but 6 minus C_i.
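The three categories just illustrated, ideal, redundant and under-actuated, can be captured in a tiny helper. This is my own illustrative sketch (the function name is an assumption, not from the lecture), comparing a manipulator's degrees of freedom against the ideal value of 6 for a spatial manipulator or 3 for a planar one:

```python
def classify_manipulator(dof, spatial=True):
    # Ideal: 6 DOF for a spatial manipulator, 3 DOF for a planar one.
    ideal = 6 if spatial else 3
    if dof == ideal:
        return "ideal"
    return "redundant" if dof > ideal else "under-actuated"

# The board-cleaning manipulators (planar): 2, 3 and 4 degrees of freedom.
# classify_manipulator(2, spatial=False) -> "under-actuated"
# classify_manipulator(3, spatial=False) -> "ideal"
# classify_manipulator(4, spatial=False) -> "redundant"
# The Minimover (spatial, 5 DOF):
# classify_manipulator(5, spatial=True)  -> "under-actuated"
```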
Similarly, we have got how many joints, small m number of joints. So, each joint is going to put 6 minus C_i constraints. So, the total number of constraints will be summation, i equals 1 to m, of 6 minus C_i. So, this is the total number of constraints. And, this is the total number of availability. So, this particular difference is nothing but the mobility of the manipulator denoted by M, and that is nothing but 6n minus summation, i equals 1 to m, of 6 minus C_i, and this particular formula is the very well-known Grubler’s criterion. And, by using this particular Grubler’s criterion, very easily we can find out, what should be the degrees of freedom of a particular robotic system. Now, the same thing we can also do for the planar system to determine mobility or degrees of freedom of a planar manipulator. Now, I am going to consider a planar manipulator, which is working on a 2-D plane. And, here, the same n number of moving links and small m number of joints have been considered and the connectivity is C_i. The number of constraints put by the i-th joint is 3 minus C_i. And the total number of constraints is summation, i equals 1 to m, of 3 minus C_i. And, the mobility of the manipulator is given by 3n minus summation, i equals 1 to m, of 3 minus C_i, so this is nothing but the mobility. This is once again the well-known Grubler’s criterion. Now, here, I just want to mention one thing very purposefully, particularly in the previous slide. Let me go to the previous slide. I am using a particular term, that is, the mobility, ok? So, in place of these degrees of freedom I am using this term: mobility. Now, here I have something to say regarding the concept of mobility and the degrees of freedom. Now, here, on principle, as I told, by definition one spatial manipulator should have 6 degrees of freedom and one planar manipulator should have 3 degrees of freedom.
Now, supposing that one redundant manipulator is having certain degrees of freedom, say 10; truly speaking, we should not say it is having 10 degrees of freedom. Instead, we should say that it has got a mobility level of 10. By definition, the maximum degrees of freedom can be equal to 6, and that is why, if it is more than 6, we generally use the term: mobility. We say that this particular manipulator is having the mobility level of 10, instead of saying that this serial manipulator is having 10 degrees of freedom. So, I think, it is clear. Now, I am just going to solve some numerical examples, just to show you, how to determine the degrees of freedom using the Grubler’s criterion for some of the manipulators. Now, this is one serial manipulator, you can see, and here all the links are in series, ok? So, this is the fixed base, first revolute joint, second revolute joint, the linear joint (prismatic joint), the revolute joint, and this is the end-effector. So, let us try to calculate its degrees of freedom or mobility. Now, here small n is nothing but the number of moving links. For example, 1, 2, 3, 4, so there are four moving links. The number of joints small m is equal to 4; 1, 2, 3, 4. The connectivity for each of these particular joints, that is, the revolute joints and the prismatic joint, is equal to 1; each of the joints is having a connectivity of 1. Now, supposing that, so this particular joint is having one connectivity, then how many constraints does it put? This is a planar one. So, the number of constraints it is going to put is nothing but 3 minus C_i, and C_i is equal to 1. So, it is going to put two constraints. Each of the joints is having one connectivity. So, each of the joints is going to put two constraints, so 2 plus 2, 4 plus 2, 6 plus 2, 8. So, the degrees of freedom or the mobility M is nothing but 3n minus summation, i equals 1 to m, of 3 minus C_i, with n equal to 4. So, 3n is nothing but 3 multiplied by 4, and the total number of constraints is 8.
So, I am getting 4. Although this is a planar manipulator, it is having 4 degrees of freedom, that means, this is one redundant serial planar manipulator, so this is nothing but the redundant planar serial manipulator. And, another observation we should make. Here, for this serial manipulator the degrees of freedom or the mobility is nothing but 4 and that is nothing but the sum of all C_i values. Like each of these C_i is equal to 1 and if you sum them up you will be getting 4, and this particular condition is true only for the serial manipulator, but not for the parallel manipulator. Then, comes your parallel planar manipulator. Now, here this is very simple. So, this is the fixed base and the revolute joint I have. So, there are 3 legs and here, at the top, we have got a top plate, and that is nothing but the end-effector. And on each leg we have got a revolute joint, one prismatic joint, and one revolute joint. Similarly, we have got one revolute joint, prismatic joint, one revolute joint. And, how many joints and how many constraints each leg is having, that we will have to count. Now, here, how many links do we have? On each leg, I have got 1, 2. So, 2 plus 2, 4 plus 2, 6 and this particular end-effector will be considered as one link. So, I have got a total of 6 plus 1, that is, 7 links. And, how many joints we have: on one leg, we have got 1, 2, 3. So, 3 multiplied by 3, so I have got 9 such joints, ok? These are all revolute joints and prismatic joints, and each is having a connectivity of 1, that means, each of the joints is going to put how many constraints? 3 minus 1, that is, 2 constraints. So, each leg is putting how many constraints: 2 constraints here, 2 constraints here, 2 constraints here, 2 plus 2 plus 2. So, one leg is going to give 6 constraints and here also 6, here also 6. So, we have got 18 constraints. So, summation, i equals 1 to m, of 3 minus C_i is equal to 18. 3n is 3 multiplied by 7, that is, 21; so, the mobility is coming to be equal to 3.
So, this is an ideal parallel planar manipulator. This is the way, actually, we can find out the degrees of freedom or the mobility of different types of manipulators. Thank you.
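Grubler's criterion as used in these two examples can be written directly. The sketch below is my own illustrative code (names are assumptions), with the serial and parallel planar examples from the lecture checked against it:

```python
def grubler_mobility(n_moving_links, joint_connectivities, spatial=True):
    # Grubler's criterion: M = k*n - sum over joints of (k - C_i),
    # with k = 6 for a spatial system and k = 3 for a planar one.
    k = 6 if spatial else 3
    return k * n_moving_links - sum(k - c for c in joint_connectivities)

# Serial planar manipulator: n = 4 moving links, m = 4 joints, each C_i = 1.
# grubler_mobility(4, [1, 1, 1, 1], spatial=False) -> 4 (redundant planar arm)
# Parallel planar manipulator: n = 7 links, m = 9 joints, each C_i = 1.
# grubler_mobility(7, [1] * 9, spatial=False) -> 3 (ideal planar manipulator)
```

Note that for the serial case the result, 4, equals the sum of the C_i values, exactly the observation made above for serial (but not parallel) manipulators.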
Robotics by Prof. D. K. Pratihar
Lecture 14: Robot Kinematics (Contd.)
Now, I am going to discuss how to represent the orientation using the principle of this Roll, Pitch and Yaw angles. Now, the concept of this roll, pitch and yaw, actually, we have copied from the movement of a ship. Now, let me first define the rolling, pitching and yawing movement of a particular ship and let us see, how to copy these to represent the orientation with the help of this roll, pitch and yaw. Now, supposing that this is nothing but a ship, now, this ship will have rolling movement, pitching movement and yawing movement. For example, say, if I consider this particular movement, this particular movement is nothing but the rolling movement. Similarly, this type of movement of the ship is nothing but the pitching movement, and this particular movement of the ship is nothing but the yawing movement; for this type of movement of the ship, as if, this is the axis about which I am taking the rotation. Similarly, this is the type of movement of this particular ship that is your pitching, as if this is the axis about which I am taking the rotation. And, the moment, we consider like this type of movement of the ship that is called the yawing movement, and as if this is nothing but the axis about which I am taking the rotation. Now, if I call the rotation about X is the rolling movement, then, the rotation about Y is the pitching movement, and then, rotation about Z is nothing but the yawing movement. So, rolling is nothing but the rotation about X, then the pitching movement is nothing but about Y, and this particular yawing movement is nothing but the rotation about Z, the same concept we are going to use it here. Now, let us try to explain the way, we can copy here. 
Now, here, this particular universal coordinate system is denoted by, once again, X_U, Y_U and Z_U, and the body coordinate system, which is attached to the 3D body, whose rotation I am just going to represent, that particular body coordinate system is B, and it has got X_B, Y_B and Z_B and initially, they are coinciding and origin is exactly the same, that is, O for both the coordinate systems. Now, what you do is: we take some rotation about the universal coordinate system and we try to rotate that particular B, for example, say we first take the rotation about this particular X_U. So, this is my X_U and take rotation by an angle alpha in the anticlockwise sense, that is, plus alpha. Now, if I take rotation about X_U and initially X_U and X_B were coinciding. So, my X_B prime will remain same as X_U, because I have taken rotation about X_U. Now, this particular Y_B prime will be different from Y_U, similarly, this Z_B prime will be different from Z_U. So, this is what you mean by the rotation about X. Now, I am just going to take the rotation about Y_U. So, this is nothing but your Y_U direction. So, I am taking the rotation about Y_U by an angle beta in the anticlockwise sense. So, if I take the rotation about Y_U by an angle beta so, what will happen to my X_B double prime? So, X_B double prime will be different from X_B prime, then Y_B double prime will be different from Y_B prime and Z_B double prime will be different from your Z_B prime. And, now, I am just going to take the rotation about this particular the Z_U. So, if I take the rotation about Z_U, by an angle gamma and in the anticlockwise sense. So, what will happen to my X_B triple prime? So, X_B triple prime will be different from X_B double prime, then Y_B triple prime will be different from Y_B double prime and Z_B triple prime will be different from Z_B double prime. 
So, the final frame I will be getting is X_B triple prime, Y_B triple prime, Z_B triple prime, after taking three rotations in a particular sequence. Now, the rotation about X we call the rolling motion, the rotation about Y the pitching motion, and the rotation about Z the yawing motion, and these three rotations are taken in a particular sequence, namely roll, pitch and yaw. So, we write down the composite rotation matrix as R_B with respect to U, composite, rpy, that is, roll, pitch and yaw. This sequence is very important, because if you change the sequence, you will be getting altogether a different final matrix. Rotation of B with respect to U means I am going to find out the orientation of the body with respect to the universal coordinate system, so we will have to find out R_B with respect to U, composite, roll-pitch-yaw, and once again the same rule is to be used: whatever rotation I stated first will go to the end. So, that is the rotation about X_U by an angle alpha, followed by the rotation about Y_U by an angle beta, followed by the rotation about Z_U by an angle gamma; as a matrix product, the rotation about Z_U by gamma multiplied by the rotation about Y_U by beta multiplied by the rotation about X_U by alpha. Now, each of these rotation matrices is a 3 cross 3 matrix: we know the expression of the rotation about Z by an angle gamma, similarly the rotation about Y_U by an angle beta is 3 cross 3, and the rotation about X_U by an angle alpha is once again 3 cross 3, and if you multiply them, I will be getting the final matrix, which is once again a 3 cross 3 matrix.
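The composition rule above ("first-stated goes to the end") can be sketched in plain Python, with 3x3 matrices as nested lists; the helper names are mine, not from the lecture.

```python
import math

def rot_x(a):
    c, s = math.cos(a), math.sin(a)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def rot_y(b):
    c, s = math.cos(b), math.sin(b)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def rot_z(g):
    c, s = math.cos(g), math.sin(g)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def rpy(alpha, beta, gamma):
    # Roll about fixed X_U first, then pitch about Y_U, then yaw about Z_U.
    # The rotation applied first goes to the right: R = Rz(g) Ry(b) Rx(a).
    return matmul(rot_z(gamma), matmul(rot_y(beta), rot_x(alpha)))

R = rpy(0.3, 0.5, 0.7)   # example angles in radians
```

In this matrix the bottom row comes out as (-sin beta, cos beta sin alpha, cos beta cos alpha), which is exactly what the element-wise comparison in the next step relies on.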
So, by using the concept of roll, pitch and yaw, I will be getting the final form of the rotation matrix. Now, in the Cartesian coordinate system, which I have already discussed, the same rotation can be represented with the help of a 3 cross 3 matrix, and its elements are r_11, r_12, r_13 in the first row, then r_21, r_22, r_23, and then r_31, r_32, r_33. In vector form, the first column corresponds to the normal vector, the second to the sliding vector and the third to the approach vector. So, if this matrix is known, and this is the final expression for the rotation matrix using the concept of roll, pitch and yaw, then element-wise, if I just compare, I will be able to find out, corresponding to this known rotation, what should be the angles of rolling, pitching and yawing, and we can determine them very easily. For example, the angle of rolling alpha is nothing but tan inverse of r_32 divided by r_33. If I compare, r_32 is nothing but c beta s alpha, that is, cos beta sine alpha, and r_33 is nothing but c beta c alpha, that is, cos beta cos alpha; here, c beta means cos beta and s alpha means sine alpha. Now, cos beta gets cancelled, so I am getting sine alpha divided by cos alpha, and this is nothing but tan alpha. So, very easily, I can find out that alpha is nothing but tan inverse of r_32 to r_33. Now, following the same principle, I can also find out the angle of pitching: beta is nothing but tan inverse of minus r_31 divided by the square root of r_11 square plus r_21 square, ok? So, we can find out this particular beta.
Similarly, the angle gamma, that is, the angle of yaw, can be determined as tan inverse of r_21 divided by r_11. So, if I know the orientation in the Cartesian coordinate system, then to achieve the same orientation, I can also find out the corresponding values of the angles of rolling, pitching and yawing, and that is why the orientation of the robot can also be expressed using the principle of roll, pitch and yaw. Now, just to explain it further, I am going to take the help of one numerical example, from which we can find the angles of rolling, pitching and yawing very easily. The statement of the problem is as follows: the concept of roll, pitch and yaw angles has been used to represent the rotation of B with respect to the reference frame U, that is, R_B with respect to U. Let us suppose that the above rotation can also be expressed by a 3 cross 3 rotation matrix, as given below. So, this particular rotation matrix is given, and we will have to determine the angles of rolling, pitching and yawing; it is very simple, we are just going to use those expressions. The angle of rolling alpha is nothing but tan inverse of r_32 by r_33, and this is coming as tan inverse of minus 0.5 divided by 0.000; the ratio tends to infinity, and that is why alpha is equal to 90 degrees. Similarly, the angle of pitching beta is tan inverse of minus r_31 divided by the square root of r_11 square plus r_21 square, and if you put the numerical values and calculate, you will be getting 59.99, that is, approximately 60 degrees. Similarly, we can also find out the angle of yawing: gamma is nothing but tan inverse of r_21 divided by r_11, and if you put the numerical values, we will be getting this as approximately equal to minus 60 degrees, ok?
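The slide's numerical matrix itself is not reproduced in the transcript, so the sketch below rebuilds one from the stated answers (roll 90 degrees, pitch 60 degrees, yaw -60 degrees) and recovers them. `atan2` replaces the plain tan-inverse so the sign quadrant and the r_33 = 0 case (the "infinity" above) are handled automatically; all helper names are mine.

```python
import math

def rot_x(a):
    c, s = math.cos(a), math.sin(a)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def rot_y(b):
    c, s = math.cos(b), math.sin(b)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def rot_z(g):
    c, s = math.cos(g), math.sin(g)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def rpy(alpha, beta, gamma):
    # Composite roll-pitch-yaw matrix about the fixed axes of U.
    return matmul(rot_z(gamma), matmul(rot_y(beta), rot_x(alpha)))

def rpy_angles(R):
    # Extraction formulas from the lecture, with atan2 for quadrants.
    alpha = math.atan2(R[2][1], R[2][2])              # tan^-1(r32 / r33)
    beta = math.atan2(-R[2][0],
                      math.hypot(R[0][0], R[1][0]))   # tan^-1(-r31 / sqrt(r11^2 + r21^2))
    gamma = math.atan2(R[1][0], R[0][0])              # tan^-1(r21 / r11)
    return alpha, beta, gamma

# Round trip with the example's answers: roll 90, pitch 60, yaw -60 degrees.
a, b, g = map(math.radians, (90.0, 60.0, -60.0))
ra, rb, rg = rpy_angles(rpy(a, b, g))
```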
So, if we consider that positive is anticlockwise and negative is clockwise, this last rotation is nothing but clockwise. This is the way we can find out the angles of rolling, pitching and yawing. Now, I am going to discuss another method, which is also very frequently used to represent the orientation of a particular object: the Euler angles. Suppose the initial coordinate systems are the same: we have got X_U, Y_U, Z_U as the universal coordinate system, X_B, Y_B and Z_B as the body coordinate system, and the origin of both is nothing but O. Now, initially they are coinciding, but after that, we are going to leave this particular universal coordinate system aside, and we are going to rotate only the B coordinate system, each time with respect to the rotated coordinate system, not with respect to the universal coordinate system. But what is our aim? Our aim is to determine R_B with respect to U, that is, the rotation of B with respect to U, but that will be found out by following an indirect method. What we will do is: we will first find out R_U with respect to B, and then we will find its inverse, and that is nothing but R_B with respect to U. Let me repeat: we are trying to find out first R_U with respect to B, and, as I told, in this particular method we are taking rotations of B with respect to the rotated frame itself. Now, let us try to explain. Initially, you forget about the U coordinate system; this is the origin, this is my X_B, this is my Y_B and this is my Z_B. What I do is: we rotate B about Z_B by an angle alpha in the anticlockwise sense, so we are rotating with respect to this Z_B by an angle alpha.
Now, if I do that, what will happen to my Z_B prime? Z_B prime will remain the same as Z_B, because we took the rotation about Z_B, but X_B prime will be different from X_B, and Y_B prime will be different from Y_B. Next, we rotate B prime about this Y_B prime by an angle beta in the anticlockwise sense. So, whatever Y_B prime we have got, you draw this particular Y_B prime; similarly, you draw this Z_B prime and this X_B prime. Now, I am going to rotate about this Y_B prime by an angle beta in the anticlockwise sense. So, what will happen to my Y_B double prime? Y_B double prime will remain the same as Y_B prime, but X_B double prime will be different from X_B prime, and similarly Z_B double prime will be different from Z_B prime. After that, we rotate B double prime with respect to X_B double prime. Let me draw this: this is nothing but X_B double prime, this is Y_B double prime and this is Z_B double prime. Now, we are going to rotate this B double prime about X_B double prime by an angle gamma in the anticlockwise sense. So, what will happen to my X_B triple prime? X_B triple prime will remain the same as X_B double prime, but Y_B triple prime will be different from Y_B double prime, and Z_B triple prime will be different from Z_B double prime. Till now, all the rotations we have taken are with respect to the B coordinate system itself, and I have not yet used this particular universal coordinate system. Now, let us try to understand one thing.
So, initially, this particular U coordinate system and B coordinate system were coinciding, and after that we took rotations of B with respect to B itself; that means I am rotating the B coordinate system, but I am not doing anything with U. Now, can I not consider an equivalent situation: here, initially they are coinciding, U is kept constant and B is rotating; can I not find an equivalent situation where B is kept constant and U is rotated by the same angle in the opposite direction? Let me repeat: initially the U coordinate system and B coordinate system were coinciding; now B is rotated by some angle, say alpha, in the anticlockwise direction. Can I not say that this is equivalent to the situation where B is kept constant and U is rotated by the same angle alpha in the opposite direction? It is just like the concept of velocity and relative velocity. Now, the reason why I am going for this type of thing is as follows: my aim is to determine R_B with respect to U, but before that, I will have to find out R_U with respect to B. Now, if I want to find out R_U with respect to B and I do not include U, I cannot find it; just to include U with B, I am taking the help of that particular concept. That means, by using that particular concept, if I write down the composite rotation matrix, this is nothing but R_U with respect to B, Euler angles, and that is: whatever rotation I considered first, but with the negative sign, because we rotated B keeping U fixed, and now we are considering, as if, we are rotating U keeping B fixed in the opposite direction. So, we are considering the rotation about Z_B by an angle minus alpha, followed by the rotation about Y_B by an angle minus beta, followed by the rotation about X_B by an angle minus gamma.
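The "rotate B forward versus rotate U backward" equivalence is easy to check numerically: a rotation by minus alpha exactly undoes a rotation by plus alpha, so the two compose to the identity. A small sketch (helper names mine):

```python
import math

def rot_z(a):
    # Rotation about Z by angle a (radians), anticlockwise positive.
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

alpha = 0.6                               # made-up example angle
P = matmul(rot_z(alpha), rot_z(-alpha))   # composes to the identity
```

The same check shows that rot_z(-alpha) is also the transpose of rot_z(alpha), which is the property used in the next step to invert the composite matrix.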
Now, we know the expression of each of these rotation matrices. For example, for the rotation about X by minus gamma, I can write down the 3 cross 3 matrix; similarly, for the rotation about Y_B by minus beta, I can write down this particular 3 cross 3 matrix; then comes the rotation about Z_B by minus alpha, and I can write down that 3 cross 3 matrix. These three 3 cross 3 matrices I can multiply, and finally I will be getting one 3 cross 3 matrix, and that is nothing but R_U with respect to B. But what we need is just the reverse: what I need is R_B with respect to U, that is, the rotation of B, the body coordinate system, with respect to the universal coordinate system U, and that is nothing but the inverse of R_U with respect to B. Now, this R_U with respect to B is a 3 cross 3 pure rotation matrix, and for a pure rotation matrix we can find the inverse very easily, because the inverse is nothing but its transpose; that means the rows will become columns and the columns will become rows. So, whatever 3 cross 3 matrix we are getting here, you try to find out its transpose, and that is nothing but the inverse, and you will be getting the rotation of B with respect to U. So, by using these Euler angles, we can represent the orientation with respect to the universal coordinate system. Now, supposing that in the Cartesian coordinate system this particular R_B with respect to U, the 3 cross 3 matrix, is known to us. If it is known, then element-wise I just compare, and very easily we can find out the numerical values of alpha, beta and gamma, that is, the Euler angles. So, let us see how to find out the Euler angle values: alpha is nothing but tan inverse of r_21 divided by r_11. So, r_21 is nothing but this, and r_11 is nothing but this.
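Before comparing elements, the inverse-equals-transpose step can be verified numerically. This sketch builds R_U^B from the stated sequence (first-stated rotation goes to the end), transposes it to get R_B^U, and checks that the result matches composing the positive-angle rotations directly; the angle values and helper names are my own.

```python
import math

def rot_x(t):
    c, s = math.cos(t), math.sin(t)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def rot_y(t):
    c, s = math.cos(t), math.sin(t)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def rot_z(t):
    c, s = math.cos(t), math.sin(t)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def transpose(A):
    return [[A[j][i] for j in range(3)] for i in range(3)]

a, b, g = 0.4, 0.6, -0.3   # made-up Euler angles alpha, beta, gamma

# R_U^B: stated sequence Z(-alpha), Y(-beta), X(-gamma),
# so the matrix product is Rx(-g) Ry(-b) Rz(-a).
R_U_B = matmul(rot_x(-g), matmul(rot_y(-b), rot_z(-a)))

# Pure rotation: the inverse is the transpose.
R_B_U = transpose(R_U_B)
```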
So, r_21 is nothing but sine alpha cos beta, that is, s alpha c beta, and r_11 is nothing but cos alpha cos beta, that is, c alpha c beta. So, cos beta gets cancelled and I will be getting tan alpha, and if I get tan alpha, very easily I can find out alpha as tan inverse of this. Similarly, beta is nothing but tan inverse of minus r_31 divided by the square root of r_11 square plus r_21 square, ok? Then, gamma is nothing but tan inverse of r_32 divided by r_33. So, in this way, we can represent both the position as well as the orientation in different coordinate systems. And, if we can represent the position and orientation in different coordinate systems, the same robot can be controlled in different coordinate systems, and that is why the remote controller of the robot, that is, the teach-pendant (explained while discussing the robot teaching methods), can be used in different coordinate systems to control the manipulator. So, once again, let me repeat that the position can be expressed either in the Cartesian coordinate system or in the cylindrical coordinate system or in the spherical coordinate system. Similarly, the orientation or the rotation can be represented either in the Cartesian coordinate system or in roll, pitch and yaw or in Euler angles. Thank you.
Robotics_by_Prof_D_K_Pratihar
Lecture_27_Robot_Dynamics_Contd.txt
Now, let us see how to determine the joint torques for the different robotic joints using the principle of the Lagrangian method. We are going to take the example of a 2 degrees of freedom serial manipulator. So, this is a 2 degrees of freedom serial manipulator: this is the first joint and link 1, then the second joint and link 2; the length of the first link is l_1 and the length of the second link is l_2, and the joint angles are theta_1 and theta_2. Now, link 1 has the mass m_1, so the force m_1 g is acting here at the mass center, and (x_1, y_1) are the coordinates of the mass center; similarly, for the second link the mass center is at (x_2, y_2), and m_2 g is acting here vertically downward, where g is nothing but the acceleration due to gravity. Now, our aim is to determine the joint torque here, that is, tau_1, and the joint torque here, that is, tau_2; we will have to derive the mathematical expressions for the joint torques tau_1 and tau_2. Let us see how to proceed. To determine these joint torques, at first we will have to assign the coordinate systems at the different joints according to the D-H parameter setting rule. According to the D-H parameter setting rule, this is nothing but X_0 and this is Y_0, and Z_0 is perpendicular to the board and away from the board; this I have already discussed. At joint 2, this is my X_1, this is Y_1 and Z_1 is perpendicular to the board; similarly, at the end, these are X_2 and Y_2, and Z_2 is perpendicular to the board. Now, if you draw the D-H parameters table, it looks like this.
So, the table has the frame number; then the rotation about z, that is, theta_i; then the translation along z, that is, d_i; then the rotation about x, that is, alpha_i; and the translation along x, that is, a_i. For the first frame, the joint angle is the variable theta_1, then d is 0, alpha is 0, and the length of the link is nothing but l_1; for the second frame, the joint variable is theta_2, then 0, 0 and l_2. Now, this is the D-H parameters table, and if I know this particular table, very easily we can determine the transformation matrix T_1 with respect to 0: it is nothing but the rotation about z by an angle theta_1, followed by the translation along x by l_1. Each of these we can express as a 4 cross 4 matrix, which we have already discussed, and if you multiply them, this will be the final 4 cross 4 matrix. Similarly, T_2 with respect to 1: I can find out that T_2 with respect to 1 is nothing but the rotation about z by an angle theta_2, followed by the translation along x by l_2, and these two 4 cross 4 matrices, if you multiply, we will be getting another 4 cross 4 matrix. And T_2 with respect to 0 is nothing but T_1 with respect to 0 multiplied by T_2 with respect to 1. So, if you multiply this matrix and that matrix, I will be getting the 4 cross 4 matrix T_2 with respect to 0, that is, this particular point with respect to the base coordinate system. And, as we know, these elements indicate the position terms and this 3 cross 3 block is the orientation term, which we have already discussed. So, let us start from here, and then let us see how to determine the joint torques.
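Those transforms can be checked numerically with a short sketch. `dh` below builds the standard D-H link matrix (Rot(z, theta), Trans(z, d), Rot(x, alpha), Trans(x, a)); the link lengths and joint angles are made-up values for the test.

```python
import math

def dh(theta, d, alpha, a):
    # Standard D-H link transform as a 4x4 homogeneous matrix.
    ct, st = math.cos(theta), math.sin(theta)
    ca, sa = math.cos(alpha), math.sin(alpha)
    return [[ct, -st * ca,  st * sa, a * ct],
            [st,  ct * ca, -ct * sa, a * st],
            [0,        sa,       ca,      d],
            [0,         0,        0,      1]]

def matmul4(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

l1, l2 = 1.0, 0.8          # assumed link lengths
t1, t2 = 0.4, 0.9          # assumed joint angles (radians)

T10 = dh(t1, 0.0, 0.0, l1)       # row 1 of the D-H table
T21 = dh(t2, 0.0, 0.0, l2)       # row 2 of the D-H table
T20 = matmul4(T10, T21)          # end-effector w.r.t. the base
```

The position column of T20 should come out as (l1 cos theta_1 + l2 cos(theta_1 + theta_2), l1 sin theta_1 + l2 sin(theta_1 + theta_2), 0), the familiar forward kinematics of the 2-link planar arm.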
Now, if you see the expressions for the joint torques, you will be getting such a big expression for tau_1 and such a big expression for tau_2. These expressions we can, in fact, derive from the general expression seen a few slides back: tau_i equals the sum over c of D_ic times q_c double dot, plus the sum over c and d of h_icd times q_c dot times q_d dot, plus c_i. So, let us concentrate on this particular equation and see how to determine tau_1 and tau_2. Here there are two joints, so c varies from 1 to 2. Let me find the expression for tau_1; that means i equals 1. Taking c equals 1, I will be getting D_11 theta_1 double dot (in place of q we are using theta), and taking c equals 2, I will be getting D_12 theta_2 double dot; this I will be getting from the first summation. Next, I concentrate on the second summation and put i equals 1: for c equals 1, d equals 1, I will be getting h_111 theta_1 dot square; for c equals 1, d equals 2, I will be getting h_112 theta_1 dot theta_2 dot; for c equals 2, d equals 1, I will be getting h_121 theta_1 dot theta_2 dot; and for c equals 2, d equals 2, I will be getting h_122 theta_2 dot square; plus c_1. So, tau_1 equals D_11 theta_1 double dot plus D_12 theta_2 double dot plus h_111 theta_1 dot square plus h_112 theta_1 dot theta_2 dot plus h_121 theta_1 dot theta_2 dot plus h_122 theta_2 dot square plus c_1. This is the expression for the joint torque tau_1; similarly, we can write down the expression for tau_2, that means i equals 2 and c varies from 1 to 2.
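Once the D, h and c coefficients are available, assembling the torques from the general expression is mechanical. A generic sketch (function name and the example coefficient values are mine, chosen only to exercise the index pattern):

```python
def joint_torques(D, h, c, qdd, qd):
    # tau_i = sum_c D[i][c] * qdd[c]
    #       + sum_c sum_d h[i][c][d] * qd[c] * qd[d]
    #       + c[i]
    n = len(qdd)
    tau = []
    for i in range(n):
        t = c[i]
        for cc in range(n):
            t += D[i][cc] * qdd[cc]
            for d in range(n):
                t += h[i][cc][d] * qd[cc] * qd[d]
        tau.append(t)
    return tau

# Made-up coefficients for a 2-joint arm, just to exercise the formula.
D = [[1.0, 2.0], [2.0, 3.0]]
h = [[[0.0, 1.0], [1.0, 0.0]],
     [[2.0, 0.0], [0.0, 0.0]]]
c = [0.5, -0.5]
tau = joint_torques(D, h, c, qdd=[0.1, 0.2], qd=[1.0, 2.0])
```

Expanding by hand for i = 1 reproduces exactly the two D terms, four h terms and one c term named above.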
So, I will be getting tau_2 equals D_21 theta_1 double dot plus D_22 theta_2 double dot plus h_211 theta_1 dot square plus h_212 theta_1 dot theta_2 dot plus h_221 theta_1 dot theta_2 dot plus h_222 theta_2 dot square plus c_2. This is the way h_211, h_212, h_221 and h_222 enter the expression. So, in tau_1 there are two D terms, four h terms and one c_1; similarly, in tau_2 there are two D terms, four h terms and one c_2. This is the way we can find out the expressions for tau_1 and tau_2, and the same expressions I have written here. Now, I will have to concentrate on the term D_11, but before I go for this particular D_11, I will have to find out another term, called U_11, and U_11 is nothing but the partial derivative of the transformation matrix T_1 with respect to 0, the partial derivative being taken with respect to theta_1. Now, if we remember the expression for T_1 with respect to 0, I can find its partial derivative with respect to theta_1 element-wise: here, in place of cos theta_1, I will be getting minus sine theta_1; here I will be getting minus cos theta_1; then this is 0; and here I will be getting minus l_1 sine theta_1.
Similarly, this element will be cos theta_1, this will be minus sine theta_1, then 0, and this will be l_1 cos theta_1; this will be 0 0, and this 1 will also become 0, because this is the partial derivative, and the last row is 0 0 0 0. So, U_11 is the matrix with first row minus sine theta_1, minus cos theta_1, 0, minus l_1 sine theta_1; second row cos theta_1, minus sine theta_1, 0, l_1 cos theta_1; and all the remaining terms 0. So, this is the way we can find out U_11; it is actually the rate of change of this particular transformation matrix with respect to theta_1 only. Similarly, we can also find out U_21, and U_21 is nothing but the partial derivative of T_2 with respect to 0, taken with respect to theta_1. So, if you see the expression of T_2 with respect to 0 and take its partial derivative with respect to theta_1: here, in place of cos of (theta_1 plus theta_2), I will be getting minus sine of (theta_1 plus theta_2); here I will be getting minus cos of (theta_1 plus theta_2), and so on; and all these remaining terms will become 0, and this one will also become equal to 0. Now, by following the same method, I can also find out U_22, which is nothing but the rate of change of T_2 with respect to 0, taken with respect to theta_2. With respect to theta_2, if you determine d by d theta_2 of cos of (theta_1 plus theta_2), you will be getting minus sine of (theta_1 plus theta_2); that means here you will be getting minus s theta_12, here you will be getting minus cos theta_12, and then this will become 0.
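These partial derivatives can be cross-checked numerically: a central finite difference of T_1^0 with respect to theta_1 should reproduce the analytic U_11 above. A sketch, with assumed link lengths (helper names mine):

```python
import math

L1 = 1.0   # assumed link length for the check

def T10(t1):
    # T_1^0 for the planar link (d = 0, alpha = 0).
    c, s = math.cos(t1), math.sin(t1)
    return [[c, -s, 0, L1 * c],
            [s,  c, 0, L1 * s],
            [0,  0, 1, 0],
            [0,  0, 0, 1]]

def U11(t1):
    # Analytic dT_1^0 / d(theta_1), as derived in the lecture.
    c, s = math.cos(t1), math.sin(t1)
    return [[-s, -c, 0, -L1 * s],
            [ c, -s, 0,  L1 * c],
            [ 0,  0, 0, 0],
            [ 0,  0, 0, 0]]

def numdiff(f, t, eps=1e-6):
    # Central-difference derivative of a 4x4 matrix-valued function.
    A, B = f(t + eps), f(t - eps)
    return [[(A[i][j] - B[i][j]) / (2 * eps) for j in range(4)]
            for i in range(4)]

t1 = 0.7                  # arbitrary test angle
num = numdiff(T10, t1)    # numeric derivative
ana = U11(t1)             # analytic derivative
```

The same finite-difference check applies verbatim to U_21 and U_22 by differentiating T_2^0 with respect to theta_1 or theta_2.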
And here, there will be no contribution from these terms, because there is no theta_2 in them; the only contribution will come here, and this will become minus l_2 sine of (theta_1 plus theta_2). Similarly, the other terms also you can find out, and by following the same method we can find out U_22; so, this is nothing but U_22. Once I have got this, let us concentrate on the inertia tensors, that is, the inertia tensors for link 1 and link 2. Now, if we remember, we have already derived this particular expression: J_1 is the inertia tensor for the first link, and we have considered that this particular link has a circular cross-section with radius r; for that we will be getting the inertia tensor as the 4 cross 4 matrix with first row m_1 l_1 square by 3, 0, 0, minus half m_1 l_1; second row 0, m_1 r square by 4, 0, 0; third row 0, 0, m_1 r square by 4, 0; and fourth row minus half m_1 l_1, 0, 0, m_1. This particular inertia tensor I have already derived. Now, similarly, for link 2, I can find out J_2 exactly in the same way as a 4 cross 4 matrix, and this I have already discussed in much more detail in one of the previous classes. Now, I am going to derive that particular D_11 term. To derive this expression, let me go back to the expression for D_ic. Now, if you concentrate on this particular D_ic, there is an inertia term, and our aim is to determine the expression for D_11; that means i equals 1, c equals 1, and j starts from the maximum of (1, 1), that is, 1. So, j will vary from 1 to 2, and now, with the help of this, I can write it down.
So, when j equals 1, it is nothing but the trace of U_11 (j equals 1 here and c is 1 here), then comes J_1, then comes U_11 (j equals 1, i equals 1) transpose; and Tr is the symbol for the trace. Now, taking j equals 2, this will become the trace of U_21 J_2 U_21 transpose. So, this is the expression for D_11, and the same expression I am using here: D_11 is the trace of U_11 J_1 U_11 transpose plus the trace of U_21 J_2 U_21 transpose. Now, you see, all the terms are known to us: this U_11 we have already derived, J_1 we have derived, so U_11 transpose is known; then U_21 we have derived, J_2 we have derived, and U_21 transpose is also known. Each of these matrices is a 4 cross 4 matrix, ok? So, if you multiply two times, you will finally be getting a 4 cross 4 matrix; you will be getting one 4 cross 4 matrix here, and here also you will be getting one 4 cross 4 matrix. And here we take the trace, so for each 4 cross 4 matrix, what we will have to do is consider only the diagonal elements: by trace we mean the sum of the diagonal elements. We consider the sum of the diagonal elements here and the sum of the diagonal elements here, and if you add them up, you will be getting the final expression for this particular D_11. So, the expression for D_11 is known; now the same method we will have to follow for the others. For example, to find out D_12, once again we go back to the expression: here D_12 means i equals 1 and c equals 2, and as the maximum of 1 and 2 is 2, there will be only one term: this is nothing but the trace of U_22, then J_2, then U_21 transpose. So, this is the expression for D_12.
Now, similarly, we can also find out the expression for D_21. For this D_21, i equals 2 and c equals 1, and the maximum of 2 and 1 is 2, so there will be only one term, because j varies from 2 to 2: you will be getting the trace of U_21, then J_2, then the transpose of U_22. So, this is nothing but D_21, and I can also find out the expression for D_22: for this particular D_22, i equals 2, c equals 2, and j runs from 2 to 2, so it is the trace of U_22, then J_2, then U_22 transpose. So, I can find out the expressions for D_12, D_21 and D_22, and all the terms are known, so very easily we can find out these particular D terms, the way I discussed. So, this is the expression for D_12: once again, you write down the 4 cross 4 matrix for U_22, the 4 cross 4 matrix for J_2 and the 4 cross 4 matrix for U_21 transpose; you multiply them and consider the sum of the diagonal elements, and you will be getting the expression for D_12. Similarly, for D_21, the expression we have already seen, and if you follow the same method of matrix multiplication and then consider the sum of the diagonal elements, you will be getting the same expression as D_12; so D_21 becomes equal to D_12. But for D_22, once again you follow the same method: write down the expression, multiply the matrices and consider the trace, and you will be getting this expression. So, till now, all the D values are calculated; that means, in this particular torque expression, all the D values are known. Now we will have to concentrate on the h values, that is, h_111, h_112, h_121 and h_122; similarly, we have got 4 more, so there are 8 such h terms, and those I will have to derive, ok.
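The symmetry D_21 = D_12 can be seen concretely with a numeric sketch. The masses, lengths, radius and joint angles below are made-up values; J follows the 4 cross 4 inertia matrix quoted in the lecture, and the trace of A J B-transpose is written out index-wise.

```python
import math

l1, l2, m1, m2, r = 1.0, 0.8, 2.0, 1.5, 0.05   # assumed numbers
t1, t2 = 0.4, 0.9
c1, s1 = math.cos(t1), math.sin(t1)
c12, s12 = math.cos(t1 + t2), math.sin(t1 + t2)

def J(m, l):
    # Lecture's 4x4 inertia matrix for a link of circular cross-section.
    return [[m * l * l / 3, 0, 0, -m * l / 2],
            [0, m * r * r / 4, 0, 0],
            [0, 0, m * r * r / 4, 0],
            [-m * l / 2, 0, 0, m]]

# Analytic U matrices derived above (nonzero rows only).
U11 = [[-s1, -c1, 0, -l1 * s1], [c1, -s1, 0, l1 * c1],
       [0, 0, 0, 0], [0, 0, 0, 0]]
U21 = [[-s12, -c12, 0, -(l1 * s1 + l2 * s12)],
       [c12, -s12, 0, l1 * c1 + l2 * c12],
       [0, 0, 0, 0], [0, 0, 0, 0]]
U22 = [[-s12, -c12, 0, -l2 * s12], [c12, -s12, 0, l2 * c12],
       [0, 0, 0, 0], [0, 0, 0, 0]]

def tr_AJBt(A, Jm, B):
    # trace(A * J * B^T), summed element-wise.
    return sum(A[p][q] * Jm[q][k] * B[p][k]
               for p in range(4) for q in range(4) for k in range(4))

D11 = tr_AJBt(U11, J(m1, l1), U11) + tr_AJBt(U21, J(m2, l2), U21)
D12 = tr_AJBt(U22, J(m2, l2), U21)
D21 = tr_AJBt(U21, J(m2, l2), U22)
```

Because J is symmetric, the two trace expressions are transposes of each other, so D_12 and D_21 come out identical, exactly as stated above.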
Now, how to derive them? Once again, we go to the expression for h_icd. For h_111, i, c and d are all 1, so j varies from 1 to 2 and there will be 2 such terms: h_111 is nothing but the trace of U_111 J_1 U_11 transpose plus the trace of U_211 J_2 U_21 transpose. Similarly, I can write down the expression for h_112, where i equals 1, c equals 1, d equals 2. Here, j starts from the maximum of i, c, d, that is, 2, so there will be only one term: h_112 is the trace of U_212 J_2 U_21 transpose. Similarly, I can find out the other h terms like h_121, h_122 and so on. Now, let us see how to evaluate h_111. As we have seen, h_111 is the trace of the first product plus the trace of the second product. This U_111 is nothing but the partial derivative of U_11 with respect to theta_1. We know the expression of U_11 as a 4 cross 4 matrix, so we will have to find out its partial derivative with respect to theta_1, and you will be getting U_111, once again a 4 cross 4 matrix. Similarly, U_21 we have already seen as a 4 cross 4 matrix; its partial derivative with respect to theta_1 becomes U_211, and we will be getting that partial-derivative 4 cross 4 matrix as well. Now, U_111, J_1 and U_11 are each 4 cross 4 matrices, and similarly, U_211, J_2 and U_21 are each 4 cross 4 matrices.
So, ultimately, I will be getting one 4 cross 4 matrix from the first product and another 4 cross 4 matrix from the second; we consider the trace values, and if we add them up, we will be getting h_111 equal to 0. Following the same method, we can find out the other h values. For h_112, the expression we have already seen; this U_212 is nothing but the partial derivative of U_21 with respect to theta_2. So, if you substitute the matrices, multiply, and find out the sum of the principal diagonal elements, you will be getting the expression for h_112. Then comes h_121: this U_221 is nothing but the partial derivative of U_22 with respect to theta_1, so we will be getting that particular matrix, and by following the same principle, I can find out h_121. Now, once again, let us try to recapitulate: our purpose is to determine the expressions for the joint torques tau_1 and tau_2, and we are following the same procedure. Then comes h_122: U_222 is nothing but the partial derivative of U_22 with respect to theta_2, so we will be getting that matrix, and from it, the expression for h_122. So, till now, 4 h values we have calculated, and we will have to determine the remaining 4 h values by following the same procedure. Thank you.
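The same pattern extends to the h terms: h_icd is the sum over j from max(i, c, d) to n of trace(U_jcd J_j U_ji^T), with U_jcd the partial derivative of U_jc with respect to theta_d. A sketch under the same caveat as before (placeholder matrices standing in for the actual derivatives):

```python
import numpy as np

def h_term(i, c, d, U, dU, J, n=2):
    # h_icd = sum over j = max(i, c, d) .. n of trace(U_jcd . J_j . U_ji^T),
    # where dU[j][c][d] holds U_jcd = dU_jc / d(theta_d) (all 4 x 4 matrices).
    total = 0.0
    for j in range(max(i, c, d), n + 1):
        total += np.trace(dU[j][c][d] @ J[j] @ U[j][i].T)
    return total
```

Since mixed partial derivatives commute, U_jcd equals U_jdc, so each trace term (and hence h_icd itself) is unchanged when c and d are swapped; in the lecture's numbers, h_112 and h_121 rest on that same pair of second derivatives.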
Robotics by Prof. D. K. Pratihar
Lecture 33: Sensors (Contd.)
Now, we are going to discuss the working principle of a Range Sensor. This range sensor is generally used to find out the distance between an object and the sensor. Suppose this detector or sensor is mounted on a robotic link or a robotic joint, and I want to ensure collision-free movement of that particular joint with respect to an obstacle. So, I am trying to find out the distance d between this particular obstacle or object and the sensor or detector. Now, here, on the PPT, I will have to make one correction: this particular angle is not alpha, let us call the angle theta, and this particular distance is a in place of x. Here, we have got a light source or an emitter, and this particular angle theta can be varied. Suppose I know the distance between the sensor and this light source or emitter; so, that distance a is known, and theta is a variable which I am going to vary. Now, by varying theta, I will be getting different responses at this particular sensor. For example, if I use a high value of theta, the light is going to fall here on the obstacle, and since this is not a very smooth surface, there will be some reflection here.
So, these kinds of reflected beams we will be getting, and as it is not a very smooth surface, I may not get a very bright spot at the detector. Now, you go on varying this particular theta. At the right value of theta, there is a possibility that, although there is some scattering this side and that side, I will be getting a very bright spot at this particular sensor. The moment you get a very bright spot, it is more or less correct that the angle of incidence is 90 degrees, and this angle theta we can measure. And, if we can measure this particular theta, then d divided by a is nothing but tan theta; theta I can measure and a is known, so very easily I can find out d, that is, the distance between the sensor and this particular obstacle. Now, let me repeat that example once again. Suppose this is the robotic joint and here I put that particular sensor, and I want to make this joint collision-free; I have got one object here, and as the joint comes very near to this object, I will have to find out the distance between the object and the joint. So, this type of sensor I can use to find out the distance between the sensor and this object or obstacle. Here, I can use a light source; I can also use some sort of sound source. And, this method is very popularly known as the triangulation method. So, in a range sensor, we use some sort of triangulation method. It is very simple.
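The triangulation relation d / a = tan(theta) described above is a one-line computation (the function and variable names here are mine):

```python
import math

def triangulation_distance(a, theta):
    # a: known baseline between the emitter and the detector
    # theta: emitter angle (in radians) at which the detector sees the bright spot
    # The right triangle gives d / a = tan(theta).
    return a * math.tan(theta)
```

For example, a baseline of 0.5 m with the bright spot appearing at theta = 60 degrees gives d = 0.5 * tan(60 degrees), roughly 0.87 m.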
And, using this very simple mathematics, we can find out the distance between the obstacle and this particular sensor, which is mounted on the body of the robot. This is the working principle of the range sensor. Now, I am going to discuss proximity sensors, and these proximity sensors are very frequently used. We have got three popular types of proximity sensor: one is called the inductive sensor, then we have got the Hall-effect sensor, and we have got the capacitive sensor. Let me try to explain the working principle of the inductive sensor. By proximity, we mean closeness, that is, how close the object is to the sensor. The working principle of this inductive sensor is very simple. Here, we use one permanent magnet; suppose this is the permanent magnet, this is the North Pole and this is the South Pole, and on the extended portion of this particular magnet, we put some coils of wire for current flow. Now, the magnetic lines of force will come out from the North Pole and move to the South Pole outside the magnet, and inside the magnet, they will go from the South Pole to the North Pole. So, these are the directions of the lines of force; these are all fundamentals that all of us know. And, this is a permanent magnet, so it will have some lines of force; it will have some magnetic flux. Now, let us see what happens if I bring one ferromagnetic or magnetic material closer to this particular inductive sensor.
Now, this is the ferromagnetic object, which is brought very near to this particular inductive sensor. The moment we bring the ferromagnetic object very near to the inductive sensor, what will happen to the magnetic lines of force? The magnetic lines of force will be deflected. So, the lines of force that were previously here will shift, and there will be a change of magnetic flux. Now, the rate of change of magnetic flux is proportional to the induced voltage, and hence to the induced current. So, due to this change in magnetic flux, there will be some induced voltage, and there will be some current flow through these particular coils. This voltage or current we can measure. So, let us see the way it works. Suppose the inductive sensor is stationary, and this ferromagnetic object is brought near to the inductive sensor or is taken away from it. The moment the ferromagnetic object is brought near to the inductive sensor, there will be some amount of voltage induced. For example, if I plot the induced voltage with time, and the ferromagnetic object is brought near to the inductive sensor with high speed, there is a possibility that I will be getting this type of plot for the induced voltage. Now, the positive sign indicates that the ferromagnetic object is being brought near to the inductive sensor.
And, the moment it is taken away from the inductive sensor, we will be getting this particular negative induced voltage. Once again, let me repeat: if the ferromagnetic object is brought in at high speed, I will be getting one type of plot for the induced voltage, but if it is brought in at low speed, there is a possibility that I will be getting a different plot for the induced voltage with time. So, the nature of the induced voltage will change, and it depends on the speed with which I bring the ferromagnetic object near to the inductive sensor, or take it away from the inductive sensor. Depending on this particular speed, we will be getting different voltage distributions. Now, from each induced voltage distribution, I will be getting its amplitude: I will be getting a high amplitude for the induced voltage if the object is moving with high speed, and if it is moving with low speed, then I will be getting a low amplitude for the voltage distribution. Once again, let me repeat: corresponding to the high speed, I will be getting the high amplitude. I am plotting here with time; so, if I consider a fixed duration of time, and the object is moving with high speed towards the inductive sensor, then within that fixed duration it will come closer to the inductive sensor compared to the situation when it was moving with slow speed.
So, this is the fixed position of the inductive sensor, and the object is moving with high speed; in the same duration, it will come closer to this particular inductive sensor compared to the situation when it is moving with slow speed. So, whenever it is moving with slow speed, the distance between the sensor and the object will be more, and whenever it is moving with high speed, the distance between the object and the sensor will be smaller. The same thing is coming here in the calibration curve. Now, this is the calibration curve: whenever we are getting a higher amplitude, that means the ferromagnetic object is moving with high speed and has come very near to the inductive sensor. In that case, corresponding to the high amplitude, I will be getting a smaller distance between the sensor and the object, and corresponding to the lower amplitude, I will be getting a larger distance between the sensor and the object. Now, if this particular calibration curve is known, and if I can measure the normalized amplitude of the voltage signal (normalized because I want to represent the amplitude on a scale of 0 to 1), then very easily I can find out what should be the distance between the object and this particular sensor. So, this is the way we can use the inductive sensor to find out the distance between the sensor and the object. Now, this sensor is suitable only for magnetic materials; it is not going to work for non-magnetic materials. Next, I am going to discuss another very popular sensor, that is called the Hall-effect sensor. This is also suitable only for ferromagnetic materials; it is not suitable for non-magnetic materials.
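Reading a distance off such a calibration curve amounts to interpolating between measured points. The numbers below are made up for illustration; only the monotone shape (higher normalized amplitude, smaller distance) follows the lecture.

```python
import numpy as np

# Hypothetical calibration samples: normalized amplitude (0..1) vs distance (mm).
amplitude = np.array([0.1, 0.3, 0.5, 0.7, 0.9])      # must be increasing for np.interp
distance = np.array([50.0, 30.0, 18.0, 10.0, 4.0])   # closer object -> larger amplitude

def distance_from_amplitude(a):
    # Linear interpolation on the calibration curve.
    return float(np.interp(a, amplitude, distance))
```

A measured normalized amplitude of 0.5 then maps straight to 18 mm, and intermediate readings fall on the straight line between neighbouring calibration points.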
And, this particular Hall-effect sensor is very frequently used in practice. Its working principle is based on the Lorentz force. Suppose an amount of charge q is moving with velocity V in a magnetic field of strength B; then it will be subjected to a force, and that force is known as the Lorentz force. The Lorentz force is nothing but F equals q multiplied by V cross B, where V is the velocity vector and B is the magnetic field vector. So, we find the cross product V cross B and multiply it by the amount of charge q; that is nothing but the Lorentz force. Now, this particular principle we are going to use here in the Hall-effect sensor. Suppose this is once again a permanent magnet; this is the North Pole and this is the South Pole, so we will be getting the magnetic lines of force like this. Now, suppose in between the North Pole and the South Pole I put one Hall-effect sensor. What is a Hall-effect sensor? It is nothing but a piece of semiconductor material, say silicon, which has some free electrons. The moment we put the semiconductor material in between the North and South Poles, within the influence of these magnetic lines of force, there will be some amount of voltage induced in the semiconductor material, that is, in the Hall-effect sensor. Now, let us see what happens the moment we bring one magnetic object near to it. Suppose this is the magnetic object, which has been brought near to the Hall-effect sensor; this is a ferromagnetic material.
And, we have got the permanent magnet here, with the North Pole and the South Pole, so the magnetic lines of force will pass through the sensor. Now, as this object is a magnetic material, some of the lines of force will pass through this particular ferromagnetic object instead, and due to that, the strength of the magnetic field at the sensor will be reduced. The Lorentz force F is nothing but q multiplied by V cross B; so, due to the presence of this ferromagnetic object, the strength of B will be reduced. Then, what will happen to the Lorentz force? The Lorentz force is also going to be reduced, and consequently, the amount of voltage induced in the semiconductor material is going to be reduced; there will be some drop in voltage. So, when the ferromagnetic object was not there in front of the sensor, there was some induced voltage, and whenever the object comes near, there is some change in the induced voltage; truly speaking, there is some drop in the induced voltage. This particular drop in induced voltage can be measured with the help of a voltmeter or multimeter. And, if I know this particular calibration curve, then from the drop in voltage, I can find out the distance between the sensor and the object. Now, if the object comes very near to the sensor, the drop in voltage is going to increase, and the distance between the sensor and the object will be reduced. And, this is the way, by measuring the drop in voltage, I can find out the distance between the sensor and this particular object. This is the working principle of the Hall-effect sensor, and it is also very popular in robotics.
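The Lorentz-force relation F = q (V cross B), and the fact that a weaker field gives a proportionally weaker force, can be checked directly:

```python
import numpy as np

def lorentz_force(q, v, B):
    # F = q (v x B); v and B are 3-vectors, q a scalar charge.
    return q * np.cross(v, B)
```

Halving the field strength B halves the force, which is exactly the kind of drop the Hall-effect sensor turns into a measurable voltage change.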
But, the main drawback of these sensors, that is, the inductive sensor and the Hall-effect sensor, is that they are not suitable for non-magnetic materials. Now, I am going to discuss the working principle of another sensor, that is called the capacitive sensor; it is suitable for any material, whether magnetic or non-magnetic. Let us try to understand the construction details first; it is very simple. Suppose we have got a container here, just like a cylindrical container; this shows one view of the container, and this shows another view. So, we have got the cylindrical container, and in it we have got one sensitive electrode, which is nothing but a very thin metallic disc, and one reference electrode, which is nothing but a ring. So, this reference electrode is the ring, and this is the sensitive electrode, which is very thin and lightweight. If you see in this particular view, the sensitive electrode is shown here, and the reference electrode is this one. Now, the reference electrode is kept fixed, but in the sensitive electrode, there could be some oscillations. Let us try to find out the reason behind this particular oscillation. As I told, this sensitive electrode is a thin, lightweight metallic disc; the moment we bring any object in front of it, there will be some amount of charge accumulated, and due to this accumulation of charge, its capacitance is going to change.
And, the moment its capacitance exceeds a threshold value, oscillation starts. So, there will be some sort of oscillations here, and this particular charge is due to some sort of static electricity. So, for the sensitive electrode, there will be some oscillations, but the reference electrode is kept fixed; that means, with respect to the fixed reference electrode, there will be some oscillations of the sensitive electrode or the metallic disc. Now, let me repeat a little bit. This sensitive electrode is very thin and lightweight, and an object is brought very near to it. What will happen is, there will be some amount of charge accumulation, a static-electricity sort of charge accumulation, and due to this accumulation, the capacitance of this particular metallic disc is going to increase. The moment it exceeds the threshold value of capacitance, there will be some oscillations, and as I told, the reference electrode is kept fixed, so with respect to the fixed one, there will be some oscillations. Now, this oscillation is converted into some output voltage with the help of an electronic circuit, a printed circuit board. So, the input to this printed circuit board is the oscillation, and at the output side, we will be getting some output voltage, which can be measured with the help of a voltmeter or multimeter. So, by measuring this particular change in capacitance, or the change in output voltage, in fact, we can determine the distance between the sensor and this particular object.
Now, as I told, the moment the object is brought near to the sensor, its capacitance is going to change due to the accumulation of charge. Suppose the amount of charge accumulation is more, so the percentage change in capacitance is more; then what will happen? The object has come very near to the sensor, so the distance between the sensor and the object is small. And, suppose the percentage change in capacitance is less; that means, the object is far from the sensor. So, this is the calibration curve, and by knowing this particular calibration curve and the percentage change in capacitance, we can find out the distance between the sensor and the object. So, this is the way we can determine the distance between the sensor and the object using the capacitive sensor. Thank you.
Robotics by Prof. D. K. Pratihar
Lecture 25: Robot Dynamics (Contd.)
Now, I am going to discuss how to determine the inertia tensor for a robotic link having a circular cross-section. The length of the robotic link is l, and it has a circular cross-section of radius r. Here, I am going to consider a small element, and the same element I am redrawing here; this is X, Y and Z, the coordinate system attached here. Now, let us concentrate on this small element: this is r, this particular included angle is d theta, so this arc is nothing but r d theta, and this is dr. So, the cross-sectional area of this shaded part is r d theta multiplied by dr, and to determine the volume dv, we take this area r d theta dr multiplied by dx; this is the volume. So, the differential mass dm is nothing but rho dv, that is, rho r d theta dr dx, where rho is the density; this is the differential mass of the small element. Now, let us find out its moment of inertia, and before that, let me write down that z is nothing but r cos theta and y is nothing but r sin theta. By using these expressions, we can find out the moment of inertia. The moment of inertia about XX, that is, I_XX, is nothing but the volume integration of y square plus z square dm. Now, y square plus z square is nothing but r square sin square theta plus r square cos square theta, that is, r square, and this particular dm is rho r d theta dr dx. Now, let us find out the limits of this integration.
So, theta will vary from 0 to 2 pi, r will vary from 0 to r, and x will vary from minus l to 0. Let us see how to decide the range for this particular x: here we have got the origin of the coordinate system at one end of the link, so x goes from minus l to 0, and that is the range for x. So, we can carry out this integration, and if we do, we will be getting half m r square, where m is the mass of the link having a circular cross-section; this mass m can be determined as pi r square multiplied by l, the volume, multiplied by rho, the density. So, half m r square is nothing but the moment of inertia about XX. Next, the moment of inertia about YY is nothing but the volume integration of x square plus z square dm. In place of z square, I am putting r square cos square theta, and dm is rho r d theta dr dx; if you carry out this integration with the same limits, very easily you can find out the expression for I_YY, that is, m l square by 3 plus m r square by 4. Following this, I can also find out the moment of inertia about ZZ, which is the volume integration of x square plus y square dm, where y square is r square sin square theta; if we carry out this integration, we will be getting m l square by 3 plus m r square by 4. Next come the products of inertia. The product of inertia I_XY is nothing but the volume integration of x y dm, where y is r sin theta and dm is rho r d theta dr dx; with these ranges of integration, if you carry it out, you will be getting I_XY equal to 0. Now, by following a similar method, we can also find out I_YZ.
I_YZ is nothing but the volume integration of y z dm, and if you carry out this integration, you will be getting I_YZ equal to 0. Similarly, I can also find out the product of inertia I_ZX, which is the volume integration of z x dm, and if I carry out this integration, I_ZX also comes out to be 0. Next, we have got the first moments: the volume integration of x dm, if you carry it out, gives minus half m l; the volume integration of y dm becomes 0; and the volume integration of z dm becomes 0. Now, the mass center (X_i bar, Y_i bar, Z_i bar) is nothing but (minus l by 2, 0, 0): if I draw this circular cross-section link with the X direction along the link and total length l, its mass center will be at the point whose coordinates are minus l by 2, 0, 0. Then comes the volume integration of dm, which is nothing but m. So, once we have got all such integrations, I can find out the inertia tensor. The inertia tensor for this link with circular cross-section, denoted by J_i, becomes: first row m l square by 3, 0, 0, minus m l by 2; second row 0, m r square by 4, 0, 0; third row 0, 0, m r square by 4, 0; fourth row minus m l by 2, 0, 0, m. Now, if I consider a slender link, where l is very large compared to r, then the m r square by 4 terms can be neglected; they tend to 0, and the inertia tensor becomes: first row m l square by 3, 0, 0, minus m l by 2; second row 0, 0, 0, 0; third row 0, 0, 0, 0; fourth row minus m l by 2, 0, 0, m. So, this is the inertia tensor for the robotic link having a circular cross-section of radius r.
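The 4 cross 4 inertia tensor assembled above can be written down directly from m, l and r. This sketch just packages the lecture's entries (frame at the motor end, X along the link, mass center at minus l by 2):

```python
import numpy as np

def inertia_tensor_circular_link(m, l, r):
    # Pseudo-inertia tensor J_i for a uniform circular link of length l, radius r:
    # diagonal: int x^2 dm = m l^2 / 3, int y^2 dm = int z^2 dm = m r^2 / 4;
    # first moment: int x dm = -m l / 2; total mass m.
    return np.array([
        [m * l**2 / 3.0, 0.0,            0.0,            -m * l / 2.0],
        [0.0,            m * r**2 / 4.0, 0.0,             0.0],
        [0.0,            0.0,            m * r**2 / 4.0,  0.0],
        [-m * l / 2.0,   0.0,            0.0,             m],
    ])
```

In the slender limit, r tending to 0, the m r square by 4 entries vanish and the matrix reduces to the second form given above.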
Now, till now we have considered the rectangular cross-section having dimensions a and b; I can also consider a square cross-section having dimensions a and a, and for that particular link too, we can find out the inertia tensor. These robotic links have constant cross-section, but in a real robotic link, the cross-section varies along the length; the same is true for our arm. For example, if I take one cross-section here and another cross-section here, the cross-sections are not the same. So, for an actual robotic link, which generally does not have a constant cross-section, determining the inertia tensor is not so easy; in fact, we try to take the help of some sort of finite element analysis to find out that particular inertia tensor. So, this is the way we find out the inertia tensor for a robotic link. And, once we have got the inertia tensor, we are in a position to determine the mathematical expression for the joint torque or the joint force. Here, we are going to use the Lagrange-Euler formulation, truly speaking the Lagrangian approach. The equation is something like this: d/dt of the partial derivative of L with respect to q_i dot, minus the partial derivative of L with respect to q_i, is nothing but tau_i. Now, let me define the different terms: t is nothing but time, and L is the Lagrangian of the robotic system, which is nothing but the difference between the kinetic energy and the potential energy.
Here, q_i is the generalized coordinate: if it is a rotary joint, q_i is nothing but theta_i, the joint angle, and if it is a prismatic joint, it is the link offset d_i. q_i dot is nothing but the first time derivative of q_i, and tau_i is the generalized torque if it is a rotary joint, and the generalized force if it is a linear joint. So, our aim is to determine the mathematical expression for this particular tau. Now, to find out this expression, the first thing we will have to do is find out the expression for the Lagrangian, and the Lagrangian, once again, is the difference between the kinetic energy and the potential energy; that means, we will have to determine the kinetic energy for the whole robot and the potential energy, and their difference is nothing but the Lagrangian. So, my first task is to determine the Lagrangian of this robotic system; now, let us see how to do that. Let me once again start with a particular small element, whose coordinates are expressed in its own coordinate system. For example, just like before, I have got a robotic link; the coordinate system is here, and the motor is connected here. I am trying to find out the mass center, and if I consider a particular point on the link, this point in this coordinate system is nothing but r_i with respect to i. Now, the same point with respect to the base coordinate system is nothing but r_i with respect to 0.
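To see the Lagrange-Euler recipe in action on the simplest possible case, here is a symbolic derivation for a one-link pendulum: a point mass m at the end of a massless link of length l. This example is my own illustration of the formula, not the two-link arm of the lecture, and it assumes the sympy library is available.

```python
import sympy as sp

t = sp.symbols('t')
m, l, g = sp.symbols('m l g', positive=True)
theta = sp.Function('theta')(t)          # generalized coordinate q = theta
theta_dot = sp.diff(theta, t)

# Lagrangian L = KE - PE
KE = sp.Rational(1, 2) * m * l**2 * theta_dot**2
PE = -m * g * l * sp.cos(theta)          # zero at the pivot height
L = KE - PE

# tau = d/dt( dL/d(theta_dot) ) - dL/d(theta)
tau = sp.diff(sp.diff(L, theta_dot), t) - sp.diff(L, theta)
tau = sp.expand(tau)
```

The result is m l squared times theta double-dot plus m g l sin theta: an inertia term plus a gravity term, the same structure (with D and h coefficient matrices) that appears for the two-link arm.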
So, our aim is to determine this particular r_i with respect to 0; that means, I am trying to find out this particular point with respect to the base coordinate system, provided r_i with respect to i is known. Now, this r_i with respect to i has the components X_i, Y_i, Z_i, and r_i with respect to 0 is nothing but T_i with respect to 0 multiplied by r_i with respect to i. Here, T_i with respect to 0 is nothing but T_1 with respect to 0 multiplied by T_2 with respect to 1, and so on, up to T_i with respect to i minus 1, and this particular T is nothing but the transformation matrix. Now, once again, let me repeat: this is the circular link, and my coordinate system is here, ok? So, I have got a point here, and with respect to this particular coordinate system, its position is r_i with respect to i; and I have got the base coordinate system here, and the same point with respect to it is nothing but your r_i with respect to 0. So, r_i with respect to 0 is nothing but T_i with respect to 0 multiplied by r_i with respect to i, where this is the i-th coordinate system and this is the base coordinate system. So, in the i-th coordinate system, this particular position vector is known, and I am trying to find out the same point with respect to the base coordinate system. So, this is the way we can represent r_i with respect to 0. Now, what I am going to do is find out the kinetic energy of the whole robot. The kinetic energy is nothing but half m v squared, that is, half the mass multiplied by the velocity squared.
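The chain r_i with respect to 0 = T_1^0 T_2^1 ... T_i^(i-1) r_i^i can be sketched numerically. As an illustrative assumption, take a planar arm with two revolute joints, each link of unit length, using a simple rotate-then-translate convention for each transform:

```python
import numpy as np

def planar_link(theta, a):
    """Homogeneous transform of one planar revolute link: rotate by theta
    about Z, then translate a along the rotated X (illustrative convention)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c,  -s,  0.0, a * c],
                     [s,   c,  0.0, a * s],
                     [0.0, 0.0, 1.0, 0.0],
                     [0.0, 0.0, 0.0, 1.0]])

# Two-link arm, both links of length 1, both joints at 90 degrees.
T1_0 = planar_link(np.pi / 2, 1.0)
T2_1 = planar_link(np.pi / 2, 1.0)
T2_0 = T1_0 @ T2_1                      # T_2^0 = T_1^0 T_2^1

r_2 = np.array([0.0, 0.0, 0.0, 1.0])   # point at the link-2 frame origin
r_0 = T2_0 @ r_2                        # same point in the base frame
```

With both joints at 90 degrees the elbow sits at (0, 1) and the second link folds back along −x, so r_0 comes out at (−1, 1, 0) in homogeneous form.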
Now, here, actually what we are going to do: first, we find the kinetic energy of one small element lying on a particular link, then the kinetic energy of the whole link, say the i-th link, and after that, the kinetic energy of the whole robot, the whole robotic system. Now, to determine the kinetic energy of the particle, we first have to find its velocity. The velocity of the particle with respect to the base coordinate frame, that is, V_i with respect to 0, is nothing but the rate of change of its position, that is, d/dt of r_i with respect to 0. Here, r_i with respect to 0 is the position of that particular differential mass with respect to the base coordinate frame, and the rate of change of that particular position with respect to time is V_i with respect to 0. Now, this r_i with respect to 0, as I told, can be written as T_i with respect to 0 multiplied by r_i with respect to i, and T_i with respect to 0 is nothing but T_1 with respect to 0 multiplied by T_2 with respect to 1, and so on, the last factor being T_i with respect to i minus 1; T stands for the transformation matrix. Now, I will have to find out the derivative with respect to time. Applying the product rule to this chain of transformation matrices gives a sum of terms: in the first term, only T_1 with respect to 0 carries the dot, so it reads T_1-dot with respect to 0 multiplied by T_2 with respect to 1 up to T_i with respect to i minus 1; in the second term, only T_2 with respect to 1 carries the dot; and so on, until the last such term, in which only T_i with respect to i minus 1 carries the dot. Each of these terms is multiplied by r_i with respect to i.

And finally, there is one more term: T_i with respect to 0 multiplied by r_i-dot with respect to i. So, this is the way we can find out this particular time derivative. Now, let us concentrate on this last term. This r_i-dot with respect to i means the following: supposing that I have got one rigid robotic link, something like this, and I concentrate on a particular point on it, with the coordinate system attached to the link itself. This is the position of this particular differential mass, and this is nothing but r_i with respect to i. Now, the rate of change of this particular position with respect to time will be 0, because this is a rigid link. So, for this particular rigid link, r_i-dot with respect to i is equal to 0, and that last term becomes equal to 0; we are left with the product-rule terms only. Now, here I just want to mention that if we consider a flexible robotic link, we cannot assume that r_i-dot with respect to i is equal to 0. This term becomes nonzero for a flexible link, and we do have a few robots having flexible links also. Determining V_i with respect to 0 is then not so easy: we will have to consider this particular flexible link, this particular term is nonzero, and to determine it, once again, we will have to take the help of finite element analysis. Now, if you can see, this sum of product-rule terms can be written in a short form, and let us concentrate on that next.
So, this is d/dt, that is, the derivative with respect to time, of T_i with respect to 0, the transformation matrix. Since T_i with respect to 0 depends on time only through the joint variables, by the chain rule, d/dt of T_i with respect to 0 is nothing but the summation over j from 1 to i of the partial derivative of T_i with respect to 0 with respect to q_j, multiplied by dq_j/dt, that is, q_j dot. So, the earlier expansion can be written in this short form, with r_i with respect to i multiplying it. Now, here, I am just going to use another symbol: the partial derivative of T_i with respect to 0 with respect to q_j is nothing but U_ij. That means V_i with respect to 0 can be written as the summation over j from 1 to i of U_ij multiplied by q_j dot, multiplied by r_i with respect to i. Similarly, U_ijk is nothing but the partial derivative of U_ij with respect to q_k. These particular symbols we are using just to write things down in a very compact form; so, let us see how to write this particular thing in a very compact form. Now, the kinetic energy of the particle having the differential mass is nothing but half m v squared, and here, the mass of this particular differential element is dm.
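The compact form V_i^0 = sum over j of U_ij q_j dot r_i^i can be checked numerically in the single-joint case, where U is just dT/dtheta. The joint angle, joint rate and point coordinates below are illustrative assumptions; the check compares the analytic U against a numerical time derivative of the position:

```python
import numpy as np

def T_revolute(theta):
    """Rotation about Z (the 3x3 rotation part of the transform)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c,  -s,  0.0],
                     [s,   c,  0.0],
                     [0.0, 0.0, 1.0]])

def U(theta):
    """U = dT/dtheta, written out analytically."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[-s,  -c,  0.0],
                     [ c,  -s,  0.0],
                     [0.0, 0.0, 0.0]])

theta, theta_dot = 0.7, 2.0
r_local = np.array([0.3, 0.1, 0.0])   # fixed point on the rigid link

# Single-joint case of  v_i^0 = sum_j U_ij q_j_dot r_i^i
v = U(theta) @ r_local * theta_dot

# Cross-check with a central-difference time derivative of the position.
h = 1e-6
v_num = (T_revolute(theta + theta_dot * h) @ r_local
         - T_revolute(theta - theta_dot * h) @ r_local) / (2 * h)
```

The two velocity vectors agree to numerical precision, which is exactly what the chain-rule rewriting claims.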
So, the kinetic energy of the differential mass is half dm v squared. Now, this particular velocity is a vector having 3 components, namely x_i dot, y_i dot and z_i dot. So, the kinetic energy of the differential mass dm, the small particle, that is, dK_i, is nothing but half dm multiplied by x_i dot squared plus y_i dot squared plus z_i dot squared. Now, this V_i with respect to 0 is a 3 cross 1 column matrix with these 3 components. If I multiply V_i with respect to 0 by its transpose, I will be getting a 3 cross 3 matrix: the first row, first column entry is x_i dot squared; the first row, second column is x_i dot y_i dot; and so on, with y_i dot squared and z_i dot squared also on the diagonal, and the cross terms like x_i dot y_i dot, y_i dot z_i dot and z_i dot x_i dot off the diagonal. Now, here, we concentrate only on the diagonal elements, and the sum of the diagonal elements is nothing but the trace of this particular matrix. So, x_i dot squared plus y_i dot squared plus z_i dot squared is nothing but the trace of V_i with respect to 0 multiplied by V_i with respect to 0 transpose. Therefore, dK_i is nothing but half dm multiplied by the trace of this product. Now, we have already derived that V_i with respect to 0 is nothing but the summation over a from 1 to i of U_ia q_a dot r_i with respect to i, and similarly, for the transpose we write the summation over b from 1 to i of U_ib q_b dot r_i with respect to i, transposed; so, dK_i becomes half the trace of the product of these two summations, multiplied by dm.
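The trace identity used here — trace of v v transpose equals x dot squared plus y dot squared plus z dot squared — is easy to verify numerically with an arbitrary velocity vector:

```python
import numpy as np

v = np.array([[1.0], [2.0], [3.0]])     # 3x1 velocity column vector

outer = v @ v.T                          # 3x3 matrix v v^T
speed_sq_from_trace = np.trace(outer)    # sum of the diagonal: 1 + 4 + 9
speed_sq_direct = (v.T @ v).item()       # x^2 + y^2 + z^2, computed directly
```

Both routes give 14, which is why the kinetic energy can be rewritten under a trace: the trace picks out exactly the diagonal squared terms and discards the cross terms.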
Now, here, one thing I just want to mention: this summation I have taken over a from 1 to i, and here I have taken it over b from 1 to i, and this I have done very purposefully. For example, if it is a rotary joint, in place of q_a dot I am just going to write theta_a dot, and in place of q_b dot I am just going to write theta_b dot. Now, if you see the final expression which I am going to derive, there is a possibility that there will be a few cross terms like theta_1 dot multiplied by theta_2 dot, or theta_2 dot multiplied by theta_3 dot, and so on. Now, if I do not keep two separate indices for these particular summations, I am just going to miss this particular combination of theta_1 dot multiplied by theta_2 dot. On the other hand, with both summations running from 1 to i, I will still be getting the squared terms like theta_1 dot squared as the special case where a becomes equal to b. So, just to keep this particular possibility alive, I have taken two separate indices for these particular summations. Now, once you have written it like this, we can rearrange in this particular format: half trace of the summation over a from 1 to i and the summation over b from 1 to i of U_ia, then r_i with respect to i multiplied by r_i with respect to i transpose, then U_ib transpose, then q_a dot q_b dot, all multiplied by dm. This can be further rearranged in this particular format: half trace of the summation over a from 1 to i and the summation over b from 1 to i of U_ia, then r_i with respect to i dm r_i with respect to i transpose, then U_ib transpose, then q_a dot q_b dot. So, it can be rearranged in this particular format. Thank you.
Robotics by Prof. D. K. Pratihar
Lecture 15: Robot Kinematics (Contd.)
Let us take the example of my own hand in the form of a serial manipulator. Say, this is the serial manipulator, this is the end-effector ok? Now, if I want to find out the position and orientation of this particular end-effector, with respect to the fixed coordinate system, then, what I will have to do is: I will have to assign the coordinate system at the different joints, and then, we will have to assign the frame and talk about the frame transformation. Now, to make it possible, actually, the first thing we will have to do is: at each of this particular joint, we will have to assign the coordinate system. Now, to assign the coordinate system, we will have to follow certain rules, and these rules, actually, are nothing but the Denavit Hartenberg notation rules, and this particular concept was proposed in the year 1955 by Denavit and Hartenberg. Now, according to this Denavit and Hartenberg rules, actually, we can assign coordinate systems at the different joints. Then, very easily, we will be able to find out the position and orientation of this particular end-effector with respect to the base coordinate frame and vice-versa. So, now, we are going to discuss the Denavit Hartenberg notation and Denavit Hartenberg rules to assign the coordinate system at the different joints. Now, before I go for that, what we will have to do is: we will have to define a few parameters. For example, say, we will have to define two link parameters and there are two joint parameters. Now, before I am just going for deriving or explaining the meaning of this link parameters and the joint parameters, let me tell you the purpose of using them. Now, the link parameters are used to represent the structure of a link. So, if I want to represent the structure of a link, I will have to use the link parameter. Similarly, the joint parameters are used to represent relative position of the neighboring links. 
So, the relative position of the neighboring links, if you want to find out, we take the help of the joint parameters, ok? Now, I am just going to define the link and joint parameters. Now, here, the first thing I am just going to define is the link parameter, that is, the length of a link. The length of a link that is denoted by a_i, and by definition it is the mutual perpendicular distance between the axis_i minus 1 and axis_i. Now, before that, let me tell you that in this particular sketch, supposing that, this is the joint_i, similarly, this is joint_i minus 1, this is joint_i minus 2. So, for this particular joint_i, this is the axis_i and for the joint_i minus 1, the axis is axis_i minus 1 and for joint_i minus 2, the axis is denoted by i minus 2. Now, here, if you see this particular axis_i and axis_i minus 1. So, this upper part is actually in 3-D. So, if I consider this axis_i and axis_i minus 1, they may lie on the same plane, they may not lie on the same plane, too. For example, say if I consider say, this is one axis, say axis_i and this is another axis, the axis_i minus 1, they could be like this, they could be like this, they could be like this. So, they may not lie on the same plane or there is a possibility that both of them are lying on the same plane, same 2-D plane. Now, supposing that, they are not lying on the same plane, they are lying on the two different planes. So, this is axis_i, this is axis i_minus 1, and they are lying on two different planes, if they are lying on the two different planes like this, I can find out the mutual perpendicular distance, and that particular mutual perpendicular distance is nothing but the length of link_i, that is, denoted by a_i. Now, similarly, if they are lying on the same 2-D plane, the plane of the board now, here, their mutual perpendicular distance will be this. So, this will be the mutual perpendicular distance and they are parallel. So, if they are parallel and lying on the same 2-D plane. 
So, I can find out the mutual perpendicular distance. Once again, let me repeat that particular definition, the length of link, that is, a_i, it is the mutual perpendicular distance between axis_i minus 1 and axis_i. So, this is nothing but axis_i, this is nothing but axis_i minus 1 and here, this particular is nothing but is you’re a_i, that is, the length of the link and this particular angle is 90 degrees. And, this axis_i and axis_i minus 1, according to this figure, might be, they are lying on two different planes, here. So, this particular angle is 90 degree and this is also 90 degree, and this is the mutual perpendicular distance and this a_i is nothing but the length of the link i. Now, let me take one very special case, supposing that axis_i and axis_i minus 1. So, this is my axis_i minus 1 and this is your axis_i and they are going to intersect at this particular point, and supposing that, they are lying on the same 2-D plane. And, if they are going to intersect at this particular point, then what will be the value for this particular a_i? a_i will become equal to 0. For this type of case, actually, a_i becomes equal to 0. So, this particular, the length of the link could be 0 and it could be nonzero. So, this is what you mean by the length of link i. The next is the angle of twist of link, I think here, there is a typo-graphical error. So, this particular symbol, we generally use as alpha. So, this is alpha_i. So, in place of a_i, this is actually alpha_i, this is the alpha_i, ok? So, angle of twist of link, that is denoted by alpha_i, it is defined as the angle between the axis_i minus 1 and axis_i, ok? Now, this is the axis_i, this is my axis_i minus 1. So, what you do is: this particular axis i, we draw it here, now, if I just draw it here, this particular axis_i. So, this particular line is parallel to this, ok? Now, here, I will be getting one angle, the angle between axis_i minus 1 and axis_i measured from axis_i minus 1. 
So, this angle is nothing but alpha, that is, the angle of twist. So, this is actually the angle between axis_i minus 1 and axis_i. Now, remember, this particular angle alpha, that is, the angle of twist, could be either positive or negative, or sometimes it may also become 0: if the two axes are parallel, then this particular angle will become equal to 0. So, this angle is measured from axis_i minus 1 to axis_i, and here it is anticlockwise, so this is positive alpha; similarly, if it is found to be clockwise, alpha could be negative also, ok? So, these two things are actually nothing but the link parameters, and the link parameters are used to represent the structure of a particular link. For example, this is link_i, and this is your link_i minus 1. So, this is a_i, and similarly, this is your a_i minus 1, that is, the mutual perpendicular distance between axis_i minus 2 and axis_i minus 1. So, this is a_i minus 1 and this is your a_i. So, till now, I have defined only the two link parameters; now, I am just going to define the joint parameters. The purpose of using joint parameters, I have already told: just to represent the relative position of the neighboring links. Now, here, there are two joint parameters. One is called the offset of link, denoted by d_i, the link offset. And this particular link offset is nothing but the distance between two points: the first point is the point where a_i minus 1 intersects axis_i minus 1, and the other point is where a_i intersects axis_i minus 1. So, this is the distance measured from the point where a_i minus 1 intersects axis_i minus 1 to the point where a_i intersects axis_i minus 1. So, this particular distance is actually nothing but the link offset, and that is denoted by d_i, ok? So, this is actually the d_i.
Now, this particular d_i, link offset, could be 0, sometimes the link offset becomes equal to 0 and it could be the positive value also and in some special case due to the coordinate system, it may take a negative value, also. Now, then comes the joint angle that is denoted by theta_i, now, this particular joint angle is defined as the angle between the extension of a_i minus 1 and a_i measured about axis_i minus 1. So, if I extend, this particular a_i minus 1. So, I will be getting this particular line and so, this line is actually parallel to this particular a_i, now, this angle between the extension of a_i minus 1 and a_i measured from a_i minus 1 is nothing but the joint angle, that is, theta_i and theta_i is actually measured from an extension of a_i minus 1 to a_i. Now, this theta i it could be once again 0, it could be positive or it could be negative. Now, here, I have put actually two notes: for a revolute joint, theta_i is the variable, it is very obvious, for example, if I take the example of a revolute joint like this, this particular joint is a revolute joint. So, if I take this is the axis about which I am taking the rotation and if you concentrate on this particular angle, this particular angle is the joint angle and that is actually a variable, ok? So, for a revolute joint, theta_i is the variable and what about the other three parameters. Other three parameters, like a_i, alpha_i, d_i are kept constant, similarly, for a prismatic joint, d_i is link offset, which is the variable and the other three remaining parameters, for example, say a_i, alpha_i and theta_i, these things are kept constant. So, for revolute joint, theta_i is the variable, for prismatic joint, d_i is the variable. 
Now, till now, actually, we have defined two link parameters, and two joint parameters, and with the help of these four terms, I am just going to now state the rules to be used to assign the coordinate system at a particular joint, how to assign the coordinate system at a particular joint. The rules for the coordinate assignment, we will have to find out. Now, remember one thing, the first thing we will have to do is: we will have to assign the Z coordinate system, next we try to find out the X coordinate system, and after that, we go for the Y coordinate system. Now, let us see, how to represent or how to find out that Z_i axis first. The rule for determining the Z_i axis is as follows: Z_i is an axis about which the rotation is considered and along which the translation takes place, now let us try to understand. So, Z_i is an axis about which the rotation is considered, now, here according to this so, for example, for this particular joint. So, this is the axis about which I am taking the rotation. So, this is nothing but your Z_i. Similarly, here, you can see that, this is the axis about which I am taking the rotation at this particular joint. So, this is actually the axis, that is, Z_i minus 1. So, this is Z_i and this is your Z_i minus 1, now let me repeat. Now let me take the example of the same revolute joint. Now, this is the axis about which I am taking the rotation. So, this particular thing is going to represent my Z axis, and if it is a linear joint, if there is a translation here, I have already taken the example of linear joint like say key and keyway for example, let me prepare a very simple sketch once again. So, for example, say if I take this type of sketch for this linear joint, which I have already discussed, this type of linear joint if I take, and here, if I just insert one key sort of thing. So, this particular key, I am just going to insert here, ok? So, this will be the Z direction. 
So, this is the direction of Z, actually, for the linear joint: Z is the axis along which this particular translation takes place, and if it is a rotary joint, Z is the axis about which the rotation takes place, ok? Now, I am just going to consider one case: if the Z_i minus 1 and Z_i axes are parallel to each other, then the X axis will be directed from Z_i minus 1 to Z_i along their common normal. Now, as I mentioned, Z_i and Z_i minus 1 may belong to two different planes, but they could be parallel, or they could be lying on the same plane and be parallel. So, if they are found to be parallel, this mutually perpendicular direction, that is, the way we defined this particular a_i, is actually the direction of X. So, X will be along the length of the link. Once again, let me repeat: if the two Z axes are parallel, X will be along their common normal; that means, here Z_i minus 1 and Z_i are parallel, this is the a_i direction, and this is nothing but the X direction, that is, your X_i direction. So, I have already got this particular Z_i and X_i, and similarly, this is your X_i minus 1 and this is your Z_i minus 1, ok? Now, there could be some other cases, ok? So, I am just going to discuss these special cases. For example, if the Z_i minus 1 and Z_i axes intersect each other, then the X axis can be selected in either of the two remaining directions. Supposing that this is my Z_i minus 1 and this is the Z_i axis, and they are intersecting: if they are intersecting, then what will happen to the value of the length of the link a_i? That will become equal to 0. And if they are intersecting, then X can be selected along either of the two remaining directions at that particular joint.

So, Z has been selected, and I have got two remaining directions, and out of these two remaining directions, any one can be selected as X. So, this is a very special case. Now, similarly, there could be some other very special cases: if the Z_i minus 1 and Z_i axes act along a straight line, that means they are collinear, then the X axis can be selected anywhere in a plane perpendicular to them. So, if they are found to be collinear, it is a very special case; for example, this is my Z_i minus 1 and this is my Z_i, lying along the same line. So, what I will have to do is: I will have to consider a plane which is perpendicular to them. Now, one plane perpendicular to them could be something like this, and X may lie on this particular plane, ok? So, this is actually one possibility; similarly, there could be another possibility: I define another plane which is perpendicular to both, and my X can lie there also, ok? So, both the possibilities are there. So, for these special cases, we will have to find out that particular X direction very carefully. So, till now, we have discussed how to determine the Z axis and then the X axis, and this particular sequence has to be maintained; that means, first, we will have to find out the Z axis, then we go for the X axis, and after that, we try to find out the Y axis. Now, the Y axis is nothing but Z cross X. Now, Z and X are unit vectors, so I can find out the cross product of these particular Z and X, and Y will be nothing but Z cross X. Now, let me take a very simple example: supposing that this is my Z.
So, I have already defined that, and say this is my X; that also I have got. Now, I will have to find out the Y direction: Z cross X. According to the rule of the cross product, Z cross X will be something like this, so this will be the direction of this particular Y. So, Z cross X will be Y; are you getting the point? So, this is the way, actually, we will have to find out the Y direction. Now, say, this is my Z and this is my X, ok? Now, Z cross X will be something like this, so this will be Y. So, we will have to be very careful while determining this particular Y direction. So, Y is nothing but your Z cross X, and by following this particular rule at each of the joints, we can actually define the coordinate system, that is, X, Y and Z. And once you have got this particular thing, now what you will have to do is: we will have to find out the transformation matrix, that is, T_i with respect to i minus 1. So, to determine this T_i with respect to i minus 1: this is nothing but frame i, and this is nothing but frame i minus 1, and my aim is to determine that T_i with respect to i minus 1. Let us see how to find it out. So, my i minus 1 frame is here, and the i-th frame is here; so, from here, I will have to reach this particular frame. How to do it? Now, to do that, actually, what we do is: I am just going to follow one sequence of rotations and translations. So, let me start from X_i minus 1 and Z_i minus 1. As I told, this is nothing but X_i minus 1 and this is nothing but Z_i minus 1. So, what I will have to do is: I will have to take some rotation by an angle theta_i about this particular Z axis, ok? Then only I will be able to move here, ok? So, I will be getting X_A, and since I am taking rotation about the Z axis, Z_A will remain the same as Z_i minus 1, ok? So, first, I will have to take some rotation sort of thing, which I am just going to write down.
So, the first operation is rotation about Z by angle theta_i. Next, from here, I will have to reach this particular point; how to reach it? I will have to translate along Z, so I am just going to translate along Z by d_i, and I will be getting this particular X_B and this Z_B. And after that, actually, I am just going to take some rotation about this particular X axis by an angle alpha. So, I am just going to take rotation about X by angle alpha, and then I will be getting X_C, because I have taken the rotation about the X axis, and Z_B will take the position Z_C; so, Z_C will be something like this. And once we have got this particular X_C and Z_C, now I can translate along the X direction by this particular amount a_i. So, I will be able to reach this point, and I will be getting these Z_i and X_i. So, with the help of these rotations and translations, starting from frame i minus 1, I am just going to reach frame i. Thank you.
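The four-step sequence just described — rotation about Z by theta, translation along Z by d, rotation about X by alpha, translation along X by a — composes, by post-multiplying the elementary matrices in that order, into a single 4x4 transform T_i with respect to i minus 1. A minimal sketch (the numeric parameter values in the usage line are illustrative):

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """T_i^{i-1} built from the sequence in the text:
    Rot(Z, theta) -> Trans(Z, d) -> Rot(X, alpha) -> Trans(X, a)."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    rot_z   = np.array([[ct, -st, 0, 0], [st, ct, 0, 0],
                        [0, 0, 1, 0],    [0, 0, 0, 1]], dtype=float)
    trans_z = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                        [0, 0, 1, d], [0, 0, 0, 1]], dtype=float)
    rot_x   = np.array([[1, 0, 0, 0], [0, ca, -sa, 0],
                        [0, sa, ca, 0], [0, 0, 0, 1]], dtype=float)
    trans_x = np.array([[1, 0, 0, a], [0, 1, 0, 0],
                        [0, 0, 1, 0], [0, 0, 0, 1]], dtype=float)
    # Successive motions about/along the *current* axes: post-multiply.
    return rot_z @ trans_z @ rot_x @ trans_x

T = dh_transform(theta=0.4, d=0.2, a=0.5, alpha=0.3)
```

Multiplying the four factors out reproduces the familiar closed-form matrix with rows (cos theta, -sin theta cos alpha, sin theta sin alpha, a cos theta) and so on; note that Rot(X, alpha) and Trans(X, a) commute, which is why textbook orderings of those two steps differ without changing the result.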
Robotics by Prof. D. K. Pratihar
Lecture 36: Robot Vision (Contd.)
We have discussed that for edge detection, we can use the gradient operator like 1st order gradient or the 2nd order derivative. Now, here, let us see what is happening physically; if I consider the 1st order derivative or the 2nd order derivative during the edge detection. Now, once again let me consider the light object; let me consider the light object and the dark background. So, this is actually the dark background, the black one is the dark background and here, I have got the light object. Now, if I just do the scanning along this particular direction and try to find out the light intensity value; so if I just do the scanning in this particular direction, I will be getting this type of light intensity distribution. So, up to this black portion, the light intensity value will be small and in the light zone actually, the light intensity value will be more. And, once again, in the dark zone, the light intensity value will be less and in between actually here, this particular light intensity value is going to increase and it will reach the maximum. And, starting from here, the light intensity value is going to decrease and it is going to reach the minimum value. So, this is actually the distribution of light intensity, if I do the scanning in this particular direction, ok. Now, the moment we are taking the help of gradient operator, that is 1st order derivative. So, what happens is, you are here, so, there is no change of light intensity value; that means, the rate of change is 0. So, this indicates your 0, then comes from here to here; so, from here to here there is an increment, there is increase in light intensity value and this particular rate is constant, this is the straight line. So, it has got the constant slope; so, this is actually the amount of the 1st derivative. Then, comes your here; so, from here to here, there is no change in light intensity. 
So, once again, this will be the distribution of the 1st derivative: from here to here, there is a decrease in light intensity, that means there is a constant rate of decrement, so the 1st derivative takes a constant negative value, and then the rate of change is 0 again. So, this is what we mean by the 1st derivative of this particular change in light intensity. Now, if I consider the 2nd derivative, let us see what happens. Up to this point, there is no change of slope, so the 2nd derivative is 0. Then, at the foot of the rising edge, the slope suddenly jumps from 0 to a positive constant, so the 2nd derivative shows a positive pulse; along the ramp, the slope is constant, so the 2nd derivative is 0 again; and at the top of the ramp, the slope drops back to 0, so there is a negative pulse. Similarly, at the falling edge, there is first a negative pulse, where the slope drops from 0 to a negative constant, and then a positive pulse, where it comes back to 0. So, the 2nd derivative produces a pair of pulses of opposite sign at each edge, and the sign change between them locates the edge. Now, if I just compare this particular distribution of light intensity with displacement, then the 1st derivative is nothing but the velocity and the 2nd derivative is nothing but the acceleration sort of thing, ok? This is actually what is happening the moment we are using the gradient operator, the 1st order derivative or the 2nd order derivative, as a tool for edge detection. So, this particular derivative is going to detect the edge between the light object and the dark background. So, this is the way these particular gradient operators are working just to detect that particular edge. Now, I am just going to discuss the concept of the boundary descriptor.
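The scan-line behaviour described above can be reproduced on a small synthetic intensity profile (the numbers are illustrative, standing for a dark background, a ramp up to a light object, and a ramp back down):

```python
import numpy as np

# One scan line: dark background, light object, dark background.
profile = np.array([10, 10, 10, 30, 50, 50, 50, 30, 10, 10, 10], dtype=float)

first = np.diff(profile)          # constant positive on the rising edge,
                                  # constant negative on the falling edge
second = np.diff(profile, n=2)    # opposite-sign pulse pairs at each edge;
                                  # the sign change localises the edge
```

Here `first` is [0, 0, 20, 20, 0, 0, -20, -20, 0, 0] and `second` is [0, 20, 0, -20, 0, -20, 0, 20, 0], matching the plateau/step shapes sketched in the lecture for the 1st and 2nd derivatives.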
Now, before we start discussing on this particular descriptor, let us try to understand the reason behind going for this boundary descriptor. Now, these boundary descriptors are used just to represent the boundary of this particular object. Supposing that I have got say on the white background, say this is the background and on this particular background, I have got one object something like this. So, if I have got this particular object, now the boundary of this particular object, I will have to represent for further processing. Now, how to represent this particular boundary? Now, to represent the boundary, actually we take the help of the boundary descriptor and if you see the literature, we have got a few boundary descriptors. So, here I am just going to discuss two boundary descriptors, in detail, and these are very frequently used. The first one that is called the chain code; now, here the boundary of the object is represented with the help of some straight line segments of pre-specified length and direction. So, what we do is, we generally take the help of either the 4 directional chain code or 8 directional chain code. Now, let us try to see, what is there in 4 directional chain code; it is very simple, it shows only 4 directions denoted by 0, 1, 2 and 3 and they are 90 degree apart. So, this is the 4 directional chain code and if I take the 8 directional chain code starting from 0, 1, 2; up to say 7. So, this is called the 8 directional chain code and here, the included angle is 45 degree and here, the included angle is your 90 degree. Now, to represent the boundary of the object; if I take the help of say 8 directional chain code; so, there is a possibility that we will be able to represent the boundary more accurately compared to the 4 directional chain code. Now, let us see, how to use that the 4 directional chain code or 8 directional chain code to represent the boundary. 
Now, here, I am just going to use one 4 directional chain code just to represent this type of the object. Now, supposing that this particular thing, this is nothing, but say white, this is the white background. So, this is the white background and here, we have got one object, whose boundary is something like this. Now, this particular boundary, I will have to represent with the help of some numbers or mathematically, so that we can do some sort of processing; the further processing, ok. Now to represent this particular boundary in the computer program; we will have to use the set of numbers, because computer program does not know anything except this particular the numbers, ok. Now, here supposing that the object, the boundary of the object is something like this; so, this is a very simple example. Suppose that this is the object, the boundary of this particular object and this particular object is, say dark object. So, this is the dark object on say light background, ok. Now, how to represent this particular the boundary or the object? As I told, we are going to take the help of 4 directional chain code. So, this is 0; this is 1, this is 2 and this is your 3 and let us start from any point; let me start from here. Now, if I just start from here; if I just start from here. So, this is the starting point now from here; so I will have to move along the boundary; now this is the direction of 1. So from here to here; so I move along the direction of 1; by how much amount? By some pre specified the fixed length ok; so, I will be here. Now, from here; so, I will have to reach this particular point, once again, I will move along this particular 1. So, I will be writing 1 here; then from here; so I will be moving towards this side, ok. This is the direction of 0, so here I will write 0 and from here, I will move along this particular direction this is the direction of 3; so, I will have to write 3, here. 
Then, from here, this is once again the direction of 0; once again, from here to here, the direction of 0; then from here to here, this is nothing but the direction of 1; then this is the direction of 0 and this is the direction of 3. So, this particular direction is the direction of 3; then from here, I am just going to move towards 2. So, this is the direction of 2, 2, 2 and I am just going to reach the starting point. Now, to represent this particular boundary, what we do is, we just go on writing all such numerical values in this particular sequence. For example, we start from 1. So, I have got a 1 here, next is 1, next is 0, then comes 3, then 0 0 0 0, then 1, 1, and we follow that, and then we will be coming back to the start and we have got all such 2s here. So, this particular sequence of numerical values is going to represent this particular boundary of the object. And, in a computer program, this particular object will be represented like this, and then we can do the further processing with the help of the computer program. So, this is the way, actually, we can represent the boundary with the help of some boundary descriptor. Now, if you see the literature, we have got some other type of boundary descriptor also and that is known as the signature. Now, signature is actually nothing but the functional representation or the mathematical representation of the boundary of the object. Now, let me take a very simple example of, say, a circular object; if I take a circular object, for example, this type of circular object, and I want to represent its boundary, how to represent it? So, what we do is, we try to find out its center. So, this is the center and this is the fixed reference line with respect to which I am just going to measure the angle. And, r is actually the distance between the center and the point lying on this particular boundary of the object.
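The 4-directional chain code described above can be sketched as a small Python routine. The boundary points here are a made-up 2×1 rectangle traversed counter-clockwise, not the staircase object from the lecture:

```python
# 4-directional chain code: each unit move along the boundary is encoded as
# 0 (+x), 1 (+y), 2 (-x) or 3 (-y), the directions being 90 degrees apart.
DIR4 = {(1, 0): 0, (0, 1): 1, (-1, 0): 2, (0, -1): 3}

def chain_code_4(points):
    """Encode a closed boundary given as successive unit-spaced grid points."""
    codes = []
    # pair each point with the next one, wrapping back to the start
    for (x0, y0), (x1, y1) in zip(points, points[1:] + points[:1]):
        codes.append(DIR4[(x1 - x0, y1 - y0)])
    return codes

# Hypothetical 2x1 rectangular boundary, traversed counter-clockwise.
boundary = [(0, 0), (1, 0), (2, 0), (2, 1), (1, 1), (0, 1)]
print(chain_code_4(boundary))   # → [0, 0, 1, 2, 2, 3]
```

This list of numbers is exactly the kind of representation a computer program can process further, as described in the lecture.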
So, this r is the distance between the center and the boundary point and I am just going to start from this particular reference, where theta is made equal to 0. So, corresponding to this particular reference theta is made equal to 0 ok. So, theta is made equal to 0; so, this is in radian. So, all such things are in say radian this corresponds to your say 0; this is pi by 4 means your 45 degree, pi by 2 with respect to these; so this is 90 degree; then 3 pi by 4, pi, 5 pi by 4, 3 pi by 2, 7 pi by 4 and then 2 pi. So, I will be coming back, ok. So, 360 degree rotation and here, so as I am moving along this particular theta ok; so, the distance between the center point and the point on the boundary is kept equal to r; A is nothing, but the radius of this particular the circle. So, A is the radius and the distance between center and the boundary that is nothing but r. So, if I plot r as a function of theta, that is nothing, but is your r (theta). So, there is a possibility that I will be getting one straight line because starting from here up to here. So, the value of r that will remain same as your equal to A that is nothing, but the radius. Now, this particular straight line can represent the circular object, mathematically, ok. So, this is nothing, but like say r is equal to A, that type of equation, ok. So, this particular equation is going to represent this circular object lying on the computer screen. Now, this particular circular object with the help of this equation we can represent and then, we can do some sort of the further processing; this is actually the method of signature. Now, let me take another example just to make it more clear; so, I am just going to take the help of another example ok. Now here supposing that I have got one object and this is nothing, but a square object. 
So, if I take a square object like this; so, this is nothing but the square object, and 2A is actually the dimension of this particular side and that particular side. So, I can find out the distance between this particular center and the point which is there on this particular boundary, and that is denoted by r, and this r as a function of theta I can plot. So, this is theta in radian, and this corresponds to theta equals 0. So, this is the reference; so, if I am here, then this r (theta) is nothing but equal to A, corresponding to this particular theta equals 0. So, r is nothing but A; then, corresponding to theta equals 45 degree, I will be getting the distance between the center and the boundary, and that is nothing but root 2 A. Similarly, at pi by 2, that means I am here, so once again r is equal to A; then comes 3 pi by 4, that means I am here, so once again it is root 2 A; then, corresponding to pi, this will be your A; then once again here it is root 2 A, A, root 2 A, A and so on. And, this particular distribution will be a non-linear distribution; so, you will be getting this type of non-linear distribution of r (theta) with respect to theta. This type of distribution we will be getting corresponding to this type of square object. Now, in place of a square, if I just take the rectangular shape, for example, this type of rectangular shape, once again I will be able to find out a signature; for example, this side is say 2A and this is your 2B, and supposing that 2A is greater than 2B, or A is greater than B, and corresponding to this, we can also find out another signature. So, corresponding to this particular square object, this is the signature which we are getting, and once you are getting this type of plot, it can be expressed mathematically.
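A minimal sketch of the signature r(theta) for the two shapes discussed, a circle of radius A and a square of side 2A centred at the origin. The closed form A / max(|cos theta|, |sin theta|) for the square is a standard result assumed here, not stated explicitly in the lecture; it reproduces the values A at theta = 0 and root-2 A at theta = 45 degrees:

```python
import math

def signature_circle(theta, A):
    # circle of radius A: r(theta) is constant, so the plot is a straight line
    return A

def signature_square(theta, A):
    # square of side 2A centred at the origin: distance from centre to boundary
    return A / max(abs(math.cos(theta)), abs(math.sin(theta)))

A = 1.0
print(signature_square(0.0, A))          # A at theta = 0
print(signature_square(math.pi / 4, A))  # root 2 * A at theta = 45 degrees
print(signature_circle(math.pi / 3, A))  # always A, whatever theta is
```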
And, if I can express it mathematically, then further processing becomes easier. Now, I am just going to discuss how to identify the multiple objects which are present in one photograph or in one image. Let me take a very simple example: after doing all such calculations, supposing that I am just going to get one scenario; for example, say this is the background. And, on this particular background, supposing that I have got one object like this, I have got another object like this, and I have got another object like this. So, this is object 1, this is object 2 and this is object 3, and I have taken the photograph and, the way I explained, I carried out that image analysis, and ultimately on the computer screen supposing that I am getting this type of image; for example, black objects on a white background sort of thing. So, I am getting 3 objects on this particular computer screen. Now, how can I identify that this is object 1, this is object 2, this is object 3? Now, to identify that, what we do is, for example, with the help of our eyes, whenever we see the picture of the environment or the surroundings, we can very easily identify that this is a chair, this is a table, this is a human being and so on. So, immediately, within a fraction of a second, there is a lot of processing in the brain and due to this particular processing, we are able to identify these particular objects. Now, how can a computer or how can one robot identify that this is object 1, this is object 2 and this is object 3? Now, the method I have already explained and this type of objects we are getting. Now, actually what we do is, we try to calculate one parameter that is called compactness, and this particular compactness is nothing but perimeter square divided by the area.
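A minimal sketch of this compactness measure and of the matching step described next. The stored shapes and their values are illustrative assumptions; note that compactness is dimensionless, so it does not depend on the size of the object, only on its shape:

```python
import math

def compactness(perimeter, area):
    # perimeter^2 / area -- dimensionless, so invariant to the object's size
    return perimeter ** 2 / area

# Hypothetical stored values for known shapes (the C_1, C_2, C_3 of the text):
known = {
    "circle": compactness(2 * math.pi * 1.0, math.pi * 1.0 ** 2),  # 4*pi ~ 12.57
    "square": compactness(4 * 1.0, 1.0 ** 2),                      # 16
    "2:1 rectangle": compactness(6 * 1.0, 2 * 1.0 ** 2),           # 18
}

def identify(perimeter, area):
    """Match the measured compactness against the stored known values."""
    c = compactness(perimeter, area)
    return min(known, key=lambda name: abs(known[name] - c))

# A radius-3 circle still matches "circle", since compactness ignores scale.
print(identify(2 * math.pi * 3.0, math.pi * 9.0))   # → circle
```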
So, for this particular object, we try to find out perimeter square by area, that is nothing but the compactness. For example, if I have got a chair, if I have got a table, if I have got a human being; in our brain, actually, this compactness, this particular information is already stored. And, that is why, very quickly, within a fraction of a second, we can identify that this is a chair, this is the table and so on. So, all such information has been stored in our brain. Now, here, actually, for this artificial image processing or the artificial computer vision or the robot vision, what we do is, for each of these particular objects, we predetermine what their compactness values are. Now, on the screen, we are getting the background and the object. So, approximately, I can find out what is the perimeter of this, what is the approximate area, and I can find out perimeter square by area. So, I can find out the compactness; supposing that for this particular object 1, the compactness is C_1, for object 2 the compactness is C_2 and for object 3 it is C_3. So, this we can calculate from this particular computer screen and we try to match with the known values of compactness of object 1, object 2, and object 3. And, then, we try to recognize and interpret; we identify that this is object 1, this is object 2, this is object 3. So, this is the way, actually, one computer or a robot can identify and interpret the different objects. Now, if the robot wants to do some sort of manipulation task, it will have to identify, it will have to interpret the objects. And, the way we human beings carry out our vision, exactly in the same way, we try to copy it in the artificial way, in computer vision or the robot vision, just to collect information of this particular environment. Now, once again, if I just summarize a little bit, what do this computer vision or the robot vision do? The first thing we do is, we try to capture the image.
So, image capturing is the first stage. So, we try to capture this particular image with the help of a camera, and once I have got this particular image, what we do is, we do some sort of sampling, and for this particular sampling, we take the electron beam scanner. And, we generally go for analog to digital conversion and that is nothing but the digitizing. So, we generally go for the digitizing, and once you have done it, we go for the frame grabbing. Once I have got this, we go for the preprocessing. And, once the data have been preprocessed, then we go for some sort of thresholding. Now, once you have done this thresholding, then we go for some sort of edge detection, and after the edges have been detected, we generally go for object identification. So, these are actually the steps for the computer vision or the robot vision; exactly the same thing we human beings do. And, in computer vision or the robot vision, we try to copy everything in the artificial way, so that we can collect the information of the environment. The robot can collect the information of the environment with the help of a camera and we will have to make this particular process very fast, so that within a fraction of a second, we get the information of this particular environment. Thank you.
Robotics_by_Prof_D_K_Pratihar
Lecture_17_Robot_Kinematics_Contd.txt
Now, we are going to discuss how to carry out the analysis related to inverse kinematics of this particular 2 degrees of freedom serial manipulator. Now, in inverse kinematics, we try to find out the joint angles; that means, the purpose is to determine the joint angles theta_1 and theta_2, provided the position and orientation of the end-effector with respect to the base coordinate frame and the lengths of the links are known. Now, this particular matrix, that is, T_2 with respect to the base, carries information of the position and orientation of the end-effector and this particular matrix is known to us. And, our aim is to determine the values for these particular joint angles: theta_1 and theta_2; that is the purpose of inverse kinematics. Now, let us see how to carry out this inverse kinematics, how to find out the solution. Now, here, if I just compare this particular q_x and q_y, that is, the coordinates of the position of the end-effector, with the position terms, we get that q_x is equal to L_1 cos theta_1 plus L_2 cos of theta_1 plus theta_2; q_y is equal to L_1 sin theta_1 plus L_2 sin of theta_1 plus theta_2. So, we have got two equations. So, there are two equations and there are two unknowns, that is, theta_1 and theta_2. So, we can solve for these two unknowns, theta_1 and theta_2. Now, let us see how to solve it; this is a very simple set of equations and, as I told, there are two unknowns and two equations. So, what we can do is: we can square and add equations (1) and (2). So, by squaring and adding equations (1) and (2), we get that q_x square plus q_y square is nothing but L_1 square plus L_2 square plus 2 L_1 L_2 C_12 C_1 plus 2 L_1 L_2 S_12 S_1. So, this particular expression we will be getting here.
So, this expression can be further simplified and we can write that q_x square plus q_y square minus L_1 square minus L_2 square is nothing but 2 L_1 L_2 times cos of theta_1 plus theta_2 cos theta_1 plus sine of theta_1 plus theta_2 sine theta_1. Now, this is cos of theta_1 plus theta_2 minus theta_1; so, we will be getting cos of theta_2. Now, from here, we will be getting this particular expression, that is, C_2 is nothing but q_x square plus q_y square minus L_1 square minus L_2 square divided by 2 L_1 L_2. Now, once you have got this particular cos theta_2, very easily you can find out what is theta_2. So, theta_2 is nothing but cos inverse of q_x square plus q_y square minus L_1 square minus L_2 square, divided by 2 L_1 L_2. So, we will be getting two values for this particular theta_2. So, theta_2 is known; now, we will have to determine what is theta_1. Now, to determine this particular theta_1, what we do is: we try to concentrate on equation (1). So, from equation (1), actually, we can write this q_x as nothing but L_1 cos theta_1 plus L_2 cos of theta_1 plus theta_2. So, cos of theta_1 plus theta_2 can be written as cos theta_1 cos theta_2 minus sine theta_1 sine theta_2; so, q_x becomes L_1 cos theta_1 plus L_2 cos theta_1 cos theta_2 minus L_2 sine theta_1 sine theta_2, because C_12 is nothing but cos theta_1 cos theta_2 minus sine theta_1 sine theta_2. Now, this particular expression can be rearranged: we can take this cos theta_1 as common and within the first bracket we can write down L_1 plus L_2 C_2, and here I can take S_1 as common and within the bracket I can write down L_2 S_2. Now, here, if you see, this L_1 plus L_2 C_2: C_2 we have already determined, theta_2 is known, so cos theta_2 is known; then, L_1 and L_2 are the lengths of the links, so L_1 and L_2 are known. So, this particular expression is actually known.
So, this is known and your cos theta_1 is unknown. Similarly, L_2 is known, sine theta_2 is known; so, this part is known. Now, for this known part, actually, we can assume that L_1 plus L_2 cos theta_2 is nothing but rho sine psi, and we can take this L_2 sine theta_2 as nothing but rho cos psi. So, if I just assume like that, this known part is nothing but rho sine psi and L_2 S_2 is nothing but rho cos psi. So, very easily, we can write down that q_x is nothing but cos theta_1 multiplied by rho sine psi, then comes minus sine theta_1 multiplied by rho cos psi, and from here, I can take rho as common, and sine psi cos theta_1 minus cos psi sine theta_1 is nothing but sine of psi minus theta_1. Now, here, this q_x can be obtained as rho sine of psi minus theta_1. Now, from here, actually, we can also find out what is rho. Now, rho is nothing but the square root of L_1 plus L_2 C_2 whole square plus L_2 S_2 whole square. So, this is nothing but rho, and we can also find out what is psi. Psi is nothing but tan inverse of L_1 plus L_2 cos theta_2 divided by L_2 sine theta_2. So, we can find out rho, we can find out psi. So, here rho is known, here psi is known; so, the only unknown is theta_1. So, from here, directly you can find it out by sine inverse, or what we can do is, we take the help of another equation, that is, the q_y equation. Now, this q_y, by following the same procedure, can be written as rho cos of psi minus theta_1, and we have q_x equal to rho sine of psi minus theta_1, that we have already seen. So, from here, we can find out that q_x by q_y is nothing but tan of psi minus theta_1, with rho not equal to 0. So, from here, we can find out that psi minus theta_1 is nothing but tan inverse of q_x by q_y. So, from tan inverse of q_x by q_y.
We can find out theta_1: this particular theta_1 is nothing but psi minus tan inverse of q_x by q_y. So, we can find out this particular theta_1. Now, once again, we will be getting two values for this particular theta_1; so, for this problem of the 2 degrees of freedom serial manipulator, there are two sets of theta values we are getting: one is called the theta_1, theta_2 right hand solution, another is called the theta_1, theta_2 left hand solution. Now, if we consider that this particular manipulator will have to reach this particular point, whose coordinates are q_x, q_y, this is one configuration with the help of which it can reach this particular point, and here, this is the theta_1, theta_2 right hand solution. Now, another solution could be another configuration like this. So, this is L_1, this is L_2, where theta_1 is nothing but this. So, this is my theta_1 and theta_2 will be clockwise; so, theta_2 will be negative. So, we will be getting two sets of theta_1 and theta_2 values; that means, for this particular serial manipulator having 2 degrees of freedom, there are two solutions for this inverse kinematics. So, one is called the left hand solution, another is called the right hand solution. So, this is the way, actually, we can carry out the inverse kinematics and forward kinematics for the serial manipulator. Now, I am just going to take the example of a more complex manipulator and this particular manipulator is having actually 5 degrees of freedom and this is a spatial manipulator. So, a spatial manipulator with 5 degrees of freedom; this is nothing but an under-actuated manipulator, as we discussed. So, this is a manipulator, the name of this particular manipulator is MINIMOVER and it is having 5 degrees of freedom. Now, here, if I just try to understand the nature of the joints, we can see that.
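Before moving on to the 5 degrees of freedom example, the two-link derivation above can be tied together in a small Python sketch. It uses the atan2 form of the solution, which is equivalent to the psi-based expression derived above but handles the quadrants automatically; the link lengths and the target point are made-up values for illustration:

```python
import math

def fk(theta1, theta2, L1, L2):
    """Forward kinematics of the planar 2R arm (equations (1) and (2))."""
    qx = L1 * math.cos(theta1) + L2 * math.cos(theta1 + theta2)
    qy = L1 * math.sin(theta1) + L2 * math.sin(theta1 + theta2)
    return qx, qy

def ik(qx, qy, L1, L2):
    """Both inverse-kinematics solutions (the right- and left-hand ones)."""
    c2 = (qx**2 + qy**2 - L1**2 - L2**2) / (2 * L1 * L2)
    solutions = []
    for theta2 in (math.acos(c2), -math.acos(c2)):   # the two values of theta_2
        # quadrant-correct form equivalent to theta_1 = psi - atan(q_x / q_y)
        theta1 = math.atan2(qy, qx) - math.atan2(L2 * math.sin(theta2),
                                                 L1 + L2 * math.cos(theta2))
        solutions.append((theta1, theta2))
    return solutions

L1, L2 = 2.0, 1.5                     # hypothetical link lengths
target = fk(0.4, 0.9, L1, L2)         # a reachable target point
for theta1, theta2 in ik(*target, L1, L2):
    # each solution must map back to the same end-effector position
    print(theta1, theta2, fk(theta1, theta2, L1, L2))
```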
So, this is nothing but the fixed base and with respect to the fixed base, here we have got one joint; now, this particular joint is actually the twisting joint. Then we have got another joint here; so, this is nothing but a revolute joint. The third joint is here, this is once again a revolute joint; the fourth joint is here, this is once again a revolute joint; and we have got another joint here, the twisting joint, and this is nothing but the fifth joint. So, this particular robot, as we discussed, is known as a T-R-R-R-T manipulator. So, each of these particular rotary joints is having one degree of freedom and this is a serial manipulator; thus, it is having 5 degrees of freedom. Now, let us try to draw the kinematic diagram of this particular manipulator. We start with the fixed base. So, this is the fixed base; the first joint is the twisting joint. So, this is the symbol for the twisting joint; so, let us draw this twisting joint. The second joint is the revolute joint. So, this is actually the revolute joint; the third joint is once again a revolute joint, the fourth joint is once again a revolute joint and the fifth joint is nothing but a twisting joint. So, this is nothing but a twisting joint and this is the symbol for the gripper. So, we have got twisting joint, revolute joint, revolute joint, revolute joint and this is the twisting joint. So, this is nothing but the kinematic diagram of this serial manipulator having 5 degrees of freedom. Now, here, actually, what we do is, we try to assign the coordinate system at the different joints according to the D-H parameters’ setting rule. Now, as I told, we will have to see the reference coordinate system, at first. So, if you see the reference coordinate system, this is X, Y and Z, and Z × X is nothing but Y. Now, if you see the first joint, the first joint is nothing but a twisting joint and for this twisting joint, Z is what?
So, this is my Z direction, and that is why, we have considered. So, this is your Z_0 and this X_0, I have consider along this particular direction, in the direction of this reference X reference coordinate X and Y_0 is in this particular the direction. So, this indicates actually the coordinate system at the twisting joint and if you see the second joint is actually a revolute joint and this particular Z and that particular Z are going to intersect and that is why, actually, we have written. So, as if this particular and that particular Zs are intersecting, and this is the Z about which we take the rotation for this revolute joint. Now, if this is Z and they are intersecting, now according to the D-H parameters’ setting rule, X could be either this one or it could be this one; that means Z_1 is selected, now X could be either this direction or that particular direction. So, what we do is, we will have to select X in such a way, that we can show the length of the link because the next joint if you see is once again a revolute joint and the Z here and the Z here are parallel, and if they are parallel, their mutual perpendicular distance is going to be the direction of X and that is nothing but the length of the link and that is why, we have considered, this is your Z_1 and this we have selected as X_1. Now, if this is selected as X_1. So, Z × X is nothing but Y_1. So, this is your Y_1, ok? The third joint is once again a revolute joint. So, its Z will be parallel to Z_1. So, this is your Z_2 and if you see this particular the manipulator; actually the fourth joint is once again a revolute joint. So, here, this particular Z and that particular Z should be parallel and that is why. So, if this Z and that particular Z are parallel. So, the mutual perpendicular distance should be the X and that is why, we have considered this, as X_2. So, these are Z_2 and X_2 and Z × X is nothing but Y_2, as I discussed. So, here, this is once again a revolute joint. 
So, I have taken Z_3 along this, X_3 along this; now, Z × X is nothing but Y_3. So, Z_3, X_3 and Y_3 we can find out. Now, if you see the next joint, the next joint is nothing but a twisting joint, and the Z here for the fourth joint and the Z for this particular fifth joint, that is the twisting joint, are going to intersect. Now, if the two Zs are intersecting, I have got both the options. So, Z is selected because this is the twisting joint; so, Z_4 is selected here. So, this is my Z_4. Now, regarding the X_4: as this particular Z_3 and Z_4 are intersecting, X_4 could be either this particular direction or that particular direction. So, any one can be taken as X_4, and here, I have taken this as X_4, and if this is taken as X_4, then Z × X will be your Y_4. So, this is your Y_4. And, as I told, the same coordinate system will be copied at the last: that is my Z_5, this is my X_5 and this is Y_5. Now, this completes actually the assignment of these particular coordinate systems. Now, here, one thing I just want to mention: this is not the only way of showing this type of coordinate systems at the different joints; in some of the textbooks, they follow a slightly different method, and that method is something like this. So, at each of the rotary joints, there will be some rotation. So, what they want to do is, they show the rotation even while showing that particular coordinate system. For example, here, there will be some rotation; now, if I show that particular rotation, this particular X_0, Y_0, Z_0 will be rotated, and here, once again, there is some rotation, and if I once again show that particular rotation, then it will be further rotated. So, each of these particular coordinate systems will be rotated on this particular figure, and then it becomes difficult to visualize, actually, the coordinate systems assigned at the different joints.
That is why, actually, this is the better way of representing, where we show the coordinate system at each of these particular joints. So, this particular reference coordinate system is followed at each of these particular joint and wherever we have some rotation. So, that particular rotation we are going to show while preparing the D-H parameters’ table. So, in the D-H parameters’ table, we show all such rotations, now let us see how to fill up; this table, the D-H parameters’ table. Now, to fill up, actually, what we do is: first, we concentrate on the frame_1 and as I told that while preparing this particular table, we will have to follow the screw Z and screw X; that means, I will have to write theta_i first, then d_i, then alpha_i after that a_i. So, what I do is: the first joint that is this particular twisting joint is a rotary joint. So, definitely the variable will be the joint variable and this particular joint variable, I have considered as theta_1. Now, here, I just want to say that here I can write down theta_1 or if you just draw this particular X_0 here, if I draw the X_0 here, let me just draw it X_0 here. So, if I draw the X_0 here, the angle between X_0 and X_1 is already 90 degree. So, what you can do is in place of theta_1 you can also write down 90 plus theta_1. Now, if I write 90 plus theta_1, then this particular theta_1 could be the acute angle, but here for simplicity, whatever I am doing, I am writing the whole thing as theta_1; that means, this particular theta_1 will be more than 90 degrees. So, what in place of this 90 plus theta_1, for simplicity, I am writing this as theta_1 so, that theta_1 is actually your obtuse angle. Now, next is your how to find out this particular d. According to the definition of d, d is the offset and that is nothing but the distance between two X, the distance between two X measured along Z. Now, these two are coinciding they are in fact, intersecting. 
So, if they are intersecting then the distance between X_0 and X_1 is nothing but 0, then we try to find out alpha. So, by definition α is nothing but the angle between two Zs measured about X. So, if I draw this particular Z_1, this is my Z_1; so Z_0 to Z 1. So, I will have to move in the clockwise direction by 90 degrees, clockwise we have considered negative, so, this is nothing but negative 90 degrees, then comes a_i. a_i is the distance between two Zs, if two Zs are parallel, we try to find out the mutual perpendicular distance. Now, they are intersecting. So, the distance between Z_0 and Z_1, that is nothing but 0. So, I can fill up actually all the entries corresponding to frame_1 now, we can concentrate on this particular 2. Now 2 means what? 2 means 2 with respect to 1, now here. So, once again the joint is nothing but a revolute joint. So, the variable is theta_2 and d is the distance between two Xs. So, X_1 and X_2 are on the same line. So, the distance is 0, then comes here, alpha Z_1 and Z_2 are parallel. So, the angle between them is 0, then comes here a. So, if these two Zs are parallel, Z_1 and Z_2. So, X is along the mutual perpendicular distance and this is actually L_1. So, L_1 is nothing but the length of the link. So, this is your L_1. Now, 3 means 3 with respect to 2 now, here. So, this particular joint is a revolute joint. So, the variable is theta_3, then comes d, which is the distance between two Xs. So, X_2 and X_3 are on the same line. So, this is 0, then comes your alpha is the angle between two Zs, Z_2 and Z_3 are parallel. So, this is 0, now then comes here this “a”, that is, Z_2 and Z_3 are parallel and this is the mutual perpendicular distance. So, the length of the link is L_2, then comes 4, that is, 4 with respect to 3. So, once again, this particular joint is a revolute joint. So, the variable is theta_4. The next is d, d is the distance between the two Xs. 
Now, here, you can see, if I extend this particular X_3, it is going to intersect X_4. So, the distance between them is equal to 0. Next is alpha, that is, the angle between Z_3 and Z_4. Now, here, if I just draw this particular Z_4, this is my Z_4 and, if I want to move from Z_3 to Z_4, I will have to move in the anticlockwise direction by 90 degrees. So, this is nothing but positive 90. Then comes your a. a is the distance between the two Zs and the two Zs are actually intersecting and coinciding. So, this particular distance is nothing but 0. And, then comes here 5. So, 5 with respect to 4, and this is a twisting joint. So, the joint angle is nothing but your theta_5 and the other entries will be equal to 0. So, this is the way, actually, we will have to prepare the D-H parameters’ table, that is, the Denavit-Hartenberg parameters’ table. And, once you have got this particular table, carrying out the forward kinematics becomes very easy. Thank you.
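The forward kinematics that the completed D-H table makes "very easy" can be sketched in code. Following the screw-Z-then-screw-X ordering used above, each row (theta_i, d_i, alpha_i, a_i) gives one homogeneous transform, and chaining the five transforms gives the end-effector pose. The zero joint angles and the link lengths L_1 = L_2 = 1 below are illustrative assumptions, not values from the lecture.

```python
import math

def dh_transform(theta, d, alpha, a):
    """One D-H transform: screw about Z (theta, d), then screw about X (alpha, a)."""
    ct, st = math.cos(theta), math.sin(theta)
    ca, sa = math.cos(alpha), math.sin(alpha)
    return [
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def forward_kinematics(rows):
    T = [[float(i == j) for j in range(4)] for i in range(4)]  # identity
    for row in rows:
        T = matmul(T, dh_transform(*row))
    return T

# The table prepared above: (theta_i, d_i, alpha_i, a_i) for frames 1..5,
# with illustrative values theta_i = 0 and L_1 = L_2 = 1.
L1, L2 = 1.0, 1.0
table = [
    (0.0, 0.0, -math.pi / 2, 0.0),   # frame 1: twisting joint, alpha = -90 deg
    (0.0, 0.0,  0.0,         L1),    # frame 2: revolute, a = L_1
    (0.0, 0.0,  0.0,         L2),    # frame 3: revolute, a = L_2
    (0.0, 0.0,  math.pi / 2, 0.0),   # frame 4: revolute, alpha = +90 deg
    (0.0, 0.0,  0.0,         0.0),   # frame 5: twisting joint
]
T = forward_kinematics(table)        # last column holds the tip position
```

With all joint angles zero, the tip simply lies at distance L_1 + L_2 along the base x-axis, which is a quick sanity check on the table.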
Robotics_by_Prof_D_K_Pratihar
Lecture_40_Robot_Motion_Planning_Contd.txt
Now, I am going to discuss the principle of another very popular motion planning approach, which is known as the reactive control strategy. Now, this reactive control strategy was proposed in the year 1986 by Brooks. Now, here, a robotic action is divided into a large number of independent primitive or basic behaviours. For example, the robotic action in a soccer-playing robot is to score a goal. Now, to score a goal, this particular robotic action is divided into a large number of basic behaviours, and each of these basic behaviours is controlled at a particular layer of the control architecture. Now, if I just consider a complicated robotic action; now, this complicated robotic action, if I want to control, I will have to take the help of a large number of layers in the control architecture. And, here, there must be one supervisory controller; that means, there must be one main computer, which is going to control the activities of the different layers. And, therefore, to handle a complicated task, we will have to take the help of a large number of layers; the computational complexity is going to increase and, moreover, it requires a large amount of computer memory. Now, this particular control strategy became very famous and a particular scheme in robotics was proposed, which is known as Behaviour-Based Robotics. Now, this behaviour-based robotics reached popularity and many people used it in different forms; they modified it also and they could solve a number of problems related to motion planning. Now, here, this reactive control scheme has got a few drawbacks. For example, say, if I consider a complicated robotic task, as I told, there could be a large number of layers and, consequently, this particular control architecture will become complicated; it requires a large amount of computer memory and this particular process will become slow.
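The layered arrangement described above can be illustrated with a toy subsumption-style controller, in which each basic behaviour occupies one layer and a higher-priority layer suppresses the ones below it. The behaviour names and sensor fields here are invented for illustration; Brooks' architecture is, of course, far richer than this sketch.

```python
# Toy subsumption-style controller for a soccer-playing robot: layers are
# checked from highest priority downward; the first layer that fires
# suppresses all the layers below it.  Sensor fields are assumptions.

def avoid_obstacle(sensors):
    if sensors["obstacle_distance"] < 0.5:
        return "turn_away"
    return None                      # this layer does not fire

def chase_ball(sensors):
    if sensors["ball_visible"]:
        return "move_to_ball"
    return None

def wander(sensors):
    return "wander"                  # lowest layer always fires

LAYERS = [avoid_obstacle, chase_ball, wander]   # highest priority first

def control(sensors):
    for layer in LAYERS:
        action = layer(sensors)
        if action is not None:
            return action

action = control({"obstacle_distance": 0.2, "ball_visible": True})
```

Note how every new capability means another layer, which is exactly why, for a complicated task, the number of layers and the memory requirement grow as the lecture describes.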
There is another drawback: now, supposing that the designer could not foresee a few behaviours while designing that particular controller. Now, if this particular robot is going to face that type of complicated scenario, which was not considered during the design of that particular controller, the robot will not be able to handle that particular complicated situation very efficiently. And, there is a possibility that the robot is going to fail to tackle that type of scenario. Now, these are all drawbacks of this particular reactive strategy. Now, let us see the computational complexity of the various motion planning algorithms used in robotics. So, this particular motion planning is computationally very expensive and, thus, these algorithms could not actually be implemented online. For example, the computational complexity of these motion planning algorithms was studied, in detail, by Canny and Reif. Now, in the year 1987, they studied the computational complexity of the motion planning algorithms and, according to them, the motion planning for a point robot moving among moving obstacles in the 2D plane with bounded velocity is found to be NP-hard; so, this is computationally very expensive. Now, the computational complexity of this particular algorithm is expressed in terms of hardness values like NP-hard, P-hard, then PSPACE-hard and all such things. For example, this hardness is represented as P-hard, then comes your NP-hard, that is, nondeterministic polynomial hardness. And, then comes your PSPACE-hardness, and that is very hard; this particular hardness is the exponential hardness. Now, here, in the 2D plane and for a very simple scenario, like a point robot moving in the presence of some moving obstacles with some bounded velocity, the problem is found to be NP-hard. Now, similarly, a similar study was carried out by Reif and Sharir in 1985.
And, they studied this computational complexity for a point robot in 3D space. Now, this particular motion planning problem without the velocity bound is found to be NP-hard. On the other hand, the motion planning problem of a point robot in 3D space with the velocity bound is found to be PSPACE-hard. And, that is why this type of traditional method for robot motion planning could not be implemented online, because these are all computationally very expensive. Now, let us see the drawbacks of these particular traditional tools for motion planning. Now, we have seen that these particular traditional tools for robot motion planning are computationally very expensive. Thus, we could not implement them online to solve the dynamic motion planning problem. And, moreover, each of these particular traditional tools for motion planning is suitable for solving a particular problem and, that is why, these algorithms are not versatile. So, for different problems, we will have to use different algorithms and, that is why, in fact, we will have to find out some sort of versatile algorithm or robust algorithm, which can be implemented to solve a variety of problems. Then, these particular traditional tools are not equipped with any optimization module. And, that is why, the planned path or the generated path may not be optimal in any sense. Now, these are all actually the drawbacks of the traditional methods of robot motion planning. Now, due to these drawbacks, we could not implement all such traditional tools for robot motion planning in a very efficient way. And, that is why, there is a need for the development of an efficient, robust and computationally tractable algorithm, which can be implemented online to solve this robot motion planning problem in a very efficient way.
Now, to understand the computational complexity of this particular robot motion planning, let me try to take one very simple example. Now, if I take this particular example, we will understand that this particular robot motion planning problem is very complicated. Now, for simplicity, let me just try to consider one path planning problem for a very simple manipulator. Now, supposing that this is the Cartesian coordinate system, like X and Y, and I have got a serial manipulator having, say, 2 degrees of freedom; say, this is the serial manipulator having 2 degrees of freedom. So, this is the length of the first link and this is the length of the second link. So, these are the lengths L_1 and L_2, and supposing that the starting point for this particular manipulator is nothing but S and the goal point of this manipulator, supposing that, is denoted by G. Now, starting from the point S, it will have to reach the goal and, while moving, this particular tip of the manipulator should not collide with some static obstacle. For example, this is a point obstacle; now, similarly, I can also consider another point obstacle here. I can consider there could be a line obstacle here, there could be some sort of triangular obstacle, there could be some sort of circular obstacle here or, say, an elliptical obstacle here. Now, this particular tip of the manipulator will start from the point S and it will reach the point G, that is, the goal. Now, while moving, the tip of the manipulator should not collide with all such static obstacles. Now, how can we ensure this type of movement, the collision-free movement? Now, to solve this, actually, we can do it analytically in a very easy way. For example, say, the moment this particular tip of the manipulator is going to touch the point obstacle, this could be actually the configuration; this is one configuration.
Similarly, there could be another configuration with the help of which the tip of the manipulator can touch this particular point obstacle. And, we can solve for the joint angles the moment it is going to touch this particular point obstacle; I can find out the joint angles, like your theta_1, theta_2, the left-hand solution, and theta_1, theta_2, that is nothing but the right-hand solution. So, corresponding to this particular point, I will have two sets of values for these particular joint angles. Now, if I concentrate on this particular line obstacle, this line obstacle will be nothing but a combination of so many such points. So, we consider a large number of points lying on this particular line obstacle. And, corresponding to each of these particular points, I can find out the two sets of theta values. So, I will be getting a large number of sets of theta values corresponding to this particular line obstacle. Now, if I want to ensure the collision-free path for this particular tip, for this particular triangular obstacle, the tip is going to trace the boundary of the triangle and I can also find out the sets of theta values or the joint angle values. Similarly, the moment it is going to trace the boundary of this particular circular obstacle, I can also find out what should be the sets of theta_1, theta_2 values. And, following the same principle, the moment this particular tip of the manipulator is going to trace the boundary of this particular elliptical obstacle, I can also find out the combinations of these particular theta values. Now, if I want to ensure the collision-free movement of this particular tip, what I will have to do is: I will have to plot these particular theta_1 and theta_2, the joint angles. And, these are in, say, radians; theta_1 and theta_2 are in radians. And, there is a possibility that, for one point, I will be getting two points here.
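The two sets of joint angles (the left-hand and right-hand solutions) for a point touched by the tip can be computed in closed form. The sketch below is the standard planar two-link inverse kinematics, not code from the lecture; the link lengths L_1 = L_2 = 1 and the obstacle point are illustrative assumptions.

```python
import math

def two_link_ik(x, y, L1, L2):
    """Both joint-angle solutions (theta_1, theta_2) placing the tip at (x, y)."""
    c2 = (x * x + y * y - L1 * L1 - L2 * L2) / (2.0 * L1 * L2)
    if abs(c2) > 1.0:
        return []                       # point is out of reach
    solutions = []
    for sign in (+1.0, -1.0):           # the two elbow branches
        theta2 = sign * math.acos(c2)
        theta1 = math.atan2(y, x) - math.atan2(L2 * math.sin(theta2),
                                               L1 + L2 * math.cos(theta2))
        solutions.append((theta1, theta2))
    return solutions

def tip_position(theta1, theta2, L1, L2):
    """Forward kinematics of the same planar two-link arm, for checking."""
    x = L1 * math.cos(theta1) + L2 * math.cos(theta1 + theta2)
    y = L1 * math.sin(theta1) + L2 * math.sin(theta1 + theta2)
    return x, y

# Illustrative point obstacle at (1, 1) with L1 = L2 = 1:
branches = two_link_ik(1.0, 1.0, 1.0, 1.0)   # two (theta_1, theta_2) pairs
```

Sweeping such points along each obstacle boundary and collecting the (theta_1, theta_2) pairs traces out exactly the forbidden curves in the joint space that the lecture describes.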
And, for this particular straight-line obstacle, I will be getting one curved line here and there is a possibility I will be getting another curved line here. Then, corresponding to this particular triangular obstacle, there is a possibility I will be getting one curved line, another curved line, another curved line. Similarly, here, I will be getting one curved line, another curved line, another curved line. Then, corresponding to this particular circular obstacle, there is a possibility I will be getting one elliptical curve and, here, I will be getting another elliptical curve. And, corresponding to this particular elliptical obstacle, I will be getting one distorted-ellipse sort of thing and, might be, another distorted-ellipse sort of thing. Now, on this particular theta space, these are nothing but the forbidden zones; that means, if I want to ensure the collision-free path for the tip of this particular manipulator, what I will have to do is: I will have to select theta_1 and theta_2 in such a way that your theta_1, theta_2 should not lie in these particular forbidden zones. So, these are all forbidden zones; this is nothing but a forbidden point, another forbidden point; this is nothing but a forbidden curve, another forbidden curve, ok. So, this is the way, actually, we can ensure the collision-free movement of this particular tip of the manipulator. Now, this is a very simple motion planning problem or path planning problem, because, here, we have considered that these particular obstacles, that is, your obstacle 1, then comes obstacle 2, obstacle 3, then comes obstacle 4 and obstacle 5, are all stationary obstacles and this is in 2D. Now, if I consider that these particular obstacles are moving, then how to ensure the collision-free path for this particular tip of the manipulator?
Now, this problem will become more complicated, because the positions of these particular obstacles are going to change with time. So, might be, at time t equals t_1, I will be getting this scenario; this is actually the infeasible region at time t equals t_1. Now, if it is a problem of dynamic motion planning, there is a possibility that, at time t equals t_2, this particular combination of feasible and infeasible zones is going to vary. And, similarly, at time t equals t_3, I will be getting another combination of these particular feasible and infeasible zones. And, supposing that this is nothing but the total area; now, this white portion is nothing but the feasible zone. And, at each instant, actually, what I will have to do is: I will have to find out the feasible and infeasible zones. Now, once again, let me consider the same example here. So, if I consider x and y, this is nothing but the robot, this is L_1 and L_2, and we consider some static obstacles, like a point obstacle, a line obstacle, a triangular obstacle, a circular obstacle, an elliptical obstacle, something like this, and these obstacles are all stationary. Now, next, consider that these obstacles are moving in 2D; the obstacles are moving. So, as I mentioned, at time t equals t_1, at time t equals t_2, at time t equals t_3, the scenario, that is, the feasible and infeasible scenario, is going to vary. The problem becomes much more difficult; now, let me just make it more complex. Now, supposing that I am just going to add another dimension to these particular obstacles; that means, your obstacles will become 3D, and consider that these particular obstacles are moving. And, consider a manipulator having, say, 6 degrees of freedom.
So, if I consider that my hand is a manipulator, the serial manipulator, and, starting from a particular point, the tip of this particular finger, I just want to move to this particular goal point; and, while moving from here to this particular point, supposing that the robot has started moving. And, while moving, now, there are some moving obstacles, which are going to come in between. And, this particular tip of the manipulator will have to find out the collision-free path online. And, this particular problem becomes very difficult to handle, because, here, the motion planning algorithm will have to take the decision within a fraction of a second. And, to solve this type of problem, actually, the traditional motion planning algorithms are going to face a lot of problems. Because these algorithms are computationally expensive, we cannot take the decision online within a fraction of a second, and there is a chance that the tip of the manipulator is going to collide with the moving obstacle. Particularly, whenever it is working in the 3D space, and whenever we are working with the robot with 6 degrees of freedom, and if it is working in the 3D space with the moving obstacles, it becomes difficult to find out the collision-free movement for the tip of this particular manipulator. So, these particular motion planning problems are very complex and finding the online solution is very difficult. And, that is why, nowadays, there is a trend to replace this particular behaviour-based robotics, the principle of behaviour-based robotics, which I have already discussed using the reactive control scheme. Now, this particular behaviour-based robotics, actually, will not be able to solve this type of difficult situation in a very efficient way. And, that is why, nowadays, there is a trend that, in place of this particular behaviour-based robotics, we go to another motion planning approach and that is called, actually, your evolutionary robotics.
Now, in this particular evolutionary robotics, we take the help of the evolutionary principle. So, I am just going to tell you, in short, the principle of this particular evolutionary robotics, but I am not going to discuss, in detail, the principle of evolutionary robotics, because this is not there within the scope of this particular course. But, I am just going to tell you the philosophy behind this particular evolutionary robotics, and this particular evolutionary robotics could be a possible solution to solve this type of very complicated problem, particularly the motion planning in 3D space for a manipulator having 6 degrees of freedom. And, if we consider the moving obstacles, there is a possibility that this principle of evolutionary robotics is going to help us to find out a feasible solution. Now, let us see how it works. Now, here, in this evolutionary robotics, actually, what we do is: we use the principle of biological adaptation. So, in this biological adaptation, if you see, there are nothing but two principles: the evolution and the learning. So, this evolution and learning are going to help in biological adaptation. Now, if you see this particular evolution and learning, these operators are working on two different time scales. For example, say, evolution works through a large number of generations. On the other hand, learning takes place in one's lifetime, and these two operators are going to help each other. For example, say, if you see the principle of learning: now, while learning, we use the principle of optimization. And, most of the optimization tools actually work using the principle of evolution. So, this particular principle of learning, or the principle of optimization, works through a large number of evaluations. And, if I can learn some good things throughout my life, I am just going to pass this particular good information to my next generation.
And, there is a possibility that, due to this particular good information, the rate of evolution is going to increase. So, the evolution is going to help this particular learning and the learning is going to help this particular evolution. So, they are going to help each other and there is a possibility that it is going to increase the rate of this biological adaptation. Now, this particular principle of biological adaptation has been copied in evolutionary robotics. Now, here, in evolutionary robotics, actually, what we do is: we try to design and develop some motion planning algorithms using the principle of evolution and that of learning. So, what we do is: we try to use some evolutionary tool, like some sort of biologically inspired optimization tool; for example, it could be a genetic algorithm, particle swarm optimization, and so on. And, we use some learning tool, like some sort of neural network, some sort of fuzzy reasoning tool, and we try to evolve a more efficient motion planning algorithm for these particular robots. And, that is actually the principle of this particular evolutionary robotics. So, evolutionary robotics is going to solve this particular motion planning problem in a very efficient way. And, nowadays, actually, there are many such applications to solve the motion planning problem using the principle of this evolutionary robotics. Now, as I told, this particular principle of evolutionary robotics is beyond the scope of this particular course. So, I am not going to discuss in more detail the principle of this particular evolutionary robotics but, as I told, this is one of the possible ways to solve this particular motion planning problem of the robots, particularly for the mobile robots, in a very efficient way. Thank you.
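As a rough illustration of the evolutionary side of this idea, the sketch below evolves a fixed-length sequence of waypoints between a start and a goal with a minimal genetic algorithm, penalizing waypoints that fall inside a circular obstacle. All the numbers (population size, mutation step, obstacle position) are illustrative assumptions, not values from the lecture, and a real evolutionary-robotics planner would of course be far more elaborate.

```python
import math
import random

random.seed(0)

START, GOAL = (0.0, 0.0), (10.0, 0.0)
OBSTACLE, RADIUS = (5.0, 0.0), 1.5        # one circular obstacle (assumed)
N_WAYPOINTS, POP, GENS = 4, 30, 60

def cost(waypoints):
    """Path length plus a heavy penalty for waypoints inside the obstacle."""
    pts = [START] + waypoints + [GOAL]
    length = sum(math.dist(a, b) for a, b in zip(pts, pts[1:]))
    penalty = sum(100.0 for p in waypoints
                  if math.dist(p, OBSTACLE) < RADIUS)
    return length + penalty

def random_path():
    return [(random.uniform(0, 10), random.uniform(-5, 5))
            for _ in range(N_WAYPOINTS)]

def mutate(path):
    return [(x + random.gauss(0, 0.5), y + random.gauss(0, 0.5))
            for x, y in path]

population = [random_path() for _ in range(POP)]
for _ in range(GENS):
    population.sort(key=cost)
    survivors = population[:POP // 2]              # truncation selection
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(POP - len(survivors))]

best = min(population, key=cost)
```

Keeping the sorted top half unchanged gives simple elitism, so the best path never gets worse from one generation to the next; the learning half of the lecture's evolution-plus-learning pairing would be layered on top of a loop like this.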
Lecture_42_Biped_Walking.txt
Now, I am going to discuss a new topic and that is Biped Walking. Now, before I start, let me define what we mean by this biped robot. Now, this biped robot is actually a simpler version of this particular humanoid robot. So, the humanoid robot is very much complicated and this biped robot is actually the simpler version of that particular humanoid robot. Now, a biped robot should be able to walk on a plane surface; it should be able to negotiate the staircases, take turns, cross ditches as the situation demands. Now, while walking, this particular biped robot should be able to maintain its balance and that balance is nothing but the dynamic balance. Now, I am just going to discuss, in detail, how it can maintain the dynamic balance. So, here, I am just going to discuss the walking cycle of a biped robot. Now, if you concentrate on this particular figure, we can see that these LF and RF are nothing but the left foot and the right foot, and these two feet are the ground feet. Now, let me assume this particular rectangular box indicates the ground. So, both the feet are placed on the ground and this is nothing but a double support phase. Now, after this double support phase, what happens? The right foot will remain at the same position and the left foot will be taken away from the ground; now, it is in air. So, here, in this particular configuration, that is, the single support phase configuration, the right foot is on the ground and the left foot is in air, and this is a single support phase. Now, after that, there will be another double support phase. Now, here, this particular right foot is already there on the ground and the left foot will be placed on the ground. So, here, both the feet are on the ground and this is nothing but the configuration of the double support phase.
And, after that, this particular left foot will be on the ground and the right foot will be put in air, and this is, once again, a single support phase. Now, starting from this particular double support phase, there will be one single support phase, then a double support phase and a single support phase; that completes actually one walking cycle. So, one walking cycle consists of 2 single support phases and 2 such double support phases. Now, while walking, this particular biped robot should be able to maintain the dynamic balance during its single support phases as well as the double support phases. Now, here, I am just going to discuss how to maintain that particular dynamic balance. Now, before that, let me define what we mean by gait; the term gait is very frequently used in biped walking. Now, by gait, we mean the sequence of leg movements in coordination with the body movement, which is required for walking of that particular biped robot. Now, while walking, this particular biped robot should be able to consume the minimum amount of power but, at the same time, it should have the maximum dynamic balance margin. So, I am just going to discuss, in brief, how to determine this particular power consumption during walking and how to maintain this particular dynamic balance. Now, here, actually, what am I going to do? I am just going to discuss, in brief, just to make it simple, but the exact derivation or the detailed derivation, if you want to have a look, you will have to concentrate on the textbook, that is, the Fundamentals of Robotics written by me. So, all such things are dealt with there in much more detail but, here, as I told, for simplicity, I am just going to discuss, in brief, how to determine the power rating for this particular biped robot and how to determine the balance margin.
Now, here, I am just going to concentrate first on this single support phase and, here, for simplicity, I am just going to concentrate on a particular task, that is nothing but the ascending of a staircase. So, I am just going to discuss staircase ascending, and that too for the single support phase. So, let us see what happens during the single support phase whenever this particular biped robot is going to ascend the staircase. Now, here, on this particular figure, these are nothing but the steps of the staircase. Now, here, this S_w, that particular symbol, indicates the width of the staircase and S_h is nothing but the height of this particular staircase. And, here, this is the single support phase; so, only one foot will be on the ground and the other foot will be in the air. Now, here, out of these two feet, this particular foot is on the ground and this particular foot is in air, because this is a single support phase. And, this indicates actually the trajectory of this particular swing foot, that is, the foot which is in the air. So, during this particular walking through the staircase, this is the locus of the swing foot; so, this is nothing but the swing foot trajectory. Now, this particular swing foot trajectory, I am just going to represent with the help of some mathematical expression and we will derive that particular mathematical expression. Now, before that, let me tell you that this indicates, say, one foot, this is another foot, this is one link, this is another link, another link, another link, another link and this is the foot. So, let me write here: the length of this particular foot is denoted by, say, L_1 and its mass is denoted by m_1. Similarly, for this particular link, supposing that the length is your L_2 and the mass is m_2, and this particular mass is concentrated at this particular point.
Similarly, for this particular link, the link length is L_3 and the mass is m_3 and, say, m_3 is concentrated here at this particular point. Similarly, for this particular link, the length is L_4 and the mass is m_4. Now, here, the length is L_5 and the mass is m_5; then, here, the length is L_6 and the mass is your m_6. And, for this particular foot, the length is L_7 and the mass is nothing but m_7. So, here, there are 7 links, say L_1, L_2, L_3, L_4, L_5, L_6 and L_7 and, here, I am just going to consider, for simplicity, only 7 degrees of freedom. So, here, I am going to consider a biped robot having 7 degrees of freedom and the joint angles. So, all the joints are actually rotary joints and the joint angles are denoted by, say, theta_1. Here, theta_1 is equal to 0; then comes here theta_2, the second joint angle. So, this is also theta_2; then comes your theta_3, so this is also theta_3, and the joint angle theta_4. Similarly, the joint angle theta_5, then comes theta_6 and theta_7 and, for simplicity, we have assumed that theta_1 is equal to 0 and theta_7 is equal to 0. And, this particular joint is actually the hip joint and, here, we consider that this joint, that particular joint and this particular joint, all 3 joints, are coinciding. So, this is actually the ankle joint, this is the knee joint and this is the hip joint; similarly, on the other leg, this is nothing but the knee joint and this is the ankle joint. Now, here, let us see how to determine the power consumption, if this particular robot is planning to negotiate the staircase in this particular direction. And, here, for simplicity, we are going to consider the movement only along the sagittal plane; that means, the sideways movement we are not going to consider, for simplicity. Now, let us see how to carry out this particular analysis but, as I told, I am not going to discuss in detail the mathematical derivation, which is available in the textbook of this particular course.
Now, here, the step length that is denoted by l is nothing, but 2 s_w plus x_3 minus x_1. So, here, actually the step length, this is nothing, but the distance between this particular point and this particular point. So, this is actually the step length, that is l and this l is nothing, but 2 s_w minus x_1 plus x_3; so, this is your x_3. So x_3 plus s_w plus s_w minus this particular x_1, that is from here to here; so, from here to here. So, this is nothing, but the step length, that is nothing, but l. Similarly, the height of this particular hip, that is denoted by h is nothing, but L_2 cos theta_2 plus L_3 cos theta_3. Now, this is your L_2; the length of this particular the link; this angle is theta_2. So, your L_2 cos theta_2 is nothing, but this; so from here; so this will be your L_2 cos theta_2. Similarly, this is your L_3 and this particular angle is nothing, but theta_3. So, from here to here is nothing, but is your L_3 cos theta_3; so L_2 cos theta_2 plus L_3 cos theta_3 is nothing, but the height of this particular the hip. So, these two terms actually will have to be defined for the purpose of analysis and here, another thing. So, this is the hip joint, now during this particular walking through the staircase; so the hip should also follow a particular trajectory. Now, here, for simplicity, we have assumed that the hip is going to follow a straight path. And, the slope of this particular straight line is nothing, but the slope of this particular the staircase. So, if I just try to find out the slope of the staircase and the slope of this particular the hip; so, they are having the same slope. Now, this is the way actually mathematically, we are going to describe this particular the configuration for the purpose of analysis. 
Now, here, as I told, this particular swing foot should have some trajectory; now, for simplicity, we have considered that the swing foot is going to follow one cubic polynomial of this particular form, that is, z is nothing but c_0 plus c_1 x plus c_2 x square plus c_3 x cube. Now, here, actually, for this cubic polynomial, there are 4 unknowns, like c_0, c_1, c_2 and c_3. Now, if I want to solve it, I will have to take the help of four such known conditions and these are nothing but the boundary conditions. Now, let me once again go back: this is nothing but actually the hip trajectory, which I am going to represent mathematically. Now, here, this is the coordinate system; this is your x and this is z. So, at this particular point, the z height is actually equal to 0; similarly, when x is this, at that particular situation, I can find out that this is nothing but the height along this particular z, and then we try to find out, for a particular value of x: when x is here, I can find out this much is actually your z; when x is this much, here, this is nothing but the value of this particular z. So, using these 4 conditions, I can derive this particular cubic polynomial. So, I am just going to write down all such conditions here; so, the conditions are written here. So, these boundary conditions, the four boundary conditions, are written here; for example, at x equals 0, z equals 0, and so on. And, if you use these boundary conditions, we can find out what should be the values of these c_0, c_1, c_2 and c_3. And, once you have got the values of the coefficients, I can represent this swing foot trajectory. Now, once you have got this particular swing foot trajectory, next we try to find out the hip joint trajectory.
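Determining c_0 to c_3 from the four boundary conditions is a small linear solve. The sketch below assumes four illustrative (x, z) conditions of the kind described (z = 0 at x = 0, plus three more sampled heights); the actual values in the lecture depend on the staircase dimensions.

```python
def solve_linear(A, b):
    """Solve A c = b by Gaussian elimination with partial pivoting (4x4 here)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def fit_cubic(points):
    """Coefficients (c0, c1, c2, c3) of z = c0 + c1 x + c2 x^2 + c3 x^3
    passing through four (x, z) boundary conditions."""
    A = [[1.0, x, x ** 2, x ** 3] for x, _ in points]
    b = [z for _, z in points]
    return solve_linear(A, b)

# Illustrative boundary conditions: z = 0 at x = 0, then three assumed heights.
conditions = [(0.0, 0.0), (0.1, 0.05), (0.2, 0.08), (0.3, 0.02)]
c0, c1, c2, c3 = fit_cubic(conditions)
```

Once the coefficients are known, evaluating the cubic at intermediate x values gives the swing foot height all along the step, which is exactly how the trajectory is then used in the gait analysis.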
And, I have already mentioned that we have assumed that this particular hip joint is going to follow a straight path, whose slope is nothing but the slope of this particular staircase. So, this particular angle and that particular angle are the same. Now, here, actually, there is a chance of optimization; we can find out a suitable optimal slope or optimal trajectory for this particular hip joint. But, here, for simplicity, we consider that the slope of this particular trajectory is the same as the slope of the staircase. Now, if I concentrate on this particular hip joint, if I take the projection of this particular hip joint, I can find out the distance between this ankle joint and the projection of the hip joint. Similarly, I can find out the distance between the projected point from the hip and this particular ankle joint, and that is denoted by l_2. And, I can also find out what is h_1, that is, the height of this particular hip joint, and I can also find out what is h_2, that is nothing but the height of this particular hip joint. Now, knowing these, what we can do is, we can carry out the analysis for the dynamic balance. And, we can also find out the expression for the power consumption. Now, here, I am just going to discuss a little bit how to maintain the dynamic balance for this particular biped robot. Now, before I proceed further, I just want to mention that we human beings are not statically stable; we are dynamically stable. Even if we are standing at a particular location, we are not statically stable, but we are dynamically stable. Now, let us see how to maintain this particular dynamic balance. Now, here, I am just going to use the concept of the ZMP, and that is known as the zero moment point. So, ZMP is the zero moment point and the concept of ZMP was introduced by Vukobratovic, and this particular concept has become very popular.
Now, let us try to understand how we can find out this ZMP, or the zero moment point. To find out this particular zero moment point, actually, what I am going to do is consider a particular link of the robot; for this particular biped robot, I am just going to consider a particular leg. Supposing that that particular leg is denoted by this; this is nothing but a link or a leg, and this particular link or leg is having one concentrated mass, and this particular mass is denoted by this. And, supposing that for this i-th leg or the i-th link, the mass is denoted by m_i, and this mass center is having the coordinates x_i, y_i, z_i. Now, let us see how to determine the ZMP, that is, the zero moment point. Now, here, this is nothing but the foot which is in touch with the ground; so, this is nothing but the foot, and this foot is having a length which is denoted by L_7. And, this is nothing but the center of this particular foot, that is, at the midpoint; so, this particular half-length is nothing but L_7 by 2. Now, before I derive it, let me tell you what we mean by this particular ZMP. The ZMP is actually the zero moment point, which is a hypothetical point, and this is a point about which the sum of all the moments becomes equal to 0. Now, let me repeat: the ZMP is a hypothetical point about which the sum of all the moments becomes equal to 0. Now, here, this particular mass m_i is subjected to a few forces. For example, say g is the acceleration due to gravity; so, this particular m_i g is acting vertically downward, and this is the direction along which this m_i g is acting. Now, here, let us consider the movement of this particular mass along the x direction and along the z direction.
And, if I say that along the x direction there is one acceleration, that is nothing but x_i double dot, and along this particular z direction there is one acceleration, that is nothing but z_i double dot, then we can say that there is a force acting along the x direction, that is, m_i x_i double dot; mass multiplied by acceleration is the force. Similarly, along this particular z direction, the force m_i z_i double dot is acting; moreover, here, this is a rotary movement, the link is rotating. So, here, we will have to consider the moment of inertia, and this I_i is nothing but the moment of inertia of the i-th link or the i-th leg, and this omega_i dot is nothing but the angular acceleration. So, the moment of inertia multiplied by the angular acceleration is nothing but actually a torque: mass multiplied by linear acceleration is force; similarly, the moment of inertia multiplied by the angular acceleration is the torque. So, here, the link is subjected to the torque, that is, I_i omega_i dot, and then it is subjected to the forces like m_i g, then m_i x_i double dot, then m_i z_i double dot. Now, here, I am on this particular ground foot; so, I am just going to consider a hypothetical point. Supposing that the point is here; now, corresponding to this particular point, let us try to find out what should be the moment, and we just put the sum of those particular moments equal to 0. Now, here, the vertically downward force is nothing but m_i g and vertically upward is m_i z_i double dot. And, truly speaking, this m_i z_i double dot is larger compared to this particular m_i g, because this is moving in the vertically upward direction. So, what is the difference between these two forces? The resultant force is nothing but m_i z_i double dot minus m_i g. So, this is nothing but the resultant force in this particular direction.
Now, I will have to find out the moment; so, the resultant force is acting in this particular direction, and how much is the moment? So, to get the moment about this particular point, I will have to multiply the force by this particular distance. And, what is this particular distance? That is nothing but x_ZMP minus x_i. So, I am getting the moment due to this particular vertical force. Now, I am trying to find out the moment due to the horizontal force; in the horizontal direction, the force is m_i x_i double dot, and this particular height is nothing but your z_i. So, m_i x_i double dot multiplied by z_i is the moment. Now, here, this is going to create some sort of clockwise moment, and this also creates some sort of clockwise moment, but this is going to create some sort of anticlockwise torque, and that particular torque is the summation, i equals to 1 to 7, because I have got 7 links, of I_i multiplied by omega_i dot. So, this summation of torques is anticlockwise, and the other terms are clockwise. So, clockwise I have taken as positive and anticlockwise as negative, and the sum is set equal to 0; and if I solve, if I simplify, I will be getting the expression for x_ZMP. So, I will be getting the coordinate of this particular point, that is nothing but the zero moment point. And, once I have got this particular zero moment point, now very easily I can find out the dynamic balance margin. Now, if x_ZMP is measured from the centre of the foot, whose half-length is L_7 by 2, then L_7 by 2 minus x_ZMP is nothing but the dynamic balance margin. So, the dynamic balance margin is this much. Now, if x_ZMP lies at the foot centre, in that case, I will have the maximum dynamic balance margin.
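The moment balance above can be solved for x_ZMP in closed form. A hedged sketch in Python, using the standard form of the ZMP equation (sign conventions differ between texts, so treat the exact signs as an assumption), together with the dynamic balance margin L_7/2 minus x_ZMP:

```python
def x_zmp(m, x, z, xdd, zdd, I, wdot, g=9.81):
    """Zero moment point along x for lumped masses m_i at (x_i, z_i).

    Standard form (sign conventions vary between texts):
      x_zmp = [ sum m_i (zdd_i + g) x_i - sum m_i xdd_i z_i - sum I_i wdot_i ]
              / [ sum m_i (zdd_i + g) ]
    """
    num = sum(mi*(zi_dd + g)*xi - mi*xi_dd*zi - Ii*wi
              for mi, xi, zi, xi_dd, zi_dd, Ii, wi
              in zip(m, x, z, xdd, zdd, I, wdot))
    den = sum(mi*(zi_dd + g) for mi, zi_dd in zip(m, zdd))
    return num / den

def dbm(x_zmp_val, L7):
    """Dynamic balance margin: distance of the ZMP from the foot edge,
    measuring x_zmp from the foot centre.  It is maximum (= L7/2) when
    the ZMP sits exactly at the foot centre."""
    return L7/2.0 - abs(x_zmp_val)
```

For a single static mass directly above the foot centre, x_ZMP comes out as 0 and the margin equals L_7/2, matching the "maximum dynamic balance margin" case described in the lecture.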
Now, this is the way, actually, we calculate the dynamic balance margin during the biped walking. Now, in short, let me tell you the procedure of how to find out the joint torque; how to determine the joint torque, I have discussed in much more detail while discussing the dynamics. Now, let me proceed a little bit faster. So, this is actually how to assign the coordinate systems at the different joints according to the D-H parameter setting rule. Now, this D-H parameter setting rule, I have discussed in detail in the chapter on robot dynamics, and those things I am not going to repeat. Now, using that particular principle of D-H parameter setting, at each of these particular robotic joints 1, 2, 3, 4, 5, 6, 7, I will have to assign this particular coordinate system, like the x axis, y axis and z axis. Now, once you have assigned this particular coordinate system, if I want to find out the joint torque, what I will have to do is find out what should be the variation of theta as a function of time, and we will have to assume a smooth variation of this particular joint angle. Now, while discussing the trajectory planning, I have discussed in much more detail how to fit this type of fifth-order polynomial just to obtain a smooth variation of theta; here, q (t) is nothing but theta (t), because this is a rotary joint. So, what I will have to do is find out theta as a function of time; some sort of a smooth curve I will have to fit. And, once you have got that particular thing, I am in a position to find out what should be the variation of this particular joint torque, that is, tau_1 as a function of time, then tau_2 as a function of time and so on, up to tau_7; and how to derive those things, I have discussed in much more detail, so I am not going to repeat.
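A smooth theta(t) is commonly obtained from a fifth-order polynomial. The boundary conditions below (zero velocity and zero acceleration at both ends) are one common choice, since the lecture leaves them general; with those conditions the quintic collapses to a single closed form:

```python
def quintic_theta(theta0, thetaf, T):
    """Fifth-order polynomial joint trajectory theta(t) on [0, T] with
    zero velocity and zero acceleration at both ends (one common set of
    boundary conditions; the lecture does not fix them)."""
    d = thetaf - theta0
    def theta(t):
        s = t / T                      # normalised time in [0, 1]
        return theta0 + d*(10*s**3 - 15*s**4 + 6*s**5)
    def theta_dot(t):
        s = t / T
        return d*(30*s**2 - 60*s**3 + 30*s**4) / T
    return theta, theta_dot

# Hypothetical joint motion: 0 rad to 1 rad over 2 s.
th, thd = quintic_theta(0.0, 1.0, 2.0)
```

The resulting theta(t) and theta dot(t) can then be fed into the dynamic model to get tau_1(t) through tau_7(t), as the lecture describes.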
Now, this is actually the final expression for the torque, and this D_ik is nothing but the inertia term, which I have already discussed and derived; h_ikm is the Coriolis and centrifugal term; and C_i is nothing but the gravity term, as we have discussed in much more detail in robot dynamics. So, I am not going to spend much time on this. So, now, I am in a position to say that for this particular biped robot, I am able to find out what should be the expression for the joint torque and how to determine the dynamic balance margin. Now, here, what we can do is discuss how to determine the power consumption. Now, the expression for the power consumption: power, we know, is nothing but the rate of change of work done, or work done per unit time. So, here, this particular tau_i denotes actually the torque, multiplied by q_i dot, that is nothing but the angular velocity; so, torque multiplied by angular velocity is the work done per unit time, and this is nothing but the power. Plus, here, I have written k multiplied by tau_i squared. Now, I have already discussed that at each of the robotic joints we use some DC motor, and whenever we are going to use a DC motor, there will be some loss. And, that particular loss in this DC motor is proportional to tau squared; that means, here, the loss L is nothing but k multiplied by tau squared, and k is nothing but your constant of proportionality. And, generally, for the DC motor, this particular k is taken to be equal to 0.025 or very close to that. Now, if I know this, I can find out how much is the loss in this particular DC motor. And, knowing the requirement of the torque, I can find out what should be the power rating for the motor, which I am going to put at the different joints.
And, using this particular principle, actually, we can find out what should be the power rating for the motor connected at each joint. This is how to carry out the analysis for the single support phase. Thank you.
Robotics by Prof. D. K. Pratihar
Lecture 38: Robot Motion Planning (Contd.)
We are discussing how to solve the find-path problem using graph-based techniques. Now, I am just going to start with the working principle of the Voronoi diagram, which is a very popular graph-based method to solve the find-path problem. The concept of the Voronoi diagram was proposed by Dunlaing et al. in the year 1986. Now, here, the problem is something like this: supposing that this is the field, and I have got one robot, say a point robot, and this is the starting position of the robot, and its goal is denoted by the point G. Now, starting from the point S, it will have to reach the point G by avoiding collision with the obstacles. Now, here, we are just going to consider 3 static obstacles, like O_1, then comes O_2 and O_3. Now, supposing that there is no such obstacle; if there is no obstacle, then it will start from here and it is going to reach this point G following a path something like this, but unfortunately, there are some fixed obstacles. Now, let us see how to find out the collision-free path using this particular Voronoi diagram. Now, in a Voronoi diagram, actually, what we do is try to find out the locus of the points which are equidistant from two boundaries. Now, if I start from the point S and our aim is to reach this particular goal, here the most critical obstacle is nothing but O_1, and we have got the boundary of this particular field, and this is nothing but the boundary of the obstacle. So, what we do is consider this particular boundary of the obstacle and the boundary of the field, and we try to find out the midpoint. So, the midpoint of this is nothing but this particular point. So, this point is equidistant from this obstacle boundary and the boundary of the field.
Similarly, I can find out that the midpoint of these two boundaries is something like this, and I will be able to reach this particular point. And, once I have reached this particular point, I will have to consider this particular point, that is, the vertex of this particular obstacle, and the boundary of this field. So, if I consider this as one point and this as another point lying on the boundary, this is the midpoint. Similarly, we are going to consider a few more distances from this particular point, and we try to find out what should be the midpoint and what should be the locus of the midpoints. For example, on this particular straight line, the midpoint could be here; the midpoint could be here, the midpoint could be here, the midpoint could be here and the midpoint could be here. So, what you do is start from here, and then, up to this, we can find out the collision-free path; the collision-free path could be something like this. For example, starting from here, the collision-free path up to this is something like this; then I can find out a path up to this. Next, we try to consider this particular vertex of the obstacle, that is, O_2, and this is the boundary of this particular field. So, what you do is try to draw some straight lines; for example, say from this particular boundary to this point, these are the straight lines we consider, and we try to find out the midpoints. So, the midpoint of this particular straight line is this; here, this is the midpoint, this is the midpoint, midpoint, and we try to join them by a smooth curve, and this could be the path. So, we have reached up to this; that means, the point robot has reached up to this. Now, we consider this particular boundary and the boundary of this particular field and the locus of the midpoint starting from here; so, this will be the locus up to this.
Now, after that, we consider this particular vertex of O_2 and this is the boundary; so, we draw all such straight lines here. And, for each of these particular straight lines, we try to find out the midpoint. So, these are the midpoints; from here, we can just join them by a smooth curve. So, the point robot has reached up to this. Next, we consider this vertex, and this is nothing but the boundary of the field. So, these are the straight lines we consider, and we consider the midpoint of each particular straight line; so, from here, there is a possibility that I will be getting this type of path. So, the point robot is here; from here to here, this is the boundary of the obstacle and this is the boundary of the field. So, the locus of the midpoints could be something like this; from here, I will be able to reach this particular point. Next, we consider this vertex and this is the boundary of this particular field; so, we join them by straight lines, and we try to find out the midpoints, and we join them by a smooth curve. So, up to this, this is the path. Now we consider this vertex and the boundary of the field. So, these are the straight lines which you are going to consider, and once again, we try to find out the locus of the midpoints. So, we will be getting this type of locus, and then from here to this particular goal; this is the obstacle boundary and this is the boundary of the field, so, from here, the locus will be something like this. So, starting from the initial point, I can find out a collision-free path, which is something like this. So, this is a collision-free path for this particular point robot.
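The equidistant-locus idea can be sketched numerically: sample the field on a grid and keep the points whose two nearest boundaries (obstacle boundaries plus the field boundary) are almost the same distance away. The obstacle model below (a crude bounding-circle distance function) and all the numbers are illustrative assumptions, not the lecture's construction:

```python
import numpy as np

def gvd_points(obstacles, xlim, ylim, step=0.05, tol=0.03):
    """Brute-force sketch of the Voronoi (equidistant) locus.
    `obstacles` is a list of distance functions d_i(p); the rectangular
    field boundary is added as one more 'site'."""
    def field_boundary(p):  # distance to the nearest field edge
        x, y = p
        return min(x - xlim[0], xlim[1] - x, y - ylim[0], ylim[1] - y)
    sites = obstacles + [field_boundary]
    pts = []
    for x in np.arange(xlim[0], xlim[1], step):
        for y in np.arange(ylim[0], ylim[1], step):
            d = sorted(f((x, y)) for f in sites)
            if d[1] - d[0] < tol:      # (almost) equidistant from two sites
                pts.append((x, y))
    return pts

# Hypothetical obstacle: distance to a centre minus a radius, i.e. a
# bounding-circle approximation (my assumption, not from the lecture).
obs = [lambda p: np.hypot(p[0] - 1.0, p[1] - 1.0) - 0.3]
locus = gvd_points(obs, (0.0, 2.0), (0.0, 2.0))
```

Joining neighbouring locus points gives exactly the kind of midpoint curve the lecture draws by hand; a real implementation would compute the edges of the generalized Voronoi diagram analytically instead of by grid sampling.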
Similarly, we can start from here and move along this particular direction, and there is a possibility that I will be able to find out another feasible path something like this, or yet another path something like this. So, there is a possibility that we can find out several such feasible paths, and out of all such feasible paths, actually, we will have to find out the time-optimal path. But, in the Voronoi diagram approach, they did not try to find out the time-optimal path; they only tried to find out the obstacle-free, the collision-free path. So, this is one of the possible collision-free paths for the point robot. So, this is the way, by using the principle of the Voronoi diagram, we can find out the collision-free path for the point robot in the presence of some static obstacles. That means, we can solve the find-path problem using the principle of the Voronoi diagram. Now, the next is the concept of cell decomposition. Now, this particular cell decomposition is another very popular graph-based technique to solve the find-path problem. The concept was proposed by Lozano-Perez in the year 1983, and Lozano-Perez proposed one technique that is called cell decomposition; he also gave the concept of this particular configuration space, or the C-space. Now, let us try to see the problem: here, actually, what we do is consider one physical robot in place of the point robot, and we consider one static obstacle. Supposing that the physical robot is something like this; so, this is the physical robot which I am going to consider, and this is the marked vertex of this particular physical robot. And, supposing that I have got one static obstacle and the boundary of the obstacle is something like this. So, this is nothing but the obstacle, and I have got a physical robot something like this.
So, this is the robot and we have got the obstacle here; how to ensure the collision-free path for this type of robot? Now, what you do is: this particular physical robot is converted into a point robot something like this, and this rectangular obstacle is converted into this type of grown obstacle something like this. Now, how to arrive at this grown obstacle starting from this particular original obstacle? That, I am going to discuss: how to replace this physical robot by a point robot and how to replace this particular obstacle by a modified grown obstacle, so that the problem remains the same. Now, what is the problem? The problem is, we will have to find out a collision-free path for this particular robot, so that it does not collide with this particular obstacle. And, this problem is equivalent to that of determining a collision-free path for the point robot in the presence of this type of grown obstacle. Now, let us see how to obtain this grown obstacle; here, I am just going to put one condition, and the condition is as follows: the orientation of this particular marked vertex will remain the same. So, what you do is: I have got this particular obstacle and this particular robot, and I just place the robot here, so that this particular marked vertex coincides with this particular corner of the obstacle. Now, actually, what you do is try to slide in this particular direction; so, the robot is going to slide in this particular direction. So, there is a possibility this will be the position, and once again it is sliding, and then, after some time, it will reach this particular point. So, this is the marked vertex, and once it has reached this, this particular robot R can slide along this particular edge. So, what we will be getting is, it can slide in this particular direction.
So, this will be the marked vertex and this will be the position of the robot; and once it has reached this, it can slide in this particular direction, keeping the orientation of the marked vertex the same. So, it can slide in this particular direction, and then, gradually, it is going to reach this particular position; this will be actually the locus of the marked vertex. Then, here, actually, what I can do is slide in this particular direction; so, there could be sliding here, and there is a possibility that, if it slides, it will take a position something like this. And, this is the orientation of the marked vertex, and this will be the locus of this particular marked vertex. So, after that, it can slide in this particular direction, and then, gradually, I am just going to reach this particular point. So, the marked vertex will be here and this will be the position of this particular robot R; this will be the locus of this particular marked vertex. And, after that, here, it is going to slide in this particular direction, so there is a possibility that this will be the situation; and this is the marked vertex. So, the marked vertex is sliding along this, and then the marked vertex is going to reach this particular starting point, and this is the direction along which the robot is going to slide. Now, if this is the situation, then very easily we can find out the locus of this particular marked vertex. For example, we started from here, then the marked vertex moves like this, then it moves like this, then it moves like this, then it moves like this, then it comes here, then it will come here and then it will come here. And, this is nothing but actually the grown obstacle. So, this particular robot will be replaced by the marked vertex; this particular marked vertex is nothing but a point. So, this is the point, and the original obstacle will be converted into the grown obstacle.
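For an axis-aligned rectangular robot and obstacle at fixed orientation, the sliding construction above reduces to expanding the obstacle by the robot's extents about the marked vertex; this is the Minkowski-sum view of Lozano-Perez's C-space idea. A sketch, with the rectangle representation and parameter names my own:

```python
def grow_obstacle(obst, robot_w, robot_h, mark=(0.0, 0.0)):
    """Grown obstacle for a rectangular robot sliding around a
    rectangular obstacle with fixed orientation.

    obst = (xmin, ymin, xmax, ymax); the robot is robot_w x robot_h,
    with its marked vertex at offset `mark` from the robot's lower-left
    corner.  Treating the marked vertex as the point robot, the marked
    vertex collides whenever it lies inside the returned rectangle."""
    xmin, ymin, xmax, ymax = obst
    mx, my = mark
    return (xmin - (robot_w - mx), ymin - (robot_h - my),
            xmax + mx, ymax + my)

# Hypothetical 0.5 x 0.5 robot, marked vertex at its lower-left corner,
# around a unit-square obstacle:
g = grow_obstacle((1.0, 1.0, 2.0, 2.0), 0.5, 0.5)
```

With the marked vertex at the lower-left corner, the obstacle grows only towards the lower-left, exactly as the hand-drawn locus in the lecture grows on the sides the robot's body sticks out from the marked vertex.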
So, this particular grown obstacle, I have just now redrawn here; this is nothing but the grown obstacle. So, now the problem is equivalent to a point robot and this particular grown obstacle: I will have to find out the collision-free path for this particular point robot by considering this type of grown static obstacle. Now, let us see how to tackle this particular problem using the principle of cell decomposition. Now, here, actually, what we do is: the same grown obstacle I just redraw here, and this is nothing but the position of this particular point robot. So, this is the starting position, and the goal could be here, that is denoted by G, and this is nothing but the grown obstacle. So, the grown obstacle is something like this; this particular grown obstacle represents the infeasible zone. The point robot should not enter here, just to avoid the collision with the static obstacle. Now, if this is the infeasible zone, the rest of the zone will be the feasible zone. So, this is actually the boundary of the field, and this particular white portion will be nothing but the feasible zone. Now, this particular feasible zone is divided into a large number of small sub-regions. For example, starting from here, what I can do is divide this feasible zone something like this. So, this is one feasible sub-region; similarly, this is another feasible sub-region, this is another feasible sub-region, another feasible sub-region, another feasible sub-region. So, we will be getting some feasible sub-regions something like this, and once we have got these particular feasible sub-regions, what I do is start from here and go to the nearest feasible sub-region; this could be the nearest feasible sub-region. So, we try to find out the midpoint, the centre, of this particular sub-region.
Similarly, this is the center of this particular sub-region, this could be the center of this particular sub-region, this is the center of the sub-region, the center of the sub-region. Then, the path could be something like this: start from here, then you reach this particular point, then you come here, then you come here, you come here, and the robot is going to reach that particular goal. Now, this is one of the feasible paths; there could be some other feasible paths. For example, another feasible path could be something like this: you just start from here, then you find out this as the feasible sub-region, so the midpoint could be here; next, this is the feasible sub-region, the midpoint could be here; the center of this particular sub-region could be here; this could be the center for the sub-region. So, another feasible path could be something like this; this could be another feasible path for this particular point robot. So, this is the way, by using the principle of the cell decomposition method, we can find out the feasible collision-free path for the robot. Now, here, we are trying to find out the collision-free path for the point robot considering this grown obstacle. Now, by solving this, we are also able to find out the collision-free path for the actual physical robot, which is something like this, where this was the marked vertex and the original obstacle was something like this. So, if you solve this particular find-path problem, indirectly we are solving that particular find-path problem. So, this is the way, by using the principle of the cell decomposition method, we can find out the collision-free path. Now, we are going to start with another very popular method for solving the find-path problem, and this is known as the tangent graph technique. Now, this tangent graph technique was proposed by Liu and Arimoto in the year 1991.
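The centre-to-centre search over feasible sub-regions can be sketched with a grid of unit cells and a breadth-first search between neighbouring cell centres; the grid layout below (with 1s marking the grown obstacle) is hypothetical:

```python
from collections import deque

def cell_path(grid, start, goal):
    """Find-path by cell decomposition: the free space is split into
    unit cells (0 = free, 1 = inside the grown obstacle) and a path is
    found by breadth-first search from cell centre to cell centre."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}
    q = deque([start])
    while q:
        r, c = q.popleft()
        if (r, c) == goal:              # rebuild the path backwards
            path, cur = [], (r, c)
            while cur is not None:
                path.append(cur)
                cur = prev[cur]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols \
               and grid[nr][nc] == 0 and (nr, nc) not in prev:
                prev[(nr, nc)] = (r, c)
                q.append((nr, nc))
    return None  # no feasible sequence of free cells

grid = [[0, 0, 0, 0],
        [0, 1, 1, 0],   # 1s mark the grown obstacle
        [0, 1, 1, 0],
        [0, 0, 0, 0]]
p = cell_path(grid, (0, 0), (3, 3))
```

Breadth-first search returns the path with the fewest cells, which is one simple stand-in for the "shortest feasible path" selection; the lecture's decomposition uses irregular sub-regions rather than a uniform grid, so this is only an illustrative simplification.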
Now, here, actually, what you do is try to move along the tangents of some circles. Now, supposing that I have got these fixed obstacles: a triangular fixed obstacle, or there could be some sort of rectangular fixed obstacle. So, here, what we do is try to draw one bounding circle for each particular obstacle. Now, supposing that this is actually the triangular obstacle; we try to find out the center, and considering this as the center, we try to draw one circle, and this particular circle will be the bounding circle for this static obstacle, that is, the triangular obstacle. So, we try to find out the collision-free path considering this particular circular boundary, and this is nothing but the bounding circle for this particular static obstacle. Similarly, if this is another static obstacle, we try to find out the center of its area and once again draw one circle, and this will be nothing but the bounding circle for that particular static obstacle. And, once you have got these particular bounding circles, instead of considering the physical dimensions of the obstacles, we are going to consider this type of boundary, the bounding circle. Now, here, let us try to find out the feasible path for a point robot considering these particular bounding circles. So, once again, let me repeat: for each of these particular static obstacles, we try to find out the bounding circle, and considering the bounding circles, we try to determine what should be the collision-free path for the point robot. Now, here, if you see, this is the find-path problem: we consider this as the initial position, that is, the starting position of the point robot, and this is the goal. Now, if there is no such obstacle, then very easily, starting from here, it is going to reach that particular goal.
And, absolutely, there is no problem, because there is no such obstacle; but if I consider the obstacles here, the path will be slightly different. How to find out that particular path? To find out the path, we take the help of the tangent graph technique; the technique is as follows: we start from this particular point S and we try to find out which one is the most critical obstacle. Now, if I compare O_1, O_2 and O_3, these are all static obstacles; out of these three obstacles, O_1 is physically the closest. So, we first consider this particular O_1 obstacle. So, what I do is: from here, we try to draw the tangents; from here, we can draw one tangent, which is something like this, and from here, we can draw another tangent to this particular circle. Then, we consider this particular obstacle, that is, O_3, and we try to draw all the external and the internal tangents; for example, one tangent could be something like this, another tangent could be something like this, and we can also consider this type of tangents. So, these are also tangents. Next, we try to find out the tangents between this particular obstacle and that particular obstacle. But, before that, starting from S, I can also draw this particular tangent, and I can draw another tangent here. And, between these two obstacles, I can find out the tangents something like this. So, this is one tangent, this is another tangent, then we get another tangent here, and another tangent here. Now, from here, for this obstacle O_2, I can find out this type of tangent also between O_1 and O_2, then this is another tangent, then another tangent, another tangent; I can draw the tangent here, I can draw the tangent here. Similarly, this is another, and I can draw this type of tangent also. Now, once you have got these particular tangents, we are trying to find out what should be a feasible path.
Let us see how to find out a feasible path. Now, to find out the feasible path, actually, what you do is: supposing that I am just going to start, and this is the first critical obstacle. So, let me follow this particular path; from here, supposing that I am just going to follow this particular tangent. The next tangent is here; from here, the next tangent could be here, up to this. And, the point robot will follow this circular arc, and it will reach this, and then it is going to follow this particular tangent; so, this is one feasible path. Now, here, in this particular feasible path, we can see that we can draw one circular arc here; for example, say from here to here, there will be a circular arc, and from here to here, there could be another circular arc something like this. Now, if I draw one circular arc here and another circular arc here, supposing that this is denoted by point 1, this is denoted by point 2, this is denoted by point 3, and this is denoted by point 4, then the feasible path will be from S to 1, 1 to 2, 2 to 3, 3 to 4, 4 to G; so, this is one feasible path. Similarly, there could be some other feasible paths; for example, another feasible path could be something like this: you start from here, then you reach this particular point, then, from here, you follow this particular circular path up to this. Now, from here, actually, what you can do is follow this type of tangent path. So, if I just write down, say this is A, this is B and this is G, the path could be S to A, A to B, B to G. Similarly, we can find out several such feasible paths, and out of all the feasible paths, in fact, we will have to find out what should be the time-optimal and collision-free path. Now, this particular algorithm actually gained some popularity, as it is using the principle of tangents.
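The tangent lines drawn from the start point to a bounding circle can be computed in closed form; a small sketch (the function name and conventions are mine):

```python
import math

def tangent_points(px, py, cx, cy, r):
    """Tangent points on a bounding circle (centre (cx, cy), radius r)
    for the two tangent lines drawn from an external point (px, py)."""
    dx, dy = cx - px, cy - py
    d = math.hypot(dx, dy)
    if d <= r:
        return []                      # point inside the circle: no tangent
    base = math.atan2(dy, dx)          # direction from the point to the centre
    alpha = math.asin(r / d)           # half-angle subtended by the circle
    t = math.sqrt(d*d - r*r)           # tangent length
    return [(px + t*math.cos(base + s*alpha),
             py + t*math.sin(base + s*alpha)) for s in (+1, -1)]

# Illustrative case: start at the origin, bounding circle of radius 1
# centred at (2, 0) -> the two tangent points are at (1.5, +-sqrt(3)/2).
pts = tangent_points(0.0, 0.0, 2.0, 0.0, 1.0)
```

Building these tangent points for every start-circle, circle-circle and circle-goal pair, plus the circular arcs between them, gives the node set of the tangent graph; this is also where the order-of-N-squared cost mentioned next comes from, since every pair of arcs contributes candidate edges.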
So, there is a possibility that you will be getting the optimal path, but here, the main problem is actually the computational complexity. Now, it has been checked that the computational complexity of this particular algorithm is of the order of N squared, where N indicates the number of control points or the number of circular arcs. So, the more the number of control points or circular arcs, the more will be the complexity; the complexity is actually quadratic, of the order of N squared. Otherwise, this method is very good and it could solve the find-path problem very efficiently. So, this particular method, as I told, gained good popularity. Thank you.
Robotics by Prof. D. K. Pratihar
Lecture 11: Robot Kinematics
We are going to start with the second topic, and this is on Robot Kinematics. Now, here, the purpose of kinematics is to study the motion of the robotic links, but we do not try to find out the reason behind this particular motion. For example, if it is a linear movement, there must be some force acting; if it is a rotary movement, there must be some torque acting; but here, in kinematics, we do not try to find out what should be the amount of force or what should be the amount of torque. We study only the motion of the different links, the relative motion of the different links, and so on. So, let us see how to carry out this particular kinematic analysis. Now, before I start with this kinematic analysis, let me start from the very beginning, I should say. Supposing that I have got a 3-D object; so, this is nothing but a 3-D object. Now, this particular 3-D object, I will have to represent; that means, its position and orientation, I will have to represent in 3-D space. Now, here, U indicates the universal coordinate system, and it has got the axes X_U, Y_U and Z_U, and they are mutually perpendicular. Now, we will have to represent this particular 3-D object in this 3-D space. So, how to represent its position, how to represent its orientation? Now, here, to represent this particular position, what we do is try to find out the mass center of this particular 3-D object. So, this is the mass center. So, at this mass center, we just draw one coordinate system, that is nothing but the B coordinate system, and it is having X_B, Y_B and Z_B. Now, here, if I want to determine the position of this particular mass center, what I will have to do is move along X, then move along Y, then move along Z just to find out this particular position.
So, I need three pieces of information; and if I want to represent the orientation of this particular 3-D object in this 3-D space, I will have to consider the rotations: rotation about X, rotation about Y, rotation about Z. So, I need three more pieces of information. So, for position we need three pieces of information, that is, X, Y and Z; and for this particular rotation or orientation, we need three more. So, we need a total of six pieces of information. So, this is the way, actually, we can represent the position and orientation of a 3-D object in 3-D space. Now, let us see how to represent the position, or what we need to represent the position only. So, representation of the position, that means, the position of the mass center of the 3-D object; let me draw once again the same thing. So, I have got the universal coordinate system. So, this is X_U, Y_U and Z_U, and Q is actually a point, whose position is to be determined. So, what I will do is: starting from the origin O, I will move along X, then I will move along Y, then I will move along Z. So, this particular point will be having the coordinates q_x, q_y, q_z. And, here, this particular point can be represented as a position vector, and that is denoted by Q with respect to U, that is, with respect to the universal coordinate system; and to represent that particular position vector, I need the elements q_x, q_y and q_z. So, this is a vector, and in matrix form, this is nothing but a 3 cross 1 matrix: there are 3 rows and 1 column. So, to represent the position we need one vector, or one 3 cross 1 matrix. So, this is how to represent the position. Now, let us see how to represent the orientation.
To represent the orientation, what I do is this: so, this is once again my U coordinate system, the universal coordinate system, and this particular B is the body coordinate system, which is attached to the mass center of the body, whose orientation I am just going to represent. Now, for this particular B, you can see that there has been some change in orientation. For example, say X_B is not parallel to X_U, Y_B is not parallel to Y_U. Similarly, Z_B is not parallel to Z_U; that means, there has been some rotation. So, the orientation of this particular B has been changed with respect to this particular U. Now, here, the way we represent this particular orientation or rotation is like this. So, R_B with respect to U is nothing but the rotation of B with respect to U, that is, the rotation of the body coordinate system with respect to the universal coordinate system. Now, to represent this R_B with respect to U, we take the help of, in fact, three vectors. So, this is one vector, this is another vector, this is another vector. So, we need, in fact, a set of three vectors; each vector is nothing but a 3 cross 1 matrix, and taken together this will become a 3 cross 3 matrix. Now, how can we visualize these particular vectors? So, I am just going to prepare one rough sketch just to understand what these three vectors are. Now, let me just prepare one very rough sketch for a robotic hand or a robotic gripper sort of thing. So, this is actually one two-finger, say, a very simple gripper. So, I have got two fingers here, finger 1 and finger 2. Now, here, with the help of this particular gripper, I am just going to grip that particular 3-D object.
Now, if I want to grip the 3-D object, it may have different orientations, and depending on the orientation of the 3-D object, I will have to change the orientation of this particular gripper or the fingers; then only I can grip it. And here, to represent the orientation, actually, we take the help of three vectors: one is called the normal vector, denoted by n; another is called the sliding vector, denoted by s; and another is called the approach vector, denoted by a, ok? So, we have got the normal vector, the sliding vector and the approach vector. So, if I just write them down in the form of a matrix with columns n, s, a: along x, the first row is n_x, s_x, a_x; along y, the second row is n_y, s_y, a_y; and along z, the third row is n_z, s_z, a_z. So, in the matrix form this particular orientation will be represented like this, and this is in the Cartesian coordinate system. So, I am discussing here, in the Cartesian coordinate system, how to represent that particular orientation. Now, here, n_x, n_y and n_z are the elements of this particular normal vector; s_x, s_y and s_z are the elements of the sliding vector; and a_x, a_y and a_z are the elements of the approach vector. So, I need a set of three vectors, and this is nothing but a 3 cross 3 matrix: in the matrix form, we have got 3 rows and 3 columns. So, to represent the orientation, we need a set of three vectors, or we need a matrix, and that is nothing but a 3 cross 3 matrix. Now, I am just going to define the frame. Now, a frame is actually a set of four vectors, which carries the information of the position and orientation. We know that to represent the position, we need only one vector, and to represent the orientation we need, in fact, a set of three vectors; that means, to represent both position as well as orientation we need a set of four vectors. Now, this set of four vectors is known as the frame.
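The orientation matrix described here can be sketched in a few lines of Python (a minimal sketch: the example gripper pose and the function names are mine, not from the lecture). The columns are the normal, sliding and approach vectors, and a quick check confirms that they form a valid orthonormal triad, as the columns of any rotation matrix must.

```python
import math

def orientation_from_nsa(n, s, a):
    """Pack the normal, sliding and approach vectors column-wise into a 3x3 matrix."""
    return [[n[0], s[0], a[0]],
            [n[1], s[1], a[1]],
            [n[2], s[2], a[2]]]

def is_orthonormal(R, tol=1e-9):
    """Check R^T R = I, i.e. the three columns form an orthonormal triad."""
    for i in range(3):
        for j in range(3):
            dot = sum(R[k][i] * R[k][j] for k in range(3))
            expected = 1.0 if i == j else 0.0
            if abs(dot - expected) > tol:
                return False
    return True

# Hypothetical gripper pose: rotated by 30 degrees about the Z axis
theta = math.radians(30)
n = (math.cos(theta), math.sin(theta), 0.0)   # normal vector
s = (-math.sin(theta), math.cos(theta), 0.0)  # sliding vector
a = (0.0, 0.0, 1.0)                           # approach vector
R = orientation_from_nsa(n, s, a)
```

Any valid gripper orientation gives a matrix that passes the `is_orthonormal` check; a matrix that fails it cannot be a physical orientation.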
Actually, what we do is, we try to assign a frame at each of the joints. Now, here, in this schematic view, if you see, this is nothing but the universal coordinate system, and once again B is actually the body coordinate frame, which is attached to the mass center of the body. Now, this particular coordinate system X_B, Y_B and Z_B is having some translation with respect to U. So, this is the amount of translation: this particular origin has been shifted to this particular point, and this particular translation is nothing but Q_Borigin with respect to U. So, in the vector form, I can represent it like this. And, here, there is some rotation, and that is why X_B is not parallel to X_U, Y_B is not parallel to Y_U and Z_B is not parallel to Z_U; and this is the way, actually, we can represent the position and orientation with the help of the four vectors. Now, I am just going to concentrate on the frame transformation. Now, frame transformation means it includes frame translation and frame rotation. So, here, actually, I am just going to concentrate on the translation of a frame first. So, only translation, the pure translation, I am just going to consider. Now, once again, this is the universal coordinate system: X_U, Y_U, Z_U; B is the body-attached coordinate system; and here, I am just going to consider only translation. So, there is no rotation of B with respect to U, and that is why you can see that this particular X_B is parallel to X_U, then Y_B is parallel to Y_U and Z_B is parallel to Z_U, but there has been a shifting of the origin. So, previously the origin was here; now, the origin has been shifted to this, and here I am just going to show it with the help of one position vector. So, this is the situation. So, the frame B, or the coordinate system B, has been translated only with respect to the universal coordinate system, but there is no such rotation.
Now, supposing that the position of a particular point with respect to B is known, supposing that this particular vector is known, ok? So, if this vector is known and this particular translation is also known, then how to find out this Q with respect to U? That is our aim. So, our aim is to determine the position of the same point, which is lying on this particular body B, with respect to the universal coordinate system; that means, I am trying to find out the position vector, and very easily I can find out that Q with respect to U is nothing but Q_Borigin with respect to U plus Q with respect to B. That means, if I know the position with respect to the body coordinate frame, very easily you can find out the position information with respect to the universal coordinate system; and here, I am just going to consider only the translation. Now, I am just going to consider only rotation, that is, pure rotation. Now, once again, let me consider that universal coordinate system U having X_U, Y_U and Z_U, and there is pure rotation with respect to the universal coordinate system by some angle, say theta, in the anticlockwise sense. Now, if I rotate by an angle theta in the anticlockwise sense, my Z_B, that is, the Z axis of the body coordinate system, will be different from Z_U, although initially they were coinciding. Now, Z_B will be different from Z_U, X_B will be different from X_U and Y_B will be different from Y_U, ok? And, we will be getting this particular rotated frame, and supposing that the position of a particular point with respect to the rotated frame B is known to us. So, this is known.
So, once again, let me repeat that the body coordinate system has been rotated with respect to the universal coordinate system, and I know the position, that is, Q with respect to B and our aim is to find out Q with respect to U, the position with respect to the universal coordinate system. Now, this Q with respect to U is nothing but the rotation of B with respect to U multiplied by Q with respect to B, so this particular Q with respect to B is known and if I can find out, this particular rotation matrix, that is, R_B with respect to U, very easily I can find out Q with respect to U; now, how to determine that R_B with respect to U? That I am going to discuss after some time. Now, I am just going to consider a more complicated situation, where I am just going to consider both translation as well as rotation, and once again let me repeat that, this is the universal coordinate system: X_U, Y_U and Z_U and B is the body coordinate system, there has been some translation, that means, the origin has been shifted. So, from here, to this particular point and this is actually the position vector, that is, Q_Borigin with respect to U. And so, this particular B, the B coordinate system has got some rotation with respect to this particular universal coordinate system U and that is why, X_B is not parallel to X_U, Y_B is not parallel to Y_U and Z_B is not parallel to Z_U. So, this is the situation, where there has been both translation as well as rotation. Now, supposing that, this particular point, that is, Q with respect to B is known. So, this particular point is known and what is our aim? Our aim is to find out, say Q with respect to U. So, that is our aim. This is known, that is, Q_Borigin with respect U. So, this particular vector is known, this particular position vector is known, moreover, the rotation matrix of B with respect to U, say, that is also known, then, how to find out this Q with respect to U. 
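The pure-rotation rule Q^U = R_B^U Q^B can be sketched in Python. As a concrete case (my assumption; the lecture leaves the rotation axis to the figure), B is rotated by theta about the Z axis in the anticlockwise sense, which gives the standard rotation matrix about Z.

```python
import math

def rot_z(theta):
    """Rotation matrix for an anticlockwise rotation by theta about the Z axis."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0.0],
            [s,  c, 0.0],
            [0.0, 0.0, 1.0]]

def mat_vec(R, q):
    """Multiply a 3x3 matrix by a 3x1 vector."""
    return tuple(sum(R[i][k] * q[k] for k in range(3)) for i in range(3))

theta = math.radians(90)
q_B = (1.0, 0.0, 0.0)               # a point on the X_B axis, known in frame B
q_U = mat_vec(rot_z(theta), q_B)    # lands (up to rounding) on the Y_U axis
```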
Now, to find out this Q with respect to U, I will have to find out like this. So, I will have to find out the rotation of B with respect U multiplied by Q with respect to B. So, both the things are known plus Q_Borigin with respect to U. So, this is also known, so, very easily we can find out Q with respect to U. Now, this Q with respect to U is nothing but this expression. Now, here inside the expression there are two things; one is actually the translation of the origin of B, another is actually the rotation of B with respect to U. So, there are two things one is the translation, another is the rotation. So, now, what I am going to do is: I am just going to use a particular term, which includes both the translation as well as rotation, and that particular term is nothing but transformation. So, this particular transformation, it includes both translation and rotation. So, translation and rotation are taken together inside this particular transformation. Now, here so, this particular expression I am just going to write down in terms of the transformation matrix, that is, Q with respect to U is nothing but the transformation of B with respect to U. So, in place of this rotation and this position now, I am using this particular transformation; so transformation of B with respect to U multiplied by Q with respect to B. Now, if I do this transformation of B with respect to U now, I can multiply. So, this is Q with respect to B and you will be getting this Q with respect to U. So, this is the way actually we can find out if we have both translation as well as rotation. Now, let us try to check the dimension matching of this particular matrix. Now, if you see in the last slide, we wrote the equation, that is, Q with respect to U is nothing but the transformation of B with respect to U multiplied by actually Q with respect to B. Now, here, this Q with respect to U, if you see, in terms of matrix, this is nothing but a 3 cross 1 matrix. 
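The combined rule and its packing into a single transformation matrix can be sketched as follows (the angle, translation and point are arbitrary example values; the helper names are mine). The bottom row of three zeros and a 1, and the 1 appended to the position vectors, make the dimensions match, so a single matrix product reproduces R_B^U Q^B + Q_Borigin^U.

```python
import math

def homogeneous(R, p):
    """Pack a 3x3 rotation R and a 3x1 translation p into a 4x4 transformation."""
    T = [row[:] + [pi] for row, pi in zip(R, p)]  # 3x4 block [R | p]
    T.append([0.0, 0.0, 0.0, 1.0])                # fourth row: three zeros and a 1
    return T

def apply(T, q):
    """Apply a 4x4 transformation to a 3-D point."""
    q4 = list(q) + [1.0]                          # append 1: 3x1 becomes 4x1
    out = [sum(T[i][k] * q4[k] for k in range(4)) for i in range(4)]
    return tuple(out[:3])                         # drop the trailing 1

theta = math.radians(90)
c, s = math.cos(theta), math.sin(theta)
R = [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]  # rotation about Z (example)
p = [1.0, 2.0, 0.0]                               # translation of B's origin (example)

q_B = (1.0, 0.0, 0.0)
q_U = apply(homogeneous(R, p), q_B)
# same result as rotating q_B and then adding the translation
```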
Now, this particular transformation matrix has got two things: one is called the rotation matrix, and we have got the position vector. Now, this rotation matrix is a 3 cross 3 matrix and the position vector is nothing but a 3 cross 1 matrix. So, taking both the things together, that is, the rotation as well as the translation, 3 cross 3 and 3 cross 1 side by side will become a 3 cross 4 matrix. So, there will be 3 rows and 4 columns; and this Q with respect to U is nothing but 3 cross 1, and this particular Q with respect to B is also nothing but 3 cross 1. So, that means, I will have to multiply one 3 cross 4 matrix by one 3 cross 1 matrix, just to get a 3 cross 1 matrix, which is not possible, because the inner dimensions (4 and 3) do not match. Now, to solve this particular problem, to make it possible, actually, what I do is: here we do some modification sort of thing. So, here on this position term Q with respect to U we add 1; and here, just below the rotation matrix, on the fourth row, we use 0 0 0 (three zeroes), and here, we add 1; and now, this particular transformation matrix will have the dimension 4 cross 4. So, this was 3 cross 4 previously; now, I have added one more row here, so this will become 4 cross 4, and this particular thing will become a 4 cross 1 matrix and this will also become a 4 cross 1 matrix. Now, if I just multiply this 4 cross 4 with this 4 cross 1, I will be getting this 4 cross 1 matrix. So, it is matching. Now, my question is: why do we put 1 here, and why do we put these three zeros? All such things I am going to discuss. Thank you.
Robotics_by_Prof_D_K_Pratihar
Lecture_29_Robot_Dynamics_Contd.txt
Now, I am going to discuss another approach using the Lagrangian method to derive that particular expression for the joint torque that is tau_1, tau_2 for a say 2 degree of freedom serial manipulator. Now, here actually one modification, we will have to do. Now, till now, if this is the robotic arm having the length L and the coordinate system is attached here, so this is your X, Y and Z, this is the coordinate system. But, the motor is connected at this particular joint. And, as I told several times that we are trying to find out the reaction torque. Now, what you will have to do is, we consider the moment of inertia with respect to your, this coordinate system. For example, we got I_XX equals to half m r square. So, m is the mass of this particular link, r is the radius. So, this is having circular cross-section with radius r. Then, I_YY is one-third m L square plus one-fourth m r square, so this expression we have already got, we have already derived. Then, I_ZZ is one-third m L square plus one-fourth m r square, so this expression we have already derived. Now, here, if we just go for this particular approach, what will have to do is, we will have to express the moment of inertia with respect to this particular coordinate system, which is attached at the mass center. Now, this is the coordinate system, which is attached at the mass center that is denoted by c. And what is the coordinate of this particular mass center, the coordinate of the mass center is nothing but X equals minus L by 2, Y equals to 0, Z equals to 0, so this is the coordinate of this particular mass center. So, what I am going to do is, I am just going to represent the moment of inertia with respect to this particular coordinate system and I am just going to transfer from here to here using the parallel axis theorem. 
So, by using the parallel axis theorem, I can find out the moment of inertia about ZZ with respect to C; C is the coordinate system which is attached to the mass center. This is nothing but I_ZZ about C equals I_ZZ minus m (x bar square plus y bar square). Now, here, this x bar is nothing but minus L divided by 2, and y bar is equal to 0. And, so this can be written as one-third m L square plus one-fourth m r square minus m L square by 4 plus 0, and if you simplify, you will be getting one-twelfth m L square plus one-fourth m r square. So, this is nothing but the expression for I_ZZ about the coordinate system which is attached to the mass center. So, knowing this, actually, what I can do is, we are going to derive the same tau_1 and tau_2. Now, if I just draw this picture once again, the same manipulator having 2 degrees of freedom. So, this is X, this is Y, so here I have got say L_1, this is L_2. So, the length of this particular link is L_1, and this is L_2. Now, here, this is actually the mass center where m_1 g will be acting, and this is the mass center where m_2 g will be acting. Now, here, this mass center is having the coordinates say x_1, y_1; this is having the coordinates say x_2, y_2. And our aim is to derive the expression for this particular tau_1 and tau_2, so that is our aim. The problem is the same; I am using a slightly different approach. Now, here, let us first try to concentrate on the first link, whose length is L_1. Now, here, we try to find out the kinetic energy of the first link, and by definition K_1 is half m_1 v_1 square plus half I_1 omega_1 square. So, I_1 is the moment of inertia, omega_1 is nothing but the angular velocity, v_1 is the linear velocity and m_1 is the mass. So, the mass is assumed to be concentrated at the mass center; I will have to find out what should be the linear velocity of this particular mass center.
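The parallel-axis step above can be checked numerically (a small Python sketch; the values of m, L and r are arbitrary examples): I_ZZ about the joint end is (1/3) m L^2 + (1/4) m r^2, and shifting to the mass centre at x_bar = -L/2, y_bar = 0 subtracts m (x_bar^2 + y_bar^2), which should leave (1/12) m L^2 + (1/4) m r^2.

```python
def izz_about_joint(m, L, r):
    # moment of inertia of the link about the Z axis at the joint end
    return m * L**2 / 3.0 + m * r**2 / 4.0

def izz_about_mass_centre(m, L, r):
    # parallel axis theorem: shift from the joint end to the mass centre
    x_bar, y_bar = -L / 2.0, 0.0
    return izz_about_joint(m, L, r) - m * (x_bar**2 + y_bar**2)

m, L, r = 2.0, 0.5, 0.05   # example mass, length and radius
i_c = izz_about_mass_centre(m, L, r)
# i_c should equal (1/12) m L^2 + (1/4) m r^2
```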
Now, to find out the linear velocity, what we will have to do is: this is the joint angle, say theta_1; and this is nothing but the joint angle theta_2. So, if this is theta_1 and up to this is actually L_1 by 2, then how much is the arc? The arc is nothing but L_1 by 2 multiplied by theta_1, and the rate of change of this with respect to time is nothing but v_1; that is nothing but the linear velocity at this particular point. So, d/dt of L_1 by 2 theta_1, and that is nothing but half L_1 theta_1 dot, because d theta_1/dt is nothing but theta_1 dot. So, this is the expression. And here, this particular omega_1 is the angular velocity, and that is nothing but theta_1 dot. Are you getting my point? And, the expression for this particular I_1 is nothing but one-twelfth m_1 L_1 square plus one-fourth m_1 r square; this I have already derived. Now, if I substitute here: m_1 is the mass, m_1 will remain the same; v_1, so this is the expression for v_1; then I_1, so this is the expression for I_1; and omega_1 is nothing but theta_1 dot. So, if I substitute all the terms here and if I simplify, then I will be getting this particular expression for the kinetic energy of the first link, that is, K_1 is nothing but one-sixth m_1 L_1 square theta_1 dot square plus one-eighth m_1 r square theta_1 dot square. So, this is nothing but the expression for the kinetic energy of the first link. Now, I am just going to derive the potential energy for the first link, and then I am going to derive the kinetic energy for the second link and the potential energy for the second link; and if I can find out the kinetic energy and the potential energy for both the links, I can find out what should be the Lagrangian for this particular robotic system.
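The simplification of K_1 can be verified with arbitrary numbers (example values are mine): starting from K_1 = 1/2 m_1 v_1^2 + 1/2 I_1 omega_1^2, with v_1 = (L_1/2) theta_1 dot, I_1 = (1/12) m_1 L_1^2 + (1/4) m_1 r^2 and omega_1 = theta_1 dot, the result should collapse to the compact form quoted above.

```python
def k1_from_definition(m1, L1, r, th1d):
    # K1 = 1/2 m1 v1^2 + 1/2 I1 w1^2, with I1 taken about the mass centre
    v1 = 0.5 * L1 * th1d                        # v1 = (L1/2) * theta1_dot
    i1 = m1 * L1**2 / 12.0 + m1 * r**2 / 4.0    # I1 about the mass centre
    return 0.5 * m1 * v1**2 + 0.5 * i1 * th1d**2

def k1_closed_form(m1, L1, r, th1d):
    # the simplified result: (1/6) m1 L1^2 th1d^2 + (1/8) m1 r^2 th1d^2
    return m1 * L1**2 * th1d**2 / 6.0 + m1 * r**2 * th1d**2 / 8.0

m1, L1, r, th1d = 1.2, 0.4, 0.03, 2.5   # arbitrary example values
```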
For example, say the Lagrangian for the robotic system L will be nothing but the kinetic energy of the first link plus the kinetic energy of the second link minus the potential energy of the first link minus the potential energy of the second link. So, till now I have derived only this particular K_1; so, I am just going to derive K_2, P_1, P_2. So, let us see how to derive this particular P_1, that is, the potential energy for the first link. Now, the potential energy for the first link, that is, P_1, is nothing but minus m_1 multiplied by minus g multiplied by the height; once again, g is acting vertically downward, opposite to the Y direction. And this particular height, if you see: if this is the first link, the total length was say L_1, this particular angle was theta_1, and up to this is L_1 by 2. So, L_1 by 2 sin theta_1 is actually this particular height. And, if you just simplify, you will be getting this as the expression for the potential energy, that is, P_1 equals m_1 g L_1 by 2 sin theta_1. Now, till now, we have derived the expression for the kinetic energy and the potential energy for the first link. Now, I am just going to derive the kinetic energy and potential energy for the second link. Now, once again, if you just draw this particular picture once again: I have got the first link here and I have got the second link here; this is X, this is Y, ok. So, the length of this particular link is L_1 and the length of that particular link is L_2, and its mass center is here, whose coordinate is nothing but x_2, y_2; and here m_2 g is acting and here m_1 g is acting. Now, here, this x_2, y_2 is the coordinate of the mass center for the second link.
Now, the general expression for the coordinates of the mass center for the second link: x_2 is nothing but L_1 cos theta_1 plus L_2 by 2 cos of (theta_1 plus theta_2), because with respect to the second link this particular angle is theta_2, but with respect to X it is theta_1 plus theta_2, ok. Then, y_2 is L_1 sin theta_1 plus L_2 by 2 sin of (theta_1 plus theta_2). Now, we can find out the time derivative, that is, x_2 dot, that is, d/dt of L_1 cos theta_1 plus L_2 by 2 cos of (theta_1 plus theta_2). So, I will be getting minus L_1 sin theta_1 theta_1 dot minus L_2 by 2 sin of (theta_1 plus theta_2) multiplied by (theta_1 dot plus theta_2 dot). So, I will be getting this particular expression. By following the same method: I know the general expression for y_2 is L_1 sin theta_1 plus L_2 by 2 sin of (theta_1 plus theta_2). So, y_2 dot is nothing but d/dt of this particular expression, and you will be getting L_1 cos theta_1 theta_1 dot plus L_2 by 2 cos of (theta_1 plus theta_2) multiplied by (theta_1 dot plus theta_2 dot). Now, v_2 square is nothing but x_2 dot square plus y_2 dot square. So, square of this plus square of this, and if you just add them up, then you will be getting the expression: L_1 square theta_1 dot square (the sin square theta_1 plus cos square theta_1 will give rise to 1), plus L_2 square by 4 into (theta_1 dot plus theta_2 dot) square, plus L_1 L_2 sin theta_1 sin of (theta_1 plus theta_2) theta_1 dot multiplied by (theta_1 dot plus theta_2 dot), plus L_1 L_2 cos theta_1 cos of (theta_1 plus theta_2) theta_1 dot multiplied by (theta_1 dot plus theta_2 dot). So, if you simplify further, I will be getting L_1 square theta_1 dot square.
So, then I am getting L_2 square by 4, and this is expanded: theta_1 dot square plus 2 theta_1 dot theta_2 dot plus theta_2 dot square. Now, here we will have to simplify. Now, to simplify this part, actually, what I can do is: we can take L_1 L_2 theta_1 dot into (theta_1 dot plus theta_2 dot) as common. So, if I just concentrate on this particular part, this will become cos of (theta_1 plus theta_2) cos theta_1 plus sin of (theta_1 plus theta_2) sin theta_1, that is, cos of (theta_1 plus theta_2 minus theta_1). So, this is nothing but cos theta_2. So, I will be getting this particular cos theta_2 term, ok. So, I am getting this particular expression for v_2 square, and once you have got the expression for v_2 square, very easily I can find out the kinetic energy for the second link: K_2, that is, half m_2 v_2 square plus half I_2 omega_2 square. So, this omega_2 will be nothing but theta_1 dot plus theta_2 dot, not only theta_2 dot; so, we will have to be careful. Now, we just write down the expression: v_2 square we have already derived and we just put it here; and I know the expression of I_2, that is, one-twelfth m_2 L_2 square plus one-fourth m_2 r square. So, this is the expression of half m_2 v_2 square, then comes this particular half I_2 omega_2 square; omega_2 square is nothing but (theta_1 dot plus theta_2 dot) square, and if you simplify, you will be getting this particular expression, ok; only thing, you need some practice to find out whether you are getting the same expression or not. So, you will be getting this particular expression for the kinetic energy of the second link. And, once I got the kinetic energy for the second link, now we are in a position to determine the expression for the potential energy for the second link.
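The v_2 square simplification can be checked with arbitrary numbers (a small sketch; the state values are mine): squaring and adding x_2 dot and y_2 dot should agree with the compact form L_1^2 th1d^2 + (L_2^2/4)(th1d + th2d)^2 + L_1 L_2 cos(theta_2) th1d (th1d + th2d).

```python
import math

def v2_sq_from_derivatives(L1, L2, th1, th2, th1d, th2d):
    # x2_dot and y2_dot as derived above, then v2^2 = x2_dot^2 + y2_dot^2
    x2d = -L1 * math.sin(th1) * th1d - 0.5 * L2 * math.sin(th1 + th2) * (th1d + th2d)
    y2d =  L1 * math.cos(th1) * th1d + 0.5 * L2 * math.cos(th1 + th2) * (th1d + th2d)
    return x2d**2 + y2d**2

def v2_sq_closed_form(L1, L2, th1, th2, th1d, th2d):
    # the simplified form after the cos(theta_2) step
    return (L1**2 * th1d**2
            + 0.25 * L2**2 * (th1d + th2d)**2
            + L1 * L2 * math.cos(th2) * th1d * (th1d + th2d))

args = (0.4, 0.3, 0.7, -0.5, 1.1, 0.6)  # L1, L2, theta1, theta2, th1_dot, th2_dot
```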
Now, for the second link, actually, once again if you draw this particular picture: this is the L_1 link and this is the other link. So, P_2 is nothing but minus m_2 multiplied by minus g multiplied by L_1 sin theta_1, because this particular angle is theta_1 and this is theta_2, ok; this length is L_1, this is L_2; and here, minus m_2 multiplied by minus g multiplied by L_2 by 2 sin of (theta_1 plus theta_2). So, taking these together, you will be getting the total height. So, this is the way, actually, you can find out the potential energy; so, this is the expression for the potential energy. Now, as I told, we have got the expression for the kinetic energy of the two links and the potential energy of the two links. Now, we are in a position to find out the expression for this particular Lagrangian. So, this Lagrangian L is nothing but K_1 plus K_2 minus P_1 minus P_2, that is, the kinetic energy of the first link plus the kinetic energy of the second link minus the potential energy of the first link minus the potential energy of the second link. So, whatever expressions you got, you just write down, and then you do a little bit of rearrangement of the theta_1 dot square terms. So, in this particular way, you try to arrange these particular terms, and this is the expression for this particular Lagrangian for the whole robotic system. And, once I have got this particular Lagrangian, now you know how to determine tau_1 and tau_2. So, using the same formula, tau_1 is d/dt of the partial derivative of the Lagrangian with respect to theta_1 dot minus the partial derivative of L with respect to this particular theta_1. So, we know the expression for this particular L, that is, the Lagrangian. So, once again, let us go back to the expression of the Lagrangian; and here, the derivatives which we will have to find out are nothing but the partial derivative of L with respect to theta_1 dot and, to determine tau_1, another partial derivative, that of L with respect to theta_1.
Now, you see, you concentrate on this particular expression. So, here, we have got one theta_1 dot square term, I have got another theta_1 dot square term, a theta_1 dot square term; and here I have got a theta_1 dot term, a theta_1 dot square term, a theta_1 dot term. So, whenever I am just going to find out this particular partial derivative with respect to theta_1 dot, starting from here up to this, we will have some contribution. Are you getting my point? But, these terms will not have any contribution. Similarly, whenever we are going to find out the partial derivative of L with respect to theta_1, now the theta_1 dot square term will have no contribution; here there will be no contribution, no contribution, no contribution, ok; and this is theta_2, but this is with respect to theta_1, so there will be no contribution here. Here also, no contribution; but the contribution will come from here and it will come from here in this particular partial derivative, ok; so that we will have to understand. Now, exactly the same thing whatever I told: the partial derivative of the Lagrangian with respect to theta_1, this is the thing which we will be getting; and this particular partial derivative with respect to theta_1 dot, these are the terms which you will be getting. And, once I have got it, now we are in a position to find out the time derivative of this. So, for d/dt of the partial derivative of L with respect to theta_1 dot: this particular previous term, it was theta_1 dot, now it will become theta_1 double dot. This was theta_1 dot plus theta_2 dot, so this will become theta_1 double dot plus theta_2 double dot. So, here it was theta_1 dot plus theta_2 dot, so this will become theta_1 double dot plus theta_2 double dot; and next, this particular term, once again, it will take the form like this, and this particular term has got two terms, here you can see.
So, this term we will be getting from here and this particular term we will be getting from here; but for this particular term, if you want to find out the time derivative, you will have to consider these terms and these terms separately. So, at one time you will have to consider this as constant, and you will have to find out the time derivative of the other. For example, if I concentrate only here, its time derivative will be something like this: half m_2 L_1 L_2 cos of theta_2 (2 theta_1 double dot plus theta_2 double dot) plus half m_2 L_1 L_2 (2 theta_1 dot plus theta_2 dot) multiplied by the time derivative of cos theta_2. Now, you will have to find out d/dt of cos theta_2; that is nothing but d/d theta_2 of cos theta_2, that is, minus sin theta_2, multiplied by d theta_2/dt, that is, theta_2 dot. So, this particular thing you will have to do. And, if you do that, then you will be getting this particular expression for the joint torque, that is, tau_1. And, once again, you can see that this is nothing but the inertia terms multiplied by theta_1 double dot, this is the inertia terms multiplied by theta_2 double dot, these are the centrifugal terms, and you will be getting some gravity terms, something like this. Now, let us try to find out the expression for the torque for the second joint. Now, here, this tau_2 is nothing but d/dt of the partial derivative of the Lagrangian with respect to theta_2 dot minus the partial derivative of L with respect to theta_2. Now, exactly in the same way, we will have to find out the partial derivative of L with respect to theta_2. Now, if I see the expression of this particular Lagrangian: we will have to find out, actually, the partial derivative of L with respect to theta_2 dot, this is one partial derivative, and another partial derivative with respect to theta_2. Now, if you see, with respect to theta_2 dot: here there is no such theta_2 dot.
So, here the theta_2 dot comes here, so it will have some contribution. Then, theta_2 dot: here it has got theta_2 dot, so it will have some contribution, ok. On the other hand, this theta_2: there is no theta_2 here, no here; cos theta_2, so it will have some contribution, it will have some contribution; this is theta_1, and here, we have got theta_2, so it will have some contribution. So, this is the way, actually, we will have to find out the partial derivative of this Lagrangian with respect to theta_2, ok. And, if you find out the partial derivative of this particular Lagrangian with respect to theta_2, we can find out the partial derivative of L with respect to theta_2; this is the expression which we will be getting. And, the partial derivative of L with respect to theta_2 dot: this is the expression which I will be getting. And, now, we will have to find out d/dt of this. And, if you find out the d/dt of this, then what will happen is, this theta_1 dot term will become theta_1 double dot. So, this will become theta_1 double dot, theta_2 double dot. And, here also, the dots will become double dots. And, then, if we just rearrange, you will be getting this particular expression. This is the expression which we will be getting for this. And, then, you substitute, and you will be getting the expression for the joint torque, that is, tau_2. And, this particular tau_2, if you see once again: this is nothing but the inertia term, this is also an inertia term, this is the centrifugal term, and these are the gravity terms. So, we have got this particular expression for tau_1 and tau_2 for the same problem using two different methods. And, if we compare, we are getting exactly the same expression that we got earlier.
So, the same expression I will be getting: if we compare this particular tau_2 for the same problem with the tau_2 I got a few minutes ago, exactly the same expression I got. The same is true for tau_1. The expression which I got using this particular method and the expression which I got a few minutes ago are exactly the same, ok. And, it shows that both the methods are correct, and we are getting exactly the same expressions for the joint torques: tau_1 and tau_2. Now, if I consider the slender link, exactly in the same way, what we can do is, this particular r square term we can neglect for the slender link. Similarly, from this particular tau_2, we can neglect these particular r square terms; here also, the r square terms will become 0, and you will be getting this particular final expression for tau_1 and tau_2 for the slender link. Now, we have got the expression for this particular joint torque. And, as I told, once I have got the expression, the next task will be how to implement it in the robotic joint, so that the motor can generate this particular joint torque, which will be discussed in the next class. But, before that, one thing I just want to mention. Now, today I discussed two approaches, two methods, to determine the joint torques. The first method is more structured, but the second method is very easy to implement; and if we just compare, for a manipulator having two or three degrees of freedom, we can go for the second approach. But, for a manipulator having, say, 5, 6, or more degrees of freedom, my recommendation would be the first approach. And, by using this, you can find out the joint torque; then we will see in future how to control these particular motors to generate that particular torque and to generate the joint angle as accurately as possible. Thank you.
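The final slide expressions are only pointed at in the lecture, so as a sanity check, here is a sketch of the standard closed-form slender-link (uniform rod) dynamics for a planar two-link arm, with the joint angles measured from the horizontal, in the form tau = M(q) q_dd + C(q, q_d) + G(q). The function name, argument order, and the gravity default are my own choices, not the lecture's notation:

```python
import math

def two_link_torques(th1, th2, dth1, dth2, ddth1, ddth2,
                     m1, m2, L1, L2, g=9.81):
    """Joint torques (tau_1, tau_2) for a planar two-link arm with
    uniform slender links: tau = M(q) q_dd + C(q, q_d) + G(q)."""
    c2, s2 = math.cos(th2), math.sin(th2)
    # Inertia (mass) matrix -- symmetric, as the Lagrangian method guarantees.
    M11 = m1 * L1**2 / 3 + m2 * (L1**2 + L2**2 / 3 + L1 * L2 * c2)
    M12 = m2 * (L2**2 / 3 + L1 * L2 * c2 / 2)
    M22 = m2 * L2**2 / 3
    # Centrifugal / Coriolis terms.
    C1 = -m2 * L1 * L2 * s2 * (dth1 * dth2 + dth2**2 / 2)
    C2 = m2 * L1 * L2 * s2 * dth1**2 / 2
    # Gravity terms (theta measured from the horizontal).
    G1 = (m1 / 2 + m2) * g * L1 * math.cos(th1) \
         + m2 * g * L2 / 2 * math.cos(th1 + th2)
    G2 = m2 * g * L2 / 2 * math.cos(th1 + th2)
    tau1 = M11 * ddth1 + M12 * ddth2 + C1 + G1
    tau2 = M12 * ddth1 + M22 * ddth2 + C2 + G2
    return tau1, tau2
```

One quick consistency check: with the arm held static and straight up (th1 = pi/2, th2 = 0), all gravity moments vanish, so both torques should be zero.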
Robotics by Prof. D. K. Pratihar
Lecture 12: Robots Kinematics
Now, I am going to discuss why we put three zeros here and one 1 here. Now, to discuss why we put three zeros and one 1 here, I will have to see the definition of the inverse of a matrix, and how we calculate the inverse of a matrix using a computer program. Now, let me take a very simple example. Supposing that I have got a matrix, say, this particular T matrix, and this T matrix is having the dimension 4 cross 4, and this is the transformation matrix. I will have to find out its inverse; that means, this transformation matrix has to be invertible, that is, non-singular. Now, by definition, the inverse of this particular matrix is nothing but the adjoint of T divided by the determinant of T, and we know how to find out the adjoint. The adjoint is nothing but the transpose of the matrix of co-factors. And, we will have to find out the determinant; this is, by definition, how to find out the inverse of this matrix. Now, if you write down one computer program, in the denominator there will be the determinant of this particular T matrix. Now, the determinant of the T matrix may also become equal to 0, sometimes. Now, if the determinant of T becomes equal to 0, this will become equal to the adjoint of T divided by 0. So, something divided by 0: this is going to generate one NaN, not a number. So, your computer program is going to give a NaN, and that is why, to determine the inverse of a particular matrix, this particular definition we generally do not use in the computer program; instead, we follow another method. Supposing that I will have to find out the inverse of a particular matrix, say T, and T is nothing but a 4 cross 4 matrix. So, what we do is: side by side, we just write down one identity matrix I, and that is also a 4 cross 4 matrix, that is, 1 0 0 0, 0 1 0 0, 0 0 1 0, 0 0 0 1, something like this, ok? Now, we take the help of some elementary row operations.
Now, our aim is to convert this particular T into one identity matrix, ok; we take the help of elementary row operations to convert this particular T matrix to the identity matrix, and the same set of operations you will have to carry out here on the identity matrix. So, at the end of this particular operation, I will be getting one identity matrix in place of T and the inverse of T in place of this particular identity matrix (I). So, whatever matrix I will be getting here, that will be the inverse of this particular T matrix. So, this is the way, actually, we try to find out the inverse of a matrix in a computer program. So, this particular T matrix actually has to be invertible, and we will be in an advantageous position if we can keep the fourth row to at least contain 0 0 0 1, because the fourth row of a 4 cross 4 identity matrix is exactly this. So, very purposefully, I am generating this particular fourth row, so that this particular T becomes invertible. So, I am just going to help that particular transformation matrix to become invertible just by putting 0 0 0 1 here; that is the reason why we put 0 0 0 and 1 there. There is another reason behind that: this particular 0 0 0 is known as the perspective transformation, if you see the literature, and this particular 1 is known as the scaling factor. So, why do we call it a scaling factor? We call it a scaling factor because in place of 1, I can write down 5, I can write down 6, and so on. There is no problem if I write 5 here; I can take 5 out of the matrix and make it 1, and that is why this is known as the scaling factor. I hope I am clear why we put this particular fourth row as 0 0 0 1, and from now onwards, we are going to consider that the transformation matrix is a 4 cross 4 matrix, and this is known as the homogeneous transformation matrix.
So, this is known as the homogeneous transformation matrix, and we will have to determine, to find out, its inverse also, because if I just assign one matrix here, and if I assign another matrix here, my aim is to represent this with respect to the previous one and the reverse; that means, my aim is to represent this with respect to that. So, this particular thing is possible if and only if you have that particular invertible transformation matrix, or that non-singular transformation matrix, and that is why this particular transformation matrix has to be invertible. Now, once again, if I concentrate on this 4 cross 4 homogeneous transformation matrix: I have got a 3 cross 3 rotation matrix and I have got a position vector. So, this is nothing but 3 cross 1 in matrix form, and I have got 0 0 0 1. Now, if I take one typical example of one transformation matrix, it will look like this. For example, this 3 cross 3 matrix is going to carry the information of this particular rotation, and this is going to carry the information of the position, and the fourth row is 0 0 0 1. So, this is nothing but a 4 cross 4 homogeneous transformation matrix carrying this particular rotation matrix and position vector. So, this is the way, actually, we represent the transformation matrix. Now, I am just going to concentrate on the translation operator, the properties of the translation operator. The translation operator, in short, is written as Trans X comma q; that means, along the X direction the translation is by q units only, and here, the rotation matrix is nothing but the identity matrix. You can see that this is nothing but the 3 cross 3 identity matrix, like 1 0 0, 0 1 0, 0 0 1, and along this X direction, I have got the position information, that is, q; along Y it is 0, along Z it is 0, and as usual, we have got 0 0 0 1. So, this is nothing but Trans X comma q.
So, this is the way we can write down this 4 cross 4 matrix. Now, here, I have put one note, the Trans operator is commutative in nature, that means, it does not depend on the sequence. For example, say I can write down Trans X comma q_x; that means, along X there is a translation by q_x amount, along Y there is a translation by q_y amount. So, Trans X comma q_x,Trans Y comma q_y is nothing but Trans Y comma q_y Trans X comma q_x. So, it does not depend on the sequence and they are commutative in nature. So, Trans operators are commutative in nature. Now, another thing, I just want to tell you in some of the literature, you will find one notation, that notation is something like this, Trans a comma b comma c. In some of the literature, we can find this type of notation. Now, it means that translation along X by a units, along Y by b units and along this particular Z by c units, and this is equivalent to your Trans X comma a, Trans Y comma b, Trans Z comma c. So, they are equivalent. Now, I am just going to start with the rotation operator; that means how to determine that 3 cross 3 rotation matrix. Now, here, I am just going to derive something, that is nothing but the 3 cross 3 matrix corresponding to the rotation about Z by theta. So, I am just going to find out the rotation about Z by theta. Now, here, so this particular capital X, capital Y and capital Z represents the main coordinate system or the universal coordinate system, and small x then comes small y and small z represent the rotated coordinate system and here, the rotation is about Z by an angle theta. So, here you can see that I am rotating about Z by angle theta in the anticlockwise sense. Now, initially this capital X, capital Y and capital Z and small x small y and small z, they were coinciding. 
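The Trans operator and its commutativity can be checked numerically. A minimal sketch (the function names are mine, not the lecture's):

```python
def trans(axis, q):
    """Homogeneous translation operator: identity rotation, plus q units
    of translation along the given axis ('x', 'y' or 'z')."""
    T = [[1.0, 0, 0, 0], [0, 1.0, 0, 0], [0, 0, 1.0, 0], [0, 0, 0, 1.0]]
    T["xyz".index(axis)][3] = float(q)
    return T

def matmul(A, B):
    """Product of two 4x4 matrices (composition of operators)."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]
```

Composing Trans(X, 2) with Trans(Y, 3) in either order gives the same matrix, which is exactly the commutativity claimed above, and the result is the combined operator Trans(2, 3, 0).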
Now, if I take the rotation about capital Z by an angle theta in the anticlockwise sense, my small z will remain the same as capital Z, but small x will be different from capital X, and this particular rotation will be by the angle theta; and small y will be different from capital Y, and this particular rotation will be theta; moreover, this particular angle will also become theta, ok? Now, let us try to concentrate on the main coordinate system or the universal coordinate system first. Supposing that I have got a point Q here in the main coordinate system, and its coordinates are (q_X, q_Y). So, this is the coordinate; and what about Z? Z is, here, actually perpendicular to the board, and as if I am considering this on 2D, but Z is perpendicular to the board. That is why, very purposefully, I have not written any Z value here on this 2D plane, the x-y plane. In fact, Z is put equal to 0. Now, with respect to the main coordinate system, if the coordinates are q_capital X, q_capital Y, I can find out this OB; OB is nothing but q_capital X, and this BQ, which is equal to OE, is nothing but q_capital Y, ok? Now, I just concentrate on the rotated frame, that is, small x, small y and small z; the same point Q, its coordinates in the rotated frame are denoted by q_small x, q_small y. That means, if I just draw this particular perpendicular here, ok, this particular OD will be q_small x. So, OD is q_small x, and this particular DQ is equal to q_small y, ok? Now, if I know this particular DQ: DQ is how much? So, this particular DQ is nothing but q_y, and this angle is theta. So, I can find out that this particular AQ is q_y cos theta, and similarly, this AD, which is equal to BC, is nothing but q_y sine theta. Similarly, this OD is nothing but q_x, and this angle is theta. So, its cosine component, that is, OC, will be q_x cos theta, and similarly, this particular CD is nothing but q_x sine theta, ok?
So, these things I have written here, like DC; DC is nothing but q_x sine theta; then AQ is nothing but q_y cos theta; then OC is nothing but q_x cos theta; and AD, which equals BC, is nothing but q_y sine theta. So, all such things we can find out very easily. Now, I am just going to write down this particular q_capital X. This q_capital X is how much? So, up to this is q_capital X, that is, OB; OB is nothing but q_capital X, and that is nothing but q_x cos theta, that is, from here to here, minus q_y sine theta; and here, the Z component is 0, because this is on the 2D plane. So, I am just writing here q_z multiplied by 0. The next is q_capital Y. So, what is q_capital Y? q_capital Y is nothing but this BQ, and this BQ is how much? That is, q_x sine theta, that is, up to this, plus q_y cos theta, and after that, I am adding q_z multiplied by 0. And, for this particular q_capital Z, because here Z is perpendicular to the board, the x component will be multiplied by 0, the y component is multiplied by 0, and z is multiplied by 1, ok? So, I am just trying to find out the relationship between the original coordinate system and this particular rotated frame. That means, I am trying to find out the expression for the rotation about Z by an angle theta in the anticlockwise sense. So, this particular 3 cross 3 matrix I am just going to find out. Now, this can be written in matrix form: q_capital X, q_capital Y, q_capital Z is nothing but cos theta, minus sine theta, 0; sine theta, cos theta, 0; 0, 0, 1; multiplied by q_small x, q_small y, q_small z. If you just see the previous one, from there I can write down this particular thing. For example, let me write it here once again: this q_X, q_Y, q_Z can be written as cos theta, minus sine theta, 0; then sine theta, cos theta, 0; then 0, 0, 1; and this is multiplied by q_small x, q_small y, q_small z, something like this, ok?
I am sorry, here there will be 1, so we have got 1 here. So, this is the way, actually, we can write down this particular rotation term. So, this is actually this rotation term, that is, the rotation about Z by an angle theta. So, rotation about Z by an angle theta is cos theta, minus sine theta, 0; sine theta cos theta 0; 0, 0, 1. So, this is nothing but the rotation about Z by an angle theta. Now, by following the similar procedure, the similar procedure in fact, I can find out the rotation about X by an angle theta, that is nothing but 1, 0, 0; 0, cos theta, minus sine theta; 0, sine theta, cos theta. Similarly, I can also find out rotation about Y by an angle theta is cos theta, 0, sine theta; 0, 1, 0; minus sine theta, 0, cos theta. So, using these, I can find out rotation about X by an angle theta, rotation about Y by an angle theta. Thank you.
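The three rotation matrices above can be written down and checked directly. A small sketch, where `apply` maps rotated-frame coordinates to the main frame exactly as in the derivation (q_X = q_x cos theta minus q_y sine theta, and so on); the function names are mine:

```python
import math

def rot_z(t):
    """Rotation about Z by angle t (anticlockwise)."""
    c, s = math.cos(t), math.sin(t)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def rot_x(t):
    """Rotation about X by angle t."""
    c, s = math.cos(t), math.sin(t)
    return [[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]]

def rot_y(t):
    """Rotation about Y by angle t."""
    c, s = math.cos(t), math.sin(t)
    return [[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]]

def apply(R, q):
    """Map coordinates q in the rotated frame to the main frame."""
    return [sum(R[i][j] * q[j] for j in range(3)) for i in range(3)]
```

For instance, a point at (1, 0, 0) in a frame rotated 90 degrees about Z sits at (0, 1, 0) in the main frame, and the Z coordinate is untouched, as the derivation predicts.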
Robotics by Prof. D. K. Pratihar
Lecture 39: Robot Motion Planning (Contd.)
Now, we are going to discuss how to determine the collision-free path for the robot in the presence of some moving obstacles. Now here the obstacles are moving, the robot is moving, the obstacles are also moving; how to find out, how to ensure the collision-free path? Now, if you see the literature the first approach which was proposed in the year 1984 by Kant and Zucker is the most popular approach, which is known as the path velocity decomposition. Now, let us see, how to use the concept of this path velocity decomposition to solve the navigation problem of a mobile robot in the presence of some moving obstacles. Now, this particular problem is popularly known as the dynamic motion planning problem. So, this is known as the dynamic motion planning problem; dynamic motion planning problem. Now, the problem is as follows: supposing that I have got one field, say this is the field, and I have got a point robot at position S. So, this is the starting position for the robot and the goal could be here that is denoted by G now, here. So, the robot we will have to start from S and it will have to reach G and it will have to find out some sort of optimal path. Now, if there is no obstacle, then starting from S, directly it will reach that particular point G. But, supposing that there are some moving obstacles; so, this is obstacle 1, O_1 and this is moving in this particular direction with some speed. I have got another obstacle here, say this is O_2 and this is moving in this particular direction with some speed. I have got say another obstacle; so, this is the direction of movement with some speed say O_3. Then, how to ensure or how to find out the collision-free path and the time-optimal path for this particular robot? So, this is actually the problem and let me consider one more obstacle here and suppose this is moving in this particular direction with some speed. So, this type of problem is known as the dynamic motion planning problem. 
Now, this is a dynamic motion planning problem; that means, it is varying with time, ok? Now, this particular dynamic motion planning problem can be converted to a find-path problem at time t equals to t_1. So, at a particular instant, at time t equals to t_1, this will become a find-path problem. So, at time t equals to t_1, I know the predicted positions of these particular obstacles, and I will be getting this type of problem, which is the following: so, this is the field, and I have got the starting point here, the goal is here, and this is one obstacle, this is another obstacle, this is another obstacle, this is another obstacle. So, these are all obstacles; so, at time t equals to t_1, this becomes a find-path problem. So, for simplicity, this particular dynamic motion planning problem is converted into a find-path problem at time t equals to t_1, and actually they solved this particular find-path problem. Now, supposing that for this particular problem, it has got a collision-free path something like this. So, this could be one of the possible collision-free paths for this find-path problem, ok. Now, once it has got this particular feasible path, it will have to do something to ensure the collision-free movement, because, truly speaking, these particular obstacles are moving, ok; so, here, actually, in this particular method, what we do is, we try to solve this dynamic motion planning problem using two sub-problems. So, this dynamic motion planning problem is actually considered as a combination of two sub-problems: one is called the path planning problem and another is called the velocity planning problem. So, in the path planning problem, we consider that at time t equals to t_1, this is nothing but a find-path problem; that means, the robot will have to find out a collision-free path in the presence of some static obstacles.
So, considering the obstacles to be stationary, it will try to find out a collision-free path; and next, we just go for the velocity planning; that means, the robot is going to follow the path obtained through the path planning, that is, in the first stage. Now, the velocity of this particular robot has to be adjusted, so that it does not collide with the moving obstacles. So, once again, let me repeat: the dynamic motion planning problem is converted into two sub-problems; one is called the path planning problem, that is, PPP, and another is the velocity planning problem, that is, VPP, ok. So, we first plan a collision-free path considering the obstacles as stationary, and the robot will try to follow that predetermined path by adjusting its velocity, so that it does not collide with the moving obstacles. So, this is the way, actually, they implemented the path velocity decomposition method, and it became very popular. This particular approach became very popular, but this method has got a few drawbacks; for example, if there are many moving obstacles, the robot may not be able to find a feasible path by following this path velocity decomposition method. Now, here, as I told, in the second stage, we will have to plan the velocity of the robot. So, the velocity of the robot is going to vary with time, and there could be a sudden change of the velocity of the robot; and consequently, there could be some sort of jerky movement of the robot, which is not desirable. So, these are actually the drawbacks of this particular path velocity decomposition. Now, if you see the literature, there are some other algorithms which have been proposed to solve the dynamic motion planning problem, and out of those methods, the accessibility graph also gained some popularity. Now, let us try to explain the principle of this particular accessibility graph.
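The two-stage idea can be illustrated with a toy grid example, where the path planning stage is assumed already done (the path is given as a list of cells) and the velocity planning stage only decides, at each time step, whether to advance to the next cell or to wait. This is a deliberately simplified sketch of the decomposition, not Kant and Zucker's actual formulation (it does not, for example, check the cell the robot is waiting in):

```python
def velocity_plan(path, obstacle_traj, horizon=50):
    """Velocity planning stage of path-velocity decomposition on a grid:
    the geometric path is fixed; at each time step the robot either
    advances along it or waits, so it never steps into the cell the
    moving obstacle occupies at that instant."""
    schedule = [path[0]]          # robot position at each time step
    i = 0                         # index of the current cell on the path
    for t in range(1, horizon):
        nxt = path[min(i + 1, len(path) - 1)]
        # Obstacle cell at time t (it holds its last cell after the end).
        obs = obstacle_traj[min(t, len(obstacle_traj) - 1)]
        if nxt != obs and i + 1 < len(path):
            i += 1                # advance along the pre-planned path
        schedule.append(path[i])  # otherwise: wait in place
        if i == len(path) - 1:
            break                 # goal reached
    return schedule
```

Note how the robot waits one step while the obstacle crosses its path, which is exactly the "adjust the velocity, keep the path" behaviour described above; it also shows the jerky stop-and-go motion mentioned as a drawback.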
The accessibility graph, this concept was proposed by Fujimura and Samet in the year 1988. Now, here actually, what they do is, this is the modified version of the visibility graph, which you have already discussed and which was proposed by Nilsson in 1969. Now, if I just consider a dynamic motion planning problem, as I discussed that at time t equals to t_1. So, this particular dynamic motion planning problem will become a find-path problem; that means, that is the path planning problem for a robot in the presence of static obstacle. Now, if it is a find-path problem; so, I can use the concept of the visibility graph to find out a path, to find out a collision-free path, which I have already discussed. Now, at time t equals to t_1, we consider that this is a find path problem. So, we will be getting some visibility graph collision-free path; next at time t equals to t_2. So, once again I will be getting another scenario; so, another find path problem I will be getting. So, I will be getting another visibility graph; so, with time, I will be getting a number of visibility graphs, ok. Now, these particular visibility graphs will go on changing with time. So, this accessibility graph is nothing, but the modified version of this particular the visibility graph. As if we have added one more dimension, that is time to the visibility graph and this particular visibility graph is going to vary with time. And, that is nothing, but the concept of the accessibility graph, but the main drawback of this particular accessibility graph is the computational complexity. So, it is computationally very complex and it cannot be implemented online. So, what is our aim? Our aim is to determine one collision-free path, time-optimal path, but at less computational time, so that we can implement this particular motion almost online. So, this particular accessibility graph, as I told is computationally very expensive and which may not be suitable for online planning. 
Now, another concept, that is, the concept of incremental planning, was proposed by Slack and Miller in the year 1987. Now, this concept is very simple. The problem is the dynamic motion planning problem; that means, the obstacles are moving and the robot is also moving. Now, the robot will have to find out the collision-free, time-optimal path; so, this is the dynamic motion planning problem. Now, how to solve this particular problem? Now, what I do is, once again, at time t equals to t_1, we consider that this particular dynamic motion planning problem is nothing but a find-path problem. So, if it is a find-path problem, very easily, actually, we can find out what should be the collision-free path. So, we plan a collision-free path based on the predicted positions of the obstacles, ok. Next, we follow that particular path until it becomes invalid. The moment it is found to be invalid, what we do is, we replan: we try to find out another collision-free path, and we follow that particular collision-free path till it becomes invalid; and the moment it becomes invalid, once again we replan, and this particular process will continue till the robot reaches the destination. So, this is the way, actually, we can implement the incremental planning. Now, incremental planning actually did not receive much attention of the researchers; it is due to the fact that several times we will have to find out the collision-free path, we will have to replan, and that is actually not very interesting. So, it did not receive much attention of the researchers working in the field of robot motion planning. Now, then came another concept, that is, the concept of the relative velocity scheme. Now, supposing that the robot is moving and the obstacles are also moving; so, we can find out the relative velocity of this particular robot with respect to the moving obstacle.
Now, let me just draw it here a little bit, say if this is the field; so, the robot is here, the robot is moving and obstacles are also moving and say, this is the goal. So, what you do is, as both the robots and the obstacles are moving, we can consider the relative velocity of this particular robot with respect to the different obstacles. The moment we consider the relative velocity of this particular robot with respect to obstacle O_1, we consider, as if obstacle O_1 is stationary and we try to find out the relative velocity. Now, this is the concept of relative velocity. So, like your two bodies are moving with different velocities; so we try to find out the relative velocity of body 1 with respect to body 2; as if we consider the body 2 is kept stationary and we try to find out the relative velocity. Now, the same principle is copied here, both the robot and the obstacles are moving. So, we try to find out the relative velocity of the robot with respect to O_1, then relative velocity of the robot with respect to O_2, relative velocity of the robot with respect to another obstacle O_3, and so on. So, for the same robot, there will be different relative velocities with respect to the different obstacles. And, its implementation actually becomes a little bit difficult and by following that, so we could convert the dynamic motion planning problem into several static problems. And, once you have got that particular matrix of the relative velocity of a robot with respect to the different obstacles; so, we try to implement just to find out the collision-free path. So, the set of velocity vectors, we try to find out, so that the robot avoids collision with all the moving obstacles. So, this is the way actually, we can implement this particular the relative velocity scheme, just to find out the collision-free path, the collision-free the time optimal path. Then, came actually the concept of the potential field approach and that was proposed by Khatib in the year 1986. 
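For each obstacle, the relative velocity idea reduces to a vector subtraction plus a check of whether, in the obstacle's frame, the robot is actually closing in on that (now stationary) obstacle. A minimal 2D sketch; the dot-product test here is a simplification of the full collision-cone check, and the function names are mine:

```python
def relative_velocity(v_robot, v_obstacle):
    """Velocity of the robot as seen from the obstacle, i.e. with the
    obstacle treated as stationary."""
    return (v_robot[0] - v_obstacle[0], v_robot[1] - v_obstacle[1])

def is_closing(p_robot, p_obstacle, v_rel):
    """True if, in the obstacle's frame, the robot moves towards the
    obstacle (positive closing speed along the line joining them)."""
    dx = p_obstacle[0] - p_robot[0]
    dy = p_obstacle[1] - p_robot[1]
    return v_rel[0] * dx + v_rel[1] * dy > 0.0
```

Repeating this for O_1, O_2, O_3, and so on gives one relative velocity per obstacle, from which the set of safe velocity vectors can then be selected, as described above.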
And, out of all the traditional motion planning algorithms, this particular algorithm gained the maximum popularity. Now, here, actually, in this potential field method, the robot will move under the combined action of attractive and repulsive potentials, or attractive and repulsive forces. Now, let me consider that this is the goal for the robot and this is the present position of the robot at time, say, t equals to t_1. So, this is nothing but the distance between the robot and the goal. The goal is going to attract that particular robot towards it, and there will be some attractive potential, that is, U_attractive, or there could be some attractive force. Now, this is nothing but the attractive potential or the attractive force with which this goal is going to attract the robot towards it. Similarly, there could be a repulsive potential or a repulsive force between the robot and this particular obstacle. And, due to this repulsive force, actually, the obstacle is going to repel that particular robot. So, here, there will be some attraction, but here, there will be some repulsion. Now, this robot will be under the combined action of this particular attraction and this repulsion, ok. Now, before I go for how to find out the resultant of these attractive and repulsive forces, let me tell you how to find out the attractive force from this particular attractive potential. It is very simple: this attractive force, that is, F_attractive, is nothing but the derivative of this particular attractive potential with respect to the distance; so this is d_goal-R, that is, the distance between the goal and the robot. So, this is your d_goal-R, that is, the distance between the robot and the goal.
So, what we do is, we try to find out the derivative of this particular the attractive potential with respect to d_goal-R; that particular distance; so we will be getting the attractive force. Similarly, from this repulsive force, if you want to find out this repulsive force, that is, F_repulsive is nothing, but the derivative of repulsive potential U_rep with respect to the distance between the robot and this particular obstacle. So, this is the distance between the obstacle and the robot. So, we try to find out the derivative with respect to your d_R-O, and that is nothing but the repulsive force. Now, as I told that the robot is subjected to both attractive as well as repulsive force. Now, this I am just going to draw it here; so, here I have got the attractive force in this particular direction. So, I am just drawing it here; so this is nothing, but the attractive force and here, there will be a repulsive force. So, here, I am just going to draw the repulsive force, ok; so, I have got the attractive force, I have got the repulsive force. So, very easily, I can find out what should be the resultant force and this resultant force is denoted by F_res; so, I can find out so this particular the resultant force. And, once you have got this particular the resultant force; so, what do you do is; the robot actually as I told that it is moving under the combined action of attractive and repulsive forces, now it will try to follow this particular the resultant force. That means, the speed of the robot or the acceleration of the robot will be directly proportional to the magnitude of this particular the resultant force. And, moreover, this is the angle of the resultant force with respect to the attractive; supposing that this is angle theta. And, this particular angle theta with respect to the attractive; so, this will be the angle of deviation for this particular the robot. 
So, the robot is subjected to the attractive force and the repulsive force, and due to this attractive and repulsive force, it will try to move with a speed which is proportional to the magnitude of this particular resultant force, and its angle of deviation will be this particular angle, that is, theta; that is, the angle between the resultant force and the attractive force, or we can find out this particular angle, that is, the angle between the resultant force and this particular repulsive force. So, we need these two pieces of information for the movement of the robot: one is the speed, another is the angle of deviation. So, this is the way, actually, we are determining the speed and the angle of deviation for this particular robot. Now, here, whatever I mentioned, the same thing I have written here. So, what I am going to do is, I am just going to assign some mathematical expressions to the attractive potential and the repulsive potential, and try to see how to derive the attractive force and the repulsive force from them. Now, as I told, this is nothing but the position of the robot, and this is the goal, and this is nothing but the distance, that is, d_goal-R. Now, this U_attractive, here, I have considered as half zeta (d_goal-R) square, if it is considered to be parabolic, that is, a second order curve. And, if I consider that U_attractive is nothing but zeta d_goal-R, so this is nothing but a straight line. Now, if I just draw here this U_attractive as a function of d_goal-R, I will be getting these types of plots for the attractive potential; if I consider this type of expression, this is nothing but the variation of U_attractive with d_goal-R. And, as I told, how to find out this particular F_attractive: this F_attractive will be nothing but the derivative with respect to d_goal-R of the U_attractive potential, and here, there is a square.
So, the 2 will come down, and F_attractive becomes zeta multiplied by d_goal-R; this is the attractive force for the parabolic potential. Similarly, if I consider the conic potential, then F_attractive is nothing but a constant, that is, zeta. So, by differentiating, we can find out the attractive force; we can use different functions for the attractive potential and, accordingly, find the corresponding expression for the attractive force. Next, we concentrate on the repulsive force. The repulsive potential is defined as follows: here is the obstacle, the robot is at some distance from it, and that distance is denoted by P(R). Surrounding the obstacle, we define a circle; on the boundary of this circle and beyond it, the repulsive potential is equal to 0, but inside the circle there is some repulsive potential. This repulsive potential is maximum when the robot comes very close to the obstacle: when the robot is very close, the repulsive potential is large, and as the robot reaches the boundary of the circle, the repulsive potential tends to 0; outside the circle, it is equal to 0. The same thing is plotted here: when P(R) is small, the repulsive potential is very large, and it becomes equal to 0 when P(R) becomes equal to P_0.
So, when P(R) becomes equal to P_0, which is the radius of that circle, the repulsive potential becomes equal to 0. The mathematical expression is: U_repulsive = (1/2) eta (1/P(R) - 1/P_0)^2, if P(R) is less than P_0, that is, if the robot lies inside the circle of radius P_0 drawn around the obstacle; otherwise, the repulsive potential is equal to 0. Once we have the repulsive potential, by differentiating it with respect to P(R), the distance between the obstacle and the robot, we can find the repulsive force: F_repulsive = d/dP(R) of U_repulsive. So, this is the way we define the attractive and repulsive potentials. Now, for the attractive potential, there is another point to be considered: as I told, F_attractive = zeta d_goal-R, where zeta has some fixed numerical value. If d_goal-R is small, that is, the robot has come very near to the goal, this attractive force is going to be reduced. This is done very purposefully; otherwise, the robot would not be able to stop at the goal with 0 velocity. When d_goal-R is large, the attractive force is large.
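As a concrete sketch of the two definitions above, the parabolic attractive potential and the repulsive potential can be coded as follows; the gain values zeta, eta and the influence radius p0 are made-up illustrative numbers, not values from the lecture.

```python
# Sketch of the attractive/repulsive force magnitudes described above.
# zeta, eta and p0 are assumed gain values chosen only for illustration.

def attractive_force(d_goal_r, zeta=2.0):
    """Parabolic potential U = 0.5*zeta*d^2  ->  F = zeta*d."""
    return zeta * d_goal_r

def repulsive_force(p, eta=1.0, p0=5.0):
    """U_rep = 0.5*eta*(1/P(R) - 1/P_0)^2 for P(R) < P_0, else 0.
    Magnitude of the force: |dU/dP| = eta*(1/P - 1/P_0)/P^2."""
    if p >= p0:
        return 0.0
    return eta * (1.0 / p - 1.0 / p0) / p**2
```

Note that the repulsive force blows up as the robot approaches the obstacle (P(R) -> 0) and vanishes smoothly at the influence boundary P_0, exactly as the plot in the lecture shows.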
So, when the robot is far from the goal, the attractive force is large, but when it comes very close to the goal, the attractive force is reduced so that the robot can stop at the goal with 0 velocity; this has been done very purposefully. This is the way we determine the attractive and repulsive potentials and the corresponding forces. Now, as I told, out of all the traditional tools for motion planning, this potential field method is the most popular one, but it has a few drawbacks. For example, the solution depends on the chosen potential function, which is called the artificial potential function; depending on its nature, the robot will find a particular collision-free path. There is another very big problem we are going to face, called the local minima problem. This is a typical scenario, and it happens for a concave obstacle. Suppose this is a concave obstacle (a very hypothetical situation), this is the goal for the robot, and, fortunately or unfortunately, this is the present position of the robot. Now, the goal will try to attract the robot, so there will be an attractive force in that direction, but the boundary of the concave obstacle is going to exert repulsive forces on the robot from several directions, and all such forces pass through a particular point.
Now, all of us know how to find the resultant of such a set of forces; graphically, we can determine the resultant repulsive force F_repulsive, and we also have the attractive force F_attractive. Fortunately or unfortunately, if F_attractive becomes equal and opposite to F_repulsive, what will happen? The robot will become stationary there; there will be no movement, and the robot will not be able to reach the goal. This is the type of problem we may face in the potential field method. There are some other disadvantages. Suppose I have a narrow corridor and I want to find a collision-free path for a robot moving through it, with one wall on each side. There will be a repulsive force from each wall, and consequently the robot will have some sort of oscillatory movement, which is not desirable. Moreover, if there are a large number of obstacles, there is a possibility that the method may not be able to find a time-optimal, collision-free path. Now, there is another very popular scheme, called the reactive control strategy, which I will be discussing later on. Thank you.
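The resultant-force rule and the local-minimum failure described above can be sketched numerically; the force vectors below are made-up 2D examples, not values from the lecture.

```python
import math

# Sketch: the robot's commanded speed is proportional to the magnitude of
# the resultant force, and its heading is the resultant's direction.
def resultant(f_att, f_rep):
    """f_att, f_rep are (x, y) force vectors; returns (magnitude, angle)."""
    fx = f_att[0] + f_rep[0]
    fy = f_att[1] + f_rep[1]
    return math.hypot(fx, fy), math.atan2(fy, fx)

# Normal case: the forces do not cancel, so the robot keeps moving.
mag, theta = resultant((3.0, 0.0), (-1.0, 1.0))

# Local-minimum case (concave obstacle): the attractive force is equal and
# opposite to the net repulsive force, the resultant is zero, and the robot
# stalls before reaching the goal.
mag0, _ = resultant((3.0, 0.0), (-3.0, 0.0))
```

A zero (or near-zero) resultant away from the goal is exactly the local-minimum trap: the update law gives zero speed, so the robot never escapes without some extra strategy.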
Robotics by Prof. D. K. Pratihar — Lecture 09: Introduction to Robots and Robotics (Contd.)
Now, I am going to write down the VAL program to solve the task, that is, a pick-and-place type of operation. I have already defined the position and orientation of the 3D object, and its name is PART. This PART lies in bin number 1, and the task of the robot is as follows: the end-effector or gripper will come to bin 1, grip the object, carry it to bin 2 and place it there, and for that I am going to write this VAL program. The first command is APPRO PART, 100. PART is the name of the 3D object, whose position and orientation are defined, so the end-effector will come to a position 100 millimeters above the PART in the z direction and stop there. The next command is MOVES PART; "S" means straight path, so from here it will move to the PART along a straight path and reach that item. Then CLOSEI: with the help of the fingers, the end-effector grips the object, and "I" indicates a short delay. Now the end-effector has gripped the object named PART. The next command is DEPARTS 200; "S" again stands for straight, so it departs by 200 millimeters along the z direction (by default, this movement is along z). Then APPROS BIN, 300; "S" means straight path, so from here it approaches, along a straight line, a point that is 300 millimeters above the BIN, that is, bin 2.
So, now I am at the top of that BIN; then MOVE BIN, so the end-effector moves to the BIN. Once it has moved there, the next command is OPENI; that means you un-grip, that is, release the object, and "I" indicates a short delay. Once the object has been released, the end-effector departs in the z direction by 100 millimeters, and that is DEPART 100. That completes the VAL program to solve this pick-and-place task. So, this is the way we can write the program to teach the robot. Now, I am going to mention some other VAL commands that are also used along with these. For example, we can mention the speed of movement: if I write SPEED 40, it means the speed will be 40 percent of the rated (maximum) speed of the motor at each joint. EXECUTE is the command to run the program; ABORT is used if you want to stop the running robot. If we write EDIT followed by a filename, the stored program with that name will be opened on the display, and we can add a few lines or delete a few lines; we can save it using the command STORE. LISTF gives the listing of the files: if a few files are stored and we want to see their listing, we use this command. The next is DELETE.
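Putting the commands above together, the full pick-and-place sequence might read as follows; this is a sketch reconstructed from the lecture's description (the exact punctuation of VAL arguments, and whether the move to the bin is MOVE or MOVES, may differ between VAL dialects).

```text
APPRO  PART, 100    ; stop 100 mm above the object along z
MOVES  PART         ; straight-line move down to the object
CLOSEI              ; close the gripper, short delay
DEPARTS 200         ; straight-line retreat 200 mm along z
APPROS BIN, 300     ; straight-line approach to 300 mm above bin 2
MOVE   BIN          ; move down to the bin
OPENI               ; open the gripper (release), short delay
DEPART 100          ; retreat 100 mm along z
```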
With DELETE, we can delete a line or a program. Next is LOAD followed by a filename: with this command, we can load the program on the display, where we can add a few new lines or delete a few lines, and after that we can save it, execute it and run the robot. Now, here I just want to mention one thing: I have discussed the different robot teaching methods and some specific commands to solve a particular task, but the purpose is not to make the robot intelligent. By teaching, we cannot make a robot intelligent; to make it intelligent, we will have to do something else, which I am going to discuss at the end of this course. Once again, let me repeat: the purpose of teaching is just to give instructions, not to make the robot intelligent. Now, I am going to concentrate on how to prepare the specification of a particular robot. Suppose I am going to purchase a robot for some specific purpose; what are the different pieces of information I will have to give while preparing the specification? I am going to mention these one after another. First, I will have to mention the control type, that is, whether I should go for a servo-controlled or a non-servo-controlled robot. As I have already discussed, a non-servo-controlled robot will not give very accurate movement; if we want very accurate movement, we will have to go for a servo-controlled robot. So, the control type we will have to mention.
Next is the drive system: we will have to mention whether we are going for a pure mechanical drive like a gear drive, chain drive or belt drive, or for a hydraulic, pneumatic, electro-hydraulic or electro-pneumatic drive. That drive system we will have to clearly mention. Next is the coordinate system: whether it is a Cartesian, cylindrical, spherical or revolute coordinate robot, we will have to clearly mention in the specification. Next is the teaching or programming method: how can I teach the robot, and which method am I going to use? Then come accuracy, repeatability and resolution; we know the meaning of these terms, and if we want to purchase a robot, we will have to mention how much resolution, accuracy and repeatability we want. Next is the payload capacity: depending on it, we will have to find out how large the joint torques should be and what the capacity of the motors should be, which I will be discussing after some time. The payload capacity is the maximum load that can be carried at the end-effector of the robot, and we will have to state it clearly. Then comes the weight of the manipulator: we will have to mention the weight and, accordingly, think about the foundation of the robot. Next is the application, the purpose for which we are going to use the manipulator, and then the range and speed of the arm and wrist: we will have to specify the ranges of movement of the different joints, the speeds of these movements and the workspace; these things we will have to find out beforehand.
Next, if we want to make the robot intelligent, we will have to use some sensors, which I have not yet discussed; how to select them and what the different types of sensors are, I am going to discuss after some time. So, this is how to prepare the specification if we want to purchase a robot. Now, I am going to carry out one economic analysis, and before that, let me tell you the purpose behind it. We have understood that a modern manufacturing unit should keep some robots, and robots have some other types of applications as well. Now, if a particular manufacturing unit wants to purchase a robot, a robot is costly: today's sophisticated serial manipulator with, say, 6 degrees of freedom costs around 20 to 25 lakh Indian rupees. So, the manufacturing unit has to take a decision: should it take a loan from the bank to purchase this robot or not? To take this decision, I will have to carry out an analysis, and that is the purpose of this economic analysis. Now, I am going to define a few terms in order to take that decision at the end. Here, the symbol F indicates the capital investment to purchase a robot, which includes the purchasing cost and the installation cost; so, if I want to purchase and install a robot, I will have to spend F amount of money. Next, B indicates the savings in terms of material and labour costs. We have already discussed what happens if we can replace the human operator.
So, there will be some saving in terms of labour cost, and as the chance of rejection will be less, there will also be some saving in terms of material cost; B indicates these savings in material and labour costs. Next is C, which is the operating or maintenance cost of the robot, and then D, which indicates the depreciation of the robot. By depreciation, we mean the falling value of an asset. Let me take one example: suppose I purchase a car today for 5 lakh rupees. If I want to sell it after 10 days, I may not get 5 lakh rupees in return, even if the price of that brand of car remains the same; I may get slightly less, say 50,000 rupees or 1 lakh less than the 5 lakhs. That difference of 50,000 rupees or 1 lakh is the depreciation of the car in 10 days. Now, whenever we purchase a new machine, the maintenance cost is low, and that is why we consider that, initially, there will be more depreciation; with the age of the car or machine, the maintenance cost increases and the depreciation decreases, so that the sum of the depreciation and maintenance costs remains more or less constant. Here, however, for simplicity, I am going to consider a constant rate of depreciation, although generally we consider a varying rate: initially, the rate of depreciation is more, and with time it becomes less. Next is A, which indicates the net saving. Now, if I purchase the robot, there will be some saving in terms of material and labour costs.
So, A is nothing but B minus C (the operating or maintenance cost) minus D, the depreciation, which I subtract here; this depreciation value is going to help us in the tax calculation. Whenever we calculate tax, we have some standard deduction, and that standard deduction is a sort of depreciation; so, we calculate the net saving by subtracting the depreciation value. Here, G is the tax to be paid on the net saving: a certain percentage of our saving or income, say 30 or 35 percent, we pay as tax, so G is the total amount of tax to be paid on this net saving. Now, the payback period, denoted by E, is the minimum amount of time, in years, required to get the invested money back. For example, while purchasing, I have spent some money on the robot; the minimum time, in years, that it will take to give me that money back is the payback period of the machine, or here, of the robot. This payback period E is calculated as the capital investment F divided by (B minus C minus G), where G is the amount of tax to be paid. This is the way we calculate the payback period, that is, the number of years required to get back the money spent.
Once we have the payback period, next we try to find out the rate of return on investment. To determine it, suppose that, out of my net saving, I have paid, say, 30 or 35 percent as tax to the government; then I am left with the remaining, say, 65 percent, and that remaining amount is nothing but I. So, let I be the modified net saving after payment of the tax. If I is known, I can find the rate of return on investment H, which is (I divided by F) multiplied by 100, in percent, and we will get some numerical value known as the rate of return on investment. Now, we compare the payback period with the techno-economic life of the robot, and the rate of return on investment with the rate of bank interest. The first comparison is the payback period with the techno-economic life of the robot; the payback period I have already defined, so let me define the techno-economic life. The techno-economic life of a robot is the intersection of its technical life and its economic life, and by intersection we mean the minimum: supposing that the technical life of a robot is 10 years and its economic life is 6 years, the techno-economic life will be 6 years, not 10 years. By technical life, we mean the period up to which the robot can produce goods within the technical specification, that is, within the tolerance limits of the product, and by economic life, we mean the number of years during which the robot manufactures within the profit zone.
So, as long as I am getting profit by using the robot in my manufacturing unit, the robot is in its economic-life zone. This is the way we find the technical life and the economic life of a robot, and their intersection is nothing but the techno-economic life. The rate of return on investment, which I have already discussed, has to be compared with the rate of bank interest: if I take a loan from the bank, at the end of each year I will have to pay an EMI to the bank. So, the rate of return on investment has to be greater than the rate of bank interest, and the payback period has to be less than the techno-economic life; only then should we go for purchasing the robot by taking a loan from the bank. This is the way we have to take the decision on whether we should purchase a robot by taking a bank loan. Thank you.
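The whole analysis above can be sketched as a short calculation; all the rupee amounts, the tax rate, the techno-economic life and the bank interest rate below are made-up illustrative values, not figures from the lecture.

```python
# Sketch of the economic analysis described above.
# All numbers are illustrative assumptions.

def payback_period(F, B, C, G):
    """E = F / (B - C - G): years needed to recover the capital investment F."""
    return F / (B - C - G)

def rate_of_return(I, F):
    """H = (I / F) * 100, where I is the modified net saving after tax."""
    return 100.0 * I / F

F = 2_500_000                          # capital investment (purchase + installation)
B, C, D = 900_000, 150_000, 250_000    # savings, maintenance cost, depreciation
A = B - C - D                          # net saving used for the tax calculation
G = 0.30 * A                           # tax at an assumed 30% rate
I = A - G                              # modified net saving after tax

E = payback_period(F, B, C, G)
H = rate_of_return(I, F)

# Decision rule from the lecture: buy the robot on a bank loan only if
# E < techno-economic life AND H > rate of bank interest (assumed 6 yr, 9%).
buy = (E < 6.0) and (H > 9.0)
```

With these assumed numbers, the payback period is about 4.2 years and the rate of return is 14 percent, so both conditions hold and the purchase would be justified.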
Lecture 24: Robot Dynamics
Now, I am going to start a new topic, that is, topic 4, on Robot Dynamics. The purpose of dynamics is to determine the amount of force, if it is a linear joint, or the amount of torque, if it is a rotary joint, which is the reason behind the movement of the robotic links and joints. Now, let us see how to carry out the dynamics. The prerequisites are robot kinematics and trajectory planning, which we have already discussed, so we are now in a position to carry out the dynamic analysis, that is, to find the expression for the joint torque or joint force that creates the movement. Before I proceed further, let me define one term: inverse dynamics. While discussing kinematics, we discussed the meaning of two terms, forward kinematics and inverse kinematics; now I am going to introduce another term, inverse dynamics. Let us concentrate on this block diagram, with an input side and an output side. On the input side, I have written q_1, q_2 up to q_i; then q_1-dot, q_2-dot up to q_i-dot; then q_1-double-dot, q_2-double-dot up to q_i-double-dot. Here, q represents the generalized coordinate: q is nothing but theta for a rotary joint and d for a linear joint, and q-dot is the first time derivative.
So, if q is theta, then q-dot is the angular velocity, and if q is d, it is the linear velocity; similarly, q-double-dot is the angular acceleration for theta and the linear acceleration for d. If all i joints of a manipulator are rotary, then the inputs are theta_1 through theta_i, theta_1-dot through theta_i-dot, and theta_1-double-dot through theta_i-double-dot, and the outputs are the joint torques tau_1 through tau_i and/or the joint forces. If, instead, k equals i, all the joints are linear joints; for example, a Cartesian coordinate robot has all linear joints. If j plus k equals i, we have a combination: j rotary joints and k linear joints. So, here the inputs are the independent variables (the joint positions, velocities and accelerations) and the outputs are the joint torques or forces, or a combination of both; this problem of dynamics is defined as inverse dynamics, and remember, this is not forward dynamics. In this course, I am going to discuss the inverse dynamics problem of robots, not the forward dynamics. The reverse of this problem is the problem of forward dynamics: there, the joint torques and forces are the inputs, and the outputs are the joint angles, angular velocities and angular accelerations. In this course, as I told, I am going to consider only inverse dynamics.
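To make this input-output structure concrete, here is a hedged toy example of inverse dynamics for a single rotary joint; the inertia, mass, centre-of-mass distance and gravity model are illustrative assumptions, not values from the lecture.

```python
import math

# Toy inverse dynamics for a 1-DOF rotary joint: the inputs are the joint
# position q, velocity q_dot and acceleration q_ddot, and the output is the
# joint torque tau. A single link has no centrifugal/Coriolis coupling, so
# only the inertia and gravity terms appear. I, m, l_c are made-up values.
def inverse_dynamics(q, q_dot, q_ddot, I=0.5, m=2.0, l_c=0.3, g=9.81):
    """tau = I*q_ddot + m*g*l_c*cos(q), link angle measured from horizontal."""
    return I * q_ddot + m * g * l_c * math.cos(q)
```

For instance, inverse_dynamics(0.0, 0.0, 2.0) gives the torque needed to accelerate the horizontal link at 2 rad/s^2 while holding it against gravity, while at q = pi/2 (link vertical) the gravity term vanishes.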
If we want to solve the forward dynamics, in fact, we will have to take the help of tools like neural networks and fuzzy logic, which I am not going to discuss in this course; so, I am going to concentrate on inverse dynamics. As I told, the purpose is to determine the joint torque or joint force, and if I concentrate on the joint torque, it consists of a few terms: one is the inertia term, another is the centrifugal and Coriolis term, and another is the gravity term. The gravity term depends on the acceleration due to gravity; the inertia term depends on the mass distribution of the robotic link and is expressed in terms of the moment-of-inertia matrix; the centrifugal force is a concept all of us know; and then there is another component, called the Coriolis force. This Coriolis force requires some explanation: a Coriolis component appears whenever there is a sliding joint on a rotating link. Let me prepare one rough sketch to explain the concept. I am going to draw one robotic link: this end is connected to the motor, and this is the other end. On this robotic link, there is a sliding component, which slides along a groove in the link. So, the sliding component is sliding in this direction while the link is rotating, with this end connected to the motor; roughly, I think, it can be visualized: the sliding member is sliding and the link is rotating.
Now, in that case, the sliding member will be subjected to some amount of force, which is known as the Coriolis force; this is the concept of the Coriolis force. Of course, if we want to consider friction, that can also be considered. Now, let me concentrate on this figure: here, X_0, Y_0, Z_0 is nothing but the base coordinate system of the robot, and I am going to concentrate on a particular link, the i-th link. For this i-th link, the motor is connected at this end but, very purposefully, I have put the coordinate system at the other end, not here. The reason is simple: we mechanical engineers always try to think in terms of the reaction force and reaction torque, so whenever we calculate any force or torque, that is nothing but the reaction force or torque. If I put a motor here, it is going to generate torque and some angular displacement, velocity and acceleration; if I want to measure that, I will have to measure the reaction torque, and to measure the reaction torque, I will have to concentrate on the other end. That is why this coordinate frame, denoted here by frame i, is attached at that end. Now, suppose that on this link I concentrate on a particular point, and that point has a differential mass, a small mass dm.
Now, this particular point lying on the i-th link can be represented in the link's own coordinate system by a position vector, r_i with respect to frame i; that means I am considering the i-th point lying on the i-th link. I could also consider the j-th point lying on the i-th link, in which case the representation would be r_j with respect to frame i, but here I am going to use r_i with respect to frame i. The same point can be represented in the base coordinate system of the robot by another position vector, r_i with respect to frame 0. So, we have the same point in its own coordinate system and in the base coordinate system, and from robot kinematics we know how to relate r_i with respect to 0 and r_i with respect to i; this relationship I am going to discuss in the next slide. In fact, I am going to concentrate first on the inertia term: for the inertia term, we try to find out the mass distribution, that is, the moment-of-inertia term. Now, r_i with respect to i, the position of the i-th particle lying on the i-th link, has coordinates (x_i, y_i, z_i), and I append a 1 to make this position vector a 4 x 1 matrix; I hope you remember, we followed the same convention while deriving the expression for the homogeneous transformation matrix: at the bottom of the position terms, we put a 1.
Now, this r_i with respect to i is nothing but [x_i, y_i, z_i, 1]^T, a 4 x 1 matrix. Let us concentrate on how to relate the position of the i-th particle lying on the i-th link to the base coordinate system, provided I know r_i with respect to i: I will have to multiply T_i with respect to 0, that is, the transformation matrix of frame i with respect to the base coordinate frame, by r_i with respect to i, so that r_i with respect to 0 equals T_i with respect to 0 times r_i with respect to i. And all of us know from kinematics that T_i with respect to 0 is nothing but T_1 with respect to 0 multiplied by T_2 with respect to 1 multiplied by T_3 with respect to 2, and so on, the last term being T_i with respect to i minus 1. T stands for the transformation matrix, and we know how to derive this expression. Now, I am going to concentrate on the concept of the moment of inertia: how to define it and how to find the inertia tensor for a particular robotic link, the i-th link. By definition a moment of inertia is of the form m r^2, and for the i-th link the inertia tensor is denoted by J_i = integral of (r_i with respect to i)(r_i with respect to i)^T dm. So, we will have to find out this particular tensor. Now, r_i with respect to i is [x_i, y_i, z_i, 1]^T and its transpose is [x_i, y_i, z_i, 1]; the first is a 4 x 1 matrix and the second a 1 x 4 matrix, so if you multiply them you will be getting a 4 x 4 matrix.
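The chain of transformations described above can be sketched in a few lines of code. This is a minimal illustration, not the lecture's own program: the link transforms used in the example are made-up pure translations, chosen only so the result is easy to check by hand.

```python
import numpy as np

# A point known in link i's own frame, r_i^i = [x_i, y_i, z_i, 1]^T, is
# expressed in the base frame by r_i^0 = T_i^0 r_i^i, where
# T_i^0 = T_1^0 T_2^1 ... T_i^(i-1).

def chain_transform(link_transforms):
    """Multiply successive 4x4 homogeneous transforms T_1^0, T_2^1, ..."""
    T = np.eye(4)
    for T_step in link_transforms:
        T = T @ T_step
    return T

def translation(dx, dy, dz):
    """Pure-translation homogeneous transform (a convenient test case)."""
    T = np.eye(4)
    T[:3, 3] = [dx, dy, dz]
    return T

# Example: two links, each a pure translation along x.
T_2_to_0 = chain_transform([translation(1.0, 0, 0), translation(2.0, 0, 0)])
r_in_link2 = np.array([0.5, 0.0, 0.0, 1.0])  # point on link 2, own frame
r_in_base = T_2_to_0 @ r_in_link2            # same point in base frame
```

With the two translations above, the point 0.5 units along link 2 ends up at x = 3.5 in the base frame, as expected.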
So, if I multiply [x_i, y_i, z_i, 1]^T by [x_i, y_i, z_i, 1], the first row is x_i^2, x_i y_i, x_i z_i, x_i; the second row is x_i y_i, y_i^2, y_i z_i, y_i; the third row is x_i z_i, y_i z_i, z_i^2, z_i; and the fourth row is x_i, y_i, z_i, 1. So, this is the 4 x 4 matrix we will be getting, and the moment of inertia tensor for the i-th link, J_i, is nothing but the integral of this 4 x 4 matrix over dm. Once you have got this particular inertia tensor, we can consider different cases, that is, different types of links; for example, for a robotic link with rectangular cross-section we can very easily find the expression of the inertia tensor. Let me consider one robotic link with rectangular cross-section: the motor is connected at one end and I have put the coordinate system at the other end; the length of the link is l and the cross-section has sides a and b. For simplicity, we are considering a robotic link with constant rectangular cross-section. Now, let us see how to find the inertia tensor: we concentrate on a differential mass, a small element with dimensions dx, dy and dz.
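The 4 x 4 integrand above is just the outer product r r^T of the augmented position [x, y, z, 1]. Approximating the integral as a sum over discrete point masses makes the structure of the tensor easy to see; the two point masses below are made-up values for illustration, not from the lecture.

```python
import numpy as np

def pseudo_inertia(points, masses):
    """J = sum of m * [x,y,z,1][x,y,z,1]^T over the mass points,
    a discrete stand-in for J_i = integral of r r^T dm."""
    J = np.zeros((4, 4))
    for (x, y, z), m in zip(points, masses):
        r = np.array([x, y, z, 1.0])
        J += m * np.outer(r, r)
    return J

J = pseudo_inertia([(1.0, 0.0, 0.0), (0.0, 2.0, 0.0)], [0.5, 0.5])
# By construction, the (4,4) entry is the total mass, and the last
# row/column holds the first moments (integral of x dm, y dm, z dm).
```

For the two sample points, the bottom-right entry is the total mass 1.0 and the (2,4) entry is the first moment of y, 0.5 * 2 = 1.0.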
Now if we concentrate on this particular element, we can very easily find the differential mass: dm = rho dx dy dz, the volume of the element multiplied by the density rho. We are going to find the moment of inertia, which by definition is of the form m r^2, so it always has to be positive. The moment of inertia about the x-axis is I_xx = triple integral of (y^2 + z^2) rho dx dy dz, and we will have to decide the limits of integration, first for dx, then dy, then dz. Now, let us go back to the previous slide and find the limit for dx: along the x direction the total dimension is a and the frame is at the midpoint, so x will vary from -a/2 to +a/2. What should be the range for y? The coordinate system is at the end of the link and the link extends in the negative y direction, so this part is negative and y varies from -l to 0. Along the z direction the total dimension is b, so z varies from -b/2 to +b/2. So, we have found the limits for this integration, and if we solve it, x from -a/2 to +a/2, then y from -l to 0, then z from -b/2 to +b/2 (all of you please practise carrying out this integration and deriving the expression), we will be getting I_xx = m(l^2/3 + b^2/12), where m is nothing but the mass of this particular link, m = rho a b l.
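The closed form just derived, I_xx = m(l^2/3 + b^2/12), can be checked numerically by brute-force midpoint-rule integration of (y^2 + z^2) rho over the link volume. The dimensions a, b, l and the density rho below are arbitrary test values, not from the lecture.

```python
import numpy as np

a, b, l, rho = 0.2, 0.1, 1.0, 1000.0
n = 40                                    # grid cells per axis
xs = -a/2 + (np.arange(n) + 0.5) * a/n    # cell midpoints along x
ys = -l   + (np.arange(n) + 0.5) * l/n    # cell midpoints along y
zs = -b/2 + (np.arange(n) + 0.5) * b/n    # cell midpoints along z
X, Y, Z = np.meshgrid(xs, ys, zs, indexing="ij")

dV = (a/n) * (l/n) * (b/n)                # volume of one cell
I_xx_numeric = rho * np.sum(Y**2 + Z**2) * dV

m = rho * a * b * l                       # mass of the link
I_xx_closed = m * (l**2/3 + b**2/12)      # the lecture's closed form
```

With a 40 x 40 x 40 grid the midpoint rule agrees with the closed form to well under 0.1 percent, which is a quick sanity check on the limits of integration chosen above.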
So, a b l is nothing but the volume of this rectangular link, and multiplied by the density rho it gives the mass. Now, by following the similar procedure, we can also find I_yy = triple integral of (x^2 + z^2) rho dx dy dz with the same limits of integration as discussed earlier, and if we carry out this integration we will be getting I_yy = m(a^2 + b^2)/12; all of you please try to derive this expression. Similarly, I_zz = triple integral of (x^2 + y^2) rho dx dy dz, and carrying out this integration gives I_zz = m(a^2/12 + l^2/3). Then comes the concept of the product of inertia: in place of r^2 we write xy, yz or zx, and as all of you know, a product of inertia could be zero, negative or positive; all three possibilities are there. For example, I_xy = triple integral of x y rho dx dy dz with the same limits of integration comes out equal to 0; please try to derive this and check. Similarly, I_yz = 0 and I_zx = 0. We also need the first moments: the integral of x dm is equal to 0, the integral of z dm is equal to 0, and the integral of y dm becomes -m l/2.
Now, if you look at this rectangular cross-section link, the mass centre is at the middle of the link; this is the y direction and the total length is l, so the x coordinate of the mass centre is 0, the y coordinate is -l/2 and the z coordinate is 0. That is why the integral of y dm can be written as m y_bar = -m l/2, the integral of z dm is m z_bar = 0, and the integral of dm is simply m. Now, let me concentrate on the well-known inertia tensor, which is available in all the textbooks of robotics, and see how to derive its general expression. While determining I_xx, if you remember, we considered y^2 + z^2; while determining I_yy we considered x^2 + z^2; and while determining I_zz, x^2 + y^2. So, in (-I_xx + I_yy + I_zz)/2, the -y^2 and +y^2 cancel and the -z^2 and +z^2 cancel, and I am getting 2x^2 divided by 2, that is, x^2. That means the first term of the tensor written earlier, the integral of x_i^2 dm, is nothing but (-I_xx + I_yy + I_zz)/2, and the other terms follow in the same way. So, this particular inertia tensor can be derived starting from first principles, and if I now put in all the values of I_xx, I_yy, I_zz, the products of inertia and the first moments, I will be getting the 4 x 4 inertia tensor for this rectangular link: J = m times the matrix with rows [a^2/12, 0, 0, 0], [0, l^2/3, 0, -l/2], [0, 0, b^2/12, 0] and [0, -l/2, 0, 1].
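The assembled tensor can be written down directly. The sketch below encodes the 4 x 4 pseudo-inertia tensor for the rectangular link as derived above (frame at the end opposite the mass centre, link extending along -y); the numeric values of m, a, b, l are test values, and the check at the end verifies that the diagonal entries recover the classical moment I_xx = m(l^2/3 + b^2/12).

```python
import numpy as np

def rectangular_link_pseudo_inertia(m, a, b, l):
    """4x4 pseudo-inertia tensor of a uniform rectangular link of mass m,
    length l along -y, cross-section a x b, frame at the motor-free end."""
    return m * np.array([
        [a**2/12, 0.0,     0.0,      0.0],
        [0.0,     l**2/3,  0.0,     -l/2],
        [0.0,     0.0,     b**2/12,  0.0],
        [0.0,    -l/2,     0.0,      1.0],
    ])

J = rectangular_link_pseudo_inertia(m=2.0, a=0.1, b=0.05, l=0.6)
# The classical moments come back out of the diagonal, e.g.
# I_xx = J[1,1] + J[2,2], and J[3,3] is the total mass.
```

This inversion (diagonal sums giving back I_xx, I_yy, I_zz) mirrors the cancellation argument made in the derivation above.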
Now, there is the concept of a slender link, for which l is very large compared to a and b. That means, if I consider a slender link, the terms m a^2/12 and m b^2/12 will tend to 0, and I will be getting, as the inertia tensor of a slender link, the matrix with rows [0, 0, 0, 0], [0, m l^2/3, 0, -m l/2], [0, 0, 0, 0] and [0, -m l/2, 0, m]. So, this is the inertia tensor. Thank you.
Robotics_by_Prof_D_K_Pratihar
Lecture_37_Robot_Motion_Planning.txt
So, our aim is to design and develop an intelligent and autonomous robot. We have seen how to collect information about the environment with the help of sensors and cameras; the camera could be an on-board camera or an overhead camera, and there could be multiple sensors, or a combination of sensors and cameras. Once the information of the environment has been collected, an intelligent robot should be able to take decisions as the situation demands. How can a robot take such decisions? That is what I am going to discuss, and we are going to start a new topic, topic 8: Robot Motion Planning. In motion planning, we try to plan the motion, that is, to find out the course of action while moving from an initial position to the final position. Let me consider the tip of a manipulator: this is the initial position of the robot and the final position is somewhere else. Starting from the initial position, the tip is going to reach that final point through a number of intermediate points, and there could be a few obstacles on the way, so it will have to avoid collision with the obstacles. Determining this course of action, that is, the collision-free path, is the task of the robot; to perform that task, the robot should have a proper motion planner or path planner. Now, I am going to discuss how to design and develop a suitable path planner or motion planner for an intelligent and autonomous robot. Robot motion planning is broadly classified into two subgroups.
Now, one is known as gross motion planning or free-space motion planning, and the other is fine motion planning or compliant motion planning. Let me take one example to bring out the difference between the two. Suppose I have got one board, and on this board I have to write one word, say ROBOT, with the help of a marker or a chalk. Now, if you give the same task to an intelligent robot, that robot will first try to find out the free space on this particular board where it can write that word. For example, part of the board may not be clean, something is already written there, so that part is not the free zone and I cannot write anything on it. So, how to write the word? If you give this task to the robot, it will first try to find out where the free space is, where it can write down all the letters; and once it has got that free space, it is going to write them down: R, O, B, O, T, I, C, S. So, the robot is going to write ROBOTICS on the board.
Now, let me repeat: the first thing is to find out the free space, and once you have got the free space, you have to write the word, letter by letter. Writing a letter on the board is not so easy, particularly for a robot, because while writing, the marker is in touch with the board and there is compliant motion: the marker is in contact with the board and some amount of force has to be applied while writing; that is called compliant motion. So, there are two types of planning here: one is free-space planning, the other is compliant motion planning. Compliant motion planning decides how to apply force and how to manipulate: while writing, I am gripping the marker with my fingers and doing some manipulation so that I can write R, O and so on. I have got one plan to write R, another plan to write O, another sequence for each letter, and starting from our childhood we learn these through many iterations and a lot of practice; that is compliant motion planning. The purpose of free-space motion planning, on the other hand, is to find out the feasible and infeasible zones: this part is an infeasible zone, but that part is a feasible zone, where I can write one letter. So, free-space planning determines the feasible and infeasible space, while compliant motion planning actually writes the letter; and this compliant motion planning, or fine motion planning, is a bit difficult.
And, this gross motion planning or free-space motion planning is, I should say, comparatively easy; in this particular course I will be concentrating only on gross motion planning. I will not be discussing fine motion planning or compliant motion planning, because that has been kept beyond the scope of this course. Now, gross motion planning or free-space motion planning can once again be subdivided into two parts: one is called the manipulation problem, the other the navigation problem. The moment I take the help of a serial manipulator just to write something on the board, that is the manipulation task; and the moment a moving robot, a mobile robot, is working, that is the navigation task. In fact, we are planning to give some practical examples of manipulation and navigation in the latter part of this course. To put it another way: a serial manipulator or a parallel manipulator solves the manipulation problem, whereas a mobile robot, which could be a wheeled robot, a multi-legged robot or a tracked vehicle, tackles the navigation problem. Let us see how to proceed further with the different types of motion. This sketch shows that, of the total planning time, free-space motion planning takes only a small part, a small duration.
On the other hand, compliant motion planning takes the larger duration, and together they make up the total motion planning time or total task time; but as I told you, in this course I am going to concentrate only on free-space motion planning. Next, this shows the sequence of robotic actions, and you will see that almost all of these modules I have already discussed; now I am discussing motion planning, and once this discussion is complete, all the modules of robotics will have been touched. If we want to solve one robotic task with the help of a robot, the first thing is task identification: you have to identify the task which is going to be tackled or solved with the help of the robot. Then we go for motion planning, which I am discussing now; once that particular course of action has been planned, we go for the kinematic analysis, which I have already discussed; then trajectory planning before the dynamics. Dynamics I have also discussed, and the control scheme too, because you will have to realize the computed torque with the help of a motor and a suitable controller. Once those things are ready, we are in a position to generate the motion. These are, in short, the necessary modules of robotics, and as I told you, in this course I am just going to touch the fundamentals of all the modules. So, let us try to concentrate more on motion planning now. The environment could be either a structured environment, that is, a known environment, or an unstructured one.
Now, if the complete information of the environment is known beforehand, it is called a structured environment. For example, suppose I am going to solve a motion planning problem where the environment is known. Let me take a very simple example: in the Cartesian X-Y coordinate system, I have got a robot with 2 degrees of freedom, with link lengths L_1 and L_2, and the tip of the manipulator has coordinates X and Y. Suppose I give the task: start from point S and reach the goal, point G. The tip of the manipulator is going to start from S and reach the goal, and suppose I put one condition or constraint: the tip of the manipulator should not collide with any of the obstacles. Say I have got one triangular obstacle, one circular obstacle and one line obstacle, all 2D stationary or fixed obstacles. Here the environment is known, a structured environment: we know the static obstacles, we know their locations, and I know my problem, that the tip of the manipulator has to start from S and reach the goal. This type of environment is known as a structured environment. Now, suppose I modify the problem a little and consider that these obstacles are moving; for example, this obstacle is moving in some direction with some speed.
Now, the problem becomes difficult: the positions of the obstacles are going to vary with time, and the problem becomes one of motion planning in the presence of moving obstacles. Path planning or motion planning in a structured environment is called the find-path problem, and motion planning in an unstructured environment is known as the dynamic motion planning problem, or motion planning among dynamic obstacles. I am going to discuss how to tackle both the find-path problem and the dynamic motion planning problem, and I am going to explain the working principles of a few tools for motion planning with the help of some examples. Now, if you see the motion planning approaches, they are broadly classified into two subgroups. One is called the global approach, also known as the act-after-thinking process or offline planning; the other is the local approach, the act-while-thinking process or online planning. If the environment is known, a structured environment, then we can go for some sort of global planning, the global approach or offline planning. But suppose I have got some moving obstacles in the environment, so that the environment is dynamic; this environment is unstructured, so we will have to go for the local approach, the act-while-thinking process or online planning. Let us try to explain the principles of both the global approach and the local approach, and let us start with the motion planning schemes. The motion planning schemes are broadly classified into two groups: one is called the traditional schemes or algorithmic approaches.
And, we have got the non-traditional schemes, which use the principles of soft computing. The traditional schemes, also known as algorithmic approaches, are once again classified into two subgroups: the graph-based techniques and the analytical approaches. If we compare the graph-based methods and the analytical approaches, the graph-based methods were proposed first: for example, the visibility graph was proposed in the year 1969 by Nilsson, and then we have got the Voronoi diagram, cell decomposition, the tangent graph, the accessibility graph; these are all graph-based methods. On the other hand, among the analytical approaches we have got the potential field method, path-velocity decomposition, incremental planning, the probabilistic approach, the relative velocity approach, the reactive control scheme or behaviour-based robotics, and so on. So, we have got a large number of approaches, a large number of methods, to solve the motion planning problem. On the non-traditional side, we have got a few approaches using the principles of the fuzzy reasoning tool, neural networks, the combined neuro-fuzzy system, and so on. Those are beyond the scope of this particular course and will not be taught here; I am going to concentrate only on the traditional schemes or algorithmic approaches. We are going to discuss both the graph-based techniques and the analytical approaches, and we will see how to solve the find-path problem and the dynamic motion planning problem. We are going to start with one graph-based technique, which is known as the visibility graph.
And, this particular visibility graph, as I told you, is the first approach, proposed in the year 1969 by Nilsson, and here the principle is very simple. For simplicity, we are considering point robots, and we are considering that the obstacles are stationary, that is, fixed obstacles. The problem scenario is very simple, and this type of problem is known as the find-path problem: starting from an initial position, the robot has to reach the goal while avoiding collision with the static obstacles. Let us see: this is the starting point S for the point robot, and this is the goal G; and here we have got some obstacles, obstacle 1, obstacle 2 and another obstacle, all 2D stationary obstacles. According to this method, we start from S. If there were no obstacle, you could very easily connect S and G by a straight line, and that would be the best path, the collision-free and time-optimal path. But due to the presence of the obstacles, the robot will have to find a feasible path so that it does not collide with any obstacle. The rule is very simple; the rule is as follows: connect those vertices of the obstacles which are visible from one another. Now, let me start from S and look towards the goal: this particular vertex is visible, this one is also visible, and this one too, so you draw one line to each of them. Then you come back, and from each such vertex you look again and connect whatever is visible.
So, from each vertex, the visible vertices are connected by straight lines, and the vertices which are not visible are not connected; and from some of the vertices the goal itself is visible, so those are connected to G. Now, if I number the vertices 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, with the goal as G, I can write down the different feasible paths. For example, one path could be: start from S, go to 2, then 2 to 8, then 8 to G. Another path could be: start from S, then go to 3, then go to 6, and then go to G. Similarly, there could be many other possible combinations, possible sequences. Now, out of all the possible sequences, which are all collision-free paths, you can find the time-optimal path; but if you want to find the collision-free and at the same time optimal path, you will have to take the help of some optimization tool. Nilsson did not use any such optimization tool: he could generate all the feasible paths, and he then concluded that, starting from S to reach this particular goal, there are many feasible collision-free paths, and out of all the feasible paths the robot will have to choose one. This is the principle of the visibility graph. Thank you.
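The visibility-graph rule described above can be sketched in code for the simplest case: a point robot, one convex polygonal obstacle, and a shortest-path search over the visibility edges. This is a minimal illustration with made-up coordinates, not Nilsson's original formulation; segments that only touch an obstacle vertex are counted as visible, which is adequate for convex obstacles.

```python
import math

def _ccw(p, q, r):
    """Signed area test: positive if p -> q -> r turns counter-clockwise."""
    return (q[0]-p[0])*(r[1]-p[1]) - (q[1]-p[1])*(r[0]-p[0])

def properly_cross(a, b, c, d):
    """True if segment ab strictly crosses cd (shared endpoints don't count)."""
    return (_ccw(c, d, a)*_ccw(c, d, b) < 0) and (_ccw(a, b, c)*_ccw(a, b, d) < 0)

def visible(p, q, obstacle_edges):
    return not any(properly_cross(p, q, e1, e2) for e1, e2 in obstacle_edges)

def shortest_via_visibility(start, goal, obstacle):
    """Build the visibility graph over {start, goal, obstacle vertices}
    and return the shortest collision-free path (plain Dijkstra)."""
    edges = list(zip(obstacle, obstacle[1:] + obstacle[:1]))
    nodes = [start, goal] + obstacle
    dist = {v: math.inf for v in nodes}
    dist[start] = 0.0
    prev, todo = {}, set(nodes)
    while todo:
        u = min(todo, key=dist.get)
        todo.remove(u)
        for v in nodes:
            if v != u and visible(u, v, edges):
                d = dist[u] + math.dist(u, v)
                if d < dist[v]:
                    dist[v], prev[v] = d, u
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[goal]

# One triangular obstacle blocking the straight line from S to G.
triangle = [(2.0, 1.0), (3.0, 3.0), (1.0, 3.0)]
path, length = shortest_via_visibility((0.0, 2.0), (4.0, 2.0), triangle)
```

For this layout the direct S-G segment is blocked, and the planner detours through the obstacle vertex (2, 1), exactly the kind of vertex-to-vertex path the visibility graph produces.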
Robotics_by_Prof_D_K_Pratihar
Lecture_35_Robot_Vision_Contd.txt
Now, I am going to solve one numerical example using the method of masking. Suppose we have got one image, and corresponding to this image we have got light intensity values pixel-wise; let me take some values at random: the first row contains 10, 30, 40, 50 and some other values; the second row 20, 60, 65, 75 and some other values; the third row 15, 25, 35, 65; the fourth row 20, 30, 70, 80; and there are a few other values as well. Now, my aim is to find out the preprocessed value corresponding to a particular pixel; let me find the preprocessed value corresponding to the pixel holding 65 in the second row. Suppose I take the help of one 3 x 3 mask of the type we discussed: +8 at the centre and -1 at each of the eight surrounding positions. So, this is nothing but a 3 x 3 template or mask, and our aim is to find the preprocessed value. What I do is, the centre of the mask is made coincident with this particular 65, so I can place the template over the image there, and then I try to find out the preprocessed value.
So, placing the template there, the preprocessed value is p = (-1)(30) + (-1)(40) + (-1)(50) + (-1)(60) + (8)(65) + (-1)(75) + (-1)(25) + (-1)(35) + (-1)(65). If I calculate this, that will be the preprocessed value corresponding to this particular pixel; suppose I get the integer value x in place of 65, then I am going to put this x in place of the 65, and all further processing will be done with this preprocessed value. This is the way we do masking: for each pixel we follow this method of masking, and the purpose, as I have already told you, is nothing but to remove noise from the image. Now, there is another method which is also very popular for preprocessing, called neighbourhood averaging. In neighbourhood averaging, we define a neighbourhood around the pixel and try to find the average. Let me take a very simple example: suppose I have got one image with light intensity values at the different pixels, say the first row is 20, 30, 40, 50 and some other values; the second row 10, 20, 30, 40; the third row 20, 30, 40, 50; and there are some other values as well. Now, let us see how to carry out this neighbourhood averaging; before we carry it out, we have to define the neighbourhood first.
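The masking step worked out above is just an element-wise weighted sum of the 3 x 3 neighbourhood with the 3 x 3 template. The small sketch below uses the lecture's own patch of values around the pixel holding 65.

```python
import numpy as np

# The 3x3 mask from the lecture: +8 at the centre, -1 elsewhere.
mask = np.array([[-1, -1, -1],
                 [-1,  8, -1],
                 [-1, -1, -1]])

# The 3x3 neighbourhood of the pixel holding 65, taken from the
# lecture's example image.
patch = np.array([[30, 40, 50],   # row above the pixel
                  [60, 65, 75],   # the pixel and its row neighbours
                  [25, 35, 65]])  # row below

p = int(np.sum(mask * patch))     # preprocessed value replacing the 65
```

Carrying out the sum gives p = 8*65 - (30+40+50+60+75+25+35+65) = 520 - 380 = 140, the integer that replaces the 65 before further processing.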
For example, suppose I have to find the neighborhood-averaged value corresponding to this particular 20. I have to define the neighborhood first; supposing that it is a 3 x 3 neighborhood, then surrounding this pixel we have the 3 x 3 neighborhood, and once I have defined it, I know the light-intensity values at the different pixels. So, what we do is sum up all the light-intensity values: 20 plus 30 plus 40, then plus 10 plus 20 plus 30, plus 20 plus 50 plus 40. Supposing that this sum is equal to x, and how many entries are there? 1, 2, 3, 4, 5, 6, 7, 8, 9; so we take x divided by 9. Whatever value we get, its nearest integer, say y, is taken, and this y is going to replace that particular 20. This is the method of neighborhood averaging. Mathematically, it can be expressed like this: the sum of all the light-intensity values contained in the neighborhood, divided by the number of entries in the neighborhood, including the pixel whose preprocessed value I am going to calculate. That is why I have divided by 9, and not by 8. So, it is 1 by R multiplied by the summation of f(n, m). This is how we determine the neighborhood and its average, and this average replaces the light-intensity value at that particular pixel. This is also a very popular method for preprocessing, and it is very simple; we can implement this neighborhood averaging very easily.
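Neighborhood averaging is a one-liner once the neighborhood is listed out. The sketch below uses the rows written on the board; the exact triple of values the lecturer reads out differs slightly from the displayed grid, so treat the numbers as illustrative, and the function name is mine.

```python
# Neighborhood averaging as described above: average all R values in the
# 3x3 neighborhood (including the centre pixel) and round to the nearest integer.
def neighborhood_average(image, row, col):
    values = [image[row + dr][col + dc]
              for dr in (-1, 0, 1)
              for dc in (-1, 0, 1)]
    return round(sum(values) / len(values))  # (1/R) * sum of f(n, m)

# The rows from the board; the centre pixel is the 20 at (1, 1).
image = [[20, 30, 40, 50],
         [10, 20, 30, 40],
         [20, 30, 40, 50]]
print(neighborhood_average(image, 1, 1))  # -> 27, which replaces the 20
```

Note that R = 9 here because the pixel being preprocessed is counted along with its 8 neighbours, exactly as the 1/R formula states.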
Now, I am going to discuss another very popular method of preprocessing, and that is called median filtering. This method is very simple. By median we mean the following: supposing there are 9 values, that is, an odd number of values, then I leave the first 4 and the last 4 and take whatever is at the middle. And if there is an even number, supposing 8, then I remove the first 3 and the last 3, and at the middle we have 2 remaining, because 3 plus 3 is 6, plus 2 is 8; we take the mean or average of the middle two. This is the way we calculate the median. Let me take a similar simple example. Supposing I have an image whose light-intensity values are 30, 40, 50, 60; then 25, 35, 25, 65 and some other values here; then 35, 25, then 32, 85 and some other values; and here also there are some other values. Once again, we have to define one neighborhood, and supposing I am going to find out what should be the preprocessed value corresponding to this particular 35. We define the neighborhood first; so, this is the 3 x 3 neighborhood, and in the neighborhood we have the light-intensity values 30, 40, 50, 25, 35, 25, 35, 25, 32. Let me write them here: 30, then 40, then 50, then 25, 35, then 25, then 35, 25 and 32. So, there are 9 values: 1, 2, 3, 4, 5, 6, 7, 8, 9. What we do is sort them in the ascending order.
So, what do I have to do? I have to find the lowest value; the lowest value is 25, and there are 3 such entries of 25. So, let me write 25, 25, 25; above 25 we have 30, so let me put 30 here; then comes 32; then comes 35, and here also we have 35; then we have 40, and after that we have 50. So, all 9 values are sorted in the ascending order, from the lowest to the highest. There is an odd number of values; so, the first 4 we neglect, the last 4 we neglect, and whatever is at the middle, that is 32, is the median value corresponding to these 9 values. So, this median, 32, is going to replace this particular 35; that is what we do in median filtering. Now, let me take another similar example, but slightly different. Supposing I have this type of numbers: say 25, 35, 65 and something; then 35, 45, 50 and something; then 65, 35, 25 and something; and something more. This is the image, and supposing I have to find out what should be the preprocessed value, according to median filtering, corresponding to this particular 25. If I define a 3 x 3 neighborhood here, part of it falls outside the image, so those entries are missing; we have only these 4 numbers: 25, 35, 35 and 45. Are you getting my point? So, we have to concentrate only on these 4 values; that means the values we have here are 25, then 35, then 35, then 45, and they are already in the ascending order.
Now, here there is an even number of values; so, this first one I leave, this last one I leave, and as there is an even number, I have to find the mean or average of 35 and 35, and that is also equal to 35. So, this 35 is going to replace that particular 25 as the preprocessed value, and following the same method, I can find the preprocessed value corresponding to each of these pixels. This is the way we can carry out the preprocessing, and, as I told, the purpose is to remove the noise and to restore any lost information. These are the methods of preprocessing which are generally used very frequently. Once I have done this preprocessing, we get the preprocessed data, and on the computer screen we have the matrix of the light-intensity values. Next, we have to find out the difference between the object and the background. As I told, here I am going to concentrate only on a black-and-white picture, for simplicity. Here we have to take the help of one operator called the thresholding operator; using this thresholding operator, we can find out the difference between the object and the background. Now, in black and white there could be two possibilities for the background and the object: the background could be white and the object dark, or the background could be dark and the object white. Both possibilities can be handled very easily using the principle of thresholding. So, let us see the principle of thresholding; the purpose is to separate the object from the background.
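Both median cases above, the odd count and the even count, fit in one small helper. The values come from the two examples in the lecture; the function name is mine.

```python
# Median of a neighborhood, handling both cases described above:
# sort the values; with an odd count take the middle one, with an even
# count take the average of the middle two.
def median_value(values):
    s = sorted(values)
    n = len(s)
    if n % 2 == 1:
        return s[n // 2]
    return (s[n // 2 - 1] + s[n // 2]) / 2

# The 9 values around the 35: sorted they are
# 25, 25, 25, 30, 32, 35, 35, 40, 50, so the median is 32.
print(median_value([30, 40, 50, 25, 35, 25, 35, 25, 32]))  # -> 32

# The corner pixel with only 4 valid neighbourhood entries:
print(median_value([25, 35, 35, 45]))  # -> 35.0 (mean of the middle two)
```

Unlike neighborhood averaging, the median discards extreme values entirely, which is why median filtering is good at removing isolated noise spikes without blurring edges as much.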
Now, let me consider a particular special case: a dark background, and the object is white. Let us see what happens. A white object means the light-intensity value will be more, and a dark background means the light-intensity values will be less. For this thresholding, we have to define one threshold value of light intensity, denoted by T. After thresholding, supposing I get g(x, y): initially we had f(x, y), the light-intensity value; then we converted it into p(x, y), the preprocessed value; and now I go for g(x, y) after the application of thresholding. I am considering a white object and a dark background; that means on the object the light-intensity value will be more. I put the condition that if p(x, y) is found to be greater than the threshold value T, which is predefined by the user, then it will generate 1; and as the object is white, its light-intensity value is more, so 1 is going to indicate the presence of the object. On the other hand, if p(x, y) is found to be less than or equal to T, it is going to generate 0; so, 0 means it is nothing but the background, and 1 means it is the object. Previously, on the computer screen we had the matrix of the preprocessed light-intensity values. Now, if we use this thresholding, suddenly we will find that on the computer screen there is a collection of 1s and 0s.
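The thresholding rule g(x, y) = 1 if p(x, y) > T, else 0 is directly expressible in code. The grid of preprocessed values and the threshold T = 50 below are made up for illustration.

```python
# Thresholding for a white object on a dark background, as defined above:
# g(x, y) = 1 if p(x, y) > T, else 0.
def threshold(p, T):
    return [[1 if v > T else 0 for v in row] for row in p]

# Made-up preprocessed values: dark background on the left, bright object on the right.
p = [[10, 20, 80, 90],
     [15, 25, 85, 95],
     [12, 22, 82, 92]]
print(threshold(p, 50))
# -> [[0, 0, 1, 1], [0, 0, 1, 1], [0, 0, 1, 1]]
```

For the reverse case, a dark object on a white background, flipping the comparison (1 when p ≤ T) gives the same object/background labelling.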
So, there will be a collection of 1s and 0s, and, as I told, 1 indicates the object and 0 indicates the background. Now, wherever I get all such 1s, I can draw one boundary, and this boundary will be the approximate boundary of the object in the 2D view; 0 indicates the background. So, on the computer screen, corresponding to that object, you will get some sort of approximate picture of the object, something like this, in 2D. This type of picture we get by using the thresholding operator; this is the purpose of thresholding. Now, the reverse is also possible; for example, I can consider a black object and a white background, and accordingly I have to change this condition to take care of that. Once I have got this particular object, what is the next task? The next task is how to identify or detect the edge, that is, the edge between the object and the background. So, my aim is to detect the edge of this object. Let us see how to detect it; that is nothing but edge detection, which is step 6. In step 6, we take the help of some edge detection technique. Edge detection techniques are nothing but gradient operators; by gradient, we mean the rate of change. On the boundary, the rate of change will be very prominent, and that is why, to detect the edge between the object and the background, we take the help of the gradient operator.
For example, this particular gradient operator is very popular for finding the difference between the object and the background, that is, for identifying the edge. This gradient operator works on the light-intensity values, and it is nothing but (G_x, G_y), where G_x is the partial derivative of p with respect to x and G_y is the partial derivative of p with respect to y. So, we have to find the partial derivatives just to detect the edge. Now, a derivative is computationally expensive; ultimately, we have to write all this in a computer program, and determining this type of derivative directly in a computer program will be computationally very expensive. That is why, to carry out the derivative, we take the help of some templates, and those templates are of this type. For example, if we want to carry out G_x, that is, the partial derivative of the preprocessed light intensity p with respect to x, we use this type of 3 x 3 mask or template. Here you can see that along the positive x direction there is a change in sign: we have minus 1, then 0, then plus 1, so from minus through 0 to plus there is a change in sign; whereas along the y direction there is no change in sign: minus, minus, minus; 0, 0, 0; plus, plus, plus. This is the way we design this particular mask to carry out G_x. Once again, if we add the mask coefficients, the sum becomes equal to 0: for example, minus 1, minus 2, minus 1 gives minus 4, plus 0, plus 4, so this becomes equal to 0.
Now, similarly, we can also find out G_y, which is nothing but the partial derivative of p with respect to y; once again, along the positive y direction there is a change in sign, and reading the mask column-wise we have minus 1, 0, 1; minus 2, 0, 2; minus 1, 0, 1, so that along the x direction there is no change in sign. Keeping that in mind, we design this type of mask or template to carry out this type of derivative; these are the derivatives very frequently used to implement the gradient operator in a computer program. Now, there is another operator which is also very frequently used, called the Laplace operator, and here we generally go for the second-order derivative: L operating on p(x, y) is nothing but ∂²p/∂x² + ∂²p/∂y². Once again, as I mentioned, implementing this directly in a computer program will be computationally very expensive, and that is why, to implement this Laplace operator in a program, we have to use some mask or template. A very typical template used for the Laplacian operator is this: at the middle we have minus 4, on the horizontal side we have plus 1, plus 1, on the vertical side plus 1, plus 1, and the 4 corner neighbours are 0s. The sum of these coefficient values is equal to 0. This is a very widely used operator, the Laplacian operator, for detecting the edge of an object from the background. This is the way we can carry out edge detection: using the gradient operator, particularly the Laplacian operator, we can find the difference between the object and the background and, in fact, identify the edge between them. Thank you.
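The gradient masks and the Laplacian template described above can be applied with the same mask-placement routine used for preprocessing. The image below is a made-up example with a vertical edge; the helper name is mine.

```python
# The gradient (Sobel-style) and Laplacian masks discussed above, applied by
# the same centre-the-mask-and-sum procedure as before.
GX = [[-1, 0, 1],
      [-2, 0, 2],
      [-1, 0, 1]]          # sign change along x; coefficients sum to 0
GY = [[-1, -2, -1],
      [ 0,  0,  0],
      [ 1,  2,  1]]        # sign change along y; coefficients sum to 0
LAPLACIAN = [[0,  1, 0],
             [1, -4, 1],
             [0,  1, 0]]   # -4 at the centre, +1 at the 4-neighbours

def convolve_at(image, row, col, mask):
    return sum(mask[dr + 1][dc + 1] * image[row + dr][col + dc]
               for dr in (-1, 0, 1) for dc in (-1, 0, 1))

# A vertical edge between a dark (0) region and a bright (9) region:
img = [[0, 0, 9, 9],
       [0, 0, 9, 9],
       [0, 0, 9, 9]]
gx = convolve_at(img, 1, 1, GX)         # strong response across the edge
gy = convolve_at(img, 1, 1, GY)         # no change along the edge
lap = convolve_at(img, 1, 1, LAPLACIAN)
print(gx, gy, lap)  # -> 36 0 9
```

In a flat region every one of these masks returns 0 (their coefficients sum to zero), so non-zero outputs mark exactly the pixels where the rate of change is prominent, that is, the edge.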
Robotics_by_Prof_D_K_Pratihar
Let me introduce myself: I am Dr. D. K. Pratihar, professor in the mechanical engineering department, IIT Kharagpur. I welcome you all to this course on robotics. To introduce myself, let me mention that I started learning robotics in the year 1995, during my PhD study at IIT Kanpur, where I worked on a 6-legged robot. After the completion of my PhD, I went for 2 postdoctoral studies: the first one was at the Kyushu Institute of Design, Fukuoka, Japan, and the second was at Darmstadt University of Technology, Germany, under the Alexander von Humboldt Fellowship Program, and in both postdoctoral studies I worked on robotics. So, I have almost 22 years of research experience in different areas of robotics; moreover, I have been teaching robotics, both at the UG and the PG levels, at IIT Kharagpur since 2003. Based on this teaching and research experience, very recently I have written one textbook, and it has been published both in India and abroad. This is the textbook which I have written on robotics, and it will be the textbook for this particular course also. In robotics, I have guided a few PhDs and completed a few sponsored projects. Now, to start with the subject, let us start with the definition of a robot. We define a robot as an automatic machine which is reprogrammable and which is multifunctional. To develop these robots, we copy everything from the human being: we try to copy the hand of a human being, the head, the heart, in an artificial way, to develop the different types of robots. Robots are available nowadays in different forms. For example, robots are available in the form of a manipulator; a manipulator is a robot with a fixed base, and it could be either a serial manipulator or a parallel manipulator.
Robots are also available with a moving base, like the wheeled robots, the multi-legged robots and the tracked vehicles. These multi-legged robots could be the 2-legged robots, 4-legged robots, 6-legged robots and so on. And nowadays drones are very popular, and a drone is also a special type of robot. Now, to study robotics, we have to study 4 modules: kinematics, dynamics, control schemes and intelligence issues. Kinematics and dynamics come under the umbrella of mechanical engineering; control schemes come from electrical and electronics engineering; and intelligence issues are dealt with in computer science. It is a bit difficult to learn this subject, robotics, because one has to be very good in the fundamentals of all these basic subjects, and that is why it is a bit difficult to become a true roboticist. In this course, attempts will be made to discuss all the modules in detail, and there will be a lot of numerical examples and analyses of a number of robots. This is a multidisciplinary course. If you want to take this course, there is no prerequisite; I am planning to start from the very beginning, from scratch, from the very definition of robots, and, going through analysis and how to make them intelligent, all such issues will be discussed in detail. This course will be useful to UG students, PG students and researchers belonging to the various disciplines of engineering and science, and of course it will be applicable to practicing engineers. Now, regarding the books: the textbook on robotics, that is, Fundamentals of Robotics written by me, will be the textbook for this course, and this book deals with all 4 modules of robotics. Of course, we have a few reference books, like Robotics by Fu, Gonzalez and Lee.
It is a good book; similarly, there is another book, Introduction to Robotics by J. J. Craig, and others. These are the reference books for this course. The course material for this robotics course will be available in the form of PPT files; the slides will be supplied to you, and there will be 8 assignments, whose solutions will also be supplied to you. In this particular course, there will be 3 teaching assistants (TAs), and I am going to introduce my TAs. These TAs are doing their PhDs here under my supervision, so I am going to call them to introduce themselves one after another. Let me introduce the 3 TAs for this robotics course: Mr. Saikat Sahu, Mr. Amir Das and Mr. Kondala Rao. These TAs are going to help you in different ways: if you have some doubts, if you have some queries, you can send all such queries to the forum, and all 4 of us are going to discuss them and try our level best to help you in this course. I hope all of you will enjoy this course and learn a lot after attending it. So, I welcome you all to this course. I wish you all the best. Thank you.
Lecture_05_Introduction_to_Robots_and_Robotics_Contd.txt
Now, I am going to find out the workspace for different types of robots. We have already discussed that, based on the coordinate system, robots are classified into four groups: the Cartesian coordinate robot, the cylindrical coordinate robot, the spherical coordinate robot and the revolute coordinate robot. I am going to spend some time finding out what their workspaces should be. Let us first concentrate on the Cartesian coordinate robot, and let us see its picture once again. This is the picture of the Cartesian coordinate robot, and I am going to find out its workspace. Here, there are three linear joints, so very easily we can imagine that the workspace for this robot will be nothing but a cuboid. And this cuboid I can draw very easily: this is the cuboid, and that will be the workspace for this manipulator. Now, for some of the robots, the workspace is so complicated that it becomes a bit difficult to visualize its three-dimensional view, and that is why we take the help of at least two views, the elevation view and the plan view, just to identify or imagine what the workspace for a particular manipulator should be. For this Cartesian coordinate robot, as I mentioned, the workspace is nothing but a cuboid. If I take the elevation view, that means the view in this particular direction, I will get this horizontal stroke and this vertical stroke. And in the plan view, that means the view from the top, I will get this as the horizontal stroke, and the transverse stroke will be nothing but this.
So, the same 3D view I am drawing here with the help of the elevation view and the plan view, and I am going to take the help of these two views to indicate and imagine the workspace of a complicated manipulator. For the Cartesian coordinate robot it is very simple: the workspace is nothing but a cuboid. Now, I am going to consider the workspace of the cylindrical coordinate robot. If you remember, the cylindrical coordinate robot was something like this; let us see it once again. This is the cylindrical coordinate robot. As I told, this shows the maximum horizontal reach and the minimum horizontal reach, this gives the vertical reach, and here there is some rotation. Corresponding to this cylindrical coordinate robot, I am going to imagine the workspace. As I told, the maximum horizontal reach will be identified: this is the maximum horizontal reach, this is the minimum horizontal reach; this is the elevation view and this is the plan view. Here, I am considering 360 degrees of rotation with respect to the fixed base, and that is why we get this type of circle in the plan view. In the plan view also, once again, we get the maximum horizontal reach and the minimum horizontal reach, and in the elevation view I get the maximum vertical reach and the minimum vertical reach. This robot has two linear joints and one rotary joint, that is, the twisting joint, and the workspace we get is nothing but a cylindrical annular space. Next, we will find the workspace of another robot, the spherical coordinate robot. Let us see the picture of the spherical coordinate robot once again.
This is the spherical coordinate robot. Here I have only one linear joint, which gives the maximum horizontal reach and the minimum horizontal reach; we have one revolute joint here and one twisting joint here. I am trying to find out the workspace of this manipulator. As I told, with the help of the revolute joint I will get the maximum vertical reach and the minimum vertical reach. Let us see the workspace. On the elevation view, I will get the maximum horizontal reach, something like this, and this is the minimum horizontal reach; and this will be the vertical reach, obtained with the help of that revolute joint. Here, with the help of the twisting joint, I am considering more or less 330 degrees of rotation, but not 360, and that is why I will get this type of plan view. If I want to imagine the 3D view of this workspace, what I have to do is take this elevation view and rotate the whole thing through 330 degrees; then I can imagine what the workspace in 3D for this spherical coordinate robot should be. So, I hope you got some idea of how to imagine or determine the workspace of a particular manipulator. Now, I am going to concentrate on another robot, the revolute coordinate robot. Here, imagining the workspace is a bit difficult. As I told, we have one twisting joint here, one revolute joint here and another revolute joint here. If I want to imagine its workspace, it is a bit difficult, because I have 3 rotary joints, and for a rotary joint in 3D we get a partial sphere.
We will get partial spheres, and there will be intersections of partial spheres; so, it is a bit difficult to imagine the workspace of this manipulator. To make it simple, I am going to prepare one small sketch, so that I can imagine the workspace of this particular manipulator. Let me do one thing: that sketch I am going to prepare on the slide where I show the workspace, so let me draw the simplified version of the revolute coordinate robot there. If I draw the revolute coordinate robot in its simplified version, it will look like this. Here, as I told, we have the twisting joint; with the help of this twisting joint, supposing I rotate not by 360 degrees but, let me consider, by 330 degrees. And I have a revolute joint here; with the help of this revolute joint, let me assume that I rotate by 90 degrees, say from 0 to 90 degrees, starting from here. And here I have another revolute joint, and supposing I rotate from minus 90 degrees to plus 90 degrees. Supposing this angle is theta 1, this is theta 2 and this particular angle is theta 3: theta 1 varies from 0 to 330 degrees; theta 2 varies from 0 (this corresponds to 0) to 90 degrees in the anticlockwise sense; and theta 3 rotates from minus 90 to plus 90, clockwise to anticlockwise, that means 180 degrees in total. The moment we consider this type of configuration, starting from here, my position is here only. So, I am here. Now, what am I going to do?
I am going to rotate by 90 degrees; so, if I rotate by 90 degrees, I will get this particular point. Now, you see, here I have a joint, and that joint could be here. If I draw the reference, the line parallel to this will be its reference; I can rotate anticlockwise, and I can rotate clockwise, so this particular tip is going to be rotated. With the help of this particular joint, I can rotate this tip something like this. So, I have reached this particular point; the moment I am here, that is the folded-back situation. Now, I can rotate by 90 degrees and come over here; here is the tip. Now, if we release this joint, the tip is going to fall, and it is going to be obstructed here. Similar is the situation for the stretched condition: I am here, and if I release this particular joint, it is also going to fall and be obstructed here. So, this will be the elevation view of this manipulator corresponding to these rotations, and here you can see that, with the help of the first joint, it can be rotated by 330 degrees. If I want to imagine the workspace of this manipulator, this shaded portion has to be rotated by 330 degrees, and then only will I be able to imagine the workspace of this type of complicated manipulator. Now, whenever you are going to give any task to the manipulator, its workspace analysis is very important to carry out; without this workspace analysis, we should not proceed to solve a particular task. We have to find out whether the tip of the manipulator, that is, the end-effector, is able to reach the required points of the workspace in order to perform that particular task. So, this workspace analysis is very important. And, as we know, we human beings can visualize only up to three dimensions.
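One simple way to carry out this kind of workspace analysis numerically is to sample the joint ranges and compute the tip position by forward kinematics. The sketch below does this for a planar 2-revolute-joint arm, which corresponds only to the elevation-view portion swept by theta 2 and theta 3; the link lengths and the sampling density are assumptions of mine, not the lecture's robot.

```python
import math

# Illustrative workspace sampling for a planar arm with two revolute joints.
# Link lengths and joint limits are made-up assumptions.
L1, L2 = 1.0, 0.8
THETA2_RANGE = (0.0, math.pi / 2)            # 0 to 90 degrees
THETA3_RANGE = (-math.pi / 2, math.pi / 2)   # -90 to +90 degrees

def tip(theta2, theta3):
    """Forward kinematics: tip position of the two-link chain."""
    x = L1 * math.cos(theta2) + L2 * math.cos(theta2 + theta3)
    y = L1 * math.sin(theta2) + L2 * math.sin(theta2 + theta3)
    return x, y

def grid(lo, hi, n):
    return [lo + i * (hi - lo) / n for i in range(n + 1)]

# Sample the joint ranges on a grid; the cloud of tip points outlines the
# reachable region (the shaded area that would then be swept by theta 1).
points = [tip(t2, t3)
          for t2 in grid(*THETA2_RANGE, 20)
          for t3 in grid(*THETA3_RANGE, 20)]
reach = [math.hypot(x, y) for x, y in points]
print(min(reach), max(reach))  # reach stays between |L1 - L2| and L1 + L2
```

Plotting the sampled points would reproduce, numerically, the shaded elevation-view region drawn on the board; rotating that cloud through the 330-degree theta 1 range then gives the full 3D workspace.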
We take the help of the elevation view and the plan view and try to imagine the 3D workspace; but suppose I have a manipulator with, say, 6 degrees of freedom, like PUMA. PUMA has 6 rotary joints, 3 twisting joints and 3 revolute joints, and it is a bit difficult to imagine the workspace of this PUMA; there will be a lot of confusion if you want to imagine the workspace of this industrial robot with 6 degrees of freedom. So, the concept of workspace analysis is very important. Now, I am going to define 3 terms which are very important and very frequently used, but, truly speaking, these 3 terms are not the same, and there are differences among them. I am going to define these 3 terms: resolution, accuracy and repeatability of a particular manipulator. This information on resolution, accuracy and repeatability is required if you want to write the specification of a robot which you are going to purchase: you have to clearly mention how much resolution you want, how much accuracy you want and what repeatability you want. Let us try to understand the differences among these three terms. Before I start, let me mention once more that these three terms are not the same; there are differences among resolution, accuracy and repeatability. Let us start with the first one, resolution. The resolution is nothing but the smallest allowable position increment that the robot can measure, and this is almost similar to the concept of the least count of a measuring device. For example, if we use a screw gauge or slide calipers, there will be a least count, and that least count is nothing but the resolution of that measuring device. Now, this resolution could be of two types.
There could be programming resolution or there could be control resolution. Now, whenever we write down the computer program just to teach a particular point (which I have not yet discussed; I will be discussing it after some time), I should know that 1 basic unit corresponds to how much displacement. So, this programming resolution is nothing but the smallest allowable position increment allowed in computer programming corresponding to 1 unit. Now, this particular programming resolution is expressed in basic resolution units; in short, this is known as BRU. So, 1 BRU is equal to 0.01 inch, or in millimeters this could be 0.001 millimeter (nowadays this is also available), and for the rotary movement this particular BRU could be 0.1 degree. So, whenever we are going to purchase a robot, we will have to mention how much programming resolution we want, that is, what is the programming least count for this manipulator. Now, then comes the concept of the control resolution. As I mentioned, if we want to use the servo-controlled robot, that is, the closed-loop control system, we generally use some feedback device. And, in robots, very frequently we use different types of sensors to measure the position, and out of all the position-measuring sensors, the optical encoder is a very popular one. Now, I can consider one optical encoder to measure the angular displacement of a particular rotary shaft. Let me take one example: supposing that this is the output shaft of one electric motor, and I want to measure the rotation of this particular output shaft of the electric motor. So, how to measure? What we do is: here, we put one optical encoder, and an optical encoder is nothing but a collection of a number of concentric rings placed one after another, and on these particular concentric rings there will be dark regions and clear regions. 
Now, if there is an opaque region, the light will not be able to pass, but if there is a clear region, the light will be able to pass. So, here we have got this particular optical encoder. On one side we have got the light source, and on the other side we have got the photo-detector. The shaft is rotating, the whole optical encoder is also rotating, and corresponding to this particular rotation there will be some binary number generated. Now, by decoding this particular binary number, we can find out what should be the angular displacement of this rotary shaft, and this particular angular displacement is compared with the target value; we try to find out the error, and this particular error is compensated. Now, the working principle of this particular optical encoder I will be discussing after some time, in detail. For the time being, let me consider that we are using one optical encoder as a feedback device, and supposing that this particular feedback device gives, for one complete revolution, that is, 360 degrees, say 1000 electrical pulses. So, 1000 electrical pulses correspond to 360 degrees of rotation of this particular shaft, and one electrical pulse corresponds to 0.36 degrees. We cannot think of a fraction of one electrical pulse; that means, the control resolution, or the resolution of the feedback device, will be 0.36 degrees. So, this will be the control resolution for this feedback device. So, this is the way we define the resolution of a particular robot. Now, I am just going to define what we mean by accuracy. This accuracy is nothing but the precision with which a computed point can be reached. 
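Going back to the control resolution worked out above: with 1000 pulses per revolution, the smallest measurable increment is 360/1000 = 0.36 degrees. As a one-line helper (the function name is mine, not from the lecture):

```python
def control_resolution_deg(pulses_per_rev):
    """Smallest angular increment an incremental optical encoder can resolve:
    one full revolution (360 degrees) divided by the pulses it produces."""
    return 360.0 / pulses_per_rev
```

So an encoder producing 1000 pulses per revolution has a control resolution of 0.36 degrees, and a finer encoder (more pulses) gives a proportionally smaller increment.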
Now, let me take one very simple practical example, and for simplicity, let me take the example of a 2-degrees-of-freedom serial manipulator. So, let me draw one 2-degrees-of-freedom serial manipulator. This is X and this is Y in the Cartesian coordinate system. This is the base, I have got one link like this, and another link is something like this. The length of the first link is L1 and the length of the second link is L2. The joint angles: this particular joint angle is theta_1 and this particular joint angle is theta_2. And supposing that the tip of this particular manipulator is denoted by P, having the coordinates (x, y). Now, at this particular joint I have got a motor, and here I have got another motor; with the help of these motors, I am just going to generate the joint angles. Now, this particular joint angle is theta_1, and with respect to L1 the joint angle is theta_2; so, with respect to the X axis, the total angle of the second link will be theta_1 plus theta_2. If this is the situation, very easily we can write down the general expressions for this particular x and y. From trigonometry, x = L1 cos(theta_1) + L2 cos(theta_1 + theta_2), and similarly, y = L1 sin(theta_1) + L2 sin(theta_1 + theta_2). If I know the values of theta_1 and theta_2, can I not calculate x and y? We can. So, we can calculate the numerical values for this particular x and y. Now, I am just going to give that command to this particular robot, which has the link lengths L1 and L2; I am just going to generate theta_1 and theta_2, and supposing that the robot has started working. 
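The two trigonometric relations just written translate directly into code. A minimal sketch (the names are mine, not from the lecture):

```python
import math

def forward_kinematics(l1, l2, th1, th2):
    """Tip position (x, y) of a planar 2-DOF serial manipulator:
    x = L1 cos(theta1) + L2 cos(theta1 + theta2)
    y = L1 sin(theta1) + L2 sin(theta1 + theta2)"""
    x = l1 * math.cos(th1) + l2 * math.cos(th1 + th2)
    y = l1 * math.sin(th1) + l2 * math.sin(th1 + th2)
    return x, y
```

With both joints at zero the arm is stretched along X, so the tip sits at (L1 + L2, 0).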
So, starting from a position corresponding to this L1 and L2, supposing that it is going to reach this particular point. Now, what is the guarantee that it will be able to reach exactly the same point? There is no guarantee. There could be a little bit of error; while reaching this particular point, the point it is going to reach could be here, or it could be here, or here. So, there could be some small deviation from the computed or calculated point. Now, this particular deviation between the computed point and the point which has been reached is known as the accuracy of this robot, and that is expressed in terms of millimeters. Now, it could be either positive or negative, depending on whether that point is exceeded or not; so, I can find it out in both ways. So, accuracy could be either positive or negative. Now, I am just going to define another term, that is called the repeatability. By repeatability, we mean the following: supposing that we have taught a robot to reach a particular point (there are several teaching methods, which I will be discussing after some time). So, with the help of that particular teaching method, say I have taught the robot to reach a particular point, say the point A. Let me take the same example of the 2-degrees-of-freedom serial manipulator. So, this is nothing but the manipulator; this is L1 and this is L2. So, I have taught it to reach this particular point. Now, once I have taught it, if I just run this particular robot, say, 20 times, then at each of the 20 times there is no guarantee that it is going to reach exactly the point which I have taught; there could be some deviation. Now, this particular deviation is known as the repeatability of this manipulator, ok? 
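One common way to quantify the two definitions numerically, given a commanded point and the points actually reached over repeated runs, is sketched below. Treating accuracy as the distance from the commanded point to the mean reached point, and repeatability as the largest spread of the runs about that mean, is an illustrative convention, not a definition taken from the lecture:

```python
import math

def accuracy_and_repeatability(commanded, reached):
    """commanded: (x, y) target; reached: list of (x, y) points from repeated runs.
    Returns (accuracy, repeatability): distance from the commanded point to the
    mean reached point, and the largest deviation of any run from that mean."""
    n = len(reached)
    mx = sum(p[0] for p in reached) / n
    my = sum(p[1] for p in reached) / n
    accuracy = math.hypot(mx - commanded[0], my - commanded[1])
    repeatability = max(math.hypot(p[0] - mx, p[1] - my) for p in reached)
    return accuracy, repeatability
```

A robot can thus be repeatable without being accurate: if every run lands in the same wrong spot, the spread is tiny but the offset from the commanded point remains.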
So, this is the way we define the repeatability of this particular manipulator, and once again let me repeat: if we want to prepare the specification of a robot, we will have to mention what resolution, accuracy and repeatability we want, and based on our requirements, the manufacturer is going to manufacture that particular robot and supply it to us. Thank you.
Robotics_by_Prof_D_K_Pratihar
Lecture_06_Introduction_to_Robots_and_Robotics_Contd.txt
Now, we have already discussed that we use robots in manufacturing units. Today, I am just going to discuss the various applications of robots. We know a little bit that the robots are used in manufacturing units nowadays, and there are specific requirements also, which we have already discussed. Now, if I use robots in a manufacturing unit, we will be getting a few advantages, so I am just going to see these advantages first. For example, the robot can work in dirty and hazardous environments, like the nuclear power plant. Now, if I use robots, there is a possibility that we will be able to produce goods of good quality with fewer errors, and the productivity will be high. And, if we use robots to replace human labour, there is a possibility that one robot can replace a large number of workers, and by doing that there will be a saving of labour cost. Now, if we do some sort of repetitive job with the help of manual operators, what will happen is that those human beings may not like that repetitive task, and there will be a lot of mistakes and a lot of wastage of the products. Now, if we can give this type of repetitive task to the robot, there is a possibility that the chance of rejection will be less, and due to that there will be some saving of material cost. So, if we use robots, these are some advantages we will be getting, and moreover, as I have already mentioned, for the repetitive task it is better to use the robots, because the human operator may get bored performing the repetitive task. And, that is why the robots have become very popular nowadays in modern manufacturing units. And, nowadays, the robots are used in manufacturing units to perform a variety of tasks. For example, they can do arc welding and spot welding, and they can do some sort of spray painting. 
For example, nowadays the spray painting on the car body: particularly, Maruti uses the manipulator to do this type of spray-painting task, but we will have to be careful while doing this particular spray painting; there should not be any discontinuity. Now, regarding the arc welding and the spot welding, the way the robot can help is as follows. Let me prepare a very simple sketch, and we will understand. Supposing that I have got two steel plates, which I am just going to join by welding; say, I am going for some sort of continuous arc welding and spot welding. So, this is plate 1 and this is plate 2, and these two plates I am going to join. Now, what I will do is, I will start from here and I will go on doing this particular continuous arc welding with the help of one robot, say PUMA. So, I am using PUMA to carry out this type of arc welding. Now, if I start from here, there is a possibility that there will be some distortion, and due to this distortion the plate may take a position something like this. So, if I start the welding here, the other side may be distorted, and there I will not be able to carry out this type of continuous arc welding. Then, how to overcome this particular problem? To overcome this particular problem, what we do is the following: before we carry out this type of arc welding with the help of the manipulator (so, this is plate 1 and plate 2), we first go for the spot welding. With the help of the manipulator, we just do one spot welding here, another spot welding here, another spot welding here, and then we start the continuous arc welding. So, this particular spot welding is going to arrest the distortion due to the welding, and you will be getting a continuous weld. So, this type of task you can give to the manipulator. 
The same manipulator will be able to perform the spot welding and then the continuous arc welding. So, this is one very good application of a manipulator in industry. Then comes the pick-and-place type of operation. In the machine shop, before the assembly, a few components are to be transported from one place to another; so, we can use the robots. The robot can pick that particular object and place it at another place. Then comes grinding: we can carry out the grinding operation, and this particular grinding wheel will be attached to the end-effector of the manipulator. We can do drilling, which I have already discussed: we can make some drilled holes in steel plates. We can also do milling; that also I have discussed, to cut some complicated profile on one side of a steel plate. So, what we can do is take the help of this type of milling cutter, and this milling cutter will be gripped by the gripper or the end-effector of the manipulator. So, in manufacturing units, nowadays, the robots are used to serve a variety of purposes, to perform a variety of tasks. But, besides the manufacturing unit, nowadays the robots are used for some other types of applications. For example, robots are nowadays used for underwater applications. For example, inside the sea, we send a robot just to search for the valuable stones or the gems; it will try to find out the possible locations of the valuable stones or the gems. Now, if we want to carry out some study of the underwater environment: inside water, we have some living creatures, so if you want to study their lives, or if you want to study that underwater environment, we can take the help of this type of underwater robot. Then comes the crude petroleum. The seabed could be a very good source of crude petroleum, and you will have to drill it out; for this drilling purpose, we can use robots. This particular crude petroleum is to be transported. 
There must be some pipelines through which the crude oil will pass and be transported to another place. So, we can use the underwater robots to carry out some sort of inspection of this particular pipeline, and, if required, the underwater robot can also do a little bit of repair and maintenance work. It can do a little bit of welding also; underwater welding is nowadays also possible, by creating a vacuum there, and with the help of the robots this type of welding can be carried out. So, these are the various applications of underwater robots. Now, these underwater robots are designed and developed in different ways; different designs are available. We can have some sort of multi-legged robot as the underwater robot; sometimes, we use a tracked vehicle as the underwater robot; but generally, we do not use wheeled robots as underwater robots, because the seabed may not be very smooth. But, we may have the provision that, depending on the requirement, sometimes we can use it as a multi-legged robot or as a tracked vehicle. Now, if you see these underwater robots, they are developed in two different forms: one is called the ROV, that is, the Remotely Operated Vehicle, and another is the AUV, that is, the Autonomous Underwater Vehicle. Now, there is a basic difference between ROV and AUV. The ROV has a centralized control: there will be a central computer, which is going to control the movement of this particular robot, and that is a remotely operated vehicle. But the AUV is the autonomous underwater vehicle; there, the control system which we use is decentralized. So, each of these particular robots is intelligent, and they can take their own decisions; they are decentralized. 
So, these particular underwater robots are developed both in the form of ROVs as well as AUVs, and, as usual, the robot should be equipped with some sensors just to collect information about the environment. And, of course, there will be some propellers and thrusters, just to control the movement of these underwater robots. Now, next come the medical applications. Nowadays, the robots are extensively used in medical science, and there are several applications of robots in medical science; I am just going to discuss a few. For example, in telesurgery, we extensively use the robots. We generally use two robots: one is called the master robot and another is called the slave robot. Now, the slave robot is actually going to carry out the operation on the patient, and with the help of the master robot, the doctor is going to give the instructions. Now, here, there is a physical distance between the doctor and the patient; maybe the patient is, say, 5 kilometers away from the doctor, and, with the help of these two robots, the doctor is going to carry out that particular operation. Now, here, the slave robot is equipped with the surgical instruments: there will be knives, scissors and all such things, and, along with these knives and scissors, there will be force and torque sensors mounted. On the other side, the doctor has the master robot, and there will be one control panel. So, the moment the slave robot is going to carry out some operation with the help of the surgical instruments, it will have to apply some force, or sometimes it will have to create some moment, and all such forces, torques and moments will be determined with the help of the force or torque sensors. 
And, with the help of the wireless connection, the required torque, force, moment and all such things will come to the doctor, who is sitting at a place maybe around 5 kilometers away from that particular patient. Now, after seeing that particular information on the computer screen, the doctor is going to give instructions to the slave robot with the help of this particular master robot. So, this is the way we carry out telesurgery with the help of robots; we use two robots to carry out this type of telesurgery. Now, next comes the micro-capsule multi-legged robot. This is another very good design for robots in medical science. Here, the robot is actually a multi-legged robot, a very small robot; shape- and size-wise, it could be equivalent to one very small capsule, and inside this particular capsule we have got that multi-legged robot. And, here, this multi-legged robot is equipped with a high-speed camera, and this particular robot can be used just to identify whether there is any tumor, say, inside the digestive system of the patient. This is, as I told you, just like a tablet or a capsule; the way we take a capsule, the patient is going to swallow it. So, this particular capsule robot, with the help of, say, water, will go inside the digestive system, and there it will start walking. Now, remember, we do not use any motor for this type of small robot. Instead, what we do is control the movement of this particular small robot from outside the body of the patient with the help of one permanent magnet; that means, we have got one magnetic material inside, and if we move this particular permanent magnet from outside the body of the patient, there is a possibility that we can control the movement of this particular small robot. Now, the reason why we do not use any motor is that the size would become larger if I used some motor. 
So, generally, we do not use any such motor for this type of robot, but it has got a camera. Now, the camera needs power, and for that we generally use a very small lithium battery along with the camera. So, whenever this particular robot is working inside the digestive system, at regular intervals it will take a snap and it will send this particular information to the doctor. So, the doctor on the screen will be able to see the picture of what is there inside the stomach, and whether there is any tumor. So, the possible location of the tumor can be identified with the help of this type of micro-capsule multi-legged robot; it has got some applications. Then comes the prosthetic device. Nowadays, one field of robotic research has become very prominent, that is called rehabilitation robotics. Now, in this rehabilitation robotics, what we do is try to take the help of different types of robots. For example, in rehabilitation robotics, we design and develop some prosthetic devices, and sometimes we go for some sort of orthotic devices. So, these prosthetic and orthotic devices are going to help the old and weaker people during walking. And, both these prosthetic and orthotic devices have nowadays become very popular, and these are once again some special types of robots, some intelligent robots. So, these are some of the applications of robots in medical science. Then come some space applications. Nowadays, the robots are used in space very frequently. You might have heard about the planetary exploration robots: we know about Curiosity, and then Spirit and Opportunity. These are nothing but the planetary exploration rovers. So, with the help of these planetary exploration rovers, we can collect information about the planet; we can collect information about Mars, and these robots are all intelligent robots. 
For example, if you talk about Curiosity, which is an intelligent robot sent to Mars, it has the capability to be used as a multi-legged robot, or it can also be used as a tracked vehicle. Now, there are some other applications: in the space station, we can use robots just to do some sort of inspection, survey and maintenance jobs. We can carry the astronauts with the help of robots, and in future, in fact, the astronauts will be replaced by the robo-nauts. Now, in the space station, we have got a few spacecraft; what we can do is use robots for the deployment and retrieval of these particular spacecraft. And, nowadays, we use some very small robots, and these are known as the free-flying robots. Now, these free-flying robots are very small in size, and we can also send them to Mars to collect information; this type of free-flying robot is just like a fly, very small in size, and these particular robots are in the design and development stage. And, these will be once again a little bit intelligent also; we can send them to space to collect information of the space with their help. So, these are some of the applications which we have in space science. Now, in agriculture, nowadays, we are planning to use different types of robots, and in fact, we can use robots just for spraying pesticides in the field. We can use robots for spraying fertilizers in liquid form: the fertilizers we can mix with water to make a liquid, and then we can use some sort of robot in the field just to spray that particular fertilizer. We can use robots for cleaning weeds. For example, there may be some unwanted small plants in the field; those are called weeds. Now, for cleaning the weeds, nowadays we are thinking about how to use the robots. We can use a robot for sowing seeds in the field. 
Now, while sowing seeds, we follow a certain pattern and accordingly, we can take the help of the robots just to actually sow the seeds in the field. We can use robot for inspection of the plants; the health inspection, the quality inspection of the food grain. So, we can use this type of robots. Now, here, I am just going to mention a few other applications. Nowadays, in the house actually, as a replacement of the maidservant, people are thinking whether we can use some robots. Then, for garbage collection, whether the robots can be used, then underground coal mining. There are already a few applications, where robots are used for coal mining. Then, comes the sewage line cleaning. This is already in use. Now, just to clean the sewage line with the help of robots, some special type of robots can be designed and developed. And, these are nowadays, in fact, used just to clean the sewage line; then for firefighting, nowadays, the robots are also used, and so on. So, there are many applications of the robots and in future, the robots will be used to serve a variety of purposes in more fields. The robots will be used because the robots will become more intelligent and autonomous in future and we will be able to use this robot to serve a variety of purposes. Thank you.
Lecture_28_Robot_Dynamics_Contd.txt
Now, I try to derive the expressions for the other h terms, that is, h_211. For h_211, i = 2 and the last two subscripts are both equal to 1, so j runs from max(2, 1, 1) = 2 up to 2, and there will be only one term. So, this is nothing but h_211 = Trace(U_211 J_2 U_22^T): the first factor is U with j = 2 and the derivative subscripts 1, 1; then comes J_2; then comes U with j = 2 and i = 2, transposed. So, this is the way you can find out the expression for this particular h_211. Similarly, for h_212: once again, the maximum of (2, 1, 2) is 2, so j runs from 2 to 2 and there is only one term, h_212 = Trace(U_212 J_2 U_22^T). Then, I try to find out h_221: the maximum of (2, 2, 1) is again 2, so there is only one term, h_221 = Trace(U_221 J_2 U_22^T). Then, I can also find out h_222: here, once again, there will be only one term, h_222 = Trace(U_222 J_2 U_22^T). So, this is the way we can find out the expressions for h_211, h_212, h_221 and h_222. 
So, using this, I can find out the final expressions for these particular h terms. Take h_211, whose expression we have already seen. Now, we know that U_211 is nothing but the partial derivative of U_21 with respect to theta_1; J_2 we know, and U_22 transpose we know. So, these matrices we can multiply, and we can find out the trace, and the trace becomes equal to the expression for h_211. Next, h_212: here, U_212 is nothing but the partial derivative of U_21 with respect to theta_2. All the terms are known, and if you multiply these three matrices and find the trace, it becomes equal to zero; so h_212 = 0. Following the same method, we can find out h_221, and we will be getting h_221 = 0. Then comes h_222, whose expression we have already derived: here, U_222 is nothing but the partial derivative of U_22 with respect to theta_2, and if you just find it out, you will be getting this particular matrix. The other matrices are also known, and if you multiply these 4 x 4 matrices and take the trace value, you will be getting that it is equal to zero; so h_222 = 0. Following this particular method, we can find out all the h values. Now, if you see the expressions for tau_1 and tau_2: all the d terms and all eight h terms we have determined; the only things which are left are c_1 and c_2, that means, these two gravity 
terms are yet to be determined. Now, let us see how to determine these particular gravity terms. To determine the gravity terms, I am just going back to the general expression for c_i, which is a sum over j from i to n of terms of the form -m_j g_bar U_ji r_bar_j^j, and let us try to find out the expressions for c_1 and c_2 from there. For c_1, i = 1, so j runs from 1 to 2 and there are two terms: c_1 = -(m_1 g_bar U_11 r_bar_1^1 + m_2 g_bar U_21 r_bar_2^2). I can also find out the expression for c_2: for i = 2, j runs from 2 to 2, so there is only one term, c_2 = -m_2 g_bar U_22 r_bar_2^2. So, these are the expressions for c_1 and c_2. 
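The expressions c_1 = -(m_1 g_bar U_11 r_bar_1^1 + m_2 g_bar U_21 r_bar_2^2) and c_2 = -m_2 g_bar U_22 r_bar_2^2 can be checked numerically. The sketch below assumes the conventions the lecture uses for this planar 2R arm: g_bar = [0, -g, 0, 0] as a 1 x 4 row, mass centres at [-l/2, 0, 0, 1]^T in each link frame, and simple rotate-then-translate link transforms; all function names are mine.

```python
import math

def rot_z(t):
    # 4x4 homogeneous rotation about z
    c, s = math.cos(t), math.sin(t)
    return [[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]

def trans_x(d):
    # 4x4 homogeneous translation along x
    return [[1, 0, 0, d], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

# Q picks out the derivative of a z-rotation with respect to its angle:
# d/dt Rz(t) = Q * Rz(t)
Q = [[0, -1, 0, 0], [1, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]

def gravity_terms(m1, m2, l1, l2, g, th1, th2):
    A1 = mat_mul(rot_z(th1), trans_x(l1))   # frame 1 w.r.t. base
    A2 = mat_mul(rot_z(th2), trans_x(l2))   # frame 2 w.r.t. frame 1
    U11 = mat_mul(Q, A1)                    # dT1/d(theta1)
    U21 = mat_mul(mat_mul(Q, A1), A2)       # dT2/d(theta1)
    U22 = mat_mul(A1, mat_mul(Q, A2))       # dT2/d(theta2)
    r1 = [-l1 / 2, 0, 0, 1]                 # mass centre of link 1 in frame 1
    r2 = [-l2 / 2, 0, 0, 1]                 # mass centre of link 2 in frame 2
    g_row = [0, -g, 0, 0]                   # gravity row vector (1 x 4)

    def term(m, U, r):
        v = [sum(U[i][k] * r[k] for k in range(4)) for i in range(4)]  # U * r
        return -m * sum(g_row[i] * v[i] for i in range(4))             # -m g U r

    c1 = term(m1, U11, r1) + term(m2, U21, r2)
    c2 = term(m2, U22, r2)
    return c1, c2
```

At any configuration, c2 agrees with (1/2) m_2 g l_2 cos(theta_1 + theta_2), the closed form the lecture arrives at, and c1 with the standard planar 2R result (1/2 m_1 + m_2) g l_1 cos(theta_1) + (1/2) m_2 g l_2 cos(theta_1 + theta_2).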
Now, let us see how to derive this further, using these particular expressions. Take c_1: this is exactly the same expression I wrote. Here, m_1 is the mass of the first link, and g is the acceleration due to gravity. Now, as we discussed, g has three components, along x, y and z. Here, with the way the coordinate system has been considered (if you look at the coordinate system once, you will understand), g is acting opposite to the y direction, vertically downward, and that is why we write the row vector g_bar = [0, -g, 0, 0]: in place of the y component I have written -g, the z component is zero, and one extra zero has been put at the end; the reason I will tell you. Now, this U_11 we have already determined, and it is nothing but a 4 x 4 matrix. Then comes r_bar_1^1; how to determine it? If I consider link 1, supposing that its total length is l_1, its mass centre is at the middle, and the coordinate system is attached at the end of the link, because we are interested in determining the reaction torque there. So, with respect to this frame, with axes x, y and z, the mass centre lies at -l_1/2 along x. So, r_bar_1^1 is nothing but the homogeneous coordinate of the mass centre, that is, [-l_1/2, 0, 0, 1]^T: x is -l_1/2, y is 0, z is 0, and just below the position vector we put a 1. Now, you check the dimensions: U is 4 x 4, and this particular 
r_bar_1^1 is written as 1 x 4, but there is a transpose, so it becomes a 4 x 1 matrix. So, this is a 4 x 1 matrix and U is a 4 x 4 matrix; if you multiply, ultimately I will be getting a 4 x 1 matrix. Now, this particular g matrix has to be 1 x 4, otherwise we cannot multiply, and that is why we have made it 1 x 4. g has got only 3 components, and that is why this particular extra 0 has been added, just to make it 1 x 4; this 1 x 4 row can then be multiplied by the 4 x 1 column. Exactly in the same way, we can determine the second term here also; the only thing is, the expression of U_21 is different, and r_bar_2^2 is obtained by considering link 2 in place of link 1: for link 2, the mass centre is [-l_2/2, 0, 0, 1]^T. If you just put all such things in and multiply, then I will be getting the expression for c_1, and this is nothing but the one shown. Now, let us see how to determine the expression for c_2, following the same method. The expression for c_2 I have already got; exactly in the same way, g_bar has to be written, U_22 is known, and r_bar_2^2 is also known. If you multiply, then I will be getting c_2 = (1/2) m_2 g l_2 cos(theta_1 + theta_2). So, this particular expression we will be getting for c_2. Once you have got this particular c_2, we are now in a position to write down the expressions for the joint torques. Now, in these expressions, all the terms we got (two values of d, four values of h and one value of c for each joint) we just add up; if we just write them up and if 
we just arrange them, then we get this type of expression. This big expression is multiplied by theta 1 double dot, that is: (1/3 m1 + m2) l1 squared, plus (1/3) m2 l2 squared, plus m2 l1 l2 cos(theta 2), plus (1/4) r squared (m1 + m2). And you can see that all such terms are related to the geometry, that is, the mass and the length of the links, and there could be some radius term here also; here there is no radius term, but yes, radius terms do appear. So these are all related to the geometry, and these are nothing but the inertia terms multiplied by theta 1 double dot. Similarly, these particular terms are multiplied by theta 2 double dot, and theta 2 double dot is nothing but the angular acceleration of the second joint. Now here, once again, you can see m2 l2, then l1, and we have got r, that is, the radius of the link, so here the radius terms are also there; these are once again the inertia terms. Then come the terms involving theta 1 dot theta 2 dot and theta 2 dot squared; these are nothing but the Coriolis and centrifugal terms. And this particular part, involving cos(theta 1) and cos(theta 1 + theta 2), is nothing but the gravity term. So up to this point we have got the inertia terms, then the Coriolis and centrifugal terms, and then the gravity terms. Another observation we should make: we are trying to find out the expression for the joint torque at joint 1, but if you look carefully there are a few terms related to theta 2, for example cos(theta 2), and theta 2 double dot cos
(theta 2), then theta 2 dot and theta 2 dot squared, then cos(theta 1 + theta 2). That means, although we are trying to find out the torque at joint 1, the second joint angle has got some contribution towards this particular tau 1. Now we are going to see the expression for the second joint torque, that is, tau 2. These particular terms are the inertia terms: ((1/3) m2 l2 squared + (1/4) m2 r squared + (1/2) m2 l1 l2 cos(theta 2)) theta 1 double dot, plus ((1/3) m2 l2 squared + (1/4) m2 r squared) theta 2 double dot. So these are all inertia terms, one multiplied by theta 1 double dot and one multiplied by theta 2 double dot. Then we have got another term here: this term is the centrifugal term, involving theta 1 dot squared. And we have got the gravity term. So this particular expression for tau 2 once again contains three kinds of terms. And once again, if we look into this, we are trying to find out the expression for the joint torque tau 2, and here theta 1 double dot has got a significant contribution; then here we have got cos(theta 1 + theta 2), so theta 1 has got a significant contribution on joint torque 2.
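The torque expressions just derived can be collected into a short numerical sketch. The inertia coefficients below follow the lecture's expressions; the Coriolis/centrifugal and gravity terms use the standard slender-rod results for a planar 2R arm (the lecture names these term families but does not spell them all out), so treat those, along with the function name, as assumptions.

```python
from math import cos, sin

def two_link_torques(th1, th2, dth1, dth2, ddth1, ddth2,
                     m1=1.0, m2=1.0, l1=0.5, l2=0.5, r=0.0, g=9.81):
    """Joint torques of a planar 2R arm (uniform rods, mass center at mid-length).

    Inertia coefficients follow the lecture; the Coriolis/centrifugal and
    gravity terms are the standard slender-rod results (an assumption here).
    """
    # Inertia terms; the r**2 entries vanish for a slender link (r -> 0)
    d11 = (m1/3 + m2)*l1**2 + (m2/3)*l2**2 + m2*l1*l2*cos(th2) + (r**2/4)*(m1 + m2)
    d12 = (m2/3)*l2**2 + (m2/4)*r**2 + 0.5*m2*l1*l2*cos(th2)
    d22 = (m2/3)*l2**2 + (m2/4)*r**2
    # Coriolis/centrifugal terms (involve dth1*dth2, dth2**2 and dth1**2)
    h1 = -0.5*m2*l1*l2*sin(th2)*(2*dth1*dth2 + dth2**2)
    h2 = 0.5*m2*l1*l2*sin(th2)*dth1**2
    # Gravity terms; c2 matches the lecture's (1/2) m2 g l2 cos(theta1 + theta2)
    c1 = (0.5*m1 + m2)*g*l1*cos(th1) + 0.5*m2*g*l2*cos(th1 + th2)
    c2 = 0.5*m2*g*l2*cos(th1 + th2)
    tau1 = d11*ddth1 + d12*ddth2 + h1 + c1
    tau2 = d12*ddth1 + d22*ddth2 + h2 + c2
    return tau1, tau2
```

Note the coupling the lecture points out: the off-diagonal coefficient d12 multiplies theta 2 double dot in tau 1 and theta 1 double dot in tau 2, so an acceleration at one joint produces torque at the other.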
And that is why these particular contributions are coupled, and due to this coupled contribution the better way should be to determine the joint torques by considering the full multi-body dynamics, so that the coupling terms can be considered very efficiently and you get very good expressions for tau 1 and tau 2. But the method which I have discussed has got one advantage, I should say: if you use this particular method, there is a possibility that you get a very structured form of the expressions for the joint torques, which may not be available with other methods; this method gives a very structured form. Now, another thing I am just going to mention: if I consider a slender link, that means l is very large compared to r, then the terms involving r squared are small, so those terms we can neglect. For example, from this particular expression here there is one r squared term which we can neglect; it will tend to zero if we consider the slender link. Similarly, here there is another r squared term, so this will also tend to zero if we consider the slender link, and the expression for tau 1 will become simpler. But here you will not find any such terms involving r, so this is the way we can make it simple by considering that the links are slender. Similarly, here, if you consider the slender link, this term will tend to zero, and another term will become equal to zero, so the expressions for tau 1 and tau 2 will become simpler. Now, in robotics, what we do at each of the robotic joints is put a DC motor, and the motor is going to provide this particular torque. This particular torque is going to generate the joint angle, and we will have to very
accurately generate that particular joint angle. Now, how to generate that very accurate joint angle I will be discussing after some time, while discussing the control scheme. Now, once you have got the variation of this torque as a function of time, we can think about what the power requirement should be for a particular joint. And if you know the power requirement, we can prepare the specification of the motor which we are going to put at that particular robotic joint, so that the robotic joint will be able to provide that particular torque and will be able to generate that particular rotation very smoothly. Thank you.
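As a rough sketch of that motor-sizing step: mechanical power at a joint is torque times angular velocity, so scanning a simulated torque and velocity history for its peak gives a first-cut power rating. The function name and the safety-margin value below are illustrative assumptions, not from the lecture.

```python
def motor_power_spec(torques, velocities, margin=1.5):
    """First-cut motor rating for one joint: peak |tau * omega| over the
    sampled torque and angular-velocity histories, inflated by a safety
    margin. The margin value is an illustrative assumption."""
    peak = max(abs(t * w) for t, w in zip(torques, velocities))
    return margin * peak
```

For example, torque samples [1, 2, 3] N·m against velocity samples [4, 1, 2] rad/s peak at 6 W of mechanical power before the margin is applied.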
Robotics by Prof. D. K. Pratihar
Lecture 04: Introduction to Robots and Robotics (Contd.)
We are discussing how to use the principle of Grubler’s criterion, to determine the degrees of freedom of different types of manipulator. Now, we have already seen for the serial planar manipulator. So, we got the degrees of freedom as 4. And here, this is a serial manipulator, because all the links are in series. Now, for one very simple parallel manipulator, we also determined what should be the degrees of freedom. And, we got for this particular parallel manipulator, the degrees of freedom as 3. Now, I am just going to take the example of another more complicated parallel manipulator. And, I am just trying to find out, what should be the degrees of freedom for this parallel manipulator using Grubler’s criterion. Now, this is nothing but a spatial parallel manipulator. Now, here, we have got 6 legs, and each leg consists of one universal joint or Hooke joint, and one prismatic joint, and we have got one spherical joint. Similarly, we have got 6 such legs. And the top plate is nothing but this, so this is nothing but the top plate for this particular manipulator and the base plate is kept fixed to the ground. Now, we will have to find out the degrees of freedom of this manipulator. So, what we do is, for each leg, we try to find out, how many constraints it is going to put. Now, before that, let us try to find out, how many joints are there. In one leg, we have got 1, 2, 3. And, similarly we have got 6 such legs. So, 6 multiplied by 3, we have got 18. So, we have got 18 such joints. And, the number of links we should try to find out: on one leg we have got 1, 2. And, similarly, we have got 6 legs. So, 6 multiplied by 2, that is, 12, and the top plate will also be considered as one of the links, so we have got the number of links, that is, moving links, equal to 12 plus 1, that is, 13. So, we have got small n, that is, the number of moving links, equal to 13, and the number of joints, that is, m, equal to 18.
Now, here, we will have to find out, how many constraints are put by one leg. So, this is the universal joint. Now, this universal joint has got 2 degrees of freedom, and this is a spatial manipulator, so this joint is going to put 6 minus 2, that is, 4 constraints. Similarly, so this particular joint is a prismatic joint is having only one degree of freedom, and it is going to put 6 minus 1, that is, 5 constraints. And, this particular joint is a spherical joint, which is having 3 degrees of freedom, and it is going to put 6 minus 3 that is nothing but 3 constraints. So, the total number of constraints put by one leg, that is nothing but 4 plus 5 that is 9 plus 3, that is, 12. So, one leg is going to put 12 constraints. And, similarly, we have got 6 such legs, so it is going to put 12 multiplied by 6, that is 72 constraints. Now, we have already discussed, that we have got 13 number of moving links. So, the mobility or the degrees of freedom is nothing but 6 n minus summation i equals to 1 to m 6 minus C i, C i is nothing but the connectivity. So, this will become equal to 6 multiplied by 13 that is 78 minus 72 and, that is, equals to 6. So, this parallel manipulator is having 6 degrees of freedom, and this is a spatial manipulator. So, this is nothing but an ideal parallel spatial manipulator having 6 degrees of freedom, that means, the top plate can have 3 translations, and there could be 3 rotation also with respect to the fixed ground. So, this is popularly known as the Stewart platform, which is generally used for the training purpose of the trainee pilot in aircraft. So, it has got other practical applications also. So, this is the way, actually we can use the Grubler’s criterion to find out the degrees of freedom or the mobility of the robotic system. Now, I am just going to discuss, the classification of robots. Now, the robots are classified in a number of ways, if you see the literature, we have got different types of robots. 
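The counting argument above is mechanical enough to script. Here is a minimal sketch (function name mine) of the spatial Grubler formula, DOF = 6n minus the sum over all joints of (6 minus C i), applied to the Stewart platform's 13 moving links and 6 legs of universal (2 DOF), prismatic (1 DOF), and spherical (3 DOF) joints:

```python
def grubler_dof(n_moving_links, joint_dofs, space_dof=6):
    """Mobility by Grubler's criterion: space_dof * n - sum(space_dof - C_i),
    where C_i is the connectivity (degrees of freedom) of joint i.
    Use space_dof=3 for a planar mechanism."""
    return space_dof * n_moving_links - sum(space_dof - c for c in joint_dofs)

# Stewart platform: 6 legs, each universal (2) + prismatic (1) + spherical (3)
stewart_joints = [2, 1, 3] * 6          # 18 joints in total
print(grubler_dof(13, stewart_joints))  # 6*13 - 72 = 6
```

Each leg contributes 4 + 5 + 3 = 12 constraints, matching the 72 total in the lecture.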
Now, based on the type of task it performs, the robots are classified into two groups: one is called the point-to-point robot, and we have got the continuous path robot. Now, I am just going to discuss the working principle of this particular point-to-point robot. Now, let me take one example, very simple example. Supposing that, I have got a steel plate, and on this particular steel plate, so I will have to make some drilled holes at some pre-specified locations; say location 1, location 2, and location 3 with the help of say one cutter, that is the twisted drill bit. Now, this particular twisted drill bit will be gripped by the end-effector or the gripper of the manipulator. Now, if I want to make the drilled hole at location 1, the tip of this twisted drill bit should be able to coincide with the center of this particular hole. And, once it is made coincident, now we can rotate this particular cutting tool, say either in clockwise or anticlockwise. Supposing that, I am rotating it clockwise, so it is going to generate that particular drilled hole. And, once that particular hole has been drilled, so what we do is, we rotate the cutter in the reverse direction, and the tool will be withdrawn from this particular the job. Now, once the hole has been drilled at location 1, now we go to location 2, and repeat the process. And, the same process we also repeat for the point 3. Now, here, after the hole has been drilled at location 1, the tool is withdrawn from the job, and then we move to location 2. As the tool is withdrawn from the job, the tool is not in touch with the job continuously; this is known as the point-to-point task. So, for this point-to-point task, we use a special type of robot, and that is called the point-to-point robot. Now, this Unimate 2000, then T 3, that is, The Tomorrow Tool, these are the typical examples of this type of point-to-point robots.
Now, let us try to explain the working principle of the continuous path robot. Now, here the tool will be in touch with the job continuously, and supposing that, I am just going to cut a complicated profile on say one side of a steel plate. Now, if I just draw this particular thing in a slightly different way, supposing that, I have got a profile, which is to be cut on one side of a steel plate. So, this is the steel plate say and here, so I will have to cut this particular profile. The way we cut is, we use one milling cutter, and this milling cutter will be gripped by the end-effector of this manipulator. Now, this milling cutter will have to rotate, and at the same time, it should trace this particular complicated profile. And, here, we can use this type of milling cutter. Now, here what we can do is, this particular end, we grip with the help of a gripper or the end-effector, and we generate the required motions, that is, the rotation, and it will be able to trace this complicated profile while cutting. Now, during this machining operation, so this particular tool is in touch with the job continuously, and that is why, this is known as the continuous path task. And the robots, which are typically designed for this type of task, are known as the continuous path robots. Now, the typical examples for continuous path robot are your PUMA, CRS; so these are all continuous path robots. Now, here I just want to make one comment. Now, supposing that, I have got one continuous path robot, the same continuous path robot can also be used as a point-to-point robot, but the reverse is not true. And, so we have got both the point-to-point robot as well as the continuous path robot. Now, another classification is based on the type of controller, which is generally used. So, we divide these robots into two groups; one is called the non-servo-controlled robots and another is known as servo-controlled robots.
So, we have got the non-servo-controlled robots, and servo-controlled robots. So, in non-servo-control robots, we use open loop control system. Now, in open-loop control system, we generally do not measure the output for the purpose of comparison, and just to find out the error. Now, this error is neither measured nor compared and fedback for the purpose of compensation of this error. On the other hand, in case of servo-control robots, we use some feedback device, and we use closed-loop control system. Now, here, in closed-loop control system, actually what we do is, we measure the output for the purpose of comparison. We try to find out the error, and this particular error is fed back, and we try to minimize this particular error, and that is the principle of the closed-loop control system. Now, here in robots, if you want to perform some very precise tasks, we will have to go for the servo-controlled robot, and it has got the closed-loop control system. Now, regarding the closed-loop control system, the way it works is as follows. Supposing that, I have got the controller; say this is nothing but the controller, the block diagram for this controller. And here, I have got the mechanical load, which I am just going to handle. Now, what you do is, we try to give some sort of input here, through some error junction. So, here, we have got the input and based on this particular input, so we will be getting some output here. Now, this particular output will be measured, and output will be fed back with the help of one feedback circuit, so this is nothing but the feedback. And, this is compared here, and we try to find out the error here, that is, e. And, in this particular summing junction, we try to compare, and try to find out how much is this particular error. So, this error will pass through the controller and amplifier. So, generally we use amplifier also, and once again it will pass to the load, and we will be getting some output. 
And, this particular cycle will go on and go on, and we will be getting very accurate movement at the end. This is what we follow in closed-loop control system. But, in open-loop control system, this feedback circuit is absent. So, we have got two types of robots: non-servo-controlled robots, where we use open-loop control system, the example of which is Seiko PN-100. As there is no error compensation, it is less accurate and less expensive, because there is no such feedback device. On the other hand, the servo-controlled robots like Unimate 2000, PUMA, then T 3 are more accurate and more expensive. Now, the next classification is based on actually the type of the coordinate system, which is generally used. Now, based on the coordinate system used in robots, the robots are classified into four sub-groups; one is called the Cartesian coordinate robot, then we have got the cylindrical coordinate robot, then polar coordinate robot, and we have got the revolute coordinate robot. Now, all such robots, I am just going to discuss one after another. So, based on the coordinate system as I told, there are four types of robots, the first one, that is, your Cartesian coordinate robot. Now, here, this schematic view shows a Cartesian coordinate robot. Now, for this type of robot, we have got the linear movement along the X direction, the linear movement along the Y direction, and we have got the linear movement along the Z direction. So, along X, Y and Z, we have got the three linear movements, and they are independent. And, this type of robot is known as the Cartesian coordinate robot. Now, this particular linear joint could be either prismatic or sliding. Now, if all three are prismatic, this is called a PPP robot, and if I use sliding joints, then it is called an SSS robot. Sometimes we use a combination of P and S also. Now, here, as in this particular robot, we use prismatic joint, this robot is very rigid and very accurate.
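The measure-compare-compensate loop described above can be sketched as a toy proportional controller driving a first-order load. The function name, gain, step size, and load model below are arbitrary illustrative assumptions, not values from the lecture.

```python
def closed_loop_demo(setpoint=1.0, kp=0.5, steps=200, dt=0.1):
    """Toy closed-loop control: measure the output, feed it back to the
    summing junction, amplify the error, and drive the load with it.
    The gain, step count, and first-order load model are assumptions."""
    output = 0.0
    for _ in range(steps):
        error = setpoint - output   # summing junction: input minus feedback
        u = kp * error              # controller + amplifier
        output += dt * u            # simple first-order load response
    return output

print(round(closed_loop_demo(), 3))  # output creeps up toward the setpoint
```

An open-loop scheme would compute u from the setpoint alone; any load disturbance would then go uncorrected, which is why servo-controlled robots use the feedback path for precise tasks.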
So, this is the end-effector and this is the fixed base. So, if we want very accurate movement, we can use this type of robot. And, this is suitable for pick and place type of operation. Now, a typical example for this type of Cartesian coordinate robot is IBM’s RS-1, then Sigma robot from Olivetti. Olivetti is actually the name of the robot manufacturer. They manufacture one Cartesian coordinate robot, which is known as the Sigma robot. So, here in Cartesian coordinate system or Cartesian coordinate robot, we get three linear movement. And, this robot as I mentioned is very rigid and accurate. We can use this robot in the shop-floor. So, if something is lying on the floor, with the help of this, we will be able to pick that particular object. The next is the cylindrical coordinate robot. Now, here, we have got two linear, and one rotary joints. So, here we can see that we have got one linear joint, another linear joint, and we have got one rotary joint. Now, this rotary joint with respect to the fixed base is nothing but a twisting joint. And, this is actually the linear joint, say it is a sliding joint. And, this is another linear joint, say this is the sliding joint. So, this particular robot is known as TSS robot. Now, in place of sliding joint, I can also use the prismatic joint. Here also, I can use the prismatic joint, then it will be called TPP. Now, if you see this particular robot, there is actually one problem, the way it works. So, with the help of this joint, I will be getting the maximum horizontal reach, and the minimum horizontal reach. Similarly, here, I will be getting the maximum vertical reach, and the minimum vertical reach. And, for this type of robot, if something is lying on the shop-floor, so it will not be able to pick that particular object. And, it has got another problem, that problem is related to this rotary joint. 
Due to the presence of this particular rotary joint, the dynamic performance of this particular robot is poor, compared to Cartesian coordinate robot. Now, here, this Versatran 600 is a very typical example for this type of cylindrical coordinate robot. So, this is the way actually, this particular cylindrical coordinate robot is working. Now, then comes the spherical coordinate robot or polar coordinate robot. Now, here, we have got one linear joint. So, here we have got a linear joint and we have got two rotary joints, so, this is nothing but a twisting joint. And, we have got one revolute joint here with the help of which it can rotate something like this. Now, with the help of this linear joint, so I can represent what should be the maximum horizontal reach, and what should be the minimum horizontal reach. Similarly, with the help of this revolute joint, I can find out what should be the maximum vertical reach, and what should be the minimum vertical reach. And here, with the help of this particular twisting joint, I can rotate with respect to the fixed base. Now, in this robot, if I use T here, R here, and say S here, this is known as TRS robot. And, if I use prismatic joint, this is known as TRP robot. Now, this robot is suitable for picking some objects, which are lying on the shop-floor. But here, we have got another problem, the same problem of poor dynamic performance due to this rotary joint. Now, a typical example for this type of spherical coordinate robot is Unimate 2000B. Now, I am just going to discuss another robot, which is also very frequently used, and that is called the revolute coordinate robot or articulated robot. So, for this revolute coordinate robot, we have got three rotary joints. For example, say this is the schematic view of the revolute coordinate robot, now this is the fixed base.
So, with respect to the fixed base we have got a twisting joint here, similarly we have got a revolute joint here, and we have got another revolute joint here, so we have got a revolute here, revolute here, and we have got the twisting here, and this is actually known as a TRR robot. And, this type of robot is actually very much used in industries. Nowadays, to solve a variety of problems like pick and place type of operation, or if we want to do some sort of drilling, milling type of operation, this type of robot along with some more attachments is very frequently used. But, here once again, due to the presence of this rotary joint, its dynamic performance may not be so good. Now, a typical example of this type of robot could be your PUMA, that is, Programmable Universal Machine for Assembly, then T 3, The Tomorrow Tool, then CRS is another example of this type of revolute coordinate robot. Now, this particular robot, as I told, is very frequently used in industries. Now, another classification I am just going to discuss, and this is based on actually the mobility levels. Now, based on the mobility levels, the robots are classified into two groups, robot with fixed base, and the robot with moving base. Now, the robot with moving base, I am just going to discuss after some time. Now, let me just concentrate on the robot with fixed base, and as I told these are also known as the manipulators. Now, these manipulators could be either serial manipulator like PUMA, CRS or it could be parallel manipulator like the Stewart platform. Now, both the things I have already discussed a little bit. For the serial manipulator, the links should be in series, the joints are in series. On the other hand, for this parallel manipulator, the links will be in parallel and I can compare the load carrying capacity of this serial manipulator, and the parallel manipulator.
The load carrying capacity of the parallel manipulator will be more compared to that of the serial manipulator. Now, I am just going to discuss the robot with moving base. Now, the robots with moving base are very popularly known as the mobile robots. Now, the mobile robots could be either the wheeled robots, there could be tracked robots (tracked vehicles), or there could be multi-legged robots. Supposing that the terrain is perfectly smooth, now for the smooth terrain, we can go for the wheeled robot. Now, if it is perfectly rough, there are many such ups and downs, staircases, so it is better to go for the multi-legged robots like 4-legged robot, 6-legged robot and so on. And, if the terrain is in between, that is, neither very smooth nor very rough, we can go for some sort of tracked vehicle. Now, here I am just going to take the example of one wheeled robot. So, this is actually one two-wheeled, one-caster robot, it is a very simple wheeled robot. So, one wheel we can see here, the other wheel is on the other side, and below that, there is one caster also, that is nothing but the support, so this is a typical example of wheeled robot. Now, similarly, here I am just going to show the schematic view of a six-legged robot. Now, here you can see, it has got six legs, and each of these particular legs is generally having 3 degrees of freedom, and this is actually the trunk for this particular six-legged robot. Now, depending on this particular duty factor, sometimes like 4 out of 6 will be on ground; sometimes 3 out of 6 will be on ground, and so on. So, these are actually your mobile robots. Similarly, we have got some other types of mobile robots also. So, we have seen the classification of the robots. Now, if you see the literature, we have got different types of robots. And, as I discussed that these particular robots are classified in different ways, and we get different types of robots.
Now, I am just going to discuss another topic, which is very important, and it is a little bit difficult to understand, to imagine also. Now, let us start with that, and let us try to see, how to determine the workspace of a manipulator. So, by a workspace we mean the volume of space that the end-effector of a manipulator can reach. Now, let me take a very simple example. Now, if I consider that my hand is nothing but a serial manipulator, and this is nothing but the end-effector of the serial manipulator, so this is the fixed base. So, with respect to the fixed base, I can move this particular end-effector like this, so I can go to the top; I can go to this side; I can come to this side; I can go to bottom; I can go up. So, this particular end-effector is having some locus, and it is going to maintain one volume of space. Now, this particular space has got a volume, and that is actually the workspace of this manipulator, ok? Now, if you see this particular workspace, workspace could be either dextrous workspace, or it could be the reachable workspace. Now, to define these two terms, the dextrous workspace and the reachable workspace, let me take one very practical example. Now, let me take the example of a very simple manipulator. For example, say I have got one serial manipulator having say 2 degrees of freedom. So, this is X direction, this is Y direction in Cartesian. And, supposing that, the joints are such that, this is my first link, say L 1, where L 1 is the length of the first link, and L 2 is the length of the second link. Now, let us consider here, the angle between L 1 and L 2 has been assumed to be equal to 0. So, this will act as if it were a manipulator having only one degree of freedom, that means, there will be only one joint angle theta, and supposing this theta can vary through, say, 360 degrees.
Now, if I vary through this particular theta 1 through 360 degree, and of course, I have got theta 2, but theta 2 I have taken equal to 0. So, if I just rotate, then there is a possibility that I will be getting the work plane, which is a circle with the radius nothing but L 1 plus L 2. So, this L 1 plus L 2 will be the radius of this particular circle, and this is nothing but the work plane, because this is in 2-D plane for this manipulator. Now, I am just going to take two points. Now, supposing that, I am just going to consider a point here, lying on the boundary of the circle and that particular point is a denoted by A. And, I am just going to take another point say point B, and that is lying inside the circle. Now, if I want to reach this particular point A, I need a particular configuration, that is your theta 1 and theta 2, will have some values. On the other hand, to reach the point B, there could be two different configurations, one is this configuration, another could be your this particular configuration. And, now, I say that the point A is a point, which is lying on the reachable workspace for this particular manipulator. And point B is a point, which is lying on the dextrous workspace of this particular manipulator. Now, let us try to see the definitions of dextrous workspace and reachable workspace. Now, if you see, the dextrous workspace is nothing but the volume of space that the robot’s end-effector can reach with different combinations of the joint angles. On the other hand, the reachable workspace is that volume of space that the end-effector can reach with one orientation. And, this particular reachable point is lying on the boundary of the circle. On the other hand, the dextrous point is lying inside that particular circle. And here, I have put one note, that dextrous workspace is a subset of the reachable workspace. So, the reachable workspace is the larger workspace, bigger workspace. 
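Those two configurations for point B can be made concrete with a small inverse-kinematics sketch for the planar 2R arm (function name mine): a point on the outer boundary of radius L1 + L2, like A, admits a single configuration, while an interior point like B admits both an elbow-up and an elbow-down solution.

```python
from math import acos, atan2, cos, sin, hypot

def ik_2r(x, y, l1, l2):
    """Joint-angle solutions (theta1, theta2) of a planar 2R arm reaching (x, y).
    Points on the boundary of the reachable workspace give one configuration;
    interior (dextrous) points give two: elbow-up and elbow-down."""
    d = hypot(x, y)
    if d > l1 + l2 or d < abs(l1 - l2):
        return []                                # outside the workspace
    c2 = (d*d - l1*l1 - l2*l2) / (2*l1*l2)
    c2 = max(-1.0, min(1.0, c2))                 # clamp rounding noise
    th2s = [acos(c2)]
    if th2s[0] > 1e-9:                           # interior point: add mirror twin
        th2s.append(-th2s[0])
    sols = []
    for th2 in th2s:
        th1 = atan2(y, x) - atan2(l2*sin(th2), l1 + l2*cos(th2))
        sols.append((th1, th2))
    return sols

print(len(ik_2r(2.0, 0.0, 1.0, 1.0)))  # boundary point like A: 1 configuration
print(len(ik_2r(1.0, 1.0, 1.0, 1.0)))  # interior point like B: 2 configurations
```

This also shows why the dextrous workspace is a subset of the reachable one: every point with multiple configurations is in particular reachable.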
And, dextrous workspace is nothing but a smaller workspace, so dextrous workspace is nothing but the subset of a reachable workspace. Thank you.
MIT 6.801 Machine Vision, Fall 2020
Lecture 11: Edge Detection, Subpixel Position, CORDIC, Line Detection, US 6,408,109
[SQUEAKING] [RUSTLING] [CLICKING] BERTHOLD HORN: We're starting a new part of the course where we'll be dealing with industrial machine vision. And in particular, we'll be looking at patents. And we'll start off with one by Bill Silver at Cognex. So Cognex is arguably the leading machine vision company. And it's an early starter. I tried very hard to persuade him not to join a startup because the success rate wasn't very high at the time and maybe still isn't. Anyway, he joined anyway. And he's their technical guru. And there's a bunch of patents that describe what they did. At this point, you can't manufacture integrated circuits without machine vision. It's needed all over the show for alignment and inspection. And you can't manufacture pharmaceuticals without machine vision. For example, there's a mandate that the label should be readable. So what's the language of the mandate? It's not, most of the labels should be readable. It says, the mandate-- the labels should be readable. That means every single label has to be inspected. Well, I don't think anyone wants people to do that. So of course, it's done using machine vision. And those are the two areas that Cognex managed to capture a large market, starting off with integrated circuits mostly in Japan. So most of the market was there. Anyway, let's dive into this patent and learn a little bit about patents. So first of all, why patents? So the basic idea is that you come up with an idea to do something, produce some chemical, build some machine. And you set up to build such things and sell them. And your neighbor looks over and says, oh, that's nice, you can make some money that way, and basically competes with you without having put in the effort of actually coming up with the invention.
And so the whole idea of a patent is a compromise, whereby you get a limited monopoly to use your invention in return for explaining exactly how it works so that after a certain amount of time, anyone else can build it. So it's a contract with society where you get some benefit for a while. And in return, there's some benefit long-term for society. And this is how they get you to explain in detail how things work. And there are different opinions on various types of intellectual property. And we won't get into that. There are obviously some benefits and some disadvantages to this. And it's changed over the years. There's a constant revision. And it's remotely possible that some of the things I tell you are no longer true because they were true two years ago and whatever. So let's look at what's there. So first of all, then, we'll get into the technical stuff later. But the structure of this and the metadata, so obviously, there's the number. So at this point, we're up to six million. I just got notice that one of our x-ray patents got issued. And it's in the 10 million range. So since 2002, we've added a lot of patents. And a lot of companies, large companies in particular, file patents at an incredible rate. And why is that? Well, it's largely because they want ammunition in a patent war. So say if you're IBM and there's Microsoft, I hold 6,000 patents in my hand. And if you promise not to sue me, you can use that technology if I can use your 8,000 patents, that, kind of, thing. So patents can stand in the way of doing useful stuff. They can also sometimes prevent wasted money on litigation. So here's the date of the patent. We have a title, apparatus, and method for detecting, and subpixel location of edges in a digital image. So this is aimed at mostly a conveyor belt world, where you're looking down at things and you're measuring positions and attitudes. And it's also aimed at integrated circuit world, where you're largely dealing with two-dimensional images. 
So one of the things we know is that images often have large homogeneous regions of uniform intensity. They aren't very interesting. The interesting stuff is where there's a transition between different brightness levels. And so edges are where it's at. You can greatly reduce the bits needed to describe something by just focusing on the edges. So that's what this is about. And it's about finding them to subpixel accuracy. And that's very important, because you can always get more accuracy by adding more pixels. Instead of using a million pixel camera, use a 100 million pixel camera. And you get 10 times the accuracy. And obviously, that's expensive. And also, if you can do that in software for relatively low cost, then that's a huge advantage. Or if you have that 100 million pixel camera, you now have 10 times the resolution of whoever else is working in that field. What accuracy can you achieve? Well, it's fairly straightforward to get to 1/10 of a pixel. And at that point, you start to see all kinds of problems that you didn't think of. If you push it hard, you can get to maybe 1/40, which is what they claim. And that's, obviously, huge. In two dimensions, that's a factor of 1,600 in terms of accuracy. So that's what this is about. And then the authors are listed. And the assignee is listed. That is, the authors don't necessarily get the benefit. Typically, if you work at a company, they make you sign over some piece of paper that says that whatever you invent is theirs and so on. And I guess MIT wasn't that careful when I was hired. And I don't think I ever signed that paper. But it didn't help me much. Anyway, then we have the date filed and the fields of search. So there are some numbers there that tell you what area it is in. Those fields of search were established a long, long time ago. And so there isn't even something for computers. It's a subpart of electrical stuff. If you're into rubber making, there are 10 categories for different types of rubber.
Of course, that was-- so those categories are things that patent examiners know a lot about. But they're not particularly interesting. And what happens is that you submit this thing. And it has to be in a form that's acceptable. And there are lots of rules. It used to be you couldn't have any equations in there. Well, this one has equations in it. So that obviously is no longer the case. You couldn't use gray levels. It had to be black-and-white line drawings. And so the only way you could get something like gray levels was halftones, with varying dot sizes and dot spacing. You can't have color. The language is somewhat arcane. There are particular terms that you come into again and again, like plurality, I can never pronounce that word, which means several, and things like comprise. So one way to say the apparatus contains certain things is to say just that. But that's not very precise. So comprise means there is at least one. There may be more. And so the patent lawyers make their money knowing this stuff. Then we have references. And there's a bunch of patents listed along with their fields. And the asterisk means that they were added by the examiners. So it goes in. And the examiner sits on it for a while. And it used to be-- well, it changed in history. But in the '90s, when this was submitted, 1996, there was a multi-year delay. And at that point, Congress changed the rules a little bit, including charging you for that process. And it speeded up a bit. And the patent examiner will add things that they think of. You can see here that the inventors only thought of this one patent by, well, one of the inventors. It's like when you write your technical paper, there are probably going to be references to your own previous papers. So they didn't think of all these other people as being relevant. And then there's more, other publications. To these people, patents are important. Technical papers aren't.
But they're still listed because the author is from the scientific world, in some sense. And it doesn't-- it says, let's continue on the next page, because there wasn't enough space on the front page to list all the papers that he wanted to refer to. And then there's the abstract. And this patent is unusual in that it's understandable. It's detailed. It's technical. There's not much-- and it's because it was written by an engineer, mostly. So if you read the abstract, you get a pretty good idea of what it is. There's a figure. This figure is chosen by the patent examiner out of the figures that are submitted with a patent as somehow most distinctive, most likely to let you know what the patent is all about. So we'll get back to the abstract later. So let me go to the next page. So here, you can see a whole bunch of additional papers on edge detection. Of course, edge detection is an old topic in machine vision. It goes back to the 1950s, if you can believe it, when people first started scanning in images and finding a need to somehow compress the information, measure things, and so on. So one of the early famous papers is by Roberts, who was at Lincoln Labs in 1965. And I guess his input device was a drum plotter. So in those days, one of the exciting output devices was a drum that rotated. And you had a ballpoint pen. And you could control its position and, in some cases, color. It might have multiple tips. And you could make great plots of circuit diagrams, PC board layouts, and so on. And so what he did was he replaced the pen with a photodetector, a vacuum tube photodetector. And then he scanned his 8 by 11 black and white glossy photographs. And then he came back the next morning, and looked at the data, and he had a very simple edge detector, which was misleading in the sense that he had a scanner that produced incredibly good images, because there was just a single detector, so he could afford to build a 12-bit A-to-D.
He had much better quality images than we would get out of vidicons and other devices later. And so his edge detector, which worked OK on his pictures, wouldn't work for people on other pictures. And so for a long time, there was a competition, OK, my edge detector is better than yours. And they all had names. And I remember being at a conference, and standing in the passage, and trying to be social, and chatting with this guy I'd never seen before. And we got on to edge detection. And I said, well, it's too bad that all the edge detectors that have people's names on them are worthless. Well, he was Irwin Sobel, whose name is on one of the edge detectors. And he's forgiven me since then. But he's still trying to claim his legacy and make sure that everyone knows that he invented that edge detector. And he's blogging and whatever. Anyway, so those are some more papers. And then we get to the figures. So the top one is the Roberts cross edge detector. And you can see that it's a directional derivative. The left-hand one is a derivative in the 45 degree direction. And the right-hand one is in the 135 degree direction. And why that? Well, we'll talk about the advantages of that type of operator later. And then so Irwin Sobel's operator is figure 1B, which is somewhat advantageous. It takes more computation, is somewhat more resistant to noise, is not as high-resolution, and so on. And then Bill Silver, having been to my classes, was interested in hexagonal tessellations. And so he proposed these alternate operators, which would work on a hexagonal grid. Unfortunately, no one ever built hexagonal grid cameras. They have some advantages in terms of resolution and rotational symmetry. But it's not a huge advantage. It's like 4 over pi. So there's an advantage of some sort. But it's not huge. And so the extra trouble of working with a hexagonal grid apparently was too much for engineers.
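The Roberts cross and Sobel operators mentioned here are easy to write down. The following is a sketch (not code from the patent) showing both kernel pairs applied to a tiny step-edge image; the `convolve_valid` helper is a deliberately naive correlation, just for illustration:

```python
import numpy as np

# Roberts cross pair: derivatives along the 45- and 135-degree diagonals,
# each computed from a 2x2 neighborhood.
ROBERTS_45 = np.array([[1, 0],
                       [0, -1]], dtype=float)
ROBERTS_135 = np.array([[0, 1],
                        [-1, 0]], dtype=float)

# Sobel pair: 3x3 estimates of the x- and y-derivatives, smoothed
# along the edge for some noise resistance.
SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def convolve_valid(image, kernel):
    """Plain 'valid' correlation, enough to apply the small kernels above."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A tiny test image: a vertical step edge between columns 2 and 3.
img = np.zeros((5, 6))
img[:, 3:] = 10.0

gx = convolve_valid(img, SOBEL_X)   # large on the columns straddling the step
gy = convolve_valid(img, SOBEL_Y)   # zero everywhere for a vertical edge
r45 = convolve_valid(img, ROBERTS_45)  # also responds at the step
```

Note that both operators respond over a band of pixels around the step; picking the true edge location out of that band is exactly what the nonmaximum suppression and subpixel interpolation steps later in the patent are for.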
This one here goes off in three different directions, which is appropriate for a hexagonal grid. But of course, that's redundant. If you have dE/dx and dE/dy, you've got everything. So there are two degrees of freedom to the brightness gradient. You don't really need three numbers. So he came up with this alternate pattern here, where this one is estimating the derivative in the x direction and this one in the y direction. And the square root of 3 is because these cells are further apart than those cells. So you have to compensate for that. So a key step right up front is to convert from Cartesian to polar coordinates, in that you want the magnitude of the gradient and its direction rather than the brightness gradient itself, the x and y components. And so up there is a formula. You could take the square root of the sum of squares. And you take the arctangent. And part of that is put out as a straw man, which is like, why on earth would you do this? This is very expensive. Now maybe today, we wouldn't say that. But if you have 100 million pixels and you're trying to do things at 100 frames a second, you still don't want to really take square roots and arctangents. Actually, square roots, depending on where you grew up, you might know that square roots are as easy to calculate as divisions. It's just a slight twiddle on the algorithm. And unfortunately, in the Western world, we don't teach that. So people think of square roots as these really nasty things. But unfortunately, the people who design digital computers were also not from this other world. And so for us, square roots are more expensive than division. Anyway, the argument in the patent is, we can do it this way. But it's expensive. So let's find a better way of doing it. Then figure 2B is an alternate solution using lookup tables. The idea there is that you encode the x and y components of the brightness gradient in a small number of bits. Then you stick the bits together. And that's the address of a lookup table.
And that creates the index into the lookup table. The lookup table gives you the magnitude and the direction. And that's a great solution if arithmetic operations are expensive, which was true at the time, and if memory access isn't a problem. Then, well, things keep on changing. So for a while there, arithmetic operations became cheap, because people just built 32 by 32 bit multipliers and were done with it. And cache misses were a big deal. So you didn't really want to use large lookup tables. And so for that reason, you might want to go back to a method that does a lot of arithmetic rather than have a large lookup table. And that's changing again and so on. So anyway, at the time, the idea of using a lookup table was also not particularly attractive. And so he came up with this method illustrated down here, which we'll talk about, CORDIC. So CORDIC is a way of estimating the magnitude and direction of a vector from its two coordinates. And it was actually pioneered in World War II. You probably know that a lot of early computing machinery was built for unpleasant purposes like war and making sure that you could aim your guns in the right direction. And being able to compute arctangents and square roots was very important. And so people came up with this method, which is an iterative method that rotates the coordinate system to bring it more into alignment with the brightness gradient vector. And at each step, it reduces the error, the difference. And when you then add up all the results, you can get the magnitude and the direction. And amazingly, you can do it without much arithmetic. You only end up needing a shift, which costs next to nothing, adders and subtractors, and an absolute value operation. So if you think about rotation, you think of a matrix, cosine theta, sine theta, minus sine theta, cosine theta, and multiplication. So trigonometry, that's expensive. Multiplication and addition, expensive.
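The CORDIC iteration being described can be sketched in a few lines. This is the textbook vectoring-mode form of the algorithm, not necessarily the exact variant in the patent: each step pseudo-rotates the vector to drive the y component toward zero, the accumulated rotation is the angle, and the known constant gain is divided out at the end. Floats are used only for readability; in fixed point the `2**-i` factors are shifts, so each iteration really is just shifts, adds, and subtracts.

```python
import math

N_ITER = 16
ATAN_TABLE = [math.atan(2.0 ** -i) for i in range(N_ITER)]

# Each pseudo-rotation stretches the vector by sqrt(1 + 2^-2i);
# the accumulated gain is a known constant (about 1.6468).
GAIN = 1.0
for i in range(N_ITER):
    GAIN *= math.sqrt(1.0 + 2.0 ** (-2 * i))

def cordic_vectoring(x, y):
    """Rotate (x, y) until y is driven to zero; assumes x > 0.

    Returns (magnitude, angle), i.e. sqrt(x^2 + y^2) and atan2(y, x),
    without ever calling a square root or an arctangent.
    """
    angle = 0.0
    for i in range(N_ITER):
        shift = 2.0 ** -i
        if y >= 0:   # rotate clockwise, accumulate the angle
            x, y, angle = x + y * shift, y - x * shift, angle + ATAN_TABLE[i]
        else:        # rotate counterclockwise
            x, y, angle = x - y * shift, y + x * shift, angle - ATAN_TABLE[i]
    return x / GAIN, angle
```

With 16 iterations the result is good to roughly one part in 2^16, far better than the 1/40-pixel accuracy the patent is after.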
Well, this is a method very cleverly designed to not do any of that. And it can be implemented at a low level. So in Intel architectures, you have an assembly language. But you also have clever things you can do with instructions that access multiple bytes. And so this can all be implemented extremely efficiently. So that was the motivation for that. Then once we know the brightness gradient in magnitude and direction, one of the things we want to do is find the places where the gradient is large. That's where the edge is. But we don't want to just take a local maximum because along an edge, the gradient is going to be large. So there'll be some places that are local maxima. But they're meaningless. They're arbitrary points along the edge. What we really want is to look across the edge and take the maximum in that direction. It's 1D, not 2D. And so we need the direction of the gradient so that we know which direction to search in. And we need to have this maximum finding, which is called nonmaximum suppression, meaning we'll ignore stuff other than the maximum. A funny term for finding the maximum. But that's what it was called. And since we're working on a discrete grid, we have some limitations. In particular, on a square grid, we have to somehow decide to quantize the possible directions of the gradients. We compute the gradient direction with high accuracy using any of the three methods we talked about. But then we're restricted to-- we only have those pixels. So what do we do? Well, we quantize it into these different directions. And for purposes of finding the extremum, we pick one of eight possible compass directions. On the hexagonal grid, we have six possible directions. Same idea. Now if we want to, we can look further afield. So the previous figure was looking at a 3 by 3 area of the image. So we just had 3 by 3 values of gradient up there. But if we're not happy with that coarse quantization of direction, we can go to 5 by 5.
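A minimal sketch of that nonmaximum suppression on a square grid, assuming rows increase downward and the gradient angle is measured from the column axis (those conventions are my choice, not spelled out in the lecture):

```python
import math
import numpy as np

# Row/column offsets for the eight compass directions, where index k
# corresponds to a quantized angle of k * 45 degrees.
COMPASS = [(0, 1), (-1, 1), (-1, 0), (-1, -1),
           (0, -1), (1, -1), (1, 0), (1, 1)]

def quantize_direction(theta):
    """Map a gradient angle (radians) to the nearest of 8 compass directions."""
    return int(round(theta / (math.pi / 4))) % 8

def nonmax_suppress(mag, theta):
    """Keep a pixel only if its gradient magnitude is a local maximum
    along the (quantized) gradient direction -- a 1D search across the
    edge, not a 2D local maximum."""
    h, w = mag.shape
    keep = np.zeros((h, w), dtype=bool)
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            di, dj = COMPASS[quantize_direction(theta[i, j])]
            if (mag[i, j] >= mag[i + di, j + dj] and
                    mag[i, j] >= mag[i - di, j - dj]):
                keep[i, j] = True
    return keep
```

The point the lecture makes survives in the code: the comparison only involves the two neighbors along the gradient direction, so ridge pixels along the edge are kept while the flanks across the edge are suppressed.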
And now we have a larger range of directions. So they talk about this. But it's not what was actually implemented. And of course, you can go further. So the next step is, now we're looking along a certain direction in the image. And we found the place where there's an extremum. And now we'd like to know where the actual peak is to subpixel accuracy. And so one thing we can do is, figure 4A, those are the three values. So imagine that 0 is the pixel where you found the maximum. And plus 1 is one over. And minus 1 is one over in the other direction. And those don't need to be in the x direction. They could be in the diagonal direction. Or they could be up and down. But three values anyway. And we can fit a parabola to that. Well, that's the best we can do. We've got three numbers. We can only fit something with three degrees of freedom. And so ax squared plus bx plus c, we can fit that. We could fit a cubic. But it would be ambiguous because there would be one parameter that was hanging loose. We could make it anything we want. So we fit a parabola. And we know how to find the peak of a parabola. We just differentiate and set the result equal to 0. And there we are. We can find a peak indicated by that dotted line. Now if our model of the world is Mondrian, i.e., we have patches of equal brightness with sharp edges between them, then we know that the brightness will be constant in each patch. And if the edge is very sharp, we might expect there's a different local fitting, which is shown on the right. So up to the edge, the gradient is increasing. And then it's going down. And it might seem a little bit odd. But that will give us a different result. And again, we need those three numbers to make that fit. And we can find the peak, find where those two lines intersect. And we can find the dotted line in figure 4B. And those two will generally be different. So which one do we take? Which one is more accurate?
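Both fits have closed forms from the three samples. Here's a sketch: the parabola formula follows directly from differentiating ax² + bx + c fit through (-1, gm), (0, g0), (1, gp); the triangle formula is one common way to set up the two-line fit with equal and opposite slopes, and the patent's exact variant may differ.

```python
def parabola_peak(gm, g0, gp):
    """Subpixel offset of the peak of the parabola through
    (-1, gm), (0, g0), (1, gp), where g0 is the local maximum.
    The result lies in [-1/2, +1/2]."""
    denom = gm - 2.0 * g0 + gp   # negative when g0 is a strict maximum
    if denom == 0:
        return 0.0               # flat: no information, stay at the pixel
    return (gm - gp) / (2.0 * denom)

def triangle_peak(gm, g0, gp):
    """Peak of two lines of equal and opposite slope through the samples,
    appropriate when the profile is a sharp tent rather than a smooth bump.
    The line through the lower flank sets the slope; the mirrored line
    through the other flank intersects it at the peak."""
    if gp >= gm:
        return (gp - gm) / (2.0 * (g0 - gm))
    return (gp - gm) / (2.0 * (g0 - gp))
```

For a symmetric profile (gm equal to gp) both return 0, as they should; they disagree in between, which is exactly the model-choice question the lecture raises, and which the bias-compensation step later calibrates away.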
And so we'll get to that in a moment. Then there's something they call the plane position interpolation. So the idea is that when we found the extremum, we quantize directions, horizontal, 45 degrees, vertical, et cetera. But the actual gradient to higher precision, we know its real direction to maybe 8 bits of accuracy. And so one step we can perform is to actually go in the gradient direction from the center pixel and find the place in that direction where the edge is. So the diagonal line is our quantized approximation of gradient direction. And at 413, we found its peak using the method of figure 4. And the arrow 410 is the actual gradient direction. And now what we do is we draw a perpendicular to that gradient direction going through 413. And we see where it intersects 415. So the idea is that with the edge, when we find an edge point, its position is well-defined in the direction of the gradient perpendicular to the edge. But along the edge, of course, it isn't. We could be anywhere on the edge. So this is just like the aperture problem in our optical flow discussion. And so which of those points do we take along the edge? Well, one argument might be, the further you get away from stuff that you really know, the less accurate it's going to be. So let's find the closest point. So we've found an edge. Which point on the edge do we actually record in the output data? Well, we pick the one that's as close as possible to G0. And that's this point that we constructed this way. So that's that construction. So then we discover that we can get a pretty good subpixel accuracy, maybe 1/10 of a pixel. But to go further, we need to take into account the fact that our constructions are based on some assumptions about how the brightness gradient varies with position. And we have those two models. And we could come up with some more. And what is the real thing? Well, the real thing is going to depend on properties of the optics of the camera you're using.
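One reading of that construction in code, with hypothetical argument names (this is my interpretation of the figure being described, not the patent's exact formulation): model the edge as the line through the interpolated peak perpendicular to the true gradient, and report the point on that line closest to the pixel center.

```python
import math

def plane_position(p0, d, quant_dir, grad_theta):
    """Closest point to the pixel center on the modeled edge line.

    p0         : (x, y) pixel center where the local maximum was found.
    d          : subpixel offset of the interpolated peak along the
                 quantized search direction.
    quant_dir  : unit vector of the quantized search direction.
    grad_theta : the unquantized gradient angle in radians.
    """
    # Interpolated peak along the quantized direction.
    px = p0[0] + d * quant_dir[0]
    py = p0[1] + d * quant_dir[1]
    # Unit vector of the true gradient; the edge runs perpendicular to it.
    gx, gy = math.cos(grad_theta), math.sin(grad_theta)
    # Project p0 onto the edge line: remove the tangential component,
    # keeping only the displacement along the gradient.
    along = (p0[0] - px) * gx + (p0[1] - py) * gy
    return (p0[0] - along * gx, p0[1] - along * gy)
```

When the quantized direction agrees with the true gradient direction, this returns the interpolated peak itself; when they differ, it slides the reported point along the edge toward the pixel center, which is the "closest point" argument in the lecture.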
It's going to depend on the fill factor of the chip that's sensing the image. It's going to depend on how accurately in focus the image is. So the edge transition, of course, is not a perfect step transition, which is a good thing. But its actual shape is not something that you may be able to predict ahead of time. And it's not the same for every possible camera and image situation that you can think of. Now if it doesn't fit those two models, what happens? We'll get the wrong answer, but not hugely wrong. So there's a bias introduced by our choice of that model. And we can calibrate out that bias. And so that's what he's doing here. So these are exaggerated curves. So S, one of them, is the actually computed position based on our little peak finding and parabola fitting. And S prime is the actual edge position. And ideally, it would be a diagonal line if you plot those two. And because of the factors I mentioned, it's not a straight line. It has a little bit of a bow to it. And as I mentioned, this is hugely exaggerated. The bow is much less. But it shows that it might be three different shapes for three different cameras, or focus positions, or phase of the moon, whatever. And so what do we do? Well, think of this. This is now just a correction on top of a correction. So we don't need to be hugely sophisticated. So he comes up with ways of removing that bias. And in particular, you can think of just powers of S. If you take S to the 1, then that's our diagonal line. If you take S to the 1.1, then it's a little bowed down. If you take S to the 0.9, it's bowed a little bit the other way. So the idea here is that we can increase the accuracy even more by finding out, for your particular machine vision system, what the exponent is that gives you the best fit. And while we're there, notice that when our quantized gradient direction happens to be diagonal, 45 degrees, then the points we've got are actually the square root of 2 times as far apart as when we're horizontal.
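A sketch of that power-law bias correction as described: the transcript says the formula involves the absolute value of 2S raised to a power B (with B greater than minus 1), so this version keeps the sign, applies the power to |2s|, and rescales back so the offset stays in [-1/2, +1/2]. The exact scaling in the patent may differ.

```python
def correct_bias(s, b):
    """Power-law bias correction on a raw subpixel offset s in [-1/2, 1/2].

    b is the calibrated exponent: b = 1 leaves s unchanged (the diagonal
    line); b slightly above or below 1 bows the S-versus-S' curve one way
    or the other, which is what gets calibrated per camera and, in the
    refinement, per quantized gradient direction.
    """
    if s == 0.0:
        return 0.0
    sign = 1.0 if s > 0 else -1.0
    return sign * (abs(2.0 * s) ** b) / 2.0
```

In practice b would be fit once per system (and per direction, since the diagonal samples are a factor of the square root of 2 further apart) against targets with known edge positions.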
And so you would imagine that the bias would be different. And indeed, it is. And so another refinement of the patent is that you make this bias depend on the gradient direction to get even higher accuracy. And so this is the overall diagram of the whole thing. So we start with the image. We estimate the brightness gradient, which we call Ex Ey and he calls Gx Gy. Then we find the gradient magnitude and direction, G0 and G theta. Then we do the nonmaximum suppression. And we find the neighbors. We detect the peak in that direction. And from that, we get a position of the edge point. And we still know the gradient there. So we use that as well. And then we interpolate using that parabola or the triangle. We compensate for the bias. And we do that plane position method that finds us the closest point to the maximum that is on the edge. So here's the patent itself. This is, of course, part of the materials in the course. So you can study this yourself. And I'm not going to go through it word-by-word. But just focus on the sections. So it starts off as usual with the title. And then the field of the invention, is this about making rubber from stuff coming out of trees? Or is this about machine vision? So it's very short-- this invention relates generally to digital image processing and particularly to edge detection in digital images. Then we get to the background. So the background is where you acknowledge that people have already done certain things. So that's considered prior art in the terminology of patents. And the main point of this section from the point of view of the inventor is the part at the end where you say, therefore, you need this thing I invented because all this other stuff, it's great. But it doesn't really solve the problem. And so in this case, what do they say? Consequently, there is a need for an inexpensive and/or fast method of high-accuracy subpixel edge detection. So most of this discussion is about, why do you want to detect edges?
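That block diagram can be sketched end to end on a toy image. The following is an illustrative pipeline, not the patent's implementation: it uses the straw-man hypot/arctan2 for the polar step (standing in for CORDIC) and the parabolic fit, and omits the bias compensation and plane position refinements for brevity.

```python
import math
import numpy as np

def detect_edges(img):
    """Gradient -> polar -> nonmaximum suppression -> parabolic subpixel fit.

    Returns a list of (row, col) edge positions with a subpixel offset
    applied along the quantized gradient direction.
    """
    img = np.asarray(img, dtype=float)
    h, w = img.shape
    # 1. Brightness gradient via Sobel.
    sx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    sy = sx.T
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            patch = img[i - 1:i + 2, j - 1:j + 2]
            gx[i, j] = np.sum(patch * sx)
            gy[i, j] = np.sum(patch * sy)
    # 2. Cartesian to polar (the "expensive" way, for clarity).
    mag = np.hypot(gx, gy)
    theta = np.arctan2(gy, gx)
    # 3. Nonmaximum suppression along the quantized gradient direction,
    # then 4. parabolic subpixel interpolation of the peak.
    compass = [(0, 1), (-1, 1), (-1, 0), (-1, -1),
               (0, -1), (1, -1), (1, 0), (1, 1)]
    edges = []
    for i in range(2, h - 2):
        for j in range(2, w - 2):
            if mag[i, j] == 0:
                continue
            di, dj = compass[int(round(theta[i, j] / (math.pi / 4))) % 8]
            gm, g0, gp = mag[i - di, j - dj], mag[i, j], mag[i + di, j + dj]
            if g0 >= gm and g0 >= gp:
                denom = gm - 2.0 * g0 + gp
                d = 0.0 if denom == 0 else (gm - gp) / (2.0 * denom)
                edges.append((i + d * di, j + d * dj))
    return edges
```

On a vertical step between columns 2 and 3, both ridge columns of gradient magnitude report the same interpolated position, column 2.5, which is exactly where the edge lies; a production implementation would also break such ties so each edge point is emitted once.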
Why is that an important problem? And how have people been detecting edges? There was Roberts' gradient operator. There was Irwin Sobel and so on. And then it ends up with, well, all of those are too slow, too inaccurate, blah, blah, blah. And here's why you need what I've got. And then there's a summary of the invention, which is a short version of what it's all about. The invention provides an apparatus and method for accurate subpixel edge detection. So that's another thing about patents. Originally, the people who came up with the idea of patents, the way it's written in our Constitution, decided they didn't want to patent ideas, abstract ideas. They didn't want to patent mathematical formulas. And so they didn't know about programming. But the Supreme Court later reinterpreted that to mean that you can't patent programs. Software is not patentable. And so for a long time, people in machine vision weren't able to patent their stuff because it was basically software. So then they came up with the idea, well, what if the software lives in some physical object? You can patent that. And so then there's all this convoluted language to try and make it be a physical thing. And then for a while, what you did was, because it was unclear how the Supreme Court would come down on this, you chose both. So basically, you invented an apparatus, a box that has an SD card with a program in it or whatever, and the method separately. And then if someday, they decide that methods are not patentable, then you're OK because you've still patented the apparatus. So this meant that in many cases, as here, there are twice as many claims at the end as there really need to be because they had to basically use the same language, just with apparatus replaced with method and some other small changes. So it's apparatus and method. There was a case in-- so patent cases are litigated in various ways. One special one is import.
So if someone's trying to import something that you think violates your patent, you can go to a particular patent court, which is in Washington. And you can argue your case there. And there was a case not related to this patent, but a closely related patent. And there's a company in Canada, Matrox, which basically didn't build a machine. But they had software that implemented exactly what was in that patent. And so the expert witnesses for the two sides got up and presented all their very hairy theories about how this might work. And by the way, the expert witnesses get to look at your code. So if you're trying to make an argument, you don't want to show the other side your code. But the expert witness for the other side can look at the code. And so you put that person in a room where they can't copy things. They're not allowed to bring their little USB sticks with them. And they can look at all of your code. And so it was pathetic, because the expert witnesses were being asked questions like, well, isn't finding a maximum of that function the same as finding the minimum of minus that function? And the judges were completely out of their depth. And in the end, fortunately, the judges found a way out of it, which was to say, oh, it's all software. And that's not patentable. So there were three weeks of people discussing cubic interpolation and stuff like that. And then in the end, it was thrown out on a technicality, if you like. So anyway, so that's part of that. So the summary of the invention. Now in terms of patent litigation, most of this is irrelevant in the sense. That is, the lawyers immediately go to the claims. We will get to claims in a second. So this is an unusually clear explanation of the method. Again, part of what you want to do when writing a patent is make it as broad as possible so you have a clear idea of how to do it.
But now someone else, your opponent, is going to figure out a way of not doing things exactly the way you describe, making some small change so he can say, oh, this isn't your patent because I did that instead of that. So one of the jobs that you and your lawyer have is to figure out all the ways of modifying it so that it's still the same idea. These are probably things that, as an inventor, you don't think of first because you just want to get it to work and to work well. And now you've got to think about, oh, what if I replace ultraviolet with infrared, what if I use sound instead of radio waves, and so on. And so as a result, patents written by lawyers are almost impossible to read because they've generalized everything to such a large degree. And so here, Bill Silver refused to let them do that. He just wrote it himself. And it's clear instructions. This is what you do. And it's great. It's very pleasant to read. Then we get to the figures. And so there's, first of all, a short section that lists all the figures and just gives them names. And then there's the detailed description of the drawings, which is actually the detailed explanation of how the thing works. And there are other conventions. For example, all of the numbers that appear in boldface refer to items in the figures. And if they appear in more than one figure, they better have the same number, rules like that. Stuff can be rejected because you didn't follow the rules, because your rectangular boxes aren't quite rectangular. And also, in one place, you called the kernel 131. And in another place, you called it 141. You can't do that. And he has a formula for the Cartesian to polar conversion, equation 1A and 1B, which, as I mentioned, used to not be something that you'd see in patents. And really detailed. We're only up to figure 2 at this point. So they're giving really detailed explanations. And we'll talk about some of that. More formulas.
These are the formulas for finding the peaks of the parabola in the top case and of the triangular waveform in the second case. And this is the formula for the bias, which is probably very hard to read. But it's the absolute value of 2S raised to the B, where B is that thing that I said could be near 1. And it explicitly says it has to be bigger than minus 1. But other than that, it can be any value. Tables. Let's see. Oh, OK. So now we get to the part that the lawyers will find the most interesting. Where is it? It starts up there: "What is claimed is." And for some reason, it's not a separate section. It's just a convention. The words "what is claimed" appear. And now you list the claims. So those are the particular things that you think that you came up with and you want to protect. And the other side is probably going to argue that, well, actually, it's obvious. It was in the prior art, whatever. So it starts off with quite a long first claim. And I'll read it out because this is what we're actually going to study, the algorithm for that. And an apparatus for, that's claim 1. And claim 11 is going to be the same thing. Sorry, claim 21 is going to be the same thing, which says a method for. So this is "apparatus for." Detection and subpixel location of edges in a digital image, said digital image including a plurality of pixel values, each pixel value having been associated with a respective pixel point on a regularly spaced pixel grid, said apparatus comprising-- and comprising, remember, a special term here in patent language. And then it lists three items. For some reason, they are all labeled. Well, they all start with A. So it looks like they're all labeled A. But they're not.
So the first one is a gradient estimator for estimating gradient magnitude and gradient direction at a plurality of regularly spaced gradient points in said digital image, so as to provide a plurality of estimates of gradient magnitude and gradient direction, each said estimate of gradient magnitude and gradient direction being associated with a respective gradient point on a regularly spaced gradient grid. You wouldn't think that it would be so difficult to talk about, OK, we're going to compute the image brightness gradient. But there it is. And I suppose the idea is that, to us, it's obvious. But this is supposed to be something that anyone could understand. So that's one. We're going to get the gradient. And the second one is-- oh, and we're also going to get gradient magnitude and direction. So that's component one. Then component two is a peak detector, cooperative with said gradient estimator, operating such that the gradient direction associated with each gradient point is used to select the respective set of neighboring gradient points. So that's the quantization of the gradient direction. So we pick a certain set of-- certain direction, and operating such that the gradient magnitude associated with each gradient point is compared with each gradient magnitude of said respective set of neighboring gradient magnitudes so as to determine which of said gradient magnitudes is a local maximum of gradient magnitude in approximately said gradient direction. So in short words, it's a nonmaximum suppression in the quantized gradient directions. So that's two. And then the third and last part is a subpixel interpolator cooperating with said peak detector, operating such that said local maximum of gradient magnitude and a set of neighboring gradient magnitudes are used to determine an interpolated edge position along a one-dimensional gradient magnitude profile, including a gradient point associated with said local maximum of gradient magnitude and so on.
So that's the interpolation step, where we fit the parabola and we find the peak of the parabola. And that's all of claim 1. So why do there need to be other claims? Well, the thing is that someone might come along later and say, well, wait a minute. We already invented that. And so your claim 1 is blown out of the water. So what you then do is you specialize it further. You add in other things that are in your specification and refine it more, and more, and more in the hope that if in litigation later on, the early claims that are very broad are thrown out, then the narrow ones will still stand. So if they also violate all the other conditions, then you're good. So for example, claim 2, so we have claim 1 further comprising a plane position. So that last step that I described, that plane position step where you intersect the gradient and another line, that isn't in claim 1. So supposing that claim 1 gets thrown out, then if you were using this plane position feature, then you are going to violate claim 2. So that's a conditional claim. And it goes on like that. We won't, obviously, read through all of these. Claim 3 is the apparatus of claim 2. So claim 2 depends on 1. Claim 3 depends on 2, which in turn depends on 1. And claim 3 has a gradient direction line. So it's giving a particular way of computing that plane position point. And then there's a whole bunch more, obviously. And they get more and more detailed so that you protect it in case it turns out that, well, claim 1, if you put together the writings of Roberts and Irwin Sobel, it's there. And so you need something else. Then let's see. Where does it get interesting again? Claim 11 starts all over again where claim 1 did with a change that this is a means-plus-function claim. So here, instead of saying that you have an apparatus for gradient estimation, it's gradient estimation means for estimating gradient magnitude, peak detection means, subpixel interpolation means.
So it's not specific about whether it's apparatus, or method, or whatever. And again, this has to do with the history of patents, that they needed to do that. And so everything is repeated. So we have claims 1 through 10. And then it's repeated in means-plus-function form up to 21. And then at 21, everything is repeated with apparatus replaced with method. So you end up with a relatively simple problem. And it ends up with 34 claims? 34 claims. Now these days, they penalize you for that, because you have to pay extra if you have more than 20 claims. So things change. So for example, at this time-- let's see. Before this, the rule was that your patent would be valid for 17 years after it issued. And that was different from the way the rest of the world did it. And also, it was unfair because things could be delayed in the patent process. So something could issue 10 years after you submit it. And then you get another 10 years of protection just because of that delay. And there were tricks for using that. For example, Jerome Lemelson claimed that he invented machine vision in 1956. And he had a string of patents that documented that. And some of them were called submarine patents, because you didn't know they were in the pipeline. But they depended on earlier patents. And so anyway, Congress decided to ax that idea. And so things changed. And I guess already, at this point, the rule was 20 years after filing instead of 17 after issue. Also, they removed the rule about who gets priority. So suppose you think of some idea and someone else thinks of the idea, and you submit patents, and then you have these two conflicting patents. Who gets the patent? Well, it used to be whoever invented first. And so in the old days, all engineers carried around books with squares in them and every page numbered. And every day, they would ask their buddies to sign off a page. Why? Well, because that way, you could document that you thought of this idea on April the 3rd.
You didn't have it fully fleshed out. But the pages were numbered. So you couldn't cheat and tear out pages or add pages. And people's signatures were on there saying that, oh, yeah, I saw this. He actually wrote this at that time. And that was a giant pain, because everything you did, every time you thought of something, you immediately had to write it down. And so they decided, no, forget that. Plus, it was very hard to prove sometimes. And so the new rule is submission. When you send it in, that's the date. And if you take your time sending it in, too bad. You lose. So anyway, so that's a typical patent. It's a relatively simple one. But it has some interesting machine vision aspects, which we'll talk about now. So we mentioned that in many cases, images are composed of patches that are pretty much uniform in brightness. And the interesting stuff is at the transitions. And I mentioned that this Dutch artist, Piet Cornelis Mondrian, got rich by drawing things that just had rectangular patches of color. And is it art? I don't know. I'm not in that field. But anyway, so I call this the Mondrian model of the world. And so in that model of the world, rather than have millions of pixels, you can condense things down into just talking about the edges. Or if you want to find where something is on a conveyor belt, you just need the edges. If you're trying to line up different layers of an integrated circuit mask, then you just need edges. So what's an edge? Well, and what is an edge detector? So, unusually for a patent, this is actually well spelled out here. So edge detection can be defined informally as a process for determining the location of boundaries between image regions that are different and roughly uniform in brightness. So great. And then in more detail, an edge can be usefully defined as a point in an image where the image gradient magnitude reaches a local maximum in the image gradient direction. So that's important. It's not just the local maximum.
It's in that direction. And by the way, they mention another one, which is where the second derivative of brightness crosses 0 in the image gradient direction. So think about that. Now that makes sense. So if the first derivative reaches a maximum, how do you find the maximum? Well, you differentiate one more time and set the result equal to 0. So an alternate way of determining where an edge is is to look at second derivatives and look at zero crossings. And that was used as well. And by the way, then they mention multi-scale. Now remember, this is a while back, but even then. Thus, it is known in the art to perform edge detection at a plurality of spatial frequencies or length scales as appropriate to the application. So we're just going to pretend that we're working directly at the full resolution. But imagine that for many applications, you would not want to do that, or you would want to work at multiple resolutions. And so for this patent, we're not going to get into that, as they don't get into it. So we have an image transition. So first of all, here's an ideal edge. So this is a cross-section across the edge. It's a step function. Well, it turns out, actually, that's not good. We don't want that. We don't want infinite resolution. That seems crazy, because you'd think that the better the resolution, the happier we should be. Well, the problem is, suppose we now image this onto a device that has discrete pixels. So we're measuring the brightness at those points. Now can we tell where the edge is? Well, suppose I move the edge a little bit. Nothing changes in terms of the brightness measurements at those points. So actually, in this very idealized case, I can move it a whole pixel width back and forth. Nothing changes. So conversely, that means I can't measure where it is to better than a full pixel. So this is actually undesirable. And basically, this is a form of aliasing. So we don't want a perfect step edge.
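The point about not being able to localize a sampled ideal step can be checked numerically. In this sketch, the logistic ramp is just my stand-in for any band-limited edge profile (e.g., a defocused step):

```python
import numpy as np

def sample_step(x0, n=10):
    # An ideal unit step edge at position x0, sampled at integer pixel centers.
    return (np.arange(n) >= x0).astype(float)

def sample_blurred(x0, n=10, s=0.8):
    # A band-limited edge (logistic ramp standing in for defocus blur),
    # sampled at the same pixel centers.
    return 1.0 / (1.0 + np.exp(-(np.arange(n) - x0) / s))

a = sample_step(4.2)
b = sample_step(4.8)      # edge moved 0.6 px -- samples are identical
c = sample_blurred(4.2)
d = sample_blurred(4.8)   # samples differ, so the shift is measurable
```

The ideal step gives identical samples for any edge position between two pixel centers, while the blurred edge's samples change with every subpixel shift -- which is exactly what subpixel interpolation exploits.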
We want something that's band limited so that when we sample it, we're not introducing artificial frequency components, which is one way of thinking about what's going wrong here. So we're looking for the edge. And we've said that one method is to take the derivative and find the peak of the derivative. And then we mentioned also that possibly, you might want to take the second derivative and look for a zero crossing. So this was actually a popular game for a while, that we would be looking at second derivatives. And in the case of images, which are two-dimensional, what's the generalization of the second derivative? It's the Laplacian. So instead of these edge operators, which give us an estimate of the brightness gradient, we take the Laplacian and we find the zero crossings. And one of the appealing ideas about that was that, well, zero crossings are closed curves. It's like a contour map. So you have this-- we've talked about thinking of the image as a surface in 3D, where height is brightness. Well, then we can draw contours, isophotes, at a certain level. And we can draw the zero contour. Well, not with images themselves. But once we take the Laplacian, we'll have both positive and negative values. And we can draw the contour at 0. Well, contours are closed, other than you won't know where they disappear off the edge of the image. So that was very appealing, because a lot of times, other edge detectors would peter out. And then you wouldn't quite know where the edge was and so on. So that was an interesting game for a while. But it turns out that you get worse performance in the presence of noise. And we can discuss that in terms of convolution and so on. But we won't do that for the moment. So second derivative. And it's briefly mentioned in the patent. But it's not pursued. And then what does that correspond to in the original brightness E? So what we're really doing is looking for an inflection point. What's an inflection point? Well, imagine that you're driving along this curve.
You're turning left. You're turning left. You're turning left. You're turning left. Oh, now I'm turning right. Now I'm turning right. So the inflection point is where you're changing directions. And in terms of derivatives, it's the maximum of the first derivative. So those are different ways that we could define where the edge is. So let's talk a little bit about brightness gradient estimation. So you saw in the figures there a number of operators. So this was Larry Roberts' idea. And it's very easy to compute, very cheap to compute. But it's at a 45 degree angle. So what was the idea here? So why not operate in x and y? Well, it turns out that this makes it possible to have the two operators refer to the same reference point. So where are we estimating the derivative? Well, you might say, OK, I'm estimating the derivative here by taking the difference between this value and that. And someone else might say, well, we're estimating the derivative here because I'm taking the difference of the value here and the value there. And we'll see in a second that neither of those is well-founded. It's more reasonable to say I'm estimating the derivative there, in between. And the resistance to that, of course, is that's not a grid point. So it's not a pixel. It's offset by half a pixel. But who cares? Maybe it's a good place to estimate the derivative then. And so Roberts used these two operators. And then he used the sum of squares and noted that, conveniently, that's actually the same as if you'd computed the sum of squares of gradients in the original coordinate system. Because if you do the 45 degree rotation, cosine 45 degrees, sine 45 degrees, it's 1 over square root of 2. You get the same thing except for a proportionality factor, because these are further apart than the pixel spacing by a factor of square root of 2. So these aren't the same. But they're related by some number. They're proportional to one another. So that is Roberts' gradient. And then we had Irwin Sobel.
Well, let's first address this question about estimating the derivatives. And we may have done a little bit of this already. But let's do it more carefully. So we have this computational molecule for E of x. And we know from the Taylor series that f(x + Δx) = f(x) + Δx f'(x) + (Δx²/2) f''(x) + (Δx³/6) f'''(x) + ⋯. And so now we can use that. So when we form f(x + Δx) − f(x), the f(x) cancels out. And then we divide by Δx. So we get f'(x)-- that's the part we want. We're trying to estimate the derivative. So that's perfect. But we also get (Δx/2) f''(x). So this is the lowest order error term. So that's what we want, and that's the part we don't want. And when we talk about how good a formula is, we'll be looking at two things. One is the order of the lowest order error term, which here is first order in Δx and proportional to the second derivative. And then the other one is the multiplier. Suppose we have two methods that have the same order of the lowest order error term. Then we can compare them based on the size of this multiplier. But the more important one is the order. So this one's not very good: it works perfectly on a straight line, but if there's even a little bit of curvature, it's going to get the wrong answer because of that. So let's instead look at f(x) − f(x − Δx). So here, we said, OK, this is our x, and this is x plus Δx, and we're obviously trying to get the derivative at x. Now we're saying, OK, this is x, that's x minus Δx, and we're trying to get the derivative there. And if we go through the same arithmetic, we get the same size error, −(Δx/2) f''(x). The sign is flipped. So what you might say is, well, gee, let's just average these two. Then we can get rid of that low order error term. And so if we average them, which means the sum of these two divided by 2, then we just get f'(x). That's what we want. And the second derivative term cancels out. Well, but then there are higher order terms. So we better fill in the higher order terms. So here, we get (Δx²/6) f'''(x).
And so what we notice is that now, we have a higher order error term, which is desirable. So this formula will not just work perfectly for a straight line. But even if the line has a bit of curvature to it, a second derivative, as long as the third derivative isn't too large, we'll be OK. So what does averaging these two actually mean? Well, it means that in effect, we've used the operator (f(x + Δx) − f(x − Δx)) / (2Δx). All right? And that's a perfectly reasonable approximation to a derivative. We're subtracting two things, and we're dividing by the distance between where we've measured those two things. And then you might say, well, what if, instead, I take this idea, but now I'm saying, I'm going to find an estimate of the derivative there, halfway in between? And the objection would be, well, it's not a pixel position. But that doesn't matter. We can have derivatives be offset by half a pixel as long as we just remember, mentally, that that's happened. Now I can go through and use the Taylor series, blah, blah, blah. But actually, all I need to do is use this formula with Δx divided by 2, because this is the same thing with Δx divided by 2. So in this case, my error term is going to be ((Δx/2)²/6) f'''(x), which is (Δx²/24) f'''(x). So now I have to compare two operators that have the same lowest order error term. And now I look at the magnitude of the factor in front of the error term. And obviously, this one is a 1/4 of the size of that one. So this one has an advantage in that respect. I can get a relatively high order error term, and I can have a small multiplier, so that the error is actually smaller. And it makes sense. Over here, we're comparing things that are relatively far apart. So obviously, they will be affected more by higher order derivatives than in this case, where I'm looking at things that are close together. Now that's fine for Ex. But how is this going to work with Ey?
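These error orders are easy to verify numerically. A small check, using sin as a stand-in function with a known derivative:

```python
import numpy as np

f, df = np.sin, np.cos   # test function with a known derivative
x = 0.7

def errors(h):
    fwd  = (f(x + h) - f(x)) / h              # one-sided: error ~ (h/2) f''(x)
    ctr  = (f(x + h) - f(x - h)) / (2 * h)    # central: error ~ (h^2/6) f'''(x)
    half = (f(x + h / 2) - f(x - h / 2)) / h  # half-spacing central: ~ (h^2/24) f'''(x)
    t = df(x)
    return abs(fwd - t), abs(ctr - t), abs(half - t)

e1 = errors(0.1)    # errors at h = 0.1
e2 = errors(0.05)   # errors at h = 0.05
```

Halving h roughly halves the one-sided error (first order) but quarters the central-difference error (second order), and at the same h the half-spacing version has roughly a quarter of the central difference's error, matching the Δx²/24 versus Δx²/6 multipliers.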
Because if I take that idea in both directions, so here is my Ex operator. And then there's a potential Ey operator. Well, that's not going to work, because this one is a good estimate of the x derivative there, and this one is a good estimate of the y derivative there. And those aren't consistent. And so what do I do? Well, one way to deal with it is that, if I go through that Taylor series story, these are good estimates of the x derivative here. And this is a good estimate of the y derivative here. And oh, those are the same point. So that's how we get to that. And when we talked about optical flow, we already mentioned that that's the way you want to compute the derivatives. And there, we needed not just Ex and Ey, but also Et. And so there, we're dealing with a whole cube. And so, for example, to compute the derivative in the y direction, we can do that. And in the course materials, there's the detailed explanation of how to compute these for optical flow. So now the story doesn't end there, because now we might talk about efficiency. And so for example, how much work do these operators take? Well, here we can subtract these two. That's 1. Subtract those two. That's 2. Then we add them up. So it's 3 operations for this and 3 operations for that. So 3 plus 3 is 6. But actually, they share these subcomputations. This diagonal difference is 1 operation. And that one is 1 operation. And then we combine them. We add them to get E sub x. And we subtract them to get E sub y. So that's 1 op plus 1 op. So a total of 4 ops. So by cleverly arranging the computation, we can cut it back from the six operations before. And you might say, who cares? Well, the thing is you're doing this in each of a few million pixels. So this is one place where, actually, efficiency does matter. And I won't go through this.
If you get Ex, Ey, Et the obvious way, it takes, I don't know, 21 ops to do it. And I'll leave it to you to do better-- people sometimes come up with surprising solutions that beat the obvious solution by a significant factor. And by the way, there's Roberts' cross operator. So he was way ahead of his time. And what about Irwin Sobel? So Irwin Sobel looked at this stuff. And this is where he got to. He said, well, this is obviously a good way of doing it. It's an estimate of the derivative at that pixel. And it doesn't have a low order error term. But then he needed to do it in 2D. So he replicated it. But why did he replicate it the particular way he did? Well, we can think of it in this way. I'm leaving out the constants. But basically, we do an average. So this is a convolution. So that's the underlying operator. And now we're going to smooth it by doing an average over a 2 by 2 block, which corresponds to convolution with this. And if you like, we can put in 1 over 4. I'm not concerned about those multipliers at the moment. And what do we get? Well, hopefully, you remember something about convolution. So we flip one of these. Well, it doesn't do anything to this because it's symmetrical. And then we shift it over the other one. So the first position where we get something non-zero is when we have it in this position. And what do we get? We get minus 1. So then I take this operator and move it one to the right. Now it's overlapping in two places. I get minus 1 times 1 plus 1 times 1. That's 0. Shift it over one more time. Now it's only overlapping over here. And I get plus 1. Then I move it one row down. And now I line it up over here. And I get minus 1 minus 1. That's minus 2. I shift it one to the right. They all cancel out: 0. Shift it one to the right again, and I get plus 2. Shift it down one more, and I get minus 1, 0, plus 1. And there's Irwin Sobel's Ex operator. Same in the y direction.
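The shared-subcomputation trick from the operation count above can be written out explicitly; in effect the two Roberts diagonal differences are the shared intermediates. A sketch (the variable names are mine):

```python
import numpy as np

def grads_obvious(a, b, c, d):
    # 2x2 block [[a, b], [c, d]]: sum the two x differences and the two
    # y differences separately -- 3 ops each, 6 in total (scale ignored).
    ex = (b - a) + (d - c)
    ey = (c - a) + (d - b)
    return ex, ey

def grads_shared(a, b, c, d):
    # Roberts-style diagonal differences shared between Ex and Ey:
    # 2 subtractions + 1 add + 1 subtract = 4 ops for the same result.
    p = d - a
    q = b - c
    return p + q, p - q

rng = np.random.default_rng(1)
a, b, c, d = rng.standard_normal(4)
```

Expanding p + q = (b − a) + (d − c) and p − q = (c − a) + (d − b) shows the two routes are algebraically identical.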
So we can think of it as just a way of smoothing. And smoothing has the effect of reducing noise. Unfortunately, it also has the effect of blurring things. So there's a trade-off, obviously. We reduce the edge gradient at the same time as we're cutting down on the noise. And so Irwin Sobel managed to avoid the half-pixel offset problem by having two operators, each of which is offset by half a pixel. And when you convolve them together, you get a 0, plus 1, or minus 1 offset. So that's the very front end of this operation. And they discuss-- they don't say, you should use this or that. They just give various formulas and the preferred one. So a lot of times, it will say the preferred implementation. And that's a tricky one, because later on, if someone says, oh, but I didn't implement your preferred implementation, the lawyer will say, well, you're talking about an exemplary implementation, not what's claimed in the patent. What's claimed in the patent is in the claims. What you say in the specification is just a way for somebody to implement this. And just because yours doesn't implement it exactly the same way is irrelevant. What's relevant is whether it infringes on the claims. And so a lot of times, there'll be verbiage about, in a preferred implementation, the gain is 10 or whatever. So if your gain is 11, that doesn't excuse you, because the important part is whether the claims cover it or not. So that's the very front end. And then we're going to talk about the next step. So the next step is the conversion from Cartesian to polar coordinates for the brightness gradient. And we'll do this mostly next time. But for programming people, we'll call it atan2 of those two arguments. And the reason is that we don't want to have division by 0, which we can avoid by using the two-argument version of atan. So now we have a gradient magnitude. And we have a direction. And the next step is to quantize the direction.
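The derivation above can be checked by actually convolving the 2 by 2 difference operator with the 2 by 2 averaging window; the result is the familiar 3 by 3 Sobel x kernel (constants dropped, as in the lecture):

```python
import numpy as np

def conv2_full(a, b):
    # Full 2-D convolution by superposition: place a copy of a, scaled
    # by b[i, j], at every offset (i, j), and sum them up.
    ah, aw = a.shape
    bh, bw = b.shape
    out = np.zeros((ah + bh - 1, aw + bw - 1))
    for i in range(bh):
        for j in range(bw):
            out[i:i + ah, j:j + aw] += b[i, j] * a
    return out

diff = np.array([[-1, 1],
                 [-1, 1]])       # 2x2 x-difference operator
smooth = np.array([[1, 1],
                   [1, 1]])      # 2x2 block average (constant dropped)
sobel_x = conv2_full(diff, smooth)
```

The output reproduces the minus 1, 0, plus 1 / minus 2, 0, plus 2 / minus 1, 0, plus 1 pattern worked out by sliding the kernels in the lecture.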
So what we're trying to find is a maximum in the direction across the edge. So suppose the edge is running this way, brightness E1 on one side, E2 on the other. Then we want to scan across the edge and look for a maximum, E0. And the direction is given by E theta. So that's why we need E theta and E0. And unfortunately, we don't have points in all places in the plane, only on this grid. So we can only easily deal with quantized directions. So we've got compass directions, eight possible directions. And we're looking for a maximum. And now on a different grid, we would have a different quantization. But we would still have a problem. So the center one is going to be G0. And then, suppose this is our quantized gradient direction. Then we'll call the gradient here G plus and the gradient there G minus. And so we have those three values. And clearly, the center pixel is an important one. It's on the edge, but quantized to pixel positions. So it doesn't give us subpixel accuracy. So now, first of all, we step through the image and keep a point only if it's a local maximum. We ignore nonmaxima. And now what we actually want is something that is asymmetrical. So it could happen that G0 is equal to G plus or equal to G minus. And we could say, well, that means it's not a maximum, and just throw it out. But of course, that's wrong, because we have to keep one of those two points. But we don't want to keep both, because then we have two points on the edge that are a pixel apart. So that's no good either. So we have to have a tie breaker. So it's important that this condition is asymmetrical. We can make it asymmetrical the other way. That will also work. But we need to have a way of dealing with the case where G0 is equal to G plus or equal to G minus. And then we're trying to find the peak. Now how does the curve go? Well, we don't know. But we can imagine various models-- well, it's not a very good parabola. But here's our parabola. We take its derivative. And there's the peak.
And in terms of these quantities here, if you go through and substitute, if you fit a, b, and c, then you get this formula. And well, here I've used x. Here I'm using S. And we can show, importantly, that S computed this way is limited in magnitude to 1/2. Why is that? Well, if it was, let's say, 3/4, that would mean that we'd be closer to this other point. And then that should be the maximum. So you can actually show that can't happen. So that's its range of solutions, provided G0 is the maximum. And you can understand what's going on here. So if G plus is the same as G minus, S is 0. We have a balanced parabola. We have this situation. And in this case, of course you would say, oh, that's it if these two are the same. And then it gets shifted the more there's a difference between the two sides. And how much does it get shifted by? Well, we look at the second derivative, which is this difference here. Well, it's not the second derivative, but proportional to the second derivative. So we're looking at the average of G plus and G minus and comparing it to this point. And the larger that difference is, the higher the second derivative. So it just corresponds to-- b is proportional to the first derivative; a to the second. So that's one way of getting the subpixel accuracy. And then the other way they mention is the little triangle model. And that just seems weird. But for some reason, there are circumstances where it's a pretty good model. So again, we have G0, G plus, G minus. Let's see. So [INAUDIBLE]. So again, with three measurements, we have just enough information to fit this model. The assumption of the model is that the magnitudes of the slopes of the two lines are the same. And then we just need the vertical position and the horizontal position of the apex. And in this case, S comes out to be-- and again, that formula is in the patent. And it's pretty easy to derive. So we'll talk about this patent some more. But there are some interesting technical issues aside from the patent language that we'll get to.
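Both interpolation models described above can be written down directly. The parabola formula follows from fitting ax² + bx + c through (−1, G−), (0, G0), (+1, G+); the triangle formula is my own derivation under the equal-and-opposite-slope assumption, so check it against the patent text:

```python
def parabola_offset(gm, g0, gp):
    # Peak of the parabola through (-1, gm), (0, g0), (+1, gp),
    # assuming g0 is the (tie-broken) maximum; the result satisfies
    # |offset| <= 1/2, as argued in the lecture.
    return (gp - gm) / (2.0 * (2.0 * g0 - gp - gm))

def triangle_offset(gm, g0, gp):
    # Apex of two lines with slopes of equal magnitude and opposite
    # sign through the three samples -- my derivation of a triangle
    # model, not necessarily the patent's exact formula.
    return (gp - gm) / (2.0 * (g0 - min(gm, gp)))
```

Sampling a parabola with a known peak at 0.3 (values 0.31, 1.91, 1.51 at −1, 0, +1) recovers 0.3 exactly, and likewise a triangle 5 − 2|x − 0.3| (values 2.4, 4.4, 3.6) recovers its apex.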
For example, one reason an edge might not be a unit step edge is because of defocus. And so the question is, what is the shape of that curve? Because in this whole business here, we're just making up stuff. We're saying, oh, maybe it's a parabola. Or maybe it's a triangle. Well, can't we just figure out exactly what it is? Particularly in the case of defocus, we should be able to figure out just what-- obviously, it's not going to be a step edge anymore. It's going to be smeared out because it's blurred. It's out of focus. But what is that? So we'll talk about that and see how that affects our method of recovering the actual edge position. And then we'll talk about some alternatives to what's in the patent. In particular, this quantization of the gradient direction is problematic, particularly on the square grid, where we've only got eight directions. And so we'll talk about other ways of proceeding that do not have that very coarse quantization. Like here, we're finding a maximum in that direction or in this direction. But if the gradient is actually in between, neither is quite right. Anyway, so there's more. And it seems like a very simple problem. But it shows how, if you want really good performance, there are a lot of details to figure out, like this business about, what's a good way to compute the derivatives?
MIT 6.801 Machine Vision, Fall 2020
Lecture 6: Photometric Stereo, Noise Gain, Error Amplification, Eigenvalues and Eigenvectors Review
BERTHOLD HORN: We talked some about noise gain. And in the 1D case, where we have one unknown and one measurement, it was pretty simple, because we just looked at the inverse of the slope. So we had some small increment here and the relationship with the small increment over here. So x, in this case, is the unknown quantity, and y is the measurement we're making, and we're trying to estimate x. And obviously, if this curve has a very low slope, then a small error here can be amplified into a large error there. So that's very simple in the 1D case. But of course, we'll be looking at more complicated cases where we're trying to estimate three-dimensional quantities. And so I want to take a little detour now and talk about at least the 2D case, which comes up in the optical mouse and some other things that we'll be talking about pretty soon. So we're going to talk a little bit about eigenvectors and eigenvalues. And if you are familiar with them, then just go to sleep for the next 10 minutes, but I think it's important to get everyone up to speed on this. So what are these? Well, if we take a matrix, M, and we multiply some vector by it, we're going to get a new vector, which is a different size, typically, and pointing in a different direction. And so an interesting question is, do we ever have a situation where the vector we get is pointing in the same direction as the vector that we multiplied the matrix by? And so those are considered special. And in some sense, they are characteristic of that matrix, and so sometimes they're called characteristic vectors with characteristic values. And I guess "eigen" is the German word for "own". It's the vectors and values that the matrix owns. And so we're looking at a situation where the vector we get is parallel to the vector that we pump into the matrix, and it may be a different size. So that's the definition. And we're mostly going to focus on real symmetric matrices. We can generalize this.
But for the moment, that's all we need. And so obviously this isn't going to be true for typical vectors. Also notice that the scale doesn't matter, because if e is an eigenvector and I multiply it by k, the equation still holds. It's still an eigenvector. So for many purposes, we'll either just ignore the magnitude or make it a unit vector for simplicity. So the size doesn't matter. And by the way, that includes multiplication by minus 1. So if v is an eigenvector, then minus v is also an eigenvector. OK. So then the question is, how do we find these things? And how many are there? And things of that nature. Well, one thing we can do is we can try and solve this equation by bringing this over to the other side. So I is just the identity matrix of the appropriate size-- M is n by n, so I will be n by n. And so this is exactly the same equation: M times e minus lambda times e is 0. OK. So there's a set of linear equations, and we've been bragging about how linear equations are so easy. And obviously, what we could say is: we just invert that matrix and multiply it by the right-hand side-- and what do we get? 0, right? So that's called the trivial solution. And if we can do this, then that's not going to be helpful. So what is it we're looking for? We're looking for the case where this inverse doesn't exist. So usually, we're very keen to make sure that inverses exist. In this case, we're not. So first of all, terminology. Here we have a set of equations that are homogeneous. That means that the right-hand side is 0. And amongst other things, that means that if you have a solution, then any multiple of that solution is also a solution, which is not the case normally when we're solving linear equations-- normally there's a unique solution, and no multiple of it (other than by 1) is a solution. OK. So what we're looking for, then, is that we have a singular matrix. So that means that the determinant of that matrix is 0. And so what does that mean?
Unfortunately, the determinant is this complicated, messy thing. But if you think about it, we can say something about the determinant. If we write this thing out-- m11 minus lambda, m22 minus lambda, and so on down the diagonal-- what we've done is we've taken the matrix, M, which is some n by n real symmetric matrix, and we've subtracted lambda off the diagonal. And now we're talking about the properties of this matrix, and we actually want this one to be singular, because otherwise that equation has nothing but the trivial solution. So what's the determinant of this? Well, I don't know. You might remember that the determinant involves taking this and multiplying it by the determinant of that submatrix, plus this times the determinant of some other submatrix. Or you may remember it as-- take one from column A, one from column B, and one from column C. And depending on whether the number of switches in direction is even or odd, you add a minus sign. You don't need to remember the exact formula; the key thing is that we're taking something from here, and something from there, and so on, never repeating columns or rows. And there are large numbers of ways of doing that, which is why it's actually computationally expensive. But the key thing is that we can get a term of as high an order in lambda as the product along the diagonal. So if we look at the product of all of the terms on the diagonal, that'll be part of the determinant, one of many parts. And it's got lambda to the n in it, right? We're taking m11 minus lambda, m22 minus lambda, and so on. So what's the determinant? Well, it's a polynomial in lambda, and it's an n-th order polynomial in lambda. And polynomials of n-th order have how many roots? N, good. So we're going to get n roots. And so that means that not only is there a solution to this, but there are going to be n solutions, and those are the things that we'll be looking for.
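The claim that det(M − λI) is an n-th order polynomial whose n roots are the eigenvalues can be checked numerically; NumPy's poly function returns the characteristic polynomial coefficients of a matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
M = (A + A.T) / 2                       # a random real symmetric 4x4 matrix

coeffs = np.poly(M)                     # coefficients of det(lambda I - M)
roots = np.sort(np.roots(coeffs).real)  # its n roots (real for symmetric M)
eigs = np.sort(np.linalg.eigvalsh(M))   # eigenvalues computed directly
```

For n = 4 the characteristic polynomial has 5 coefficients (degree 4), and its roots match the eigenvalues returned by the dedicated symmetric solver.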
So just to make this concrete, let's look at a very simple example, also because that's the one that we're dealing with right now with, say, the optical mouse. So we already mentioned that just saying that the solution is unstable, that the noise gain is high, isn't enough when we're dealing with multidimensional problems. It's fine for the 1D case: the noise gain is a scalar; we're done. But in the case of the optical mouse, we're recovering u and v, so it's a two-dimensional problem, and it may very well be that the error in certain directions is very different from the error in other directions. So we want to have a more nuanced picture. We can have a kind of a gross statement that says, OK, it's bad if the determinant is small. That's a good start, but that's a scalar constraint, and it doesn't tell you that actually if you move in this direction, you have very good knowledge of the motion. But if you move in some other direction, you don't. OK. So let's look at this. Well, following what we did over there, that's the condition for that set of homogeneous equations to have a non-trivial solution. So homogeneous equations, we don't run across a whole lot. We almost always have inhomogeneous equations, and then they have a unique solution unless the determinant is 0. And homogeneous equations have some very interesting properties, and we'll just look at them a little bit. OK. So what is the determinant of that? Well, for the matrix with rows (a − λ, b) and (b, c − λ), it's just (a − λ)(c − λ) − b². And if I multiply that out, I get a second order polynomial in lambda that I can solve using the usual formula for a quadratic. Under the square root we get B² − 4AC, which here is (a + c)² − 4(ac − b²). And when I multiply that out and rearrange the terms, I get (a − c)² + 4b². So the eigenvalues are λ = ((a + c) ± √((a − c)² + 4b²)) / 2. So those are the eigenvalues.
So there will be two vectors that have this property that when you multiply that matrix by the vector, you get a vector in the same direction. And the length will be changed by this quantity, that's how much they'll be magnified or demagnified. And we're very interested in that because typically what's on the right-hand side is the measurement, including the error, and so these eigenvalues will determine how much the error will be magnified. So just for cutting down on the writing, let me give this a name so that I can abbreviate things. OK. So those are the eigenvalues, and I'll be interested in how big they are. Now in our case with the optical mouse, these were integrals of Ex squared and integrals of Ex, Ey, and stuff like that, and then we can plug that into the formula to find out what those eigenvalues actually are. Now in practice, if someone's finding eigenvalues and eigenvectors, they just go to MATLAB or whatever. But I want you to get a feel for this at this trivial level of 2 by 2. And then if you're going to do a 13 by 13 matrix, of course you're not going to do it by hand, but it's useful to do this and get a clear idea of what it's all about. OK. So those are the eigenvalues, what about the eigenvectors? So they are going to be special directions in which this property holds. And so how do we solve for those? Well, now we have to solve the homogeneous equations. So we've got this matrix, whose first row is (a minus lambda, b). So this is going to be our eigenvector, the one that corresponds to the lambda. We have two choices of lambda that we can pick. And so we're now assuming that we'll be plugging in a specific value of lambda, namely one of these two. So what does this mean? Well, it means that there's a relationship between this vector and that vector, and between this vector and that vector, namely they're orthogonal. So why is that? Well, this 0 up here comes from multiplying this row vector by this column vector. So it's (a minus lambda) times x plus b times y is 0. That's the first equation.
And then the other 0 comes from multiplying this row vector by that vector. So I get bx plus (c minus lambda) y is 0. And so I can think of this as a dot product, and I can think of this then as vectors being orthogonal to one another. So this means that (a minus lambda, b) is perpendicular to (x, y). And by the way, (b, c minus lambda) is also perpendicular to (x, y). So in solving these homogeneous equations, I can note that basically I'm saying that the solution is perpendicular to the rows. And so I can write down the answer very easily, particularly in the 2 by 2 case. All I need to do is find something that's perpendicular to (a minus lambda, b), and so how about this? If I multiply that by (a minus lambda, b), I just get two terms with opposite signs, and they cancel out. So there's an eigenvector. How big is it? As we said, in a sense it doesn't matter because any multiple of an eigenvector is also an eigenvector. OK. So that's one of them. Actually, that's two of them because we can plug in the two different values of lambda and we get two different vectors. But why focus on the first row? This should be true of the other row as well. So let's try it on the other row. Well, on the other row-- here's a perpendicular to the other row. If I take the dot product of that with the second row, I get two terms that are equal in magnitude and opposite in sign. So they cancel out. So that's also an eigenvector. And now that's getting a little confusing because now I can plug in the two different values of lambda, so I get four eigenvectors. Well, it turns out those actually point in the same direction, and they are the same if lambda is given by that expression up there. And I'm not going to bore you with the algebra, it's pretty straightforward to show that. So altogether, we can actually write the result-- if we want the unit eigenvector, we can normalize this. So we just divide by the square root of the sum of squares of these two terms, and we get something like this.
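The perpendicular-to-the-row trick can be checked numerically. This little sketch (the matrix values are my own) builds, for each eigenvalue, a vector perpendicular to the first row (a minus lambda, b) of M minus lambda I and verifies that it really is an eigenvector:

```python
import numpy as np

# The same kind of 2x2 symmetric matrix [[a, b], [b, c]] (example values).
a, b, c = 3.0, 1.0, 2.0
M = np.array([[a, b], [b, c]])

lam = np.linalg.eigvalsh(M)   # the two eigenvalues

# For each lambda, the vector (b, -(a - lambda)) is perpendicular to the
# first row of M - lambda*I, and should satisfy M v = lambda v.
ok = all(
    np.allclose(M @ np.array([b, -(a - l)]), l * np.array([b, -(a - l)]))
    for l in lam
)
```

Both choices of lambda pass, matching the claim that the two constructions (from either row) give the same directions.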
And the plus minus applies to the two different solutions, the top sign always corresponding to one case-- I should mention that this is all written up in this little four-page pamphlet that's in the materials. So you can check on it there. I'm leaving out some details. OK. You might think, well, gee, this is what it is for 2 by 2. This must be pretty complicated if I get to 10 by 10, and it is, and that's why typically you use some prepackaged tools. But it's good to see how this works. So for an n by n real symmetric matrix, we're going to have n of these, typically, and there'll be corresponding eigenvalues and eigenvectors. And they allow us to talk about error amplification, and let's see how that works, what's the relationship to error amplification. So I think I mentioned this before, but we were thinking of our vectors as column vectors, and we can alternatively, or equivalently, think of them as skinny matrices. And so I can, on the one hand, write a dot product this way, or I can write it like this. And so what is that? Well, a1, a2. And obviously, if I multiply these two skinny matrices, I get a scalar, which is just the product of a1 and b1 plus the product of a2 and b2, and so on. So that's just the dot product. And that's a convenient notation, and we extend that to matrices as well. So in our context over here, what we can show is-- there's a line of algebra which I'm leaving out, which is in this pamphlet. So you can verify it. So that's surprising. What this is saying is, OK, I can take e1, multiply the matrix by that, and then I get something new, and I take the dot product with the other eigenvector, and it's the same as flipping it. Like, I don't know, M could be a rotation. So it's like I'm rotating e1 and then taking the dot product with e2, and that's the same as rotating e2 and taking the dot product with e1. Well, if that's the case, then we can show that these have to actually be orthogonal. So that's where we're going.
We want to show that these eigenvectors are actually orthogonal. So let's first look at this. How do I know that? Well, because the whole point of these eigenvectors and eigenvalues was that Me1 is going to be in the direction e1, but a different length, multiplied by the eigenvalue. And this one here is going to be-- Me2 is the vector e2 just multiplied by the eigenvalue. So the part that takes a few steps is proving that these are actually equal, but it's in the paper. OK. Now what does this say? Well, I can gather this up, and that tells me that e1 dot e2 is 0. So that means they're perpendicular. Well, there's another thing that can happen. If lambda 2 is the same as lambda 1, then that doesn't follow. So when I take the roots of this polynomial, and the roots are all different, that means that all of the eigenvectors are orthogonal. So that's what that says. And if they're not all different, if there's a multiplicity, it turns out I can pick the eigenvectors to be orthogonal. So the example would be-- the eigenvectors are in a plane, and I can pick any two vectors in that plane. All of the vectors in that plane are eigenvectors, but I can pick two of them that are orthogonal. So yes, if two of the roots happen to be the same, then this doesn't force the eigenvectors to be orthogonal, but I can just pick, out of all the possible ones, two that are. And the idea is to construct a whole coordinate system. So I've got all of these orthogonal vectors, and they, of course, define a basis for the vector space. So in the 2 by 2 case, what happens? Well, I get an eigenvector here, and the other one better be perpendicular to it. And of course, I can use those to talk about points in the plane just as I can use my original x and y-axes. They form a basis. And so that means that I can write any vector as a weighted sum of these eigenvectors because they form a basis. So what are these alpha i's? How do I find them?
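Here is a quick numerical check of that claim (the matrix values are my own): for a real symmetric matrix with distinct eigenvalues, the unit eigenvectors come out mutually orthogonal, so stacked as columns they satisfy E transpose E equals the identity:

```python
import numpy as np

# A real symmetric matrix with distinct eigenvalues 1, 2, 4 (example values).
M = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

lam, E = np.linalg.eigh(M)   # columns of E are unit eigenvectors

# e_i . e_j is 0 for i != j and 1 for i == j, so the Gram matrix
# E^T E should be the identity.
gram = E.T @ E
```

`eigh` is the symmetric-matrix routine, which guarantees real eigenvalues and returns an orthonormal set even in the repeated-eigenvalue case, exactly the "just pick two that are orthogonal" convention described above.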
Well, that's pretty easy because I just do v dot e-- I don't know, let's call it j. We don't want to get in conflict with i, which is a dummy variable in the sum. And we said that the eigenvectors are orthogonal to each other. So that means that all of these dot products are 0 except one, which is ei dot ei. And so this is going to give me alpha i, right? If I pick the unit vector version. OK. So this is the sum, but I don't care about most of the terms because they're multiplied by 0, this dot product. The only one I care about is where these two are the same vectors. And in that case, if I pick unit vectors, so that it's 1, I just get alpha i. So it's very easy to re-express any vector in terms of the eigenvectors. OK. So then let's look at Mv, which is the sum of alpha i times M ei, and that's the sum of alpha i lambda i ei. So now we get to the juicy part, the conclusion here, which is that if we take an arbitrary vector measurement, and we multiply the matrix by that measurement to obtain our unknown variables, what happens is that different components are magnified by different amounts. So those directions are special in that along those directions, we know how much the error is magnified. And so in this case, obviously if we have a large eigenvalue, the component in that direction will be magnified a lot. If we have a small eigenvalue, it will be minified, it'll be diminished, and we'll be happy. OK. So that's the connection to the error gain. But we mostly are dealing with inverses. So this is what happens if we multiply by a matrix, M, but we're typically solving inverse problems where we're dealing with the inverse matrix. And so what are the eigenvalues and eigenvectors of the inverse matrix? So to see that, I want to first introduce something else we're going to need. So that looks like what we did for the dot product, but not quite. Now it's the other one that has a transpose on it, and we'll use this notation quite a bit. And so let's write this out, a1, a2, an.
And that's not a scalar, it's not a dot product, because what you're going to do is multiply the first term here by the first term there, a1 b1, and then the first term here by the second term there, a1 b2, dot dot dot, a1 bn. And then we go down to the next row here, we get a2-- so again, if we treat these vectors as skinny matrices, this is what we get. So altogether, the dyadic product of two n-vectors is an n by n matrix, and that'll be handy for us. So let's apply that idea. So we've got v, we expanded it. Right over there, we said that because these eigenvectors form a basis, we can express an arbitrary vector in terms of them, and then we found the actual weights. And when we plug that in, we get this formula. And so we can now rewrite that in various ways. So one way to rewrite this is v transpose times ei. Or another way of writing it is ei, ei transpose, v. So this is just taking the dot product and rewriting it. And how do we get here? The dot product is commutative. We can flip the v and the ei. And therefore, we can get this expression for the dot product just as easily as that one. And then taking a scalar times the vector is the same as multiplying the vector by the scalar, so we get this. And why is this interesting? Well, because when we do these matrix products, they are associative, so we can rewrite this as ei, ei transpose, v. And up there, we have a sum over i, and v is not dependent on i. So we can actually factor this out like this. So these terms all depend on i. So in the sum, every term has a different one. But the v is the same, so I can separate that out. And wait a minute, I'm saying v equals something times v? So what is this? That's the identity matrix. So this is a very long way around to write the identity matrix. And the reason we're doing it is because with a slight change, we can now write the matrix, M, using the same idea. And so we can actually get to this version of M. And why is this interesting?
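Both identities at the end of this argument can be verified in a few lines (the matrix values are my own): the sum of the dyadic products ei ei transpose is the identity, and weighting each term by lambda i reconstructs M itself:

```python
import numpy as np

# Real symmetric example matrix (my own values).
M = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
lam, E = np.linalg.eigh(M)   # unit eigenvectors as columns

# Sum of dyadic (outer) products e_i e_i^T over the unit eigenvectors
# gives the identity ...
I_sum = sum(np.outer(E[:, i], E[:, i]) for i in range(3))

# ... and weighting each term by lambda_i reconstructs M itself.
M_sum = sum(lam[i] * np.outer(E[:, i], E[:, i]) for i in range(3))
```

`np.outer` is exactly the skinny-matrix product a b transpose described above.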
Because now we can see the properties of the eigenvalues that we're interested in. And then how do we check this? Well, we can perform operations of M on some vector and see whether it produces the same result. So we can check this by doing, for example, M times ei, checking that this is true. Let's make it-- I guess we've got ei here, so we'll make this j. That's very easy to check. And if that's true, then this is true. More interesting is this one. So this is the one we finally get to that's the most interesting. And how do we check this? Well, an easy way is to take this expression for M, multiply it by this expression for M inverse, and show that you get this expression, the identity matrix. OK. So this may be going a little bit fast. But remember, it's all there, and it all fits on four pages like those Facebook comments that say, oh, and it's only three pages long. OK. So why are we interested in this? Here's the key. This matrix we're using to solve our vision problem, which takes a measurement and turns it into some quantity of interest to us, displacement, velocity, whatever, it multiplies components of the signal by 1 over lambda i. So it's bad if lambda i is small. So that's where we're going with this. So often, the way we can understand the performance of one of these methods is to look at what the noise gain is. And when we get up from one dimension, this is the way to do it, we take that matrix, we find its eigenvalues. And if some of them are small, we know that it's an ill-posed problem. It's not going to have a stable solution. If you make a small change in the measurement, you'll get a big change in the result. So I know I'm saying this again, and again, and again, and it's because it's key. It's important. So for example, in our optical mouse situation where we just have the 2 by 2 case, we end up with a diagram like this where we have two directions that are eigenvectors.
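Here is a small sketch of that noise-gain story (the near-singular matrix is a deliberately extreme example of my own): the inverse has eigenvalues 1 over lambda i with the same eigenvectors, so a tiny measurement error along a small-eigenvalue direction produces a big error in the answer:

```python
import numpy as np

# Nearly singular symmetric matrix: one eigenvalue of 1, one of 1e-3.
M = np.array([[1.0, 0.0],
              [0.0, 1e-3]])
lam, E = np.linalg.eigh(M)   # eigh sorts ascending: E[:, 0] is the weak direction

# Build the inverse from the eigen-decomposition: same eigenvectors,
# eigenvalues 1/lambda_i.
Minv = sum((1.0 / lam[i]) * np.outer(E[:, i], E[:, i]) for i in range(2))

# A small perturbation of the measurement along the weak direction ...
delta_E = 1e-3 * E[:, 0]
# ... is amplified by 1/lambda = 1000 into an error of order 1.
amplified = Minv @ delta_E
```

The same matrix applied along the other eigenvector would leave the perturbation essentially unchanged, which is the "error in certain directions is very different" point made earlier.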
And if one of them has a small eigenvalue, then it's going to be hard for us to accurately compute the motion of the mouse. And it turns out that in that case, one of them is the direction of the isophote, and the other one is the direction of the gradient. So remember that isophotes are just lines along which brightness is constant, and they're very handy for drawing things on the blackboard because I can't draw gray levels. And they have the property that they're perpendicular to the gradient, where the gradient is just the two-vector of derivatives with respect to x and y. And it turns out that the eigenvalue that corresponds to the isophote direction is very small, in the ideal case 0, and the one in the gradient direction is not. Inverting that, it means that in the isophote direction, any small error will be magnified hugely, whereas in the other direction, the gradient direction, it's OK. And so that corresponds with our understanding that if you move this image in the gradient direction, things change. This isophote is now down here, and the brightness here has changed. Whereas if you move it in the direction of the isophote-- by definition, isophote meaning constant brightness, the brightness tends not to change, or not change by much, and so that's then a motion that is not easy to detect accurately. So this is kind of a little story about eigenvectors and eigenvalues. And we'll see that they'll play a role, and it's not just in error analysis, but this is one of their main uses. OK. Back to slightly less abstract stuff, photometric stereo, and we discussed that somewhat last time. So the idea is that a single brightness measurement doesn't give us enough information to recover surface orientation because surface orientation has two degrees of freedom, and so we need more constraints. And one way to get more constraints is to take more pictures.
But of course, if they're under the same conditions, they'll be just the same picture with a little bit of different noise. But the difference in noise isn't going to buy you much other than maybe you can average them to reduce the noise. So we ended up with a system where we took three pictures to make things simple under three different lighting conditions. OK. And we started off with a real simple case where we said that the brightness was proportional to cosine of the incident angle, and that, of course, is the dot product of the surface normal and the direction to the light source. And in a minute, we'll talk about, what if the surface doesn't satisfy that constraint? Because obviously, real surfaces don't. They may approximate this. Some surfaces approximate this rather well, but we want to deal with arbitrary surfaces. OK. So we might have, for example, one brightness measurement. So we're now talking about a particular pixel, right? So we're going to focus on doing this at every pixel, and so looking at a particular pixel, and this is the brightness, the gray level we measure there. And I've put in this rho, which I'll call the albedo, as a way of describing how much light the surface reflects. So for a white surface, rho would be 1, and for a black surface rho would be 0, and a real surface would be somewhere in between. And one reason I do that is because I can, and because it makes the problem actually easier-- because now I've got three unknowns. And if I have three constraints that are linear, I can use my great linear equation solving methods to solve them. In a way, two images would be enough because if we have two equations and two unknowns, there's usually a finite number of solutions. The trouble is the finite number may not be 1. And in this case, the finite number is 2. And so I've disambiguated hugely. Before, I didn't know what the orientation was at all; now I know it's one of these two possibilities.
But it's easier to deal first with a case where we can completely disambiguate it by introducing a problem with three unknowns and three measurements. OK. So then I use a different light source, and I get a second measurement. And then I use a third light source, and I get a third measurement, and we did this last time. And this is where we use that notation for a dot product, and we use it to talk about-- let's do that [INAUDIBLE]. So what's this? This is a 3 by 3 matrix. S1 transpose is the vector to light source one, just flipped from being a column vector into a row vector. So the first row of this matrix is S1 just turned on its side, and this is S2, and so on. And so when you multiply this matrix by that vector, the first thing you get is the dot product of S1 and n, and that of course is E1. OK. And here, I'm using the shorthand notation. So I'm absorbing that rho into the n. And that's convenient because now, instead of having to deal with a unit vector, I have an arbitrary 3-vector. And that means I don't need to deal with a nasty nonlinear constraint. The thing that leads to two solutions is a quadratic, and why do we get a quadratic? Well, because we have this second order constraint on n. But by doing this, we avoid having to do that. OK. So that, I can just write as a matrix S times vector n is vector E. So the vector E is just me stacking up my three measurements. So again, to be clear, I take one picture, I look at this pixel, I get E1. Then I turn on a different light source, I take a picture. At that same pixel, I get E2. And then I turn on the third light source, I look at that particular pixel, and I get E3, and I stack them together to make this vector, E. And so the solution is very simple and extremely cheap to compute. In particular, I can precompute this matrix assuming that the light sources are in fixed positions.
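Putting this pixel-wise recipe into code: a minimal sketch (the light directions, albedo, and normal are made-up example values) that simulates the three brightness measurements and then recovers n as S inverse times E, splitting it back into albedo and unit normal:

```python
import numpy as np

# Hypothetical unit vectors toward three light sources; each row of S
# is one source direction, transposed from column to row.
s1 = np.array([0.0, 0.0, 1.0])
s2 = np.array([1.0, 0.0, 1.0]) / np.sqrt(2.0)
s3 = np.array([0.0, 1.0, 1.0]) / np.sqrt(2.0)
S = np.vstack([s1, s2, s3])

# If the sources were coplanar, det S would be 0 and the inverse
# below would blow up.
assert abs(np.linalg.det(S)) > 1e-6

# Simulate one pixel: albedo rho and unit normal n_hat (made-up values),
# with rho absorbed into n so the unknown is an arbitrary 3-vector.
rho = 0.8
n_hat = np.array([0.3, -0.2, 1.0])
n_hat = n_hat / np.linalg.norm(n_hat)
n = rho * n_hat

# Three brightness measurements E_i = s_i . n, stacked into a vector ...
E = S @ n

# ... and the recovery: invert S once, ahead of time; per pixel it's
# just a 3x3 matrix times a 3-vector.
S_inv = np.linalg.inv(S)
n_rec = S_inv @ E
rho_rec = np.linalg.norm(n_rec)      # the albedo is the length
n_hat_rec = n_rec / rho_rec          # the unit normal is the direction
```

Swapping the third source for a linear combination of the first two makes the determinant check fail, which is exactly the coplanarity failure discussed next.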
So if I know where the light sources are, I know S1, S2, and S3, I can just construct this matrix and invert it ahead of time, and then this is just a multiplication of the 3 by 3 matrix by a 3-vector. And we talked about this last time, and we also said that's assuming that this matrix doesn't give us problems. So the problem would be where the matrix is singular, then we can't invert it. Or if it's nearly singular, we can't usefully invert it. So when does that happen? Well, it happens when the rows are not linearly independent. So that's when this blows up. And for example, let's try this: S3 is some linear combination of S1 and S2. So that's how we create problems, right? If the third row is just a combination of the first two rows. Or in general, if there's a linear combination of the three rows that gives us 0. OK. So why is this bad? Well, because this means that E3 is alpha E1 plus beta E2. I just need to take the dot product of this equation with n, and I get this result, and that tells me that the third measurement is redundant. It's not telling me anything new. So no wonder it blows up, right? It's like you're cheating. For example, if you made the third row the same as the first row, it's pretty clear that the matrix is singular, and it's also pretty clear that you're not getting any new information. So it all makes intuitive sense. So I can think about this as a picture. This is a case where S1 plus S2 plus S3 is 0, and that's clearly a bad case. And I can be fancier, and I can put multipliers on these, I can put alpha on this and beta on that, and if you'd like, gamma on that. So if some multiple of those three vectors adds up to 0, this won't work. And what is that condition? How can I geometrically say what the condition is for this to go wrong? So the vectors S1, S2, and S3 are-- for this loop to close. Remember, this is now in 3D. So we've got a vector going a certain way, of a certain length, then another vector.
You start at the tip of that first vector and you put the tail of the second vector there. And then when you get to the third vector, it closes. So what is the condition on those three vectors, geometrically, that makes that possible? AUDIENCE: [INAUDIBLE]. BERTHOLD HORN: They're in the same plane, right? If I had x, y, and z axes, obviously you can't do that. One goes off in the x direction, one goes off in the y direction, one goes off in the z direction, they're not going to close. So the problem is if they're coplanar, so that's bad. That's when this method fails. And so obviously when you do this, you should place them so that they're kind of as far as possible from being coplanar. For example, you could put them at x, y, z axes. So you could have an arrangement where the object is down here. And then you can erect some rectangular coordinate system, and S1 goes there, and S2 goes there, and S3 goes there. And now they're not coplanar, so this won't happen, it won't fall apart. And then there are questions of, well, which is best? Obviously if I make them almost coplanar, I'm going to get unstable results, some eigenvalue will be small. And if I take the inverse, the inverse of the eigenvalue will be large, and so the noise amplification will be large. OK. So if you're doing this in an industrial setting, you are in control of where the light sources go. So that's all you need to know, don't make them coplanar. And probably, the closer you can get to this arrangement, the better, where they're actually at right angles to each other. So this is where I want to talk briefly about the Earth, moon, and the sun. So let's suppose that the moon is made of green cheese, and green cheese has a Lambertian-reflecting property. And so we are on Earth, and we're trying to get a topographic map of the moon, which would be a useful thing to do before people land there. Of course, we now did that a long time ago. But before people landed there, there was a lot of uncertainty.
People didn't know, for example, whether the solar wind had pummeled the surface to such an extent that there was 10 meters of dust. And if you were landing on it, you'd just disappear into that dust. Anyway, so there was a lot of interest in trying to figure out how tall are these craters. We can see these nice craters, but what's the slope? We don't want to land on something that has a very large slope. So it would be really great if we could use photometric stereo because the sun does illuminate the moon in different ways at different parts of the cycle. So here, as you know, the moon is always showing the same face to the Earth, and the other side, which is stupidly called the dark side of the moon, is not visible from the Earth. And so if you think about-- let's take a particular point here-- while it's in the cycle here, the sun is over there. But here, the sun is in that direction. And here, the sun is in that direction. So we could easily arrange three, 10, however many measurements at different positions in the orbit, and we can use this method. Well, there are some assumptions. One of them is that it has Lambertian reflectance, which it doesn't, but we'll try and fix that later. And annoyingly, it doesn't work, and the reason is that this plane here that contains the moon's orbit is pretty much the same plane as the plane that contains the orbit of the Earth around the sun. So you can see what's going to happen: these three vectors, or more, are coplanar, or almost coplanar. The moon's orbit is a couple of degrees off the Earth's orbit around the sun. So you get a tiny change, but it's not enough to make a useful measurement. And so it's amazing. We've just done something very simple here, and we've already reached a very profound conclusion, which is that you can't get the moon's topography from Earth measurements as it goes through its orbit and different parts are illuminated differently as it does so, which is a pretty amazing thing. OK.
So let's talk a little bit about the Lambertian assumption. So Lambert was this monk, and he did these experiments. Now why was he a monk? Well, because there was a time in our history in the Western world where the only people who could learn anything were in the religious domain. Ordinary people couldn't read or write, and they weren't allowed to read or write, and monks did all of these interesting things like figure out how to brew rum and whatnot. And this particular guy, he had wrapped up his lunch in paper, and it contained some oily fish. And so a piece of the paper was infused with oil. And I think you've seen this, a white piece of paper with some oil on it, the oily area looks kind of darker. And then if you hold it up and move it around, you see that it has different reflecting and transmitting properties. And so what he discovered was that he could make measurements of light using this tool. Today, of course, we'd use a $100,000 computer and PIN photodiodes, but he used a piece of paper. And so the idea is this. Here's our piece of paper and here's the fatty spot, and what happens is that the paper is not absorbing any light. It's a white paper. Ideally-- it could absorb a little bit, but let's assume it absorbs none. So it only has two choices. The one is sunlight arrives and is reflected back, and a little bit of it goes through to the other side. Now in the fatty part, sunlight arrives and a little bit of it is reflected back, but a lot goes through. OK? So that's the difference between the two parts. The fatty substance basically fills in the air voids, and so the surface is no longer as reflective as before, but it's not an absorbing material, so what happens is the light just goes through. OK. So why would this be of interest? It allows you to compare two illumination intensities, and you do it basically by illuminating this side with one source, and this side with the other source, and then you balance it so that you can't see the fatty spot.
When can that happen? Well, we could write out equations, but I think you can see what's going to happen: if the same amount of light is coming in from this side as from that side, then these will balance. This will appear equally bright when looked at from here as that. So that's a very powerful idea. And he didn't really do this with this lunch paper, but that was where he went. He had white paper, which was pretty precious at the time. You may remember that people would sometimes write something-- and then because paper was expensive, they would write something at 45 degrees on top of it and maybe more. So anyway, he had this nice piece of paper. And this way, he could compare brightness. And one of the things you can do, then, is get the inverse square law. So he might put four candles at twice the distance on this side and one candle on this side, right? And so it should match. So he was able to do all of these amazing experiments with this very simple apparatus and get the inverse square law. And of course people like Newton would say, well, it's obvious, you don't need to do an experiment. The energy is going over the surface of a sphere, and the surface of the sphere is 4 pi r squared, so it falls off as 1 over r squared. So anyway, then the next thing he did was he wondered how surfaces reflect light when they're illuminated from different directions. And using methods like this, he came up with what's now called Lambert's law, which is that it's cosine of the incident angle-- the brightness is proportional to cosine of the incident angle. Now of course, part of that is, again, not something that you need to do an experiment about. We already talked about foreshortening, and we know that the amount of light falling on the surface varies as the cosine of the incident angle. But what he talked about was, how bright does it look? In other words, how much light does it reflect, and in what direction? OK.
Now to talk about this in more detail, we need to have a way of talking about surface orientation because the brightness is going to depend on surface orientation, and it may do so in a more complicated way than Lambert. I put law in quotation marks because it's not a law, it's a phenomenological model. What does that mean? That means that you postulate a particular way something behaves and then give it a name. And it's not like there's a real surface that does exactly this. Many real surfaces like paper are good approximations, but they don't do exactly that. OK. We already talked a little bit about surface orientation. We said that we can erect a unit normal. So here's a little patch of the surface, and we've erected something that's perpendicular to the surface. And that's a way of talking about the orientation, and we mentioned that it has two degrees of freedom because it's a 3-vector, but there's a constraint that it's a unit vector. So 3 minus 1 is 2. So that's one way of talking about it. And for an extended surface, you can imagine that we do this for little facets all over the surface, and they'll all be pointing different ways. But if you pick a small enough area, as long as the surface is reasonably smooth, then we can reduce it to this case. We have to exclude things that mathematicians will construct, like a surface that is 1 where x is a rational number and 0 where x is an irrational number; we can't do that there. But for real surfaces, we can facet it. And for different facets, we can do that. Then we also mentioned, since it's a unit vector, we can talk about orientation in terms of a point on the unit sphere, and we'll find that pretty handy later on. For example, if you want to talk about all possible orientations, that's the whole surface of the sphere, or there are certain operations that we'll be doing on the surface of the sphere. So those are representations, but we need something else. So let's look at this.
So here's a Taylor series expansion where the dots indicate higher order terms, which we can ignore as long as we make the steps small enough. OK. So the difference of these two, we can write as delta z. So that's the same equation. So I'm introducing p and q as shorthands for these derivatives. And this is just analogous to what we've done before where we introduced u and v as shorthand for dx/dt and dy/dt, partly because it's too much bother writing all that stuff, and partly because it makes it look too intimidating when it's actually very simple. So p and q are slopes. And in fact, (p, q) is the gradient of the surface. So remember how I said that one way to think about an image is that it's height above some ground level. So we have x and y in the image plane, and then we can plot the brightness as height above that. And then when we talked about the brightness gradient, Ex, Ey, I was saying that it's the gradient of that surface. Well, here's a case where we actually are talking about a surface in 3D, and this is its gradient. It's dz/dx, dz/dy. OK. One way to make a picture of that is this. OK. So this is our surface with normal vector, and this edge is delta x, and this edge is delta y. And this part here is q delta y, and this part is p delta x. So this is a diagram that basically illustrates this idea. I take a small step in the x direction, delta x, and the surface goes up by dz/dx times delta x. Then I take a small step in the y direction, and the surface goes up by dz/dy times delta y-- or it might go down; just in this particular picture, it goes up. So this is another way of understanding what this equation is saying. Now we'll find that there are places where the unit normal notation works for us, and there are places where the gradient notation works for us, and so we need to have ways of switching back and forth.
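The first-order step above, delta z approximately p delta x plus q delta y, is easy to check numerically on a concrete surface (the surface z = x squared + 3y is my own example, not from the lecture):

```python
# Example surface z(x, y) = x^2 + 3*y, so p = dz/dx = 2x and q = dz/dy = 3.
def z(x, y):
    return x ** 2 + 3.0 * y

x0, y0 = 1.0, 2.0
p = 2.0 * x0          # dz/dx at (x0, y0)
q = 3.0               # dz/dy at (x0, y0)

# Small steps in x and y: the true change in height ...
dx, dy = 1e-5, 1e-5
dz_true = z(x0 + dx, y0 + dy) - z(x0, y0)

# ... matches the linear prediction p*dx + q*dy, up to higher order terms.
dz_lin = p * dx + q * dy
```

The leftover discrepancy is the dropped second-order term, of size dx squared, which is why the approximation gets better as the steps shrink.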
And this is very common in computational problems where some problems are easily done in one domain, and some are easily done in another domain. So it ends up being a problem of conversion. Like Cartesian coordinates and polar coordinates, some things are easy to do in polar coordinates, some are easy to do in Cartesian coordinates so you just want to be able to convert back and forth, and so same here. So how are these related? How is n related to p and q? Well, one thing we can do is look at this surface, which has a normal n. Any line in the surface has to be perpendicular to n, right? That's the whole idea of n is perpendicular to the surface, meaning it's perpendicular to any line in that surface. And if I have any two lines in the surface that are not the same, then I'm done because if n is perpendicular to two lines, I can just take the cross product, right? Because the cross product of two vectors is perpendicular to both of those vectors. So I need to find some tangents in the surface. Well, here's one. This is an edge which lies in that surface, and what is its direction? Well, the x component of that vector is delta x, it has no y component so that's 0, and the z component is p delta x. So that's a vector in the red direction. And I can take the delta x outside because it's arbitrary length, and I get that vector. So that's a tangent. I can get another one. I can get this one here. And that one, as I move along that edge, there's no change in x. So this is 0. There's a change in y, right? I'm moving along here, so delta y. And there's a change in height, which is q delta y, and so I can take the delta y of-- OK. So what have I got? I have two directions that lie in the surface. I could have picked some other direction, those just happen to be very convenient to calculate. And so what I need to do is take the cross product, and that should be parallel to n. It'll have some size, which we don't care about. 
We're only worried about the direction of that unit vector. So what's that cross product? It's (-p, -q, 1). So that's the connection between the two representations, and now I can go a little bit further. I can say the unit vector is-- just normalize it. So if you give me p and q, I can compute the unit normal n, and I guess we also want to be able to go the other direction. So p is minus n dot x-hat over n dot z-hat. So if I ever have to go the other direction, I can do this. And it looks intimidating, but it's just saying take the first component of n, namely this minus p, and divide it by the last component of n, which here is 1. So why did I do that? Because what if n isn't a unit vector? Then this will take care of it. If it is a unit vector, I don't need to be this fancy. OK. So I can go back and forth between the two notations representing surface orientation, and what do I do with this? Well, the great thing now is I have a way of mapping, in the plane, all possible surface orientations. I already kind of had that because I had the sphere. I said that all possible surface orientations correspond to points on the sphere. Trouble is, I don't have spherical paper and I don't have a spherical blackboard. So this is a projection of the surface of that sphere into the plane that's particularly easy to understand because this axis is just dz/dx and this axis is just dz/dy. OK. So as with velocity space, this is a very useful construct, but it takes some getting used to. So points in this plane are not points in the image. Points in this plane correspond to different p and q, i.e. different surface orientations. So for example, let's consider that point. So that's the point where p is 0 and q is 0, and that's a plane. Now suppose that the floor here has an xy coordinate system and z comes up out of the floor, what type of a surface would have this point in this representation? Yes. The floor, for example. Anything that's level. Why? Well, because dz/dx is 0 and dz/dy is 0.
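The back-and-forth between the gradient (p, q) and the unit normal just described can be sketched in a few lines. This is not code from the lecture, just a minimal illustration with function names of my own choosing; it uses exactly the relations above: n is (-p, -q, 1) normalized, and p, q are recovered by dividing the first two components by the z component, which cancels any overall scale.

```python
import math

def normal_from_gradient(p, q):
    """Unit surface normal n from the gradient (p, q) = (dz/dx, dz/dy).

    The (non-unit) normal is (-p, -q, 1); dividing by its length
    sqrt(1 + p^2 + q^2) makes it a unit vector.
    """
    norm = math.sqrt(1.0 + p * p + q * q)
    return (-p / norm, -q / norm, 1.0 / norm)

def gradient_from_normal(n):
    """Recover (p, q) from a (not necessarily unit) normal n = (nx, ny, nz).

    Dividing by the z component cancels any scale factor, which is why
    p = -nx/nz works even when n has not been normalized.
    """
    nx, ny, nz = n
    return (-nx / nz, -ny / nz)
```

For a 45-degree slope in x, p = 1 and q = 0, and the round trip through the unit normal gives back the same gradient.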
So if it had any tilt at all, then dz/dx or dz/dy would be non-zero. So the floor has this property. But actually, any surface above the floor that has the same orientation-- so there's an ambiguity here. It's not telling us where this thing is. It's only telling us how it's oriented in space, and so that's going to be a secondary problem. Suppose that we come up with a machine vision method which, for every pixel, allows us to recover surface orientation either this way or in p and q. We still have to patch it together to make a complete surface, but that turns out to be easy because it's overdetermined, unlike most of the problems we deal with, which are underdetermined. OK. So that's that surface. Now if I go over here to p equals 1, let's suppose that x goes to the right and y goes forward, that corresponds to the surface where the slope going to the right is 1, 45 degrees up. That would be pretty steep; I would slide off it with these shoes. So then if I go over here, what's that? Well, that's a surface where the slope in the y direction is 1. So it's the kind of thing you find at EMS to check out your rock-climbing boots, make sure that they are fitting well, and you stand with your toes up like this. And then this one, of course, is a combination where we have a slope to the right and a slope to the forward, and so on. And the further out I go, the steeper it gets. So that's what this plane is. Every point in this plane corresponds to a particular surface orientation. Now in application to machine vision, we find that the brightness depends on surface orientation. So this is a wonderful tool to plot brightness. All right? So just experimentally, I could take a patch of this material, I could orient it flat, parallel to the ground. I measure how bright it appears, and I put that number here, E1. Then I tilt it up 45 degrees, and I put that number over here. And I tilt it up to a slope of 2-- that's an angle of the inverse tangent of 2-- and I plot it here.
So I can plot my brightness values as a function of surface orientation. So this becomes kind of an image, because at every point there's a brightness, but it's not a transformation of an image that you take with an optical system at all. That's potentially confusing, but it's not an image in that sense. So everything in here corresponds to orientation, and then we can plot whatever we want, such as brightness. OK. So where is this going? Well, one idea is we can then invert this. Suppose that we've made this map, then you measure a certain brightness. And then you go back to it and say, oh, that means the surface orientation is such and such. So that's the idea. And you're probably saying, well, really? Because maybe the brightness down here is the same as there, and maybe the brightness down here is the same as there. And in fact, maybe there's a whole line of points that have the same brightness, and another whole line of points that have the same brightness, and just counting equations and constraints tells you what the problem is. If we make one brightness measurement, we can't recover two unknowns. So just as with velocity determination, a single measurement won't be enough. It will be a dramatic improvement over no measurement, though. So if we don't take a measurement, we don't know where we are in this plane at all. We could have any orientation for a surface element. If we take one measurement, we will be constrained to some curve, and then we need more constraint to actually pin it down to a particular orientation. So let's relate that to what we did in our example of photometric stereo, and also our discussion of Lambert. Let's suppose that we have a Lambertian surface. So there's a common confusion, which is that this stuff only applies to Lambertian surfaces. Why are we doing Lambertian surfaces? Because for Lambertian surfaces, I can show you nice diagrams, I can solve equations. In the real world, nothing is exactly Lambertian.
So if you want accurate results, you will have to measure, calibrate, and we'll see how to do that. But for the moment, let's just assume that, magically, we're dealing with a Lambertian surface. So then brightness is determined by the surface normal: brightness is proportional to the cosine of the incident angle, and that's the dot product of the surface normal and the incident direction. And now we want to translate that into this notation, p and q. But one thing that's important is we're looking for isophotes in here. We're looking for these curves. And so we'll be looking at places where this is a constant, but let's just replace-- so the unit vector, n, is (-p, -q, 1), normalized. And we can take the dot product of that with some light source direction, and then we can plot that. But there, it'll be handy to introduce another little shortcut, which is a different way of writing the direction to the light source. So we thought of that equation there as kind of a mixture. We've halfway gone from unit vectors to pq space. Let's go fully. And so the full way is to say, well, to perform the same transformation on that unit vector that we did on n, it's just the same equation. It's just this time we're talking about the vector to the light source rather than the unit normal. And so to make that clearer, that point, (ps, qs), it's in that plane. And what is it? Well, it's the orientation where the incident light rays are parallel to the surface normal, right? So this is the point where-- and so for the Lambertian surface, that's going to be the brightest spot, right? Because the angle between those two vectors is 0. So the cosine of the incident angle is 1, and that's as big as cosine can get. So we picked this particular point, (ps, qs), because it's in that plane-- because it's the one that gives us the brightest surface. So it has some real meaning. Geometrically, it just means that we're illuminating the surface right down the normal.
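Writing out the dot product n · s in the pq notation gives the Lambertian reflectance map. The following is a small sketch of that formula, not lecture code: with n proportional to (-p, -q, 1) and s proportional to (-ps, -qs, 1), the cosine of the incident angle becomes (1 + p·ps + q·qs) divided by the two normalizing lengths, clamped at zero because brightness, being a power measure, cannot go negative.

```python
def lambertian_R(p, q, ps, qs):
    """Reflectance map R(p, q) for an ideal Lambertian surface lit from
    the direction whose gradient-space point is (ps, qs).

    cos(theta_i) = n . s with n = (-p, -q, 1)/|...| and
    s = (-ps, -qs, 1)/|...|, clamped at zero on the shadowed side.
    """
    num = 1.0 + p * ps + q * qs
    den = ((1.0 + p * p + q * q) ** 0.5) * ((1.0 + ps * ps + qs * qs) ** 0.5)
    return max(0.0, num / den)
```

Note the two special cases from the lecture: R is 1 at (p, q) = (ps, qs), the brightest orientation, and the line 1 + p·ps + q·qs = 0 is where brightness goes to zero.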
Another way to think about it is in terms of foreshortening: there's no foreshortening. We haven't tilted the surface relative to the light source. So we don't have the same power spread over a larger area. Here it's concentrated in the smallest possible area. It's the most efficient. It's what you do with your solar collectors. OK. So that means we can rewrite n dot s in this form. So this is starting to look a little bit messy. And what are we doing? We want isophotes. So we want to know, where is this quantity constant? What are the curves in that pq space, in gradient space, where that quantity is constant? Well, I can square this and move things around a bit. Now this quantity here, this is just some constant, right? Because I'm keeping the light source in a fixed position. And then the question is, what kind of curve does this define in pq space? If you multiply it out, you're going to get some constant terms, some terms proportional to p, some proportional to q. The highest order terms you're going to get are second order. So when you multiply it all out, which is messy, you'll have something that's second order in p and q. So the question is, what kind of a curve does that define? And it may be hard to think about it in terms of p and q, but we're really just talking about geometry in the plane. So imagine you have an x and y-- you have an equation that's got x squared, y squared, x times y, x, y, and a constant in it all added up. What curve would that correspond to? Yes. AUDIENCE: [INAUDIBLE] parabola. BERTHOLD HORN: It could be a parabola, yeah. Anything else? AUDIENCE: Ellipse. BERTHOLD HORN: Ellipse, yes. OK, great. Generalize it a little bit more. AUDIENCE: Conic section. BERTHOLD HORN: Sorry? AUDIENCE: Conic section. BERTHOLD HORN: It's a conic section. OK. Yes, those are great examples. Overall, they're conic sections. And so yes, we can have a parabola. We can have an ellipse. We can even have a circle. We can have a line, a special degenerate case.
We can have a point, an even more special degenerate case, and we can even have a hyperbola. OK. So I want to plot this thing, and this is like a preview of what it's going to look like. So if c is 0-- let's look at that special case. If c is 0, then this is 0. And that means that 1 plus ps p plus qs q is 0. And what kind of an equation in p and q is that? That's a line. It's a linear equation, so it's just a straight line. So that's one special case. So there's going to be some sort of line in here, and it's where the brightness is 0. OK. And then another special case is where p equals ps and q equals qs. And that's the special case we talked about here, where the normal is pointing straight at the light source, and we're getting the maximum brightness. So there's some point over here where E is 1, if we normalize it appropriately. OK. And then the rest you can plot using some sort of program, if you'd like. OK. So that's a very handy diagram for graphics because if I have a surface that I'm plotting, I can easily determine the unit normal from it and I can get p and q, or I can get p and q directly. And then I'd just go to this diagram and I read off whatever the brightness is here, and I use that as a gray level, or color, in the image that I'm plotting. What we're doing is kind of the other way around. What we want to do is say, OK, I've measured E as 0.2-- what's the orientation? In this case, that's that curve. So I don't get a unique answer, but it's heavily constrained. It has to be on that curve. OK. Now if I had more constraint, I could improve on this. For example, suppose now I move the light source. Then this whole diagram changes, right? Because remember, this point here is one that basically depends on the position of the light source. It's (ps, qs) from this equation; it's related to the direction to the light source. OK. I don't want to mess up this diagram, but imagine that I mirror image this by moving the light source over here.
So I'll have a second set of isophotes that now intersect with these. And if I make a measurement under those other lighting conditions, then the answer has to be on both curves, and then I get the solution from that. So just for fun, let's suppose that the other curve was like this. They're curves, they're not lines. So it is quite possible for them to intersect in two places. So I have a finite number of solutions in general, not one. And that's why we focus more on the case where we use three light sources instead of two. So just a note on why they are conic sections. Well, that actually also has an easy answer, which is: suppose I take a brightness measurement of a Lambertian surface-- so here's my light source, here's a surface. And from the brightness, I can calculate this angle. But of course, I can spin this vector around this line to the light source. I can spin that around, and what do I get? I get a cone. If I measure a different brightness, it'll be a different angle, I'll get a different cone, and so on. So again, imagine some third surface element: I measure yet another angle, and I get, say, this cone. So there are these nested cones, and now imagine that you cut this with a plane. So this is our pq plane, and ta-da, conic sections. And yes, you won't just get ellipses, you may get a hyperbola. As long as this bottom edge of the cone is actually below this plane, you will not get a closed curve, so parabolas are possible. Yeah. OK. Let me first address that one, the other side of the line. So we said that it's cosine theta except when it's negative. This is where cosine theta i goes negative. And I have purposefully not drawn this part of the diagram because in practice, brightness doesn't go negative. It's a measure of power, so it can't be negative. So if I were to just plot cosine theta, it would continue. But we're taking the max of 0 and cosine theta, so we have this part.
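The three-light-source case mentioned here is classic photometric stereo: three brightness measurements E_i = albedo * (n . s_i) under known unit source directions s_i give a 3-by-3 linear system for the scaled normal. The sketch below is my own illustration, not lecture code; it assumes the three source directions are unit vectors and not coplanar (otherwise the determinant is zero and the system cannot be inverted).

```python
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def photometric_stereo(s1, s2, s3, E):
    """Solve S m = E for m = albedo * n, where the rows of S are the three
    unit light-source directions and E holds the three brightnesses.

    Uses the closed-form inverse: the columns of S^{-1} are
    (s2 x s3), (s3 x s1), (s1 x s2), each divided by det(S).
    """
    det = dot(s1, cross(s2, s3))          # must be nonzero: non-coplanar sources
    c1, c2, c3 = cross(s2, s3), cross(s3, s1), cross(s1, s2)
    m = tuple((c1[i]*E[0] + c2[i]*E[1] + c3[i]*E[2]) / det for i in range(3))
    albedo = dot(m, m) ** 0.5             # length of m is the albedo
    n = tuple(mi / albedo for mi in m)    # unit normal
    return n, albedo
```

With two sources you are intersecting two isophote curves and can get two solutions; the third measurement is what pins the answer down uniquely.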
The other part of the question is, where does it turn from being a closed curve to being an open one? I'll leave that as a puzzle for a future homework problem. Why? Because I don't know the answer. So I'll let you figure it out. OK. That's it for today. So I guess you all know there's a homework problem out that will be due. And please make sure you are signed up on Piazza because a lot of announcements are on there about office hours, homework problems, and stuff like that.
MIT 6.801 Machine Vision, Fall 2020
Lecture 13: Object Detection, Recognition, and Pose Determination; PatQuick (US Patent 7,016,539)
PROFESSOR: What we're talking about today is another patent, 7,016,539, and of course it's there in materials on Stellar. And this is going up one level. So it builds on what we've done before, and its purpose is to detect objects, recognize objects, determine their pose in space, inspect objects, and do a few other things. So what's the problem they're trying to solve? Well, we're trying to manipulate, perhaps using a robot arm or some other machinery, objects out in the world. And we want to know where they are and what they are. And we are going to start off by assuming that we have very accurate edge information. We went through all of that. And so what came before this? So when we look at the patent in a minute, the prior art for doing this had four different components. So one was blob analysis, and more properly, this is binary image processing. So if there's a way of distinguishing object from background, we can create binary images. And binary images have lots of advantages. One of them is they're much smaller, because you've got one bit instead of eight. And they're much easier to process, because there are only so many things you can do with one bit. And so they require less computing power, less memory, and in the days when memory was not so easily available, that was very attractive. So what do you do with a binary image? Well, we're not going to do a whole lot of that. But obviously one thing you can do is find some kind of threshold, and then you get hopefully connected areas, and you can find properties of those connected areas, such as area, perimeter, centroid. So there are a few things that are fairly easy to compute, area, perimeter, and centroid being amongst them, as is Euler number. So what's the Euler number? Well, in this context, the Euler number is the number of blobs minus the number of holes. So for example, the Euler number for that letter would be 1 minus 2, or minus 1.
And it turns out that some of these can be computed in very efficient ways. So first of all, there's Green's theorem, where we saw that certain computations on areas can be changed into computations along boundaries. And area, perimeter, centroid are all in that category. And then there are some low level binary image processing operations. And it turns out that these three can be implemented in a very efficient parallel way by performing local computations. So for example with perimeter, all you need to do is look for places where there's a one pixel next to a zero pixel. And if you can identify all of those places and count them up, you're done. So the idea is that there are certain things that can be done by counting local properties-- area is an obvious one. Is the pixel 0 or 1? And if it's 1, you count it, and otherwise not. And you can do it by going sequentially through the image, but that's of course very expensive. So you can imagine parallel methods, including parallel hardware. People built parallel hardware for this kind of thing, which will compute area, perimeter, and, very surprisingly, also Euler number. So that was one approach. And one problem with it is it involved a threshold. So if you're faced with a real world image, you have to somehow distinguish background from foreground. Maybe they are a different color. Maybe they're a different brightness, maybe different texture. So you have to find some way of separating, and that means you have to make a decision early on. And as we mentioned last time, that's not always a good thing because your decision may be wrong, and there's no way to undo that. So now once we-- oh, I said "shape" in quotation marks. So a lot of these are computed using moments. So this is based on the zeroth moment and this is based on the first moment. And what's a moment? Well, just to refresh your memory, the zeroth moment is where you just take the integral of E(x, y) times 1 times 1-- that is, times x to the 0 times y to the 0.
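The local-counting idea above can be sketched directly. This is my own illustration, not code from the lecture: area is a count of 1 pixels, perimeter is a count of 1/0 4-neighbor adjacencies, centroid comes from the first moments, and the Euler number uses the 2x2 "bit quad" counts, here with one standard formula for 4-connectivity. Every quantity comes from purely local patterns, which is why parallel hardware could compute them.

```python
def blob_stats(img):
    """Area, perimeter, centroid, and Euler number of a binary image
    (a list of equal-length rows of 0/1), all by counting local patterns.

    Euler number (4-connectivity, one common bit-quad formula):
    E = (Q1 - Q3 + 2*Qd) / 4, where Q1/Q3 count 2x2 windows with exactly
    one/three set pixels and Qd counts diagonal 2x2 patterns.
    """
    h, w = len(img), len(img[0])
    # Pad with a border of zeros so every pixel has four neighbors.
    pad = [[0] * (w + 2)] + [[0] + row + [0] for row in img] + [[0] * (w + 2)]
    area = sx = sy = 0
    for y, row in enumerate(img):
        for x, v in enumerate(row):
            if v:
                area += 1
                sx += x
                sy += y
    perim = 0
    for y in range(1, h + 1):
        for x in range(1, w + 1):
            if pad[y][x]:  # count 0-valued 4-neighbors of each 1 pixel
                perim += 4 - (pad[y-1][x] + pad[y+1][x] + pad[y][x-1] + pad[y][x+1])
    q1 = q3 = qd = 0
    for y in range(h + 1):
        for x in range(w + 1):
            quad = (pad[y][x], pad[y][x+1], pad[y+1][x], pad[y+1][x+1])
            s = sum(quad)
            if s == 1:
                q1 += 1
            elif s == 3:
                q3 += 1
            elif s == 2 and quad[0] == quad[3]:  # diagonal pair
                qd += 1
    euler = (q1 - q3 + 2 * qd) // 4
    centroid = (sx / area, sy / area) if area else None
    return area, perim, centroid, euler
```

For a solid 2x2 square this gives Euler number 1 (one blob, no holes); for a 3x3 ring it gives 0 (one blob minus one hole), matching the blobs-minus-holes definition.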
And if E(x, y) is a binary image, that's just computing the area that is 1. And these others are computed in a similar fashion. So those are particularly easy to compute. And what's the problem with this method? Well, in a lot of cases, there is no thresholding method. There's no magic trick that will distinguish foreground from background. In the autonomous vehicle, you're looking at cars ahead of you, and some of them are brighter than the background and some are darker than the background. So that method is limited to cases where we have a clear distinction. It's very important, though, to realize that these methods are widely used, but not on raw images. They're used on the result of some computation, where the computation combines brightness levels in some way to arrive at some conclusion and give you a binary result, and then you need to know these methods. OK, there's also something called binary template, where you use a master image to define the object. So this comes up a lot in these methods, where you have a sample that you consider the standard, often called the golden template. And you try to make it so that its image is as clean as possible and it doesn't have any defects and so on. And you then compute something, for example, a template by thresholding. And that is then used as a method for recognizing and determining the pose. So you take that template and you move it around in the image until it matches what is there in the image. And we won't say much more about that. So the binary template method's accuracy is limited to perhaps a pixel. Then we get to things that were more widely used. So let's call that two. We get to normalized correlation. And in fact, when Cognex started, that was their big claim to fame: that they could do normalized correlation at high speed and get reasonable accuracy, like a quarter of a pixel in position. And so what is that?
Well, the idea is kind of: let's try all possible positions for the match and see which one is the best. So if we have two images, E1 and E2, where perhaps E1 is the golden master, then what we're looking at is something like-- so we take the difference of the two images and we-- well, let's square it. So this is max-- sorry, min. So it's clear what we're doing here. We're taking one of the two images and moving it around and trying to find the alignment where the difference between the shifted image and the other image is as small as possible. And so there's a related method called correlation. Now, notice that right now I'm only allowing for translation. So we're just allowing for shift. And the reason for that is that, of course, we want to be able to deal with rotation, scaling, aspect ratio changes, slant, whatever. But this method is so expensive computationally that translation is about all we can do. Why is it computationally expensive? Well, because basically, you have to look at every pixel for every possible position. So if there are n pixels and you're going to try m different positions, then you need n times m computations. So with n being a million or something or more, and the possible positions being also all of the shifts in the image, maybe half a million, you're talking a lot of operations. These two are related in that if we expand this out, expand out the integrand, we get E1 squared minus 2 E1 E2 plus E2 squared, where we have the correlation appearing in the middle. So maximizing this is minimizing that, assuming that this is constant. And it should be, right? Because this is just: take the second image, square all of the gray levels, and add them up, and that doesn't change. This one's a little bit more questionable because you take the first image and you shift it around and then add up the squares of the pixels. So maybe some of the pixels go out of the frame and stuff like that. But to a large degree, those terms are constant.
So minimizing this thing basically means we have to maximize that. OK, well, while we're here, I want to relate this to our gradient-based methods, because we talked about optical flow, the optical mouse, all of that stuff. And there we were computing an offset, typically a small offset like a fraction of a pixel or maybe a couple of pixels. And so it looks like here's another way of doing it. We maximize the correlation. So let's look at that. So let's suppose that delta x and delta y are small. Then we can use a Taylor series expansion, and we get E1 of (x, y) minus delta x Ex minus delta y Ey. And then we have the difference of these two, which we can take to be the change in time, because we can think of these two images as two images that correspond to two different times. So we're going to, let's see, minimize this. So if I divide through by delta t and take the limit as delta t goes to 0, I get that, because delta x over delta t goes to u. And of course, the minus signs don't matter because I'm squaring. And delta y over delta t goes to v, and so on. So that's all very familiar. Yeah? AUDIENCE: Can you remind us what the goal of all this is again? PROFESSOR: The goal is to find an object, to recognize it, and determine its pose. So right now, what we're talking about is focused on this part, determining the pose. So we have an image of what the integrated circuit is supposed to look like. And then we have a runtime image of an integrated circuit. And we're trying to put it in the right place for the next step of the manufacturing process, for example. And so we need to find out, how far is my runtime image shifted relative to the training image? And then I can move the stage and do the next step-- and so these are various methods of trying to get that alignment. So for example, the sum of squared differences is saying, OK, I'm going to shift one of the images until it matches the other image as closely as possible.
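The equivalence between minimizing the sum of squared differences and maximizing the correlation is easy to see in one dimension. This is a toy sketch of my own, not lecture code: a short template is slid over a longer signal, and with the energy terms roughly constant (as in the expansion SSD = sum E1^2 - 2*corr + sum E2^2), the SSD-minimizing offset and the correlation-maximizing offset coincide.

```python
def ssd(a, b):
    """Sum of squared differences between two equal-length sequences."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def corr(a, b):
    """Unnormalized correlation (sum of products)."""
    return sum(x * y for x, y in zip(a, b))

def best_shift(template, signal):
    """Brute-force search over every offset of template within signal:
    return (offset minimizing SSD, offset maximizing correlation)."""
    n = len(template)
    offsets = range(len(signal) - n + 1)
    best_ssd = min(offsets, key=lambda d: ssd(template, signal[d:d + n]))
    best_cor = max(offsets, key=lambda d: corr(template, signal[d:d + n]))
    return best_ssd, best_cor

print(best_shift([1, 3, 1], [0, 0, 1, 3, 1, 0, 0, 0]))  # prints (2, 2)
```

The brute-force loop also makes the n-times-m cost visible: every offset requires touching every template pixel, which is why the 2-D version is restricted to translation in practice.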
And then interestingly enough, that's actually equivalent to the correlation, which isn't obvious. Why should the integral of the product of the two things be maximum? And then, in turn, for small displacements, that's actually very similar to our gradient-based methods, right? We had that optical mouse. We wanted to know how far it moved-- very similar problem. And interestingly enough, if you read the patents on optical mice-- you probably didn't think they were, but there are dozens of patents on optical mice because, individually, a mouse doesn't cost hardly anything, so you're not going to make much money. But you can sell a billion of them, so people have thought it worth patenting. And almost all the patents recommend the gradient-based methods. And when you take them apart and you reverse engineer them, a whole bunch of them don't. A whole bunch of them use-- sorry, I think I said it the wrong way around. The patents are all about correlation. Hewlett-Packard, Agilent, all of those people have patents on this. But then when you take them apart and you reverse engineer them, which you're not supposed to do, you often find that they're actually using gradient-based methods because they're cheaper, more accurate, and whatever. The limitation is that the movement is not very large. So often, those mice run at a higher frame rate to compensate. A lot of the gaming mice work that way. They have a very high frame rate, so the motion from frame to frame is very, very small. So anyway, so these are all connected. So I just thought I'd throw this in because we're talking about determining the pose. So in the simplest case, pose means the shift. Of course, it will be more interesting in cases where it also includes rotation, and maybe scaling, and so on. So OK, so back to correlation-- so the sum of squared differences is used sometimes, but it has some drawbacks.
So for example, if one of the two images is substantially brighter than the other, then they won't match very well, whereas the correlation will give you a high match anyway. Well, I guess one problem with the correlation is, how high is high? Where's the threshold? I mean, when we plot that versus delta x, delta y, we expect there to be a peak at the correct displacement. And that will still occur even if you-- it's obvious: if you take E2 and multiply it by k, it's really going to disturb the sum of squared differences. But it's not going to remove the peak of the correlation. The peak will just be k times higher. So that's one reason people very often use correlation, even though it has some other drawbacks, computational. OK, so let's suppose we do that. Well, now, there are issues like-- as I mentioned, if the contrast is different, you will get a different peak value. It would be really nice if, instead, you had a number that said 0 is no match, and 1 is a match. Here, we have something that can get big, and you're looking for the biggest. But if it's not very big, do you know? How good a match is it? Are you matching up a picture of a cat with a picture of a dog? You don't know, because you're just looking for the peak. So what to do about that? Well, that's where normalized correlation comes into it. So first, offset-- so if I add a constant to one of the images, it shouldn't change the match, but it is going to disturb this calculation. And it's therefore advantageous to first subtract out the mean. OK, so we'd have a new-- so we compute the average of E1, and we subtract that from everything so that if the overall level goes up or down, it has no effect. We cancel that out. Of course, it does mean now that E1 prime can have both positive and negative values. Well, in fact, it will have both positive and negative values, and the same for E2. OK, so what does this do? This gets rid of any offset in the image brightness.
And so it makes the process less sensitive to changes in the optical setup. But in addition, what we'd like to do is make it independent of contrast, so that in many circumstances, if the contrast changes by a factor of 2, it shouldn't change your match. Of course, there are cases where you want to know about that because it could be an indication of a defect. But in this alignment problem, we may not want to care about that. And so in that case, what we do is we compute the correlation again as before. But now, we normalize it. Oh, if you like, you can turn it into two square roots, whichever way you want to do it. So we're trying to remove as many things that could disturb the computation as possible. One is a shift in mean, and the other one is a shift in brightness and contrast. And so this one here has the great feature that if we have a perfect match, if E1 equals E2, what do we get? E1 equals E2 means this one is the integral of E1 squared. And this one's the integral of E1 squared. This one's the integral of E1 squared. And multiply these. Take the square root, so the whole thing, you can see how it's canceling out. Yeah? Oh, yes, thank you. We went to all the trouble of subtracting out the mean, and then we dropped it. So OK. So the ideal value for this is 1. So if we have a perfect match, that is 1. And you can show that this is limited to the range minus 1 to plus 1. It has to lie in that range. Minus 1 is you, in some sense, have a perfect mismatch. What that means is that one of the images is the negative of the other-- so not very likely in practice. And 0 means, well, there's no real correlation. Of course, if you take two arbitrary pictures, you're going to get some number. It's very unlikely you'll get 0. But you won't get a number close to 1 unless there really is a reasonable match. And so this is called Pearson's correlation.
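The normalized score can be sketched for two flattened patches. This is my own minimal illustration, not lecture code: subtract each mean, then divide the correlation by the square root of the product of the energies, giving a number in [-1, 1] that is unchanged by brightness offset and contrast scaling.

```python
def pearson(a, b):
    """Pearson's (normalized) correlation of two equal-size patches,
    given as flattened lists of gray levels.

    Subtracting the means removes brightness offset; dividing by the
    energies removes contrast scaling.  1 is a perfect match, -1 a
    perfect mismatch (one patch the negative of the other).
    """
    ma = sum(a) / len(a)
    mb = sum(b) / len(b)
    da = [x - ma for x in a]
    db = [x - mb for x in b]
    num = sum(x * y for x, y in zip(da, db))
    den = (sum(x * x for x in da) * sum(y * y for y in db)) ** 0.5
    return num / den

patch = [10, 20, 30, 25, 15, 5]
brighter = [2 * v + 7 for v in patch]  # contrast doubled, offset added
print(pearson(patch, brighter))        # prints 1.0: offset and gain cancel
```

Doubling the contrast of one patch would double an unnormalized correlation peak, but leaves this score at 1, which is exactly the "how high is high" problem it solves.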
OK, so that means that the normalized correlation method for finding the pose has the property now that, not only can we find the peak, but we can have some measure of success. If the correlation is 0.6, that's not great. If the correlation is 0.95, well, that's probably a very good match. So it probably is an acceptable match. Again, as I mentioned, it's expensive. And one of Cognex's early claims to fame was that they were able to speed it up tremendously by clever programming and making use of the multi-byte instructions that were available on the Intel platform. So what's wrong with this? Well, there are a number of things. One of them is that if part of one of the objects is obscured-- so you've got your golden template where everything is perfect, and then you have the real-world thing where maybe there's a piece of paper lying over the corner of the object, or a second object is partially lying on top of it. Well, that's going to mess this up, because in that area, E1 and E2 won't be matching. Or if you have missing parts-- you've got a gear with various teeth, and some of the teeth are not there-- then that's going to affect this match. Nevertheless, it was used quite a bit for a while. And now, we're going to the patent, which says, we can do better. Let's see if I have a bit more luck here than I did earlier today. OK, so patent number 7,016,539-- so we've come up a bit. The last one was in the six millions. "Method for fast, robust, multi-dimensional pattern recognition." So it's slightly confusing terminology. Multi-dimensional? What is it talking about? It's primarily aimed at 2D images and flat surfaces. But multi-dimensional means they want to deal with not just translation but rotation, scaling, skew, and whatever. And they emphasize that because the previous method, the best method, was normalized correlation. And it was so computationally expensive that you couldn't really do anything other than translation. And so this one is, quote, multi-dimensional.
And then the usual stuff-- the list of inventors. The assignee is, again, Cognex, and then the field, which we said wasn't particularly important, and then the references. So the first bunch of references are their own citations. And I guess, up at the top on the right, some starred ones, which the patent examiner threw in the pot along with the field from where they came. And then there are several references to publications in the general literature, which isn't that common in patents. OK, so abstract-- so they say it right up front. Disclosed is a method for determining the absence or presence of one or more instances of a predetermined pattern in an image, and for determining the location of each found instance within a multidimensional space. A model representing the pattern to be found-- the model including a plurality of probes. So I'll talk about what probes are. Each probe represents a relative position at which a test is performed. So this is very different in nature from all of the other stuff we talked about before in that it's creating this abstract representation and using that in the match. The method further includes a comparison of the model with a runtime image at each of a plurality of poses. So it tries different positions, rotations, et cetera. And a match score is computed. And then you can define a match surface. And you're looking for the peak of that surface. But now, remember, it's in the multidimensional space. So you could be in five-dimensional space looking for a peak. And the match score is also compared with an accept threshold. So we do finally have thresholds, but the attempt is to delay any decision making, have thresholds right at the end when you have a lot of information. And it's used to provide a location of any instances of the pattern in the image. So that may include the case where there is more than one instance of the pattern in the image or zero instances of the pattern in the image.
OK, and then, of course, the examiner pulled out one of the figures which he thought was most relevant. And actually, let's just look at it. So the idea is we have a training image, the golden standard. And this could be the top of Phillips head screw or something. And then there's, quote, training, which produces a model. And then there is a runtime image, when you're actually running the system. The robot's moving stuff around. And you combine the model with a runtime image using a list of generalized degrees of freedom-- so generalized in the sense they don't need to be just the obvious things like translation and rotation. And so at runtime, you then do this comparison, then you produce a list of results-- a list of potential matches along with scores of those matches. And there are more references that went over from-- the idea is to get all of that other stuff on the front page, so if there's too much of anything, like references, they end up on the second page. And you can see these last few were put in by the examiner. And I guess, Bill Silver, coming from an academic world, references-- you may recognize some of these names like Eric Grimson. Anyone know about Eric Grimson? Big shot here at MIT. So at that time, he was doing vision. See what can happen to you? You can be working on vision. Next thing you know, you're an important person. So sorry-- shouldn't make fun. Training image-- so there's Figures 1. We just saw that. And here is Figure 2. And now, this should look very familiar to you because most of this is what we talked about in the previous patent, right? So starting here, we estimate the gradient, the x and y components. We do the Cartesian to polar conversion, which could be done using CORDIC or some other method. And it produces the gradient magnitude and the gradient direction, again, at every pixel, so we can think of it as an image. We do a peak detection in the direction of the gradient-- quantized direction of the gradient. 
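The first boxes of that flowchart-- estimating the x and y gradient components and doing the Cartesian-to-polar conversion-- might look roughly like this in ordinary floating point (a sketch only; the patent suggests CORDIC or similar for the conversion, and the function name here is invented):

```python
import numpy as np

def gradient_magnitude_direction(img):
    """Estimate x and y gradient components with central
    differences, then convert Cartesian (gx, gy) to polar form,
    giving gradient magnitude and direction at every pixel."""
    img = img.astype(float)
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, 1:-1] = (img[:, 2:] - img[:, :-2]) / 2.0  # d/dx
    gy[1:-1, :] = (img[2:, :] - img[:-2, :]) / 2.0  # d/dy
    mag = np.hypot(gx, gy)                           # gradient magnitude
    direction = np.degrees(np.arctan2(gy, gx)) % 360.0
    return mag, direction
```

The peak detection in the quantized gradient direction would then run on `mag`, keeping only pixels that are local maxima along `direction`.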
And from that, we get column, row, magnitude, direction. And then we do the subpixel interpolation and all those clever things like the plane projection and the bias removal. And what we end up with in the end is a list of boundary points. So that's the output of that previous one. There are a couple of extra things here that weren't in the top row. So in the previous discussion, they did mention multi-scale, but they didn't talk about it much in the patent. But here, it's more important. And so we are working at subsamples of the image, possibly several different layers, different resolutions. And of course, before we subsample, by Nyquist, we have to try and remove high frequencies, or we get aliasing. So we have a low-pass filter or an approximation to a low-pass filter. So we take the source image, we low-pass filter, we subsample. And we may do this several times at different resolutions. And then we do all of this edge detection stuff. And we end up with a list of potential boundary points. OK, now, talking of multiple scales, what would be nice is to use the coarsest scale that will work because the computational load depends, of course, heavily on the scale. If you're working at the full image resolution, there's a lot of computing going on. If you work at half resolution, it's a quarter the amount of work and so on. So one of the aspects of this patent is, how do I decide what is the coarsest level that I can use? And they go through a whole story about that. And for some reason, it ends up being the first step out there. And that's kind of there because you can't do any of the other stuff unless you've picked a particular resolution. But actually, the way it really works is you run it at several different resolutions. Then you look at the results and decide, which is the coarsest resolution that gives me reliable results? OK, then process the training image to obtain the boundary points. That's that previous patent.
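The low-pass-filter-and-subsample loop just described can be sketched as follows (illustrative only; the patent uses a fast approximation to a low-pass filter, and this helper name is invented):

```python
import numpy as np

def build_pyramid(img, levels):
    """Repeatedly blur (an approximate low-pass filter, to respect
    Nyquist before subsampling) and subsample by 2, producing a
    list of images at progressively coarser resolutions; each
    level is a quarter the work of the one below."""
    pyramid = [img.astype(float)]
    for _ in range(levels - 1):
        a = pyramid[-1]
        # cheap 3x3 box blur as the low-pass approximation
        padded = np.pad(a, 1, mode='edge')
        blurred = sum(padded[i:i + a.shape[0], j:j + a.shape[1]]
                      for i in range(3) for j in range(3)) / 9.0
        pyramid.append(blurred[::2, ::2])  # subsample by 2 in x and y
    return pyramid
```

Running the edge detection on each level and then checking which coarsest level still gives reliable boundary points matches the strategy described above.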
Then we connect neighboring boundary points that have consistent directions. So you start to chain them together. Organize connected boundary points into chains. And we talked a little bit about how you break them where there's a sudden change in direction because that's probably a corner of some sort. So here's a form of thresholding. As we said, it's going to find edges everywhere, including background texture. And some of those, quote, edges have the feature that they have very low contrast. So we could have a threshold and say, no, we don't want those edges. But that's, again, premature decision making. What they do instead is chain them together. And if they're consistent, even if they're weak, it's still an acceptable edge. If, when you chain them together, you can only get very short chains, or their combined weight is very low, then throw them out. They're just due to noise. So the thresholding has been postponed all the way down here. Divide chains into segments of low curvature separated by corners of high curvature. OK, and then, right now, we've got an edge point at every pixel at whatever resolution we're working at. And that may not always make sense. That may be highly redundant. So instead, we're going to replace them with a set of points that have a desired spacing. So you can control, I don't want more than 1,000 edge points in my model. Or you can say, I don't want edge points to be further away than three pixels. But we do not use the edge points from the edge preprocessing directly. Instead, we fit to the edges, and you'll see that in the patent. So that's what this is. Create probes evenly spaced along segments and store in model. So that's creating the model. And the model also has a granularity, and it has a contrast. And the contrast doesn't so much matter in terms of magnitude-- more a matter of sign. So in integrated circuit processing, often, you'll see a contrast between different materials.
But depending on the lighting and depending on the process, the contrast may actually be flipped. So what, in the master image, looked like it was darker on one side and brighter on the other side may end up being reversed in the runtime image. And so you want to be able to cope with that. I mean, there are other situations where that doesn't make any sense, where if the contrast is flipped, there's something horribly wrong, and you should just stop. But in the case of integrated circuit, the way light reflects off semiconductors is such that you may want to deal with the flipped contrast. And so part of the training is to determine what contrast the master copy has. OK, so this-- that's what we've done so far. We find these edge fragments. And so here's a question for you. Where are the pixels? So are these squares the pixels? Or the center of the pixels at the intersections of the lines? Well, I'll let you ponder that. OK, and this is explaining various parts of combining the edge-- connecting the edge fragments. So how do you connect the edge fragments pairwise? Well, you have to look at the neighbors. And that means a number of things. One of them is you have to decide what order to look at the neighbors in. And you may run into trouble, and you may need tie-breaking, very similar to the tie-breaking we talked about before. OK, well, this is about the sequence that you go around to look at the neighbors. And so now, you've got them connected up pairwise. So these are associated in the data structure. And then, you start worrying about where to break them based on curvature. So relatively low curvature up here, so those probably should stay together. And then here, there's a maximum of curvature, and you'd want to probably break up the-- and finally, you do a probe selection. So here's the top of the Phillips screw from a perfect, golden master image. And this is our model. So we don't store the image. We store these probes. 
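The step of creating evenly spaced probes along the chains-- rather than keeping the raw per-pixel edge points-- can be sketched as a simple arc-length interpolation (the real training step fits to the edges, so treat this as an approximation; the function name is invented):

```python
import numpy as np

def resample_chain(points, spacing):
    """Replace a chain of boundary points (roughly one per pixel)
    with probe positions at a desired spacing, interpolating
    along the arc length of the chain."""
    pts = np.asarray(points, dtype=float)
    seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)   # segment lengths
    arc = np.concatenate([[0.0], np.cumsum(seg)])        # cumulative arc length
    targets = np.arange(0.0, arc[-1] + 1e-9, spacing)    # desired positions
    xs = np.interp(targets, arc, pts[:, 0])
    ys = np.interp(targets, arc, pts[:, 1])
    return np.stack([xs, ys], axis=1)
```

This is how a model can end up with, say, at most 1,000 probes, or probes no further than three pixels apart, independent of how many raw edge pixels the detector produced.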
And again, these probes are derived from the edges, but they're not the edge points. They're interpolated based on how many you want. And you can see that they're always pointing so they have a position, and they have a direction. That's the direction of the gradient. And so that's the model plus, as I mentioned, this granularity and contrast. But the main-- this is the key thing to remember that this is the model. And what are we going to do with that model? Well, now, we want to know whether there's a place in the image where the image matches this model. And so we take the model, and we map it onto the image. And we don't look everywhere. We only look where the probes are. So the probes are basically like things where you go to collect evidence. So like in this case, instead of having to deal with thousands of pixels, you're dealing with, I don't know, 100 probes. And you map them onto the image. You look at the gradient there. And you say, well, if the gradient is like that, and the model says it should be that, that's no good. That's an error. So you compare the gradients with the gradients actually observed in the runtime image. And you may take into account the direction of the gradient and the magnitude of the gradient. Now, it turns out that, in many cases, the magnitude of gradient is not very reliable. It depends on all kinds of factors-- materials, lighting, accidents of alignment, whereas the direction of the gradient is much more likely to be maintained, even if there's change in illumination, change of material, and so on. So when you do the comparison, you probably want to focus on the direction. So that's an example. But you can do more. And here's a different example. So each of those probes has a position, a direction, it also has a weight. And that's basically a way of saying, how important is this one? 
You might, for example, have some weak edges in the master image and say, well, it's probably not as important that these match very well, so you can assign a weight based on whatever. But that's one reason. Another way to assign a weight is to manually decide, these are important edges and these aren't. So in this case, the master object is the prototypical design for a cell phone shape, as Apple says, rectangle with rounded corners, we patented that. And so that's that shape. And so we have all of these gradient directions along there, which is very familiar from the previous one. But then we also have these guys. So what's that? Well, these were introduced manually, and they have negative weights so that if you are matching against this one, that means there's something wrong because the object is supposed to stop over here. And so this is to take care of the possibility that an object has certain symmetries. Like in this case, we can slide it in this direction, and all of these still match all along the bottom edge. But then we wander outside the acceptable area over here, and we get penalized because these probes have a negative weight. And the same with the movement in the vertical direction, which would maintain matches for all of these things on the side. But when you get out here, it'll say, OK, there's a negative contribution to the total. Now, it's a nice idea, but it's not really used. And the reason is that the others, you can generate automatically. These require some human intervention. I don't think anyone's really used them in some automated way. Now, the others, you can just run the edge process and the flowchart I just showed you, and you're done. OK, so here, they're describing their notation. And remember that this was in the heyday of, oh, we're doing object-oriented programming. Everything has to be an object. So all of these things now are objects. So what are these objects? So the model is what you get from the master image.
And what does the model consist of? It contains probes. Those are the things that have position, direction, and weight-- probes created by the training step. And then it contains the granularity, which was that one number that tells you the coarsest scale it'll work at-- and contrast, which we discussed. So that's the whole-- that's what a model is. OK, probe object-- OK, so what's a probe? Did I miss something? No. OK, so what's a probe? Well, it has a position, a direction, and a weight. And you can see what these are-- a two-vector, a binary angle, and the weight is a real number, which may be positive or negative. What's a compiled probe object? Well, while this method is obviously much more computationally efficient than, say, normalized correlation, because we're only looking at probe positions, of which there might be dozens or hundreds instead of millions of pixels, there still is a concern with speed. And so translation is treated differently because translation is so easy to implement. You know, it's just an offset in the i and j coordinates of a matrix access. And so all the other transformations-- rotations, scaling, asymmetric aspect ratio changes, whatever-- are done at the abstract level. But right at the end, the translational part is dealt with in a very simple nested i, j loop that just shifts things in pixel increments in x and y. So that means that your model can be brought into that world by taking into account all the other transformations. So the compiled probe object is the set of probes which are now specialized to image coordinates. And you can just superimpose that on the image and move it around to compute the match. But you're only dealing with translation at that level. So it's kind of a detail, but it's an important detail. And so the compiled object is very similar to-- the compiled probe is very similar to the probe. It has a direction and a weight.
The big difference is that the offset is an integer in pixels, whereas everything up to that point was a real variable. It wasn't quantized. Well, let's see. Compiled probe-- so this is a function that compiles the probes. I'm not going to go through all of this, you'll be happy to know. So the matrix C is the non-translation portion of the map. So as I explained, that includes all of the transformations, except translation. Then what is the map? Well, the map is this transformation that has multiple degrees of freedom that we're trying to find the maximum of. Vector of-- let's go past that. OK, so far, we've pretty much dealt with how to build the model, how to take the nice image of the object that's free of clutter and background, smooth, and so on, and create the model. Now, we start to talk about how to use the model. So as I mentioned, we map the model onto the image. It's very important to remember this because we're going to talk later about a different patent where it's just the other way around, where we map the runtime image on top of the model. And it turns out, why do we do it this way? Well, you could certainly transform the image rather than the model, but the model has a few dozen points in it. It's very cheap to transform, whereas the image has lots of pixels. So it's going to be expensive. You can use Photoshop to rotate, translate your image. Certainly, all of that's possible. But we're trying to avoid that expense. And so in this case, it's important to remember the map is from the model, and it plunks the model down on top of the runtime image. OK, now, what do we do? Well, at every probe position, we ask the runtime image, what's the gradient here? Well, we actually pre-compute the gradient, and we sample it. And then, we compare. And as I mentioned, we're more concerned about direction of gradient than magnitude. And so how do we score this? Well, if they're the same direction, that's great. If they're 90 degrees apart, that's horrible.
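Before getting to the scoring, the compilation step from the last couple of paragraphs-- applying the non-translation matrix C to the probes and rounding positions to integer pixel offsets-- might look roughly like this (a sketch with an invented tuple layout, not the patent's object format; mapping the gradient direction through C is exact for rotations, while general transforms would strictly need the inverse transpose):

```python
import math

def compile_probes(probes, C):
    """Specialize probes to image coordinates: apply the 2x2
    non-translation matrix C (rotation, scale, skew) so that the
    innermost search loop only has to shift in x and y.
    Each probe is an (x, y, direction_deg, weight) tuple."""
    compiled = []
    for x, y, d, w in probes:
        cx = C[0][0] * x + C[0][1] * y
        cy = C[1][0] * x + C[1][1] * y
        # map the gradient direction through the same transform
        r = math.radians(d)
        dx, dy = math.cos(r), math.sin(r)
        nd = math.degrees(math.atan2(C[1][0] * dx + C[1][1] * dy,
                                     C[0][0] * dx + C[0][1] * dy)) % 360.0
        compiled.append((round(cx), round(cy), nd, w))  # integer pixel offsets
    return compiled
```

The point of this split is efficiency: the abstract transformations are applied once to a few dozen probes, and only the cheap integer translation is varied in the inner loop.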
And so here's a grading function which is in degrees between the two directions of the gradients. And you can see that there's a bit of slop. So if you're off by up to 11.25 degrees, that's as good as 0. And that's because all of these calculations have some limitations, some noise, and so on. So you want to allow a little bit of slop in the direction of the gradient. But then you don't want to fall off a cliff. You don't want to say, OK, if it's 11.3 degrees, then it's no match. So instead, you linearly decrease it until you get to 22.5, at which point, you say, OK, well, this isn't the match anymore. That's too much of a difference, half of 45 degrees. And it's arbitrary, how you pick these things. But the picking of them has an effect later on the quality. OK, so this is a probe direction difference rating function which considers the polarity, which considers the contrast, the polarity of the contrast. So of course, it wraps around because when you get close to 360 again, you will allow a small-- I mean, this is just minus 11.25 degrees, and that's minus 22.5 degrees. So that's the function that's used in rating how well a probe matches the corresponding point in the runtime image. Now, if you say, in my situation, there may be a reversal of contrast, then I shouldn't use this because then 180 degrees difference in direction should be accepted as well. And so that's what the next figure is. So here, we are allowing a small slop around 0 and a little bit about 180 degrees, and the other angles are not accepted. And that's important for some integrated circuit situations, as I mentioned. There's a drawback, of course, which is, inevitably, we'll be looking at completely unrelated parts of the image. And there's some small chance that they happen to have the right gradient. So how often is that going to happen? Well, up here, it's fairly rare because we can roughly say this, let's see, maybe a 20-degree band. So that means 20 degrees out of 360, 1 in 18. 
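That rating function-- full credit up to 11.25 degrees, linear falloff to zero at 22.5 degrees, wrap-around near 360, and the polarity-ignoring variant that also accepts a 180-degree flip-- can be sketched as:

```python
def direction_rating(delta_deg, ignore_polarity=False):
    """Piecewise-linear rating of the angle between model and
    runtime gradient directions. With ignore_polarity, a
    180-degree flip scores the same (the reversed-contrast case
    that matters for some integrated-circuit images)."""
    d = abs(delta_deg) % 360.0
    d = min(d, 360.0 - d)            # wrap-around: 350 deg off == 10 deg off
    if ignore_polarity:
        d = min(d, abs(180.0 - d))   # accept contrast reversal
    if d <= 11.25:
        return 1.0                   # slop: as good as a perfect match
    if d >= 22.5:
        return 0.0                   # too far off to count as a match
    return (22.5 - d) / 11.25        # linear falloff, no cliff
```

The trade-off discussed above shows up directly here: widening the accepting band (or ignoring polarity) admits more random background gradients as false evidence.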
So the chance of random matches actually pretending to be good matches is 1 in 18. Well, here, it's twice as much. So this is not as robust against noise as that one. So if you know that the contrast is not going to be reversed, don't use this. Use that. And so similarly, you might say, well, I want a wider margin. Let's go out to 45 degrees. Well, that's all very well, except that means that you're going to be accepting more of these random alignments. Then I mentioned that we can also look at the gradient magnitude. And in some cases, you might want to just ignore it because it's not as reliable. Or you can use it directly as a weighting factor. The bigger the gradient magnitude, the more likely this is important. Or you can say, well, that's true up to a point. So you set a target level, which is, perhaps, the gradient magnitude, the contrast, on the master image, the training image. And then it saturates. So it goes up. The bigger the gradient magnitude, the better. But it has a limit. It doesn't just keep on growing. So there's a function for scoring how well the directions of the gradients match. And this is for scoring how well the magnitudes match up. And remember, these are applied at every probe position. So you map the model onto the runtime image. Then at every probe position, you check what the gradient is and compare it with the gradient in the model. So that's the key part of the method. Then, we get to the degrees of freedom. So obviously, translation-- movement in x, movement in y-- are degrees of freedom. But we also want to deal with rotation, and scaling, and-- OK, let me go onto the next page. So this is a description of what a generalized degree of freedom is as far as they're concerned. And then here's some examples. Well, actually, this is probably all of the ones you might ever want. So we have rotation, and it has, as parameters, an angle in degrees. And does it wrap around?
It's important that the search method be informed about whether, when you go off the max, you come back at the min. Well, yeah, for rotation, that's certainly true. And the cycle is 360 degrees. So it wraps around at 360 degrees. And how does it map the coordinates? Well, the standard formula, except maybe the minus sign is in a place that we wouldn't want it to be. And then you need to figure out the steps. You're going to explore each of these degrees of freedom in steps. Now, in translation, you might just do pixel steps. But what should you use for rotation? So it's not obvious. And then how does it change the scale of the image? Well, in this case, it preserves the scale. When you rotate something, it stays the same size. Then, you might have shear. Now, that means that you're turning a right angle into a non-right angle. And here's the matrix transformation for it. And this used to be much more of an issue than it is now because, today, images typically don't have much of a shear in them. So in the old days, images were made by devices that had electromagnetic deflection. And so the x-axis was only as perpendicular to the y-axis as somebody made it when they assembled it. So it was quite common to have some kind of angle other than 90 degrees-- not hugely different. But one or two degrees, it could be off. And so you might want to do a search for that. Or suppose that you are working in a world where things aren't exactly two-dimensional. Something might be-- instead of lying on the conveyor belt, it's tilted slightly. Well, then, if you look at the image of that, it will not image as a rectangle. It will image as a rhombus. And so you might want to, in that situation, allow for this transformation. Of course, if that doesn't happen to you, leave it out because it's going to increase the cost of computation. Then we get to size-- now, scaling. Now, you might say, well, we'll just increase linearly the size, the width, and height of everything.
But that's not a good idea because what's a linear increase for a small object is relatively large compared to the same linear increase applied to a large object. So it's more reasonable to work in a logarithmic scale. So that means the increment in the size is not 0.01, but it's 1% or, let's say, 10%. And then when you increase again, it'll be not 20%, but 21% and so on. You get the idea. So that's where this comes from. So for the logarithmic size factor, it does not cycle around. And this is the mapping for it. We have the exponential part in there. And this is how the scale changes in that case. And so here's one that only scales x, and here's one that only scales y. And of course, these are redundant because in the case that x and y scale the same, we just use that rather than these two, and so on. And then there's an aspect ratio one. Again, this used to be more common than now because the scan in the y direction was very different from the scan in the x direction. Now, people make incredibly high-resolution, high-accuracy image sensors, where you know that the axes are as close to 90 degrees apart as they can be. And then the pixel size is either perfectly square, or at least you know what it is. You've got to be careful about that because, often, it will be something like 504 out of a 497. And when you look at the chip, it looks like it's one to one, but read the specs. OK, so you may need to take care of aspect ratio and so on. Now, the bottom ones, as I said, this is somewhat redundant. Like, these down here are linear scale factors, and I already said, those are probably not as interesting. OK, so those are, quote, generalized degrees of freedom. I mean, in some sense, all you really need for 2D work is translation, rotation, and maybe scaling. But they allow for more than that. OK, some weird terminology. Probe MER is the probe Minimum Enclosing Rectangle.
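Returning to the logarithmic size factor for a moment: stepping the scale multiplicatively, so that a 10% step gives 1.0, 1.1, 1.21, and so on, might be sketched as (function name invented):

```python
def log_scale_steps(s_min, s_max, step_pct):
    """Scale values explored in equal multiplicative (logarithmic)
    steps: each value is step_pct percent bigger than the last,
    so the relative change is the same for small and large scales."""
    ratio = 1.0 + step_pct / 100.0
    scales = []
    s = s_min
    while s <= s_max * (1 + 1e-12):  # tolerance for float round-off
        scales.append(s)
        s *= ratio
    return scales
```

This is why the second step lands at 21% rather than 20%: each increment is relative to the current scale, not to the starting one.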
So for some operations, you can cut down the computation by checking only the minimum enclosing rectangle. And so they carry that around as well. And so think of this multidimensional space, now, of pose. So pose is translation, rotation, et cetera. And it can be multidimensional. And one of the things that can happen is that, when we do this search for the peak, we end up in this space with values that are very close together because we got there in different ways using translation, and rotation, and scaling. And we don't want to report both of those values. And so we need to have a way of telling when two poses are very close together in that space. And that's what this is about. We won't go into that in detail, but it's to remove any overlap. OK, overlap examples, flowcharts-- flowcharts used to be quite the thing. And I guess, let's get to the text-- lots of figures. Oh, OK. Oh, then the bottom level, the translational search should be done in an efficient way and also should allow for different resolution. And so here is our sequence of search patterns that can be used efficiently at the lowest level. And you'll notice that they're basically organized around the hexagon. And that's, as I mentioned, because you get a 4 over pi advantage in terms of work done versus resolution. And so here's the lowest level that's just like a red/black checkerboard, and then there's the next highest scale, and so on. So they get a slight-- and this isn't huge. Some of the other things they do get you orders of magnitude improvement. This gets you a relatively small improvement. But working on these patterns is helpful and-- sorry. And remember, we talked about peak detection and how you had to deal with ties. You had to have a tie breaker. Well, it gets a little bit harder on the hexagonal grid. But here's one solution.
So you check that the center point has a larger value than these three, and it has a larger than or equal to value than those three, and that ensures that you're not going to detect adjacent things that are-- OK, oh, OK, now, let's see if I can find this place. So one of the nice things they do is they explain the terminology. Oh, here we go. So there's a common expression in patent law, which is that the inventor can be his own lexicographer, which just means that if you define a term in some way in the patent, then that's the way it reads for the patent, whether or not anyone else agrees that that's the right usage for that word. Sort of like I was listening to a program on Canadian Broadcasting. And he kept on talking about the neoliberal agenda and stuff like that. And I'm like, wait a minute. Everything he's talking about is right wing. How come he's calling it neoliberal? Well, he wrote the book. He called the book The Neoliberal, blah, blah, blah. So he gets to define the term. And it's my fault for using my interpretation of it. So here, they very explicitly do that. They define all of the important parts. So Object-- what is an object? It's any physical or simulated object or portion thereof having characteristics that can be measured by an image forming device or simulated by a data processing device. Now, this is classic generalization, right? We're just talking about images taken with a camera. And the patent attorney says, wait a minute. This has much wider applications. So they're talking about not just images that were made from real objects, but graphics. So this could be used on data simulated by a data processing device. And you can read into this X-ray images. It doesn't have to be visible light images. Then what's an image?
A two-dimensional function whose values correspond to physical characteristics of an object-- and then I guess we have to go up there-- such as brightness, radiant energy, reflected or otherwise, color, temperature, height. You can see how much fun they had trying to generalize this as much as possible. Brightness, the physical or simulated quantity represented by the values of an image regardless of source. So that's to make sure that just because you said brightness, it doesn't exclude anything else, like temperature, X-ray intensity, I don't know. OK, granularity-- so that's where they talk about the size in units of distance below which spatial variations in image brightness are increasingly attenuated. So when you blow an image up, there comes a point where it starts to look blurry. And you can relate their definition of granularity to that point. It can be thought of as being related to resolution. Boundary, what's a boundary? An imaginary contour, open-ended or closed, straight or curved, smooth or sharp-- take your pick. I love this thing where it says something like, the words "and" and "or" mean both "and" and "or". It's like, what? That way, they generalize everything. And it gets worse. I mean, to us, thinking about logic, "and" and "or" are very different concepts. But you can now go through the whole document, replace all occurrences of "and" with "or", try all possible combinations, and it's still covered by that patent. Gradient-- well, we know what a gradient is. Vector at a given point in an image giving the direction and magnitude of greatest change in brightness at a specified granularity. Pattern-- a specific geometric arrangement of contours lying in a bounded subset of the plane, said contours representing the boundaries. So could you have come up with that definition? I mean, this takes art. You get a degree in law before you can do this.
Model-- a set of data encoding characteristics of a pattern to be found for use by a pattern finding method. Training-- an act of creating a model from an image of an example object or from a geometric description of it. I haven't discussed that. I've assumed so far, we're making the model from a real image. But of course, if you have the CAD for the object, you might very well say, well, that's even more of a golden standard. That's really the truth. And so you could produce the model from a mechanical drawing or an equivalent computer description. The downside is, if there's some process in manufacturing that does not reproduce the CAD exactly, you won't capture that because you're working-- and this is very common, for example, in printed circuits or integrated circuits. You have a perfectly rectangular layout with sharp corners. Well, when you go through and expose the material and put it in the chemical bath, that corner is not going to be a sharp corner. It's going to be rounded. And so those are situations where working with the real object is better than working with a CAD. OK, pose-- a mapping from pattern to image coordinates-- we discussed that-- representing a specific transformation and superposition of a pattern onto an image. So as I mentioned, this is unusual in that they've gone to the trouble of breaking out and listing the terms that they will use, which is very good because, then, if there's any argument about what the specification means or, indeed, what the claims mean, then that's defined right there. So let's jump ahead. So the rest of this, the usual stuff. The background of why this is the best thing since sliced cheese or sliced bread and then, what is the prior art? So they talk about all of the things we talked about-- binary templates, blob analysis, correlation, and Hough transforms, which we haven't talked about.
And then they get to the summary of their invention, which pretty quickly goes into the discussion of the figures and details of the figures. And then there are some equations. See here, the explanation, it's fairly long. Here are the figures. I'm trying to get to the claims. Wow, there's even an integral sign. That's neat. Yeah, wow. Oh, that's the noise calculation integral from 0 to 360. And it gets kind of interesting. There's some real math there. OK, come on. Oh, here we go, Claims. I preferred-- OK, must be down at the bottom here. OK, what is claimed is a method for determining the presence or absence of at least one instance of a predetermined pattern in a runtime image and for determining the multidimensional location-- location, in parentheses, pose-- of each present instance, the method comprising. And so then they go on and describe the details. And we'll talk some about those details. They also didn't say this up front, but they incorporate the possibility of inspection and the possibility of recognition. So the recognition is what's going to happen if you have a very good match. And so if you have the possibility of seeing more than one object, well, you run this process for each of them. And presumably, most of them won't be a good match. And one of them will. And so that's your recognition. The other thing-- inspection-- this method works even if parts of the object are obscured or parts of the object are missing. But of course, the match is going to decrease in quality as you remove things. So if your gear has lost one tooth, you'll still get a pretty good match. If it's losing two teeth, not so good, and so on. So that obviously tells you you can use it for inspection. So after you found the object in pose, and you've matched it, you can then qualify just how good a match it is. And that might be a way of doing rejection of a part that's not satisfactory. So this one is mostly method claims. 
I'm trying to remember if this-- the method-- and then a lot of dependent claims. The method of claim 1 plus this. And I can't remember if this one still has apparatus claims. The method of claim one-- these are all method claims. Yeah, all method claims. I think what happened is that the legal situation changed slightly, and they didn't have to do that game anymore of having both apparatus and methods. OK, where are we? OK. So I hope the idea of probes is quite clear. And the model, the model is composed of these probes. And the huge advantage of using the probes compared to using, say, correlation-- because we're now only looking at a small number of points in the gradient image. Now theoretically, we could just compute the gradient when we need to match it. But since we're going to do this matching many, many times with different orientations and positions for the model, there's a definite advantage to pre-computing the gradient. So we use that other patent to finish that job, and then we jump in and superimpose-- we map the model on top of the runtime image, and there's just a few places. And we're collecting evidence. So at every one of those probe places, we're asking the question: does this support a hypothesis that there's an object with this pose? And then we add up all of the evidence. And that gives us a score. And then from that, we can build a score surface in this multidimensional space. Now, this is much easier to visualize if we only deal with translation. Then we have a function that varies with x and y, and we're looking for the peak. And that's our translation. We're done. But of course, we're also interested in dealing with rotation, scaling, et cetera. So it's a little more difficult, but same idea. Let me just talk a little bit about this noise issue. So it's called noise, which is kind of a misnomer in the patent.
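The evidence-gathering step is simple enough to sketch in code. Below is a minimal, hypothetical version for the translation-only case: the probe layout, the weight function, and all names here are illustrative, not taken from the patent, and the real system also searches rotation and scale.

```python
import numpy as np

def probe_score(grad_dir, probes, dx, dy, weight):
    """Evidence that the model sits at translation (dx, dy).

    grad_dir : 2-D array of gradient directions (radians) precomputed
               from the runtime image (the job of the other patent).
    probes   : list of (x, y, direction) probes taken from the model.
    weight   : scoring function of the absolute angle difference.
    """
    total = 0.0
    for px, py, pdir in probes:
        x, y = px + dx, py + dy
        if 0 <= y < grad_dir.shape[0] and 0 <= x < grad_dir.shape[1]:
            diff = grad_dir[y, x] - pdir
            diff = (diff + np.pi) % (2 * np.pi) - np.pi  # wrap to [-pi, pi)
            total += weight(abs(diff))
    return total / len(probes)

def score_surface(grad_dir, probes, weight, max_dx, max_dy):
    """Score every candidate translation; the peak is the match."""
    return np.array([[probe_score(grad_dir, probes, dx, dy, weight)
                      for dx in range(max_dx)] for dy in range(max_dy)])
```

Finding the peak of the returned surface gives the translation; extending the loops over rotation and scale turns this into the full quantized pose search.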
Remember, we had this scoring function for where this was the difference in degrees between the model probe direction and the runtime gradient direction. And I said that there's some probability that if you're just superimposing this in some random place in the runtime image that you're going to get a match. And so how bad is that? Well, obviously, what you want to do is just integrate-- what shall we call this thing? I don't know, scoring function. So if this was 22.5, and this was 11.25, we can calculate this integral. I forget what it comes out to. And then also, obviously, if we use the method that is insensitive to the sign of contrast, then we double the amount that we're going to get from random matches. Now, there are two aspects of that. One of them is we can calculate how much is this going to offset the result, and we can subtract, so we can take out this error. But the other one is, of course, that it's going to contribute to the noise in the result, that it's not as good. So here's an idea. So this is kind of a nuisance to compute this, to take the two directions, take the difference in angle, and then look it up in this functional table. How about if we just take the dot products of these two vectors? If you have two unit vectors, we can just take the dot product. And that's going to be the cosine of the angle between them, right? So that would be very cheap to compute because then, in the probe, I just store the V1. And in the runtime image, I've got V2 computed by that previous patent. And then, I just take the dot product, which is going to be cheaper than using that method because, in this method, I have to use atan to get the angle. And I have to use it twice, once in the training. So who cares? You only do that once. But once in the runtime image. So what's wrong with that? Well, that means that my weighting function looks like this because if they're lined up, I get 1. If they're opposite each other I get minus 1. And so that's a potential alternative.
This isn't in the patent. This is an alternate, not as good, scoring for gradient direction function. Now, this is the one that takes into account polarity, but I can also think of the one that is insensitive to polarity, which would be the absolute value of that. And so I think you can see what the problem, which is this is going to be producing a fairly large result just from random matches, right? So we'd have to take the integral of cosine, the absolute value of cosine from 0 to 360 degrees and, I don't know, I forget what it is-- pi over 4 or something. But it's a large number, whereas the number here is very small. Let's estimate this. Oh, we did. We said 20 over-- it's like, 1 in 18. I don't think that's right, but it's approximately that. This is going to be much larger, like 1 and 1/2 or something. So yes, that is much easier to compute, but it's not nearly as good, so they didn't use that. So what remains to say about this? So this was very successful. And we do still need to talk about the scoring functions. How are these measurements combined? But then, just stepping up a level, what's the disadvantage of this method? It's quantized. We are quantizing pose space. So we're only looking at, let's say, rotations that are multiples of 5 degrees. I mean, we get to decide how much to quantize it. But it is quantized. How do we make that decision? Well, we could make it very fine, but then the computation is very slow. So there's a trade off there. And so how can we improve on that? Well, one way is to use the method with fairly coarse quantization. But then, we have to search the whole pose space. Initially, we don't know anything. It could be rotated five degrees, 143 degrees, whatever. So we're forced to search the whole pose space. But once you have a number of potential matches-- and, typically, you want to retain more than one-- then you can search with finer quantization near there. 
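That trade-off is easy to check numerically. The sketch below is my own check, not from the patent: it assumes the weighting function gives full credit inside 11.25 degrees, ramps linearly to zero at 22.5 degrees, and is zero beyond, and it compares the expected score from a uniformly random direction difference against the absolute-cosine alternative.

```python
import numpy as np

def trapezoid_weight(deg):
    """Assumed shape: 1 below 11.25 deg, linear ramp to 0 at 22.5 deg."""
    return np.clip((22.5 - np.abs(deg)) / 11.25, 0.0, 1.0)

# Direction difference between two unrelated gradients: uniform on [0, 180].
theta = np.linspace(0.0, 180.0, 1_000_001)

e_trap = trapezoid_weight(theta).mean()           # expected random score
e_cos = np.abs(np.cos(np.radians(theta))).mean()  # dot-product alternative

print(e_trap, e_cos)
```

This comes out to about 0.094 (which is 3/32, the term mentioned a moment later) for the angle-difference weighting, versus about 0.64 (2/pi) for the absolute cosine: a random superposition contributes roughly seven times as much spurious evidence with the cheaper dot-product score, which is why it wasn't used.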
And so that's another important aspect of this, and we'll need to talk about that. So the idea is that it's kind of like multi-scale, that we-- at a coarse scale, we run through the whole pose space, and then we look just around the areas that seem to produce a good result to get a more accurate result. So what I was-- again, the patent is online. You should probably look at it. And what I still have left to do is talk about the different scoring functions because, in some cases, we would like a score that we can compare to an absolute threshold and say, OK, if it's bigger than 0.95, then we definitely have a match, that kind of thing we talked about with normalized correlation. In other cases, we just want to get a result as fast as possible. And we find the peak, and we don't really care how big the peak is. So those are two different computations. And depending on what we're doing, one may be more efficient than the other. Then, we might want to do things like remove this term, which I think this is actually 3 over 32. But in any case, it's not too hard to carry out. So let's call that N. Do we actually want to remove that? Well, that's going to mean that there is more computation, but it will be more accurate. So depending on where we are in the process, we're getting close to the answer. We want to refine it to make it as accurate as possible. Then, we'll throw that in and just bite the bullet on it taking more computation. OK, a couple of points that I highlighted here-- one of them is that if we're working multi-scale, then we'll want to use different probes at different scales. And this is kind of an even more sophisticated way of proceeding, where, instead of just fixing the quantization or fixing the granularity, we try it at different granularities. And if we do that, we need different models for each of those granularities. Then there's a need for fast low-pass filtering because we're trying to build these multi levels.
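The coarse-then-fine idea can be sketched as follows. For simplicity this hypothetical version searches only rotation in one dimension; the step sizes and the number of retained candidates are illustrative, and the real search covers x, y, rotation, and scale together.

```python
import numpy as np

def coarse_to_fine(score, lo=0.0, hi=360.0, coarse=5.0, fine=0.5, keep=3):
    """Scan rotation coarsely, then refine around the best candidates.

    score : a function mapping an angle in degrees to a match score
            (e.g. a probe-based evidence sum).
    """
    angles = np.arange(lo, hi, coarse)
    scores = np.array([score(a) for a in angles])
    candidates = angles[np.argsort(scores)[-keep:]]   # retain several peaks
    refined = []
    for a in candidates:
        # fine search only in a neighborhood of each coarse candidate
        fine_angles = np.arange(a - coarse, a + coarse, fine)
        fine_scores = [score(f) for f in fine_angles]
        refined.append(fine_angles[int(np.argmax(fine_scores))])
    return max(refined, key=score)
```

With a 5-degree coarse step, only 72 evaluations cover the whole rotation range, and the fine step is paid only in small neighborhoods of the retained candidates.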
And we'll talk about that because another patent, which is much easier than this one, gives us methods for very rapidly performing convolutions and low-pass filtering. Gradient direction is more reliable than brightness-- contrast, we talked about that. Each test provides direct evidence of pattern presence. We talked about how the probes will contribute evidence. Probes are not restricted to the pixel grid. Oh, that's an important one. I mean, first of all, they're derived from edge points that we already interpolated. So those were already not on the pixel grid. But then, in addition, we throw those out, and we put our own probes in separately. Accuracy limit-- oh, accuracy limited by quantization of search space. Why am I emphasizing that? Well, because that's going to take us to another patent that does it a different way, which is not limited in that respect. So why are we talking about this one? Well, because the other one needs a good first guess. And so this is the way to get a very good first guess. And then the other one can refine it. And it's not limited to discrete steps in the pose space. OK, that's it for today.
MIT 6.801 Machine Vision, Fall 2020
Lecture 1: Introduction to Machine Vision
BERTHOLD HORN: So welcome to Machine Vision 6.801, 6.866. And I'm not sure how we got so lucky, but we have the classroom that's the furthest from my office. So I guess I'm going to get a lot of exercise. And I think we're going to have a lot of stragglers coming in. What's there to know? Just about everything's on the website. So I can probably eliminate a lot of the administrivia. Please make sure that you're actually registered for the course on the website and take a look at the assignments. And hopefully you've either had a chance to look at chapters 1 and 2, or you're about to. That's the assignment for this week. And there's a homework problem. And you're probably saying, God, I just arrived here. How can there be a homework problem? Well, I'm sorry. But the term is getting shorter and shorter. And if I work backwards from when the faculty rules say the last assignment can be due, we have to start now. Now the good news in return is there's no final. So, yes, there is a homework problem starting right away, but there's no final. And there's a homework problem only every second week, so it's not a huge burden. And there are some take-home quizzes. So two of the times where you'd normally have a homework are going to be glorified homeworks that count more than the others. And they are called quizzes. So total, I think, there are five homework problems and two quizzes. Collaboration-- collaboration's OK on the homework problems, but please make a note of who you worked with. It's not OK on the take-home quizzes. 6.866-- so those of you in 6.866, the difference is that there's a term project. So you will be implementing some machine vision method, preferably one that we cover in the course. And there'll be a proposal due about a month from now-- I'll let you know as we go along-- telling me what you're planning to do. And preference is going to be given to dynamic problems rather than single image static analysis, image motion, that kind of thing.
And if there's enough interest, we'll have a session on how to do this on an Android phone. And I'm a little reluctant to do that because some of you don't have an Android phone. And I have some loaners. But you know what it's like-- these darn things go out of fashion in two years. And so all of the interesting new stuff having to do with a camera on Android is not in some of the box full of old smartphones I have. But that is an option. So one of the ways of doing your term project is to do an Android studio project. And to help you with that, we have a canned ready-made project that you can modify rather than starting from scratch. OK, what else? Grades-- so for 6.801, it's a split-- half for your homework problems and half for your take-home quizzes. So clearly, the take-home quizzes count more. For 6.866, it's split three ways-- a third for take-home homework problems, a third for quizzes, and a third for the project. And again, collaboration on the projects I actually favor, because there's just a finite length of time in the term. You've got other courses to deal with. Oftentimes, people end up postponing it near the end. So if you're working with someone else, that can often encourage you to start early and also make sure that you're making some progress. Textbook-- there's no textbook, as you saw. If you have Robot Vision, that could be useful. We're not going to cover all of Robot Vision, we cover maybe a third to a half. And quite a lot of the material we cover is referenced through papers, which we will put up on the Stellar website. So in fact, if you look at the website, you'll see there's a lot of material. And don't be scared. I mean, a lot of that is just for your reference. Like, if you're working on your project, then you need to know how to do-- I don't know-- SIFT, then it's there. So you're not expected to read all of that. So it's the Robot Vision book. It should be on the website. 
If it is not on the materials, so when you get to the Stellar website, there's a tab-- there's two tabs. And the second one is-- I forget what, but that's the one where all the good stuff is. And then when you get to that page, one of the windows says, Material. And unfortunately, it only shows you a little bit of it. You have to click on it to see all the materials. So it should be there. And we'll be doing this with some of the other chapters and some of the papers, as I mentioned. OK, also, of course, there are errors in the textbook. And so the errata for the textbook are online. So if you have the book, you could go through and red mark all of the bad spots. So reading, read chapters 1 and 2. Don't worry about all the reference material. You won't be reading all of it. So what are we doing today? Well, mostly I need to tell you enough so you can do the homework problem. That's one function. And the other one is to give you an idea of what the course is about. And these two things kind of conflict. So I'll try and do both. In terms of the course, I am supposed to tell you what the objectives are. So I made up something. Learn how to recover information about environment from the images. And so we're going to take this inverse graphics view where there's a 3D world out there, we get 2D images, and we're trying to interpret what's happening in the world. Vision is an amazing sense because it's non-contact and it provides so much information. But it's in a kind of coded form, because we're not getting all the information that's possible. We don't get 3D, for example. So that's the topic that we're going to discuss. And hopefully, you will then understand image formation and understand how to reverse that to try and get a description of the environment from the images. Outcomes-- well, you'll understand what's now called physics-based machine vision. 
So the approach we're going to take is pretty much-- they're light rays, they bounce off surfaces, they form an image. And that's physics-- rays, lenses, power per unit area, that kind of stuff. And from that, we can write down equations. We can see how much energy gets into this pixel in the camera based on the object out there. How it's illuminated, how it reflects light, and so on. And from the equations, we then try to invert this. So the equations depend on parameters we're interested in, like speed, time until we run into a wall, the type of surface cover, and so on. So that's physics-based machine vision. And it's the preparation for more advanced machine vision courses. So there's some basic material that everyone should know about how images are formed. That's going to be useful for other courses. And if you're going into learning approaches, one of the advantages of taking this course is it'll teach you how to extract useful features. So you can learn with raw data, like just the gray levels at every pixel. And that's not a particularly good approach. It's much better if you can already extract information, like texture, distance, shape, size, and so on. And do the more advanced work on that. And, well, also, one of the things some people enjoy is to see real applications of some interesting but relatively simple math and physics. It's like, sometimes we forget about this when we're so immersed in programming in Java or something. But there's a lot of math we learned, and sometimes resent the learning because, like, why am I learning this. Well, it's neat to find out that it's actually really useful. And so that brings me to the next topic, which is that, yes, there will be math, but nothing sophisticated. It's engineering math-- calculus, that kind of thing, derivatives, vectors, matrices, maybe a little bit of linear algebra, maybe some ordinary differential equation, that kind of stuff, nothing too advanced, no number theory or anything like that. 
And there'll be some geometry and a little bit of linear systems. So you saw the prerequisite was 6.003. And that's because we'll talk a little bit about convolution when we talk about image formation. But we're not going to go very deep into any of that. First of all, of course, it's covered in 6.003 now, since they changed the material to include images. And then we have other things to worry about. So that's what the course is about. I should also tell you what it's not. So it's not image processing. So what's the difference? Well, image processing is where you take an image, you do something to it, and you have a new image, perhaps improved in some way, enhanced edges, reduce the noise, smooth things out, or whatever. And that provides useful tools for some of the things we're doing. But that's not the focus of the course. There are courses that do that. I mean, 6.003 does some of it already. 6.344 or 6.341, they used to be 6.342. So there's a slew of image processing courses that tell you how to program your DSP to do some transformation on an image. And that's not what we're doing. This is not about pattern recognition. So I think of pattern recognition as you give me an image and I'll tell you whether it's a poodle or a cat. We're not going to be doing that. And, of course, there are some courses that touch on that in Course 9, particularly with respect to human vision and how you might implement those capabilities in hardware. And of course, machine learning is into that. And that brings me to machine learning. This is not a machine learning course. And there are 6.036, 6.869, 6.862, 6.867, et cetera, et cetera. So there are plenty of machine learning courses. And we don't have to touch on that here. And also, I want to show how far you can get just understanding the physics of the situation and modeling it without any black box that you feed examples into.
In other words, we're going to be very interested in so-called direct computations, where there's some simple computation that you perform all over the image and it gives you some result, like, OK, my optical mouse is moving to the right by 0.1 centimeter, or something like that. It's also not about computational imaging. And what is that about? So computational imaging is where image formation is not through a physical apparatus, but through computing. So it sounds obvious. Well, we have lenses. Lenses are incredible. Lenses are analog computers that take light rays that come in and reprogram them to go in different directions to form an image. And they've been around a few hundred years. And we don't really appreciate them, because they do it at the speed of light. I mean, if you try to do that in a digital computer, it would be very, very hard. And we perfected them to where I just saw an ad for a camera that had a 125-to-1 zoom ratio. I mean, if the people that started using lenses like Galileo and people in the Netherlands, they'd be just amazed at what we can do with lenses. So we have this physical apparatus that will do this kind of computation, but there are certain cases where we can't use that. So for example, in computed tomography, we're shooting X-rays through a body, we get an image, but it's hard to interpret. I mean, you can sometimes see tissue with very high contrast, like bones will stand out. But if you want the 3D picture of what's inside, you have to take lots of these pictures and combine them computationally. We don't have a physical apparatus like an X-ray lens mirror gadget interferometer whose final result is the image. Here, the final result is computed. Even more so in MRI-- we have a big magnet with a gradient field, we have little magnets that modulate it. We have RF, some signal comes out, it gets processed. And ta-da, we have an image of a cross-section of the body. So that's computational imaging. And we won't be doing that.
There is a course, 6.870, which is not offered this term, but it goes into that. And we're also not going to say much about human vision. Again, Course 9 will do that. Now in the interest of getting far enough to do the homework problem, I was going to not do a slideshow. But I think it's just traditional to do a slideshow. So I will try and get this to work. It's not always successful because my computer has some interface problems. But let's see what we can do. OK, so let's talk about machine vision and some of the examples you'll see in this set of slides. Not all of it will be clear with my brief introduction. But we'll go back to this later on in the term. So what are the sorts of things we might be interested in doing? Well, one is to recover image motion. And you can imagine various applications in, say, autonomous vehicles and what have you. Another thing we might want to do is estimate surface shape. As we said, we don't get 3D from our cameras-- well, not most cameras. And if we do get 3D, then it's usually not very great quality. But we know that humans find it pretty straightforward to see three-dimensional shapes that are depicted in photos, and photos are flat. So where's the 3D come from? So that's something we'll look at. Then there are really simple questions, like-- I forgot my optical mouse. How do optical mice work? Well, it's a motion vision problem. It's a very simple motion vision problem, but it's a good place to start talking about motion vision. So as I mentioned, we will take a physics-based approach to the problem. And we'll do things like recover observer motion from time varying images. Again, we can think of autonomous cars. We can recover the time to collision from a monocular image sequence. That's interesting because you'd think that to get depth we might use two cameras and binocular vision, like we have two eyes and a certain baseline and we can triangulate and figure out how far things are away.
And so it's kind of surprising that it's relatively straightforward to figure out the time to contact, which is the ratio of the distance to the speed. So if I've got 10 meters to that wall and I'm going 10 meters per second, I'll hit it in a second. So I need to do two things. I need to estimate the distance and I need to estimate the speed. And both of these are machine vision problems that we can attack. And it turns out that there's a very direct method that doesn't involve any higher level reasoning that gives us that ratio. And it's very useful. And it's also suggestive of biological mechanisms, because animals use time to contact for various purposes, like not running into each other. Flies, pretty small nervous system, use time to contact to land. So they know what to do when they get close enough to the surface. And so it's interesting that we can have some idea about how a biological system might do that. Contour maps from aerial photographs-- that's how all maps are made these days. And we'll talk about some industrial machine vision work. And that's partly because those systems really have to work very, very well, not like 99% of the time. And so they actually pooh-pooh some of the things we academics talk about, because they're just not ready for that kind of environment. And they've come up with some very good methods of their own. And so it'll be interesting to talk about that. So at a higher level, we want to develop a description of the environment just based on images. After we've done some preliminary work and put together some methods, we'll use them to solve what was at one point thought to be an important problem, which is picking an object out of a pile of objects. So in manufacturing, often parts are palletized or arranged. Resistors come on a tape. And so by the time they get to the machine that's supposed to insert them in the circuit board, you know its orientation.
And so it makes it very simple to build advanced automation systems. But when you look at humans building things, there's a box of this and there's a box of that and there's a box of these other types of parts. And they're all jumbled. And they don't lie in a fixed orientation so that you can just grab them using fixed robotic motions. And so we will put together some machine vision methods that allow us to find out where a part is and how to control the manipulator to pick it up. We'll talk a lot about ill-posed problems. So according to Hadamard, ill-posed problems are problems that either do not have a solution, have an infinite number of solutions, or, from our point of view, most importantly, have solutions that depend sensitively on the initial conditions. So if you have a machine vision method that, say, determines the position and orientation of your camera, and it works with perfect measurements, that's great. But in the real world, there are always small errors in measurements. Sometimes you're lucky to get things accurate to within a pixel. And what you want is not to have a method where a small change in the measurement is going to produce a huge error in the result. And unfortunately, the field has quite a few of those. And we'll discuss some of them. A very famous one is the so-called eight-point algorithm, which works beautifully on perfect data, like your double precision numbers. And even if you put in a small amount of error, it gives you absurd results. And yet many papers have been published on it. OK. We can recover surface shape from monocular images. Let's look at that a little bit. So what do you see there? Think about what that could be. So if you don't know what it is, do you see it as a flat surface? Let's start there. So no, you don't see it as a flat surface. So that's where I was really going with this. I promise you this scene is perfectly flat. There's no trickery here.
But you are able to perceive some three-dimensional shape, even though you're unfamiliar with this surface, with this picture. And it happens to be gravel braids in a river north of Denali in Alaska in winter, covered in snow, and so on. But the important thing is that we can all agree that there's some groove here. And there's a downward slope on this side, and so on. So that shows that even though images provide only two-dimensional information directly, we can infer three-dimensional information. And that's one of the things we're going to explore. So how is it that even though the image is flat, we see a three-dimensional shape. And of course, it's very common and very important. You look at a picture of some politician in the newspaper, well, the paper is flat, but you can see that face as some sort of shape in 3D, probably not with very high metric precision. But you can recognize that person based not just on whether they have a mustache or they're wearing earrings or something. But you have some idea of what the shape of their nose is and so on. So here, for example, is Richard Feynman's nose. And on the right is an algorithm exploring it to determine its shape. So you can see that, even though presumably he washed his face and it's pretty much uniform in properties all over, where it's curved down it's darker. Where it's facing the light source, which, in this case, is near the camera, it's bright. And so you have some idea of slope, that the brightness is somehow related to slope. What makes it interesting is that while slope is not a simple thing, it's not one number-- it's two, right, because we can have a slope in x and we can have a slope in y. But we only get one constraint. We only get one brightness measurement. So that's the kind of problem we're going to be faced with all the time where we're counting constraints versus unknowns. How much information do we need to solve for these variables?
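The "brightness relates to slope" observation can be made concrete with a tiny model. Assuming a Lambertian sphere lit from the camera direction (my own illustrative example, not the lecture's slide), brightness is the cosine of the angle between the surface normal and the viewing direction, i.e. the z-component of the unit normal:

```python
import numpy as np

def sphere_brightness(x, y, r=1.0):
    """Brightness of a Lambertian sphere with the light at the camera.

    For a sphere of radius r, the unit normal's z-component is
    sqrt(r^2 - x^2 - y^2) / r, so the image is brightest where the
    surface faces the camera and darkens where it curves away.
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    rho2 = x**2 + y**2
    b = np.zeros_like(rho2)           # zero outside the sphere's outline
    inside = rho2 < r**2
    b[inside] = np.sqrt(r**2 - rho2[inside]) / r
    return b
```

Note that this direction only goes one way easily: each pixel gives one brightness value, while the surface slope has two components (dz/dx and dz/dy). Inverting the map, going from brightness back to shape, is the underdetermined counting problem just described, and shape-from-shading methods add extra constraints to make it solvable.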
And how sensitive is it going to be to errors in those measurements, as we mentioned? And there's a contour map of the nose. And I mean, once you've got the 3D shape, you can do all sorts of things. You can put it in a 3D printer and give it to him as a birthday present and whatnot. And here's a somewhat later result where we're looking at an image of a hemisphere-- well, actually an oblate ellipsoid. And we're asked to recover its shape. And these are iterations of an algorithm that works on a grid and finally achieves the correct shape. And we'll talk about the interesting intermediate cases where there's ridges where the solution is not satisfied. And the isolated points that are conical. And it's interesting in this case to look at just how the solution evolves. So here's an overall picture of machine vision in context. So first we have a scene, the world out there. And the illumination of that scene is important. That's why that's shown, although it's shown with dotted marks because we're not putting that much emphasis on it. There's an imaging device, typically with a lens or mirrors or something. And we get an image. And then the job of the machine vision system is to build a description. And when it becomes interesting is when you then use that description to go back and do something in that world. And so in my view, some of the more interesting things are robotics applications where the proof of the pudding is when you actually go out and the robot grabs something and it's grabbing it the correct way. That's one way you can know. That's one constraint on your machine vision program. If your machine vision program is not working, that probably won't happen. So in many other cases, if the final output is a description of the environment, who's to say whether it's correct. It depends on the application. I mean, if it's there for purposes of writing a poem about the environment, that's one thing.
If its purpose is to assemble an engine, then it's this type of situation where we have some feedback. If it works, then probably the machine vision part worked correctly. Here's the time to contact problem that I was talking about. And as you can imagine, of course, as you move towards the surface, the image seems to expand. And that's the cue. But how do you measure that expansion? Because all you've got are these gray levels, this array of numbers. How do you measure that? And how do you do it accurately and fast? And also we've noted that somehow there are interesting aspects, like one camera-- we don't need two. The other one is that for many of the things we do, we need to know things about the camera, like the focal length. And we need to know where the optical axis strikes the image plane. So we've got this array of pixels. But where's the center? Well, you can just divide the number of columns and the number of rows by 2. But that's totally arbitrary. What you really want to know is, if you put the axis through the lens, where does it hit that image plane? And of course the manufacturer typically tries to make that be exactly the center of your image sensor. But it's always going to be a little bit off. And in fact, in many cases, they don't particularly care. Because if my camera puts the center of the image 100 pixels to the right, I probably won't notice in normal use. If I'm going to post on Facebook, it doesn't really make any difference. If I'm going to use it in industrial machine vision, it does make a difference. And so that kind of calibration is something we'll talk about as well. And what's interesting is that in this particular case, we don't even need that. We don't even need to know the focal length, which seems really strange. Because if you have a longer focal length, that means the image is going to be expanded. So it would seem that would affect this process.
But what's interesting is that at the same time as the image is expanded, the image motion is expanded. And so the ratio of the two is maintained. So from that point of view, it's a very interesting problem. Because unlike many others, we don't need that information. So here's an example of approaching this truck. And over here's a plot-- time on the horizontal axis, and on the vertical, the computed time to contact. The red curve is the computed value. And the barely visible green dotted line is the true value. In the process, by the way, we expose another concept, which is the focus of expansion. So as we approach this truck, you'll notice that we end up on the door, which is not the center of the first image. So we're actually moving at an angle. We're not moving straight along the optical axis of the camera, but we're moving at an angle. And the focus of expansion is very important, because it tells us in 3D what the motion vector is. So in addition to finding the time to contact, we want to find the focus of expansion. And there's another one. This one was done using time lapse, moving the car a little bit every time. And, well, I'm not very good at moving things exactly 10 millimeters. So it's a bit more noisy than the previous one. So, yeah, we'll be talking a little bit about coordinate systems and transformations between coordinate systems. For example, in the case of the robot applications, we want to have a transformation between a coordinate system that's native to the camera. When you get the robot, it has kinematics programmed into it so that you can tell it in x, y, z where to go, and in angle how to orient the gripper. But that's in terms of its defined coordinate system, where probably the origin is in the base where it's bolted to the ground. Whereas your camera up here probably likes a coordinate system where its center of projection is the origin. So we'll have to talk about those kinds of things. And I won't go into that. We'll talk about this later.
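Why the focal length cancels can be seen in a small simulation (my own toy numbers, not the lecture's data): an object of width W at distance Z images to width w = f W / Z, and the ratio of image size to its rate of change gives the time to contact directly:

```python
import numpy as np

# Time to contact (TTC) from image expansion alone: for constant approach
# speed, w / (dw/dt) equals Z / (-dZ/dt) = TTC, and f cancels out.
f = 0.035          # focal length in meters -- irrelevant to the result
W = 2.0            # object width in meters (made up)
Z0, v = 10.0, 2.0  # initial distance and approach speed (m, m/s)
dt = 0.01          # frame interval in seconds

w0 = f * W / Z0                # image width at time t
w1 = f * W / (Z0 - v * dt)     # image width one frame later
ttc = w0 / ((w1 - w0) / dt)    # w / (dw/dt)
print(ttc)                     # ~ Z0 / v = 5 seconds, whatever f is
```

Changing f rescales both w and dw/dt by the same factor, so the estimate is unchanged -- which is the "we don't even need to know the focal length" point.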
So I mentioned analog computing. And now we just automatically-- everything is digital. But there are some things that are kind of tedious. If you have to process 10 million pixels and do complicated things with them, since digital computing isn't getting any faster, that can be a problem. OK. So you can use parallelism. So there's still an interest in analog. And so here, this is the output of a chip that we built to find the focus of expansion. And it's basically instantaneous, unlike the digital calculation. And the plot is a little hard to see. But let's see, the circles are determined by two different algorithms. And you can see that there's some error. But overall, the cross-- the x and the o are sort of on top of each other. This was a fun project because to have a chip fabricated is expensive. And so you can't afford to screw up too many times. And of course, with an algorithm this complicated, what's the chance you'll get it right the first time? So the student finally reached the point where OPA wouldn't pay for any more fabs. And the last problem was there was a large current to the substrate, which caused it to get warm. And of course, once it gets hot, it doesn't work anymore. So he'd come in every morning with a cooler full of ice cubes and a little aquarium pump and cooled his focus of expansion chip to make sure that it wouldn't overheat. So we talked a little bit about projection and motion. Let's talk about brightness. So as you'll see, you can split down the middle what we'll have to say about image formation. So the first half is the one that's covered in physics: projection. It answers the question where-- what is the relationship between points in the environment and points in the image? Well, rays-- you connect them with a straight line through the center of projection, and you're pretty much done. That's called perspective projection. And we'll talk about that. But then the other half of the question is, how bright?
What is the gray level at a point-- in color terms, the RGB values at a point? And so that's less often addressed in some other courses. And we'll spend some time on that. And obviously, we'll need to do that if we're going to solve that shape from shading problem, for example. So what is this? So we've got three pictures here taken from pretty much the same camera orientation and position of downtown Montreal. And obviously if you go to a particular pixel in the three images, they're going to have different values. Of course, the lighting has changed. So what this illustrates right away is that illumination plays an important role. And obviously we'd like to be insensitive to that. And in fact, if you showed anyone one of these three pictures separately, they'd say, oh, yeah, OK, that's Place Ville Marie. And they wouldn't even think about the fact that the gray levels are totally different, because we automatically accommodate that difference. So we'll be looking at diagrams like this where we have a light source shown as the sun, and an imaging device shown as an eye, and a tiny piece of the surface, and the three angles that control the reflection. And so what we see from that direction is a function of where that light comes from, what type of material it is, and how it's oriented. And we'll particularly focus on that orientation question. Because if we can figure out what the surface orientation is at lots of points, we can try and reconstruct the surface. And there's that business of counting constraints again, because what's the surface orientation? It's two variables. Because you can tilt it in x and you can tilt it in y. That's the crude way to see why that is. And what are we getting? We're getting one brightness measurement. So it's not clear you can do it. It might be underconstrained. And the image you get of an object depends on its orientation.
And the way I've shown it here is to show the same object basically in many different orientations. And not only does its outline change, but you can see the brightness within the outline depends a lot on that as well. And things depend a lot on the surface reflecting properties. So on the left, we have a matte surface-- white matte paint out of a spray can. And on the right we have a metallic surface. And so even though it's the same shape, we have a very different appearance. And so we'll have to take that into account and try and understand how you describe that. What equation or what terminology shall we use for that? So we'll jump ahead here to one approach to this question, which is, suppose we lived in a solar system with three suns that have different colors. This is what we get-- here's a cube. And it would make things very easy, right, because there's a relationship between the color and the orientation. So if I have that particular type of blue out there, I know that the surface is oriented in that particular way. So that would make the problem very easy. And so that leads us to an idea of how to solve this problem. So as I mentioned, there's this so-called bin of parts problem. We were foolish enough to believe what the mechanical engineers wrote in their annual report. So what they said was, here are the 10 most important problems to solve in mechanical engineering. And this was, I forget, number 2-- how to pick parts when they're not palletized, when they're not perfectly arranged. And so here the task is to take one after another of these rings off the pile of rings. And of course, if they were just lying on the surface, it would be easy, because there are only that many stable positions. Well, for this object only two. And so it would be pretty straightforward. But since they can lie on top of each other, they can take on any orientation in space. And also, they obscure each other. And also shadows of one fall on the other.
So it gets more interesting. And you can see that it took many experiments to get this right. So these objects got a little bit hammered. So you have to be insensitive to the noise due to that. And we need a calibration. So we need to know the relationship between surface orientation and what we get in the image. And so how best to calibrate? Well, you want an object of known shape. And nothing better than a sphere for that. It's very cheap. You just go to the store and buy one. You don't have to manufacture a paraboloid or something. And this may be a little odd picture, but this is looking up into the ceiling. So in the ceiling, there are three sets of fluorescent lights. And in this case, they're all three turned on. But in the experiment, they're used one at a time. So we have three different illuminating conditions. And we get a constraint at each pixel out of each one. So ta-da-- we have enough constraints. We've got three constraints at every pixel. We need two for surface orientation. And we have an extra one. Well, the extra one allows us to cope with albedo, changes in reflectance. So we can actually recover both the surface orientation and the reflectance of the surface, if we do this with three lights. So here's our calibration object illuminated by one of those lights. And now we repeat it with the other two. And just for human consumption, we can combine the results into an RGB picture. So this is actually three separate pictures. And we've used them as the red, green, and blue planes of a color picture. And you can see that different surface orientations produce different colors-- meaning, different results under the three illuminating conditions. And so conversely, if I have the three images, I can go to a pixel, read off the three values, and figure out what the orientation is. And you might see a few things. One of them is that there are certain areas where the color is not changing very rapidly. Well, that's bad, right?
Because that means that if there's some small error in your measurement, you can't be sure exactly where you are. And there are other areas where the color is changing pretty dramatically. And that's great, because any tiny change in surface orientation will have an effect. And so one of the things we'll talk about is that kind of noise gain, that sensitivity to measurement error. Why worry about it? Well, images are noisy. So first of all, you're looking at 8-bit images. That's one part in 256. That's really crude quantization. And you can't even trust the bottom one or two bits of those. If you're lucky and you get raw images out of a fancy DSLR, you might have 10 bits or 12. Another way to look at it is that a pixel is small. How big is a pixel in a typical camera? So we can figure it out. So the chip is a few millimeters by a few millimeters. And we've got a few thousand columns and a few thousand rows. So it's a few microns. And there are huge trade-offs. Like the one in your phone has smaller pixels. The one in a DSLR has larger pixels. But in any case, they're tiny. Now imagine light bouncing around the room. A little bit of that light goes through the lens. And a tiny, tiny part of that gets onto that one pixel. So the number of photons that actually hit a pixel is relatively small. It's like a million or less. And so that means that now we have to worry about statistics of counting. As you can imagine, if you have 10 photons, is it nine? Is it 10? Is it 11? That's a huge error. If you have a million, it's already better. It's like one in a thousand. But so the number of photons that can go into a single pixel is small. And not only is there a little light coming in, but actually the pixel itself can't store that much. The photons are converted to electrons. Each pixel is like a tiny capacitor that can take a certain charge before it's full. So anyway, images are noisy. So we have to be cognizant of that. So that was the calibration.
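The three-light recovery described above -- three constraints per pixel, two for orientation and one left over for albedo -- can be sketched as a per-pixel linear solve. This is a minimal photometric-stereo sketch with made-up light directions and a made-up ground-truth normal, assuming a Lambertian surface:

```python
import numpy as np

# Photometric stereo with three light sources: for a Lambertian surface,
# each measurement is  I_k = albedo * (s_k . n),  k = 1, 2, 3.
# Stacking the (assumed known) unit light directions s_k as rows of S gives
#   I = albedo * S n,  so  g = S^{-1} I  has length albedo and direction n.
S = np.array([[0.0, 0.0, 1.0],        # example light directions (made up)
              [0.8, 0.0, 0.6],
              [0.0, 0.8, 0.6]])

true_n = np.array([0.0, 0.6, 0.8])    # ground-truth normal for the demo
true_albedo = 0.75
I = true_albedo * S @ true_n          # the three brightness measurements

g = np.linalg.solve(S, I)             # one 3x3 solve per pixel
albedo = np.linalg.norm(g)
n = g / albedo
print(albedo, n)                      # recovers 0.75 and (0, 0.6, 0.8)
```

The same solve is repeated independently at every pixel, which is what makes this attractive for industrial setups with controlled lighting.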
Now we go to the real object. And again, different surface orientations produce different colors. From that, we can construct this so-called needle diagram. So imagine that we divide the surface up into little patches. And at each point, we erect the surface normal. And then there are these tiny little-- may be hard to see-- bluish spikes that are the projections of those surface normals. So in some areas, like here, they're pretty much pointing straight out at you. So here you're looking perpendicularly onto the surface. Whereas over here, the surface is curving down and you're looking at it sideways. So that's a description of the surface, and we could use that to reconstruct the shape. But if we're doing recognition and finding out orientation, we might do something else. So here, you see it's actually slightly more complicated, because you've got shadows. And it's harder to see, but there's also interreflection. That is, with these white objects, light bounces off each of them in a matte way, goes everywhere, and it spills onto the other surfaces. So it's not quite as simple as I explained. So what do we do with our surface normals? Well, we want a compact, convenient description of shape. And for this purpose, one such description is something called an extended Gaussian image, which we'll discuss in class, where you take all of those needles and you throw them out onto a sphere. And so for example, for this object, we have a flat surface at the top. All of those patches of that surface have the same orientation. So they're going to contribute that big pile of dots at the North Pole. So to cut that short, it's a representation in 3D that's very convenient if we need to know the orientation of the object, because if we rotate this object, that representation just rotates. You can think of many other representations that don't have that property. OK, so here it is.
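The extended Gaussian image idea can be shown in miniature (my own toy example, not from the lecture): collect each patch's unit normal, weighted by area, as a mass on the unit sphere, and note that rotating the object just rotates those masses:

```python
import numpy as np

# EGI in miniature: for an axis-aligned unit cube, the EGI is six unit
# masses at the poles of the coordinate axes. Rotating the object applies
# the same rotation to the EGI -- the property praised in the lecture.
normals = np.array([[ 1, 0, 0], [-1, 0, 0],
                    [ 0, 1, 0], [ 0,-1, 0],
                    [ 0, 0, 1], [ 0, 0,-1]], dtype=float)
areas = np.ones(6)                       # each cube face has unit area

theta = np.pi / 4                        # rotate 45 degrees about z
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1]])
rotated_egi = normals @ Rz.T             # the representation simply rotates
print(rotated_egi[0])                    # first face normal, now at 45 deg
```

A representation that stores, say, vertex coordinates relative to a bounding box would not transform this simply under rotation, which is the contrast being drawn.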
You could imagine that it wasn't easy to get the sponsor of the project to pay for these parts here. I think they were concerned they were not for experimental purposes. So this is a single camera system, so there's no depth. So the way this works is that you do all this image processing. You figure out which object to pick up and how it's oriented. And then you reach down with a hand until a beam is interrupted, then you know the depth. So here the beam is interrupted. And now the robot backs up. And here it orients the hand for grasping. And then it comes back and grasps that object, and so on. And I show this because another calibration I left out was what I previously mentioned-- the relationship between the robot coordinate system and the vision system coordinate system. And one way of dealing with that is to have a robot carry around something that's easy to see and accurately locatable. This is something called a surveyor's mark, because surveyors have used that trick for a very long time. It's easy to process the image. And you can find the location of the intersection of these two lines very accurately with sub-pixel accuracy. So you move that around in the workspace and then fit the transformation to it. And then you can use that to-- OK, back to more serious stuff. So that should give you a taste of the kind of thing that we'll be doing. And what I'm going to do now is work towards what you need for the homework problem. So first, are there any questions about what you saw? I mean, a lot of that's going to get filled in as we go through the term. So I mentioned this idea of inverse graphics. So if we have a world model, we can make an image. People who are into graphics will hate me saying that. But that's the easy part. That's the forward problem. It's well-defined. And the interesting part is, how do you do it well? How do you do it fast? 
How do you do it when the scene has only changed slightly and you don't want to have to recompute everything and so on. But what we're trying to do is invert that process. So we take the image. And we're trying to learn something about the world. Now we can't actually reconstruct the world. We typically don't end up with a 3D printer doing that. Usually, this ends as a kind of description. It might be a shape or identity of some object or its orientation in space, whatever is required for the task that we have. It might be some industrial assembly task, or it might be reading the print on a pharmaceutical bottle to make sure that it's readable, and so on. But that's the loop. And that's why we like to talk about it as inverse graphics. Now to do that, we need to understand the image formation. And that sounds pretty straightforward, but it has two parts, both of which we'll explore in detail as we go along. Then with inverse problems, like here we're trying to invert that, we often find that they're ill-posed. And as I mentioned, that means that they don't have a solution, have an infinite number of solutions, or have solutions that depend sensitively on the data. And that doesn't mean it's hopeless, but it does mean that we need methods that can deal with that. And often we'll end up with some optimization method. And in this course, the optimization method of choice is least squares. Why is that? Well, the fancy probability people will tell you that this is not a robust method. If you have outliers, it won't work very well. And that's great. But in many practical cases, least squares is easy to implement and leads to a closed form solution. Wherever we can get a closed form solution, we're happy, because we don't have iteration. We don't have the chance of getting stuck in a local minimum or something. So we'll be doing a lot of least squares. But we have to be aware of-- I already mentioned-- noise gain. 
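Since least squares is named as the optimization workhorse, here's what that closed-form solution looks like on a toy overdetermined fit (made-up numbers standing in for image measurements), via the normal equations:

```python
import numpy as np

# Least squares in closed form: minimize ||A x - b||^2 by solving the
# normal equations  (A^T A) x = A^T b  -- no iteration, no local minima.
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
b = np.array([1.1, 1.9, 3.1, 3.9])     # roughly b = 1 + t, with noise

x = np.linalg.solve(A.T @ A, A.T @ b)  # closed-form least-squares estimate
print(x)                               # close to [1, 1]

# The "noise gain" worry in one number: the condition number of A bounds
# how much relative error in b can be amplified in x.
print(np.linalg.cond(A))
```

When the condition number is large, a 1% measurement error can turn into a much larger error in the answer, which is exactly the sensitivity question raised here.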
So not only do we want to have a method for solving the problem, but we'd like to be able to say how robust it is. If my image measurements are off by 1%, does that mean that the answers are completely meaningless? Or does it mean that they're just off by 1%? So that kind of thing. Diving right in, we're going to address this problem. And it's straightforward. And we'll start off with something called the pinhole model. Now we know that real cameras use lenses, or in some cases mirrors. Why pinholes? Well, that's because the projection in a camera with a lens is trying to be exactly the same as in a pinhole camera. By the way, there's a great example of a pinhole camera in Santa Monica. It's a camera obscura. You walk into this small building that's completely windowless. It's dark inside. And there's a single hole in the wall. And on the other side, on the other wall painted white, you see an inverted image of the world. And you see people walking by and so on. So that's a nice example of a pinhole camera. So here's a box to keep the light out. And then we have a hole in it. And on the opposite side of the box, we see projected a view of the world. So let's just try and figure out what that projection is. So there's a point in the world, uppercase P. And there's a little p point in the image plane. So the back of the box is going to be our image plane. And our retina is not flat, but we're just going to deal with flat image sensors because all the semiconductor sensors are flat. And if it's not flat, we can transform. But we'll just work with that. So what we want to know is, what's the relationship between these two? And so this is a 3D picture. And now let me draw a 2D picture. OK, so we're going to call this f. And f is alluding to focal length. Although in this case, there's no lens, so there's no focal length. But we'll just call that distance f. And we'll call this distance little x. And we'll call this distance big X, and this distance big Z.
So in the real world, we have a big X, big Y, big Z. And in the image plane, we have little x. And we're going to have little y and f. And, well, there are similar triangles. So we can immediately write x/f = X/Z. And although this isn't completely kosher, I can do the same thing in the y plane. So I can draw the same diagram, just slice the world in a different way, and I get the companion equation y/f = Y/Z. And that's it. That's perspective projection. Now why is it so simple? Well, it's because we picked a particular coordinate system. So we didn't just have an arbitrary coordinate system in the world. We picked a camera-centric coordinate system. And that's made the equation just about trivial. So what did we do? Well, this point here is called the center of projection. And we put that at the origin. We just made that 0, 0, 0 in the coordinate system. And so this is also the COP. And then there's the image plane, IP. OK, so we did two things. One was we put the origin at the center of projection. And the other one is we lined up the axes with the optical axis. So what's the optical axis? Well, a lens has a cylindrical symmetry. So the cylinder has an axis. But there's no lens here. But what we can do is we can look at where a perpendicular dropped from the center of projection onto the image plane strikes the image plane. So we've used that as a reference. And so that's going to be our optical axis. It's the perpendicular from the center of projection onto the image plane. And we line up the z-axis with that. That's going to be our z-axis. So it's a very special coordinate system, but it makes the whole thing very easy. And then if we do have a different coordinate system on our robot or whatever, we just need to deal with the transformation between this special camera-centric coordinate system and that coordinate system.
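The similar-triangles equations can be exercised directly. A minimal sketch (my own example points) of perspective projection in the camera-centric coordinate system:

```python
import numpy as np

# Perspective projection in the camera-centric frame derived on the board:
#   x/f = X/Z  and  y/f = Y/Z,
# with the center of projection at the origin and the z-axis along the
# optical axis.
def project(P, f):
    """Project a 3D point P = (X, Y, Z) onto the image plane at distance f."""
    X, Y, Z = P
    return np.array([f * X / Z, f * Y / Z])

f = 1.0
print(project((2.0, 1.0, 4.0), f))   # -> [0.5, 0.25]
# Note the nonlinearity in Z: doubling the distance halves the image point.
print(project((2.0, 1.0, 8.0), f))   # -> [0.25, 0.125]
```

The division by Z is the nonlinearity discussed next: an inconvenience for the math, but also the reason depth leaves a trace in the image at all.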
Now one of the things that's very convenient-- well, not only are they going to make me walk across campus, but I'm going to get upper body strength as well. This is great. OK, so what we do is we flip the image plane forward. So the image on your retina is upside down. And in many cases, that's an inconvenience. So what we can do is we can just pretend that the world actually looks like this. That's pretty much the same diagram. We've just flipped 180 degrees what was behind the camera and in front. And it makes the equations even more obvious. The ratio of this to that is the ratio of this to that. Now that sounds straightforward and somewhat boring. But it has a number of implications. The first one is it's non-linear. So we know that when things are linear, our math becomes easier and so on. But here we're dividing by Z. So on the one hand, that's an inconvenience, because when you take derivatives and such of the ratio, that's not so nice. But on the other hand, it gives us some hope. Because if the result depends on Z, we can turn that on its head and say, oh, maybe then we can find Z. So we can get an advantage out of what seems like a disadvantage. And then the next thing is-- we won't do it today, but we'll be doing it soon-- to talk about motion. So what happens? Well, we just differentiate that equation with respect to time. And what will that give us? Right now, we have a relationship between points in 3D and points in the image. And when we differentiate, we get a relationship between motion in 3D and motion in the image. And why is that interesting? Well, it means that if I can measure motion in the image, which I can, I can try and guess what the motion is in 3D. Now the relationship is not that simple. For example, if the motion in 3D is straight towards me, the baseball bat is going to hit me in the head, then the motion in the image is very, very small. So you'll have to take into account that transformation.
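Carrying out that differentiation explicitly (filling in a step the lecture defers to later), the quotient rule applied to x/f = X/Z gives:

```latex
\frac{1}{f}\frac{dx}{dt}
  = \frac{d}{dt}\!\left(\frac{X}{Z}\right)
  = \frac{\dot{X} Z - X \dot{Z}}{Z^2}
  = \frac{\dot{X}}{Z} - \frac{x}{f}\,\frac{\dot{Z}}{Z}
```

So image motion mixes true 3D motion with depth. For motion straight toward the camera near the image center (X ≈ 0, x ≈ 0, Ẋ = 0), both terms are small, which is exactly the baseball-bat remark: a large 3D motion can produce an almost imperceptible image motion.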
But I do want to know the relationship between motion in 3D and motion in 2D. And I get it just by differentiating that. Then, I want to introduce several things that we use a lot in the course. The next one is vectors. So we're in 3D. Why am I talking about components? I should be just using vectors. So first of all, notation. In publications-- in engineering publications, not math publications-- vectors are usually denoted with bold letters. And so if you look at Robot Vision or some paper on the subject, you'll see vectors in bold. Now I can't do bold on the blackboard, and so we use an underline. And actually, there was a time when you didn't typeset your own papers-- just a second-- but somebody at the publisher typeset your paper. So how did you tell them to typeset in bold? You underlined it. I mean, the camera actually works the way that works up there in most cases. Some of them will have mirrors to fold the optical path. This is just a conceptual convenience, to make it easier. I mean, maybe some people don't have a problem with minus signs. But to me, it's confusing having that one upside down. So I prefer to do it this way. But the actual apparatus works that way. So the other bit of notation that we need is a hat for a unit vector, because we'll be dealing with unit vectors quite a bit. For example, you saw that we talked about the surface orientation on that donut in terms of unit vectors. It's a direction. So we use a hat on top of a vector. And so let's turn that into vector notation. Well, I love this. So I claim that this is basically the same as that up there, right. Because if you go component by component, the first component is little x over f is big X over big Z. The second component is little y over f is big Y over big Z. And the third component is f over f is Z over Z. So that doesn't do anything to us. So that's the equivalent. And now I can just define a vector r. So this is little r.
Now I've got a mixed notation, right, because I've got a big Z in here. Well, that's the third component of the big R vector. So I just take the dot product with the unit vector z-hat. So let me write that out in full. So that's (X, Y, Z) transpose dotted with the unit vector in the z direction along the optical axis, which is just (0, 0, 1) transpose. And so I finally have the equivalent of the equations up there in component form-- I have it here in vector form, r/f = R/(R · ẑ). So that's perspective projection in vector form. Now usually at this point, you say, look how easy it got by using vector notation. Well, it isn't really any easier looking. This is one of those rare cases where it didn't buy you a whole lot in terms of the number of symbols you have to write down, and so on. Nevertheless, the compactness of that notation comes out when we start manipulating it. If you have to carry around all these individual components all the time, that can get pretty tedious. Whereas if you use the vector, it's more interesting. And as I've mentioned, one of the things we're going to do soon is differentiate that with respect to time. And then on the left, we'll have image motion. And on the right, we'll have real world motion. And the equation we get will give the relationship between the two. So this may sound a little bit haphazard and chopped up, the way we're doing it today. And that's only because I want to cover stuff in chapters 1 and 2 and the material you need for the homework problem. So rather than pursue perspective projection-- well, we're going to jump to brightness in a second. But first, let me say something else, which is that I'm thinking of these vectors as column vectors. And that's arbitrary, because we can establish a relationship between skinny matrices and vectors either way. I can think of x, y, and z stacked up vertically above each other as a 3-by-1 matrix. Or I can write them horizontally, x, y, z, and it's a 1-by-3 matrix. And just for consistency, I'm always going to think of them as column vectors.
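A quick numerical check (my own example point) that the vector form r/f = R/(R · ẑ), with r = (x, y, f) transpose, reproduces the component equations:

```python
import numpy as np

# Vector form of perspective projection: (1/f) r = R / (R . z_hat),
# where r = (x, y, f)^T and z_hat = (0, 0, 1)^T picks out Z.
f = 1.5
R = np.array([3.0, 1.0, 5.0])          # a world point (X, Y, Z), made up
z_hat = np.array([0.0, 0.0, 1.0])

r = f * R / np.dot(R, z_hat)           # vector-form projection
print(r)                               # (f X/Z, f Y/Z, f) = [0.9, 0.3, 1.5]
```

The third component comes out as f by construction, which is the "f over f is Z over Z" bookkeeping from the component-by-component argument.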
And that's why sometimes I need a transpose. And that's what the symbol T is for. So I didn't say it here, but we can now go back to this. So if I write it this way, it's a row vector. But actually all my vectors are supposed to be column vectors. So I've stuck in the transpose. So another bit of notation. All pretty straightforward, though. OK, let's talk about brightness. So brightness depends on a bunch of different things. It depends on illumination, and in a linear way, in that if you throw more illumination on an object, it's going to be brighter. And there are few laws that are really, really linear. This is linear over many, many, many orders of magnitude. I mean, when does it stop being linear? Well, when you put so much energy on the surface that you're melting it. You have to actually have enough energy to fry it. And it's a little bit like Ohm's law, which is also one of those remarkable things that for some materials is linear over many, many orders of magnitude. Anyway, so it depends on the illumination. And then it depends on how the surface reflects light. And so we'll have to talk about that. Now, obviously, there's a difference in terms of amount. So my laptop reflects relatively little light. Whereas my shirt reflects more light. Anyone want to guess what percentage of incident solar radiation the moon reflects? It's a trick question. Do you happen to know? It sort of looks white in the sky. So it's got to be 90% or something? It's 11%. It's as black as coal. And so why don't you know that? Well, because you have no comparison. Now if I went up there with a sheet of white paper and held it next to the moon, you'd say, oh, yeah, God, it's really dark. But no one does that. It just hangs up there and you have no reference. So this business about brightness is tricky. You've got to be careful about that. And by the way, why is it as dark as coal? It's because of solar wind impinging on the surface.
And you also probably know that there were, quote, "recent" craters, as in only in the last few million years. They have bright streaks. That's where the underlying material is exposed and the sun hasn't yet done its work on them. Anyway, so brightness depends on reflectance. How about distance? If I have a light bulb, it's less intense when I go further away. So there's an inverse square law. So does that apply to image formation? In the more normal sense, if I walk away from that wall-- if I stand on this side of the room, is that wall only a quarter as bright as when I stand over here? Do you believe that? I can sell you a bridge in Brooklyn, then. No, of course, it's the same brightness. And you know that. So what's going on? Why does it not follow the inverse square law? Well, the reason is that at the same time as I'm getting closer, the area that's imaged on one of my receptors is larger on the wall. Or if you want to think of it in terms of the little light bulb, the LED, imagine that the wall is covered with lots of LEDs. And each of them does, in fact, follow the 1 over r squared law. But if you think about how many LEDs are imaged on one of my pixels, that goes as the square of the distance. So the two exactly cancel out. And so in fact, we can cancel that one out. And so what else does it depend on? Well, it doesn't depend on the distance itself, but it depends on the rate of change of distance, or orientation. And not in a terribly simple way, but we can start with a simple example. So here's a surface element, some little patch of a surface. And here's a light source. And what we find is that there is foreshortening. That is, the power that hits the surface per unit area is less. So I can measure the power in this plane, so many watts per square meter, which in the case of the sun is about a kilowatt per square meter. But obviously that same energy is spread out over a larger area. This length is bigger than that length.
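The LED-covered-wall argument can be written down directly. A toy model (units and numbers made up) showing the exact cancellation between the 1/Z² falloff per emitter and the Z² growth of the wall patch seen by one pixel:

```python
# Why a wall doesn't follow the inverse square law: model the wall as a
# grid of tiny emitters, each obeying 1/Z^2. A pixel of fixed angular size
# images a wall patch whose area grows as Z^2, so the number of emitters it
# sees grows as Z^2 and the two effects cancel.
def pixel_brightness(Z, pixel_angle=0.01, emitters_per_m2=1e4, power=1.0):
    patch_area = (pixel_angle * Z) ** 2        # wall area seen by the pixel
    n_emitters = emitters_per_m2 * patch_area  # grows as Z^2
    per_emitter = power / Z**2                 # falls as 1/Z^2
    return n_emitters * per_emitter            # independent of Z

print(pixel_brightness(2.0), pixel_brightness(8.0))  # the same value
```

So the measured brightness depends on the surface and the lighting, not on how far the camera stands from an extended surface.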
And so the illumination of this surface is less. And how much? Well, we can express it in terms of this angle, which is the incident angle, theta i. And that is the same angle as that angle, I think. And there's a cosine relationship between this red length and this length. So we find out that, in this case, the illumination on the surface varies as the cosine of the angle. And this is something that we'll see again and again. Now, it doesn't necessarily mean that the brightness, the amount of light it reflects, goes as the cosine of the incident angle. That is the simplest case. And so here's an example where we could use an image brightness measurement to learn something about the surface, because we can look at different parts of the surface. And they'll have different brightnesses, depending on this angle. Now, does it tell us the orientation of every little facet of the object? Some people are shaking their heads. No, right, because, again, it's one measurement, two unknowns. Why are there two unknowns? Well, one way to see it is to think of a surface normal, a vector that's perpendicular to the surface. And the way I can talk about the orientation of the surface is just to tell you what that unit normal is. So how many degrees of freedom are there? How many numbers? Well, I need three numbers to define a vector. So it sounds like three, except I have a constraint. Three components, but this isn't just any old vector-- this is a unit vector. So x squared plus y squared plus z squared equals 1. So I have one constraint. So actually surface orientation has two degrees of freedom. And since this is such an important point, let's look at it another way. So another way of specifying surface orientation is to take this unit normal and put its tail at the center of a unit sphere and see where it hits the sphere. And so every surface orientation then corresponds to a point on the sphere.
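To make the cosine foreshortening rule and the two-degree-of-freedom unit normal concrete, here is a small numerical sketch. The function names and specific numbers are my own; the 1 kW/m^2 figure and the latitude/longitude parameterization follow the lecture.

```python
import numpy as np

def irradiance(n, s, E0=1000.0):
    # Power per unit surface area scales as E0 * cos(theta_i) = E0 * (n . s),
    # where n is the unit surface normal and s points toward the light.
    n = n / np.linalg.norm(n)
    s = s / np.linalg.norm(s)
    return E0 * max(np.dot(n, s), 0.0)   # no light from behind the patch

def normal_from_angles(theta, phi):
    # A unit normal has only two degrees of freedom; (theta, phi) is the
    # latitude/longitude parameterization on the unit sphere.
    return np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)])

s = np.array([0.0, 0.0, 1.0])                              # light straight overhead
e1 = irradiance(normal_from_angles(0.0, 0.0), s)           # patch facing the light
e2 = irradiance(normal_from_angles(np.pi / 3, 0.0), s)     # patch tilted 60 degrees
print(e1, e2)                                              # 1000 and about 500
```

Tilting the patch 60 degrees halves the irradiance, since cos(60°) = 1/2, which is the foreshortening effect described above.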
And I can talk about points on the sphere using various ways, but one is latitude and longitude. And that's two variables. So that tells us, again, in another way, that a unit normal has two degrees of freedom. And if I want to pin it down, I'd better have two constraints. So that's where that was going. And that makes it interesting. I mean, if we could just say, OK, I'll measure the brightness and it's 0.6 and the orientation is such and such, the course would be over. It'd be pretty boring. But it isn't. It's not easy. We need more constraint. And we'll see different ways of solving this problem. One of them you saw in the slides was a brute force one saying, well, we just get more constraints. We illuminate it with a different light source. We get a different constraint, because the other light source would have a different angle. And then I can solve at every point. So from an industrial implementation point of view, that's great. You can do that. You can either use multiple light sources, put on at different times, or you can use colored light sources, and so on. But suppose you were interested in, how come people can do this? They don't play tricks with light sources. And they don't live in a world with three suns of different colors. Then we'll have to do something more sophisticated. And we'll study that. How are we doing? Are we getting there? OK, so the foreshortening comes up in two places. The one is here, where we're talking about incident light. But actually foreshortening also plays a role in the other direction. Whoa, high friction blackboard. Also, hard to erase. So it's really the same geometry, except now the rays are going in the other direction, and like so. And I have a foreshortening on the receiving end as well. So in a real imaging situation in 3D, we'll see both of these phenomena. There's the foreshortening that affects the incident illumination, as up there. And then there's this effect.
And for example, I can illustrate to you right away the stupidity of some textbooks. So some textbooks say that there's a type of surface, called Lambertian, which emits energy equally in all directions. That's what they literally say. Well, if that's true, then that energy is imaged in a certain area that changes as I change the tilt of the surface. And as I tilt the surface more and more and more, that imaged area becomes smaller and smaller. But it's receiving the same power, supposedly, according to these guys. And what does that mean? That means you're going to fry your retina right at the occluding boundary, because all that energy is now focused on a tiny, tiny area. So this is an important idea. And it comes in when we talk about the reflectance of surfaces. And we need to be aware of it. So now, something I want to end up on is, we're solving a tough problem. The world is 3D, and we've only got 2D images. So maybe we're lucky and we have several. But you've got a function of three variables, and that's got so much more flexibility than a few functions of two variables. So why does this work at all? Well, the reason it works is that we are not living in a world of colored Jell-O. So we're living in a very special visual world. So if I'm looking at some person back there, the ray coming from the surface of his skin to my pupil is not interrupted, and it's a straight line. Why? Well, because we're going through air. And air has a refractive index of almost exactly 1. And at least it doesn't vary from that position to here. And there's nothing in between. There's no smoking allowed in this room, so it can't be absorbed. And that's very unusual. And the other thing is that person has a solid surface. I'm not looking into some semi-translucent complicated thing. So there are straight line rays and there's a solid surface. Therefore, there's a 2D-to-2D correspondence. The surface of that person-- sorry, I keep on looking at the same person. He's getting embarrassed.
But 2D-- we can talk about points on the cheek of this person using two parameters, u and v. And that's mapped in some curvilinear way into the 2D in my image. And that's one reason why this works. It's not really 3D to 2D. It's a curvilinear 2D to 2D. And what's the contrast? Well, suppose I fill the room up with Jell-O. And then somebody goes in with a hypodermic, injects colored dye all over the show. And then I come in the door and I'm not allowed to move around. I can just stand at the door and I can look in the room. Can I figure out the distribution of colored dye? No. Because in every direction, everything is superimposed from the back of the room to the front. And so you can't disentangle it from one view. Can you do it? Yeah, if you have lots of views. And that's tomography. So we're in an interesting world. Tomography in a way is more complicated, but it's also in a way much simpler. The math is very simple. And we have a world where there's a match of dimensions. But the equations are complicated. So it's not so easy to do that inversion. I think we need to stop. OK, any questions? So about the homework problem, you should be able to do at least the first three of the five questions, probably the fourth. And then on Tuesday, we'll cover what you need to do the last one.
MIT_6801_Machine_Vision_Fall_2020
Lecture_5_TCC_and_FOR_MontiVision_Demos_Vanishing_Point_Use_of_VPs_in_Camera_Calibration.txt
[SQUEAKING] [RUSTLING] [CLICKING] BERTHOLD HORN: Have another go at this demonstration using a direct HDMI connection. And let's see what happens. Can't play graph. OK, so let's first go back up here. Just make-- So that's what we've seen before. And now let's try-- OK, that's a bit better. So this is a webcam looking down at the keyboard. And when I hold it still, you can see A, B, and C are near zero. They're small on the right side there. And the time to contact, bottom right, is some large number that's sometimes negative, sometimes positive. And now if I move it away from the keyboard, it should go green on C. The third one is the C component-- 1 over time to contact. As I approach the keyboard, it should go red, meaning danger. So there's that. And, of course, it's independent of the texture. So I can do the same thing on any surface. If I try and move it in x only, then the first bar should be large. And that's sort of true. I haven't got it oriented exactly right. But then if I move it in this direction, the second bar-- that's B. OK, so that's working a little bit better than it was last time. Let's start off with a correction. Somebody last time pointed out a sign error. So we had that discussion about different ways of thinking about time to contact, one of which was what we were discussing, and another one was rate of change of size. And from perspective projection, we had this equation. And then we cross multiplied. And we got that. And-- sorry, equals. And then we took the derivative to get-- because this is a product and this is a constant. So we get 0. Then-- and so there's a minus sign there. And that makes sense, because when dz/dt is positive, things are moving away. And in that case, the image of those things is shrinking-- negative ds/dt. All right. We were busy talking about perspective projection and how we can use it in various ways. And in particular, we were busy talking about vanishing points.
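The sign relation being corrected can be checked numerically: from perspective projection, image size s = f*S/Z for an object of real size S at distance Z, so s*Z = f*S is constant, and differentiating gives Z*(ds/dt) + s*(dZ/dt) = 0. A positive dZ/dt (moving away) forces a negative ds/dt (shrinking image). A sketch with made-up numbers:

```python
# Perspective projection: image size s = f * S / Z, so s * Z = f * S = constant.
f, S = 1.0, 2.0

def image_size(Z):
    return f * S / Z

Z, dZdt, dt = 10.0, 3.0, 1e-6                              # receding at 3 units/s
dsdt = (image_size(Z + dZdt * dt) - image_size(Z)) / dt    # numerical ds/dt

# Differentiating s*Z = const gives Z*ds/dt + s*dZ/dt = 0, hence the minus sign:
predicted = -image_size(Z) / Z * dZdt
print(dsdt, predicted)                                     # both negative, nearly equal
```

The finite-difference derivative agrees with the analytic one, and both are negative, matching the corrected sign.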
So, again, to explain what those are-- so here's our imaging system. There's an image plane. And there's a center of projection. And out in the world, we have a bundle of parallel rays-- vectors that are filling space. They're all parallel. And one of them is special because it goes through the center of projection. So out of all of these parallel rays-- and, of course, in much of human construction, we have parallel lines. Happens to be a fairly efficient way of doing things. And unless you're trying to be very artistic and build the Stata Center, you'll have a lot of parallel lines. So these could be edges of buildings, corners, edges of windows. Anyway, one of them is special in that it goes through the center of projection. And it hits the image plane at a particular point. And we can use that point as representative. That's a way we can talk about which set of parallel lines we're talking about. And that's called the vanishing point. And the reason is that, first of all, for this particular line, you're looking straight at that line. And you just see a point. And what about the other lines? Well, if you go far enough out on any of these parallel lines, their projection into the image will come closer and closer to the projection of this line. Why? Well, because there's a decrease in magnification from distance in the world to distance in the image. The further away we go, we have-- f over z. And so if z becomes very large, then the magnification becomes very small. And so any difference between these is reflected in the image by a smaller and smaller distance. So these other lines are actually also imaged-- as lines. And those lines have to go through the vanishing point. So as I move outward-- I guess I have the arrows reversed in the two cases. But as I go outward along these lines, I am coming closer and closer in the image to that vanishing point. So that's the idea of a vanishing point.
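The convergence just described is easy to verify numerically: under the projection (f*X/Z, f*Y/Z), points far out along any line with direction d approach the image point (f*d_x/d_z, f*d_y/d_z), no matter where the line starts. The numbers below are illustrative, not from the lecture.

```python
import numpy as np

f = 1.0

def project(p):
    # perspective projection onto the image plane
    X, Y, Z = p
    return np.array([f * X / Z, f * Y / Z])

d = np.array([1.0, 2.0, 4.0])                 # common direction of the parallel bundle
vanishing = f * d[:2] / d[2]                  # image of the one ray through the COP

p0 = np.array([5.0, -3.0, 10.0])              # one line out of the bundle
dists = [np.linalg.norm(project(p0 + t * d) - vanishing)
         for t in (1.0, 100.0, 10000.0)]
print(dists)                                  # shrinks toward the vanishing point

q0 = np.array([-2.0, 7.0, 20.0])              # a second, parallel line
dist_other = np.linalg.norm(project(q0 + 10000.0 * d) - vanishing)
print(dist_other)                             # converges to the SAME point
```

The distances fall roughly as 1/t, which is exactly the decrease in magnification f/z mentioned above.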
And we can exploit that because it allows us to determine relationships between coordinate systems. And it allows us to calibrate the camera. So those are two things we'll briefly discuss today. And we started yesterday. So, first, let's talk about the camera calibration problem. So here's my image plane. Here's the center of projection. And now suppose that we live in a world of rectangular objects. And so each rectangular object has sets of parallel lines-- three sets of parallel lines-- I guess four of each of them. And so they define a coordinate system. They define three directions. And I can pick parallel lines that go through the center of projection. So I-- OK, so out here is some object. And it has parallel lines, like that. And I pick, out of the family of parallel lines in that group, one that goes through the center of projection. And so let's call that direction x. So that's a coordinate system, which is just a parallel translation of the coordinate system on the object. Those are the-- so for the moment, we're ignoring translation. We're just going to worry about orientation. So then I project those three into the image plane, as we did over there. And, well, in my case, some of them may go out of the format. And often, the vanishing points are not in the part of the image that you're actually sensing. But that doesn't matter. We're only interested in their position in the image plane. So let's suppose that-- I don't know, we can call these a, b, and c. Or we can call them r1, r2, and r3. So if I have my picture of the cube, as I did last time, I can define three vanishing points just by extending those parallel lines. Now, how accurately do we know them? Well, that's another story. We'll need to know how accurately we can determine lines. But let's suppose for the moment that we've got these three vanishing points.
So we can imagine the little diagram in here of some rectangular object, highly distorted, because, actually, in practice, those vanishing points will tend to be further out. If I draw them this close in, I'm going to get a lot of, quote unquote, "perspective distortion." Of course, the term distortion is sort of odd, because basically this is just what perspective projection does. It's not like the effect of radial distortion in an image, which is an undesirable property that warps the image plane. OK, what do we do with that? Well, one thing we can do with that is try and figure out where the center of projection is. So that's where we were going last time. So we have a coordinate system that's in the imaging plane, in the image device. And we're trying to find the relationship between the coordinate system in the object and the coordinate system in the image plane. And then we're trying to find out where the center of projection is. So a couple of terms-- so one thing is the point that is perpendicularly below the center of projection. So we draw the perpendicular from the center of projection down here. And that's called the principal point. And as we indicated, you'd like that to be at the center of the array. But it won't be, very accurately. And to do accurate work, you need to know exactly where it is. So that's two numbers-- row and column in the image plane. And then the third number is the height of the center of projection above. And we call that f. And that is to remind us that in the lens system, that's the focal length. It's typically slightly larger than the focal length. Anyway, so there are three degrees of freedom. There are three numbers we need. And one way to think about it compactly is we're trying to find out where that point is. And that could be a difficult task to do by physically disassembling the camera. For a start, in a cell phone camera, those distances are very small. So you'd have to measure them very accurately.
And it probably won't be the same after you reassemble the camera. So how do we do this? Well, if we connect the central projection to the vanishing points in the image plane, we have three vectors, which are basically these three vectors up here. Well, try and draw it that way maybe. And so therefore, they're at right angles to each other. And where does that come from? Well, our assumption is it's a rectangular object. If it were some other object and we knew what the angles were, we could use that as well. It would be slightly less convenient. But so we now know that we're looking for a point up here such that if you stand there and you look down into the image plane, the directions to these vanishing points will be right angles to each other. So that's the task. We move around in this space to find that place. And so let's start in 2D. We already mentioned this last time, that the angle made by the diameter of a circle from points on its circumference is the right angle. So conversely, the locus of all the places you could be from which those will be at right angles to each other is a circle. This diagram will come up later when we're talking about photogrammetry and ambiguity in imaging of surface terrain because if you have two landmarks and they appear at right angles to each other, you might think that that would tell you where you are in the airplane. But actually, no, because you could be anywhere on this circle and you would see them at right angles to each other. OK, so that's the 2D version. And the 3D version, of course, is you just spin this around its axis. And you get a sphere. So that constraint on the position of the center of projection is just that it lies on a sphere. So what sphere? Well, for a start, it lies on the sphere where R1 is one end of the diameter and R2 is the other end of the diameter. So I can imagine that we draw a sphere with r1 and r2 as the diameter. And so it goes above and below the image plane. 
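The 2D fact being used here, that the two endpoints of a diameter subtend a right angle from any point on the circle (Thales' theorem), can be spot-checked numerically:

```python
import numpy as np

a = np.array([-1.0, 0.0])          # endpoints of a diameter
b = np.array([1.0, 0.0])           # of the unit circle
rng = np.random.default_rng(0)
dots = []
for theta in rng.uniform(0.0, 2.0 * np.pi, 5):
    p = np.array([np.cos(theta), np.sin(theta)])   # random point on the circle
    dots.append((a - p) @ (b - p))                 # dot product of the two view rays
print(dots)                                        # all essentially zero: right angles
```

Algebraically, (a - p).(b - p) = cos^2(theta) - 1 + sin^2(theta) = 0, so the dot product vanishes for every point on the circle; spinning the picture about the diameter gives the sphere used in the construction above.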
And the center of projection must lie on that. Now, of course, that's not enough to tell us where it is. So we have a second sphere. We connect, say, r2 to r3. And that's the diameter of a second sphere. And we intersect those two spheres. So what's the intersection of two spheres? It's a ring. So now we have a ring. That means we haven't really solved the problem quite, because we still have an infinite number of possibilities. So we use a third. We use a sphere with diameter there, r3 and r1. And now we intersect those. And we're left with how many solutions? Two. Right. OK, and so there's a remaining two-way ambiguity. And in this simple case, it's simply that when we're up here, we get the same right angle condition as if we were mirror-imaged below the image plane. Well, in our case, we know that there's a physical constraint, which is that the center of projection has to be above the image sensor, so the second solution can be eliminated. OK, so I want to just talk a little bit about some background. This is all very simple and basic. But it's good to remind ourselves of these things. So we'll start off talking about linear equations. And whenever we can, we're trying to reduce things to linear equations, because we know how to solve those. And so geometrically, what are they? Well, they're straight lines. And what equation does that correspond to? Well, there's this old chestnut, y equals mx plus c, which has some real problems. But that's one way of writing the equation for a straight line. Then we can just say, well, it's a linear equation like that, ax plus by plus c equals 0. And in this case-- so here we've got two parameters, m and c. And what is that? That tells us that the family of straight lines is a two parameter family. If we look at this, there are three parameters. How can that be? Well, because there's a scaling we can do. So true, they're three numbers. But actually, if you divide through by any one of them, you get the same line, and you're down to 2 degrees of freedom.
So it's a 2 degree of freedom world. And then we can go-- probably get the signs wrong, but-- make sure I get them the right way around: y cosine theta minus x sine theta equals rho. OK, so what's that? Well, that's another way of parameterizing a straight line, which we'll use quite a bit. And you can check that for theta equals 0, this is just y equals rho. So theta equals 0 is that line. And for theta equals pi over 2, we get x is minus rho. So that's this line-- and so on. And two parameters-- so here the world of lines is parameterized through theta and rho, instead of over here, where it's parameterized in terms of m and c. And why is that useful? Well, the thing up there is that if your line happens to be parallel to the y-axis, then m is infinite. So there's a singularity, which we avoid in this representation. Anyway, straight lines in 2D-- linear equations in x and y. But now we're dealing with 3D. So let's talk about that. And so now we're talking about planes. And, of course, planes are also represented by linear equations, except now in three unknowns. And one reason I introduced this notation is because we can generalize that to 3D. OK, so we can write the equation of the plane as that: n dot r equals rho. So it's linear-- I mean, if I expand out this dot product here, I get a linear equation in x, y, and z. And it's just convenient to write it that way. But if I wanted to, I could write that as a, b, c, d. So that makes it look like there are 4 degrees of freedom. But, of course, there aren't, because of the same scale thing. If I multiply a, b, c, and d by 2, I have the same plane. So, actually, it's three degrees of freedom. And I can also see that from this representation, because n is a vector-- so three numbers. But it's a unit vector. So there's one constraint. So there are only two degrees of freedom. And then I have rho. So 2 plus 1 is 3.
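The (theta, rho) parameterization can be exercised in a few lines. Note how the vertical line, which would need an infinite slope m in the y = mx + c form, causes no trouble here. The sign convention below is chosen to match the checks in the lecture: theta = 0 gives y = rho, and theta = pi/2 gives x = -rho.

```python
import numpy as np

def on_line(x, y, theta, rho, tol=1e-6):
    # Line in (theta, rho) form: y*cos(theta) - x*sin(theta) = rho
    return abs(y * np.cos(theta) - x * np.sin(theta) - rho) < tol

h_ok = on_line(123.0, 2.0, 0.0, 2.0)             # theta = 0: the horizontal line y = rho
v_ok = on_line(-2.0, 456.0, np.pi / 2, 2.0)      # theta = pi/2: the vertical line x = -rho
print(h_ok, v_ok)
```

Both checks pass with finite parameters, which is the whole point: every line in the plane, including vertical ones, gets a (theta, rho) pair.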
So the family of planes in the 3D world is three dimensional, which is an interesting duality, because it means that there's a mapping between planes and points in 3D, just as there's a mapping between lines and points in the 2D case. So I can either plot theta-rho or m-c. Oh, and by the way, if I like that representation, then you can now see the similarity, where in the 2D case, this vector here has an x component that-- oh, it becomes more negative as theta increases. So that's the minus sine theta term. And the y component is large when theta is 0 and then gets smaller. And that's the cosine component. So you'll see that this equation is the same as n dot r is rho. And, of course, that's the same equation we have over there, just in 3D. OK, so that's all pretty obvious. Now, back to our camera calibration problem. So one way to approach this is to think of this as the intersection of three spheres. And so let's talk about that a little bit. So I mentioned last time this problem of multilateration, which comes up in robotics, where, for example, we have the distances to a number of Wi-Fi access points. And let's say we have three. And you'd think that should allow you to compute where you are. And it does. So let's solve that problem first. So we're trying to intersect three spheres. So how can we talk about a sphere? So here's a sphere-- the i-th sphere. We'll have three of them. So rather than write three equations, I'll write that: the magnitude of r minus r i equals rho i. It's just that the magnitude of that vector difference is the radius of that sphere. And so we'll get three equations like this and try and combine them. Now, if I want to write this out, I can just write this, because the definition of that magnitude is just the square root of the dot product of those two vectors. OK, then I can multiply this out to get that equation: r dot r minus 2 r dot r i plus r i dot r i equals rho i squared. And so for every sphere, I'm going to get a second order equation like this.
And we mentioned last time that, therefore, by Bezout's theorem, we may have as many as eight solutions, unless the equations have special structure. Well, we can exploit that by considering a second sphere-- the same equation, just j instead of i. And by subtracting them, we can get rid of that annoying second order term. And so we end up with a linear equation. And that's always preferable. So we get 2 r dot (r j minus r i) equals rho i squared minus rho j squared plus r j dot r j minus r i dot r i. OK, so these things on the right-hand side are just constants. They're the distances we measured and the distances from the origin of the two reference points, the centers of those spheres. And what's important is that on the left-hand side, I've got r dot something. Well, that's a linear equation in the components of r. And any time I can reduce quadratics to linears, I'm happy. So OK, now we've got it for one pair of spheres. But we actually have more. So we can repeat this exercise for the other combinations. So when I put it all together-- OK, and then I have all of these constant terms, right? Because the transpose says I've taken a column vector and turned it into a row vector. So the first row of the matrix is this difference, r2 minus r1. And multiplying this matrix by this vector is taking the dot product of this difference and that vector. So that just corresponds to what I've got over here. Well, there's a factor of 2. I forgot a factor of 2. Do that. OK, and then, similarly, in the second row, I've taken a column vector, turned it into a row vector. And so the second term in the result of multiplying this matrix by this vector is to take the dot product of this vector with that vector. So that's the same equation, just with i and j changed. And I do that a third time. And I get that. And hurrah, I've got three linear equations and three unknowns. What could be better? OK, there are some people shaking their heads. So why is that wonderful, if it's true?
Well, because I know how to solve linear equations. And there's only one answer-- et cetera, et cetera. But wait, we said there were two answers. So something's wrong. So what's wrong with this? Why will this fail? So when do linear equations not have a unique solution? When there's redundancy, when the rows in the matrix are not independent, when the matrix is singular when the determinant is 0-- all different ways of saying the same thing. Now, how can I be sure that's the problem? Well, if I just add up these three rows, what do I get? I got zero. So the three rows are indeed not linearly independent. The third row doesn't tell me anything new because I could have got it from the first two just by-- if I add or subtract. If I take this one and subtract that one, I get this one. And the same with the right-hand side. So it's all consistent. So, yes, this statement is true but it's not giving me a solution because this is a singular matrix. So we always have to check that not only do we have enough equations and unknowns, but that we can actually solve the problem. Yeah? Right. OK, so his statement was that we think there should be two solutions. But if the matrix is singular, there's actually an infinite number of solutions I can construct along a line, any number of solutions I like. So what happened? What's going on? Well, what we did was we manipulated the equations. And we got some more equations. But we threw away the original equations. And that may or may not be legitimate. In this case, we've lost-- something that satisfies these three equations does not necessarily satisfy these three second order equations. So satisfying this equation doesn't guarantee that we actually have a solution. So this is another thing, another cautionary tale. And you will see this sometimes in papers. Great, we manipulate equations. We get some equations we can solve. And then, oh, actually, we're not only getting the solutions we're supposed to get. 
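Here is the cautionary tale in numbers, a sketch with made-up sphere centers and a known ground-truth position: using all three pairwise-difference equations gives a singular matrix (the rows sum to zero), while the standard multilateration fix of keeping two linear equations plus one of the original quadratics pins the answer down, up to the sign of z.

```python
import numpy as np

r1 = np.array([0.0, 0.0, 0.0])                   # made-up sphere centers
r2 = np.array([4.0, 0.0, 0.0])
r3 = np.array([0.0, 4.0, 0.0])
r_true = np.array([1.0, 2.0, 2.0])               # ground truth, to generate distances
rho = [np.linalg.norm(r_true - ri) for ri in (r1, r2, r3)]

# All three pairwise differences: the rows sum to zero, so the matrix is singular.
rows = np.array([2 * (r2 - r1), 2 * (r3 - r2), 2 * (r1 - r3)])
det_val = np.linalg.det(rows)
print(det_val)                                   # 0: the 3x3 system cannot be solved

# Instead: two independent linear equations (differences against sphere 1)...
A = np.array([2 * (r2 - r1), 2 * (r3 - r1)])
b = np.array([rho[0]**2 - rho[1]**2 + r2 @ r2 - r1 @ r1,
              rho[0]**2 - rho[2]**2 + r3 @ r3 - r1 @ r1])
x, y = np.linalg.solve(A[:, :2], b)              # z-columns are zero here (coplanar centers)
# ...plus one of the original quadratics, which fixes z up to sign:
z = np.sqrt(rho[0]**2 - (x - r1[0])**2 - (y - r1[1])**2)
print(x, y, z)                                   # matches r_true, up to the sign of z
```

The leftover two-way ambiguity (plus or minus z) is exactly the mirror-image ambiguity discussed for the calibration problem.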
But we're getting other stuff. And so in this case, it's perfectly legitimate to derive these equations. But you can't then throw away the original equations. And in particular, in this case, we can use two of them. But we need to keep one of the quadratic ones. So we can keep, for example, the first two equations over there and keep the third one of these. And that's perfectly legitimate. Now, we've got two linear equations, one quadratic, and by Bezout's theorem, we would get 1 times 1 times 2 solutions maximum, i.e. 2. And that fits with what we're expecting. OK, so let's deal with this in another way. So what we've got is constraints like this. And we can write them out the way we did over there and then subtract them. And what we're going to end up with is (r minus r2) dot (r3 minus r1) equals 0. OK, so all I've done here is I've subtracted these two and reorganized the terms a little bit. So what does this tell me? Well, that product being equal to 0 means that the two vectors are perpendicular. So that's one important thing, that r minus r2 is perpendicular to r3 minus r1. The other thing we can notice is that the plane goes through r2. So first of all, this is a linear equation. So by our discussion over there, it represents a plane in 3D, and it passes through r2, because if r equals r2, this is 0. And so that satisfies this equation. OK, so we've got two important properties. And now if we go back to our vanishing points in the image plane-- I'll draw it again, so-- r1. So what this is saying is that this particular equation is a plane that is perpendicular to r3 minus r1-- so r3 minus r1. That's this vector. So this plane has this as a normal, a perpendicular. And so for a start, it tells you that that plane is perpendicular to the image plane. So the solutions are on that plane that's perpendicular to the image plane. But which of these planes is it?
Because just saying that this is the normal, perpendicular to the plane-- we could have lots and lots of planes. Well, that's the second statement. It passes through r2. So we're popping back and forth between algebra and geometry, because you can do all this just algebraically. But you don't really get much insight. And it's much more fun to look at it geometrically. Now, of course, I picked this particular combination. I could pick two other combinations. So what do I get from those? Well, one of them is going to give me a plane that is perpendicular to r3 minus r2 and passes through r1. So that's that one. And then there's a third one, which is going to be a plane perpendicular to r1 minus r2 and passing through r3. And so, ta-da, I've got a solution. So what is that called, by the way? Triangulation. OK. Well, so with triangles, there are a lot of special points that have names. And I don't know-- there are at least six. I'm sure people have come up with other ones. For example, there's the circumcenter, which is the center of the circumscribed circle. Then there's the incenter, which is the center of an inscribed circle. Then there's the centroid, which is the average of the three sets of coordinates. And this one is the orthocenter. I don't know-- there are a few more, intersections of bisectors and whatever. Anyway, this is the one we want. And it's obtainable easily by solving linear equations. So we're partly done. We're not quite done, because now we know where it is in the image plane. So this is the principal point, the one we were talking about, where the perpendicular we dropped from the center of projection meets the image plane. That's where that is. What are we still missing? Well, we're missing f. So we know that the solution is along a line that's coming perpendicularly out of the image plane. Why? Well, because we're intersecting these planes that are each individually perpendicular to the image plane. So they'll produce a line.
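The orthocenter construction just described, solving two of the altitude constraints (p - v_k).(v_j - v_i) = 0 restricted to the image plane, takes only a couple of lines. The vanishing-point coordinates below are made up for illustration.

```python
import numpy as np

def principal_point(v1, v2, v3):
    # Two altitude constraints: (p - v2).(v3 - v1) = 0 and (p - v1).(v3 - v2) = 0,
    # rearranged into a 2x2 linear system A p = b.
    A = np.array([v3 - v1, v3 - v2])
    b = np.array([v2 @ (v3 - v1), v1 @ (v3 - v2)])
    return np.linalg.solve(A, b)

v1 = np.array([6.0, 0.0])        # illustrative vanishing points in the image plane
v2 = np.array([-2.0, 0.0])
v3 = np.array([0.0, 5.0])
p = principal_point(v1, v2, v3)
third_altitude = (p - v3) @ (v2 - v1)   # the third constraint comes along for free
print(p, third_altitude)
```

The third altitude constraint is satisfied automatically, which mirrors the earlier observation that the three rows were not independent: two altitudes suffice to locate the orthocenter.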
And by the way, of course, I don't need all three planes. I just need to intersect two. Yeah, OK. Let's start with r. So the unknown center of projection is out here. And this is the perpendicular I dropped from r into the image plane. And the equations I'm solving are these equations. And the thing I'm exploiting is that all of these have a zero z component. Why is that? Well, because in terms of the coordinate system I'm using, which is row and column in the image, the height is 0 in the image plane. The center of projection is out here at some non-zero z. But these points are actually in the image plane, which has height 0 relative to the image plane. So none of these vectors have a third component. And so all of these equations here, I can think of, really, as equations in two unknowns, just x and y. And I only need two of those to solve for x and y. But that's just algebra that represents this geometric insight. So one of the things that-- what do you do with this? Well, camera calibration. But you can also do some fun things with it otherwise. So, for example, if you take an image and you find this point, and you find that it's not in the middle of the image-- like, here's your picture and you do this construction and you find the vanishing points are, I don't know, outside the image. And then you discover that, oh, the center of projection is, I don't know, here-- the principal point. Well, then whoever took this picture either had a very funny camera or they cropped you out of the picture. Very commonly done when relationships break up. You take this great picture where you look really good. Then you cut out the other person. And so that's one of the kind of lighthearted-- or maybe not so lighthearted, in that case-- uses for this technology. Another one is to try and question whether an image is an original or has been modified. And this came up in, did Admiral Peary get to the North Pole or not? And his proof was partly in the form of photographs.
And what you can do-- and we know what camera he used. So we know the focal length, et cetera. And, well, we also know what altitude the sun should be at that time of year. So you can do some photogrammetry and discover that, well, for example, one of the important pictures has been cropped. So this is the kind of technology that will let you do that. And I guess the Photogrammetric Association, which is into this sort of stuff, picked that up. And they published a book that kind of questions his claim to have reached the North Pole. I'm interested in that partly because there are wonderful hoaxes in exploration, a lot of which are easy to understand. Someone spent years raising money, years finding people to work with them. They get within 150 miles of the North Pole and there's no one around. Are they going to come back and say, no, I didn't make it? Well, if they're really, really honest, that's what they say. But if they sent back the only other guy that knew how to operate a sextant to measure the altitude of the sun, they might just be tempted to say, yeah, I got there. Anyway, so doing this kind of vanishing point analysis can sometimes alert you to problems with image manipulation. But we're using it for calibration instead. So we can write down the two linear equations for that point. But it's not-- I mean, you know how to solve two linear equations. So we still have to find f. But that's OK because we now know x and y. And all we need to know is the third component of that vector. And we end up with a quadratic. And we know how to solve a quadratic. So I won't do that. But here's another interesting thing. I kept on saying that in typical cases, the vanishing points will be outside the frame. And so one thing we might want to do is, given this positioning of the vanishing point, can we say something very quickly about the f, the focal length? Well, here's a-- let's take a really simple case. 
So here, the vanishing points-- and they happen to be equally spaced in the image plane. We just turned the cube so that we're looking at the three faces equally. And let's suppose that this distance is, I don't know, v for vanishing point. And then the question is, what's f? The general approach to this involves plugging in the x and y we get from this and then solving a quadratic in f. But maybe we can do this without all of that because it's a special case. We should be able to figure this out. So we can think of this in terms of the corner of a coordinate system. OK, so we have 1, 0, 0; 0, 1, 0; and 0, 0, 1. And we need to know the distance from the origin to this plane. And, of course, it'll vary depending on where we are. But somewhere out here is the point that is closest to the origin. And it's the one we indicated by distance rho from the origin. So the question is, how far is that point from the origin? Well, presumably it's a point where-- what's another variable name? A, a, a. It's symmetric. So you would imagine that that point should have the same x, y, and z coordinates. And so the dot product of that with the unit normal to this plane should be 1. So the unit normal to-- the perpendicular to this plane comes symmetrically straight out equally in x, y, and z. And to make it a unit vector, we have to divide by the square root of 3, right? Because we've got 1, 1, 1. And so I think this is that A is 1 over square root of 3 because it's 3 times-- yeah, OK. So this is also what we called rho before. OK, so that's the distance from the origin to this point. And that's going to be our f. But what is v? Well, in this diagram, this is v. So in this case, v is square root of 2. And f is 1 over square root of 3. So there's a relationship, which is that v is square root of 6 times f, or f is v over square root of 6. So in this special case, we can easily calculate what the focal length or the principal distance is.
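A numerical check of both results (my own sketch in Python with NumPy). The quadratic for f that the lecture alludes to comes, I believe, from the fact that the 3D rays from the center of projection (p, f) to vanishing points of orthogonal directions are themselves orthogonal, so (vi - p).(vj - p) + f^2 = 0; the symmetric cube case then reproduces v = sqrt(6) f:

```python
import numpy as np

def focal_length(vi, vj, p):
    """f from two vanishing points of orthogonal directions and the
    principal point p, using (vi - p).(vj - p) + f^2 = 0."""
    d = -(np.asarray(vi, dtype=float) - p) @ (np.asarray(vj, dtype=float) - p)
    return np.sqrt(d)

# Special case: three equally spaced vanishing points, i.e. an equilateral
# triangle of side v centered on the principal point p.
v = 2.0
p = np.zeros(2)
r = v / np.sqrt(3.0)                        # centroid-to-vertex distance
angles = np.deg2rad([90.0, 210.0, 330.0])
vps = [p + r * np.array([np.cos(a), np.sin(a)]) for a in angles]

f = focal_length(vps[0], vps[1], p)
print(v / f)                                # close to sqrt(6), about 2.449
```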
And it's substantially larger than-- sorry, I got this the wrong way around. F should be-- no, it is right. And so v will typically be substantially larger than f, the principal distance. And so often the vanishing points will be outside the frame that we've actually captured an image of. And this is a special case. We can solve it in the general case. It's just algebra. So that's application of vanishing points to camera calibration. Now I want to talk about another application. I mentioned the case where we just slap a cell phone camera onto a car's window. And we want to relate the images we're seeing to some three-dimensional world coordinate system just by identifying features in the image. So what sorts of things can we see in the image? Well, if it's on a straight road, we'll see the curb, and we'll see road markings. And those are supposedly parallel. And so they will produce a vanishing point that we can detect in the image. If we're lucky, there's also a horizon. So we get a second constraint out of that. And what we want to know is how is this camera oriented relative to the road and relative to gravity. So that's the kind of problem we're trying to solve. And I guess we got rid of that also. So this is really about orientation. So the transformation between a world coordinate system that's lined up with the road and gravity to the camera coordinate system is translation-- there's some shift-- and rotation. And we're going to focus here on just recovering the rotation. So where do we start? Well, the same diagram. Let's suppose we're lucky and we actually have all three vanishing points. In the application I mentioned, we don't. But let's take the easy case, where we have all three vanishing points. So we got v1, v2, v3, r1, r2, r3. And now that we have a calibrated camera, we just connect those up to the center of projection. We now know where the center of projection is. That is, we've got the three numbers.
We've got two for the principal point and one for the principal distance-- or if you like, just the coordinates of the center of projection. I don't know. Let's call it p or something. We've called it r over here. OK, then we know that the edges of this rectangular object that we're looking at to get the vanishing points have directions that are just defined by these lines. That is-- let's call this the x-axis. And then there's another one, which will be the y-axis. And here we go. So they look a little bit funny because of the way I picked the vanishing points. But they're supposed to be at right angles to each other. And, of course, the first thing I can do is check. I've got some algorithm that finds the vanishing points. I have calibrated the camera, supposedly. I connect these up. And I take the dot products. And they better be small. They're unlikely to be exactly 0. So that's the first thing, to check that they are, in fact, at right angles to each other. And then I have now-- what do I have? I have the unit vectors in the object coordinate system measured in the camera coordinate system. So my definition of x, y, and z is still in this coordinate system over here. So I'll do this-- T minus r1. And, of course, once I know that they're parallel and they're unit vectors, I can just compute them by normalizing-- et cetera. So, important to understand that those x, y, and z unit vectors are in the camera coordinate system. OK, so now suppose I have some point in the object. And let's see, did I call these primes? And it's going to be-- yeah, they're vectors. So that's in my camera coordinate system. And what is that vector in the original object coordinate system? So everything now is measured in the camera coordinate system. Oh, I shouldn't be-- OK. So it's alpha in the direction of the x-axis and beta in the direction of the y-axis and gamma in the direction of the z-axis. So what is the vector in the object coordinate system? Where are its components? 
Its x component is alpha and y component is beta. So this vector here that I've written in the camera coordinate system corresponds to this vector in the object coordinate system. So, I mean, that's the definition. We have three axes in that world. And we express the position of a point in terms of a weighted sum of those three directions. And so in my camera coordinate system, this is what it looks like. In the object-- the rectangular block coordinate system, it looks like that. So what's the transformation? So I got-- let's see. R is-- so let's see. The T, again, means I'm transposing a column vector into a row vector. So the first row of that matrix is the unit vector in the x direction laid out as a row. And so the first component of r over here is the dot product of this x unit vector and r prime-- this guy. And well, that's what you see in that equation. The second component is the product of y with this guy, and so on. So that's my transformation matrix between the two coordinate systems. We'll be doing more of this. So don't panic. So that's a very important matrix. And it represents the orientation of one coordinate system relative to the other. And I claim this matrix is orthonormal. What does that mean? That means that the rows are perpendicular to each other. If you take the dot product of two of the rows, you'll get 0. Take the dot product of those two rows, you'll get 0. And by construction, that's true because we made a unit vector. We're assuming that they represent the axes of the coordinate system, so they're perpendicular. And then it's also the case that each row has magnitude 1. And we did that by construction because we constructed a unit vector. So amongst other things, we're now led to understand that rotation is represented by an orthonormal matrix, and quite a few photogrammetric tasks involve finding that matrix.
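The construction above, where the rows of the rotation matrix are the unit directions from the center of projection toward the three vanishing points, can be sketched as follows (a Python/NumPy sketch of my own; the function name and argument layout are not from the lecture):

```python
import numpy as np

def rotation_from_vanishing_points(vps, p, f):
    """Rows are unit vectors from the center of projection (p, f) toward
    the three vanishing points, i.e. the object axes expressed in the
    camera coordinate system."""
    rows = []
    for v in vps:
        d = np.array([v[0] - p[0], v[1] - p[1], f])  # ray COP -> vanishing point
        rows.append(d / np.linalg.norm(d))
    return np.array(rows)
```

Multiplying the result by its transpose should give the identity, which doubles as the sanity check on the detected vanishing points that the lecture mentions: the pairwise dot products had better be small.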
And in this case, we were able to do it rather straightforwardly because we explicitly could determine the direction of the coordinate axes in the object written out in terms of the coordinates of the camera. And, of course, if we want to, we can invert this. So if we've got-- maybe we need to go the other direction. Maybe we know the coordinate r and we want to find the coordinate r prime. Well, it turns out that for rotation matrices, the inverse is actually just the transpose. So that follows from the property of orthonormality. So in this case, going back and forth between the two coordinate systems is particularly easy. On to something else, finally. So we spent quite a bit of time talking about perspective projection and all of the stuff that's connected with it, including the derivatives, which gave us the motion field. And then we talked a little bit about vanishing points and exploiting them for camera calibration. And let's go back now to the other part of the puzzle, which is brightness and what we can do with it. How can we exploit measurements of brightness? We already talked about foreshortening. So here's a small facet of the surface of an object. And we can specify its orientation by talking about the unit normal. And here is an observer. And let's call this direction the viewing direction. And there's also illumination. In the case where there's just one source of illumination, we'll call that s for source. And, well, we can draw some angles here. So this is theta i for the incident angle, and this is theta e for the emitted angle. And, well, that's not enough. We need another angle to fully describe the situation, because if we just specify the incident and the emergent angle, we can spin one of these around. So we need some azimuth angle. And we'll talk about this later.
But for the moment, just imagine we take this ray from the light source and we projected down into the plane, and we take this ray to the view end, we project it down into the plane. And then we measure this azimuth angle. Now, the observed brightness is going to depend on those parameters. It's going to depend on the material of the surface, one. And it's going to depend on the light source. And it's going to depend on those angles. An extreme case is a mirror, where there's only one direction where you see anything. The reflected ray goes off in a particular direction. And unless your camera or your eye happens to be there, it's dark. But the more interesting cases, where we have a piece of yellow note paper. And pretty much any direction I look at it from, it has the same brightness. And we'll talk about that more later. We're going to greatly simplify this story right away by talking about the illumination. And we've talked about foreshortening. And so we know that if we have a patch of a certain area, it's going to get less and less power from the light source the more it's tilted relative to the light source direction. And the foreshortening makes the apparent area be the true area times the cosine of that angle. And so just in terms of power getting in, that's the magic number. Well, if cosine theta i is less than 0, that's not true because that would mean you get negative power. So what does cosine theta i being negative mean? That means it's greater than-- the angle is greater than pi over 2. And so that means that you've turned the surface to face away from the sun. So we should really be saying max of cosine theta and 0. But that's so tedious, we're just going to implicitly say that. And then the surface may or may not reflect the light. It may absorb some of it. And it may reflect differently in different directions. But let's take a really simple case. Let's start off with that-- not to say that we're going to be stuck with that. 
OK, so this is a model of a matte surface that's very unmirror-like. It reflects light in various directions. And it has the special property that no matter what direction you look at it from, it has the same brightness. And we'll talk about exactly how we measure brightness. So for the moment, all we're going to make use of is, let's imagine there's a surface where the brightness only depends on how much power is going in. And therefore, it depends on cosine theta i. And so how can we exploit that? Well, we make a brightness measurement. And it's going to be proportional to that. Let's not worry about the proportionality factor for the moment. So the brightness is proportional to that dot product in this diagram between the normal and the source direction, right? Because n dot s is cosine theta i. And so does that allow me to determine the surface orientation? So can I solve this for-- see, where I'm going with this is I'd like to recover the shape of the surface. And one way I can do that is to look at every little facet and figure out its surface orientation, which is a little different from-- you might think that, well, we'll just construct the depth map. We'll find some way of estimating the depth at every point in the image. But that's something we can't do from a monocular image. We'd like to be able to recover shape from monocular images as well. So we're trying to recover n. Can we do it from this? Well, this is just one constraint. And the surface normal has 2 degrees of freedom, right? It's a unit vector, so 3 minus 1 is 2. So we've got two unknowns. And we only have one equation. So that doesn't work. But let's imagine that we're in an industrial situation. We have control of light sources and so on. And we can take a second image with a second position for the light source. And you saw that in the slides that we did at the beginning. Well, that's already looking better because now we have 2 constraints and 2 degrees of freedom.
We have a match between the number of unknowns and a number of equations. But they're not linear. And so where's the nonlinearity come from? Well, the thing is that if we're solving for n, we need to enforce the constraint that it's a unit vector. And one way of thinking about it is we're trying to solve these three equations for n. And these are nice and linear. And this one isn't. So by Bezout's theorem, there may be two solutions. Another way to see that is suppose that you measure the brightness, and therefore you can estimate cosine theta i. And therefore you can estimate theta i. What do we know? Well, let's suppose we know the directions of the light source. And you know this is the direction to the light source. And then I know that there's a certain angle between the surface normal and the light source. Well, that doesn't give me the answer because it could be-- I can rotate this around s. And so I actually have a whole cone of possible directions. So this is s. And this is theta i. And my normal could be any one of these on that cone. Now, if I have a second measurement, second light source, that gives me a different cone of directions. And if the normal has to be on both of them, well that means that I look for the intersection of those cones. And there will be two. And, of course, if I intersect two cones, I get two lines. But I have the additional constraint that it has to be unit normal. So it's on a unit sphere. So I then take those two lines intersect and them with a unit sphere. And I get two points OK, well there could be bad things happening. If your measurements are wrong, these two cones may not even intersect. But we'll ignore that for the moment. And we could write out the algebra for that. It involves solving a quadratic. We know how to solve quadratics. But we can actually turn this into a linear equation problem and make it somewhat more interesting. 
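The cone intersection can be carried out directly. A sketch (my own Python/NumPy, not from the lecture): write n = a s1 + b s2 + c (s1 x s2). The two brightness equations then fix a and b, because s1 x s2 is perpendicular to both sources, and the unit-length constraint gives a quadratic in c with, generically, two roots, the two intersections of the cones:

```python
import numpy as np

def two_source_normals(s1, s2, E1, E2):
    """All unit normals n with n.s1 = E1 and n.s2 = E2 (0, 1, or 2 of them)."""
    s1 = np.asarray(s1, dtype=float)
    s2 = np.asarray(s2, dtype=float)
    # Linear part: solve the 2x2 Gram system for a and b.
    G = np.array([[s1 @ s1, s1 @ s2],
                  [s1 @ s2, s2 @ s2]])
    a, b = np.linalg.solve(G, np.array([E1, E2]))
    m = a * s1 + b * s2
    t = np.cross(s1, s2)
    # Unit-length constraint: |m|^2 + c^2 |t|^2 = 1 (m and t are orthogonal).
    c2 = (1.0 - m @ m) / (t @ t)
    if c2 < 0:
        return []            # the cones don't intersect (e.g. noisy measurements)
    c = np.sqrt(c2)
    return [m + c * t, m - c * t]
```

With s1 = (1, 0, 1)/sqrt(2), s2 = (0, 1, 1)/sqrt(2) and a true normal of (0, 0, 1), both measurements are 1/sqrt(2), and the function returns (0, 0, 1) along with the second cone intersection (2/3, 2/3, 1/3).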
I mentioned that, amongst other things, one of the things that will affect how bright something is in the image plane is the reflectance of the surface. And reflectance, unfortunately, is a very fuzzy term that means something else for everyone. So I'm going to talk about something called albedo. And so that's a quantity that would be between 0 and 1. And it's simply telling you how reflective the surface is, how much of the energy going in comes out again versus how much is absorbed and lost. And I put albedo in quotation marks because it has a very well-defined meaning in some technical areas, such as astronomy. It means something slightly different in the case of astronomy. It means you've got a spherical planet. And what's the ratio of power out over power in? And so that means it's some average over lots of different directions. Here, I'm talking about just a particular orientation. But it's a very simple concept. Just a piece of white paper presumably has an albedo that's close to 1. And black coal has an albedo that's, like, 0.1. And can you have an albedo greater than 1? No, right? Otherwise you'd be violating some law of physics. But you can get super luminous surfaces by cheating. So if you have fluorescent spray paint, for example, or if you starched the collar of your white shirt, which I know you all do, then when you illuminate it with sunlight, they are brighter than bright. They shouldn't be as bright as they are. And that's because they're converting ultraviolet into visible. So they're not violating the second law of thermodynamics. They're converting energy outside the spectrum into energy in the spectrum. And in that case, the total visible power out can be larger than the total visible power in. And so a surface with that kind of property-- if you put starch in your white shirts, they will appear brighter than a 99.99% reflective magnesium sulfate powder, which is one of the standards that people use.
But generally speaking, 0 less than rho less than 1. And so what does that do? Well, now we have a slightly different situation, where E1 is rho n dot s1. And E2 is rho n dot s2. And so now, actually, I also have 3 degrees of freedom in terms of unknowns because I've got the unit vector-- that's 2. And I want to recover the albedo. That makes it 3. Well, that means I can't do it with two equations. So let me add a third one. OK, so that seems to be a match. I have three degrees of freedom, three unknowns. And I have three measurements. And now I'm going to define n to be this quantity. So I'm going to define the three-vector where all three components are actually independent variables. And that vector will encapsulate the things that I want to know-- rho and the unit vector n. And, obviously, I can recover very easily-- if I can find this n, I can easily find rho because it's just the magnitude of n. And I can easily find the unit vector by just dividing. And the reason I do it is because this is going to be more convenient. So then I have this matrix of transposes. Very similar to what we had earlier today, except this time it's going to work. So what's going on here? Well, the first result of multiplying this matrix by this vector is the product of the first row with this column vector. And so that's this. That's E1 and so on. So this is just a compact way of writing those three equations. And I can then write-- I can write the solution that way. And I'm done. And from that n, I can then recover rho and the unit vector surface normal. So a number of things to discuss about that. One of them is that this is assuming that S is invertible. I can easily construct cases where that's not the case. For example, if s3 is actually just the same as s2, then the determinant of this matrix-- two of the rows of the matrix are the same. The determinant would be 0. I can't invert it. And it makes sense. I'm not getting new information.
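In code, the per-pixel recovery is just a 3x3 solve. A minimal sketch (Python/NumPy, my own names; S has the three light-source vectors as rows, matching the matrix of transposes described above):

```python
import numpy as np

def photometric_stereo(S, E):
    """Given S (3x3, rows are light-source vectors s1, s2, s3) and the three
    brightness measurements E at one pixel, recover albedo and unit normal
    from n = S^{-1} E, rho = |n|, n_hat = n / rho."""
    n = np.linalg.solve(np.asarray(S, dtype=float), np.asarray(E, dtype=float))
    rho = np.linalg.norm(n)
    return rho, n / rho
```

For a whole image you would precompute the inverse of S once, as the lecture suggests, and apply it at every pixel, e.g. `np.einsum('ij,hwj->hwi', np.linalg.inv(S), E_img)` for an H x W x 3 stack of brightness images.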
If s3 is the-- if my third light source position is the same as the second light source position, I measure the same brightness. So it's intuitively clear that that's not going to work. And then you can think of other things. Suppose that s3 is half of s1 plus half of s2. That's a little different because it's saying, OK, there's one light source here. There's one light source here. And now I'm going to put one right in the middle between them. That gives me a third measurement in the image. Well, it turns out that you can predict it from the other two, because if E3 is n dot s3 and s3 is 1/2 s1 plus 1/2 of s2, you can calculate what this is. And so they can't be coplanar. So the light sources have to be spread out in your firmament. You can't put them in a plane. And this has important implications for astronomers because the orbits of the planets and our moon are pretty much in the same plane. And therefore, as the sun orbits around the Earth, you don't get different pictures. So that's one thing. This isn't going to work unless we pick three independent light source directions. Another thing is that we can pre-compute this. If we know where the light sources are-- say you're doing some industrial inspection, you control where the light sources are. You just compute S, take the inverse, and you store it. And then at every pixel, you have three frames, taken as three exposures. And at every pixel, you simply form this vector and multiply by the pre-computed S inverse. And there's your answer. I mean, it's incredibly simple and very little computation, very efficient. Then another thing is that it's sort of fortuitous that we need three to do this. We need three light sources to do this. And one reason that's interesting is because cameras typically have three sets of sensors, RGB. And so we might be able to exploit that.
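The coplanar failure case is easy to demonstrate numerically (my own sketch; any third source in the plane spanned by the first two makes the source matrix singular):

```python
import numpy as np

s1 = np.array([1.0, 0.0, 1.0])
s2 = np.array([0.0, 1.0, 1.0])
s3 = 0.5 * s1 + 0.5 * s2      # third source in the plane of the first two
S = np.stack([s1, s2, s3])

# E3 is predictable from E1 and E2, so S carries no new information
# and cannot be inverted.
print(np.linalg.det(S))        # 0 up to rounding error
```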
So one thing we could do is instead of having three light sources that come on sequentially, we could use three colored light sources and then separate out from R, G, and B. And there's a little bit more work to do because a particular color light source is going to not just excite R or G or B but some linear combination. But the important thing is we'll have three different linear combinations. And then we can use our magic matrix algebra to deal with that. So that's a possibility. And that is in a way more convenient. And it's faster because we don't need to turn light sources on and off, although turning LEDs on and off is pretty fast. But we can just illuminate them with colored lights. The only problem with that is if the object is colored because then the objects-- unless it's uniformly colored, if different parts of it have different colors, that's going to confuse this algorithm and make it believe that the surface orientation is something that it's not. So that's one quick look at what we're going to do in terms of measuring brightness. This is a particularly simple case. And it's a case that's, in a way, contrived because we're saying that we controlling the light sources, which has application to industrial use. In fact, if you look at some recent patents from Cognex, they've decided to use something like this. But it doesn't really mesh with understanding biological vision systems, except in the very, very deep ocean where anglerfish have light sources that they dangle in front of them to attract prey. We don't typically find animals illuminating the environment with different colored light sources in order to figure out the shapes of their prey or something like that. But from the point of view of recovering information from monocular images in an industrial setting, this is an interesting approach. Now, do real surfaces follow this simple rule, cosine theta i? No. So you can't use this directly. But you can build a look-up table. 
So, in fact, you don't even have to model mathematically how the surface reflects light. You just calibrate it using a shape that you know. So, for example, suppose you take a sphere. For every point on the image of the sphere, you can calculate what the surface orientation is, what n is there. And then you measure brightness in three images. And you can build a table. Of course, the table is going the wrong way. The table is going from surface orientation to E1, E2, and E3. What you really want is the inverse. Measuring E1, E2, E3, you want to know what the orientation is. But that's just numerical inversion of a table. So that's something we can do. But we'll go in different directions next time, starting off with a different projection. So we know that real cameras perform perspective projection. And we spent a lot of time dealing with that. But in some cases, we can approximate it by a projection that's much easier to handle. We call that orthographic projection. And the condition for that is that the range in depth is very small compared to the depth itself. And in that case, you can kind of assume that the depth is constant. And if z is constant, then f/z is constant. And there's a constant magnification. And in that case, we can use a simplification called orthographic projection, which we'll exploit in our efforts to reconstruct surfaces from images. Any questions? OK.
MIT_6801_Machine_Vision_Fall_2020
Lecture_22_Exterior_Orientation_Recovering_Position_Orientation_Bundle_Adjustment_Object_Shape.txt
PROFESSOR: End of the photogrammetry section. We'll just briefly talk about exterior orientation, which is the fourth of the photogrammetric subjects we are talking about. So what's this about? This is best illustrated by thinking about a drone flying above some terrain of which we have a detailed model. So we know where points are in some global coordinate system. And we have a camera, perspective projection. We get images of those three points. And the question is, where are we? So p1, p2, p3 are known. Let's assume that the center of projection is p0. That's what we want to find. And we also would like to find the attitude of the plane in the world. So it's the same old thing, rotation plus translation, except in this case, we have a mix of 2D and 3D information. The coordinates of the points in the world are given in 3D. So we have a terrain model. The points corresponding in the image are in 2D. So let's see. One question right away is, how many correspondences do we need? So we know that we're looking for six degrees of freedom. And so one question is, how much does each image point contribute? How much constraint does it provide? So we've got images of-- so here's our image. We've got three points. And naively, we can just say that, well, every time we measure where something is in the image, we've got x and y, so two numbers, two constraints. And so with three of them, we should have enough constraints to solve the problem. So we need three or more. And in this case, that naive argument actually is correct. We only need three. And this problem-- I don't know if it was Church who first solved it. In any case, he wrote it up in a textbook, I don't know, 1950s. So it's an old photogrammetric problem, well known. Of course, machine vision people didn't read that stuff. And they reinvented all of photogrammetry. And they actually did kind of a poor job of it, came up with something called projective geometry, for example.
So anyway, we go back to the roots, which is photogrammetry. OK, now one thing you might say is, suppose I don't care about the attitude. Then I only have three degrees of freedom. Can I solve that problem? Well, unfortunately, these are coupled. So you can't cheat and just solve for the variables you want. OK, so what do we know? Well, let's assume that interior orientation is known. So that means x0, y0, f are known. And so then when we have a given point in the image, we can just connect image point to center of projection. And we have a ray in space. And we know that the object is along that ray in space. And we have that ray in the camera coordinate system, right? So with the origin at the center of projection, and so on. OK, so if we have these three rays, it's like having three sticks. And you're trying to arrange for the sticks to go through three points in the 3D world. So you move around the position of the center of projection until that happens. That's what we're doing. So if we have the rays, we can calculate these angles, so the three angles. OK, so those are known. Once we've got our image points, we can construct those rays. And so we can just take the dot products to get the cosine. And we can take the cross product and take the magnitude of that to get the sine. And then atan2 will allow us to calculate the angles. OK, so we know those. So what don't we know? Well, what we don't know is the length of these legs of the tripod. So one way we can think about this is that our task here is to find r1, r2, r3. And we're not quite done once we've got r1, r2, r3. But we're just about done. Because then we can construct p0 by intersecting three spheres, right? So if we know r1, we know that the plane is on a sphere with radius r1 about the point p1. We know r2. We know it's on a sphere with radius r2 about point p2. And two spheres intersect in a circle. So we're not done. We haven't quite solved the problem. So then we take the third one.
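With a known interior orientation, constructing the rays and the angles between them is mechanical. A sketch (Python/NumPy, my own function and variable names):

```python
import numpy as np

def ray_angles(img_pts, principal_pt, f):
    """Unit rays from the center of projection through each image point,
    plus the pairwise angles between them: atan2 of the sine (magnitude of
    the cross product) and the cosine (dot product)."""
    rays = []
    for x, y in img_pts:
        d = np.array([x - principal_pt[0], y - principal_pt[1], f])
        rays.append(d / np.linalg.norm(d))
    angles = {}
    for i in range(len(rays)):
        for j in range(i + 1, len(rays)):
            c = rays[i] @ rays[j]
            s = np.linalg.norm(np.cross(rays[i], rays[j]))
            angles[(i, j)] = np.arctan2(s, c)
    return rays, angles
```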
There's a sphere of radius r3 about p3. We intersect that circle with that sphere. And we get two solutions typically. So there's potential ambiguity. But basically, we then have solutions. And if we have more than three points, we can resolve the ambiguity. You can also see there's going to be some ambiguity just by thinking about, suppose we have the lengths of those three tripod legs. Is there some other position than the one I've shown where those lengths would be exactly the same? So think about moving that airplane somewhere else. We're claiming that with three, we've got a solution-- possibly, however, more than one solution. So could we put the plane somewhere else and have those lengths still be the same as they are there? Well, if I move it a little bit, that's not going to work. Because then I'm going to screw up one or the other of these lengths. If I move it to the left, I increase r3 and decrease r1. But imagine that we're flying under the ground. Then we can have a mirror image of the position of the plane. And we get exactly the same three lengths for the tripod. So if we draw the plane that contains p1, p2, p3 and think of that as a mirror, and then we mirror-image the position of the airplane in that plane, then that has the same lengths. Now, of course, that's one way to disambiguate it. Because typically, planes don't fly underground. So we can resolve it that way. Also, there's an issue about the cyclical order of the images that would be different if we were looking at it from underneath. It's like looking at some writing from the wrong side. It's mirror imaged. And similarly here, if we look at it from the mirror image position of the plane upwards, everything is a mirror image. So we can resolve it that way. So if we only have two solutions, then we can easily get rid of the problematic one using some argument like that. OK, so how do we find r1, r2, r3? Well, there used to be books full of formulas of triangle solutions.
So if you know one side and two of the angles, or if you know two of the sides and one angle, all sorts of combinations. And why was that done? Well, because it was important for navigation. And it was important for surveying. So people used to know these things, but not so much right now. So they are in the appendix of the book. And the appendix is on Stellar. For example, there's the rule of sines, which is just that a over sine A is b over sine B. And there's the cosine rule. Basically, those are the only two you need. You can solve all these problems using those two rules. Sometimes it's convenient to have some of the other rules, because they make the job shorter. OK, so our problem is we have this angle. We know that. What else do we know? Well, if we have the map or the digital terrain model, then we know this distance. We can calculate that, similarly for that one, and so on. So in this triangle, we know that angle. And we know this distance. And the question is, what's r1 and r2? Well, that's not enough information to solve using the sine or the cosine rule. But we can write an equation involving the unknowns, r1 and r2, and all of these known quantities. And I won't be doing it here, because it might come up in the quiz. So the result is going to be that we have three of these triangles. We get one equation out of each of them. So we're going to have three nonlinear equations in the three unknowns, r1, r2, r3. And then we can talk about solving. It might not be easy to find a closed form solution. But at least we can talk about how many solutions there might be. And numerically we can always solve that problem. OK, that's r1, r2, r3. And then, as I said, we still need to intersect those spheres, so a little bit of algebra there. And then we have the position of the plane. If we have more sightings, more correspondences, that's always better. Then we can formulate the least squares problem. And if we're worried about outliers, we can use RANSAC.
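To make the three nonlinear equations concrete: each triangle p0, pi, pj gives a cosine-rule equation d_ij^2 = r_i^2 + r_j^2 - 2 r_i r_j cos(theta_ij), where d_ij is the known ground distance and theta_ij the angle between the two rays. A small Python check, with made-up ground points and camera position (in the real problem r1, r2, r3 are the unknowns), verifies that the true leg lengths satisfy all three equations:

```python
import math

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def norm(a):
    return math.sqrt(sum(x * x for x in a))

def angle(u, v):
    return math.acos(sum(x * y for x, y in zip(u, v)) / (norm(u) * norm(v)))

# Hypothetical ground points (known from the map) and camera center p0:
p1, p2, p3 = (0.0, 0.0, 0.0), (100.0, 0.0, 0.0), (0.0, 80.0, 0.0)
p0 = (30.0, 20.0, 500.0)

rays = [sub(p, p0) for p in (p1, p2, p3)]
r = [norm(v) for v in rays]                 # the unknowns in the real problem
th12 = angle(rays[0], rays[1])
th23 = angle(rays[1], rays[2])
th31 = angle(rays[2], rays[0])

def cosine_rule_residual(ri, rj, dij, theta):
    # d_ij^2 = r_i^2 + r_j^2 - 2 r_i r_j cos(theta_ij)
    return dij**2 - (ri**2 + rj**2 - 2 * ri * rj * math.cos(theta))

d12, d23, d31 = norm(sub(p2, p1)), norm(sub(p3, p2)), norm(sub(p1, p3))
res = [cosine_rule_residual(r[0], r[1], d12, th12),
       cosine_rule_residual(r[1], r[2], d23, th23),
       cosine_rule_residual(r[2], r[0], d31, th31)]   # all ~0 at the true solution
```

A numerical solver would drive these three residuals to zero starting from a guess for r1, r2, r3.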
Suppose we have 10 correspondences. Then we might pick three correspondences to get a solution, take a different set of three correspondences to get a solution, and so on. And so we've done all of that in other contexts. So I won't belabor that. So what's left? Well, what's left is finding the attitude. What is attitude? Well, it means the orientation relative to the ground coordinate system. So there's a rotation we need to find. Well, the thing is that once we know where the plane is, the center of projection p0, then we can construct these vectors in 3D in the ground coordinate system. Just subtract, p0 minus p1 or p1 minus p0, and p2 minus p0 and p3 minus p0. So we can construct three vectors. For example, we could say-- And this gets rid of the translation. We're only interested in the directions. But we also know these vectors in the camera coordinate system. So based on the image positions here, we connect those up to the center of projection, which is the origin in the case of the camera. And we get three vectors in that coordinate system. So they correspond-- three vectors in the camera coordinate system to those three vectors in the world coordinate system. So that's a pretty heavy constraint. That means that we should be able to relate those two coordinate systems. And we know that we've taken out the translation. So all that's left is a rotation. And so depending on which way we go, are we interested in a transformation from the camera coordinate system to the world coordinate system, or from the world coordinate system to the camera coordinate system? But we're going to end up with something like this. Now, of course, these vectors may have different lengths, if we just do these subtractions. We're only interested in the direction, so unit vectors. And so-- we expect that. And as I said, it could go the other way around from the world coordinate system to the camera's. So we have three equations like this. And what we're looking for is r.
And let's represent it as an orthonormal matrix in this case. Well, we can stick these three equations together into one matrix equation, right? So we have-- now, all of these things are known. We have the interior orientation of the camera. And so we know how to construct the three vectors from the center of projection to the three image positions. And we've calculated where p0 is. So we've got those three vectors. And so what is this? Well, this is a three by three matrix. And this is a three by three matrix, the first column of which is the vector a1. Second column is a2. Third column is a3. So we have a product, and all are three by three matrices. And we can just solve for r by inverting one of them, so very straightforward. An interesting question is, is the result also orthonormal? So I'll leave that as an unanswered question. OK, yeah. This is in the camera coordinate system. So it's p1 prime, p2 prime, and p3 prime. So a1 is p1 prime. Because in this coordinate system the center of projection is the origin, 0 0 0, and we subtract: p1 prime minus the origin. So that's the ray to that point in the environment as seen in the camera coordinate system. And as I said, this is the minimal case. We just have three correspondences. In practice, we would like to have more, to get better accuracy. And then we use least squares. And there's no longer a closed form solution. But we can use this to get started. Just pick three of the correspondences to get an initial guess, and then have some iteration that minimizes the least squares errors. Just make sure that what you minimize is in the image plane, because this is where the measurement error lies, not in some other arbitrary quantity. So there's that. Now, this doesn't need to be a plane. It could be some tourist camera in-- I don't know-- some famous square somewhere in the world. And maybe there's another tourist with a camera. And so there's a related problem. So here is one camera position.
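Before moving on, the matrix solution for the attitude can be sketched as follows. This is an illustration under the stated assumptions, exactly three correspondences and an invertible camera-frame matrix; the helper names are made up, and no attempt is made to re-orthonormalize the result:

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def inv3(M):
    """3x3 matrix inverse via the adjugate."""
    (a, b, c), (d, e, f), (g, h, i) = M
    det = a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)
    adj = [[e*i - f*h, c*h - b*i, b*f - c*e],
           [f*g - d*i, a*i - c*g, c*d - a*f],
           [d*h - e*g, b*g - a*h, a*e - b*d]]
    return [[x / det for x in row] for row in adj]

def cols(v1, v2, v3):
    """Stack three vectors as the columns of a 3x3 matrix."""
    return [[v1[k], v2[k], v3[k]] for k in range(3)]

def solve_rotation(world_rays, camera_rays):
    """Solve R B = A for the rotation R, where the columns of A are the
    three rays in the world frame and the columns of B are the same rays
    in the camera frame.  R = A B^-1."""
    return matmul(cols(*world_rays), inv3(cols(*camera_rays)))
```

With noisy measurements the recovered matrix won't be exactly orthonormal, which is one reason the least squares formulation is preferred in practice.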
And here's some famous sculpture, cathedral, whatever. And here is another camera position. Here's another camera. And there could be hundreds of these. And you may have seen the results of this. And in this case, we do something called a bundle adjustment. And again, this is an old photogrammetry problem, which machine vision people have rediscovered and made a hash of. But they did finally get it right. And so what's the problem? Well, it's a nonlinear optimization. And the method we proposed, we talked about, Levenberg-Marquardt, is a good one to solve nonlinear optimization problems. So what are the unknowns? Well, the unknowns are a set of points in the environment of which you may have more than one view. And we don't know where they are to start off. And so part of the problem is finding where they are in some world coordinate system. What else is unknown? Well, the other thing that's unknown is we don't know where the cameras are. So we need to allow for the cameras to move around. Those are unknowns. And then, well, the cameras have an attitude in space. So there's a rotation. I'll just write it that way. So we tweak things to make the errors as small as possible. And again, the errors are the errors in the image, not out in 3D. So what else? Well, in most cases, we don't know what the camera properties are. So we need to also do interior orientation, and maybe, if you want to be accurate, some allowance for radial distortion. So assuming we have some initial guess, then it's just a matter of pumping it into this black box, which minimizes the error by tweaking all of these parameters, lots of parameters. But presumably, you've got lots of constraints, lots of pictures. And so people have made incredible reconstructions of all sorts of things, and not necessarily from multiple cameras, but for example, one camera flying on a drone. So there's some volcanoes in the African rift zone that are very rarely visited. 
And somebody flew a drone over one of them and made a very detailed, complete 3D reconstruction of the current shape of the caldera on that volcano. And this is the method. So when you're flying the drone, you don't know exactly where you are. But you do know approximately. And that helps start the solution. So any time you have this nonlinear optimization, you want to be near the solution, or you might get sucked into a local minimum that's not the global solution. OK, there's one thing we haven't really talked about a lot, which is how do you find these interesting points? We talked about in detail how to find edges. We haven't said much about interesting points. If you want to, there's an online resource on Stellar that describes one of several methods that do that. And it's by Lowe, who was the guy who patented the original method. And since then, there have been lots of alternative methods that do not violate the patent and are faster, and maybe not as accurate, or maybe more accurate. There's a whole industry of coming up with ways of identifying areas that are likely to be easy to find again in another image and describing those areas, so that you can do a good match and find them again in another image. OK, so that's bundle adjustment briefly. I mean, there's a whole industry on that. But we have all the basics. By going through all of the other photogrammetry problems, we've developed all of the tools you need to implement something like this. OK, let's switch topics. So we worked our way up from real low level stuff, filtering, aliasing, subsampling, edge detection, and so on. And then we did the photogrammetry. We also did some work in 2D on recognition and determining position and attitude. So let's try and do that in 3D. And it's, obviously, not going to be as good. If you have Robot Vision, this is chapter 16. But you don't need Robot Vision, because there's an online resource specifically on this topic called Extended Gaussian Images. So what are you trying to do?
Well, we're trying to describe 3D objects. If the 3D object is polyhedral, that's not that hard, lots of interesting representations. We can get the coordinates of the vertices and then construct a graph showing which vertices are connected to what, and then maybe talk about the faces. And each face is connected to the vertices on that face in the graph and to the edges of the face. And each edge is connected to the two vertices and to the two faces that come together. So you can imagine a nice, typical, computer science solution involving some linked data structure. So polyhedra aren't that difficult. So we'll not say a whole lot about them. And in a derogatory way, they've been called blocks world problems, like children's blocks. So that's where we started in the 1960s. And hopefully, we've progressed away from that. So what other representations can we have? Well, we look at graphics. And they typically will use meshes, which are just perfect for that application. So we can approximate any curve. So we're interested in curved surfaces, now that we've given up on talking about blocks world. And we can represent any curved surface with whatever precision we want by approximating it by a polyhedral surface with lots of facets. And that works great for rendering. Because for each of those facets, we can determine the surface normal easily by just taking a cross product of two of the edges. And we can get the edges by subtracting vertices. So it's a very straightforward calculation. And then we use a reflectance map, or something like that, to figure out how bright to paint that little facet, and so on. So it's very convenient for output of pictures. But what about the things we want to do? So what is it we want to do? Well, a couple of things. We'd like to find where things are and how they are rotated in space. And I guess we call that pose. So that's position and orientation. And the other thing we might want is to do some recognition.
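The per-facet computation described above, edges by subtracting vertices, normal by the cross product of two edges, can be sketched as follows for a triangular facet (an illustration, not code from the lecture):

```python
def facet_normal_area(v0, v1, v2):
    """Unit normal and area of a triangular facet, from the cross
    product of two edges obtained by subtracting vertices."""
    e1 = tuple(b - a for a, b in zip(v0, v1))
    e2 = tuple(b - a for a, b in zip(v0, v2))
    n = (e1[1]*e2[2] - e1[2]*e2[1],
         e1[2]*e2[0] - e1[0]*e2[2],
         e1[0]*e2[1] - e1[1]*e2[0])
    mag = (n[0]**2 + n[1]**2 + n[2]**2) ** 0.5
    area = 0.5 * mag                       # half the parallelogram area
    return tuple(c / mag for c in n), area
```

The direction of the normal depends on the order of the vertices, which is why mesh formats fix a winding convention.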
And so let's see how well this representation works. Well, we could try and do alignment by taking two meshes and trying to bring them into alignment, which would mean we'd have to sort of assign a vertex in one of the objects to a vertex in the other one, and maybe minimize the square of the distance between them. But it's not very meaningful. Because these vertices don't have any particular meaning. It's not like they have a label on them that's meaningful. And if you digitize the surface again, what are the chances of getting that particular mesh? Zero, or close to zero. And so for alignment, this isn't particularly useful. I mean, you can do something. You can say, OK, I'll do an iterative thing, where I have approximate alignment. And so for each vertex, I can find what's currently the nearest vertex, and try and reduce that distance, and hope that that process will converge. But it's not great. And for recognition, you can't even say, OK, this has 320 facets, and the other one has 360 facets, so it's probably not the same object. So you need to do more. And there are ways of progressing from that representation and helping deal with alignment and recognition. But we'll look at a more elegant method, which has some limitations. So this is not a problem that has been cleanly solved for all possible situations. So what are we looking for in a representation? So one thing is this is like physics. We'd like to understand invariance and symmetries. So what kind of invariance? Well, what I'd like is that if the object moves, a translation, then the representation doesn't change, well, in a significant way. For example, if it means that all the x-coordinates are incremented by some fixed number, then that's not invariance. But it means that I'm keeping a representation that is changed in a very simple, understandable way. So that's translation.
And one way I can deal with that is say, well, just reference everything to the centroid of the object. And that gets rid of the translation component. And I've solved the invariance problem. Then rotation, OK? So what I'd like is that-- now of course, if I rotate, it's likely the representation will change. But I would like it to change in some understandable, systematic, simple way. If I consider, for example, perspective projection images, they don't change in a simple, understandable way. As we know, if we take a 3D object and we rotate it in front of a camera, we get images that are not simple changes from a previous image. The perspective projection induces a complex, messy transformation. So if I then want to recognize the object, or I want to align it, that's not a good representation. Because the transformation is very complex. So I can't have rotational invariance. Or rather actually, I don't want it. Because I'm going to try and recover the rotation. But what I'd like is that when you rotate something, the representation changes in a very understandable, simple way that I can exploit to both handle the recognition and handle the alignment problem. OK, so now there are lots of attempts at doing this, finding a representation that satisfies those criteria. Let's just look at one. So this is generalized cylinders. So a cylinder, when someone says cylinder, you tend to think of a right circular cylinder. That is one that has a circular cross-section. And it's obtained by sweeping a generator along a straight line. So in this case, we can think of this object as created by taking a circle and moving it along a line. And perhaps, even more constricted, that circle is perpendicular to the line. And so we generate that cylinder. So that's the most strict definition of a cylinder. And we can generalize it a little bit by changing the shape of our generator. And now we can generate more complicated shapes. 
But they still have the property that the cross-section anywhere is the same, if I cut it perpendicular to the axis anywhere along the length. And I guess the mathematical definition of a cylinder allows for that version. Now I can introduce a couple of other things. One of them is I can tilt the generator relative to the axis. Well, that doesn't do a whole lot. That's just a foreshortening transformation. But suppose I allow the size of the generator to vary as I go along. Well, then I can generate cones, for example. So again, we're sweeping along a line. And now we're allowing the size to change. Then we can allow the line along which we sweep to be curved. So far, we've had a straight line. So I might have a curve like this. And then let's take a circular cross-section, but of varying radius. So I can generate a shape like that. And we can combine these. And we could even allow the generator to change as it goes along the shape. But now it's getting out of hand. Then you can do anything. And it's not unique anymore. So this was an idea that was pursued for a while as a representation for objects, so that we can determine the alignment and recognize them. And it was somewhat of interest when people were trying to represent human bodies. So you can imagine that you can represent arms, parts of limbs, as generalized cylinders and then kinematically link those together, and build a 3D model that was kind of like an artist's wooden puppet that would have parts that were each individually generalized cylinders. There are some problems with this. So one is that in order to do recognition, you would like the representation to be unique. It's going to be harder if there's an infinite number of different ways of describing the same object. And here's an example. Here's a sphere. And I could represent it as a generalized cylinder by having that axis and then circles that grow in size and shrink in size, a perfectly good representation of a sphere.
Unfortunately, there's an infinite number of those. Because it could be this axis and those circles. So the sphere is kind of a tough case in particular, because of the symmetries. But the same problem shows up elsewhere and in particular, when we allow for inaccuracies in the data. Sometimes it's hard to tell the difference between objects that do have a unique generalized cylinder representation and objects that don't. So this was used a little bit. It's not been overly successful because of the reasons we described. And there was a tension between allowing more freedom in the generation of these generalized cylinders versus assuring that there was some semblance of uniqueness, so that you could solve the problems that the whole thing is designed for. OK, so instead, we're going to look at this representation. And again, keep in mind that this is an active area of research, unlike some of the 2D problems, where people kind of agree on solutions. Here, each proposed solution has some limitations. And so we'll look at the limitations of this representation as well. OK, so let's go back for a moment to polyhedra. We said we didn't want to do polyhedra, but they're a good starting point. So as I mentioned, one way to describe a polyhedron is to give the vertices with their coordinates in 3D and then the graph. I mean, there are other ways. But one way: you could have a list of vertices with 3D coordinates and then a graph structure that tells you which vertex is connected to which vertex, which face has what edges, and so on, that graph structure. But another way is to look at the faces and draw unit vectors perpendicular to them, and then multiply those by the areas. And then throw away that whole graph structure and just remember those quantities. So we'd have a vector n1, which is that, and then a vector n2, which is this.
And that's, interestingly enough, under certain circumstances a unique representation of that object, in the sense that there will be only one object that has that representation. And that's something we want. Because when we do recognition, we would like uniqueness. We would like it not to match some other object. And it's kind of surprising. Because we've thrown away a lot of information. We've thrown away the relationship between the faces. We've thrown away actual coordinates, corners and stuff. And yet, here's a representation. And Minkowski gave a non-constructive proof long ago that this is unique for convex polyhedra. You know, it's interesting that oftentimes a theorem will be ascribed to a person who varies with geography. You go to another country. And oh, this is Green's theorem, or actually, no, it's Stokes'. Well, in this case, Minkowski got to have his name on this theorem, because there wasn't a competing theorem invented by someone in the English speaking world. So he actually got to be the guy with the name on the theorem. Now interestingly enough, the proof is not constructive. Meaning, if you give me these three quantities, I can tell you there's only one convex polyhedron that corresponds to those. But I don't have an algorithm to construct that thing. And so there was some effort for a while in the machine vision world to come up with an algorithm. And I guess, Katsushi Ikeuchi came up with an iterative algorithm that would solve that problem in a very slow fashion. But I guess people pretty quickly realized, who cares? Because what's our job? Our job is recognition and alignment. It's not reconstruction. So if we describe the object using these quantities, we want to compare those against the model library, and match them up, and figure out, how do we have to rotate this object we're seeing so that it lines up with the model in our library? We're not in the business of saying, OK, we need to reconstruct this in 3D.
I mean, we may have already done that. But the fact that it's a non-constructive proof isn't a deterrent. It's not important, not relevant. OK, so how do I use this in a more interesting case? So this was just a polyhedron. Oh, and by the way, it's not too hard to prove that when you stack these vectors tail to tail, they form a closed loop. That is, the sum of these vectors, n1, n2, n3, is zero. And we'll prove something similar in a moment. So that's a constraint on what would constitute a valid representation. If you get a bunch of vectors that don't add up to zero, then you know it's not a closed convex object. And that can happen if, for example, you've left out one of the facets. Then yeah, they won't add up to zero. But other than that, any combination of vectors does represent some convex polyhedron, as long as it satisfies that constraint. OK, so let's take a more complex object, like-- I don't know-- an ICBM re-entry vehicle, somewhat simplified. There's a cylindrical part. And maybe there's a conical part. And I guess there's a flat part that we don't see here that's on the back. So what I can do is approximate this using our mesh polyhedral representation. For example, I can cut it up into slices, such that the normals for all the points on one of these slices, well, they're not exactly the same, but they have very little variation. And with a conical section, I can do that. OK, so the idea is, I mean, I can make a much finer mesh. But I'm actually combining things that have similar surface normals. And then what do I do? Well, then I compute all of these quantities, the areas aij times the unit vectors nij. And I keep those. Now, of course, I've got to be careful. Because I just said that the mesh representation isn't good, because I might draw a different mesh. So now, I go to the unit sphere. And I plot these vectors. So each of them has a direction in space that then corresponds to a point on the sphere.
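As an aside, the closure constraint mentioned above, that the area-weighted normals of a closed object sum to zero, can be checked numerically. A small sketch, not from the lecture, for a triangulated convex solid (the outward-orientation test against the centroid only works for convex shapes):

```python
def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def egi_vectors(verts, faces):
    """Area-weighted outward normals A_i * n_i for a closed, convex,
    triangulated solid."""
    centroid = tuple(sum(v[k] for v in verts) / len(verts) for k in range(3))
    vecs = []
    for i, j, k in faces:
        n = cross(sub(verts[j], verts[i]), sub(verts[k], verts[i]))  # 2*area*normal
        mid = tuple((verts[i][t] + verts[j][t] + verts[k][t]) / 3 for t in range(3))
        if sum(a * b for a, b in zip(n, sub(mid, centroid))) < 0:    # orient outward
            n = tuple(-c for c in n)
        vecs.append(tuple(0.5 * c for c in n))
    return vecs

# A tetrahedron: the four area-weighted normals must sum to the zero vector.
verts = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]
faces = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]
closure = tuple(sum(v[t] for v in egi_vectors(verts, faces)) for t in range(3))
```

If one facet were dropped from the list, the closure sum would equal that facet's area-weighted normal instead of zero.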
And I put down a mass at that point corresponding to the area of that patch. For example, if I had divided the patches up more finely, I would have had two areas half the size. But they both contribute to the same point on the sphere. That's why I can group them this way. So I could cut it up into tiny little pieces. It doesn't matter. What's important is how much mass ends up on the sphere here. OK, now let me do this for the whole cylindrical surface. Well, the surface normals for this cylindrical surface keep on turning. But they're all in a plane that is perpendicular to the axis of the cylinder. So then imagine cutting that sphere with that plane. I'm going to get a great circle. So I'm going to put down masses all along this great circle. And that's the part of the representation that corresponds to the cylindrical surface. So for example, over here, there's a unit vector that points out that way. That's going to be on the sphere somewhere here. And I put down a mass. And in this case, the way I've cut it up, all of these masses are the same. OK, what about the conical part? Well, same thing. I construct a unit vector. I take into account the area. Let's call this bi and-- I don't know-- mi, just for variety. And that ends up on the unit sphere here somewhere. And I put down a mass there. And now, if I consider all of the facets on the cone, the surface normal will change. And they're not in a plane. But if you think about the surface normals, they form a cone themselves with the complementary angle of the cone. So then I cut the sphere with that cone. And I'm going to get a small circle. So if you think about all of the facets of this cone, they will all contribute to points on the sphere on that small circle. OK, and then, well, there's a piece missing if I want to describe the whole surface of this object, which is the plate at the end. And the plate at the end is going to end up behind.
But somewhere on the sphere, there's got to be a big mass that corresponds to that area. Why is it big? Well, because everything in that area points the same way. So that whole large back plate area contributes mass at a single point. So it's like an impulse. And there's my representation for a non-polyhedral object. And you can see now how we could use this in various ways for the tasks we've set ourselves, alignment and recognition. So we could have a library of objects. And for each of them we pre-compute this representation. And then we can do a comparison. Now, the comparison is not least squares in the plane. It's on the sphere. So it's going to require a little bit of thought about how to implement that. But basically, we want to get them lined up, so that where this one has a lot of mass, the other one has a lot of mass. And so you could imagine inventing some measure of how they correlate, how well they match up. And that can then be used in two ways. The one is orientation. So I'd have to take one of the two representations and rotate the sphere until things line up as best as they can. And for recognition, I then do the subtraction to see how well they match up. And so this representation does provide for the two tasks that we described, lots of details to fill in. Then we said that we wanted certain properties. One of them was quote "invariance," or simple transformations resulting from translation and rotation. Well first, translation doesn't get into it. Because we're only looking at surface normals. So if you take this object and then you move it, we get exactly the same representation. So it's invariant to translation. Rotation, what about rotation? Well, rotation has a very simple effect on this representation. If I rotate this object, I'm just rotating the normal vectors. And that means that I'm rotating where they end up on the unit sphere. So it's just like rotating the unit sphere in an equivalent way.
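As an aside, the rotate-and-correlate idea can be illustrated in one dimension. This is a stand-in sketch, not the spherical algorithm: masses are binned by orientation angle around a circle, so a rotation of the object is just a circular shift of the bins, and the best alignment maximizes the correlation.

```python
def best_rotation(egi_a, egi_b):
    """Find the circular shift of egi_b that best correlates with egi_a.
    A 1D stand-in for rotating one spherical EGI against another."""
    n = len(egi_a)
    def corr(shift):
        return sum(egi_a[i] * egi_b[(i - shift) % n] for i in range(n))
    return max(range(n), key=corr)

# Two "EGIs" of the same shape, one rotated by three bins:
egi_a = [4, 1, 0, 0, 2, 0, 1, 0]
egi_b = egi_a[3:] + egi_a[:3]
shift = best_rotation(egi_a, egi_b)   # recovers the three-bin rotation
```

On the sphere the search is over 3D rotations rather than a single shift, but the principle, maximize overlap of mass, then compare residuals for recognition, is the same.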
So the change in the representation resulting from rotation is very simple, very intuitive, easy to understand, easy to implement. And so it satisfies that constraint. So in general, what we're going to be dealing with is kind of a density. So crudely speaking, if I have a certain mass here, that means there's an area on the object equal to that mass that has that orientation. So it's like the mass at any one point here tells me the area that has that orientation. In the case of discrete facets, we have point masses. In the case where I'm taking a limit and have a continuously curved surface, what I'm dealing with is a density. So the density of points on the sphere tells me something about the curvature. So if the object is highly curved, the neighboring surface normals will be pointing in very different directions. And that means that they'll be spread out on the sphere. And we get a low density. So low density corresponds to high curvature. And conversely, high density corresponds to low curvature. And we can see this right here, where the thing with the lowest curvature is the plate, the end plate. And it gives a huge contribution to the sphere, because it has very low curvature. OK, and of course, we'll have to say what we mean by curvature. Because this is 3D. So it's a little different. Now, one way to get started on this is to look at a 2D version of this first. And so first of all, what's the idea of the extended Gaussian image? Well, first, what's the Gaussian image? Well, the important thing to keep in mind is the relationship between points on the object and points on the sphere. What do they have in common? It's the surface normal. So if I want to find the part of the sphere that corresponds to this particular patch of the surface, I just find the point on the sphere that has the same surface normal. So go back to 3D for a moment. Suppose we have an Earth that is not a sphere. And we'd like to draw a map that's based on a sphere. Well, then we have some way of mapping between them.
And there are lots of conceivable ways. Well, Gauss came up with one, which is basically to say, I'm going to map this point to the point on the sphere that has the same direction of the normal. And that's the one we actually use. Because if you are saying we're here at MIT at 42 and a half degrees latitude, that is not this angle from the center of the Earth, which we sometimes mistakenly say. What it is is this angle. And so when we're dealing with a circle, all of these are the same. But when we have some other shape, we have to be clear about which directions we're talking about. And in this case, we find the one that-- why do we use this? Well, because it's easy to determine. You've got gravity pointing perpendicular to the surface. And then you have rotation about the celestial sphere. So you can determine the north celestial pole. Or you can look at where the sun is and what time of the year it is. And that's the angle you're going to measure. Now, there are subtleties there, like the centrifugal force, the uneven distribution of masses in the Earth, and so on. OK, so Gauss basically said one way of mapping between the convex object of arbitrary shape and the unit sphere is to just identify points that have the same orientation. And that's point to point. Well, we can generalize that to shapes. So suppose that we have Africa here, then we can map it onto the sphere. And why is that convenient? Well, because lots and lots of clever methods exist for mapping spheres onto planar surfaces for map making. But the first step is you need to convert it from the ellipsoid to a sphere. OK, so how do we do it? Well, for every point here, we look at the surface normal and we find the point over here that has the same surface normal. So that's the basic idea of the mapping. We make correspondences between points that have the same surface normal. And you can see that this mapping is actually invertible.
So if I'm on the sphere and I want to know what point this part of Madagascar corresponds to over here, I just find the point that has the same surface orientation on this other convex shape, as long as it's convex. Now there's a problem when we have non-convex shapes. Because there might be more than one point that has the same surface orientation. And so that's the limitation of this method, that it has very nice properties for convex objects, and has some issues with non-convex objects. But for the moment, let's focus on convex objects. So in the case of convex objects, this mapping is reversible. OK, back to 2D. So the idea is that, again, we now map from some shape to a circle, and maybe I should make this shape less circle-like, given that my circles aren't that great anyway. OK, so here's a convex object. And now what we want to do is take a patch, well, in this case, a short line segment, on that surface and map it onto the circle. And how do we do that? Well, we look at the surface normals. So there's a surface normal at the beginning. And that'll correspond to some surface normal here. And there's a surface normal at the end. And because it's convex, it's changing monotonically in between. So that whole range of surface normals in between maps into the whole range of surface normals here. And we'll just parameterize that unit circle in the plane by the angle eta. OK, and so what do we want to do? Well, we want to put down a density, which is inversely proportional to the curvature. So the mass, that's proportional to delta s, is going to end up being spread out over this part of the unit circle. So if we have high curvature, it's going to turn rapidly. And whatever the mass is gets spread out over a large angle. And conversely, if we're in the flat part, the surface normals turn very slowly. And all of that's going to end up in a very small segment over here. And so the density will be high.
And so we're going to end up with a continuous quantity of that angle. And that's the thing we're interested in. So first, let's pick some arbitrary point. So s is the arc length along the curve from here to there. And then again, these normals are parallel. So this must be angle eta. And so what we're interested in is curvature. Let's start with that. It's the turning rate, right? So for example, if you're not turning, then k is zero. So it's the rate of change of direction. Or it's one over the radius of curvature. OK, and then the density is going to be the inverse of that. So the density G is 1 over k-- and that's it. So that's our representation for a convex closed curve in 2D. We just map onto a unit circle the inverse of the curvature. And that representation is unique. There is no other closed convex object that will have that same distribution. Now in the 2D case, it's actually invertible. Now, you can see how you could make a transition from a discrete case to continuous case. You can just divide this up into lots of little facets that are straight lines. And each of those facets will contribute point mass on the circle. And then as you reduce the size of the facets, they become smaller and smaller and closer and closer together. And all that matters is the density. How much mass is there per unit area? OK, we call this thing G for Gauss. OK, now I'll show the inversion, even though we'll find that, as Minkowski found, there's no inversion in 3D. But just to illustrate some of these ideas some more. OK, so we're at an angle eta. And delta s is going to be perpendicular to that. And so when we make a small change, we get delta x is minus sine eta delta s, and delta y equals cosine eta delta s, right? Because we're going to move back by an amount, delta x. If I blow this up-- OK, this angle is eta, so we move back by something that's proportional to sine eta. And so all I need to do is integrate that equation. Because it tells me as I move along how far I move.
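To make that discrete picture concrete, here is a small sketch of my own (the function name and the square example are not from the lecture): for a convex polygon, all the turning happens at the vertices, so each edge of length delta s contributes a point mass at the direction eta of its outward normal.

```python
import math

def circular_image(points):
    """Approximate extended circular image of a closed convex polygon
    given as a list of (x, y) vertices in counter-clockwise order.

    Each edge contributes its length (the 'mass' delta-s) at the
    direction eta of its outward normal -- the discrete analogue of
    putting down the density G = 1/k on the unit circle."""
    n = len(points)
    masses = []
    for i in range(n):
        x0, y0 = points[i]
        x1, y1 = points[(i + 1) % n]
        dx, dy = x1 - x0, y1 - y0
        length = math.hypot(dx, dy)
        # outward normal of a CCW edge (dx, dy) points along (dy, -dx)
        eta = math.atan2(-dx, dy)
        masses.append((eta, length))
    return masses

# Example: the unit square gives four unit masses at the four axis directions.
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
img = circular_image(square)
```

For the square, the masses sum to the perimeter, and the mass-weighted normal directions cancel, which is the same balance condition that shows up in the polyhedral case.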
Now, I may not know delta s. I'm probably integrating in eta. So let's see. We can say x is x0 plus the integral of minus sine eta ds-- and then change variables, using ds equals G of eta d eta, so x is x0 minus the integral of sine eta times G of eta d eta. And similarly, there'll be an equation for y. Put it under there. So y is y0 plus the integral of cosine eta times G of eta d eta. So in the 2D case, I can invert it. I can actually obtain the convex object that corresponds to that circular image. I mentioned that's not the case in 3D. While we're there, I've been a little bit sloppy about limits. I haven't put them in. But we can do that. And one interesting question is, what is this? So those are the quantities that appear there. I started at x0. And I did this integration. And I construct this whole supposedly closed convex object. And so I should get back the same point, right? So therefore, that integral better be 0 when I go all the way around the loop. So again, because I started at x0, I integrate over the whole curve, I should be back assuming it's a closed curve. So we're assuming that it's a closed convex curve. Then those integrals better be zero. And so that means that the centroid of that mass distribution-- I keep on coming back to thinking of it as a mass distribution. There's a density on the circle, or the sphere, is at the origin. Because these are really-- the integral of x, g, blah, blah, blah. And sorry, yeah, x. And this would be y. So the integral of x weighted by this thing is zero. And the integral of y weighted by that thing is zero. And those are the moments, the first moments, you use in calculating the centroid. So this means that, OK, this mass distribution on the circle has to have a special property, which is that there may be more here or less somewhere else, and so on. All of that's fine. But it better have the property that the centroid is at the origin. So that's one limitation, which is exactly the same as what we had in the polyhedral case, that the sum of those vectors was zero. It's the same statement really.
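Those two integrals can be carried out numerically. Here is a sketch of my own (a simple forward-Euler integration, not the lecture's code), assuming ds = G(eta) d eta:

```python
import math

def reconstruct_curve(G, x0=0.0, y0=0.0, steps=10000):
    """Invert a 2D extended circular image: given the density
    G(eta) = ds/d(eta) = 1/curvature on the unit circle, integrate
    dx = -sin(eta) ds and dy = cos(eta) ds to recover the curve."""
    pts = []
    x, y = x0, y0
    deta = 2 * math.pi / steps
    for i in range(steps):
        eta = i * deta
        ds = G(eta) * deta      # change of variables: ds = G(eta) d-eta
        x += -math.sin(eta) * ds
        y += math.cos(eta) * ds
        pts.append((x, y))
    return pts

# A constant density G(eta) = r should reconstruct a circle of radius r.
r = 2.0
pts = reconstruct_curve(lambda eta: r)
```

Feeding in a constant density reconstructs a circle of radius r (centered at (-r, 0) with this starting point), and the curve closes exactly because the integrals of sine and cosine over a full turn vanish.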
So that's a limitation on what distributions are legitimate. But that's it. Other than that, you can have your masses arranged any way you want. OK, so let's look at an example. It's all very well in theory. Let's think of a circle of radius r. Always good to start with something really simple. Well, in that case, the curvature, what's the curvature? So the curvature, k, is d eta over ds. So how do I find out? Well, one way I can think about it is to relate the surface normal direction to the arc length along the circumference of the circle. So I can say that s is r eta, assuming that eta is measured in radians. OK, so that's very simple for a circle. And then I need d eta ds. Well, d eta ds is then 1 over r. Because eta is 1 over r times s. OK, so that means that the curvature is just the inverse of the radius of curvature for a circle. Now, in a more general case, we can still talk about the radius of curvature. Suppose the curve is not a circle. Suppose that elliptical shape, or something. We can still talk about radius of curvature. Because we can fit a circle locally to that part of the curve and ask the question, what is the radius of the best fit circle at that position? OK, so circle is very easy. And it's not particularly interesting. Because it's the same all the way around. It has the same-- g is constant. So g of eta is 1 over k. And that's r. And so it's constant all the way around. And by the way, this shows that we have this not very correct interpretation, which is that the value for any particular angle eta is how much of the object's surface has that as a surface normal? So this is saying that, first of all, in the case of a circle, that quantity is constant. It doesn't matter which direction we're looking at. And then also, that goes up as the radius. Because as we make the circle larger, it gets flatter and flatter. So more and more of it have approximately the same orientation.
So that's a useful way of thinking about that. OK, so we won't be able to use this for determining orientation. Because the orientation is ambiguous with that much symmetry. So we need to come up with a better, more complicated example. And I'm doing this in 2D now. Because for 3D, I'm just going to write down the result. It's too boring to work out. So by torturing you with a 2D version, I am saving you the pain of looking at the 3D version. So let's look at an ellipse. And we'll line it up nicely, so the equations come out easily. The center of the ellipse is at the origin. And the main axes are lined up with the x and y-axes. And of course, we know that one way we can-- and that's a so-called implicit form of the equation for an ellipse. There's a wonderful book that's gone out of print many times and then got reprinted, which talks about different ways of representing curves. And you think, well, there's one and there's perhaps another. No, there's a dozen that are commonly used and a dozen more that are less commonly used, so loads of representations. And we don't teach much about them these days. But here's another, which is more useful for our purposes. OK, so what is this? Well, we can think of this as a squashed circle. So we basically multiplied-- imagine the circle of radius a and we've squashed the vertical dimension by this factor. And we get that. And that theta is the angle in the original circle. So there was some original circle. And we somehow squashed it to produce that. So the theta is not an angle in that diagram. It's an angle in the diagram of the non-squashed version. OK, but you can see that if you take x over a, squared, plus y over b, squared, you'll get 1. And that's kind of in many ways a more convenient representation. Because you could use it to generate the curve. This one here, how do you generate the curve? Well, I guess you could try all possible x and y's. And some of them will produce 1. And some won't.
And you can put down a point there. But if you want to draw it, this is much more convenient. Because you just step through theta and compute a polyhedral approximation of however fine a detail you want. OK, so that parametric representation is great. And that relates to the Earth as well. The Earth can be thought of as a sphere that's squashed in the vertical direction. And just remember that those angles aren't the same, just as we talked about that's not latitude. And it's not geocentric latitude either. Oh, and by the way, if we needed the area, it's pi times ab. And you can check that works for the limiting case, where we are dealing with a circle. So what do we need to know? Well, we need to map it to a circle based on the surface normals, or in this case, the normal to the curve. So we need to compute the normal to the curve. Well, we can start by computing a tangent. So how do we do that? Well, we just differentiate with the parameter. And that gives us a vector that goes along the curve. So we're going to look for something like that by differentiating this with respect to theta. And so we get minus a sine theta, b cosine theta. Then we first define this vector r, which is this thing. So these are now two vectors. Because we're in 2D. OK, so that's the tangent. And the normal, of course, is just perpendicular to that. And so how do I do that? Well, I flip x and y and change the sign. And that's not a unit vector. But it's a vector in the direction perpendicular to the curve. And that tells me where I am on the sphere. Now, on the sphere, on the circle, I have that. And so these directions have to match. So the direction, that's not a unit vector. But if I normalize it, then that should match that. If I match those up, I get-- let's see-- cosine eta is b cosine theta over n, and sine eta is a sine theta over n, where n is the length of that vector. So let's say n squared is b squared cos squared theta plus a squared sine squared theta. OK, well, let me just define another vector.
In analogy with the vector we have over there. And then we find that the curvature k is ab over n cubed. The details of this aren't terribly important, just algebra. The important thing is that this is what we end up with. And the quantity we're interested in is just the inverse of that, G is 1 over k. So this is the curvature. And the quantity we want is the inverse of the curvature. So one thing that's interesting is to ask, what are the extrema of this? And so you would imagine that the extrema are going to be at the ends of the semi axes. And so we would expect that the extrema will occur for theta equals 0 and theta equals pi over 2. Well, 0 and pi is the same thing. And pi over 2 and 3 pi over 2. And in that case, we end up with ab over a cubed, which is b over a squared, and ab over b cubed, which is a over b squared. So I'll draw that ellipse again. Let's see. a is the large one. OK, so details aren't that important. But what we've done is we've computed the extended circular image for an ellipse. And it's a continuous function of eta, the angle on the unit circle. And it varies, unlike the circle. And it has a maximum and a minimum and a maximum and a minimum, as you go around. And they depend on the semi-axes. And as you can imagine, the a is the larger of those two semi-axes. So here we see the curvature is quite high. And b is the shorter axis. And the curvature is small. And so there's a continuous distribution on the circle, which we can now use to determine the orientation of an ellipse that's not lined up with a coordinate system. Because we'll have that same distribution of this angle. But it'll be rotated. And so in order to get the match, we have to take one of the two and rotate it until it's a good match to the other one. And similarly, once we've done that, we can check how good a fit it is. And if it's a good fit, then we do, in fact, have an ellipse. If not, well then, the object's not an ellipse.
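The curvature result can be checked numerically. A minimal sketch (the function name is mine), assuming k = ab over n cubed with n squared = b squared cos squared theta plus a squared sin squared theta:

```python
import math

def ellipse_curvature(a, b, theta):
    """Curvature of the ellipse (a*cos(theta), b*sin(theta)) at
    parameter theta (the angle of the un-squashed circle, not the
    direction of the normal): k = a*b / n^3."""
    n = math.sqrt(b**2 * math.cos(theta)**2 + a**2 * math.sin(theta)**2)
    return a * b / n**3

a, b = 3.0, 1.0
k_major = ellipse_curvature(a, b, 0.0)          # at (a, 0): a / b^2
k_minor = ellipse_curvature(a, b, math.pi / 2)  # at (0, b): b / a^2
G_major = 1.0 / k_major   # extended circular image there: b^2 / a
G_minor = 1.0 / k_minor   # and a^2 / b
```

The extrema come out as claimed: high curvature (small G) at the ends of the major axis, low curvature (large G) at the ends of the minor axis.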
And so if we have a library of objects, what we would do is do this calculation for each object in the library and find the one that is the best match. I know all these different angles, just like the Earth image over there. So theta is a parameter. It doesn't actually show up in this diagram. Where it comes from is the theta in the circle, before the circle got squashed. And so this used to be theta. But it's now gotten decreased. Whereas, eta is the position on the sphere. So we map from this space onto the unit sphere of directions. So there's a relationship between the two, which let's see, it's something like-- I've got it somewhere. So tan theta is related to tan eta by tan theta equals b over a times tan eta. And this is, if I got it right, an important formula used in geodetics. Because it relates the geocentric angle, the angle we make at the center of the Earth, to the angle of the latitude that we use for computing latitude. And in the case of the Earth, the difference is not very large. The flattening is only about 1 over 298. So those angles are pretty close. But it's important to keep them separate. Yeah, I got it right. OK, so that's a 2D version. And actually, there are applications of this in 2D. And you can do more interesting things in 2D also. For example, you can do some filtering operations. So you can do convolution on the circle, which is different from convolution along the line. Because things wrap around. And that's a whole other topic. There's a paper on that on Stellar as well, in case you were interested. Let's go back to 3D. That's the problem we're really interested in. And so we start with Gauss mapping, which basically connects points on a surface to points on a unit sphere based on surface normal orientation. And that's for points. And then we extend that to shapes. So we might have some object here. And there's a shape there. And then there's a corresponding shape over here. They're related in that every point here has a surface normal.
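The latitude relation can be sketched in code (the Earth radii below are illustrative values I am assuming for the example, not surveyed constants from the lecture). Note that geocentric and geodetic latitude differ by the squared axis ratio, while a single factor of b over a relates the parametric angle to the geodetic one:

```python
import math

def geocentric_from_geodetic(eta_deg, a, b):
    """Convert geodetic latitude eta (the angle the local normal makes)
    to geocentric latitude theta (the angle at the center of the Earth),
    using tan(theta) = (b/a)^2 * tan(eta)."""
    eta = math.radians(eta_deg)
    return math.degrees(math.atan((b / a)**2 * math.tan(eta)))

# Earth-like ellipsoid with flattening of roughly 1/298:
a = 6378.137              # equatorial radius, km (assumed for the example)
b = a * (1 - 1 / 298.257)  # polar radius implied by the flattening
theta = geocentric_from_geodetic(45.0, a, b)
```

At 45 degrees geodetic latitude the geocentric angle comes out smaller by about a fifth of a degree, which is why the two angles are "pretty close" but must be kept separate.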
And that gives me a point in that patch. So let me call this the object. And this is the area delta o for object. And this is the sphere. And this is an area delta s. And the curvature is just defined as the ratio of those two areas, delta s over delta o, or rather the limit of that ratio-- So again, that intuition that if we have a very flat area, then almost all of that area is going to end up really close to the same place on the sphere. So that ratio is going to be very small, meaning the curvature is very low. If on the other hand, I'm looking at something like this, where it's very highly curved, well, those surface normals are going to be really spread all over the place. They're going to correspond to a large area on the sphere. And therefore, this ratio is going to be large, high curvature. So this is Gaussian curvature. Now curvature in 3D is more complicated than in 2D. So this isn't the whole story about curvature. This is just a convenient single scalar quantity that measures curvature. By the way, if I go around the circumference of this area in a certain direction, I'll go around the circumference here in the same direction, as long as this is convex. Now, what about non-convex surfaces? Well, if you think about what's a non-convex surface? Well, like a saddle point, or one of our hyperboloids of one sheet. So saddle point, think of a Pringle chip. So here's a surface with negative curvature. And if we trace around the surface normals around the outside, and plot those on the sphere, they will actually travel in the opposite direction. So for non-convex objects-- and in that case, we consider the curvature negative. So that formula up there should take into account the sign of the area, so to speak, which is the direction. So if the two directions match, then it's positive. And if this one's going around in the other direction, it's negative. But that's not going to happen for convex objects. And we're mostly going to be talking about convex objects. So take a very simple example. We take a sphere of radius r.
And k is 1 over r squared. And therefore, g is r squared. So that's an analogy with our 2D case, where k was one over r and g was r. So what does that mean? And where does that come from? Well, it's pretty simple. It's the ratio of these two areas. And in the case of a sphere, I can actually just take the whole thing. So this is a unit sphere. So its area is going to be 4 pi. This is a sphere of radius r. So its area is 4 pi r squared. And so if I take their ratio, delta s over delta o, I get one over r squared. And so again, that's consistent, that if you have a small sphere, it has high curvature. And conversely, for a large sphere, you have low curvature. OK, so this is kind of the key for us. And conversely, g is delta o over delta s. And by that I mean in the limit as we make those quantities smaller and smaller. OK, so what we're doing is intimately tied up with Gaussian curvature. Because it's just the inverse of the Gaussian curvature. Oh, I guess we're out of time. But one of the interesting things we can do now is talk about integral curvature, which applies to surfaces that are not smooth. So suppose that we're looking at a brick. It has a rectangular corner. We can't really talk about its curvature. Because it's zero on the faces and infinite on the edges. But we can actually talk about an integral of curvature over part of the brick. So we'll do that next time. And we'll talk about how to use this in recognition and alignment. And there will be a quiz out on Thursday.
MIT_6801_Machine_Vision_Fall_2020
Lecture_4_Fixed_Optical_Flow_Optical_Mouse_Constant_Brightness_Assumption_Closed_Form_Solution.txt
[SQUEAKING] [RUSTLING] [CLICKING] PROFESSOR: We started off talking about the two aspects of image formation, where and how bright. And where we talked about perspective projections. And in a camera-centric coordinate system, it's very easy to just work with that, and to extend that to be able to talk about motion. And so we just differentiated that. And we had a slightly different version of this where we made use of the perspective projection equation in order to rewrite it. Then we introduced the idea of the focus of expansion. That is the place where u and v are 0. And clearly, that's where this part is 0. And so if you solve for x, we get x over there. It's u/w. So the focus of the expansion is the point in the image towards which you're moving. And then we introduce the [INAUDIBLE] and talked about various ways of estimating that. Then from a somewhat different point of view, we looked at the image solid. So we're thinking about an image as-- brightness as a function of x and y, and sometimes x and y and t. And so there's an image solid, the video. And in this case, we looked at the possibility that the brightness of an image of some point in the environment doesn't change with time. So we introduced the constant brightness assumption. So as we follow some point and its image in successive frames, we are saying that in many circumstances, the brightness won't change. And we can exploit that. So if we-- it's quite interesting to look at this image solid and slice it in different directions. And you'll see kind of a very streaky nature because of this phenomenon. So the slices, of course, are not independent. And we see that things are streaky, like extruding toothpaste with multiple different colors in it as things move in the image. And then from this, we got the brightness change constraint equation, which gives us a relationship between the movement in the image and the brightness gradient and the time rate of change of brightness.
And we then addressed the problem that this is not giving us the ability to solve locally for velocity, because this is a linear equation in u and v. So it just defines a line in velocity space. So looking at a single pixel, we can't recover the motion, unlike the 1D case, where we could. And so we need more constraints. Well, a very extreme form of constraint is where everything is moving at the same speed. And so as I announced, there's a paper under Materials on Stella that goes into that. And it's doing the last part of the previous homework problem. And what it does is minimize some errors. So if this applies at every pixel and we have a constant u and v for the whole image, as in the optical mouse case, then we can write the problem this way. And again, that integral should be small, or it should be 0 if there was no error in u and v, and if there was no noise. But we're satisfied just to minimize that. And that's going to be our best estimate of u and v. And it's highly overconstrained. We've got one of these equations for every pixel. And we're only looking for two unknowns. So this is a case where we have millions of equations. And we've only got two unknowns. So that's very favorable. That means that the result is going to be much more accurate and reliable than it would otherwise be. So doing that, we obtain the linear equation in the unknowns with a symmetric 2 by 2 coefficient matrix. So we just run through the image, and we estimate the gradient Ex Ey. We estimate the rate of change of brightness, Et. And then we just accumulate these totals. And when we're done doing that, we have two linear equations in u and v. And we all know how to solve linear equations, particularly if there are only two of them. Now, as usual, we need to look at when that fails. And so we did that. We said that, well, it depends on the determinant. So the problem is when that coefficient matrix is singular. So we have a problem if that is 0, or if that's equal to that. 
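Here is a minimal sketch of that solver, on synthetic images (my own illustration, not the implementation from the paper): accumulate the five sums over all pixels, then solve the 2 by 2 system for the one (u, v) shared by the whole image.

```python
def fixed_flow(E1, E2):
    """Least-squares estimate of a single (u, v) for the whole image
    from the brightness change constraint Ex*u + Ey*v + Et = 0.
    E1, E2 are two frames given as lists of rows of brightness."""
    rows, cols = len(E1), len(E1[0])
    a11 = a12 = a22 = b1 = b2 = 0.0
    for y in range(1, rows - 1):
        for x in range(1, cols - 1):
            Ex = (E1[y][x + 1] - E1[y][x - 1]) / 2.0  # central differences
            Ey = (E1[y + 1][x] - E1[y - 1][x]) / 2.0
            Et = E2[y][x] - E1[y][x]
            a11 += Ex * Ex
            a12 += Ex * Ey
            a22 += Ey * Ey
            b1 -= Ex * Et
            b2 -= Ey * Et
    det = a11 * a22 - a12 * a12   # singular when the isophotes are parallel lines
    u = (a22 * b1 - a12 * b2) / det
    v = (a11 * b2 - a12 * b1) / det
    return u, v

# Synthetic test: a quadratic brightness pattern shifted by (0.3, -0.2) pixels.
u_true, v_true = 0.3, -0.2
coords = range(-5, 6)
E1 = [[float(x * x + y * y) for x in coords] for y in coords]
E2 = [[(x - u_true)**2 + (y - v_true)**2 for x in coords] for y in coords]
u, v = fixed_flow(E1, E2)
```

On this pattern the recovered subpixel motion is essentially exact, because central differences are exact for quadratics and the odd-order sums cancel over the symmetric grid; it also shows off the two-unknowns-millions-of-equations overconstraint.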
And we looked at various conditions, like e equals 0. Of course, if you have a black image, then that won't work. And also, if Ex is 0 or Ey is 0 as well. And just let's look at one more. Suppose that we have an image like this. And what we're trying to do is figure out whether this motion recovery is going to work. So how can we attack that? Well, in two ways, the one is what type of an image is that. Can we intuitively see why that's going to work or not work? And the other one is just to break down and compute the derivatives. So E sub x is going to be the derivative of this thing, f, times the derivative of the argument with respect to x. And so the derivative of this with respect to x, of course, is just a. And take out the x so it doesn't look confusingly like a times. And then I can look at the y derivative. So this is a very particular type of image. And in this case, I have Ex and Ey. f may be some complicated function. But the important thing is that Ex and Ey are in the same ratio everywhere. And so in this integration, I can replace Ey by b over a times Ex. And so this condition will be true, and the method will fail. So this is another way of saying the condition under which this method won't work. And then we can look at what kind of an image is that. Well, as usual, I can't draw gray levels on the blackboard, nor draw gradients. But I can draw contours of constant brightness, isophotes, which are perpendicular to the gradient. And so what are the isophotes? Well, the isophotes are where E x, y is constant. That means where f of ax plus by is constant. That means where ax plus by is a constant. And that's the equation of what? It's a straight line, right? So the isophotes are straight lines. Also, they're all parallel straight lines. They only differ in C. a and b are fixed ahead of time. So this is the kind of image that gives us trouble. Yeah, we know that, right? Because if it slides in this direction, we can't measure that.
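That failure can be demonstrated numerically (a sketch of mine, not lecture code): for an image of the form f(ax + by), the estimated gradients are everywhere in the same ratio, so the determinant of the 2 by 2 coefficient matrix vanishes.

```python
import math

def flow_matrix_determinant(E):
    """Determinant of the 2x2 least-squares coefficient matrix for one
    frame E (list of rows); zero means the fixed-flow equations are
    singular -- the aperture problem for the whole image."""
    a11 = a12 = a22 = 0.0
    rows, cols = len(E), len(E[0])
    for y in range(1, rows - 1):
        for x in range(1, cols - 1):
            Ex = (E[y][x + 1] - E[y][x - 1]) / 2.0
            Ey = (E[y + 1][x] - E[y - 1][x]) / 2.0
            a11 += Ex * Ex
            a12 += Ex * Ey
            a22 += Ey * Ey
    return a11 * a22 - a12 * a12

# E(x, y) = f(a*x + b*y): isophotes are the parallel lines a*x + b*y = c.
a_, b_ = 2.0, 1.0
striped = [[math.sin(0.3 * (a_ * x + b_ * y)) for x in range(12)]
           for y in range(12)]
det = flow_matrix_determinant(striped)        # essentially zero: singular

curved = [[float(x * x + y * y) for x in range(12)] for y in range(12)]
det_curved = flow_matrix_determinant(curved)  # comfortably nonzero
```

For the sinusoidal stripes the finite-difference gradients come out exactly proportional, so the determinant cancels to rounding error, while an image with curved isophotes gives a healthy determinant.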
There's no change in the image. If it slides in that direction, we can. But it doesn't allow us to determine the other part of it. So then that's the optical mouse problem. So from that we went to time to contact. And we looked at-- so we had this. There are various ways of rewriting that. Let's see. Which way around do I want to do this? So w is the z component of the motion in the world. So that's dZ dt. And I don't know. That may or may not ring a bell. But w over Z, that's the derivative of log of Z. So if I plot things on a logarithmic scale, this is just the slope of the graph on that logarithmic scale. So that's interesting. So it's all dependent on ratios, fractional parts, rather than absolute values. So what's important is by what fractional part does z change in a certain time interval? Not by how many meters. And that's one reason why we could do this without calibration. We didn't have to know what the focal length of the camera is, for example, because it's only the ratio, the fractional part that matters. And now another way to think about time to contact is in terms of image size. So suppose here is some object in the world of size S, and here's its image of size little s. Then I can write an equation relating those quantities based on the triangle, similar triangles, this triangle on the outside, and this triangle in the camera. So I've got s/f is S/Z. It's just the lateral magnification of the camera, which in this case is much smaller than 1. The image is much smaller than the object. But we call it magnification anyway. Well, I can cross multiply. So I have that relationship. And why do I do that? Well, because now I'm going to differentiate that and see how it changes with time. And so s is changing. If we're approaching the object, then s will be increasing. So s will be changing. So we're going to have Z times ds dt. And it's a product. So we also get the other one. We're going to get s times dZ dt. And then we need the derivative of the other side.
Well, the size of the object, presumably, is constant. We aren't changing the imaging parameters. So the derivative of that is 0. And the derivative of 0, of course, is 0. So we get this relationship. And so this tells us that-- which way around did we use it over here? ds dt over s is minus dZ dt over Z. So the fractional change in image size is exactly the fractional change in distance. And so for example, if the picture of the image of the bus increases by 1% as you go from one frame to the next in your video sequence, then that implies that the TTC is 100 frames. And so that's-- at 20 frames a second, that means it's only 5 seconds away. So one conclusion is that in a lot of important practical cases, the time to contact is not tens of frames, but hundreds or thousands of frames. Otherwise, you'd have a problem. You'd probably be just about ready to crash into something and may have trouble compensating. So if the time to contact in many cases is thousands, that means that the fractional change per frame is 1 over thousands. And so the fractional change in the image from frame to frame is relatively small. And so that means if we were to use a method that's dependent on actually measuring the image size, estimating how big the picture of the bus is in your image, it better be really accurate, like one part in several thousand. And that means it's going to need subpixel accuracy, that it won't be good enough to simply measure where the front and the back of the bus is in the image. And that's why that turns out to be not a good way to estimate the time to contact, as opposed to the method that we described. So that's one thing I wanted to get across, this very simple relationship that if there's a certain percentage change in size between frames, that translates directly into a certain percentage change in the distance. And that directly translates into time to contact. So it's a very easy way to understand that.
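In code, that relationship is essentially one line (the function is a hypothetical illustration of mine):

```python
def ttc_from_size_change(s_prev, s_next, frame_rate):
    """Time to contact from the fractional change in image size between
    two frames: TTC (in frames) = s / (ds per frame); dividing by the
    frame rate converts frames to seconds."""
    ds = s_next - s_prev
    ttc_frames = s_prev / ds
    return ttc_frames, ttc_frames / frame_rate

# The example from the lecture: 1% growth per frame at 20 frames/second.
frames, seconds = ttc_from_size_change(100.0, 101.0, 20.0)
```

This reproduces the numbers above: a 1% growth per frame gives a TTC of 100 frames, which is 5 seconds at 20 frames per second.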
Now, when we did this, or you did it, we had a very simple situation. We started off moving directly towards a wall so that we had constraints on both the direction of motion and on the surface we're looking at. So the very-- you may remember the very first thing we calculated was C, which was the component in the Z direction. And we had some simple ratio of two integrals. And that's the case where we're moving straight towards the wall, the wall is-- what does it mean? The optical axis is perpendicular to the wall. Z is constant on the wall. It doesn't vary as we go left and right. So that's a very simple case. So then we said, well, let's be slightly more general. Let's assume that we could also be moving sideways as we're going along. And then we added the motion in x and the motion in y. And we had the slightly more interesting problem where we were looking for three unknowns, A, B, and C. And we ended up with three linear equations and three unknowns. And right now I remember what I was going to say. Again, this is all spelled out in the paper that's on the website on time to contact. I guess the full title is "The Time to Contact Relative to a Planar Surface." And now the paper discusses some other things as well. It's just like the other paper. The first half is exactly what we did in class, and then it goes off into some other directions. Same here. So is this the most general we can get? Well, we're still making an assumption that Z is constant. So we're approaching the wall, and the optical axis is perpendicular to the wall. Well, what if our camera is tilted, or conversely, we're approaching a wall that's tilted in the world? So a different generalization has Z be a tilted plane so that it's no longer the case that the depth is constant as I scan left and right or up and down in the image. And well, what's the equation of a plane? Well, it's going to be some linear equation in x and y. So one way I could write it is in that form.
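That simplest case, moving straight down the optical axis toward a frontal wall, can be sketched as follows (this is my own reconstruction of the "ratio of two integrals", with the focus of expansion assumed at the image center; consult the paper for the exact formulation): with u = Cx and v = Cy, where C is the inverse time to contact in frames, the brightness change constraint gives C = minus the sum of G times Et over the sum of G squared, where G = x Ex + y Ey is the radial gradient.

```python
def inverse_ttc(E1, E2):
    """Direct inverse time-to-contact for motion straight toward a
    frontal plane.  Substituting u = C*x, v = C*y into the brightness
    change constraint gives the least-squares estimate
        C = -sum(G * Et) / sum(G * G),  G = x*Ex + y*Ey,
    with image coordinates measured from the image center (the focus
    of expansion is assumed to sit at the principal point)."""
    rows, cols = len(E1), len(E1[0])
    cy, cx = rows // 2, cols // 2
    num = den = 0.0
    for j in range(1, rows - 1):
        for i in range(1, cols - 1):
            Ex = (E1[j][i + 1] - E1[j][i - 1]) / 2.0
            Ey = (E1[j + 1][i] - E1[j - 1][i]) / 2.0
            Et = E2[j][i] - E1[j][i]
            G = (i - cx) * Ex + (j - cy) * Ey
            num -= G * Et
            den += G * G
    return num / den

# Synthetic approach: the image expands by 1% per frame, so TTC = 100 frames.
C_true = 0.01
n = 21
E1 = [[float((i - 10)**2 + (j - 10)**2) for i in range(n)] for j in range(n)]
E2 = [[((i - 10) / (1 + C_true))**2 + ((j - 10) / (1 + C_true))**2
       for i in range(n)] for j in range(n)]
C_est = inverse_ttc(E1, E2)
```

On this synthetic expanding pattern the estimate lands within a couple of percent of the true value (the small bias comes from the first-order nature of the constraint), and 1/C gives the time to contact in frames.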
So this could be a more complicated model to look at. And well, you might expect at some point the equations get pretty complicated. And in fact, there might not be a closed form solution. And that's fine. You can do it numerically. But in terms of understanding what's going on and the noise effect, it's nice to focus first on cases where there is a closed form solution. So what you can do, actually, is say, fine. So now we will generalize it by allowing the plane to be tilted. But let's go back to the case where we were moving straight down the barrel, straight down the optical axis. And in that case, then we'll only have these three unknowns. So instead of having A, B, and C be the unknowns, we have these three or some function of them. And so we want to do that because it's pretty straightforward, more messy algebra. We end up with three linear equations and three unknowns. And we can solve for that. So we can do the time to contact in the case where the surface is tilted. So we're no longer making the assumption that we're driving into the side of the truck coming out of a parking lot. But we could be coming at an angle. So that's an interesting case to consider. What if we do both? What if we allow the surface to be tilted as well as our motion to be general? Well, we can formulate that problem pretty easily. And we end up with six unknowns. Unfortunately, they are no longer linear equations. And so from the point of view of pain and agony, they're not much fun to write down. And also, it's unsatisfying because we end up with this mixed set of equations. Some of them are linear. Some of them are quadratic. And it's very hard to say anything general about them. Whereas in these special cases, we can do a full analysis of noise and so on. So we're not going to do that. Just you know that it's there. Then we get to why is the surface planar? Well, the surface is planar because it's going to give us linear equations. So we start with a planar surface. 
Now, real surfaces may not be planar. And then what? Well, we can approximate them by polynomials, some locally quadratic surface. And we go through the same process. We set up a least squares problem. And we, unfortunately, won't find closed form solutions. But we can set it up so that some numerical process gives us a solution. Now, the reason we're not doing that is mostly because it actually doesn't buy you anything. So in practice when you implement this, you find that modeling the surface as planar gives you a very good estimate of the time to contact. And if you model it as something more complicated, now you have more unknowns, which is good in a way, because it allows you to model the world more accurately. But at the same time, you lose that overconstraintness. Every time you introduce more variables, there is an opportunity for the solution to squiggle off in another direction. So there are some pluses and minuses. And overall, the only time you want to even think about that is if the object has a shape in depth where the depth change is similar to the distance from the object. So if the truck is over there-- I'm 50 meters away from the truck, and the side of the truck is-- one side is maybe 2 meters closer to me than the other, it makes no difference. I mean, it's a 4% change in distance. And that won't affect anything. If I am right in front of the truck, I'm 2 meters away and one side is 1 meter and the other one is 4 meters, then yes, then you may need this. But we found in practice that we don't need that extra level of sophistication. These two things are sufficient in practice. Then let me just briefly talk about multi-scale. So when I showed you the implementation, when you got really close to impact, things kind of just fell apart. So the graph of time to contact estimated was quite similar to the actual time to contact. 
So this was a contrived situation with constant velocity so that the time to contact decreased linearly as time went on, because we got closer and closer to the surface. And then when we looked at the computed results, they were something like that, noisy. And that's partly because the measurement of the position was not very accurate. It was eyeballing down on the measuring tape. There was an interesting offset vertically, so there's a bias. So it's not just noise. Actually, the estimated time to contact was overestimated, which in itself isn't good, because if you're about to crash into something, you don't want to be told that actually it'll take longer than the true time. On the other hand, since it's a systematic fixed bias, you can compensate for it. It doesn't mean that you shouldn't try and figure out where it comes from. But it's pretty simple to just fit a different slope to this. But then at the end here, we had some spikes. And basically, the results were not reliable. And we already mentioned some reasons for that. And one of them is that the image motion is large. So earlier we said that when the bus is far away, the image motion is very small. The image of the bus will tend to expand and contract and move by a fraction of a pixel. And that's where these methods really excel. As we go along, we've been making some assumptions that certain distances, the epsilons are small. When we estimate E sub x, E sub y, E sub t, for example, we're taking a finite difference and saying, oh, this is almost a derivative, because epsilon is small. Well, that won't work if we have a large jump in x, y, or t. And so that's one reason that this falls apart. I mean, there are other reasons. One of them was that the camera went out of focus. And so you didn't have a clear picture of the object anymore. But this first part is easy to deal with. 
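The finite-difference estimates of E sub x, E sub y, E sub t mentioned here can be sketched as follows. This uses one common scheme, averaging first differences over a 2 by 2 by 2 cube of samples so that all three estimates refer to the same point in x, y, and t; the exact scheme used in lecture may differ.

```python
import numpy as np

def brightness_gradients(frame0, frame1):
    """Estimate Ex, Ey, Et from two consecutive frames by first
    differences averaged over a 2x2x2 cube of samples."""
    f0 = frame0.astype(float)
    f1 = frame1.astype(float)
    # Differences in x, averaged over y and t.
    Ex = 0.25 * ((f0[:-1, 1:] - f0[:-1, :-1]) + (f0[1:, 1:] - f0[1:, :-1])
               + (f1[:-1, 1:] - f1[:-1, :-1]) + (f1[1:, 1:] - f1[1:, :-1]))
    # Differences in y, averaged over x and t.
    Ey = 0.25 * ((f0[1:, :-1] - f0[:-1, :-1]) + (f0[1:, 1:] - f0[:-1, 1:])
               + (f1[1:, :-1] - f1[:-1, :-1]) + (f1[1:, 1:] - f1[:-1, 1:]))
    # Differences in t, averaged over x and y.
    Et = 0.25 * ((f1[:-1, :-1] - f0[:-1, :-1]) + (f1[:-1, 1:] - f0[:-1, 1:])
               + (f1[1:, :-1] - f0[1:, :-1]) + (f1[1:, 1:] - f0[1:, 1:]))
    return Ex, Ey, Et
```

On a linear brightness ramp moving at constant rate, this recovers the gradients exactly; on real images it is only a good approximation while the motion per frame stays small, which is exactly the point about large jumps above.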
As I already mentioned before, I just want to reiterate that if we have an image with less resolution-- say we have half the number of rows and half the number of columns in the image-- well, then the motion in terms of pixels per frame is half what it was before. And so what was a large motion in the original raw image is now half of that. And so that means that you'll still have it falling apart, but it will fall apart later. So this part will still be OK. And then it'll fall apart down there. And of course, then you can repeat that process, and say, OK, so now in that image suddenly the motion has gotten to be more than a pixel per frame. So let's, again, subsample, average and subsample. And then we can continue it. And so multi-scale just means that we work in that set of images that become smaller and smaller. And we can handle motions that become quite large. And also, I mentioned that if we do the simple 2 by 2 block averaging, that means the second image is only a quarter of the size of the first one. So the amount of work is 1 plus 1/4 times what it would have taken just on the raw image. And then we do it again, so that's going to be a 16th. And so the total amount of work-- well, writing the code, of course, takes a bit more effort. But in terms of the time, it's not a big penalty to work at multiple scales. And you get hugely improved results. And we'll talk a little bit later about how to do the subsampling. Those of you who've taken 6003, of course, realize that you can't sample without getting aliasing unless you've low pass filtered first. So actually, what you want to do is low pass filter, and then subsample. And you don't necessarily need to subsample on the scale in x by 2 and y by 2. You could subsample by-- I don't know-- square root of 2, which is less aggressive and introduces fewer artifacts. But for the moment we'll ignore that and just take the very simple idea of 2 by 2 block averaging, which is a crude form of low pass filtering. 
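A sketch of the multi-scale construction by repeated 2 by 2 block averaging. Note how the total pixel count, and hence roughly the work, stays below 1 + 1/4 + 1/16 + ... = 4/3 of the full-resolution image:

```python
import numpy as np

def block_average(img):
    """2x2 block averaging: a crude low-pass filter followed by
    subsampling, halving both dimensions."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    img = img[:h, :w].astype(float)
    return 0.25 * (img[0::2, 0::2] + img[0::2, 1::2]
                 + img[1::2, 0::2] + img[1::2, 1::2])

def pyramid(img, levels):
    """Build the multi-scale stack.  Image motion in pixels per frame
    halves at each level, so large motions become small ones."""
    out = [img.astype(float)]
    for _ in range(levels - 1):
        out.append(block_average(out[-1]))
    return out
```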
And it doesn't do exactly what you need to do, but it removes-- it suppresses some of the high frequency content. And while there will be aliasing artifacts, they'll be greatly reduced. And that's such a simple method to implement. And it works pretty well. Well, let's talk a little bit about what to do-- what do I do with time to contact? So there's a number of interesting applications. Every year there are several incidents with airplanes on the runway where wingtips are taken off. So these planes have very long wings, and they're typically swept back. So they're not terribly visible. And so there's an opportunity to bang them into a building or into another plane. And that's a really expensive thing to deal with. It's not life threatening in most cases, but it's something that you try and avoid. And as you approach the place that you get on the plane, there's often some person down there with some red lighted stick. And they're called wingmen. And the reason is they walk under the tip of the wing so that the pilot can look back and see where's the ground projection of the wing. And I'll try not to hit that staircase with the wing. So it looks like you could implement time to contact. I mean, you can implement it easily on Android, for example. So you could build a really cheap little box. The only purpose in life for it is to look out and see if something is rapidly approaching, and then give you a warning. And so we've thought about suggesting that to airplane manufacturers and such. And it didn't get anywhere. But Boeing took the idea, and they came up with a $150,000 radar solution. And that's obviously going to be much more fun for the corporation to implement than something as silly as this. So anyway, sour grapes. The next project is NASA landing on Europa. So Europa is a long way away. And we have some idea of what's on there, but not a whole lot. So it's not like we have detailed topographic maps and imagery.
And so they want something that reliably brings down a spacecraft. And so one idea is to use time to contact in control. So let's look at how we do that. So we have a typical control system. We input some desired time to contact. Then we have an actual estimated, and we subtract the two. And that gives us some kind of error signal. And we multiply that by a gain, and we use that to control the jet, the rocket engine to change the acceleration. And then there's a dynamical system, which is second order in the sense that we're controlling acceleration, not height directly. Height is two integrals down from the part that we can control. And then there's an imaging system. And so this system does something very simple, which may not be the best you can do, but it's easy to analyze, which is to try and maintain the time to contact the same. So if your measurement says that at the current rate of descent you're going to have a shorter time to contact than desired, then it'll add some more force to the engine. And conversely, if it looks like you're kind of hovering too much, you should be dropping down, then the time to contact will appear large. And then the error signal will cause you to reduce the engine output. So a very simple system. And we saw that time to contact is very easy to implement. And it doesn't care what the imagery is to a large extent. So we don't really know what the surface of Europa looks like, except from far away. But we're not depending on some particular texture, or calibration, or topographic map of the surface or something. This method works with any texture. Well, except we saw that there were certain special textures where it would fail. But presumably, Europa hasn't been painted in one of these unique stripy patterns. What are the dynamics of this? Because this idea of time to contact control can be used in other situations as well, such as in autonomous cars. But let's focus on the descent here. Now, why constant time to contact? 
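The feedback loop just described can be tried out in a toy discrete-time simulation; the gain, time step, and initial conditions below are made up for illustration:

```python
# Toy simulation of constant time-to-contact control: the measured
# time to contact, Z divided by the descent speed, is compared with a
# desired value, and the error drives the engine.
T_des = 10.0        # desired time to contact, seconds
k = 2.0             # feedback gain (arbitrary choice)
Z, w = 100.0, 20.0  # height (m) and descent speed (m/s); T starts at 5 s
dt = 0.001
for _ in range(int(20.0 / dt)):
    T_est = Z / w                # what the vision system measures
    a_up = k * (T_des - T_est)   # too-short T => add thrust
    w -= a_up * dt               # upward thrust reduces descent speed
    Z -= w * dt                  # descend
```

The measured ratio settles near the desired time to contact, after which the height decays roughly exponentially, as derived next.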
Well, we don't really know how to accurately, reliably, easily compute height from a monocular camera image, unless we have some target, like there's a Walmart, and we know what size Walmarts are. So we can compute how high we are. Well, there aren't any on Europa. I hope so. That won't work for us. So if we could separately compute height and speed, then we could do much more sophisticated things. But we know that we can very robustly get their ratio in a very simple way. So that's the attraction there. So we've got Z/w is T. And now we are assuming that T is constant. It's very curious that when DARPA had the grand challenge, of course, MIT was involved in that. And we instrumented a car. And there was one sequence that we thought would be interesting to compute after the fact. They've recorded all this video. Let's do something with it. Let's compute time to contact. And so there's the vehicle coming out of a parking lot, and it's approaching the MIT transport little bus. And if you plot the time to contact, it's almost constant. So the vehicle is slowing down. The closer it gets to the bus, the more it slows down. And so I have no idea what in the control algorithm of that autonomous vehicle did that, but it was interesting to observe that it used a constant time to contact control to approach the bus without running into it. Well, here's an equation that we can solve. So it goes Z over dZ dt is T. And so dZ dt is minus 1/T times Z, with the minus sign because Z is decreasing. And well, there's an ordinary differential equation that you should know the solution for, keeping in mind that T is a constant. So dZ dt is proportional to Z. What sort of function does that? Quadratic? Exponential? Anyone? OK. So we get Z is Z0 e to the minus t over T. We differentiate that, and the derivative is proportional to the function itself. And so if we implement this constant time to contact control system, what we'll find is that we're going to get a descent that looks like this. So here's our Z0.
So it's going to be nice and gradual and smooth, and it will never get there. So that's the downside. So what do you do there? Well, we can do what flies do. They use constant time to contact when landing on the ceiling. And when their legs touch the ceiling, they stop. So we can have a wire hanging down from our spacecraft, and when it touches the ground, we shut off the engines. And then it'll just fall those last meter or 2. And that method, of course, has been used in planetary spacecraft before. But we can just combine it here with the time to contact. So at a certain point, we do have a distance measurer, an actual wire, and then it just drops under gravitational control until it hits the surface, I guess. So that's one aspect of this. And we can go into an error analysis of what will happen there. Just for fun we can compare this with a more traditional approach. So a more traditional approach would be we run the engine at its full rated thrust, and we decelerate. And we only turn it on at the time where we need to turn it on so that we don't crash into the surface. So now that method requires that we know the distance to the surface and we know the velocity. So we have to know a lot of stuff, where with the time to contact control, we don't. You just keep the time to contact constant. So one advantage of this alternate method is it's more energy efficient. So here we'll be kind of going almost into a hovering mode. So we're kind of wasting fuel in a way. So there's a trade-off. So we're ignoring the fact that the spacecraft is getting lighter as it's using fuel, and so the acceleration will change. We'll just assume that it's constant. And so we integrate that once, and we get a constant of integration. And we can apply the boundary conditions, which are that at some point we reach the surface. And then we integrate a second time. We integrate that, and we get Z is 1/2 a t squared plus some other constant of integration.
And again, we use the boundary condition, and we end up with Z is 1/2 a t squared, with t now measured as the time remaining until contact. Of course, you can do this the other way around. It might be easier to start off with that, and then just differentiate to get to the constant acceleration. So why did I even bother doing this? Well, because it's interesting to ask, under that type of control, what is the time to contact? And so how do we compute that? Well, we need Z and dZ dt. So we just take the ratio of these two. So T is Z over dZ dt, which is 1/2 a t squared over a t, and so that's going to be t over 2. So that's saying what the time to contact is during any part of this maneuver. And it's fascinating that it's a half of what you get over here. Over here, of course, the time to contact would be T minus T0. Anyway, it's just a way of comparing constant time to contact control with a more traditional approach. The more traditional approach is somewhat more fuel efficient. But it's much harder to implement. It requires accurate estimations of distance and velocities. Anyway, apparently, NASA decided they can accurately estimate distance and velocity. So these crazy machine vision programs, who knows whether they'll work? So they're not going to do that. But you can imagine that there are other applications for this. Because this is a sensor that's really dumb. I mean, it's simply a matter of brute force estimating gradients and accumulating totals, multiplying, and then solving a bunch of equations. So it's very straightforward. Now, before we go on to another topic, I want to point out another generalization that we'll take up later, and that's optical flow. So the paper on Stellar that talks about the optical mouse problem primarily is-- what's the title? "Computational Fixed Flow, Determining Fixed Flow." Fixed flow, what does that mean? Well, it means that the motion of all parts of the image are the same. And as I explained, for an optical mouse, that's a very good model. That's very accurate.
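The factor-of-two result is easy to check numerically: under a constant deceleration that ends exactly at the surface, a time-to-contact sensor reads half the time actually remaining.

```python
def ttc_under_constant_deceleration(a, time_remaining):
    """Under constant deceleration ending exactly at the surface,
    Z = 1/2 * a * t^2 and |dZ/dt| = a * t, with t the time remaining
    until contact.  Return what a time-to-contact sensor would read."""
    Z = 0.5 * a * time_remaining**2
    speed = a * time_remaining
    return Z / speed  # algebraically time_remaining / 2
```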
But as I'm walking around the room, the motion of different parts of the image are not the same because you're at different distances. And we saw how we can get that out of a perspective projection equation by differentiating. And we get some terms that are affected by division by z, and so on. And then, also, some of you may be moving around, independent motion. It's not that. So these problems are particularly easy to solve when there are only a few parameters. So for the optical mouse there are two-- motion in x, motion and y. Well, you might be turning the mouse. So it could be three. And we'll probably take that up in a homework problem. So it'll be a slight generalization of what we did with the added feature that you're not only tracking the position of the mouse, but if the user is turning it, you also want to recover that. Maybe not because that's an interesting input to your GUI, but because it might screw up the estimation of the x and y motion if you do rotate. Anyway, so those are all cases where we have a fairly small number of unknowns, and we have a hugely overdetermined system. We have some measurement at every pixel. What if we don't have that? Well, that's going to be a problem, because at every pixel we have an equation like that. So we have a constraint. So if we have 10 million pixels, we've got 10 million constraints. Great. Except at every pixel now, we also have an unknown velocity. Whoa! Cheap chalk. So that means that we have twice as many unknowns as there are equations. Well, that's a prescription for disaster. So highly underconstrained, and it's an ill-posed problem. And in this case, it's ill posed in the sense that there's an infinite number of solutions. And in fact, if you give me a solution, I can easily construct another solution, because all I need to do is at every pixel I need to obey this constraint. So at every pixel, I'm somewhere along this line. 
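A tiny sketch of that ambiguity at a single pixel: the brightness-change constraint u Ex + v Ey + Et = 0 pins the flow (u, v) to a line in velocity space, and sliding along that line, perpendicular to the brightness gradient, changes nothing.

```python
def constraint_residual(u, v, Ex, Ey, Et):
    """Brightness-change constraint at one pixel; zero for any flow
    (u, v) consistent with the measured gradients."""
    return u * Ex + v * Ey + Et

def slide_along_constraint(u, v, Ex, Ey, s):
    """Move a distance s along the constraint line, i.e. perpendicular
    to the brightness gradient (Ex, Ey).  The residual is unchanged."""
    return u + s * Ey, v - s * Ex
```

For instance, with made-up measurements (Ex, Ey, Et) = (2, -1, 3), the flow (0, 3) satisfies the constraint exactly, and so does every point slid from it along the line.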
And now you give me the, quote, "correct solution" that says here for that pixel. Well, the thing is that I can go there. It doesn't change anything. And so that's pretty dramatic. That means that I can systematically go through the image, and at every pixel I can give you an infinite number of other values that will work. So we'll need some heavy constraint. And one constraint that we can use and will use later is that neighboring points in the image do not move independently. They may not move at the same velocity, but they tend to move at a similar velocity. So that's a good thing and a bad thing. I mean, it's good in that, oh, here we've got some sort of constraint. And it's a bad thing, because it's not like an equation that says u plus 3v is 15 or something. It's more vague. It's like, oh, it's varying smoothly. What does that mean? What's smoothly? What epsilon change can you allow? So that makes that problem very interesting and non-trivial. We'll get to that later. We don't have the tools at this point to do that, but it's just to alert you to the fact that the fixed flow isn't the be all and end all. There's more to come. Now, suppose that you don't have the tools to solve that problem. Well, you can do something, which is divide the image up into pieces, and then apply fix flow to each piece. And the idea is that, well, if we make the pieces small enough, there won't be much variation in the velocity within that piece. And so the assumption that u and v are constant in that little area isn't such a bad one. And so now there are all kinds of trade-offs, because if we make these boxes very large, we get only a very coarse image of the flow. We only have a flow vector for each of these boxes. On the other hand, if we make the subimage areas very small, we have much less constraint. And we have much less of that wonderful noise suppression property of an overdetermined system. 
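The chopped-up-grid idea can be sketched directly: solve the fixed-flow least-squares problem, a 2 by 2 system of normal equations, independently in each block. The function names are mine, and no special handling of degenerate aperture-problem blocks is attempted.

```python
import numpy as np

def fixed_flow(Ex, Ey, Et):
    """Least-squares fixed-flow estimate for one patch: minimize
    sum of (u*Ex + v*Ey + Et)^2 over a constant (u, v).  The 2x2
    normal equations become singular when the patch is uniform or
    all gradients are parallel (the aperture problem)."""
    M = np.array([[np.sum(Ex * Ex), np.sum(Ex * Ey)],
                  [np.sum(Ex * Ey), np.sum(Ey * Ey)]])
    b = -np.array([np.sum(Ex * Et), np.sum(Ey * Et)])
    return np.linalg.solve(M, b)

def blockwise_flow(Ex, Ey, Et, block):
    """Chop the gradient images into block x block pieces and run
    fixed flow on each piece, giving one flow vector per piece."""
    H, W = Ex.shape
    flow = np.zeros((H // block, W // block, 2))
    for i in range(H // block):
        for j in range(W // block):
            s = np.s_[i * block:(i + 1) * block, j * block:(j + 1) * block]
            flow[i, j] = fixed_flow(Ex[s], Ey[s], Et[s])
    return flow
```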
And also, when we look at small areas of an image, they are much more likely to be similar to the type of image that we talked about that doesn't work. If I look at a very small area of the image and it just has this edge in it, well, that's exactly the kind of thing where the aperture problem comes in. And I can't determine what the motion is in that direction. So yes, you can do this. You can use what we have-- the fixed flow method on a grid of a chopped up image. But there are trade-offs, and they're somewhat unpleasant. If you make these areas fairly small, some of them may be even more or less uniform in brightness. If I'm looking at that gray wall, it's uniform in brightness. And if it moves, I can't tell. It's just the same. So anyway, that's something that has been done and works, but it's not the solution that we'll be looking for. We're moving towards talking about brightness more than perspective projection. But I want to just do one more thing with perspective projection. And that has to do with vanishing points. And these play a role in camera calibration or sometimes finding the relative orientation of two coordinate systems. And just to make that sound less mysterious, if we have man-made objects, they often have planar surfaces. And they often have right angles between their planar surfaces, unless you go over to the Stata Center. And so when we look in the images, we're going to find a lot of straight edges often. And often, a lot of them are parallel. And so we can actually exploit that. So if, for example, you're hovering above some building, like the main buildings at MIT, which are all on a rectangular grid, you can determine from the image certain vanishing points. And from that, you can determine your rotation relative to the coordinate system of that rectangular block. And so that's something there. Or if you look at a cube or some other calibration object, you may be able to recover parameters of the imaging system.
So it's important in the camera calibration, which is, of course, important in robotics and other applications. So let's just see what this is about. So in the first homework problem, one of the questions you were asked is, what's the projection of a line? And there are various ways of approaching that algebraically or geometrically. One geometric way is just to say, here's my line. I'm going to connect it to the center of projection. And what do I get? Well, I get a plane. If I connect-- if I look at the locus of all those lines that connect that point to the line, it forms a plane. And then what is the image of it going to be? Well, I'm going to intersect that with the image plane. So what's the intersection of one plane with another plane? It's a straight line. So there's a simple way of seeing that line in 3D projected into a line in 2D. It's a funny projection in that if you were to mark this like a measuring tape with equal intervals, and then you look at the projection of those marks, they won't be equally spaced, because the part where the measuring tape is close to you will image with larger magnification so the marks are further apart then. So the fact that a line goes to a line is a little bit overconfident. It's making us think that we understand the problem very well, when actually it's a little subtle. So the other way is algebraic. And so one way we can do this is to say that a line in 3D can be defined in various ways. One of them is we have a point on the line, and then we have a direction. And we might as well use a unit vector to define the direction. So that's one way of talking about a line in 3D. What are other ways? Can you think of some other ways? We can do a parametric representation where we have, for example, x is some function of some parameter T, and y is some function of parameter T, and z is some function of parameter T, or we can have some implicit representation, or we can intersect two planes. And why is that convenient?
Well, the equation of a plane is a linear equation. And so when you intersect two planes, we're dealing with two linear equations then. We've already seen that having linear equations can be a plus. But let's stay with this. And in component form, of course, that just means it's x = x0 plus alpha s, and similarly for y and z with beta and gamma. So then we're going to now project that into the image using perspective projection. And we can, of course, use the component form that we used up there, or we can use the vector form. And so we get x over f is x0 plus alpha s over z0 plus gamma s. Unfortunately, this doesn't lead to some very elegant, nice result, because the Z in the denominator depends on gamma and s. So that's our transformation. And so s is a parameter that varies along the line. So different points on the line have different values of s. And if that's a unit vector, then s is actually a measure of length along there. Because of that division by Z, it's kind of messy. But one thing that's interesting to do is to look at what happens when we go very far along the line. So we go there. So we make s very big. Well, that means that we can ignore x0, and we can ignore z0. And we get alpha over gamma. And that is called the vanishing point. So as you move along this line, you start to go more and more slowly in the image. And you approach, but never reach this point. And so actually the image of an infinitely long line is not an infinitely long line in the image plane. It starts somewhere, and this is where it starts. Then as we move along the line in 3D, we don't move along the line in 2D in a uniform way, because when we're very far out, we can move a long way in 3D. And it only has a tiny effect in the image. So that's what makes it kind of awkward.
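A quick numerical illustration of the limit: marching along a 3D line and projecting, the image point crawls toward (alpha/gamma, beta/gamma) and never passes it. The focal length and the particular line below are made-up numbers.

```python
def project(p, f=1.0):
    """Perspective projection of a 3D point (X, Y, Z) with focal
    length f: returns (f*X/Z, f*Y/Z)."""
    X, Y, Z = p
    return (f * X / Z, f * Y / Z)

# A line in 3D: a point p0 plus s times a unit direction (alpha, beta, gamma).
p0 = (1.0, 2.0, 5.0)
d = (0.6, 0.0, 0.8)  # alpha, beta, gamma
# March along the line; the image point approaches the vanishing point.
for s in (0.0, 10.0, 100.0, 1e6):
    x, y = project(tuple(p + s * di for p, di in zip(p0, d)))
vanishing = (d[0] / d[2], d[1] / d[2])  # = (0.75, 0.0)
```

Note that equal steps in s produce smaller and smaller steps in the image, which is the non-uniformity described above.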
Now, one thing that we should immediately recognize is that parallel lines have the same vanishing point, because the offset x0, y0, Z0, the origin of our line, they don't come into this equation. It's only the direction that matters. And so that's interesting, because that means that if we have a bunch of parallel lines in 3D, they are all going to give rise to the same vanishing point. So if we're looking at a rectangular building, there are three sets of edges, each containing parallel lines. And so we expect to see three vanishing points. So let's see if I can do this. This might be-- Now, this is a bit extreme because I picked the vanishing points pretty close. So here's a cube. Well, not that good. But anyway, if it was for real, there would be three sets of parallel lines, which each give rise to a vanishing point. And so what? Well, the point is that if I can find those vanishing points in the image, I can use them to my advantage to learn something about the geometry of the image taking situation. And just to let you know that this isn't some vague theoretical thing that nobody cares about, here's an application. So every year a few people are killed on the side of the road because of distracted drivers, even before texting. And so there's an interest in trying to warn whoever's stopped there-- police officer, construction worker, whatever-- that there is a car on a trajectory that may impact them. So how do you do that? Well, you can stick a camera, maybe an Android phone, in the window of the cruiser. And it's watching-- this is mostly nighttime-- it's watching headlights going by. And it's monocular, so it doesn't have depth information. But it can figure out whether the trajectory is possibly dangerous or not. But one of the things it needs to do is to figure out the geometry of its coordinate system and the road's coordinate system. You don't want the person who's using it to have to come out with surveying equipment and measure the angles and so on. 
So the camera has to, on its own, try and determine the rotation of its camera-centric coordinate system relative to the line, to the road, x, y, and z. And so one way to do that is to look for a vanishing point. So in the case that the road section is straight, you can use image processing methods to find those lines. And then you can intersect them to find the vanishing points. And then you can use those to recover at least two parameters of the transformation. One is how much is the camera turned relative to the direction of the road, and the other one is how much is the camera turned relative to the line to the horizon. So we've got pan and tilt. And we can get pan and tilt using vanishing points. Ugh, I don't want to do that. So let's think about using this in camera calibration, another application. So in calibrating a camera, there are several parameters you're looking for. The main ones are the center of projection. So you say, well, yeah, the center of projection is where the lens is. Yeah. But where's the lens? And relative to what? So here's our integrated circuit sensor. And up here is the lens or pinhole for the moment. And presumably, whoever built this thing tries to put the center of projection above the middle of the chip. But that's not necessarily going to be accurately done. I mean, these things are microscopic in some cases, like this thing. It had a focal length of 4 millimeters. And so we're dealing with very small quantities. Plus, the pixels here may be 10 microns, or in the cell phone maybe only 5. So it's very unlikely that you would be able to position the lens in such a way that it was accurate to one sensor position. So we need to recover that position. And we also need to recover the height of the center projection above the image plane. And ultimately, we'll be using a camera-centric system that's origin at the center of projection. 
But to do that, we need to understand how row and column in the image sensor translate into x and y in that camera-centric coordinate system. So the short answer is if you give me a coordinate system in the chip, which is a, let's say, column and row count, I want to know where that point is. And typically, the row and column count won't be taken from the center of the chip, but from one of the corners. And when you give me the position of the center of projection, what units do I want? The size of the pixel. It'd be nice to have it in microns. But if you don't know the size of the pixels, you can't do that. And conversely, you don't need to know the size of the pixels for projection. We can express that. Like the focal length, we can express it in terms of pixel size, particularly easy if the pixel is square. I mean, otherwise, you've got to deal with the fact that the x and y dimensions aren't on the same scale. So that's the task. Tell me where that point is. And it may need to be repeated, particularly if it's a camera that has zoom capability, because then all of that's going to change as you zoom in and out. And you'd want to perform this kind of calibration again. So there are various ways of doing this. One is-- and we'll talk about this some more later. But here's a very simple one. We use a calibration object. And so a calibration object is something with a known shape. And we already talked about that when I was showing the slides, where we used the sphere as a calibration object. So would a sphere be a useful calibration object here? So the image of the sphere, as you saw in homework problem one, is a conic section. And so for example, if the sphere was directly above on this line, its projection would be a circle. The sphere is up here. And how do I know that? Well, because I connect the sphere to the center of projection. And I get a right circular cone. 
And I extend that down here and intersect it with this plane, and intersect the right circular cone with a plane perpendicular to its axis. And you get a circle. But when I move the sphere to the side, it's going to become elliptical. It's going to become-- imagine-- take an extreme case where this is way down here. And now you project it into the image, it becomes a very elongated ellipse. And if you move it down far enough, it'll be a hyperbola. So yeah, you can do that. But it would then require that you accurately determine the position of that figure, whether it's an ellipse, or hyperbola, or circle, or parabola, and its parameters. So that can be done. But the noise gain is high. It's not a very good method. The big advantage of this is a sphere is easy to make, easy to obtain. A cube isn't. So let's try a cube. I know that a cube isn't easy because when my father got his master craftsman's certificate in machining, his task was to make a 1 centimeter cube. And it had to be accurate to a micron. And it apparently took him quite a while to do it because it had to be accurate to a micron, and the sides had to be at right angles, and so on. So making a sphere as a calibration object is somewhat easier than making a cube. But the cube has some huge advantages. And one way of exploiting it is this diagram. So if we take an image of the cube, we can detect the edges. We'll talk about that later. And then we can extend them to find the vanishing point. And by the way, the vanishing points don't have to be in the image. It could be that the image you actually see is this thing. The vanishing points are in that plane, but they, in many cases, are not in the image. And that's partly why I struggled. Aside from having no drawing ability, I struggled with that diagram because in the real situation, they tend to be further out. And so the perspective distortion, as it's called, wouldn't be as extreme. So what? So I have a calibration object that's a cube.
And then I take a picture of it. And I find the vanishing points. What then? Well, one thing I know is that the cube has three sets of parallel lines. And those are at right angles to each other. If they weren't, then it would be a parallelepiped, not a cube. So they're at right angles to each other. And so that's very important because it brings me to the equations for the vanishing points. It means that the directions to the vanishing points are at right angles to each other. So here's my center of projection, and then I have three vectors corresponding to the three sets of lines. So there's one that's going up and down, and one from that side, and one from that side. And I can draw them through the center of projection. So here's one, here's another one, and here's a third one. And so what are those lines? Well, those are lines in the direction of the 3D lines, which I guess we've lost now. So we said that all of the lines in this parallel bundle project into the same vanishing point. But there's one that's special, which is the one that goes through the center of projection. So think of this whole slew of parallel lines. And they all end up at the same vanishing point. But they can be represented by a single line. We can just pick one. We pick the one that goes through the center of projection. And so that's what this is. So this is-- and then because of the way our projection works, there's a point down here and a point down here in the image plane. So if you tell me the three vectors that point in 3D along the lines, I can tell you where they will be imaged just by this construction, because here's the direction that's going along one-- here all these parallel lines. They all going in that direction. And all I need to do is follow this backward into the image plane, and that's going to be its vanishing point. So that's vanishing point 1, vanishing point 2. I called them something else. Sorry. I guess I called them a, b, and c.
Now, the neat thing is I know that if my calibration object is a cube, that these three vectors up here aren't just any old three vectors, but they are all at right angles to one another. Hard to draw that, but there are three relationships between them. So let's call the unknown center of projection p. That's our task. Find p, given a, b, and c. Well, then I can say this: (p minus a) dot (p minus b) equals 0, and similarly for the other two pairs. So why is that? Well, p minus a is the vector along this line. And p minus b is the vector along that line. And those are the same vectors as these two. And we know they're at right angles. And the dot product of two vectors that are at right angles is 0. So that gives us the calibration procedure. We take an image. We find the vanishing points. And then we have those three equations, and we have three unknowns, namely the components of p, the center of projection. Or another way of looking at it is we need to know this height. Call it f. And we need to know where this is on the image sensor. So there are two degrees of freedom here, one degree of freedom there, three total. But simply speaking, we're trying to find where this is. And that's a vector in 3D, so it has three degrees of freedom. We've got three equations. Great. Well, if you look at them, you'll see that they're second order in p. p is the unknown. And they're second order. They're quadratic. So there's a finite number of solutions, except for pathological cases. But what is the number of solutions? So with a single quadratic, we know that there are possibly two. So maybe there are more solutions. So there's a thing here called Bézout's theorem, which we'll use quite a bit. And it says that the maximum number of solutions is the product of the orders of the equations. So for example, if I have two quadratics, it is possible there might be four solutions, 2 times 2. Here I've got three quadratics, so it's possible there could be eight solutions. So that's unpleasant, so we want to talk some more about that.
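Bézout's bound is easy to exercise on a small case. The circle-and-hyperbola pair below is my own illustrative choice, not from the lecture; it shows two second-order equations for which the bound of 2 times 2 = 4 real solutions is actually attained:

```python
import math

# Intersect the circle x^2 + y^2 = 4 with the hyperbola x*y = 1.
# Substituting y = 1/x gives x^4 - 4 x^2 + 1 = 0, a quadratic in x^2.
solutions = []
for x2 in (2 + math.sqrt(3), 2 - math.sqrt(3)):   # roots of u^2 - 4u + 1 = 0
    for x in (math.sqrt(x2), -math.sqrt(x2)):
        solutions.append((x, 1.0 / x))

# Bezout's bound: product of the orders, 2 * 2 = 4 -- attained here.
assert len(solutions) == 4
for x, y in solutions:
    assert abs(x * x + y * y - 4) < 1e-9 and abs(x * y - 1) < 1e-9
```

The bound is a maximum, not a guarantee; other choices of the two conics intersect in fewer real points.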
By the way, it's called Bézout's theorem after Bézout, who wrote this up, and actually Newton used this result in his Principia. But he didn't really formalize it. He just used it like anyone would know this, that kind of thing. And when Bézout wrote it up, he didn't actually rigorously prove it. So there's a whole lot of-- it's as usual. Someone ends up with their name on something, and there's a whole interesting story behind it, like, OK, he didn't-- he wasn't actually the first, or he didn't actually get it right, or something. Anyway, it's an important theorem for us. Now, with linear equations, the product is always 1, right? 1 to the whatever power is 1. So with linear equations, as long as we can match constraints with unknowns, we're done. Unfortunately, in other cases it's not quite that simple. But we can do something, which is notice that these aren't just any old quadratic equations. They have a very special structure. And we can subtract them pairwise to get rid of the second order term. So we can get-- so let's see. If we subtract the first and the second, we end up with this: (p minus b) dot (c minus a) equals 0. And then we can get a few more. So terrific. We've reduced it to three linear equations. And we know that they have only one solution. Well, not so fast. Are those three linear equations linearly independent? We have to worry about that edge case where the matrix is singular, and so on. Well, the truth is that if we add two of these, we get the third one. So they're not independent equations. So actually, when we get to the third one, we should stop. So yes, we can reduce it to two linear equations. But we're still left with one quadratic. And so the answer is that there are two solutions. And we won't actually do the algebra, but it's pretty straightforward. I'm not quite done with this. I wanted to say something more, but we're sort of out of time. So one thing that I wanted to still say is that those linear equations, what do they define in 3D space? Planes.
So each of them defines a plane. And what we're really doing is we're intersecting those planes. And it turns out that two planes intersect in a line. And it turns out in this case that the third plane contains that line. So the third plane doesn't get you anything. And then just to summarize how this works, and we'll finish this next time, often, we are in need of calibrating a camera. Time to contact is one of the few places where we didn't need a calibrated camera. And to calibrate a camera, we often use calibration objects. I mean, it doesn't have to be a cube. It could be, for example, the corner of a room, as long as you know the geometry of it. And then in that case, often vanishing points are helpful. And we can work out the geometry of the vanishing points, which leads us to a set of equations. And when we solve them, we find the position of the center of projection. Now, if lenses were perfect, that would be it. So this would be it for camera calibration. And we'll talk more about camera calibration later. But real lenses have radial distortion, and there are reasons why people don't completely get rid of that. So actually in practice when you do real robotics camera calibration, it's this plus the radial distortion parameter. So OK. That's it for today.
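The vanishing-point calibration can be sketched in a few lines. The two independent linear equations say, in their 2D part, that the principal point (x0, y0) is the orthocenter of the vanishing-point triangle, and the remaining quadratic constraint fixes f. The function name and the sample numbers below are mine, for illustration:

```python
import math

def dot2(u, v):
    return u[0] * v[0] + u[1] * v[1]

def calibrate_from_cube(a, b, c):
    """Given the three vanishing points a, b, c (2D image coordinates)
    of a cube's edge directions, recover the principal point (x0, y0)
    and the principal distance f.

    Subtracting the constraints (p - a).(p - b) = 0 pairwise gives
    linear equations whose in-plane part says (x0, y0) is the
    orthocenter of triangle abc; one quadratic then fixes f."""
    d1 = (b[0] - c[0], b[1] - c[1])   # altitude from a is perpendicular to b - c
    d2 = (c[0] - a[0], c[1] - a[1])   # altitude from b is perpendicular to c - a
    # Solve h.d1 = a.d1 and h.d2 = b.d2 by Cramer's rule.
    r1, r2 = dot2(a, d1), dot2(b, d2)
    det = d1[0] * d2[1] - d1[1] * d2[0]
    x0 = (r1 * d2[1] - r2 * d1[1]) / det
    y0 = (d1[0] * r2 - d2[0] * r1) / det
    # (p - a).(p - b) = 0 with p = (x0, y0, f) and a, b in the plane z = 0:
    f2 = -dot2((x0 - a[0], y0 - a[1]), (x0 - b[0], y0 - b[1]))
    return x0, y0, math.sqrt(f2)
```

For instance, a camera with principal point (0.5, -0.3) and f = 2 viewing a suitably oriented cube produces vanishing points (-1.5, -1.3), (1.5, 1.7), (4.5, -4.3), and the function recovers exactly those parameters. Taking the positive square root picks one of the two Bézout solutions; the other is its mirror image, -f.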
MIT_6801_Machine_Vision_Fall_2020
Lecture_21_Relative_Orientation_Binocular_Stereo_Structure_Quadrics_Calibration_Reprojection.txt
[SQUEAKING] [RUSTLING] [CLICKING] PROFESSOR: So we're talking about relative orientation, the second of four problems in photogrammetry, relevant to binocular stereo as well as motion vision, structure from motion. And we developed a solution. And we're wondering about circumstances where that won't work. And in particular, are there surfaces where we can't determine the relative orientation? And we got there: we found that the surfaces in that family are quadric surfaces. Now, keep in mind that this is in a coordinate system that's been specially lined up to make the equation simple. So this is, first of all, the center is at the origin. And the axes are lined up. So in a more general case, we'd have a more complicated looking equation, where we don't just have the second order terms. But we also have first order terms and zeroth order terms, constants. But in classifying the shapes, this is a convenient form. And we noted that if we have all positive signs, then we have an ellipsoid. And if A, and B, and C are the same, then it's a sphere. If we have one negative sign, we have a hyperboloid of one sheet. If we have two negative signs, we have a hyperboloid of two sheets. And if we have three negative signs, then we don't have any real locus. But if we extend it to complex numbers, then we can talk about an imaginary ellipsoid. Now, the particular equation we got from the problem of relative orientation fell in this category. And actually, the way we can get there is to note that it didn't have a constant term. So it was second order, but it only had the second order term and the first order term. And so think of it as an equation like that. And that means that R equals 0 is a solution. And so that point is on the surface. And what was that again? Well, that was the origin of the right-hand system. Or in the case of motion vision, that's the place where the camera is at time 2. So that's interesting.
That means that this weird surface is one where we actually have to be on it with our right eye, or move on it, in the motion vision case. Well, it turns out that in addition, if we plug in a minus B for R, that's also a solution. And what is that? Well, if we move left along the baseline by B, then we're at the origin of the left-hand coordinate system. So actually, that surface has to go through both eyes, so to speak. Or in the case of motion vision, we have to start on the surface and end up back on the surface. And this shouldn't be surprising, because everything should be symmetrical between left and right. It just so happens that we picked the right-hand coordinate system as our reference. But we would expect the same to be true of the left-hand coordinate system. OK. In addition, because this is a homogeneous equation, i.e. some polynomial in R is equal to 0 with no constant term, there's no scaling. We can't tell the size of the vector. So R equals kb is also a solution, right? If R equal to B is a solution, or R equal to minus B, then R equal to kb is. And that means that the whole baseline is in the surface. So this has a number of implications. One of them is, it suggests that, well, maybe this is a rare case. How likely is it that you're on the surface with both camera positions and the whole baseline between them is on the surface? The other thing it means is that it's a ruled surface. That is, we can draw lines in the surface, which of course, we can't do in the case of the ellipsoid, right? If we draw a tangent, it touches the surface at one place. And then, it goes off into space. But apparently, the surface we're interested in is ruled. And that means, without doing all the detailed algebra, it has to be that one. Because neither one of these two is ruled. Also once we've decided that it's a hyperboloid of one sheet, we know that it actually has two rulings.
That is, at any given point, we can draw a line that stays in the surface going in one direction. And there's a second one that crosses it. So that's pretty interesting. And that corresponds to the method used to manufacture tables, chairs, what have you out of straight sticks where you use two sticks that cross. And so it's interesting that we find this surface. OK. It seems like a very special case. Why are we worrying about it? Well, the thing is that this is the general equation for the quadric. But it also covers a large number of special cases. Hyperbolic paraboloids and whatnot-- you can find the whole list of these special cases online. I won't go through all of them, just focus on one in particular, which is planar. Now, we know that this is the equation of a plane in 3D. And if we take a second equation of some other plane in 3D, now both of these are linear in X, Y and Z. But if we multiply the two of them, then the product will be equal to 0. And what is that equation describing? Well, it's describing those two planes. And if we multiply it out, we get a quadric. We get something that has up to second order terms in X, Y and Z. And so curiously, a surface like that falls into this category. And planar surfaces are pretty common in the world. So we need to worry about that. Now, it turns out that for these to have no constant term, one of the planes has to pass through the origin of the right system and the origin of the left system. In other words, it has to pass through the baseline. And so one of the planes is an epipolar plane. And what's the image of an epipolar plane? It's a straight line, right? Because it's coming right through your eye. And so you're seeing it edge on. And so that's not particularly interesting, because we don't see any of its surface. But the other plane is arbitrary. The other plane can be anything. And that's scary, because that means that potentially, when we're looking at a planar surface, we end up with this ambiguity.
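That a pair of planes really is a degenerate quadric is quick to verify numerically: the product of two linear forms has only terms up to second order, and it vanishes exactly on the union of the two planes. The particular plane coefficients here are made up for illustration:

```python
# Two planes written as linear forms n.r + d in (x, y, z).
def plane1(x, y, z):          # a plane through the origin, like an epipolar plane
    return 2 * x - 1 * y + 3 * z
def plane2(x, y, z):          # an arbitrary scene plane
    return 1 * x + 4 * y - 2 * z + 5

def quadric(x, y, z):
    """Product of the two linear forms: a polynomial with terms up to
    second order, i.e. a (degenerate) quadric surface."""
    return plane1(x, y, z) * plane2(x, y, z)

# Any point on either plane satisfies the quadric equation.
assert quadric(1, 2, 0) == 0          # 2 - 2 + 0 = 0: lies on plane 1
assert quadric(-1, -1, 0) == 0        # -1 - 4 + 5 = 0: lies on plane 2
assert quadric(1, 1, 1) != 0          # generic point: on neither plane
```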
And it's not just a problem for binocular stereo and reconstructing topographic maps. But it's a problem for motion vision as well, recovering structure from motion. Because the two problems are really the same, mathematically. And well, we'll pretty much stop there-- except that we haven't really looked at how the geometry affects things beyond this. So in particular, it seems unlikely that we would run into this situation. Because you have all these special circumstances. But maybe your real surface is pretty close to one of these. And then, you won't run into this problem exactly. But you have large noise amplification, high noise gain. So we would like to stay away from that situation. And it turns out that the field of view has a big influence on that. So I won't be proving anything about that. But basically, if we have a large field of view, then this problem becomes much more well-posed, much more stable. If we have only a narrow, small patch on the surface, the chance that that patch happens to be very similar to one of these is high. And so it's pretty unstable. So high field of view-- a large field of view. And so that led to some amusing things. One of them was that people figured this out pretty early when they started doing aerial photography. But they didn't have the lens technology to build something that had incredibly high quality image reproduction, which they needed so they could see a lot of detail. Low radial distortions-- so they didn't have the image distorted, and also a large field of view. So what they did was, they stuck a bunch of cameras together into a very rigid structure. So there was a set of steel beams. And you stuck these cameras on there. And they are called the spider heads, because spiders have eight eyes. And so there's a similarity to this solution. 
Of course, it does mean now that you have to calibrate these cameras relative to one another so that out of the individual pictures, you can compose a mosaic picture as if it was taken by a single camera with a wide field of view. But it makes it clear that this was a well-known problem going back 100 years. At least from a practical point of view; the mathematics came a bit later. OK. Relative orientation, structure from motion-- that was number 2. Let's go on to number 3. And as we go along, we do less and less detail, because a lot of the basic ideas are common to all of this. OK, so relative orientation-- let's go on to interior orientation, which is basically camera calibration. And you'll say, oh, but we did that. Well, we had one idea, which was using vanishing points. So the problem was finding X0, Y0 and F, the principal point and the principal distance. And we had one way of doing that, which was to image a rectangular brick shape. So our calibration object could be a brick machined more carefully. But that was the basic idea. And that works. It's not very accurate. It's hard to make it very accurate. And so what we want is a more general method. Also we need to take into account radial distortion. And the method that we developed using a rectangular brick doesn't lend itself to that very well. So what's this about radial distortion? Well, I mentioned that we can make these glass analog computers that are so powerful in redirecting rays into just the right direction so they come to fine focus and give us a very detailed image, but there are trade-offs, all kinds of trade-offs. And I mentioned that in fact, it's impossible to make a perfect lens. And you basically have to decide what you're going to put up with, and what not. And radial distortion is something that generally, unless you are taking a picture of an architectural structure with lots of straight lines, you don't notice much.
So if you're taking pictures of people, and forests, and cats to post on YouTube, it doesn't really matter that there's some radial distortion. Because nobody will ever notice it. And so in designing lenses, a lot of the other problems were reduced in difficulty by allowing radial distortion, which basically just means that there's some point in the image-- the center of distortion. And when we express coordinates in polar coordinates, then the image doesn't appear where it should. But it appears somewhere else along that line. And that error varies with radius. And it typically is approximated using a polynomial. So the usual notation is something like this: delta x = x (k1 r^2 + k2 r^4 + ...), and delta y = y (k1 r^2 + k2 r^4 + ...). So you can see that (delta x, delta y) is proportional to (x, y). So it is a vector that's parallel to the radius vector. So it is along that radial line. And then, the lowest order term is r squared. And why is that? Well, I'll leave that for a potential homework problem. Why is it we're not including k0 of r, or something? Then, the next term is r to the fourth. And in many cases, the first term is good enough to get you somewhere. And so often, we only find that first term. Yeah, towards the edge. But those coefficients tend to be very small. So if you buy a telephoto lens from, say, Zeiss, you get with it a plot of the radial distortion. And they're hand calibrated. That's why you pay a lot of money for those lenses. And you'll see that there's this quadratic rise. But there's also a higher order drop. And that's usually taken care of by that second term. So yeah, the distortion gets a lot worse as you go out towards the corners of the image. And it's not much of an issue in the center. In machine vision, often, we can get away just with a first term quadratic, just make it simple. Now, how do you measure this? Well, a famous method used in the past is the method of plumb lines. So you go into an area with lots of space like a parking garage. And you take some string and some weights.
And you hang the weights along those strings. And why do you do this? Well, because now, first of all, the strings will be straight lines. They'll be stretched out by the weights. And then, they'll be parallel, because presumably, gravity points in the same direction in those places. They're not far enough apart for that to change. And then, you take a picture of that. And then, your picture, if you exaggerated, might look like that. And so first of all, you can see that there's a distortion. And then also, you can see what type it is. This is called barrel distortion, because it looks like the staves on a barrel. In some cases, you might find the opposite. And that's called the pincushion distortion. I guess we don't do much sewing by hand anymore in our modern world, anyway. So that may seem like a strange concept. But people used to need a way to put away their pins. And they had a little cushion. And the shape of the cushion is the shape that we see there. So which one you get depends on the sign of k1. And the other thing is, of course, once you've got these images, you can look at the radius of curvature of these lines. And you use them to estimate k1. But we'll do it by actual measurement of images. One subtle point here is, do we want to go from undistorted coordinates to distorted? In other words, we'll take our perspective projection equation. That tells us where things should appear. And then, we look at the image to see where they actually appear. And then, we fit this polynomial approximation to that. Or do we want to go from the distorted to the undistorted, again using a polynomial? And why might we want to do that? Well, because the distorted is something we actually can measure. We don't have a way of measuring the undistorted quantities directly. So the two are related, of course, by what's called series inversion. Not a surprise that that's what it's called. But it's not something that's usually taught. And it's not entirely trivial.
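As a sketch, here is the forward distortion model from above, plus one cheap numerical way to go the other way: fixed-point iteration, standing in for a formal series inversion. The coefficient values used are made up, purely illustrative:

```python
def distort(x, y, k1, k2=0.0):
    """Forward radial distortion: (x, y) are undistorted coordinates
    relative to the center of distortion.  The displacement is along
    the radius, lowest-order term r^2.  Note the sign of k1 decides
    barrel (k1 < 0, points pulled inward) vs pincushion (k1 > 0)."""
    r2 = x * x + y * y
    s = k1 * r2 + k2 * r2 * r2
    return x + x * s, y + y * s

def undistort(xd, yd, k1, k2=0.0, iters=25):
    """Go from measured (distorted) to undistorted coordinates by
    fixed-point iteration, refining the estimate of r^2 each pass."""
    x, y = xd, yd                        # initial guess: no distortion
    for _ in range(iters):
        r2 = x * x + y * y
        s = 1.0 + k1 * r2 + k2 * r2 * r2
        x, y = xd / s, yd / s
    return x, y
```

For the small coefficients typical of real lenses the iteration converges very quickly; for example, `undistort(*distort(0.3, -0.2, k1=0.05), k1=0.05)` returns (0.3, -0.2) to high precision.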
But you can automate it and use some algorithm that will take you from the polynomial going in one direction to the polynomial going in the other direction. The way it affects us is that it affects what coordinate system we want to do the final optimization in. Because obviously, it's going to be easier to do the optimization in one of those, depending on which way around you've expressed the polynomial. And I guess we're going to try and minimize the error in the image plane. So from that point of view, it may be that we want to go from undistorted to distorted. But we'll hang onto that. So okay. By the way, radial distortion-- how about tangential distortion? Well fortunately, we don't have to worry about that anymore. But just for reference, this is what it would look like: (delta x, delta y) proportional to (minus y, plus x). Again, a polynomial-- something growing with a power of the radius in the image. So it's not a problem in the middle of the image. And then, it gets worse when you get to the corners. And now, it's in a direction that's orthogonal. So delta x delta y-- that vector in 2D-- is now perpendicular to this vector, to the vector xy, right? So if we take the dot product, you can see it's 0. So that used to be a problem, because imaging systems were electromagnetic vacuum tube apparatus. And in addition to radial distortion, they would have tangential distortion based on exactly how you placed those electromagnets, and so on. It's not a problem in modern devices, because the chip has perfect geometry. And the lenses, if they're rotationally symmetric, only have radial distortion. But just be aware of the fact that it's called radial distortion. Because there's another one. There are some additional factors that we're ignoring, because they're small, and because they depend on the quality of the lens assembly. One is called decentering.
And so in particular, if the center of distortion is not the same as your principal point, so you're distorting about a point other than what you would normally consider the center of the image as far as perspective projection is concerned, then you're going to get an offset that depends on position. And usually, it's very small. Because usually, things are assembled accurately enough for that not to be a problem. But if you want to do really high quality work like aerial photography, then you do need to include that. Another thing to consider is if the image plane is tilted. So here's our lens. And now, your image-- of course, it won't be tilted that much. But it's a mechanical thing that somebody built. So there's a possibility that there'll be some small error there. And that, of course, means that the magnification isn't quite constant across the image. And the focus will also be an issue. But if it's a very small effect, the focus won't be affected much. But it will still introduce a distortion. And it turns out that these two are related. So if you want to, you can have a more complicated model for distortion. We're not going to do that. But it turns out in the end, when we do the nonlinear optimization, you can have whatever you want. We're going to start off with some closed form formulas. But at the end, we're just going to throw up our hands and say, oh, this is a difficult nonlinear problem. We'll just give it to one of those packages. And at that point, your model of distortion can be more complicated without high penalty, other than overfitting problems, like you've put in so many parameters that it's going to specifically tune it to your experimental measurements. And the resulting apparent high accuracy is bogus. But beyond that, we can have more complicated-- so what's the strategy? So what we're going to follow is Tsai's camera calibration method with some modifications. Yeah. Well, that's a good question.
Because we find that mechanical adjustments and fine tuning at the manufacturer is expensive. Software solutions tend to be cheap. And so in modern times, it's been mostly a matter of extending the model of distortion and having a couple more parameters to tune. In the past, it was indeed a matter of fine tuning when you manufactured. It wasn't done for anything except aerial photography, where you want the geometry to be absolutely straight. And they would have fine adjustments. And if you tweaked any of them, your warranty was dead. So naturally, they spent a lot of effort to get it squared up. And in some cases, the fix was not actually adjusting the tilt of the image plane, but introducing a prism wedge, a very small angle that would compensate for it. And you'd measure how much tilt there is in the image plane, and then go to the storeroom and pick out the compensating element that would just get rid of that component. OK. So Tsai came up with this scheme. And it involves, as you might imagine, a calibration object. And the calibration object could be anything that you know coordinates on very accurately. And we'll have to make a distinction between planar calibration objects, which obviously are easier to make and keep in the storeroom, and make very accurately using lithographic reproduction methods, or three-dimensional calibration objects like the rectangular brick that we talked about, which are harder to make, harder to maintain accurately. And then on the other hand, they have some advantages in terms of calibration. So there's a tension there. So we get correspondences, this time between image points and known points on this three-dimensional object. Now, what makes it not quite so easy as when we were talking about the vanishing point method is that we're unlikely to be able to determine, by getting a tape measure, what the relationship is between the calibration object and the camera. So they're sitting on the floor. 
We take a picture of it, put the camera on a tripod. And we can go out, and we can measure how far is the first point on that object. But then, how is it rotated in space? And it's just not practical, particularly since you don't actually know where the center of projection is, that front nodal point that we talked about when we were talking about image projection. And so that means that we need to add exterior orientation. So rather than find just the interior orientation of the camera as we did when we used vanishing points, now we're going to solve the problem of figuring out where the calibration object is in space and how it's rotated, as well as finding the camera parameters. And that produces much more accurate results, because there aren't any external measurement errors, or errors because we don't know, in this complicated lens with many elements, exactly where is the front nodal point. It's not like there's a little mark on the side. Because there can't be a mark on the side. It's right inside the lens. So how would you denote it? OK. So let's see. So that makes it more complicated, right? Because interior orientation has three degrees of freedom, if we ignore distortion for the moment. And how about exterior orientation? Well, that's translation and rotation. The translation and rotation position the calibration object. So that's six degrees of freedom. So we've taken something that's pretty simple, only three unknowns, and we've turned it into something that has nine. But it does actually make the problem simpler and produces much more accurate results. OK. So in the interior orientation, we have the good old perspective projection equation: x minus x0 = f xc / zc, and y minus y0 = f yc / zc. OK. So xc, yc, zc is in camera coordinates. So if we know some point in the camera coordinate system, we can calculate the position of the image. And x0, y0, f-- that's the interior orientation. Right? It's the principal point and the principal distance. OK.
Now, the strategy here is going to be that we try to eliminate some parameters that we don't like that are difficult to deal with, like radial distortion. So we're going to try and find a method that, right away, modifies the measurements in such a way that the results are not dependent on radial distortion, and then get a closed form solution for some of the parameters out of all of these parameters, and then finally, when we no longer can find closed form solutions, resort to number crunching. And so why do we even bother with this? Well, because the numerical methods minimize some quantity which has multiple minima. And we want the true one. We don't want to get stuck in the wrong local minimum. And so we need a good initial guess. And this is how we get the good initial guess. So in the process, we're allowed to violate many of the principles that we've established, because this isn't going to be the answer. This is our first guess for the number crunching iterative solution. And so for example, we said that we should be minimizing the error in image position. But that's very hard to do directly. So we're going to minimize some other error that is related to the image position error. And that's OK, because we're not going to stop there. This is just to get the initial condition. OK. So the xI, yI-- that could be just the row and column in the image sensor. Or it could be millimeters from some reference point. But it's very convenient to just use the row and column numbers. And then for f, well, f could be in millimeters. But it could also be in pixels. Suppose that our pixels are square. And we just use the row and column number as coordinates for image position. Then it's very convenient to express f in pixels. Say it's 1,000 pixels. Why? Well, because when we apply the perspective projection equation, then the units above and below match. And so we can use any units we want-- millimeters or pixels. So OK. Now, so there are three parameters here.
And then, we add to that exterior. And that of course, is rotation and translation. And so we have a vector for the camera, which is going to be a rotated version of the vector for the scene plus some translation. So this is again, the camera. And so that's, of course, the xc, yc, zc we've talked about over here. And this is the scene, or object, or world coordinate, or whatever you want to call it. It's not really a world coordinate system. It's a coordinate system in the calibration object. So we know the calibration object very accurately. And we know its coordinates relative to some system that's embedded in that object, like maybe the corner of that cube, or rectangular brick. OK. Now as I said, we're going to ignore our better instincts. And for a start, we're going to use rotation matrices here: (xc, yc, zc) = R (xs, ys, zs) + (tx, ty, tz), where R is the 3 by 3 matrix with elements r11 through r33. OK, so this (xs, ys, zs), of course, is a coordinate in the calibration object in its own coordinate system. And then, we're going to rotate that. And then, we're moving the object out by this distance. And of course, that's the unknown. This is unknown. And that is unknown. And as I mentioned before, we're using the equations in a weird way. Because normally, you would use these equations to take a position in the calibration object and transform it into a position in the camera coordinate system. That's what their purpose in life is. But we're turning this upside down. Those are the things we know. And we don't know what this matrix is. And we don't know what that vector is. And so those are the things we're going to try and recover. OK. So now, we combine interior orientation and exterior orientation. And what we get is (x - x0)/f = (r11 xs + r12 ys + r13 zs + tx) / (r31 xs + r32 ys + r33 zs + tz), and a similar equation for y, with r21 xs plus r22 ys plus r23 zs plus ty on top-- and then, the bottom is exactly the same as up there. OK. And again, written that way, it basically allows us to map from coordinates in the calibration object to image coordinates. But we want to use it instead to recover as many of these things as we can. OK.
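The forward mapping just described, exterior orientation followed by perspective projection, is only a few lines; the function name and argument layout here are mine:

```python
def project(ps, R, t, f, x0, y0):
    """Map a point ps = (xs, ys, zs) in calibration-object coordinates
    to image coordinates: first rc = R ps + t (exterior orientation),
    then perspective projection with f, x0, y0 (interior orientation)."""
    xc, yc, zc = (sum(R[i][j] * ps[j] for j in range(3)) + t[i]
                  for i in range(3))
    return x0 + f * xc / zc, y0 + f * yc / zc
```

With R the identity, t = (0, 0, 0), f = 2, and the principal point at the origin, the object point (1, 2, 10) lands at (0.2, 0.4). Calibration is the inverse problem: given many such (object point, image point) pairs, recover R, t, and the interior parameters.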
Now, I mentioned that it'd be convenient to get rid of the problems of radial distortion and fine tune things right at the end to get those coefficients. And also it turns out to be difficult to get f at first, and tz. So f occurs here. Now, one way we can deal with that is to look only at the direction in the image. So I guess it disappeared up there. But if we work in polar coordinates, radial distortion just changes the length. It doesn't change the angle. Similarly, if I change the principal distance f, all that happens is that the image gets magnified with f. And so it moves radially. And the same happens if I change the distance z to the object. All that happens is the magnification changes. And so again, we're moving along the radius. So the idea is, let's do something to deal with that angle and forget the radial distance in this polar coordinate system. And so for example, we can do this. We can divide these two equations, right? The common bottom, r31 xs plus r32 ys plus r33 zs plus tz, cancels, and so does f. So we get xi prime over yi prime = (r11 xs + r12 ys + r13 zs + tx) / (r21 xs + r22 ys + r23 zs + ty). Because now, we've gotten rid of f. We have a new equation that doesn't involve f. And we also have gotten rid of tz. Right? So tz occurs here and there. And those two terms cancel. And so we're just left with tx and ty. So we've combined two equations, two constraints. We get a new one, which has fewer of the unknowns in it. So it's going to be easier to find. So now, we cross-multiply. And we gather up terms. And we get (xs yi prime) r11 + (ys yi prime) r12 + (zs yi prime) r13 + (yi prime) tx - (xs xi prime) r21 - (ys xi prime) r22 - (zs xi prime) r23 - (xi prime) ty = 0. And I'm gathering up terms in such a way that it's clear what are the unknowns. And of course, in our case here, the unknowns are the components of r, and tx and ty. Well, the two sides of the cross-multiplied equation are equal. And if I bring this over to the other side, I get this minus that equals 0. So there's an equation. And let's look at that equation to see what's in there. So first of all, xs and ys and zs-- that's a coordinate in the calibration object. We know those. Then, we've got xi prime and yi prime. Those are image coordinates.
And we've measured those. So the things in parentheses are known. OK, just a second. And what's unknown are r11, r12, et cetera, et cetera. So this is a linear equation in those unknowns. Oh, OK. They are these things. So let me just-- yeah. Thanks for pointing that out. So in order to do this, we need to know where the principal point is so we can subtract it out. And of course, we don't actually know where it is. But for this purpose, we just need an approximation. And so we can take the center of the image sensor. Unless we know better, we can take half the number of rows and half the number of columns, and put it there. Now, that's going to be a problem if we're dealing with image points that are right close to it, because the directions to those points is going to be affected a lot by any small error in our guess at what the principal point is. So what we do is-- and what Tsai doesn't mention is that we throw away all of the correspondences that are close to the assumed center of the image, right? Because if we're dealing with the direction of a ray out here, and this center is wrong by some small amount, that's not going to have a huge effect on the direction. But if we're dealing with a point in here and we move the center, that's going to have a large effect on its angle. So we're cheating by saying, oh, we have a guess at what x0 and y0 are-- the principal point, which in fact we're trying to determine. But it's OK, because it's only an approximation. And we don't use the data where it matters, where that effect would be the most severe. So OK. So we subtract out-- we reference everything to the image center that we assume is close to x0 and y0. And then, we get this linear equation, the unknowns. And we get one of these equations for every correspondence, right? So every time we say, oh, this point in the image is that point on the calibration object, we can write down one of these equations. And of course, the xs and yI's will change as we go. OK. 
Now, it's a linear equation. And how many do we need? Of course, with just one, we can't solve, because we've got a bunch of unknowns. So we need to count how many unknowns we have. OK. So now, we got rid of some stuff. So what's left? So what's left is-- so those appear in there, as do these. And tx appears, and ty. So what doesn't appear? Well, so out of all of the unknowns we're trying to solve for, these are the ones that appear in the equation. There are eight. That seems to make sense. Eight of those. And then, there's a bunch more that don't. Now, keep in mind that we're not enforcing orthonormality of the matrix, right? Because we're pretending that those are nine unrelated numbers, not three degrees of freedom of rotation. So there are really six numbers here, when we know that rotation only has three degrees of freedom. Now as for these other three here, if we ever get these six, we should be able to get this one by cross-product, because we know that the rows of the rotation matrix are orthogonal. And so if you find a vector that's orthogonal to row 1 and row 2, then it's going to be parallel to row 3, right? And so the cross-product of the first two rows gives us something that's parallel to the third row. So we can recover this one afterwards. But right now, we're not even enforcing that this is supposed to be a unit vector. This is supposed to be a unit vector. And these two vectors are supposed to be orthogonal. We're just going to go with this. OK, the eight unknowns-- so that suggests eight equations are needed, eight correspondences. But that's not quite true, because this equation is homogeneous. The result is equal to 0. And we all learned how to solve linear equations. I don't know about your education, but typically, there's not much emphasis on homogeneous equations. And they come up quite a bit in machine vision. So it's important to know what to do with them.
So one feature of a homogeneous equation is that a linear combination of the variables equals 0. If I double all the variables, it'll still equal 0. So there's a scale factor issue-- from the homogeneous equation alone, I cannot figure out what the scale factor is. And well, then a method of solution is, take one of the unknowns. And just set it to whatever you want. So for example, here we might say ty equals 1, right? Why? Well, because whatever the solution is, I can scale it so that ty equals 1. And conversely, if I get a solution with ty equals 1, then I have some multiple of the true solution. And the equation doesn't tell me what multiple. I can't figure that out from that equation. So this is a way to proceed. OK. So that then reduces it to linear equations in seven unknowns, right? Because I've fixed one of them. And so that means that I need seven correspondences. So your calibration object should have seven points that are easily identified, preferably from any point of view. And that means you probably need more than seven, because some of them will be hidden. So let's see. So if you have a cube, from a general position one of the eight corners is typically hidden. You'd be left with seven. Oh, that's a nice match. So a cube is not a bad calibration object. OK. So then out of this, we get some multiple of the true solution. We get r11 prime, r12 prime, and so on. So this method of solving the homogeneous equations will give us that. Oh, so by the way, if we have exactly seven correspondences, we're going to end up with seven linear equations and seven unknowns. And we know how to solve those using-- I don't know-- Gaussian elimination, or MATLAB, or whatever you want. What happens if we have more than seven correspondences? First of all, that's desirable. The more correspondences you have, the tighter your solution, the smaller the error is. And with seven correspondences, you will get a perfect fit. Does it mean you have no error? No.
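(In code, the linear stage up to this point might look as follows. This is only an illustrative sketch, not Tsai's own implementation: the function name is made up, it builds one equation per correspondence from the cross-multiplied constraint, and it fixes ty = 1 exactly as just described, so it will misbehave if the true ty happens to be near 0.)

```python
import numpy as np

def tsai_linear_system(scene_pts, image_pts):
    """Solve the homogeneous constraint
        yi*(r11 xs + r12 ys + r13 zs + tx) - xi*(r21 xs + r22 ys + r23 zs + ty) = 0
    for the first two rows of R and for tx, after fixing ty = 1 to pin down
    the free scale factor.  Image coordinates are assumed to be measured
    relative to the (approximate) principal point.  Sketch only."""
    A, b = [], []
    for (xs, ys, zs), (xi, yi) in zip(np.asarray(scene_pts), np.asarray(image_pts)):
        # unknown vector v = [r11, r12, r13, tx, r21, r22, r23]
        A.append([yi * xs, yi * ys, yi * zs, yi, -xi * xs, -xi * ys, -xi * zs])
        b.append(xi)              # the xi*ty term, moved across with ty = 1
    v, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return v[0:3], v[4:7], v[3]   # r1', r2', tx' (all scaled by 1/ty)
```

With exactly seven correspondences this is an ordinary seven-by-seven solve; with more, lstsq gives the least-squares fit.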
But if you take more correspondences, you can estimate the error. So it's not only that you get a better answer, but you also get an estimate of what's wrong with it. So typically, you'd use more than seven. And that means that your system of linear equations is overdetermined. And then, you use least squares, pseudo inverse, standard stuff to find the best solution. OK. But whether it's seven or more, now we have this. And we have to figure out what the actual solution is. Well, we know that these are supposed to be unit vectors. So we can calculate a scale factor. So we can compute a scale factor to make those be unit vectors. And oh, what if those two don't agree? Well, that's a good sanity check. If you do this, and you find that those two scale factors aren't approximately the same, then there's something seriously wrong. For example, you have misidentified the correspondences. You thought this was the point on the corner of the cube on the left, but it wasn't. Then, these will come out different, typically. In practice, they won't ever be exactly the same. So you can take the average, if you like. And so now, we can scale this vector to turn it into this one. OK. So what we've done now is, we have a first estimate of everything except f and tz and radial distortion. And this was closed form. If we do pseudo inverse, that's a closed form solution. OK. So next step is going to be finding f and tz. But while we're here, we're trying to make these unit vectors like they're supposed to. But we haven't really enforced that they're orthogonal. So that'll be another check. So you could take r11 prime, r21 prime, et cetera, r22 prime. So this is supposed to be equal to 0. And again, if it's not, then that's a potential problem. In practice, it'll never be exactly 0. But if it's large, then that means, again, something went wrong in your calculation. But we're going to need the full rotational matrix in a moment. 
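(A sketch of the scale recovery and sanity check just described; the 5 percent tolerance and the averaging of the two estimates are arbitrary choices of mine, the lecture just says the two scale factors should come out approximately the same.)

```python
import numpy as np

def recover_scale(r1p, r2p, tol=0.05):
    """The homogeneous solve returns the first two rows of R (and tx, ty)
    only up to a common scale.  Since rows of a rotation matrix are unit
    vectors, each row gives an estimate of that scale; a large disagreement
    signals something seriously wrong, e.g. misidentified correspondences."""
    s1, s2 = np.linalg.norm(r1p), np.linalg.norm(r2p)
    if abs(s1 - s2) > tol * max(s1, s2):
        raise ValueError("scale estimates disagree -- check correspondences")
    s = 0.5 * (s1 + s2)            # average the two estimates
    return r1p / s, r2p / s, s     # unit-length rows and the scale itself
```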
And so that means we're going to take the first two rows and take their cross-product. But if these aren't orthogonal, then we're going to get some sort of messy matrix that's not orthonormal, and so on. So squaring up-- so we have two vectors. And they're approximately orthogonal. How do we get a pair of vectors that is orthogonal, and that's as close as possible to the two vectors we started with-- so what is the nearest set of orthogonal vectors? So let's draw it this way. Here are two vectors, those first two rows of the rotation matrix. And they're not quite orthogonal. And now, we want to make some small adjustment so that we get new vectors that are orthogonal. And then, we take their cross-product. And we have the complete rotation matrix. By the way, heaven forbid that we end up with a reflection matrix. That's something we want to worry about. Now, if we are the ones taking the cross-product, we can make sure that it's a rotation, not a reflection. OK. Well, it turns out-- and this is boring least squares-- that the smallest adjustment is a' = a + k b and b' = b + k a. That is, the adjustment in a is in the direction b. And the adjustment in b is in the direction a. And so now, how big is k? That's the only remaining question. All right. We want the new vectors to be orthogonal, (a + k b) dot (b + k a) = 0. And so there's the equation. And we have to solve for k. So you get a dot b plus (a dot a plus b dot b) k plus k squared (a dot b) equals 0. So there's a quadratic for k. Solve for k and iterate. Well, if we get the exact right value of k, we don't have to iterate. But we'll see in a second, that's actually not going to happen. Why? Well, look at this quadratic. The first term and the last term are 0 at the solution, right? We want them to be orthogonal. And so near the solution, those two terms are going to be very small. And so this is going to be a nasty, numerically unstable quadratic equation. We're not used to that. We're used to seeing more complicated equations being nasty.
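(The whole squaring-up step, sketched in code. Rather than solving the quadratic for k exactly, this version drops the small k-squared term and iterates, which is the shortcut the lecture arrives at below; it then renormalizes and takes the cross-product for the third row. A sketch of my own, not the lecture's code.)

```python
import numpy as np

def square_up(a, b, iters=5):
    """Adjust two nearly orthogonal vectors by a' = a + k*b, b' = b + k*a,
    choosing k from the linearized equation (the k**2 term is negligible
    near the solution) and iterating.  Returns unit-length orthogonal rows
    plus the third row from the cross product."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    for _ in range(iters):
        k = -np.dot(a, b) / (np.dot(a, a) + np.dot(b, b))
        a, b = a + k * b, b + k * a   # simultaneous, symmetric update
    a /= np.linalg.norm(a)            # restore unit length
    b /= np.linalg.norm(b)
    return a, b, np.cross(a, b)       # cross product gives a rotation, not a reflection
```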
But this is one case where the quadratic actually fails. And so instead of solving the quadratic, we do that. Where does that come from? Well, suppose that k is very small already. Then, k squared times a dot b is even smaller. So forget that. And then, solve the rest for k. And near the solution, those two are going to be unit vectors-- a dot a plus b dot b is 2. And so you get k equals minus a dot b over 2. And that's why I said iterate, because rather than try to solve that quadratic very accurately, you just solve that simple equation and iterate it a couple of times. So instead of using the standard formula for the solutions of a quadratic, we use this approximation. Anyone awake out there? [CHUCKLES] Is x = 2c / (-b ∓ sqrt(b^2 - 4ac)) your standard formula for the solutions of a quadratic? OK, probably not. But believe it or not, that is a formula for the solutions of a quadratic. And it's sad that we don't know this. So what's the other one? Well, it's x = (-b ± sqrt(b^2 - 4ac)) / (2a), right? So this is the one we were all taught. It turns out, the first one is also a formula. And the way you can check it is that if you have the two roots, the product x1 x2 is supposed to be c over a, and the sum x1 plus x2 is minus b over a. So you can easily check that both of these formulas are right by checking the product and the sum of the roots. So why do I bring this up? Well, in the standard formula, depending on the plus or the minus that you're using, you may be subtracting nearly the same size quantities. And we know that since computers can't represent real numbers exactly, there is going to be a loss of precision. So if you have 2.111111, and you subtract 2.111 with a 2 in it somewhere, you get a very small number. And you know that you can't really trust that number, because it only has a limited precision. So in the case of real solutions, one of the two answers you get from either formula alone is rather poor, right? Because in one of the two cases, these two have opposite signs.
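(As a side note in code: combining the two forms of the quadratic formula so that b and the square root always have matching signs gives a cancellation-free root finder. This is a generic numerical utility, not something specific to the calibration problem.)

```python
import math

def quadratic_roots(a, b, c):
    """Real roots of a*x**2 + b*x + c = 0 without catastrophic cancellation:
    take the variant of the formula in which b and the square root have the
    same sign, then get the other root from the product x1*x2 = c/a."""
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        raise ValueError("no real roots")
    q = -0.5 * (b + math.copysign(math.sqrt(disc), b))
    return q / a, c / q   # one root from each form of the formula
```

For a = 1, b = 1e8, c = 1, the naive formula destroys the small root in the subtraction, while this version recovers both roots to full precision.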
And every time you subtract two floating point numbers, you lose precision. The trick is that the signs over here are the opposite of the ones over here. So you get one of your solutions from this one where the signs match. And then, you get the other one-- I guess I should have written it this way. And you get the other solution from this one. And this is how you get accurate solutions to quadratics. So little side note there. So we could have used this to get a good answer for k. But a very simple method is just that iteration. And you can see that k will eventually tend to 0. And then, when you're satisfied that it's small enough for numerical precision on your computer, you can stop. OK. So that was the tweaking of the rotation matrix components. So now, we have a replacement for r11-- that first row and the second row-- that are actually orthogonal. And then, we can get the third row by cross-multiplying. And we have a full rotation matrix. Remember, though, that this isn't the final answer. Because we haven't followed our rules about how to do this, how to get accurate results. OK. This is a good time to talk about the planar target case. So planar targets are very attractive from the point of view of being easy to make, easy to store, and having high accuracy. And so for example, if you had your wheels aligned recently, you might have been at a place that uses machine vision for wheel alignment. And what they do is, they mount a calibration target on the wheel, and then rotate the wheel, or rotate the steering wheel to measure two different axes. And what's that target look like? Well, it has a pattern on it that has a feature that it's very possible to get incredibly high accurate position of corners of the pattern to 1/100 of a pixel, even better than with our edge finding methods. And how is it mounted? It's planar. It's mounted on the side of the wheel at an angle. But it's planar. And why are they using planar? 
Well, because it's possible to cheaply manufacture incredibly accurate planar patterns. But there's a downside. And so let's talk about that. So here's our planar target. And I guess we call this coordinate system s. And we can construct a coordinate system there. It makes sense to construct it this way, where x and y are in the plane of the target, and all of your coordinates are known in xs and ys, and zs is 0. That's the direction perpendicular to the target. Well then, we can follow the same methodology up there. Except now, there are certain terms that don't matter anymore, like r13. It gets multiplied by zs. R23, r33-- none of those occur anymore. OK. So we get this equation instead, because that term in zs drops out. So big deal. And now, we cross-multiply. And instead of getting the equation up there, we get the slightly simpler equation. OK. And again, same thing. The things in parentheses are measurements, the things we know. Then, r11, r12, et cetera are things-- they're the unknowns that we're looking for. But now, there are fewer. So now, if we list the unknowns, there are six instead of eight. And since these are homogeneous equations again, we turn them into inhomogeneous equations by setting one of the parameters equal to 1. And then, we have five equations and five unknowns. So one great feature of this approach is that we only need five correspondences now instead of seven. And again, usually, we would use more and use least squares to get a more accurate solution. Oh, by the way, what happens if, just by chance, ty in the real world is actually 0, that there's no translation in the y direction? Well then, this method is going to have a problem. It means that all your other parameters are going to be huge, and probably inaccurate. So that's something about this approach to solving a homogeneous equation. What do you do then? Well, set tx equal to 1. So I'm presenting it this way. 
But actually, to get a numerically good solution, you would want to check the result. And if it's the case that ty is actually very close to 0, then switch to the other one. And so I didn't make a point of that before. But OK. Well before, we recovered the full rotation matrix just by squaring up two of the vectors and then taking a cross-product. And so here, hmm. Now, we've only got the top 2 by 2 piece of the rotation matrix. And so I won't write down the solution, because that's what you're supposed to do in the homework problem. So OK. So let's suppose that we can do this either for planar or nonplanar. But you can see how this is different for the planar case clearly. Now, there's another subtlety that I didn't bother with, because it's not really relevant anymore today. But in the old days, you weren't quite sure about the relationship between the aspect ratio of the stepping in the x direction and the stepping in the y direction. Because they were produced by very different effects. So one of them was just lines. And this is true even of CCD and CMOS sensors, which were discrete sensors. But the way they were read out usually was in an analog form. So you took the discrete signal out of your row of sensors, turned it into an analog form. And then, a board in the computer chopped it up and digitized it, but not in any way related to the size of the pixel step in the row, right? The frame grabber had its own clock. And so as a result, the spacing horizontally and the spacing vertically were controlled by different things. The spacing vertically was-- I've got different rows in my sensor. I know exactly what that is. And horizontally, it was, well, what's the relationship between the clock in the frame grabber and the clock in the camera? And so we needed another parameter that scaled x relative to y. And it turns out that you couldn't find that parameter with a planar target. And it makes a bit of a mess of the algebra.
So I didn't want to go there because today, of course, you look at the manufacturer's spec sheet. And you know exactly what the aspect ratio is of the stepping in the x and the y direction. But again, it brings out the fact that the planar target is different. OK. What's left to do? Well, we don't know f. And we don't know tz. And we also don't know other things. But let's focus on that. How do we find them? Well, we use the same equations. And I won't write them out again, just multiply them out. So this is just a perspective projection equation, where we combine interior with exterior orientation. And you can see that now, we need the full rotation matrix. And it'd be good if it was really orthonormal. And again, the terms in parentheses are the ones that we know at this point. And so the unknowns, of course, are f and tz. So this is a simpler problem than the one we had before. So this stuff-- all of this, we can calculate, right? Because we've got the image measurements. And at this point, we've got tx and ty, and the components of the rotation matrix. So we can calculate all of that. We can calculate all of this. And we just need to solve for f and tz. And that means we actually need only one correspondence, right? Because from one correspondence, we get these two equations. And we're looking for two unknowns. So now, of course in practice, we would never do that. We would use all of the correspondences we can lay our hands on and do least squares. But the minimum number is 1. OK. So that gives us f and tz. And there is a little problem here, though, which is that I need depth variation. So it's very tempting to do this with your calibration target. So here's the image plane. Here's your planar calibration target. And here's the lens. All right. This seems like a nice arrangement. And what's wrong with that? Well, we know that perspective projection has in it multiplication by f and division by z. So if we double f and double z, nothing happens.
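(A sketch of this stage: each correspondence gives two equations linear in f and tz, namely f (r1 . p + tx) - xi tz = xi (r3 . p) and f (r2 . p + ty) - yi tz = yi (r3 . p), obtained by cross-multiplying the projection equations. Function name and interface are mine.)

```python
import numpy as np

def solve_f_tz(R, tx, ty, scene_pts, image_pts):
    """Least-squares estimate of f and tz, given the rotation R and the
    translation components tx, ty recovered earlier.  Each correspondence
    contributes two equations that are linear in the two unknowns:
        f*(r1.p + tx) - xi*tz = xi*(r3.p)
        f*(r2.p + ty) - yi*tz = yi*(r3.p)
    Illustrative sketch only -- needs depth variation in the points."""
    A, b = [], []
    for p, (xi, yi) in zip(np.asarray(scene_pts), np.asarray(image_pts)):
        z = R[2] @ p                      # r3 . p, the rotated depth
        A.append([R[0] @ p + tx, -xi]); b.append(xi * z)
        A.append([R[1] @ p + ty, -yi]); b.append(yi * z)
    (f, tz), *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return f, tz
```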
That's a scale factor ambiguity. So that means that in this case, you cannot discover f and tz separately. You can only determine the ratio. And that's, of course, unsatisfactory. And so what you need to do is have variations in depth. And there's issues about how much, and what's the best, and so on. But basically, it's not going to work if it's perpendicular to the optical axis. And this could be-- I don't know-- 45 degrees, 60 degrees, depending on what you're trying to do. And if you go into the place where they do your wheel alignment, if they're using machine vision, you'll see that when they mount the calibration target on the wheel-- so they have a camera looking down parallel to the axis of the car-- and they mount this thing on the side of the wheel so it has to stick out. But instead of mounting it perpendicular so that you get the best view in the camera, it's mounted at 45 degrees. And why? Because exterior orientation is ambiguous if you don't do that. You can only determine the ratio of f over tz. You can't determine f and tz separately. OK. Now, we're almost done. We've got estimates of most of the parameters. So what's missing? What's missing is the principal point and the radial distortion. So it turns out, no one's come up with a closed-form solution for the principal point. And so we just give up at this point and say, OK, now we need to do the nonlinear optimization. And the idea is that there's an error between xi, the measured image position, and xp, the predicted one. OK. So if we have all of these parameters, we can calculate forward from some position in the world to some position in the image. And that means, then, we would hope that that was going to lie right on top of the image point we actually measured. So the calibration object will have a bunch of points that are easily identified. And then, you look at point number 3. And you pump it through the rotation, translation, and perspective projection. It gives you a predicted position in the image. And you saw it somewhere else.
And that's an error. And that's the thing you're trying to minimize. And so I can write it this way. That's what I would hope. And of course, in terms of least squares minimization, I would just take the sum of squares of those two terms. And how do I get the xp and yp? Well, I apply the rotation matrix, the translation, and the principal point information that I have, as well as the radial distortion. So now, I have something that depends on R, t, x0, y0, f, k1, possibly k2, maybe some more. So I've got a whole bunch of parameters. And now, I have this huge minimization problem. And as we mentioned last time, there's this wonderful package invented eons ago that does it. And what's it called? Lmdif. And it's in MINPACK in the original Fortran, if you like. But it's been translated into lots of other languages. And of course, it's built into MATLAB and whatever. So now, we just set up this least squares problem where we're trying to come as close as possible to satisfying a bunch of equations of this form, one pair for every correspondence. And we do it by tweaking these parameters. Well, there's this little problem here. R is highly redundant. It has nine numbers, and only three degrees of freedom. And so we could try and impose the constraint. So this package works for unconstrained minimization. And so imposing R transpose R equals I, and determinant of R equals plus 1-- that's going to be hard. So what to do instead? Well, what Tsai did was Euler angles. That's a nonredundant representation. There are only three numbers. That's the good part. It has singularities. That's the bad part. And what you can do instead is the Gibbs vector, which is-- let's see-- omega hat times the tangent of theta over 2. This one is also nonredundant. It has three numbers. The bad part is that it blows up if you rotate 180 degrees. Now, if you know that's not going to happen because of the way you set up the calibration object, then that's a perfectly acceptable way of proceeding.
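(A minimal sketch of this final refinement using SciPy's least_squares, whose method "lm" descends from the same MINPACK lmdif lineage. For brevity it parameterizes rotation by an axis-angle vector via Rodrigues' formula-- one nonredundant three-number representation-- and omits the principal point and radial distortion terms; all function names are made up.)

```python
import numpy as np
from scipy.optimize import least_squares

def rodrigues(w):
    """Rotation matrix from an axis-angle vector (three numbers, nonredundant)."""
    th = np.linalg.norm(w)
    if th < 1e-12:
        return np.eye(3)
    k = w / th
    K = np.array([[0.0, -k[2], k[1]], [k[2], 0.0, -k[0]], [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(th) * K + (1.0 - np.cos(th)) * (K @ K)

def refine(params0, scene_pts, image_pts):
    """Levenberg-Marquardt refinement of (w, t, f), starting from the
    closed-form estimates, by minimizing the reprojection error."""
    def residuals(params):
        w, t, f = params[:3], params[3:6], params[6]
        cam = scene_pts @ rodrigues(w).T + t   # exterior orientation
        pred = f * cam[:, :2] / cam[:, 2:3]    # perspective projection
        return (pred - image_pts).ravel()      # one (dx, dy) pair per point
    return least_squares(residuals, params0, method="lm").x
```

Because Levenberg-Marquardt only finds a local minimum, params0 must come from the closed-form stages above, not from a blind guess.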
That works pretty well. The other one is, of course, to use unit quaternions, which have no singularity. Right. So we can use those for the parameterization of rotations. Unfortunately, it's redundant, right? Because there are four numbers for three degrees of freedom. But what you can do is really simple. You add another equation. So there's going to be an error term that is proportional to the difference between the size of this quaternion and 1. And you can determine how strongly that's enforced. And as you turn that up, you get the solution. So that's one implementation that works very well, independent of conditions like avoiding 180 degrees. Now, Levenberg-Marquardt finds local extrema. So if you put it down in the wrong place in parameter space, it'll be perfectly happy to walk into some other local minimum. And that's why we had to do all the other work to get an approximate solution first. Otherwise, we could have done away with all of that stuff and just started there. But we can't. A very important question is noise sensitivity, or what we've been calling noise gain. And we alluded to it in several places, like over here where the calibration plane, if it's perpendicular to the optical axis, then the noise gain on f and tz is huge-- infinite-- while the ratio is perfectly well-determined. But it's hard to say something general because of this numerical optimization, and because these methods here were approximate. They didn't enforce the conditions directly. So how do you address the noise gain issue? Well, as happens in many cases where all you have is a numerical method-- and this is where the advantage of an analytic method comes to the fore-- if you only have a numerical method, you can use Monte Carlo methods. So how do you do that? Well, you take your measured image positions. And you add some noise of known statistical properties. You add some Gaussian noise, noise with some known variance or standard deviation. And you do the computation.
And you get a different answer. And you do this many times. And you look at the statistical properties of the answer. And you look at its standard deviation. And then, the ratio is the noise gain, right? Very straightforward method. Once you've written the code to solve this problem, you just take the inputs and you fiddle with them, and do this many, many times. And each time you get an answer. And then, you look at the distribution of the answer in the parameter space. And that way, you can do what normally you would do if you had an analytic solution. And this is how you find out things like-- that this absolutely does not work. If the calibration plane is perpendicular to the optical axis, you get a huge noise gain in a certain direction in parameter space. And you find out that the higher order coefficients of radial distortion past k2 are very poorly determined, that they're very sensitive to any kind of noise measurement. So it's probably not worth trying to get them. Oh. OK. So what are we going to do next after you come back is go, again, one level up. So we started off real low-level stuff. Then, we went to this intermediate stuff. The next thing we're going to do is talk about representation of shape, and recognition, and determining the attitude in space. So it's parallel to what we did in 2D. When we were doing the patterns, we did recognition and attitude determination in 2D. Now, we're going to do it in 3D. And we've got all the tools for it, now, to talk about rotation in 3D. So have a good holiday.
MIT 6.801 Machine Vision, Fall 2020
Lecture 8: Shading, Special Cases (Lunar Surface, Scanning Electron Microscope), Green's Theorem
[SQUEAKING] [RUSTLING] [CLICKING] BERTHOLD HORN: Let's have a quick review of what we learned about photometry. So there are a number of concepts, one of which was irradiance. And we use the symbol E for it. And it was power per unit area. And it's a way of talking about light falling on a surface. And it's what we measure in the image plane and convert to what's commonly called a gray level. So the quantity of interest here is directly used when we're imaging. But it's also, of course, a measure of light falling on the objects that we're imaging. Then we talked about intensity, which applies to a point source. And it describes the power per unit solid angle. And so we had to define the solid angle. And it's the quantity that typically varies with the direction. So if you have a good old incandescent light bulb, it's very low intensity in the direction of the base, because that's blocked by the base and some higher intensity in other directions. And that's a quantity that isn't of a whole lot of interest to us here. It's just interesting because, a, it's simple to define. And, b, it's the terminology incorrectly used to talk about the quantities that we are really wanting to talk about. So the important one is radiance, which is basically a measure of how bright a surface appears. So again, we have a little facet of the surface. And we're looking at how much power is emitted per unit area and per unit solid angle. And that's of interest to us, because that's what we actually measure with our instruments, cameras, and that's also obviously relevant to what we see. That small solid angle is perhaps the entrance pupil to your eye. So we then looked at cameras and anything with a lens in it. And we came up with this relationship between the radiance of a surface that we're imaging, that's L, and the irradiance E of the corresponding part of the image. 
And so it gives us a direct relationship between something out there loosely called brightness and something inside the camera loosely called brightness. And the reason we can be loose about it is because they're proportional to each other. So and then there's the pi over 4, which is just a constant factor. And then there's this 1 over f-stop squared, which is kind of obvious, because we're limiting the solid angle-- the omega over there-- by opening or closing the aperture on the lens. And the area of that goes as the square of that ratio. And it's the area, of course, that we need when talking about the solid angle. So then the next question is, OK, we're measuring E. And it's proportional to L. But where's L come from? What determines the radiance of a surface? And we already indicated that, well, illumination-- it's going to be directly proportional to the amount of illumination. And it's also going to depend on the geometry. So how is the surface oriented? And it depends on the material. And that's where the BRDF comes in. That's where we introduced the bidirectional reflectance distribution function, which is a function of the incident direction and the emitted direction. So we have light coming into a surface. And we have light re-emitted from that surface. And that's obviously the idea of reflectance. How much of that light going in is reflected. Except it's not quite as simple as that. It's not simply a ratio of what percentage of the incoming light is reflected. But we're interested only in the light that's going to hit the camera or the eye. So we're actually using this terminology. So it's going to be delta L of the emitted direction over delta E of the incident direction. So delta L is the change in radiance of the surface. And delta E is the change in irradiance. And so, that's what you imagine some definition of reflectance to be. And it's the detailed fine grain definition of reflectance, from which we can derive other quote reflectances.
For example, albedo, which is the total output power divided by the total input power. Well, in order to compute that, we just take this quantity, and we integrate over all possible output directions. Because in this case, we're interested in the total power going out, not just what's going to a particular light sensor. And in the process, we may need to think about spherical geometries. Then we said that this quantity, this BRDF, has to satisfy a constraint. Which basically says, if you interchange the directions to the source and the direction to the viewer, the BRDF should come out the same. And that's because if it wasn't, then we'd be violating the second law of thermodynamics, which periodically people try and do, but generally don't have too much success with. And so, we can't just have any old function there. By the way, in computer graphics obviously they use models of surface reflectance. And quite a number of those models violate this constraint. And yet we don't seem to care. We like the pictures, which suggest that this isn't something critical to a human or machine vision, other than it's kind of a shortcut. If you've measured one of them, then you've got the other one. So it cuts the number of measurements you need to take in two. Because you can just by symmetry find the other one. So example. Well, we've been talking about Lambertian surfaces. And the Lambertian surface has the property that it appears equally bright from whatever viewing direction you have. And if it's an ideal Lambertian surface, it also reflects all the incident light. So property number one, and this is the condition that's usually misstated in terms of emitted energy. So it's usually stated incorrectly as it's emitting light equally in all directions. So that's going to greatly simplify whatever formula we come up with for the Lambertian surface, because it's not going to depend on two of the four parameters. 
And then the other condition is that if it's an ideal Lambertian surface, it reflects all light and doesn't generate any of its own. So as I indicated, a lot of work is with integrals of the BRDF-- the BRDF is the atomic thing. It's the low-level detail. And in many cases, we're interested in integrals of that. So for example, if I don't have a point source, if I have a distributed source like the lights in this room, how can I deal with that? Well, I can simply integrate over a hemisphere of incident directions. So I'd integrate over that quantity, taking into account how much light is coming from each direction. So similarly, here we need to integrate over a hemisphere to get all of the energy that's coming off the surface. So we had this way of dividing up the hemisphere using the polar angle. And then there was also an azimuth angle. So this is one way we can talk about the possible directions, two parameters. And if we perform the integral, we need to take into account the area of this patch, which is obviously going to involve delta theta and delta phi. But it's also going to get smaller the closer we get to the pole. And since we're measuring theta from the pole, this would be sine theta. It's kind of unfortunate that they didn't pick latitude. But they picked co-latitude. But whatever, we can do that. So now we're trying to integrate over all emitted directions. So in this particular case, we're talking about those quantities. So azimuth, well, that ranges over 2 pi. So we're going to be integrating from minus pi to plus pi, for example. The polar angle, well, we're not interested in points below the horizon, because the object itself is blocking. It's only emitting above the surface. So we only have to deal with 0 to pi over 2 for the polar angle. And then we're going to have to include this term here. And that's obviously the Jacobian-- the determinant of the Jacobian of the coordinate transformation. But I find it easier just to draw the diagram.
And then what's in here? Well, there's F. And now, we've decided that F is a constant. So let's just write F. And then the light that's falling on the surface depends on the incoming radiation and the angle. And we're saying that all of that gets reflected. So the light's coming in at a certain angle, there's foreshortening, so the power deposited on the surface is E cosine theta I times the area of the surface. And we're saying that that's all going to be reflected. So when we integrate the reflected light, which is the BRDF times this quantity, then that should equal the incoming light. So we can just cancel it out conveniently. So what we're looking for is, what is this constant value of F for the Lambertian surface? Is it one, or some other convenient quantity? So first of all, the azimuth angle doesn't appear anywhere in the integrand. So we can evaluate the inner integral and then just integrate over phi, which contributes a factor of 2 pi. So that's 2 pi times the inner integral from 0 to pi over 2. And F is a constant, so we can take it outside. Now cosine theta E sine theta E is half of sine 2 theta E. And if we integrate sine 2 theta E, we get minus 1/2 cosine 2 theta E in the limits 0 to pi over 2. So we plug in pi over 2, we get cosine of pi, which is minus 1. Minus 1 times minus 1/2 is 1/2. And then we subtract what we get by plugging 0 in here; cosine of 0 is 1, so that's minus 1/2. And subtracting minus 1/2 is like adding a 1/2. So that integral is 1, and with the factor of a half in front, the inner integral is a half. So the whole thing is 2 pi times F times a half, which is pi times F. Setting that equal to 1, the result is that F is 1 over pi. So that's it for the Lambertian surface. That's the BRDF for the Lambertian surface. And that's as easy as it can get. And there's some question about why it's 1 over pi and not 1 over 2 pi. So let's think about that. So if you think about the hemisphere of possible directions. So here's our surface element. And it's radiating into all these directions.
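The result F = 1 over pi can be sanity-checked numerically by doing the hemisphere integral from the text with a simple midpoint rule (a sketch; the discretization choice is arbitrary):

```python
import math

def total_reflected_fraction(f, n=2000):
    # Integrate f * cos(theta_e) * sin(theta_e) d(theta_e) over [0, pi/2]
    # with the midpoint rule; the phi integral just contributes a factor
    # of 2*pi, since nothing in the integrand depends on azimuth.
    dtheta = (math.pi / 2) / n
    inner = sum(f * math.cos((i + 0.5) * dtheta) * math.sin((i + 0.5) * dtheta)
                for i in range(n)) * dtheta
    return 2 * math.pi * inner

# With f = 1/pi, all of the incident light is accounted for.
print(total_reflected_fraction(1 / math.pi))  # ≈ 1.0
```

Plugging in f = 1/(2*pi) instead gives 0.5, which is the "radiates equally in all directions" mistake discussed next.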
And what is the solid angle that's occupied by that hemisphere? Well, 2 pi, of course. So the object is radiating into the hemisphere that's above its level, above the plane through the surface. And that's 2 pi. And if we were radiating energy equally, isotropically, into that hemisphere, then F should be 1 over 2 pi. And so those people who say it's radiating equally in all directions would end up with 1 over 2 pi for that. So what's wrong with that? Well, what's wrong with that is that appearing equally bright from all directions doesn't mean it's radiating equally in all directions. So imagine that you're on the surface of the sphere of directions, and you're looking in at this object. There's going to be foreshortening. So if you're straight above it, you see its full area. If you're off at an angle, you see an area that's reduced by the cosine of the polar angle. And so, what does that mean? Well, that means that if it emitted the same power in every direction, then the power per unit area would be growing. And when you get to be on the horizon, now the area of that surface element has pretty much shrunk to nothing. But it's still radiating the same power, supposedly. Well, that means that the power per unit area is infinite, and it will fry your retina. So you don't want that. And that's not what it does. It is radiating in this direction and in that direction, but in proportion to the area, so that the power per area stays constant. So it appears equally bright. So that's condition number one. It appears equally bright. And so, that means that actually it's radiating more up here and less down here. And in just such a way that we end up with 1 over pi instead of 1 over 2 pi. So again, the idea that a Lambertian surface radiates equally in all directions is wrong. And it'll give you the wrong answer here. Now how do we use this? Well, simple case. Notice that there's no cosine theta I in here. So what's with that?
We've made a big fuss about Lambert showing that it depends on the cosine of the incident angle. Well, that's because that controls the foreshortening of the incoming radiation. So suppose that we have a distant source of radiation. And that it has an irradiance perpendicular to its rays of E0 watts per square meter. Now we are illuminating that surface. And of course, that surface has an area that's larger. So if we call this A, and we call this area A prime, then A is A prime cosine theta I. So that means that this captures a smaller amount of the incident radiation than it would if it was oriented perpendicular to the incoming rays. So we find that if we measure our incoming light in terms of power per unit surface area, then L is 1 over pi times that power per unit surface area. Just as you'd expect, there's the 1 over pi-- the BRDF. If instead we measure it relative to the incoming radiation, perpendicular to the direction of that radiation, we have to take into account the foreshortening. And then we get the familiar expression for Lambert's law. And so that's a little thing that you have to keep in mind and avoid confusion. Here is an example of how you might get confused. Helmholtz reciprocity. You look at this formula, and you say, oops, there's no cosine theta E. So it doesn't satisfy Helmholtz reciprocity. So it's not a physically possible surface. But Helmholtz reciprocity applies to the BRDF, not that. And here, this is obviously the same if I interchange theta I and theta E. It's 1 over pi. So we have to be a little bit careful when we ask questions about Helmholtz reciprocity, for example. This is a perfectly valid formula, but that's not the one that you want to apply Helmholtz reciprocity to. It's instead the underlying BRDF. So that's Lambertian, which is really simple. And we should have some other examples. So let's see. So let's try this. So this is another example. So I'm not picking this totally at random.
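Lambert's law as stated here, with E0 measured perpendicular to the rays, might be written as a small helper (the function name is hypothetical; the max with 0 encodes that brightness can't go negative past grazing incidence):

```python
import math

def lambertian_radiance(E0, theta_i):
    # L = (1/pi) * E0 * cos(theta_i): the 1/pi is the Lambertian BRDF,
    # and cos(theta_i) is the foreshortening of the incoming beam when
    # E0 is measured perpendicular to the rays.
    return (1.0 / math.pi) * E0 * max(math.cos(theta_i), 0.0)

print(lambertian_radiance(1.0, 0.0))           # E0/pi at normal incidence
print(lambertian_radiance(1.0, math.pi / 3))   # half that at 60 degrees
```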
We're going to use this particular type of surface quite a bit. So I might as well introduce it at this point. So for this one, the BRDF isn't a constant. It's something like this: 1 over the square root of cosine theta I cosine theta E. And in that form, we can immediately answer the question, does this type of surface satisfy Helmholtz reciprocity? Well, yeah. If you interchange theta I with theta E, you get the same answer. Now when we use this model in practice, we're applying illumination and looking at how bright the surface will appear under that illumination. And so, we do what we did over there. So the radiance is going to be the irradiance times the BRDF. And it's the irradiance in terms of power per unit area on the surface patch. And that's going to be affected by the foreshortening, because when it's tilted, it's going to receive less power. So that's interesting, because here we have a surface that acts quite differently from a Lambertian surface. Instead of having cosine theta I, we have this funny ratio. And so, it turns out that this type of behavior is what we find on the lunar surface. Well, more specifically, the dark areas, the maria, where volcanic eruptions have occurred to fill in the basins-- but actually, rocky planets in general, and some asteroids. And it's not a bad model for them. And it's significantly different from Lambertian. And by the way, this was the basis of the first methods for recovering shape from variations in brightness. Now let's see if we can learn something about this type of surface. So one question is, if we look at the moon, what are the isophotes? Now of course, we know that the lunar surface has some texture on it and rays ejecting from craters that are brighter than the background and so on. But let's pretend that the lunar surface was pretty much uniform in its reflecting properties.
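As a quick sketch, this BRDF and its Helmholtz symmetry can be written out directly (the scale constant c is a placeholder, not something fixed in the lecture):

```python
import math

def hapke_brdf(theta_i, theta_e, c=1.0):
    # f proportional to 1 / sqrt(cos(theta_i) * cos(theta_e)).
    return c / math.sqrt(math.cos(theta_i) * math.cos(theta_e))

def hapke_radiance(E, theta_i, theta_e, c=1.0):
    # Radiance = BRDF * irradiance, with the cos(theta_i) foreshortening;
    # the cosines combine into the sqrt(cos_i / cos_e) ratio from the text.
    return hapke_brdf(theta_i, theta_e, c) * E * math.cos(theta_i)

# Helmholtz reciprocity holds for the BRDF: swap theta_i and theta_e.
print(hapke_brdf(0.3, 1.1) == hapke_brdf(1.1, 0.3))  # True
```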
And what we'd like to know is, what is the contour map of brightness? Now if it was Lambertian, we know that all of the points where the surface normal has the same angle with respect to the sun have the same brightness, cosine theta I. And so, if we were to look at the isophotes on the sphere, they'd all be nested circles. And then if I project them into the image plane, those circles are at an angle. So each circle gets turned into an ellipse, with eccentricity depending on just how much of an angle. So this is what I'd expect to see. And this is what I would see if, say, I took that calibration object, that sphere painted white, in the lab. If you plot its isophotes, they pretty much look like that. So that's for Lambertian. So what about this other material? Well, that's a little bit more tricky. So let's see how we can do that. Now with a Lambertian, we could just set cosine theta I as a constant. And that means theta I is a constant. And then you find all the places where the angle between the direction to the light source and the local surface normal is the same. And you just spin that around to get a cone, and you're done. So this one's a little bit harder. So we're doing it for this one. And so here we have L is constant. So we're now looking for all of the points on the surface that have a certain ratio of cosine theta I over cosine theta E. Well, we can write this in terms of unit normals; that might make it easier to see what's going on. So cosine theta I is the unit normal dotted with the light source direction. And cosine theta E is the unit normal dotted with the viewing direction. So we're now looking for all values of n that make this a constant. And so, for the constant C, that's what we get. And now we have a dot product that's equal to 0. That means we have two vectors that are perpendicular to each other. And so, let's fix the constant C for the moment. Then this is some fixed vector.
And this is saying that all of the n's that satisfy that, all of the normals that have the same brightness, must be perpendicular to that vector. So what is the set of vectors that are perpendicular to a particular vector? What does that look like? What's the locus of the endpoints of those unit vectors? So we have some vector. And then we're saying that n is perpendicular to that. That's one n. Well, here's another one. So we actually get a plane. So if we think of the unit normals of all the points on the surface that have the same brightness, they all lie in a plane. So that's already useful information, which isn't the case here. Because this unit vector points up towards the pole a little bit. And this one points in a different direction. So that's already very non-Lambertian. Well, we're not going to do all the details, but we can benefit somewhat from thinking about spherical coordinates to figure this one out. Now again, it's sad that they picked a polar angle instead of latitude. So it looks a little different from your usual formula. But of course, it's really just the same thing with 90 minus the angle. So I can always write a unit vector in this form. A unit vector has only two degrees of freedom. And I've picked the polar angle and the azimuth here as those two convenient parameters. So what I'm going to try and do is make some advance in understanding this by substituting for all of them in that form. Now I'm going to end up with an algebraic mess unless I pick my coordinate system well. And so I happen to know that here's a good way to pick the coordinate system. So here's the direction to the sun. Here's the direction to the Earth, where the viewer is. And usually, we think of the North Pole as being up above this plane, at right angles above that plane. And this isn't going to be perfect, because the plane in which the moon orbits around the Earth is not exactly the same plane as the plane in which the Earth moves around the sun.
But let's pretend it's the same. So we're going to pick this preferred coordinate system where this is the z direction. And so, the sun and the Earth are at z equals 0. And that means that when we write the vectors for them, we can leave out the third part. So their position depends only on the azimuth, since we've picked this convenient coordinate system. So it's a little bit simpler. There's only that one unknown. And the third component is 0. And so now, we go back to our expression for these normals. And I guess I can write it here. And I suppose I do need the other board. So before we do that, by the way, there's something you can already ascertain right now, which is what happens at full moon. Well, at full moon, from Earth you're looking at the moon in the same direction as the incident light from the sun. So that means that theta I is the same as theta E. So that means that the ratio is constant. What does that mean? Well, that means that the disk you see should be uniformly bright, aside from the surface markings that we discussed. And that's completely unlike Lambertian. If we had a Lambertian sphere illuminated from the same direction as the viewer-- you're holding the flashlight next to your camera-- then we expect to see isophotes that look like this. Because the incident angle here is 0, and then it increases to 90 degrees. So the cosine of the incident angle goes down. And so if we are looking at a sphere, and it has isophotes like this, we recognize it has a three-dimensional shape, and it's kind of spherical. And all of a sudden here, that's not the case. And so, that's pretty interesting. Next time you look at a full moon, you realize that it doesn't look like a ball. I mean, you know intellectually that it's round-- well, unless you belong to the Flat Earth Society, in which case you probably believe the moon is flat as well. But leaving that out, it doesn't really look round. You can see that it must be sort of round, because the outline of it is a circle.
But it doesn't look quite right. And this is why. Because in fact, at opposition, at that time, it is pretty much uniformly bright. And this is it. It's because it's not Lambertian. It's a different microstructure. And the Hapke model is a pretty good one for predicting that kind of thing. Well, let me jump ahead from that. So we're going to have n dot s over n dot v is a constant. And now I can plug in the spherical coordinate versions of those vectors and leave out a couple of steps. What I'm going to get is a bunch of terms canceled, and we end up with that. Now unless, of course, theta S is the same as theta V, i.e., opposition, in which case we just get 1. But suppose that we're not. Then this can only be true if that is true. So what does that mean? Well, it means that all of the points on the surface that have the same brightness have the same azimuth. So in our coordinate system, that means that here's a great circle, not drawn very well, but that has a fixed azimuth, a fixed angle. If you go to the center and look in this direction, that's the same for all of them. So that's one isophote. And here's another one. So the lines of constant longitude are isophotes. And that's again very different from Lambertian, and it makes the moon look odd. I'm just thinking of something. When I was a little kid, we went for a walk in the Black Forest in Germany, and the moon was just rising. And the adults thought they'd have a little joke at my expense, and they said, well, how far away do you think the moon is? And I'm like, OK, if they ask me that, it must be much further away than I think. So I said, I don't know, 100 meters. And they all laughed. So anyway. So it's hard to estimate properties of celestial bodies. For example, I already mentioned that most people are surprised to know that the reflectance, the albedo, of the moon is about 0.1, which is the albedo of coal.
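The claim that the isophotes are lines of constant longitude can be checked numerically: put the sun and viewer in the z = 0 plane, as in the chosen coordinate system, and observe that the brightness ratio n dot s over n dot v doesn't depend on the polar angle (a sketch with arbitrary test angles):

```python
import math

def brightness_ratio(theta, phi, phi_s, phi_v):
    # Surface normal in spherical coordinates (theta = polar angle,
    # phi = azimuth); sun s and viewer v both lie in the z = 0 plane.
    n = (math.sin(theta) * math.cos(phi),
         math.sin(theta) * math.sin(phi),
         math.cos(theta))
    s = (math.cos(phi_s), math.sin(phi_s), 0.0)
    v = (math.cos(phi_v), math.sin(phi_v), 0.0)
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    return dot(n, s) / dot(n, v)

# Same azimuth phi, different polar angles theta: same brightness ratio,
# so the isophotes are the lines of constant longitude.
r1 = brightness_ratio(0.4, 0.7, 0.2, 1.0)
r2 = brightness_ratio(1.2, 0.7, 0.2, 1.0)
print(abs(r1 - r2) < 1e-12)  # True
```

Working out the dot products by hand gives cos(phi - phi_s) / cos(phi - phi_v), with the sin(theta) factors canceling, which is exactly the "same azimuth, same brightness" statement.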
And yet it looks so bright in the sky. And that's because we don't have any comparison. We don't have anything near it. And all we can measure is the product of incident light times reflectance. And then we try to separate that into those two components. Now in our own world, often we have that the illumination is more or less constant over an area. And so we can separate changes in reflectance from changes in brightness, particularly if we have some calibration object like a piece of white paper. You recognize that, and you say, OK, that's one. And everything else can be measured in relation to that. But if we only see the product of illumination and reflectance, it is totally ambiguous. We don't know whether it's dark because the illumination is weak, or because the reflectance is low, and so on. So we can go a little bit further with this. For example, we might say, well, suppose we take a picture of the surface under two different illumination conditions. Can we find the surface orientation locally? And yes, we can. And it's just photometric stereo the way we've done it before. But then the remaining question is, can we get the shape of the surface? So let's talk about that a little bit. So this Hapke model, we've looked at it here in terms of the angles. But we also know that we can use the gradient as a way of talking about surface orientation. So let's look at what this is like in terms of surface orientation. By the way, you may wonder why there's a square root. Well, it's partly because we want to make sure that we satisfy Helmholtz. Because if it wasn't the square root, then when you divide by cosine theta I, you'd get 1 over cosine theta E, which is not symmetrical. So that wouldn't work. And the other reason is that you want the integral of all the outgoing radiation to equal the incoming radiation. And that doesn't work if you don't have the square root.
It becomes infinite. So now we can plug in for the various unit vectors. So this was our way of converting from gradient to unit vector. And then, we can use the same notation to talk about the position of the light source. And we've usually chosen the coordinate system so that z is the direction that's coming straight at me. So it's along the optical axis. So z is the viewing direction. So v is just z hat. So now, I'm going to take those dot products. And this was the messy part of Lambertian. We had this nonlinear term. So if we're trying to plot isophotes and so on, this would create a second order component. So we ended up with conic sections. But if we now take the ratio of these two, we get something that's linear. And rs is just a shorthand for this thing. So rs is the square root of 1 plus ps squared plus qs squared. And it's constant if the light source is in a fixed position. And it's just a nuisance to have to write that out all the time. So we're not quite done, because actually, we want the square root of this thing. But what do the isophotes look like in gradient space? Well, if the square root of this is a constant, then this quantity itself is a constant squared. So we can look at isophotes in terms of this formula. So what are the isophotes? Well, it's when 1 plus ps p plus qs q is a constant. And that's a linear equation in p and q. So what is the curve in pq space if it's a linear equation in p and q? It's a line. Right. Which is going to be great compared to Lambertian, which had these conic sections. So that's one line. Now suppose I plot another isophote. Well, it's going to be a line again, just with a different constant, because this will be different. But the ps p plus qs q will be the same. So it'll have the same orientation. So the other isophote might look like that. And so there's a whole bunch of parallel lines that are the isophotes. Now, they're not equally spaced, because I'm taking the square root of this thing.
But other than that, I'm going to have, I don't know, something like that. So this is my plot in gradient space. And there's one particular line, which is where brightness is 0, where I've turned 90 degrees away from the light source. And just as with our Lambertian cosine theta, we need to be aware of the fact that brightness can't be negative. So actually, this part of the diagram is 0. So why is this exciting? Well, because it's linear. It's going to make it very easy to solve all sorts of problems. And so first of all, since we are coming from photometric stereo, suppose that we have one lighting condition. And then we have a different lighting condition. Well, then we'll get straight lines, but different straight lines. So, I don't know, maybe like this. And then obviously, if I have the two measurements, I can find the intersection of the corresponding lines. So suppose that the measurement in the one lighting condition was that. And in the other lighting condition it was this line. Then there's the intersection. So that's the answer. That's the surface orientation. So photometric stereo is very easy. And of course, this is geometric; I can do this with equations. We just have two linear equations of this form, 1 plus ps p plus qs q equals something. And of course, we know how to solve linear equations. So that's kind of neat. And there's no ambiguity. With Lambertian, we had two conic sections intersecting. And they could intersect in up to two places. Actually, we had two second order equations, and by Bézout's theorem, there might be as many as four solutions. But that's for arbitrary second order equations. What if you have the particular equations we have? Well, it turns out that in that case, there can be only two. Anyway, here there's only one. So that's an advantage.
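Photometric stereo under this model reduces to a 2x2 linear system, which can be sketched as follows. The b values stand for the measured brightness after the squaring and rescaling described above, and all the numbers are synthetic:

```python
def solve_pq(ps1, qs1, b1, ps2, qs2, b2):
    # Each lighting condition pins (p, q) to a line 1 + ps*p + qs*q = b,
    # so solve ps_k*p + qs_k*q = b_k - 1 for k = 1, 2 by Cramer's rule.
    det = ps1 * qs2 - ps2 * qs1
    p = ((b1 - 1) * qs2 - (b2 - 1) * qs1) / det
    q = (ps1 * (b2 - 1) - ps2 * (b1 - 1)) / det
    return p, q

# Check on synthetic data: pick a true (p, q), generate the b's, recover.
p_true, q_true = 0.3, -0.2
ps1, qs1 = 0.5, 0.1
ps2, qs2 = -0.2, 0.7
b1 = 1 + ps1 * p_true + qs1 * q_true
b2 = 1 + ps2 * p_true + qs2 * q_true
print(solve_pq(ps1, qs1, b1, ps2, qs2, b2))  # ≈ (0.3, -0.2)
```

The determinant is zero only when the two source directions are parallel in gradient space, which is the degenerate lighting arrangement you would avoid anyway.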
Then another thing that we can read right off this diagram is this. From one measurement, we can't determine the surface orientation, as usual. But we do have something pretty powerful, which is the slope of the surface in a particular direction. So this is for a particular orientation of the coordinate system. I've picked some x, y, z coordinate system. And this is what I get in the diagram. What if I pick a different coordinate system? What if I turn this by some angle-- I don't know, call it alpha. Well, it turns out that then I turn this. And if I turn it the correct way, I get a pretty neat result. I'm just stating this. I'm not proving it. But you can prove it, keeping in mind that p is dz/dx, and q is dz/dy. And p prime is dz/dx prime, and q prime is dz/dy prime. And then use the chain rule, and you basically get a rotation through an angle alpha. So it's sort of surprising that the first derivatives rotate the same way as the coordinate system. But they do. Why is that great? Well because-- too bad I messed up that diagram. Well, let me do it again. So I'm just copying that first diagram. And now, suppose that I pick this as alpha. Well then, when I measure a particular brightness, let's say this, as usual, I don't know the surface orientation. But I know one component of it. I know that the component in this x prime direction has a certain value. Because all of the points on this line have that same slope in that direction. I don't know anything about the slope in the direction at right angles. But I know the slope in that direction. And that's different from Lambertian. Because in the Lambertian case, we had this curve. And so the orientation was different along the curve. Here we've got a line. And all points on that line are the same distance from the origin in this direction. So you can find p prime. And then, what is this angle alpha? Well, it's obviously some function of ps and qs.
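The claim that the gradient (p, q) rotates the same way as the coordinate system can be verified numerically with the chain rule on any smooth test surface (the surface below is arbitrary):

```python
import math

def z(x, y):
    # Any smooth test surface will do.
    return 0.5 * x * x - 0.3 * x * y + 0.1 * y

def grad(f, x, y, h=1e-6):
    # Central-difference estimates of p = dz/dx and q = dz/dy.
    p = (f(x + h, y) - f(x - h, y)) / (2 * h)
    q = (f(x, y + h) - f(x, y - h)) / (2 * h)
    return p, q

alpha = 0.6
c, s = math.cos(alpha), math.sin(alpha)

def z_rot(xp, yp):
    # The same surface expressed in a frame rotated by alpha:
    # (x, y) = (c*x' - s*y', s*x' + c*y').
    return z(c * xp - s * yp, s * xp + c * yp)

xp, yp = 0.8, -0.4
x, y = c * xp - s * yp, s * xp + c * yp
p, q = grad(z, x, y)
pp, qp = grad(z_rot, xp, yp)
# Chain rule: (p', q') is just (p, q) rotated through the same angle.
print(abs(pp - (c * p + s * q)) < 1e-5, abs(qp - (-s * p + c * q)) < 1e-5)
```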
Tan alpha is qs over ps. And how do I know that? Well, because I want this straight line, the 1 plus ps p plus qs q, to become parallel to the vertical axis. And so I have to find the angle that will make one of the two terms disappear after the rotation. So anyway, it's not a particularly important point. But that's the angle I actually want to use. So this is more exciting than you might think. Because what this means is, I can look at the surface, if I pick the coordinate system right, and measure the brightness. And I immediately know how steep the surface is in a certain direction. So I could then traverse. I could say, it's going up by 1 in 10. So I'm going to take a 1 meter step. And it's a tenth of a meter up in z. And then I look at the brightness there. And now I again calculate the slope. And whatever the slope is, it allows me to calculate how much height I gain or lose in the next step. And so following that idea, I can actually get a profile of the surface. I can actually keep on going and measuring brightness, and computing the slope, and taking a small step. So the idea is, I'm here. And I measure the brightness. And it gives me a slope. So now I can take a small step in that direction. If I go at right angles, I could fall off a cliff. I know nothing about the slope in the direction at right angles. So now I'm at this point. I measure the brightness. It gives me a different slope. And I take another small step, and I'm at that point, and so on. So you can see how I can get a profile of the surface. Now of course, there are issues of accuracy, because we already mentioned that it's hard to measure brightness accurately. And also, the surface may not be perfectly uniform. There may be some variation in reflecting properties and so on. But conceptually, I can do this. And by the way, I can go in the other direction as well.
Because if at this point the slope has a certain value, I can go in the other direction by minus the slope times the step size. And so I can continue this profile on this side. Well, that requires that I have some sort of initial condition. So I need to know z at my starting point so that I can incrementally change z. Do I know z? No. When I measure brightness, brightness gives me information about orientation, not about absolute depth. And remember the formula E equals pi over 4 times L, blah, blah, blah? z doesn't appear in there. That's very important. When I walk towards the wall, it has the same brightness. And we went through that argument: two changes that are proportional to r squared, which cancel each other. So z doesn't appear in there. And conversely that means that-- I mean, in a way it's nice. It means that you don't burn your eyes when you get too close to somebody-- well, most people. And so it's pretty much the same brightness. So that's the good part, that you can recognize things because their color doesn't change as you move around. The bad part is you can't get distance that way. You can't invert that process to get distance. So we don't know the distance. So that's an important consideration here. I can get this profile if I have the initial value, but what happens if I don't know the initial value? Well, the profile might be up here. It'd be the same shape. So I can get the shape of the profile. I just can't get its absolute vertical position. So that's pretty exciting, because it means that, for example, I can look at the moon other than at full moon, when everything is independent of surface orientation, and I can run a profile like this. And it so happens that, with the coordinate system we had up there, not too surprisingly, the direction is going to be parallel to the equator. So of course, you typically do this on a very small scale. But suppose I start here. Well, I can get a profile that way.
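The profile-following idea sketches out as a simple Euler integration: brightness gives the slope in the known direction, and we accumulate height step by step, starting from an arbitrary z0 since absolute depth is unrecoverable. slope_at here is a hypothetical stand-in for the brightness-to-slope computation:

```python
def trace_profile(slope_at, x0, z0, step, n_steps):
    # At each point, take the slope in the known direction and step:
    # dz = slope * dx.  The additive constant z0 is unknowable from
    # brightness alone, so only the shape of the profile is recovered.
    xs, zs = [x0], [z0]
    for _ in range(n_steps):
        x, z = xs[-1], zs[-1]
        zs.append(z + slope_at(x) * step)
        xs.append(x + step)
    return xs, zs

# Synthetic check: if the slope is 2*x, the traced profile should follow
# z = x^2 (up to the unknown vertical offset).
xs, zs = trace_profile(lambda x: 2 * x, 0.0, 0.0, 0.001, 1000)
print(abs(zs[-1] - 1.0) < 0.01)  # True
```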
Well, why don't I start somewhere else? Why don't I start here? Well, I can get a profile there. Get a profile there. So you can see how I can explore the whole surface. I can get lots of profiles. And they may not be very accurate and whatever. But do not worry about that for the moment. And there I've got the shape. And as I said, typically you'd be doing this in a small area, like in some crater where you might think of landing or something. You wouldn't do it for the whole moon. But this gives the direction of the profiles. It's parallel to the equator. It's along lines of latitude. So that's the good news. The bad news is, I don't know how these relate to each other. Because when I'm standing here, I have no idea what the slope is perpendicular to this profile. And the same with all the other profiles. So the good news is, I can get the profiles. The bad news is, they're all independent. And now you can start imagining various heuristics, like saying, oh, well, there aren't any gigantic cliffs. So typically, neighboring profiles will be similar. And maybe the average height along one profile should be the same as the average height along the profile next to it, and so on. And then stitch them together into a 3D surface-- attempt to stitch them together into a 3D surface. But that's going to depend on prior information, like what are the topographic properties of the lunar surface. Because if you had a surface with those reflecting properties, you could shift each of these profiles independently in the vertical direction and get the same image. So you don't know which of those it is. So another idea is, if I have a crater that has some sort of rotational symmetry, not perfect perhaps, then I'm scanning across like this. Well, if I'm lucky, this cross-section will be very similar to this central cross-section. So once I've got this horizontal cross-section, I can pretend that I know the vertical cross-section. And that will tie them all together.
And of course, that makes an assumption about the symmetry of the crater and so on. Anyway, that's what people did. Now for a moment, I want to have a complete change of topic. We'll get back to this later. It's the very beginnings of shape from shading. And this was the first shape from shading problem solved, because it's so easy. And at the time, there was a strong incentive to solve it. So I want to get back a little bit to lenses. And there's a reason. Because we'll be switching to orthographic projection. And I'm going to try and justify that. So we talked about thin lenses. So a thin lens has the property that it has exactly the same projection as a pinhole perspective projection. And the advantage is that it actually gathers a certain amount of light. Now real lenses aren't thin. And so, if you actually look at a catalog of fancy, expensive, high quality lenses, there will be all sorts of diagrams of many different elements-- lots of individual elements symmetrically arranged around some optical axis. And so how do those work, and why do they do that? Well, as I already mentioned, it's impossible to build a perfect lens. There will always be trade-offs between different kinds of aberrations. But by compounding, by adding different lenses, you can compensate. So for example, glass has a refractive index that varies with wavelength. And so, that means that the focal length will depend on wavelength. And that means that red light will be brought into focus at a slightly different place from blue light. And so you get chromatic aberrations. You get fringes, color fringes, around things. Well, you don't see that in your camera. That's because they have put in a second lens of a different material that has a different wavelength dependence, carefully designed to compensate for that. And depending on how fancy they are, it compensates exactly at two wavelengths. Or if you're more fussy, at three wavelengths.
Anyway, so there is a need for compound lenses. And they then have different properties. But those properties can be approximated very well as follows. So I don't know if you remember, you should remember, that for the thin lens, we had this notion that the central ray was undeflected. So a ray coming into the center of the lens at an angle alpha would be emitted at an angle alpha. Well the thick lens can be approximated this way, which is very similar. So these points are called nodal points. And anything coming into the front nodal point at a certain direction will leave the back nodal point in the same direction. The planes through those points are often called principal planes. Now in the thin lens, the two nodal points are on top of each other, and the two principal planes are on top of each other. And not too surprisingly, the distance between them is called thickness. And usually the notation is T. So that makes it actually quite simple to deal with thick lenses, because it doesn't change things a whole lot. I mean, it does mess a little bit with our lens formula, because now A and B are not measured relative to one place, but they're measured relative to those points. So it's-- and just how do you compound the lenses to create this effect? Well, that's not our job. The people at Zeiss and such know how to do that. So why are we even talking about this? Well, because now there's a neat trick you can do. It turns out that T doesn't have to be positive. That is the nodal point, front nodal point can be actually behind the real nodal point. So who cares? Well if you make this pretty large, you can make a short telephoto lens. So normally, a telephoto lens is one that has a long focal length obviously. And small field of view. And the lens with a long focal length means you need a tube, a long tube. Well if you make T negative, you can compress it. And you can get a significant reduction. 
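The practical upshot — the Gaussian lens formula still holds for a thick lens as long as object and image distances are measured from the respective principal planes, with the thickness T (possibly negative for a telephoto design) inserted between them — can be sketched numerically. A hedged illustration with made-up numbers; the function names are mine:

```python
def image_distance(a, f):
    """Gaussian lens formula 1/a + 1/b = 1/f, with a measured from
    the front principal plane and b from the rear principal plane."""
    return 1.0 / (1.0 / f - 1.0 / a)

def object_to_image_length(a, f, T):
    """Total object-to-image distance for a thick lens whose
    principal planes are separated by T (T may be negative)."""
    return a + T + image_distance(a, f)

# Same object distance and focal length, but a negative T
# (the short-telephoto trick) shrinks the physical span.
L_thin = object_to_image_length(200.0, 100.0, T=0.0)
L_tele = object_to_image_length(200.0, 100.0, T=-30.0)
print(L_thin, L_tele)
```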
If you typically buy a telephoto lens from Nikon or Canon, and you look at how many millimeters its focal length is, if you actually go measure the lens, you'll find that in many cases the lens is shorter than the focal length. And so that's one trick to play with this. Then another one is to move one of these points far away off to infinity, in fact. So and this is used quite a bit in machine vision. So why? Well there are a couple of reasons. One is that when we have perspective projection, the magnification changes with distance. So if you're, say, looking down at a conveyor belt and reading labels and whatever, or trying to make a precise measurement of some dimension, well the image size will depend on focal length and the distance to the object. And if there's any variation in the distance to the object, that's problematic. Or say you're doing, I don't know, printed circuit board inspection, something like that. And you want to be insensitive to small changes in distance, well if you can get rid of perspective projection, that would be good. So how do you do that? Well what you need is a very far distant center of projection. If you move the center of projection far away, then that effect of varying magnification with distance gets less and less. Because that cone of directions gets more and more parallel. And in fact, if you could move the nodal point to minus infinity, then there would be no change in magnification. And it's amazing. But yeah, by building a compound lens, you can do that. So that's object space telecentricity. And as I mentioned, a lot of machine vision systems commercially used in the industry use this. They're not cheap. Partly because they need a lot of glass. And the reason they need a lot of glass is that normally, a lens images a cone of the world. A telecentric lens, because the center of projection is way back, actually images a cylinder. So if you think about it, here's the center of projection. Here's the lens. 
Now normally, you're imaging this whole area with a magnification that changes with distance that gets smaller and smaller-- the image of an object gets smaller and smaller the further the object is back. Well now, imagine that you move this point way back there. Then this cone becomes shallower and shallower until eventually you have a cylindrical volume. So an object space telecentric lens will image a cylindrical volume. And that means that the lens has to be as big as the object, otherwise it won't be imaged. And actually, it has to be a little bit bigger. So that means that if you're trying to read a circuit board or something, you may need a lens with a substantial bit of glass. And accurately made, so and that gets expensive. But it's done. Now that's moving one of these nodes. Now you can actually move the other one as well. So let's keep this one in the same place, but move the other one, so image space. So we have the same kind of diagram with a cone of rays on the other side hitting the image. So we have-- here's the image plane, and here is our center of projection. And we know a number of things. One of them is that if your image plane isn't in exactly the right place, the magnification changes. So if I move my image plane there, the magnification is different. Now in order to achieve a sharp image, I'm going to focus the lens, which ta-da, that means I'm either changing the focal length of the glass, which is not possible, or I'm moving the lens relative to the image plane, and therefore I'm changing the magnification. Maybe by a small amount, but if you're making accurate measurements, that's a drawback. So that's one issue. And the other one is that cosine to the fourth law, we really don't want that. Now if we move this center of projection off to plus infinity, then this cone becomes more and more like a cylinder. 
And first of all, that means that as I move-- if the image plane is moved, if I got it in the wrong place, it doesn't change the size of the image. It may make it more blurry. That's another issue. But and so, that's very useful in the metric situation, where you actually trying to measure something. So it turns out that-- what's the terminology for this-- there are lenses which are telecentric on both sides. Hmm. Double telecentric. Now that makes sense-- double telecentric. So why does this cosine to the fourth go away? Well because that came from the inclination of the rays coming into the sensors. And so by moving the nodal point way out, now the radiation reaching a particular sensor is coming in perpendicular to the sensor. And so that actually has other effects. So here's our little sensing element. And the radiation is coming in this way. And before it might come in at an angle. So particularly near the edge of the sensor, the light is coming from the center of the lens, and it's coming in at an angle. And so we get that effect. Well there's another reason not to want that. And here's one, which is that often the sensor has right in front of it a set of little lenslets, which concentrate the incoming light into a smaller area. So they don't create your image or focus your image or something. What they're doing is they're taking the light that is covering a certain area and concentrating it into smaller area. And this is very common. And why? Well because their circuitry. So the surface of the sensor isn't all sensitive to light. If we look down on it from above, it might look like this. And then there's a lot of switching circuitry and stuff around it. So here's the area that's actually sensitive to light. And then there's this other thing. And there's something called the fill factor. So of course, there are many different designs. But that's something that happens in many designs. So there are different issues. 
One of them is if you image without the lenses, you're throwing away light, not measuring it. What's worse, you have to protect the circuitry from the light, because light goes into the semiconductor, creates electron hole pairs, and oh, if that's right in the middle of your MOSFET, that's not such a good thing. So that means you have to put a metal layer on top of it. So that's one reason why people add the little lenslet arrays. The other reason is aliasing. So those of you may remember in 6003 or some other equivalent signal and system course, that when you sample discretely, you have to be sure that there aren't high frequency components in the signal. And in our case, we could have sharp transitions in brightness from one area to another. And those will create effects, where some high-frequency component looks like a low-frequency component. Maybe aliased down to a lower frequency. And how do you avoid that? Well you low-pass filter first. And then there's a wonderful theorem that says, if you have a low-pass signal, you only need to sample it at twice the bandwidth of the signal, and you can reconstruct it perfectly. And what's the relevance here? Well, if we are not-- if we're measuring over the whole area, we're performing a crude form of low-pass filtering, we're block averaging, which isn't-- you know the real low-pass filter is a sinc function, but we can't build that. So if we have the large pixel, we get a certain amount of low-pass filtering that's advantageous. And fancy cameras have additional mechanisms for this. But if we have these smaller areas, it's more like point sampling. It's more like we didn't low-pass filter, we're just sampling. And that has very bad aliasing effects. So by using this lens array, we're actually using the light from that whole area and measuring it, because we're projecting it onto the sensitive area. And so we reduce the aliasing problem. 
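The effect of that block averaging can be quantified: averaging a sinusoid of frequency f (cycles per pixel) over a full pixel width multiplies its amplitude by sin(pi f)/(pi f), so a component at 0.9 cycles/pixel — which point sampling would alias down to 0.1 cycles/pixel at full strength — is strongly attenuated first. A small sketch (the function name is mine):

```python
import math

def box_average_gain(f):
    """Amplitude gain for a sinusoid of frequency f (cycles per
    pixel) after averaging over a full pixel width -- the crude
    low-pass filter performed by a 100%-fill-factor sensor."""
    if f == 0:
        return 1.0
    return math.sin(math.pi * f) / (math.pi * f)

# A component at 0.9 cycles/pixel is above the Nyquist rate of 0.5,
# so point sampling aliases it to 0.1 cycles/pixel unattenuated.
# Block averaging first knocks it down to about 11% of its amplitude:
print(box_average_gain(0.9))   # ~0.109
```

An ideal sinc low-pass filter would remove the component entirely; block averaging only attenuates it, which is why it is called a crude form of low-pass filtering.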
Now this works great if light is coming in more or less perpendicular to the surface. It's not so good if the light is coming in at an angle. Because then you'd have to somehow change the scale of the lenslet array. And even then, you can't make it work correctly, because there's a spread. The lens has areas that are in different directions. So anyway, cut a long story short, there are several reasons why people like light to come in perpendicular to the sensor, starting with the cosine to the fourth. And so, in high quality digital SLRs, the lenses tend to be image space telecentric, or at least partially. I mean, they don't actually move the center of projection all the way to infinity, but they move it far enough out that that cosine to the fourth becomes negligible, and we have those effects. So that's telecentric lenses. And these used to not be available. And it took a while for people to figure out how to design them. But now they are all the rage. So double telecentric. So where are we going with this? Orthographic projection. So we said that we no longer have a dependence on distance. That in the object space telecentric device, an object of a certain size will be imaged the same size independent of its distance. The sharpness of the image will change, just as in a normal lens. But the size will be the same. And similarly for the image space, telecentric. So what we're really doing is taking perspective projection equation and making the focal length huge, so that our center of projection is far away. And in effect, we can then pretend that we're dealing with orthographic projection rather than a perspective projection. So we saw that perspective projection was quite useful in a way. So this is where we started. Because we had this dependence on depth, and particularly in terms of motion that was helpful. But now, suppose that the changes in depth in the scene are much smaller than the depth itself. Then we can write x is F over z0, times x. 
So if z is approximately-- so this is another way to get to orthographic projection. We can make z pretty much constant. And how can we do that? Well one way is to add a very large number to z. And that's essentially what happens when we move the center of projection. We're adding a very large number to z. And so, some small variations in z aren't going to make any difference. The projection is pretty much independent of the position. And so, we have a linear relationship between x and y in the world and x and y in the image. And amongst other things, this means we can measure distances, sizes of objects, independent of how far away they are. Now in many cases, it's convenient to just pretend that that scale factor is 1. And often we'll just use that version of it. Hmm. I guess it's 12:30. So that's where we're going to go. Orthographic projection is useful in practice with telecentric lenses. And it's also going to be greatly simplifying some of the problems we're going to work on. That's not-- so this is a little bit like the Lambertian thing. A lot of people say, oh, these methods only work for Lambertian. No they don't. They work for everything. It's just that for anything but Lambertian, the equations get messy. And it's the same thing here. The kind of reconstruction we're going to address next can be done under perspective projection. It's just complicated and not very insightful. If the math gets very complicated, you lose track of what you're doing. When we change to orthographic projection, it turns out that many of these problems become quite clear. So there'll be a new homework problem as usual on Thursday. So.
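The orthographic approximation just described — replacing x = fX/Z by x = (f/z0)X when depth variations are small compared with the mean depth z0 — can be checked with a short numerical sketch (hypothetical numbers; function names are mine):

```python
def perspective(X, Z, f=1.0):
    """Ordinary perspective projection x = f X / Z."""
    return f * X / Z

def orthographic(X, z0, f=1.0):
    """Scaled orthographic projection: magnification f/z0 is fixed,
    independent of the actual depth of the point."""
    return (f / z0) * X

# Scene at mean depth z0 = 100 with depth variations of only +/- 1:
x_true = perspective(X=5.0, Z=101.0)
x_orth = orthographic(X=5.0, z0=100.0)
print(x_true, x_orth)   # the two differ by about 1%
```

When the depth variation shrinks relative to z0 — which is what a telecentric lens arranges by pushing the center of projection far away — the discrepancy goes to zero and the linear relationship between world and image coordinates becomes exact.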
MIT 6.801 Machine Vision, Fall 2020
Lecture 17: Photogrammetry, Orientation, Axes of Inertia, Symmetry, Orientation
[SQUEAKING] [RUSTLING] [CLICKING] BERTHOLD HORN: We're switching topics. So far, we've focused on sort of early vision, the low level material, what to do with gray levels, how to detect features of certain types, in particular edges. We talked about very accurately finding edges. We could continue along that line. We could talk about finding so-called interesting points, but let's move on to the next stage where we're actually using that information to achieve some purpose. And so we'll be talking about photogrammetry next. And the name is composed of two roots, which basically mean measurement and images. So it's about measurement from images. And it has its origins in right after photography was invented because people saw the potential for map making. You would send someone up in a hot air balloon, which was popular in those days. And then you have a picture, which is kind of a map if the ground is flat and you had the camera just the right way. If you were over hilly terrain, then taking multiple images might allow you to reconstruct a three-dimensional surface. So that's the kind of thing we'll be looking at. And in particular, we're going to talk about four different problems, which have the names-- and these are just classic problems from photogrammetry, which have been reinvented by people in machine vision and different given different names. But basically, that's what they are. So this is the one we'll talk about first. And it's about finding the relationship between something in 3D and something else in 3D. And it could be, for example, that we have two different coordinated systems, two different sensors. Imagine the autonomous vehicle with two laser sensors. And we're trying to relate those to coordinate systems. So they're different kind of tests. One is how do you use the measurements from these two coordinate systems to get 3D information. But before you do that, you first have to figure out how those two systems are related. 
So you need to find out the coordinate transformation from one sensor coordinate system to the other. So that's one kind of problem. And the symmetrical problem to that is where we have a single coordinate system, but either they are two objects or the object moves. And we want to know the relationship between before and after. And it's actually just the same problem, and we'll talk about that. There are some advantages to attacking the 3D problem first. One is that it has a closed-form solution. It's always nice even though, in some sense in machine vision, we often are more concerned with the second problem, which is 2D to 2D, right? So the main sense that we're concerned with is a camera, and we get images, 2D images. And for example, we might have two cameras, get two 2D images. And we'd like to use them to recover the third dimension. So that's binocular stereo, for example. That's just one application. And again, there are two types of problems, two types of applications. One is how do I recover the three-dimensional information from those two images. And the other one is how do I first get the two cameras lined up, so that I can do that. How do I find the relationship between these two cameras, where they are in space and how they are rotated, so exterior orientation. So that it is a mixed problem, 2D to 3D. And this has to do with the situation where we have an image and we also have a 3D model of the thing that we're looking at. And so we're trying to figure out where we are and, again, how we're oriented in space. We're going to keep on hearing about rotation and orientation because that's a problem in all of these areas. And so I don't know. You could imagine a drone flying over this are, and you have a map of this area. And you're trying to figure out from the image where you are. So that'd be a difficult problem of that form. Or you're on a spacecraft, and you're staring at the stars. 
And you're trying to figure out the rotation in space of your spacecraft. And then the last one is very similar. And I just write it the other way around, 3D to 2D. And that is interior orientation. Exterior and interior refers to the camera. So interior orientation is basically your camera calibration. So we make the camera to certain specs. We have the lenses with certain focal length. And we have some kind of metal case that puts the image in a position relative to the lens, but it's never going to be exactly according to spec. So in order to make very precise measurements using these cameras, we need to first figure out their properties. And we already talked about finding the principle point, and that's form of interior orientation. So since I have something here, I should have something in the last one. OK. And in terms of applications, it's kind of obvious. The original application in photogrammetry was making topographic maps. So you'd fly a plane along some predetermined path taking pictures at regular intervals, and then a person would look through an instrument sort of like binoculars. Except you're presenting two of these images, and the person's eye-brain combination lines them up and basically determines the height at any particular point. And in order to plot the map, there would be an artificial point where you've introduced two points in the two images the person is looking at. And their position determines the height. And so the person basically places that artificial point on the surface they see and moves it around at that height and thereby creating a contour. And of course, we don't do that anymore. It's all now automated using correlation, then convolution. But before we do that, we need to figure out how the cameras relate to each other. OK, just to give a really simple example of the kind of thing that happens in photogrammetry-- so humans have many depth cues, a dozen or so at least. And we already talked about some of them. 
We talked about shading, and we talked about shape from shading. And another one is binocular stereo. So if you talk to the woman or man in the street and ask them, how do we see 3D, typically they say, oh, we have two eyes. So this is an important depth cue at least from the point of view of how people think we see 3D even if it's not the most important depth cue. But let's take a look at that and look at it in a highly simplified special case situation. So in this situation, we have two cameras. And we've lined them up so that they are perpendicular to the baseline. So each of the optical axes is perpendicular to the baseline. And the optical axes are parallel. And the lenses have the same focal length and so on. So in practice, we need to be more careful. And we need to allow for differences between the cameras. And typically, it'll be not very practical to perfectly align them up this way. There's someone at Draper Lab-- Sutro was his name-- who spent a long time working on stereo for Mars rovers long before there really was a Mars rover. And he was concerned with getting these cameras lined up. And let's just say that he never succeeded. It's too difficult to get them lined up accurately enough. And so what we're going to do later is actually figure out how these cameras relate to each other. But just to get a feel for this, let's look at this situation. And we'll pick an origin that's halfway between the lenses. And the baseline length is b, so b over 2 from each of those. And then let's suppose that they're a distance f from their respective image planes, where f is, as we know, not the focal length, but slightly larger. And then we've picked some point out here. And we'll image it in both cameras. We're leaving out the y direction, which is perpendicular to the blackboard because that's less interesting from this point of view. And so up here, we have a point at X and Z. 
And our task is, from xl and xr, the position in those two image planes, to determine big X and big Z. So we have the two images. The object does not image in the same place in the two images. And from the difference, we can calculate where it is, whereas from a monocular image, of course, we have that scale factor ambiguity. We can tell what direction something lies in, but we have no idea how far away it is. Adding the second eye or lens gives us that ability. Well, we just use similar triangles. So using similar triangles-- because x and y are relative to this origin. And so if we want to have this distance, we need to subtract out half the baseline. And similarly, on the other side, we have to add in half the baseline. OK, so two equations, two unknowns, we can solve. So the depth is inversely proportional to the disparity. So the disparity is the discrepancy in position if you were to superimpose the left and the right image. And so that gives us directly the depth, the distance. And that also allows us to determine the sensitivity to error. Obviously, if we're very far away, the disparity will be very small. And then any small error in measuring the disparity is going to produce a large error in depth. OK, and xr plus xl-- and again, the disparity comes into it and the baseline. And this time, we're sort of taking an average of the positions. So we can determine the 3D position-- well, we've left out y, but that's pretty easy-- if we have those two measurements. And one thing is clear. If we increase the baseline, the disparity gets larger, and so then the measurement will be more accurate. If we increase the focal length, then the disparity will be larger, and we improve the measurement accuracy. But of course, you have other constraints on these quantities. You don't want the two cameras to be too far apart if possible. 
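The similar-triangles result can be packaged directly: with the origin midway between the lenses, xl/f = (X + b/2)/Z and xr/f = (X - b/2)/Z, so Z = f b / (xl - xr) and X = b (xl + xr) / (2 (xl - xr)). A sketch with made-up numbers, valid only in this idealized geometry:

```python
def triangulate(xl, xr, b, f):
    """Recover (X, Z) from left/right image coordinates in the
    idealized stereo geometry: parallel optical axes perpendicular
    to the baseline, equal principal distance f, baseline length b,
    origin midway between the lenses."""
    d = xl - xr                      # disparity
    if d <= 0:
        raise ValueError("disparity must be positive for a point in front of the cameras")
    Z = f * b / d
    X = b * (xl + xr) / (2.0 * d)
    return X, Z

# Forward-project a known point, then recover it.
b, f = 1.0, 0.05
xl = f * (2.0 + b / 2) / 10.0
xr = f * (2.0 - b / 2) / 10.0
print(triangulate(xl, xr, b, f))   # -> approximately (2.0, 10.0)
```

Note how the inverse dependence of Z on the disparity d shows the sensitivity to error directly: for distant points d is small, and a fixed measurement error in d produces a large error in Z.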
So for example, in autonomous vehicles, it'd be nice if the two cameras could be close enough together, so you could hide them behind the rear view mirror, rather than have them go all the way across. And if they are that far apart, then you have a calibration problem because they're unlikely to stay perfectly lined up over the lifetime of the car. So you'd have to have some automated way of rediscovering the relationship between those two coordinate systems. OK, so this is a very special case, not one that one could rely on in practice, as I mentioned. And so we'll have to talk about what happens if these two cameras are not exactly equal and they're not exactly arranged in this geometry. What if there's a rotation between the two cameras? What if the baseline is not perpendicular to the optical axis and so on? How do you calibrate for that? And that's going to be the topic of a later conversation. So let's dive right in, problem number one. Let's look at absolute orientations. And one version of that is we have two LIDARs, laser range-finding imaging devices, that give us 3D coordinates. And we're going to use that information to get a picture of the world around our autonomous vehicle or whatever. And to do so, we need to get the relationship between those two systems. I mean, we try to line them up in some reasonable way, but we can't really depend on it. In the case of the topographic reconstruction from aerial photography, it's obviously even worse because the two camera positions aren't connected by some rigid object. The plane flies along, takes a picture. It flies along, takes a picture. And the pilot, of course, tries to maintain a constant altitude, but the air is not a constant medium. So the orientation of the plane is not going to be maintained exactly. And so we need to compensate for that. Notice that, in all of this, we're presupposing that we somehow found points. 
And we've seen how to find edges very accurately, and that's one type of input that we can use. Similarly, there are methods for finding quote, "interesting" points. And you know, SIFT is an example. And I think I have David Lowe's paper on the stellar site. It's a little opaque. And since he wrote it and patented it, there have been many other variants that typically are computationally much faster, but may not be as accurate and so on. But anyway, we're assuming here that there's some way of finding interesting points and finding them again in another image. So we're leaving aside the matching problem. We're assuming that's been done. And so, now, we just have the problem of here's a bunch of coordinates in the two systems. What do we do with them? So I'm going to call these left and right coordinate systems just because one of the big applications here is to binocular stereo. But of course, they could be before and after. So the vehicle took a picture of the world, then it moved, then it took another picture. So don't assume that, because it's l and r, that that's-- so here's a point in the environment. And we're going to find we'll need more than one point, so let's put a subscript on them. And we measure it in this coordinate system, and we get rli. And we measure it in this, and we get the vector rri. And again, there's this duality where we could be talking about one object measured in two different coordinate systems, or we could be talking about two objects measured in the same coordinate system. And this is sort of like, is the camera moving or the object moving? It doesn't matter. We just occasionally need to be careful about the sign because the sign will flip. OK, so now, if we know where those coordinate systems are, we have a very happy situation. We simply project out this ray into 3D and this ray in 3D, find out where they intersect, and we know pi is, that point i. And that's sort of ultimately what we want to do. 
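Once the transformation between the systems is known, each pair of measurements defines two rays in space, and the reconstructed point can be taken where the rays come closest together. A minimal sketch of that little geometry problem — the midpoint of the shortest segment between two lines, found by requiring the connecting segment to be perpendicular to both directions (plain lists, names mine):

```python
def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def closest_point_between_rays(p, u, q, v):
    """Midpoint of the shortest segment between the lines
    p + t*u and q + s*v (assumed not parallel)."""
    w0 = [pi - qi for pi, qi in zip(p, q)]
    a, b, c = dot(u, u), dot(u, v), dot(v, v)
    d, e = dot(u, w0), dot(v, w0)
    den = a * c - b * b        # zero only for parallel directions
    t = (b * e - c * d) / den
    s = (a * e - b * d) / den
    P = [pi + t * ui for pi, ui in zip(p, u)]
    Q = [qi + s * vi for qi, vi in zip(q, v)]
    return [(Pi + Qi) / 2.0 for Pi, Qi in zip(P, Q)]

# Two rays that actually pass through (1, 1, 0):
print(closest_point_between_rays(
    (0.0, 0.0, 0.0), (1.0, 1.0, 0.0),     # ray from one sensor
    (2.0, 0.0, 0.0), (-1.0, 1.0, 0.0)))   # ray from the other
# [1.0, 1.0, 0.0]
```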
And they typically won't intersect. So we'll find the point where they have the closest approach, which is where the line connecting them is as short as possible. And that will be a line that is perpendicular to both of them. So there's an interesting little geometry problem there, how to do that. But right now, we're focusing on another problem, which is, before I can project these out into the world to find their intersection, I need to know how these two coordinate systems relate to one another. And so how do three-dimensional coordinate systems relate to each other? Well, there's rotation and there's translation. And so our job is going to be to find the rotation and to find the translation. And we've talked about this before. In 3D, there are 3 degrees of freedom to rotation, and there are 3 degrees of freedom to translation. And that's slightly surprising. You know I'd say, oh, yeah, because it's 3D. So it's got to be three. But as we mentioned in 2D, rotation only has 1 degree of freedom, not 2. And in 4D, it has 6. So it's n times n minus 1 over 2, not n. And so it's a fortuitous accident that, in 3D, 3 times 2 of over 2 is 3. So it's slightly confusing. It makes it look like rotation is simple when it isn't. OK, so we're looking for six numbers. And we will need enough constraint so that we can find them. So we're looking for a transformation of this form where this is a translation which is fixed. That is this formula applies to all the points, Pi and Pj. And this is our rotation. And I've purposefully written it this way with a parenthesis to indicate that we are not going to insist on that being represented as an orthogonal matrix, which is what would happen if I left out the parentheses. You would assume that I'm talking about a 3 by 3 matrix multiplied by a vector, but we'll be looking at other representation. And we'll see why. OK, so those are the unknowns. So with an equation like this, there are typically two ways of using them. 
The one is you know the rotation and translation. You plunk in r of l, and you compute r of r. That's sort of the easy thing and certainly what we use it for, but that's easy. Our problem is different. We know some rl, rr pairs. That is there are certain points we've measured in both coordinate systems. And our job is to find the rotation and the translation. OK, now, before we go on, we should talk about some properties of rotation. And we already mentioned them, but let me do it again. So if we represent r as a 3 by 3 matrix, it had better be orthonormal. And that means that the dot product of rows is 0 and that the dot product of rows with themselves is 1. And so a quick way of writing that is that way. That is the matrix times its transpose is identity. And you can check that. That means that, I guess, the columns, if we treat the columns as vectors, those are orthogonal. And they're unit vectors. And it turns out that you can prove that, if the columns have that property, then the rows have that property. So we could also write it as r times r transpose equals i, same thing. And then you look at this and say, wait a minute. We've got a 3 by 3 matrix equals a 3 by 3 matrix. That's 9 constraints. And that seems excessive because we only got 9 numbers. So if you take out 9 constraints, there's nothing left, no degrees of freedom. Well, the thing is that this is a symmetric matrix. So it's actually only six constraints, right? Because the diagonal plus, say, the top right triangle is 3 plus 3 is 6. So there's 6 constraints. The 3 by 3 matrix has 9 numbers. We've got 6 constraints. 9 minus 6 is 3, 3 degrees of freedom. So typically, that's where people stop. That's all the conditions we need on the matrix. But actually, it's not. You need an additional constraint, which is different in that it's not the equality of some number to some value. It's really a single bit of information. And why do we need that? Well, here's the example of why we need that. 
So this matrix, if you take its transpose-- it's the same matrix-- and then you multiply it by itself, you get the identity. So this matrix satisfies that first condition, but it's not a rotation, right? Because it's flipping the z direction. So it's a mirror image. It's I'm standing on the floor, and there's another person talking on the other side with his feet up. And that person's left hand would be my right hand and so on. So no rotation would turn me into that other person. It's like turning your left hand into your right hand. It can't be done with just rotation. It can be done trivially with a reflection. And reflections satisfy this. So in order to eliminate reflections, we have to introduce that constraint. So if, say, we set something up like a least squares problem-- we're trying to get these cameras lined up. And we make some measurements, and there's some errors that we're trying to minimize. Well, then we also need to enforce the orthonormality constraint. And annoyingly, we also need to enforce this constraint. and that's hard. So that's one reason why we won't be using orthonormal matrices much. OK. Oh, by the way, this means that the inverse of a rotation matrix is easy to obtain. I mean, we'll use some of these properties as we go along. OK, so one way of thinking about this is in terms of a physical model where we have a cluster of points, a cloud of points that we've, say, got from the left coordinate system. Then we have a cluster of points from the right coordinate system. And we want to superimpose them. We want to line them up as best as possible. And one model for that is, if this is one of the left-hand coordinate system points and this is one of the right-hand coordinate system points after we've transformed it, we want to make that distance as short as possible. And in terms of physical model, we can think of that as a spring that's connecting the two. Why is that? Well, because by Hooke's law-- let's call this distance e for error. 
By Hooke's law, the force is proportional to the displacement from the rest length. Let's assume these springs have a rest length of 0. And then the energy is the integral of that. So it's a half the displacement squared. And so we can think of this as an energy minimizing problem, which is what a physical system like that would do. It would want to adjust itself to minimize the energy. And we've run across that in a way in that last patent-- not the last, the one before last where there was a translation and rotation and scaling based on attraction of points to corresponding points and so on. OK, so we can think of this and we can approach this in a sort of least squares way. So what we're trying to do might be this. So we define some kind of error. So we choose to transform the vector in the left coordinate system by rotation and translation into the coordinate system of the right system, and then we compare it with what we get there. And ideally, they should be the same. And if they're not, then we get some error. And now, this introduces some sort of apparent bias, that we're treating one of the two coordinate systems differently from the other. And that's undesirable. You know, like, why should one coordinate system have preference over the other? So we want to be sure when we're done to check that, if we had found the transformation from the right to the left system, it would be the exact inverse of what we found going from the left to the right system. Otherwise, there's something wrong. And we already discussed this for example in line fitting. y equals mx plus c is not desirable because it pretends that there's no error in x and there's all the error in y when, in many of our applications, the error is in image position, which includes both x and y. So similarly, here, we'll be looking at transformations and methods that have that property that there's that symmetry.
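Since the lecture is spoken over blackboard formulas, here is a small NumPy sketch of the two rotation conditions and the spring-energy view of the error. The function names and example data are mine, not from the lecture:

```python
import numpy as np

def is_rotation(R, tol=1e-9):
    # Orthonormality: R R^T = I (six constraints), plus the extra single
    # bit of information det R = +1 that rules out reflections.
    orthonormal = np.allclose(R @ R.T, np.eye(3), atol=tol)
    proper = np.isclose(np.linalg.det(R), 1.0, atol=tol)
    return bool(orthonormal and proper)

def total_energy(R, r0, left_pts, right_pts):
    # Hooke's-law energy with rest length 0: sum of (1/2) e_i^2,
    # where e_i = r_r,i - (R r_l,i + r0) is the spring stretch.
    e = right_pts - (left_pts @ R.T + r0)
    return 0.5 * float(np.sum(e * e))

# A 90-degree rotation about z passes the test ...
Rz = np.array([[0.0, -1.0, 0.0],
               [1.0,  0.0, 0.0],
               [0.0,  0.0, 1.0]])
# ... while the floor-mirror reflection from the lecture does not.
mirror = np.diag([1.0, 1.0, -1.0])
```

With perfectly matched point pairs the energy is exactly zero; any misalignment stretches the springs and the total energy grows quadratically.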
And that symmetry is not apparent in that formula, so we'll have to actually check for it. OK, so what we want to do is, of course, minimize that. And what do we have to play with? Well, we've got r, and we've got the translation r0. So that's sort of, in a nutshell, the problem we're trying to solve. We've got a bunch of corresponding measurements in the two systems. And we're trying to find the coordinate transformation between the two systems using it. And again, as I mentioned, that's sort of the other way of using those equations. Normally, you would be just going in the forward direction of the problem. You plug in your rl, rotate, translate, and you get rr where the rotation and translation are known. And what's not known is the correspondence between the coordinates of points. And here, we're doing the very opposite. We're assuming that we know the coordinates of the points, pairs of them. And we're trying to find the transformation that makes that work. OK, well, there are a bunch of methods we could use to do this. One of them is to separate the problem of finding the translation from the problem of finding the rotation. And actually, all of the methods we'll look at take that approach because it greatly simplifies the problem. We, in each case now, only have to deal with 3 degrees of freedom at a time. And one way we can do that is to pick some sort of reference point in each of these point clouds and go with that. And so here's a way of taking, say, all of the left points and constructing a coordinate system in it. So let's suppose we have point 1 here. So this is a measurement of some point. And we're going to use that as the origin. So out of all the points we've measured, we pick one and declare that to be the origin-- OK, step one. Then we look at a second point, and we connect these two by a line. And that's going to be one of our axes. Now, the separation between 1 and 2 won't be a unit.
So we don't use the actual difference between 2 and 1 as our coordinate. We normalize it. OK, so let's write it over here, x hat unit vector rl2 minus rl1. Oh, sorry. Let me just define x and then define x hat in terms of x-- save some writing. OK, now, we take a third point. Now, we can't sort of connect these up and say, OK, that's the y-axis because then x and y-axis won't be perpendicular. But what we can do is decide that the plane defined by 1, 2, and 3 is the xy plane. So we can certainly do that. And actually, then based on that idea, we can just remove, from the vector from 1 to 3, the component that is in the direction of 1 to 2. And what's left is going to be automatically perpendicular to it. And that's the y-axis. So we pick y equals rl3 minus rl1. And then we find the component of that rl3 minus rl1 in the direction of x hat. So we picked this vector from 1 to 3. But we take out a component that's in the x-axis direction, which we do simply by taking the dot product of that vector with the unit vector in x direction. That gives us a scalar. And then we multiply that by the unit vector in the x direction. So that removes this component, the one that's parallel to the x-axis. And we're left with something that's perpendicular to the original. And then, again, we can make a unit vector. So this won't be a unit vector in general, of course. OK. And it's going to be easy to show that x hat dot y hat is 0, that they're perpendicular to each other. You can check that very easily just by taking the dot product of this with our definition for x. Or you can just show that the y vector is perpendicular to the x vector, not the unit vectors, really. OK, so we have the origin. We have the x-axis. We have the xy plane. Yeah. So we have an object, and we've identified quote "interesting" points on it. And we've measured them. So it's a cloud of points. And they could be corners, not edges. Because edges don't give us full 3D constraint.
So if our calibration object or whatever was a cube, we might very well have these points. Or if it's someone's head, then it might be the position of the iris of the eye and so on. And we've measured those in both coordinate systems. That's the important thing. But right now, we're just working on the left coordinate system. OK, well, at this point, we define z to be the cross product of x and y because the cross product is perpendicular to both vectors in the cross product. So z is going to be perpendicular to both x and y. So now, we've got a whole coordinate system. We have a triad of unit vectors that define the coordinate system. So we've done that for the left, and we only need three points. So if we measure a whole bunch of points, we don't need them all. We just need three. Now, we do the same thing for the right coordinate system. And you know, I'm not going to go through the process again. We'll just get an Xr hat, a Yr hat, and a Zr hat. And I suppose I should have called the earlier ones x left hat, y left hat, and z left hat. So what I do is I take my two clouds of points measured in these two coordinate systems, and I use them to build axes: an x-axis, a y-axis, and a z-axis. And now, all I need to do-- quote "all I need to do"-- is to map those unit vectors into each other. So I need to find a transformation that puts xl into Xr and the transformation that puts yl into Yr and so on. OK, now, because I've artificially removed a translation by picking one of the points as the origin, I don't have to worry about translation for the moment. I only need to deal with rotation. I've separated that problem. So I've got then Xr hat is Rxl hat, right? So the unknown rotation R does this, and it also does that. I mean, if I just had that first line, that wouldn't pin down R because there are lots of ways of rotating one vector into another. It's not unique. So that's what I expect. And now, my problem is solving for R.
So I've got these three vector matrix equations, but I think you can see that we can compose them. We can stick them together into one equation. So we could have Xr hat, a Yr hat, a Zr hat. That's now a matrix, right? Because in the representation we're using, these are column vectors. So this is a 3 by 3 matrix and the same for the right-hand side. And I think you'll agree that this single equation is equivalent to these separate equations. Because if I multiply the matrix by that first vector, I get the first vector over here and so on. So that makes the answer trivial, right? R is-- let's see, I need to post-multiply. Right. Because if I multiply both sides of that equation up here by the inverse of this matrix, it'll drop out over here. It's multiplied by its own inverse, so it gives identity. And then over here on this side, I get this expression. Does the inverse exist? We always ask that question. When does this fail? Well, it should exist. Because by construction, those things are orthogonal to each other. So you've got kind of an ideal case where the three columns of the matrix are orthogonal to each other as opposed to being linearly dependent. That's the other extreme end of the spectrum. OK, so there we are. We solved the problem. So we said that, if we remove translation for the moment, we have 3 degrees of freedom left to find. And then we said, OK, we have three correspondences between the two coordinate systems. So that sounds about right, three constraints, three unknowns. But those constraints are much more powerful because they are not just a scalar equals a scalar. They are vector equals a vector. So each of them is worth 3 scalar constraints. So we actually have 9 constraints. We're saying that point number 1 transforms into point number 1 in the other system. Point number 2 transforms into-- and so on. And so each of those are vector equalities, you know? Rl of 1 is going to map into Rr of 1 and so on. So we have three vector equalities.
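The construction just described can be written out compactly. Here is a sketch of method one (function names are mine; it assumes the three points are not collinear):

```python
import numpy as np

def triad(p1, p2, p3):
    # Build an orthonormal coordinate system from three points:
    # p1 is the origin, p1->p2 fixes x, p3 fixes the xy-plane,
    # and z is the cross product of x and y.
    x = p2 - p1
    x_hat = x / np.linalg.norm(x)
    y = (p3 - p1) - np.dot(p3 - p1, x_hat) * x_hat  # strip the x component
    y_hat = y / np.linalg.norm(y)
    z_hat = np.cross(x_hat, y_hat)
    return np.column_stack([x_hat, y_hat, z_hat])   # columns are the axes

def method_one(left_pts, right_pts):
    # R maps the left triad into the right triad: R = M_r M_l^{-1}.
    # M_l is orthonormal by construction, so its inverse is its transpose.
    M_l = triad(left_pts[0], left_pts[1], left_pts[2])
    M_r = triad(right_pts[0], right_pts[1], right_pts[2])
    return M_r @ M_l.T
```

Because the triads are built from differences of points, a common translation of either cloud drops out, which is exactly the separation of translation from rotation described above.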
Or if you like, how much constraint do you need to pin a point down in 3D? Well, three constraints, right? You need x, y, and z or some other three numbers, like distances from three-- I don't know-- Wi-Fi routers or something. You need three things. So in this case, we've got 9. But wait a minute, rotation only has 3 degrees of freedom. So what's going on? It's excessive. So we don't need three points. We can just stop with two, right? Well, but we said with two we just get the x-axis. And then the whole thing can still rotate about the x-axis, so that's not good enough. We do actually need all three. But what's going on is that we are not using all the information we get, right? So let's make a tally here. So for point number 1, we are using all three of its components. So this provides three constraints. Then this one here, we're not using the distance from 1 to 2. We're only using the unit vector in that direction. So it's a direction. So it's really providing only two. And then this one here is even worse. We're actually only using one constraint from that. And so the total is 3 plus 2 plus 1 is 6. Oh, well, that includes the translation. So that explains some of what's going on, that this particular construction does give us a rotation, but we're not using the information from the points equally. So we've made some arbitrary decision. We picked number 1, and we're using all of its information-- so that seems wrong, right? Because I mean, who's to say which point's more important than another? I mean, there might be a weight associated with it because, when you measured them, you found one point that's really distinctive and you're very certain about where it is. In some other matches, that might not be so good. But that's a different issue. OK, so that's one thing about this that's less than optimal. So another one is what if I picked this point first? So this is my 1. And that's my 2, and that's my 3. Will I get the same transformation?
No, I won't get the same transformation because I've selectively neglected information. And now, I am selectively neglecting it in a different way. I'm throwing away something else. And so that also tells you that this isn't ideal. Now, in practice also, we might be inclined to measure more than three correspondences. If we have these two LIDARs on the vehicle, they're giving us hundreds of points. And there will be some error in all of those measurements. But again, as we noted, if you have lots of points, least squares or some other method can dramatically improve your accuracy if you can make use of all of those correspondences. So this doesn't. It just uses three points. And yeah, you could now repeat it picking three other points and do that lots of different ways and averaging. And that would be better, but it would be kludgy, ad hoc. OK, so ad hoc method number one, it's-- and in some sense, this might be a way to go if you needed an initial guess or if you needed a really rough answer. Say you had a more precise-- so this could be like PatQuick and PatMax where you use one to give you a fast approximate answer, and then you use that as an initial condition for something else that gives you a more precise answer. OK, so here's method number two. And we briefly talked about this in 2D, so let me just refresh your memory because we didn't really spend a lot of time on it. The idea was that we had some sort of blob, a binary image. And we're trying to make some measurement on it that could help us identify or align. And we said that one way of doing that is to look at the axis of inertia, right? So if I have a cloud of points as opposed to some binary image, I can do the same thing. I can find the axes about which the-- oh, wrong way around. Excuse me-- minimum axis, maximum. So why do I know that's the minimum? Well, because the inertia depends on the integral of the distance squared times the mass. Let me put r.
And so about this axis, these distances are all very short. So the sum of squares of those distances is going to be small, whereas about this axis at right angles, a lot of the distances are large. So I'm going to get a huge inertia. And you can show that they have to be at right angles to each other. And you can show that you can find them by finding the eigenvectors and eigenvalues of a 2 by 2 matrix. OK-- so 2D. Let's go to 3D with the same idea. So there's some object in space, a bunch of points. So we've identified these points in a way that we can find them again in another image. Now, we're trying to establish a coordinate system. Because we know that, once we've got a coordinate system in the left data and the right data, we just have to correlate those two coordinate systems. And over there, we use that method to find a coordinate system. But we could also look for axis of minimum inertia in 3D just as we do in 2D. And in 3D, we're then also going to find that there's a perpendicular axis that has a maximum inertia. And the difference about 3D is that there's a third axis, which is a saddle point. So if you move the axis in one direction, the inertia gets bigger in a quadratic way. And if you move it in another direction at right angles, the inertia gets smaller in a quadratic way. So it's sort of like that, kind of hard to draw. But here's the saddle point axis. And then if you move it in this direction, the inertia goes down, same on this side. But then if you move it at right angles, the inertia goes up quadratic. Anyway, so if we can figure out what those axes are, we should be-- well, let's do the math. It's not that hard. OK, so what we're looking for is expression for the distance from the axis. Because, again, the inertia is the integral of that distance squared over the object. OK. So that means we need to figure out a formula for-- so this is the point r, vector r. And here's the axes. And we need to find that distance. 
Now, I'm going to cheat a little bit because I'm going to pick the centroid as the origin, same trick. I want to separate the problem of finding the translation from the problem of finding the rotation. And we know in 2D that the answer is that it goes through the centroid. And then we had that formula with-- we knew the sine and the cosine of the angle, so we could compute the angle. OK, so we're going to do the same thing here in 3D. Let me blow up that picture a little bit. So this is the distance r. This is our vector r. This is our axis. Let's call it omega hat is the direction of the axis. And here is r prime. We drop it perpendicular on the axis. And what we want to know is the distance between r and r prime. So what is r prime? So let's see, we need to find the component of r in the direction omega. OK, that's that component. And we're going to subtract that out. No, I'm already one step ahead. Let me say r prime is just r projected onto omega. And this is actually our r minus r prime. So this is the length we're looking for. OK, and so we need r squared. Don't worry too much about this algebra because we won't be using this again. But at least try and follow it, and tell me when I screw up. So if I multiply this out, I get r dot r. And then I get minus 2 r dot omega hat squared plus r dot omega hat squared. That's because there's an omega hat dot omega hat. This is a unit vector, so that's just a 1. So I can just forget about that. Well, and since the two things have the same form, so I end up with r dot r minus r dot omega hat squared. So that's my distance from the axis. So that's what's in the formula for inertia. And so I'm going to then say that the inertia-- and what I'm interested in is how that varies as I change omega. So omega says, unit vector, they can point in any direction. And I'm looking for the directions where that value takes on an extreme one or maybe a saddle point. 
But starting off with just the minimum, I want to find where the minimum. And, well, that doesn't look particularly simple, but we can rewrite this. So first of all, let me take this and rewrite it. Now, dot products are commutative. So I can write it this way. I can write that. And then we have this notation where we write the dot product in that form. And these multiplications of the skinny matrices are associative, so I can write it that way if I want to. And this, of course, is the dyadic product, which we've run into a few times before. It's a 3 by 3 matrix, a very simple structure. OK, now for my next step, I'd like to rewrite this also. So I'm going to say that's the same as r dot r into omega dot omega. Well, in fact, you know, I had this over here. I just dropped the omega dot omega because it's 1, but let me put it back in again. OK, then that is r transpose r omega transpose. Oh, sorry. So we're going to do the double integral of that-- the triple integral of that. And so that I can write as r dot r into-- this I is the identity matrix. And I wrote the inertia with the blackboard bold I, so that it wouldn't get confused with this. OK. So now, I've got the whole thing is omega transpose times omega. And I have two terms. I have this thing, which is the identity matrix-- the integral of r dot r times the identity matrix. And then I have this thing, which is rr transpose. So given the points, I can-- well, I've written it as an integral. Of course, if we have a finite number of points, it would be just a sum. So this first part is just an identity matrix multiplied by some scalar. And this is a dyadic product. And so, what I'm looking for is an extremum-- of that whole expression. Now, I've run out of different versions of i. So I don't know, let me call it A. So we have something of this form. And I'm looking for, let's say, a minimum. And even though it says A, this is called the inertia matrix. 
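For a discrete cloud the integral becomes a sum, so the inertia matrix is easy to compute directly. A sketch (names mine) that also checks the quadratic form against the raw sum of squared distances:

```python
import numpy as np

def inertia_matrix(points):
    # A = sum_i ( (r_i . r_i) I - r_i r_i^T ), the 3x3 inertia matrix
    # of a point cloud (the lecture's integral, written as a sum).
    A = np.zeros((3, 3))
    for r in points:
        A += np.dot(r, r) * np.eye(3) - np.outer(r, r)
    return A

def inertia_about(points, omega_hat):
    # Direct sum of squared distances from the axis through the origin
    # in direction omega_hat: sum of r.r - (r . omega_hat)^2.
    return sum(np.dot(r, r) - np.dot(r, omega_hat) ** 2 for r in points)
```

The quadratic form omega^T A omega reproduces the summed squared distances for any unit axis omega, which is what turns the extremum problem into an eigenvector problem.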
So given some three-dimensional shape, I can compute this 3 by 3 inertia matrix. Or given a cloud of points in 3D, I can compute this matrix. This is just a detail of how to do it from the points. And so what is that problem? Well, that's just the classic eigenvalue eigenvector problem, right? The axis I want for the minimization is the eigenvector associated with the smallest eigenvalue, obviously. And then I can look for, also, the other axis that maximizes it. And that's the eigenvector corresponding to the largest eigenvalue. And then there's a third one, which is in between. And those three axes are at right angles to each other. In some nasty cases, they're not, but we can make them in that case at right angles to each other. So it's a very simple eigenvalue eigenvector problem. Now, finding eigenvalues and eigenvectors of 3 by 3 matrices is a little bit more work than 2 by 2, which we did by hand. But of course, you know, you've got all sorts of tools for doing this kind of stuff. And there is actually a closed form solution. Why? Well, because polynomials of degree 3 do have a closed form solution even though we generally only remember the formula for the quadratic. OK, so what are we doing? So what we've done now is we've taken the point cloud in the left coordinate system, and we've built a coordinate system basis in it. We've got three axes that are at right angles to each other and just based on the distribution of points. And so we have a left-hand coordinate system based on these three vectors, the ones that minimize, the ones that maximize, and the ones at right angle, that saddle point. Now, we do the same thing for the right-hand coordinate system, same construction, right? If there's an elongated point cloud, the x-axis is going to be along the length of that cloud and so on. And then we just use the method we had before. I guess it's gone, but the same method we used for method number one.
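Method two can be sketched as follows. One detail the lecture does not dwell on: each eigenvector is only defined up to sign, so this version simply tries every proper (det = +1) sign choice and keeps the one that best fits the correspondences. The names and the sign-resolution shortcut are mine:

```python
import numpy as np
from itertools import product

def principal_axes(points):
    # Eigenvectors of the inertia matrix of the centered cloud,
    # columns ordered by ascending eigenvalue (min, saddle, max inertia).
    c = points - points.mean(axis=0)
    A = np.zeros((3, 3))
    for r in c:
        A += np.dot(r, r) * np.eye(3) - np.outer(r, r)
    _, vecs = np.linalg.eigh(A)
    return vecs

def method_two(left_pts, right_pts):
    V_l = principal_axes(left_pts)
    V_r = principal_axes(right_pts)
    cl = left_pts - left_pts.mean(axis=0)
    cr = right_pts - right_pts.mean(axis=0)
    best_R, best_err = None, np.inf
    for signs in product([1.0, -1.0], repeat=3):
        R = V_r @ np.diag(signs) @ V_l.T
        if np.linalg.det(R) < 0.0:          # reject reflections
            continue
        err = np.sum((cr - cl @ R.T) ** 2)  # how well correspondences line up
        if err < best_err:
            best_R, best_err = R, err
    return best_R
```

This needs the cloud to have three distinct principal inertias; a nearly symmetric cloud makes the eigenvectors ill-defined, which is precisely the failure mode discussed next.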
And that gives us then another way of relating the two coordinate systems or relating the two positions of the object because those problems are duals to each other. So this sounds a little bit more reasonable. Because first of all, it treats all the points equally. They are all thrown into the pot into these sums. We're not picking one and using all of its information and only using a little bit of the second point. Also, we're using all the points. And we've constructed a least squares problem that performs some sort of minimization. So we're using all of the points. So what's the problem? Why is this not generally used? Well, for certain kinds of problems, like lining up cryogenic electron microscope images, this is fine. This works. It's not always ideal. And one reason is that it fails under symmetry. Well, suppose I have a sphere. It doesn't have a minimum, maximum, and saddle point axis because its inertia about any axis is the same. And then you might say, well, that's not fair. A sphere is completely symmetrical, so we have to accept that. So we might grudgingly say, OK, so it doesn't work for a sphere, but that's sort of reasonable. Well, what if I told you that, if I give it a tetrahedron, it also won't work? It turns out, if you compute its inertia matrix, its inertia is the same in all directions and the same with the octahedron. So these are just special cases. I mean, there are obviously an infinite number of figures that have that problem. Again, the inertia of this, surprisingly, is independent of the direction. Even more surprising, if I take a cube, now you'd think with a cube-- you know, there are three well-defined axes. So what's going on there? Well, the trouble is that the inertia about those three well-defined axes are all the same if it's a cube. If it's a brick, then you're fine because the three axes will have different inertias. And you can line things up that way.
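These claims are easy to check numerically with the inertia matrix from before. A quick check of my own, using just the corner points of a brick and of a cube:

```python
import numpy as np
from itertools import product

def inertia_eigs(points):
    # Eigenvalues of the inertia matrix of the centered cloud.
    pts = np.asarray(points, dtype=float)
    pts = pts - pts.mean(axis=0)
    A = np.zeros((3, 3))
    for r in pts:
        A += np.dot(r, r) * np.eye(3) - np.outer(r, r)
    return np.linalg.eigvalsh(A)

# Corner points of a brick (sides 2 x 1 x 0.5) and of a unit cube.
brick = list(product([-1.0, 1.0], [-0.5, 0.5], [-0.25, 0.25]))
cube = list(product([-0.5, 0.5], repeat=3))

brick_eigs = inertia_eigs(brick)
cube_eigs = inertia_eigs(cube)
```

Printing the two eigenvalue triples makes the contrast plain.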
But with the cube, again somewhat surprisingly, you can pick any axis, you get the same inertia. OK, so what's going on? What have we lost? What have we forgotten? Well, we haven't used all the correspondences. Well, in this particular ad hoc method, we have taken a point cloud. And we've sort of found its elongation and we've built these other axes without thinking about the other point cloud at all. Then we've done the same thing for the other point cloud. But we actually are assuming that we know the correspondences. So we know that this point is here and that point is there. And that's a very different problem, right? Because even with a sphere, I can line these up correctly if I have the correspondences. So this is kind of a two-edged sword because we'll see that there are some methods that don't depend on correspondences. They're very handy. They're very appealing. Because if you screw up the correspondences, they're not going to be affected. On the other hand, you often run into this type of problem where the method won't give you an answer because of some symmetry. And so they're generally not as accurate as methods that do take the correspondences into account. OK. So enough of these ad hoc methods. I bring them up partly because people actually use them and often don't know that there's a problem with them. That's one reason. And the other reason is that sometimes these are useful to give a quick answer, and it's systematic. I mean, there's no iteration or search or anything. You just compute the eigenvalues and eigenvectors, which, for a 3 by 3, is basically a closed form operation. OK, so let's go back. So the problem is not the translation. The translation is going to be pretty easy to deal with. So let's focus on the rotation. So generally speaking, we have a bunch of different methods of representing rotations. So let's, first of all, look at the properties. OK, preserve dot products, that means-- right? That's sort of obvious. And that has two sort of corollaries.
One of them is that they preserve length, which is just what you get if you make a the same as b. And they preserve angles, right? Because, for example-- and what's interesting is they also preserve triple products, so R of a. Now, the triple product of the three vectors is just the volume of the parallelepiped that you draw based on suppose this is vector a. This is vector b. This is vector c. You can imagine drawing this three-dimensional shape, which has parallel surfaces. But it's generally not a brick. It's skewed generally. And this is the volume of that object. And now, imagine taking those three vectors and rotating them all with the same rotation. Well, that just rotates this figure. And it'll retain its volume, right? And that's this quantity. So that makes sense. What's important is that it doesn't flip sign. If we had replaced that rotation by a reflection, that triple product would have flipped sign. And you would say, well, what's a negative volume? Well, it just means that you don't have the vectors in the sequence of the right-hand rule. You have a left-hand rule. So you know, I was talking about reflection in the ground reversing z. Well, obviously now, if I take that triple product-- I suppose I should write it out, a dot b cross c. And if I flip the sign of one of these, I'm going to get a negative sign. So when I say volume, we should really take into account the sign so that the magnitude of that triple product is the actual volume. And the sign of it tells you whether it's been flipped from left-hand to right-hand coordinate system. While I have this here, let me just make sure we understand that there are a large number of equivalent ways of writing the triple product. So in terms of our earlier discussion, all of this corresponds to orthonormality. And this corresponds to that condition about the determinant being equal to plus 1. OK, so those are all the properties we need to proceed. So now, we set it up as a least squares problem.
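These invariances, and the sign flip under reflection, can be checked numerically on arbitrary vectors. A sketch with data of my own choosing:

```python
import numpy as np

def triple(u, v, w):
    # Signed volume of the parallelepiped spanned by u, v, w: u . (v x w).
    return float(np.dot(u, np.cross(v, w)))

theta = 1.1
c, s = np.cos(theta), np.sin(theta)
R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])  # a rotation about z
M = np.diag([1.0, 1.0, -1.0])                               # the floor mirror

a = np.array([1.0, 2.0, 3.0])
b = np.array([-1.0, 0.5, 2.0])
d = np.array([0.3, -2.0, 1.0])
```

Applying R leaves dot products, lengths, and signed volumes unchanged, while the reflection M preserves the first two but negates the triple product.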
We've got correspondences between vectors measured in the two coordinate systems. Then we're trying to find the transformation between them. To do that, we define some sort of error. OK, do I want to start with the [INAUDIBLE]? Oh, OK. So we get rr. So that's the transformation we're dealing with. And we can then define an error for the i-th point. And of course, what we're trying to do is minimize that error. And what we have to choose is the offset and the rotation. OK, so again, we're arbitrarily picking the left coordinate system and mapping it into the right coordinate system and then comparing the result. And they should be the same, ideally. There will be some small error in practice. So we're trying to make that error as small as possible. And we try and find the parameters of the transformation by minimizing that error. OK, now since I know the answer, I can transform things in a way that simplifies the problem. So I just take all of the coordinates in the left-hand coordinate system, add them up, divide by n to get the centroid. And I do the same in the right-hand coordinate system. And I pick those as the origins, basically. So I'm going to subtract out that-- OK. And this is in line with the idea that I want to separate the problem of finding the translation from finding the rotation. So this way I'm going to get rid of the translation. We'll worry about it later. It's very simple. OK, by the way, while we're here, we might wonder what that is. So I've changed all the coordinates to be relative to the centroid. Now, I take them all and add them up. What do I get? Well, you could plug this in here, and you're going to get n copies of that. And you get the sum of these. And then you look at that formula. And when you divide by n, they all cancel out, right? And it's a vector. So one feature of moving to the centroid is that now the centroid's at the origin in the new coordinates. That makes sense, right-- and similarly for the other one.
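Referring each cloud to its centroid is one line of arithmetic. A sketch (names mine) that also checks the sum-to-zero property just derived:

```python
import numpy as np

def center(points):
    # New coordinates r'_i = r_i - r_bar; they sum to zero, i.e. the
    # centroid of the centered cloud sits at the origin.
    points = np.asarray(points, dtype=float)
    centroid = points.mean(axis=0)
    return points - centroid, centroid
```

Keeping the centroid around lets us undo the shift later, once the rotation is known.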
And we need this property in a second as well. OK. So I'm going to plug these new coordinates into my error formula. So to do that, I need to solve this equation for r sub ri. Well, it's hardly worth writing out. It's just moving this to the other side. And if I make those substitutions, then I get this where the offset now is different. And so my new problem is-- so there's a square. And I can multiply these out, but I'm going to group them. So I'm going to treat this as the difference of that thing and that thing and then undo the square. And I get-- now, there are other ways to get here. You can differentiate with respect to the translation, but I think this is the most obvious. OK. Now, over here, this whole center term goes away because it involves the sum of r prime sub ri. That's 0, right? And then it involves the sum of the rotated version of r sub li. And we know that sum is going to be 0 as well. So this whole term disappears. So that makes life a lot simpler. OK, so then the next question is, what should we choose for the offset for the translation r0? Well, this term doesn't depend on the translation at all. So forget about that one for the moment. This one does. It depends on the square of it. So how do we make it as small as possible? 0, right? So now, you can see a couple of things. One of them is over here we find that the optimum solution has r0 prime equal to 0, which means that this is the case. r0 is-- so this is the formula for the translation. Now, of course, right now, we can't use it because we don't know what the rotation is. So we don't know what big R is. But it's very intuitive. It says that the translation is the difference between where the centroid is in the right coordinate system and where the left coordinate system centroid is after you rotate it. So the centroid maps into the centroid. So we've accomplished one objective, which is to separate the problem of finding translation from the problem of finding rotation.
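Once a rotation is in hand, the translation follows from the centroid-to-centroid formula just derived. A sketch (names mine):

```python
import numpy as np

def translation_from_rotation(R, left_pts, right_pts):
    # r0 = r_bar_r - R r_bar_l: the offset that carries the rotated
    # left centroid onto the right centroid.
    return right_pts.mean(axis=0) - R @ left_pts.mean(axis=0)
```

With exact data this recovers the true offset; with noisy data it gives the least-squares optimal one for the given rotation.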
And then we have a formula there. If we ever find the rotation, we just go back to that formula, plug it in to find the translation. OK. And then what's left is-- so basically, that gets rid of this term as well. So that's gone. So we're left with this part and only worry about rotation. So our job is to find-- so our new error-- right? That's just that error term. And we're now going to minimize. And that means-- and I'm just taking this term and squaring it. I suppose I can write this out here. This is dot. So I'm just expanding this out. And so from the first two terms here, dot product of these two gives me that. And then I have the dot product of that with this. That gives me this part, which occurs twice. And then, finally, I have the dot product of these last things. And since we said that rotation preserves length, I might as well compute the length before rotation. So that's very convenient. So these last two collapse into that, and r doesn't appear in there. So this is fixed in the sense that it doesn't depend on the rotation. This is fixed in that it doesn't depend on the rotation. And so we now focus on this term except it's got a minus sign. So instead of minimizing, we want to maximize this. So that's sort of, in a nutshell, our remaining problem. And if we can do that, then we're done. And notice how this intuitively makes sense. Think of the cloud of points, and there's a centroid. Now, connect each of the points in the cloud of points to the centroid. So there's a vector from the centroid out to each point, sort of like a spiky-- what's the term in sushi? Uni, that's it. It's uni. It's a sea urchin, right? It's got these spikes, OK? And so what we're doing here is we're taking one of these sea urchins and we're bringing the other sea urchin into alignment. And the way we're doing it is we're saying that corresponding spines should have a small angle between them. Their dot product should be large, right? 
Because the dot product between two vectors is proportional to the cosine of the angle. So the largest you can make that is 1, which happens when theta equals 0. So this makes perfect sense because we're taking each of the corresponding spines of this spiky ball and saying, put a contribution in that's proportional to the dot product and rotate it to make that as large as possible. So our whole rotation problem has been simplified down to this if we can solve that. And so, well, we are calculus people. So one thing you might think of is-- we've done this kind of thing before-- we'll differentiate with respect to r and set the result equal to 0. And, well, that's not quite right because r is not a scalar. It's not even a vector. We know how to differentiate with respect to a vector. It's a whole matrix. But we could just take the 9 elements of the matrix and string them into a 9 vector and differentiate with respect to the 9 vector. So we could do that. The problem is those 9 numbers aren't independent. They have to satisfy all of those annoying constraints. And so it's actually very hard to impose these constraints subject to R transpose R is I and determinant of R is plus 1. So you can go down that route, but it gets incredibly messy. And so we won't be doing that. We'll be looking at other representations for rotation where this problem comes out very easily. But it's like a lot of things where there's a bit of an effort to develop the technology, the representation. Once you've got the representation, it's trivial. So we'll spend the next class building up that representation. OK.
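For comparison, the maximization of that sum of dot products does have a well-known numerical route via the SVD (the Kabsch construction). This is a sketch of that alternative, not the closed-form quaternion method the lecture develops next class; the function name is mine.

```python
import numpy as np

def best_rotation(left_c, right_c):
    """Proper rotation R maximizing sum_i (R @ l_i) . r_i, where
    left_c, right_c are (n, 3) arrays of centroid-subtracted
    correspondences.  SVD (Kabsch) route, not the quaternion route."""
    H = left_c.T @ right_c                   # 3x3 correlation matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # force det(R) = +1, no reflection
    return Vt.T @ np.diag([1.0, 1.0, d]) @ U.T

# Round trip: recover a known rotation from noiseless correspondences.
rng = np.random.default_rng(1)
left = rng.standard_normal((6, 3))
R_true = np.array([[0., -1., 0.],            # 90 degrees about z
                   [1.,  0., 0.],
                   [0.,  0., 1.]])
right = left @ R_true.T
left_c = left - left.mean(axis=0)
right_c = right - right.mean(axis=0)
R_est = best_rotation(left_c, right_c)
print(np.allclose(R_est, R_true))            # True
```

The determinant correction is exactly the handedness issue mentioned earlier: without it, least squares can happily return a reflection.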
MIT 6.801 Machine Vision, Fall 2020
Lecture 18: Rotation and How to Represent It; Unit Quaternions; the Space of Rotations
BERTHOLD HORN: OK, let's start. This should be a fun diversion from other things we're doing. We'll develop methods that are useful in photogrammetry and robotics, in particular, dealing with rotation, which has some strange properties. So for example, in the case of translation, things are commutative. If I go 5 meters in x and 10 meters in y, I end up in the same place as if I go 10 meters in y and 5 meters in x. This is not the case with rotation. And to illustrate, I suppose that this is the x-axis and this is the y-axis. So I'm going to take this eraser and rotate it 90 degrees about the x-axis like that and then 90 degrees about the y-axis. And I end up there. And I take the equivalent eraser. And I rotate it 90 degrees about the y-axis and then 90 degrees about the x-axis. And I'm there. So fortunately, that worked. So they're not the same. And so it's not commutative. So it's a little bit more difficult to deal with, particularly when it comes to least squares problems and fitting of data to observations. So here is an outline of what we're going to be talking about-- properties of rotation, different representations for rotations, and in particular, the one we're going to pick, which is Hamilton's quaternions. And they are more general. But if we restrict attention to unit quaternions, they map directly onto rotations in 3 space. They allow us to talk about a space of rotation. So for translation, obviously, 3 space is our space of translations. And if we do some optimization, that's the space we're searching in. That's the space we tessellate. That's the space where we get our answers, whereas it's not so obvious with rotation. Say you want to take an average of force on a seat belt over all possible orientations of a car. That's something that, if it was translation, it'd be trivial. You just integrate over some volume of space, divide by its volume. And you're done. So that's something we'll be able to do. 
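The eraser demonstration can be repeated numerically. A small sketch of my own, using rotation matrices (which the lecture gets to properly later): composing the same two 90 degree rotations in the two orders gives different results.

```python
import numpy as np

def rot_x(deg):
    """Rotation matrix about the x-axis by deg degrees."""
    t = np.radians(deg)
    return np.array([[1, 0, 0],
                     [0, np.cos(t), -np.sin(t)],
                     [0, np.sin(t),  np.cos(t)]])

def rot_y(deg):
    """Rotation matrix about the y-axis by deg degrees."""
    t = np.radians(deg)
    return np.array([[ np.cos(t), 0, np.sin(t)],
                     [0, 1, 0],
                     [-np.sin(t), 0, np.cos(t)]])

# Translation commutes; rotation does not:
xy = rot_x(90) @ rot_y(90)   # rotate about y first, then about x
yx = rot_y(90) @ rot_x(90)   # rotate about x first, then about y
print(np.allclose(xy, yx))   # False
```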
Applications are, for us, mostly in photogrammetry, although they're very important in robotics and graphics and control of spacecraft. So we'll talk about some of that. And in particular, using this, we are going to develop a closed-form solution to the problem that we started talking about last time, which is either measurements in two different coordinate systems of the same object or measurements in one coordinate system of two objects or one object that moved. And just for fun at the end, we'll briefly talk about division algebras and space-time. And so here are some of the things I mentioned, where we need to have a good representation for rotation-- obviously, in machine vision, for recognition, and for determining the orientation of an object in space, for the robot arm, and so on-- down to protein folding and cryo-electron microscopy. We all need a good representation for rotation there. And in particular, with robotic situations, we're dealing with motions in space-- Euclidean motions, which can be decomposed into translations and rotations. And the Euclidean motions have the property that they preserve distances between points. They preserve angles between lines. And they preserve handedness, which sometimes we forget. But if we don't put that in, then we're going to get reflections. As you know, most of the biologically important chemicals exist in one particular form. The mirror image form doesn't work. And so it's kind of important to preserve handedness. Put together, those mean that we preserve dot-products. And we preserve triple products. And we're leaving out, by doing this, transformations of free space that would otherwise be perfectly reasonable, such as reflections and skewing and scaling. We'll talk a little bit about scaling. But mostly, for the moment, we'll leave it out. So rotations, first Euler's theorem-- any rotation of a rigid object has the property that there is a line that is not changed. That's the axis. 
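Euler's theorem has a direct numerical reading: the fixed line is the eigenvector of the rotation matrix with eigenvalue +1. A sketch of my own, for a 90 degree rotation about z:

```python
import numpy as np

# Euler's theorem: every rotation leaves one line unchanged -- the axis.
# Numerically, that line is the eigenvector of R with eigenvalue +1.
theta = np.radians(90)
R = np.array([[np.cos(theta), -np.sin(theta), 0],
              [np.sin(theta),  np.cos(theta), 0],
              [0, 0, 1]])                    # 90 degrees about z

w, v = np.linalg.eig(R)                      # eigenvalues are 1, e^{i t}, e^{-i t}
axis = np.real(v[:, np.argmin(np.abs(w - 1))])  # eigenvector for eigenvalue 1
axis = axis / np.linalg.norm(axis)
print(np.allclose(np.abs(axis), [0, 0, 1]))  # True: the axis is +/- z
print(np.allclose(R @ axis, axis))           # True: that line is not changed
```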
Then there's the parallel axis theorem, which says that any rotation about any axis is equivalent to a rotation about an axis through the origin plus a translation. So for us, it's going to be very convenient to separate translation and rotation this way that when we talk about rotation, that we're trying to recover-- we're just going to talk about rotation about an axis through the origin, because we deal with the translation separately. And because of the parallel axis theorem, we can do that. Then if we figure out how a sphere rotates, we've basically figured out how everything rotates, because we can-- and that sort of simplifies thing. We can just think of a sphere rotating in space and all the possible ways that can happen. And that, then, corresponds to rotation of space. When we talk about attitude, we basically mean orientation. We mean orientation relative to some standard. So for example, we might have models as we had when we're talking about the patents, except now in 3D. And they'll be in some preferred coordinate system. And when we talk about locating that object, we're talking about finding its position in space and finding its rotation relative to that reference rotation. So by the way, this is going to be on Stellar, this whole presentation. Then there's a four-page blurb that summarizes everything you want to know about quaternions. Or maybe not. Maybe everything you need for this course. Degrees of freedom. So we talked about this last time, that there happened to be 3 degrees of freedom to rotation, and that can be slightly confusing. What's interesting is that rotational velocity is much easier. Angular velocity-- all we need is a vector that gives us the axis and a rate, so many radians per second. And if we take the axis and multiply it by the rate, we have a 3 vector. And those 3 vectors add, just like translations. 
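The vector addition of angular velocities, and the cross product that gives point velocities (Poisson's formula, which the lecture turns to next), can be sketched in numpy. The `skew` helper anticipates the skew-symmetric matrix isomorphism discussed shortly; the names are mine.

```python
import numpy as np

def skew(a):
    """3x3 skew-symmetric matrix [a]_x such that [a]_x @ b == np.cross(a, b)."""
    return np.array([[0, -a[2], a[1]],
                     [a[2], 0, -a[0]],
                     [-a[1], a[0], 0]])

# Angular velocities add like vectors: 1 rad/s about x plus 1 rad/s
# about y is a spin about the (1, 1, 0) axis.
omega = np.array([1.0, 0.0, 0.0]) + np.array([0.0, 1.0, 0.0])

# Poisson's formula v = omega x r, written both ways:
r = np.array([0.0, 0.0, 2.0])
v = np.cross(omega, r)
print(v)                                # [ 2. -2.  0.]
print(np.allclose(skew(omega) @ r, v))  # True: matrix form agrees
print(np.dot(v, omega), np.dot(v, r))   # both 0: perpendicular to omega and r
```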
So if you're spinning about the x-axis at 1 radian per second and spinning about the y-axis at 1 radian per second, then you're spinning about the 1, 1, 0 axis. So it's very easy to just add these up. And of course, that can change. So now, you've moved a little bit. And at the next time instant, the axis will be different. But instantaneously, rotational velocity is very easy. And this, for those of you who know about these things, corresponds to the Lie algebra versus the Lie group. The rotational velocities are the Lie algebra. It has all the nice properties of an algebra. The rotations, which are sort of the integral of that, don't. They don't have quite those nice properties. Poisson's formula tells you what the velocity is of some point when there's a known rotational velocity. So omega is that vector, the direction of which is the axis of rotation. And the magnitude of it is the rate of rotation. And then if we have some point, r, we just take that cross product to figure out what the velocity is. And obviously, it's going to be perpendicular to r. And it's going to be perpendicular to omega. And you can get its direction by using the right-hand rule. So rotational velocity is simpler. Rotational velocities add. And we just saw that finite rotations don't commute. And we talked last time about how there are n times n minus 1 over 2 degrees of freedom. And for n equals 3, that happens to give you 3. So often we talk about rotation about axes like I just did-- rotation about the x-axis, rotation about the y-axis. And if you follow that view, then you might think that in n dimensions, rotation is n dimensional-- which it isn't. And therefore, it is better to think about rotation as preserving certain planes. So for example, we can preserve the xy plane. We can rotate in such a way that the xy plane is not changed. Things in the xy plane are moved into the xy plane somewhere else-- yz plane and zx plane-- also, 3. 
But that one works, because in 2D, we only have the xy plane, 1 degree of freedom. And in 4D, we have six combinations. And that's the number of degrees of freedom of rotation in 4D. So before we go further, here's an interesting thing that we will exploit. And also, it's a kind of preview of a method we'll use, which is cross products. Cross products, as I mentioned, are kind of confusing, because you take the product of two vectors. And you get another vector, which isn't quite right. But it happens to have 3 degrees of freedom. So we represent it as a vector. And again, that's a coincidence. What is it? Well, it's perpendicular to the two vectors you started with. And so in higher dimensional space, if you take-- if you say, what are the things that are perpendicular to two vectors, well, they're not going to be a vector. There's going to be a whole subspace. So the generalization of a cross product would be something more complicated. Like, one way to generalize it in an n-dimensional space, is it's that vector that's perpendicular to n minus 1 other vectors. Well, here, n minus 1 happens to be 2. So that's the way we get our cross product. And it's actually very convenient for certain operations, particularly, matrix vector operations, to represent the cross product in this way, where we take the skew-symmetric matrix. And we can represent A cross B by multiplying that skew-symmetric matrix by B. And it has the right degrees of freedom, right, because A has 3 degrees of freedom. A skew-symmetric matrix has 0 on the diagonal. And the off-diagonal is symmetric. So you again have 3 degrees of freedom. So that seems like a reasonable isomorphism. And then once you've done that, you've gotten rid of this weird thing, the cross product. And you have something that you can manipulate in combination with other vectors and matrices. Now, of course, why did we pick A to expand as the matrix? 
We could equally well have picked B and have written it this way, equally valid. And you'll notice this matrix is a transpose or negated version of that matrix. So the matrix for B-- so it's order dependent. And we know that, because cross products don't commute. So there are loads of representations for rotation, which tells you that there's a problem. With translation, we just have xyz, be done with it. So one notation that's very useful is axis and angle. So we have an axis, which can be given as a unit vector. And we have a number of degrees that we rotate or radians that we rotate through. So that's 3 degrees of freedom, because the unit vector only has 2, and plus the angle gives us 3. Now, sometimes we can derive other notations from that. One of them is the Gibbs vector, which tries to combine those two into a single thing. So this is sort of inconvenient in a way, because you've got a 3 vector, which happens to be a unit vector. And then you have this thing. So the Gibbs vector is a vector-- and it turns out that you don't want to just multiply by theta. That doesn't give you nice algebraic properties. It's much better-- plus theta wraps around 2 pi. So that doesn't really make much sense anyway. And Gibbs found out that you can get some useful properties by taking tan of theta over 2. So now, you have a vector. It's no longer a unit vector. So it has three components. So it has the right number of degrees of freedom. Only bad thing is it blows up at theta equals pi, right, because tan of pi over 2 is infinite. And theta equals pi-- so is that a problem? Well, rotating through 180 degrees, that's a perfectly reasonable thing to do. So it's not like we want to exclude that, because it is a perfectly reasonable thing to do. So that is a problem. And it's very similar to y equals mx plus c when the line happens to be vertical, parallel to the y-axis. Then we have Euler angles. 
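The Gibbs vector and its singularity at 180 degrees are easy to see numerically; a sketch of my own:

```python
import numpy as np

def gibbs(axis, theta):
    """Gibbs vector: unit axis scaled by tan(theta/2).  Three numbers,
    three degrees of freedom -- but it blows up at theta = pi."""
    return np.asarray(axis) * np.tan(theta / 2)

z = np.array([0.0, 0.0, 1.0])
g = gibbs(z, np.pi / 2)               # 90 degree rotation about z
print(g)                              # [0. 0. 1.]  since tan(pi/4) = 1

# Axis and angle can be recovered from the Gibbs vector:
theta = 2 * np.arctan(np.linalg.norm(g))
print(np.isclose(theta, np.pi / 2))   # True

# The singularity: as theta approaches pi, the vector's length diverges.
print(np.linalg.norm(gibbs(z, 0.999 * np.pi)))  # already in the hundreds
```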
And there's a classic text in mechanical engineering by Goldstein, which explains about Euler angles. Basically, you rotate about x. You rotate about y. You rotate about z. If you're in an airplane, you have roll, which is rotation about the long axis of the plane. Then you have pitch, which is going up or down. And then you have yaw, which is going left and right. So it's very convenient to think about that. But as you know, the result will depend on the order. So yeah, if you have very small angles of roll, pitch, and yaw, which is a comfortable airplane flight, it probably doesn't matter a lot, because you get pretty much the same result if you compose them in different orders. But if you have large angles, like 90 degrees, it's definitely not going to work. So that means if you define Euler angles, you've got to do several things. One of them is you have to define the order of operations. Does the x-axis come first and so on-- plus whether the later rotations are about the new axes after you rotate or about the original axes. Anyway, as a result of this, as Goldstein points out, there are 24 different definitions for Euler angles. And there hasn't been an international meeting to say, thou shalt use this one. So it's pretty confusing. Then we have, of course, orthonormal matrices, which I'm sure you're all very familiar with. And they have these two constraints on them. And then this one is a kind of exotic one. This represents a rotation matrix in an exponential form. So what does this mean? Well, here this capital omega is that 3 by 3 skew-symmetric matrix that you get for cross product. And so what does it mean to have exponential with a matrix in the exponent? Well, you just use the formula for e to the x, except now, x is the matrix instead of a scalar. So you're going to end up with 1 plus x plus 1/2 x squared and so on, where x now is a matrix. So you can make this work. But we won't pursue this. And the others-- so here's another one. 
Stereography-- so there are lots of projections of the sphere. And there's a whole cottage industry dating back hundreds of years, because people found it necessary to take the spherical or almost spherical Earth and map it onto a plane. So they could sell maps. And so there are lots of these. One of them is conformal-- that is, it preserves angles. And it's called stereography. And as we said, a rotation of a sphere induces a rotation of the space. So we can take a sphere, map it onto the plane, then mush things around in the plane, and map them back onto the sphere. And if we mush things around just the right way in the plane, then we actually have induced the rotation on the sphere. We'll look at a picture of that. Again, we're not going to use that. But it's kind of amusing. The thing you do in the plane is to treat the plane as a complex plane and do some homogeneous transformation on that. And then, of course, physicists have to invent their own stuff. So there's a type of 2 by 2 complex matrix that can be used to represent rotation that Pauli came up with, the same man who's famous for saying, "that paper isn't even wrong," as a criticism of one of his colleagues. So 2 by 2 complex matrices-- that sounds like 8 degrees of freedom. Well, they're not just any. They have to be Hermitian. And they have to be unitary. So it turns out that after you do that, there are only 3 degrees of freedom left. Then we get to Euler parameters, sometimes also called Rodrigues parameters. And then we get to unit quaternions. Now, of course, there's a relation between all of them. And in fact, it's important to be able to convert between them. And we'll do some of that. And it's a little bit like many other problems, where you're trying to solve something. And it turns out, if you have the right representation, the answer is easy. And so often, the problem, then, ends up being the conversion. 
And so then you find that, well, the conversion is-- just like in the trivial case, we could have polar coordinates versus Cartesian coordinates. And maybe the answer in polar coordinates is trivial, but it's not in Cartesian coordinates. And then you're left with a conversion. Well, in that case, the conversion is not too hard. But here, sometimes it is. So let's start with that. This is Rodrigues. Rodrigues was a banker in Paris. And apparently, he found banking a fairly boring profession, because he'd spend his nights doing math. And so this is one of his results. Basically, the operations we're going to want to do are two. One operation is take a vector and rotate it, or find out what that vector is in a rotated coordinate system-- an equivalent problem. The other one is composing rotations. We've rotated a little bit in this-- around this axis. Now, we're going to rotate about another axis. What is the overall result? And that turns out to be non-trivial. So this, here, is addressing that first problem. So we have a vector, r. And we have an axis of rotation that's here vertical. And we're going to rotate through an angle theta about that axis. And that will take us to a new position, r prime. And the question is, how do you compute r prime? Well, if we first convert it to a 2D problem in this plane indicated by this ellipse, then it's very easy, because we just take this vector and that vector and combine them in a weighted fashion. So this dotted vector is this vector times cosine theta plus this vector times sine theta. So you can see how, when theta is 0, the dotted vector will be equal to this one. And when theta is equal to pi over 2, it'll be equal to this vector. And all the way along, it retains its length and so on. So in 2D, the rotation as we know it is very simple. And like mathematicians often want to do, we reduce a complicated problem to a simpler one we already know how to solve. But that's in this plane. 
And so we first have to figure out what these two vectors are that we are combining using cosine and sine. And so first of all, this one here-- so how do we get that one? Well, we take r. And we subtract out this vector, right? And the difference is going to be this vector here. Well, that means that we need to know what this vector is. Well, that's just projecting r onto omega. And since omega is a unit vector, it's very easy-- there is a formula. And so therefore, this vector is r minus that thing. And then finally, we need to know what this vector is. And basically, that vector has to be perpendicular to omega. And it has to be perpendicular to this, because in this plane, it's pi over 2 away from that vector, right? So by taking the cross product of-- let's see-- this thing here with this thing here, we end up with, after simplification, just omega cross r. And it makes sense, because it's going to be perpendicular to r. And it's going to be perpendicular to omega. Anyway, you put all of those things together. You get this formula. So r prime, the vector after rotation, is cosine theta times r, the vector before rotation, plus these terms. And you can see that rotation is not quite that trivial. So anyway-- so axis and angle is a useful notation. And it's often a good way to visualize things. It's easy to say, OK, we're rotating about the x-axis through 90 degrees-- easy to understand what it is. What's a disadvantage? The disadvantage is composition. If you have two rotations given as axis and angle, how do you combine them into a single one? We know by Euler's theorem, there's a single rotation that represents the combination. But how do you combine them? Well, it turns out what I would do is convert both of these to some other form, like orthonormal matrices. Multiply the orthonormal matrices. Go back to this form. So that's a disadvantage. You can rotate vectors-- that's the one operation we want-- but it's hard to do composition of rotations. 
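The Rodrigues formula just derived can be written out and sanity-checked; a numpy sketch of my own, following the lecture's decomposition into a component along the axis and two components in the rotation plane:

```python
import numpy as np

def rodrigues(r, omega, theta):
    """Rotate vector r about unit axis omega by angle theta:
    r' = cos(t) r + sin(t) (omega x r) + (1 - cos(t)) (omega . r) omega."""
    return (np.cos(theta) * r
            + np.sin(theta) * np.cross(omega, r)
            + (1 - np.cos(theta)) * np.dot(omega, r) * omega)

# Sanity checks: a 90 degree turn about z sends x to y, and length is kept.
x = np.array([1.0, 0.0, 0.0])
z = np.array([0.0, 0.0, 1.0])
r90 = rodrigues(x, z, np.pi / 2)
print(np.allclose(r90, [0, 1, 0]))   # True
print(np.isclose(np.linalg.norm(rodrigues(np.array([1.0, 2.0, 2.0]), z, 0.7)),
                 3.0))               # True: rotation preserves length
```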
So OK, well, we'll skip this. This is the exponential-- the algebra view of things. And that, too-- let's forget that. I just wanted to point this one, which we mentioned. So here, the idea is that here's our sphere that we're going to rotate. And we project it onto a plane, which happens to be tangent at the North Pole. And this projection-- if your center of projection is the South Pole-- so you go from the South Pole to a point on the surface and that-- then extend that line until it hits that plane. And you will see a couple of things. One of them is you can map the whole sphere onto a plane-- well, except the South Pole, right? The other one you can see is areas are going to be distorted, right, because if we're dealing with, say, Alaska up here, it's going to be mapped onto the plane without much magnification. But if we're looking at Australia, it's going to come out way over here. It's going to be huge. So this particular map projection is actually used for the polar regions, because most of the other map projections don't work very well there. But it does have that feature. And why is it liked so much? Well, because it doesn't distort shapes. So it preserves angles. Most-- all other projections have the property that they will distort shapes. They will distort areas. You have your choice. You can pick projections that preserve area. You can pick projections that preserve shape angles. But you can't do both. OK, so we start off-- we take our sphere. We projected it onto this map-- this plane, which is the complex plane. Then in that complex plane, we do this operation. So z is a complex variable coordinate in this plane. And z prime is its new position. So and a, b, c, d are complex numbers. So everything in that plane is going to get moved around. And now, we project it back on the sphere. And the amazing thing is that the pattern on the sphere is just the rotated version of what it was before. 
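The simplest instance of that correspondence is easy to verify: a rotation of the sphere about the projection axis shows up in the complex plane as multiplication by e to the i theta. A sketch of my own (general rotations give full fractional linear transformations, which I do not attempt here):

```python
import numpy as np

def stereo(p):
    """Project unit-sphere point p from the south pole (0, 0, -1) onto
    the plane z = 1 tangent at the north pole; return a complex coordinate."""
    x, y, z = p
    return 2 * (x + 1j * y) / (1 + z)

theta = 0.9
p = np.array([0.6, 0.0, 0.8])              # a point on the unit sphere
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1]])

# Rotating the sphere about z, then projecting, equals projecting and
# then multiplying the complex coordinate by e^{i theta}.
print(np.isclose(stereo(Rz @ p), np.exp(1j * theta) * stereo(p)))  # True
```

The 1 + z in the denominator is also where the area distortion lives: points near the south pole map far out in the plane, which is the Australia-versus-Alaska effect described above.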
And therefore, we can think of rotation in 3 space, instead, in terms of some operation in the complex plane, which is pretty amazing. So that's yet another representation. OK, so what are the desirable properties? We already mentioned some of them. We need to be able to rotate vectors or equivalently rotate coordinate systems. But also, we need to be able to compose rotations. We would like to have an intuitive representation. Like, axis and angle is sort of intuitive. It's easy to understand what that is, whereas if you look at the rotation matrix, you look at the numbers-- what do they mean? I don't know. You multiply a vector and see what happens. But the numbers don't directly mean anything. If you know a little bit more about it, you say, well, if I look at the trace, the sum of diagonal elements, it's 1 plus 2 times the cosine of the angle of rotation. But it's not very intuitive. Then is it redundant? Well, orthonormal matrices are a prime example of something that's definitely redundant, because we have 9 numbers to represent three things. Yeah. Right, that's a very good point. So so much for my story about it not being intuitive-- well, if you think about rotating the x-axis, i.e. 1, 0, 0-- multiply the matrix by that-- you get the first column. So as he's pointing out, the first column is a significant vector. It's what happens to the x-axis. Second column is what happens to the y-axis. Third column is what happens to the z-axis. So you can understand the rotation matrix that way. And similarly, you can talk about the rows going in the other direction. But you wouldn't know the axis of rotation or the angle of rotation. So but yes, that's a good point. OK, redundant-- so let's see. Axis and angle-- well, axis and angle is kind of redundant, because if we have a vector, it has three components plus an angle is four. Gibbs vector is not redundant, because we've multiplied the vector by the tangent of theta over 2. 
But we don't want singularities. Gibbs vector has a singularity. So we'd also like it to be computationally efficient. And we'll get into that a little bit. And we want to be able to do lots of things, for example, interpolate orientation. So in graphics, we might often have some person performing some dance or action in the world. And we would like the artist to only have to specify certain distinct points rather than every frame. And so we'll need to interpolate. Well, if that person is rotating, how do we interpolate? How do you do a partial rotation? And so you might say, well, half a rotation-- that's the square root of the rotation matrix, right, because if you multiply that by itself, you get the rotation matrix. But how do you get the square root of a matrix? So that's something. Another thing we might want to do is get averages over ranges of rotation. For us, a big deal is going to be optimization. We're going to try and fit coordinate systems. We're going to try and fit measurements to models. And in that case, we need to be able to do things like take a derivative and set the result equal to 0. Well, the derivative with respect to the rotation matrix-- that doesn't make a lot of sense. What is it? Well, something with nine components. But how do you enforce the constraints? Then in some cases where we can't get a closed form solution, we might want to sample that space and try some subset. But we'd like to sample the space in an efficient way, which means we want to sample it uniformly. So if it was a translation space, we'd just chop up space into voxels equally spaced. And we'd be done. Or for random sampling, we could just generate a random x in a certain range and a random y in a certain range and a z in a certain range. But how do you do that for rotations? Now, if you did it in Euler angles, you could say, OK, well, I'll take the first angle between minus 90 and plus 90 and then the second angle and so on. 
And you get into non-uniform sampling of the space. For example, a much easier example is on the sphere, we can use latitude and longitude as coordinates. But if you use that to control your sampling, you're going to sample much more closely near the two poles, right? So that's one-- add one more dimension, and then you get the problem for rotations. So the idea of having a space of rotations is very attractive, because then we can tessellate that space. We can sample it uniformly or randomly. We can compute averages or whatever. So here are some of the problems we already talked about. The orthonormal matrices-- they're redundant. And the constraints are complicated. Euler angles don't make it easy to compose rotations, just as with angle and axis notation. Best thing you can do is convert it to orthonormal matrix and multiply. And then there's gimbal lock. So I know you're all too young for this. But there was a time when man went to the moon, which was a very exciting time. And when you listened on television and you ignored the stupid commentators that were talking over the communication from the cabin, you'd hear them talk about approaching gimbal lock. So what was that? Well, they-- Draper Lab built their navigation system. And it had gyroscopes in it. And gyroscopes basically have three-- that type of gyroscope has three axes. And it has a part that is spinning at high speed and wants to continue spinning around that axis. And then the cage basically allows the spacecraft to rotate about that fixed direction. But as with Euler angles, you can get to a situation, where two of the axes line up. And then you've lost a degree of freedom. And if you move through that point, you've basically lost your orientation in space. So they were carefully instructed not to go through that point, because their automated systems wouldn't work correctly. 
And so that's exactly the problem with Euler angles, that when you rotate one of them through 90 degrees, it lines up with the next one. And you've lost a degree of freedom. And so by the way, how do you solve it? Well, the easiest way is to have four axes. And in that case, unfortunately, you can no longer treat it as a passive system. You have to have an active system that drives the axes so that you never have two of them line up. Anyway, Euler angles-- Gibbs vector had that singularity. Axis and angle has the problem of composing rotations. And then of course, we don't have a clear idea of what is the space of rotations. So we get to Hamilton, who lived in Dublin, Ireland. And he was fascinated by algebraic couples. So the view is that we think of complex numbers as pairs of reals. And can we somehow generalize that? And you want to have a nice algebra. You want to be able to add, subtract, multiply, and very important, you want to be able to divide. I mean, you can do all sorts of wonderful things. But if you can do that, then you can solve equations by subtracting, cross multiplying-- doing all those things we're used to. And so there was a lot of interest in physics, particularly in dealing with vectors in space. Well, they weren't called vectors, but dealing with points in space. And so the natural thing was, let's go from real numbers to complex numbers to triples. And then what can we do with the triples? Well, you can multiply them. And of course, the dot product's not much use for this purpose, because it's a scalar. The cross product? Well, the trouble is, if you've got c equals a cross b, can you say a equals c divided by b? No. So that was his problem. And it really puzzled him for quite a while. And he trained his children so that, every morning when he came down for breakfast, they would ask him, papa, can you multiply triplets yet? Triplets being what we later call vectors. And then on this fateful day, he got it. 
And he went with his wife on his usual Sunday stroll through Dublin. And he committed a criminal act on the bridge. He engraved in graffiti the basic equation you need to solve this problem without explanation, just the formula. And then later when he described it, he's saying, "And here, there dawned on me the notion that we must admit, in some sense, a fourth dimension of space for the purpose of calculating with triples. An electric circuit seemed to close, and a spark flashed forth." Remember, it was 1843. So there's a lot of excitement about electric stuff but long before Tesla and Edison. And the key thing was that he decided you can't do it with three. There's no way to do it with three. But you can do it with four. And so then he wrote a book, which is basically incomprehensible. It's like 800 pages of heavy math. But we'll simplify it a lot. So the insight was, you can't do it with three components. And then he took his cue from complex numbers, where we introduce a, quote, "imaginary" number. And he said, well, maybe if we have more of them, maybe there are other things that, when you square them, become minus 1. And so that was his key insight. He used the notation ijk, such that i squared and j squared and k squared are minus 1. And then very importantly, he added this ijk equals minus 1. And this is what he engraved in the bridge. And it's still there, although heavily worn by mathematicians touching it to see if they can get the insight from it-- anyway, from those you can get everything else. For example, if you want to know what ij is, well, you just multiply ijk equals minus 1 by k. On the left, k times k is k squared, which is minus 1. So minus ij equals minus k, and ij equals k-- and so on. So these others are all subsidiary. He didn't bother engraving them. He just took the-- importantly, multiplication is not commutative. So ij is k; ji is minus k. And that sort of corresponds to cross products. So this kind of suggests a connection with vectors. 
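Those engraved relations can be checked mechanically. A sketch of my own, using the scalar-plus-vector product formula that the lecture writes down shortly; quaternions here are plain (w, x, y, z) 4-vectors.

```python
import numpy as np

def qmul(p, q):
    """Hamilton product of quaternions given as (w, x, y, z) 4-vectors."""
    pw, pv = p[0], p[1:]
    qw, qv = q[0], q[1:]
    return np.concatenate(([pw * qw - pv @ qv],
                           pw * qv + qw * pv + np.cross(pv, qv)))

one = np.array([1., 0., 0., 0.])
i = np.array([0., 1., 0., 0.])
j = np.array([0., 0., 1., 0.])
k = np.array([0., 0., 0., 1.])

print(np.allclose(qmul(i, i), -one))           # i^2 = -1
print(np.allclose(qmul(qmul(i, j), k), -one))  # ijk = -1, the bridge formula
print(np.allclose(qmul(i, j),  k))             # ij = k
print(np.allclose(qmul(j, i), -k))             # ji = -k: not commutative
```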
And so there are lots of different ways of thinking about quaternions. One is just simply as a real part and three different flavors of imaginary parts, rather than just one imaginary part. But then you can take these three and think of them as a vector in space and treat the whole thing as a scalar, q0, plus this vector and use this notation, which has a scalar, comma, vector part. Or you can just write it as a four-vector, a thing with four components. So I use this notation with a little circle over it to denote a quaternion. It's not standard. So don't get confused by that. But when I'm writing stuff, it's very important for me to distinguish the scalar, the vector, and the quaternion. So I use the little circle to denote that. Another way of thinking about quaternions is as 4 by 4 matrices. And that brings us back to that isomorphism we had that was useful for cross products. There would be an isomorphism of quaternions with 4 by 4 matrices that allows us to do multiplication. And yet another way is to think of a complex composite of two complex numbers. So complex numbers are a real plus i times a real. Now, imagine that you replace the reals with complex numbers. You've got something with four components, because you've got to be careful, because you've got two different types of imaginary things. That's actually a way we aren't going to use. But it's a way that leads you to other things. For example, you can use that idea again. And now, you get, instead of four components, eight. And then again, you can get 16 components. So you can build these algebras that have 1, 2, 4, 8, 16 components. And they get weirder and weirder. We're going to stick with four. That's as far as we need to go. OK, so here's the view of multiplication using Hamilton's basic explanation of how to multiply these things. Here are two quaternions. You're going to multiply them. And obviously, we're going to get 16 terms. And then you apply rules like ij is k, k times j is minus i, and so on.
And so we can gather them up as, again, a scalar part and then these three imaginary parts. And that's the bottom baseline truth. But it's kind of too detailed. It's kind of hard to keep-- remember that. And it's, for us, who now-- remember this was before vectors. But today we're all into vectors and scalars. So in that notation, it's a lot easier to write it this way. So here's a quaternion p, a quaternion q. We multiply them together. And we get this quaternion. And the scalar part of it is just the scalar parts multiplied minus the dot product of the vector parts and so on. And it's non-commutative. Why? Because it includes this cross product at the end here. So if we interchange these two, we're going to get q cross p, which is, of course, minus p cross q. OK, so this is actually the way we're going to use it. It's much more compact than this. Here is another notation that's sometimes very useful. And this is analogous to the isomorphism we had between vectors and the skew-symmetric 3 by 3 matrices. Here, we have an isomorphism between a quaternion and a 4 by 4 matrix, which happens to be orthogonal. And if the quaternion is a unit quaternion, it's also orthonormal. So this, actually, in our case is typically an orthonormal 4 by 4 matrix. And so this multiplication up here can be written as this product of a matrix times a vector. And there's quite a bit of structure to this. You can see the first column is just the quaternion p. And then you can see that it's sort of skew-symmetric, except for the diagonal, which would be 0 if it was skew-symmetric. And so we can write the product of two quaternions in that form. So we can write pq-- so this is just like cross product, where we said a cross b is some matrix times b. Similarly here, pq can be written as this matrix times q. And the matrix looks like this. And the matrix is orthogonal. And it's orthonormal if p is a unit quaternion. And if we get rid of p0, then it's skew-symmetric.
And again, just as with cross products, we have the choice of either expanding the first part into a matrix or the second part. So here's the other version, where now, we've turned q into a 4 by 4 matrix. And this 4 by 4 matrix looks a lot like this one. The first column is pretty much the same. The first row is pretty much the same. What's different is this 3 by 3 submatrix is flipped. It's the transpose. So they're very similar. And this corresponds, again, to the fact that multiplication is non-commutative. If those two pieces were the same, then this whole operation would have been commutative. So now that we've got the basics and different ways of representing it, it's easy to prove some of these basic results. So first of all, it's clear that it's not commutative. And that's why we're talking about it. We wouldn't be talking about it if it was commutative, because then there is no way that it can represent rotation. It's associative, and that's not too hard to prove. Just use the formula for multiplication. It's just a bit of messy algebra. Then we define a conjugate. And the conjugate in analogy with complex numbers just means you negate the imaginary part. And then the next result you can get is that the conjugate of a product is the product of the conjugates in reverse order, which is just like matrix multiplication-- AB all transposed is B transpose A transpose. So there are some analogies here. And if you're into physics, you realize that different heroes of physics had different ways of approaching quantum physics. And some of them like to do it with the complex number view. And some of them like to do it with a matrix view. And you can find that they're kind of equivalent. Yeah, so I've done a little of that. And I'm going to do more of it. And the main reason is that we can get a closed-form solution, because we can differentiate with respect to a quaternion. We can't differentiate with respect to an orthonormal matrix. So that's kind of the main reason.
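The scalar/vector form of the product, and the basic results just stated-- non-commutative, associative, conjugate of a product reverses order-- can all be verified in a few lines. A sketch, with my own helper names, checking each property numerically:

```python
# Quaternion as (scalar, (vx, vy, vz)); the product in scalar/vector form:
#   p q = (p0 q0 - p . q,  p0 q + q0 p + p x q)

def dot(a, b):
    return sum(x*y for x, y in zip(a, b))

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def qmul(p, q):
    (p0, pv), (q0, qv) = p, q
    cx = cross(pv, qv)
    return (p0*q0 - dot(pv, qv),
            tuple(p0*qv[m] + q0*pv[m] + cx[m] for m in range(3)))

def conj(q):
    return (q[0], tuple(-c for c in q[1]))

def close(a, b, tol=1e-12):
    return abs(a[0] - b[0]) < tol and all(abs(x - y) < tol for x, y in zip(a[1], b[1]))

p = (1.0, (2.0, -1.0, 0.5))
q = (0.5, (1.0, 3.0, -2.0))
r = (2.0, (0.0, 1.0, 1.0))

assert not close(qmul(p, q), qmul(q, p))                  # non-commutative: p x q flips sign
assert close(qmul(qmul(p, q), r), qmul(p, qmul(q, r)))    # but associative
assert close(conj(qmul(p, q)), qmul(conj(q), conj(p)))    # (p q)* = q* p*, reverse order
```
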
But there are these subsidiary reasons, like if you can't get a closed-form solution, you can do a search in the space of rotations. And you can sample that space efficiently, either uniformly or randomly once you've got the notion of a space of rotation, which is the case with quaternions. OK, so a few other dot product-- well, the dot product, if we think of it as a four-vector, it's just the ordinary dot product of a four-vector. If we think of it in terms of scalar and vector, well, then it's the product of the scalars plus the dot product of the vectors. But underneath, it's just q0 squared plus qx squared plus-- sorry-- p0 q0 plus px qx, et cetera. And therefore, there's a norm. We can define a norm. And this is perhaps the most important. If we multiply q by its conjugate, it turns out, we get a real quantity. There's no imaginary part. And it's this. It's the dot product of q with itself times e. Well, e is this quaternion that has a scalar-- it's a scalar 1. And I could have just written 1 there. But that looks sort of funny. So I invented this symbol, which is a quaternion that has no vector part. And why is this important? Well, if you can do that, then you have division. So the multiplicative inverse is this. And that was the problem with the triplets. There was no way of defining an inverse. And here, we can define an inverse, well, except for q equals 0. But then that's always a problem. So and so this means, for example, that we can get the inverse of a rotation very easily. First of all, in the rotation, we're going to be dealing with unit quaternions. So q dot q is 1. So in the case of that unit quaternion representation, the inverse is just the conjugate, sort of like in the case of orthonormal matrices. The inverse is just the transpose. OK, and then there are a few other sort of minor properties. And you can just take those on faith. Or you can check them by doing the multiplications. 
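That q times its own conjugate is purely real is what buys division, and it's worth seeing concretely. A minimal sketch (helper names are mine), checking q q* = (q . q) e and that the inverse really inverts:

```python
def qmul(p, q):
    p0, p1, p2, p3 = p
    q0, q1, q2, q3 = q
    return (p0*q0 - p1*q1 - p2*q2 - p3*q3,
            p0*q1 + p1*q0 + p2*q3 - p3*q2,
            p0*q2 - p1*q3 + p2*q0 + p3*q1,
            p0*q3 + p1*q2 - p2*q1 + p3*q0)

def conj(q):
    return (q[0], -q[1], -q[2], -q[3])

def norm2(q):
    return sum(c*c for c in q)

q = (1.0, 2.0, -1.0, 0.5)

# q q* is purely real: (q . q) times the unit quaternion e = (1, 0, 0, 0)
qqc = qmul(q, conj(q))
assert abs(qqc[0] - norm2(q)) < 1e-12
assert max(abs(c) for c in qqc[1:]) < 1e-12

# so the multiplicative inverse is the conjugate over the norm squared
def qinv(q):
    n = norm2(q)
    return tuple(c / n for c in conj(q))

prod = qmul(q, qinv(q))
assert abs(prod[0] - 1.0) < 1e-12 and max(abs(c) for c in prod[1:]) < 1e-12
```

For a unit quaternion the denominator is 1, so the inverse is just the conjugate, as the lecture notes.
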
So the dot product of products is the product of dot products. Not at all surprising. And then this first line is the special case of the second line, which is more useful in calculations. And then this one's pretty handy. So what's happened here? Well, sometimes we have a quaternion on one side. We'd like to move it to the other side of a dot product. And it turns out we can do that as long as we conjugate it. So we've moved the q from the left side of the dot to the right side of the dot and just conjugated it. And it's easy to prove that. We just multiply by the q conjugate, and so on. So we're going to use unit quaternions to represent rotation. And we'll see right now how that's related to the other notations for rotation, but we're dealing with vectors. So what about vectors? Well, we'll need to have a quaternion way of representing vectors. And of course, it's obvious. We just leave out the scalar part. Conversely, if we wanted to represent the scalar, we could just leave out the vector part. OK, so we're going to convert our vectors in 3D to these funny quaternions, purely imaginary quaternions, manipulate them in that space, and then bring them back into 3D. For this special case only of these types of quaternions, there's lots of interesting properties. First of all, the conjugate, of course, is just negating it, because conjugate means negating the imaginary part. The dot product is just the dot product. So this is the dot product of the quaternions. This is the dot product of the corresponding vectors. And since the scalar part is 0, it's obvious that this is the case. So we can easily compute dot products. If we multiply two of these special quaternions, we get this funny thing. It has the dot product in it and the cross product. And this is why some critics of this notation called it a hermaphrodite monster. It's got both sexes built into it, dot products and cross products. And they thought this was a disadvantage.
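Both of those properties-- moving a quaternion across a dot product at the cost of a conjugation, and the "hermaphrodite" product of two purely imaginary quaternions-- check out numerically. A sketch with my own names:

```python
def qmul(p, q):
    p0, p1, p2, p3 = p
    q0, q1, q2, q3 = q
    return (p0*q0 - p1*q1 - p2*q2 - p3*q3,
            p0*q1 + p1*q0 + p2*q3 - p3*q2,
            p0*q2 - p1*q3 + p2*q0 + p3*q1,
            p0*q3 + p1*q2 - p2*q1 + p3*q0)

def conj(q):
    return (q[0], -q[1], -q[2], -q[3])

def qdot(a, b):
    return sum(x*y for x, y in zip(a, b))

def vec2q(v):
    return (0.0,) + tuple(v)   # purely imaginary quaternion for a 3D vector

# product of two imaginary quaternions: scalar part -(r . s), vector part r x s
r, s = (1.0, 2.0, 3.0), (-1.0, 0.5, 2.0)
rs = qmul(vec2q(r), vec2q(s))
assert abs(rs[0] + 6.0) < 1e-12       # -(r . s) = -(-1 + 1 + 6)
assert all(abs(a - b) < 1e-12 for a, b in zip(rs[1:], (2.5, -5.0, 2.5)))  # r x s

# moving a factor across the dot costs a conjugation: (q p) . t = p . (q* t)
q = (0.5, 1.0, -2.0, 0.25)
p = (2.0, 0.0, 1.0, -1.0)
t = (1.0, 1.0, 0.0, 3.0)
assert abs(qdot(qmul(q, p), t) - qdot(p, qmul(conj(q), t))) < 1e-9
```
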
For some purposes, this is actually an advantage. Then if we take the product of r and s and take the dot with t, we get the triple product, rst. And the triple product is often pretty useful, aside from computing volumes or something. And again, this is very simple to prove. Once you have a 0 scalar part, then all you're going to get is the vector part of t dotted with this thing, r cross s. And of course, that's just r cross s dot t. That's the triple product rst and so on. So that's how we represent vectors. OK, scalars-- yeah, we can do that. Finally, we get to rotation. So how do we do rotation? Well, it turns out that a simple product of a quaternion with another quaternion won't do it, because it takes us out of 3D into 4D. We need an operation that takes us back into 3D. And so this is a little bit like you can do rotation in the plane by doing some operation that takes you out into 3-space. But then you have to have another part of the operation that takes you back into the plane, now, in a rotated form. So this is like that. If we multiply two quaternions, we're basically doing an operation of rotation in 4-space. And now, we need to find an operation that doesn't undo that but takes us back into 3-space. And this is exactly what happens here. So we take our vector, r, turn it into a quaternion with 0 scalar part. Then we pre-multiply by q and post-multiply by q conjugate. And magically, we're back in the real world so that this quaternion, r prime, has a 0 scalar part. And so this is how we're going to do rotation. Now, there are many different ways of analyzing this. One is to use that trick we had of representing quaternion multiplication as multiplication of 4 by 4 matrices and the four-vector, right? So we take these first two and turn them into that. And then we take the second multiplication. And this time, we turn the q conjugate into a matrix, this matrix over here. OK, so this operation is actually equivalent to that operation. And what is this?
This is a 4 by 4 matrix. It's a product of two 4 by 4 matrices. And if you multiply them out, you get this. And this-- the most interesting property of this is that the first row and the first column are mostly zeros. And the result is that if you have a vector in 3D, which has a zero scalar part, and multiply it by this, there'll be a 0 scalar part in the result, right, because you're multiplying this by 0. And then you're adding all of these zeros multiplied by whatever. And you get 0. So all this has proved so far is that this operation will, in fact, get you back into 3D. And this, by the way-- this submatrix actually is our 3 by 3 orthonormal rotation matrix. OK, so that's the equation we're going to use for rotation. And this is the scalar part. We can compute the scalar part. And if the scalar part of the original vector is 0, we get 0 for the scalar part of the new vector. The vector part can be computed this way. So if you don't want to jump into the ocean of quaternions, you can do everything using scalars and vectors. Here's the formula, right? So just dot products and cross products-- very easy. When we use this operation, we can easily-- well, with a bit of algebra, we can prove that it preserves dot products, right? You just take this formula, apply it to r, then apply it to s, take the dot product. And you get r dot s. And so it preserves dot products. Similarly, it preserves triple products. Remember we had this special form that this multiplication gave us a triple product of the underlying vectors? Well, it preserves triple products. So that's it. It's a rotation, right? It preserves dot products. It preserves triple products. So that's good. We don't know yet what rotation, but we know it has to be a rotation, because it preserves length and angles. And it preserves handedness. And then the other thing we were talking about was not just rotating a vector but composing rotations. We wanted that to be easy.
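Both claims-- that q r q* lands back in 3D with a zero scalar part, and that the surviving 3 by 3 submatrix is the usual orthonormal rotation matrix-- are easy to check. A minimal sketch (names mine), using the standard matrix written out for q = (w, x, y, z):

```python
import math

def qmul(p, q):
    p0, p1, p2, p3 = p
    q0, q1, q2, q3 = q
    return (p0*q0 - p1*q1 - p2*q2 - p3*q3,
            p0*q1 + p1*q0 + p2*q3 - p3*q2,
            p0*q2 - p1*q3 + p2*q0 + p3*q1,
            p0*q3 + p1*q2 - p2*q1 + p3*q0)

def rotate(q, v):
    # pre-multiply by q, post-multiply by q conjugate
    qc = (q[0], -q[1], -q[2], -q[3])
    return qmul(qmul(q, (0.0,) + tuple(v)), qc)

h = math.sqrt(0.5)
q = (h, 0.0, 0.0, h)               # unit quaternion: 90 degrees about z

rp = rotate(q, (1.0, 0.0, 0.0))
assert abs(rp[0]) < 1e-12          # zero scalar part: back in 3D
assert all(abs(a - b) < 1e-12 for a, b in zip(rp[1:], (0.0, 1.0, 0.0)))  # x -> y

# the 3 by 3 submatrix, written out, is the orthonormal rotation matrix
def rotmat(q):
    w, x, y, z = q
    return [[1-2*(y*y+z*z), 2*(x*y-w*z), 2*(x*z+w*y)],
            [2*(x*y+w*z), 1-2*(x*x+z*z), 2*(y*z-w*x)],
            [2*(x*z-w*y), 2*(y*z+w*x), 1-2*(x*x+y*y)]]

R = rotmat(q)
v = (0.3, -0.7, 0.2)
Rv = [sum(R[a][b]*v[b] for b in range(3)) for a in range(3)]
assert all(abs(x - y) < 1e-12 for x, y in zip(Rv, rotate(q, v)[1:]))
```
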
Well, so suppose we first rotate-- we take our vector, r. We rotate it using q. And then we take the result, and we rotate it using p. So we write this. And because these operations are associative, I can rewrite it this way. And that tells you, oh, composition of rotation is just multiplication of quaternions-- it's very easy and actually computationally efficient, as well, which is very different from axis and angle notation or Euler angles, where it's very hard to compose rotations. So then we need to figure out what rotation is it, or if we have specified rotation in some other form, how do we turn it into a quaternion? Well, this is the formula that we just sort of derived on the previous page. And here's Rodrigues' formula from a while back. And then if we identify corresponding pairs, first thing you notice is that q, the vector part, is parallel to omega. So the vector part of the quaternion is in the direction of the axis, which sort of makes sense. And we can actually prove that easily. If you plug q in here for r, what happens? Well, this part drops out, because q cross q is 0. And then this becomes q dot q times q. And this becomes something times q. So the whole thing-- both of these are vectors in the q-direction. So r prime is going to be in the q-direction-- so very easy to show that q becomes q. And that's Euler's axis theorem. So it's very easy to see that the vector part is parallel to the axis of rotation. We don't know yet how long it's supposed to be. And we don't know what the angle is. But anyway, leave out some of this-- conclusion is this is our representation-- our conversion. So if we know axis and angle, we can compute the quaternion. So that's one of the conversions between the-- we have eight different ways of representing it. So there are 8 times 7-- 56 different conversions we could possibly do, but we're not going to do them. So this is one. The other one we saw earlier, where we produce the orthonormal matrix.
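The axis-and-angle conversion and the composition rule can both be exercised in a few lines. A sketch, assuming the half-angle form q = (cos(theta/2), sin(theta/2) omega-hat) from the lecture (function names are mine):

```python
import math

def qmul(p, q):
    p0, p1, p2, p3 = p
    q0, q1, q2, q3 = q
    return (p0*q0 - p1*q1 - p2*q2 - p3*q3,
            p0*q1 + p1*q0 + p2*q3 - p3*q2,
            p0*q2 - p1*q3 + p2*q0 + p3*q1,
            p0*q3 + p1*q2 - p2*q1 + p3*q0)

def rotate(q, v):
    qc = (q[0], -q[1], -q[2], -q[3])
    return qmul(qmul(q, (0.0,) + tuple(v)), qc)[1:]

# axis and angle to quaternion: q = (cos(theta/2), sin(theta/2) omega_hat)
def from_axis_angle(axis, theta):
    n = math.sqrt(sum(a*a for a in axis))
    s = math.sin(theta/2) / n
    return (math.cos(theta/2),) + tuple(s*a for a in axis)

q = from_axis_angle((0.0, 0.0, 1.0), math.pi/2)   # 90 degrees about z
p = from_axis_angle((1.0, 0.0, 0.0), math.pi/2)   # 90 degrees about x

# Euler's axis theorem: the axis itself is left alone by the rotation
assert all(abs(a - b) < 1e-12
           for a, b in zip(rotate(q, (0.0, 0.0, 1.0)), (0.0, 0.0, 1.0)))

# composing: rotating by q and then by p is one rotation by the product p q
v = (1.0, 0.0, 0.0)
step = rotate(p, rotate(q, v))
combined = rotate(qmul(p, q), v)
assert all(abs(a - b) < 1e-12 for a, b in zip(step, combined))
```
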
That's another important one, because we use orthonormal matrices all the time. We need to be able to go back and forth between axis and angle, which is this formula, and between the orthonormal matrix, and that was that previous formula. Now, so there are two things about this. One of them-- this is a unit quaternion. We easily proved you take its dot product with itself. And you get cos squared theta over 2 plus sine squared theta over 2. So that's one thing. So we're not talking about arbitrary quaternions, just unit quaternions. And then the other thing that's a little bit annoying is that minus q is the same rotation as plus q. Why? Well, imagine you plugged minus q into this formula. The minuses just disappear. So what does this mean? It means that if we think of the sphere as representing the space of rotation, opposite points are actually the same rotation. So it's very nice to think of a sphere in four dimensions as the space of rotation. And then we can produce ways of sampling that space and taking integrals and taking-- computing averages and so on. But you have to keep in mind that you really only want half the sphere, because the opposite side, the reflected side represents the same rotations. Antipodal points are identified. So we're going to use this in photogrammetry and in particular, in absolute orientation. We already talked about that. And this is from last lecture. So we got points in space identified, measured in two coordinate systems. We want to know the coordinate transformation between those two systems. And it's dual to the other problem, where we have the same coordinate system. But we have two objects-- or we have one object moving. And we modeled it this way, where coordinates in the left system are rotated, and it's translated to produce coordinates in the right coordinate system. And we want the best-fit rotation and translation, meaning we want to minimize some error.
And what we're given are a bunch of corresponding points measured in the 2 coordinate system-- left-- left and right. And then we talked about this mechanical, this physical analog that we want to make these errors small. And so one way to do it is to have these springs, which have energy proportional to the distance squared. And the energy-- total energy in those springs is the sum of squares of errors. And the system wants to minimize that. So that's the physical model of how it finds the rotation and translation. So then we did this. So this is the error term, right, because if I take the left coordinate and I rotate and translate it, I get this. And that should be equal to the right coordinate. And I subtract them. I get the remaining error. I square that. I add all of those errors. Then first step-- let's find the translation. And we did this last time. One way to do it, which is not the way we did it in class, is just differentiate with respect to the translation, r0. And this notation-- this is the norm, which is really the dot product of this thing with itself. If we differentiate it, we get twice this. And we set that equal to 0. And then we can split. So we have a sum of three terms. We can split that into three sums. So for example-- of course, we get rid of the 2. That's not very interesting. And we get this. And final result was just that the translation is what takes the centroid in the left system after rotation into the centroid of the right system. So that's kind of intuitive and nice. It says that whatever your transformation is, it should map the centroid of this point cloud into the centroid of that point cloud. And one of the nice features of it is that you don't even need correspondences. You just compute the centroid of those clouds. So for this part, you don't need to know which point in the cloud corresponds to which point in the other cloud. And of course, right now we can't get the answer, because we don't know r. 
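The translation step can be sketched on its own. This is a minimal numerical check, with R taken as known for illustration (in the full method it comes from the rotation step, solved next); the helper names are mine:

```python
# the best translation maps the rotated left centroid onto the right centroid:
#   r0 = centroid_right - R centroid_left

def centroid(pts):
    n = len(pts)
    return tuple(sum(p[m] for p in pts) / n for m in range(3))

def apply(R, v):
    return tuple(sum(R[a][b]*v[b] for b in range(3)) for a in range(3))

R = [[0.0, -1.0, 0.0],
     [1.0,  0.0, 0.0],
     [0.0,  0.0, 1.0]]          # 90 degrees about z, assumed known here
t_true = (1.0, 2.0, 3.0)

# synthetic data: right coordinates are rotated-and-translated left coordinates
left = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (2.0, -1.0, 4.0)]
right = [tuple(a + b for a, b in zip(apply(R, p), t_true)) for p in left]

# note: no correspondences needed for this step, just the two centroids
t = tuple(a - b for a, b in zip(centroid(right), apply(R, centroid(left))))
assert all(abs(a - b) < 1e-12 for a, b in zip(t, t_true))
```
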
But this is the formula that we're going to come back to at the end to get the translation. And so at that point, we can move the origin to the centroid and add a dash to the r's, indicating they are now measured with respect to the centroid. So when we do that, the formula simplifies. Now we're minimizing this expression, where these r-r-prime and r-l-prime are defined in terms of coordinates with respect to the centroid-- we subtract out the centroid there. And OK, then we take this. And we multiply this out. Again, the norm squared is just the dot product of this thing with itself. So think of it written out that way. Then when you multiply out, you get four terms. You gather them up, and you get this. And here, we use one little trick, which is that we noted that the length of a vector is not changed by rotation. So over here, we really get R times the left vector, squared. But we know that after rotation, it's the same length. So we just replace it. OK, and at that point, these two things are fixed. They're given by the data. They're not going to depend on the rotation you pick. So forget about them. We need to focus on this middle term. It's the only one we have control over. We can change R. But it has a minus sign. So we're trying to minimize something. So that means we want to maximize this term. And that's what we got to last time. And I was explaining it in terms of the sea urchin analogy, where we've got these spines. And we're trying to get them to be lined up as closely as possible so that their dot product is as large as possible. The dot product of corresponding points-- now, we need correspondences. And this is the classic problem in spacecraft attitude control. You have the direction to stars in your cameras. And you're trying to line them up with the catalog directions. And of course, you want to make the-- and it's a good analogy, because in this case, we don't care about the length.
We don't have the length. We don't know-- from the camera, we have no idea how far away the stars are. And we don't care. And if we move sideways, actually that direction doesn't change, because in comparison to the distance to the stars, our own motion is microscopic. So this would be the thing you want to maximize in that case. And but how do you do it? Well, we're sort of tuned to using calculus to solve all these problems-- just differentiate with respect to the unknown transformation. But what does it mean to differentiate with respect to r? Now, I'm sort of making fun of it. But you can actually pursue this further. What you need to do is not just differentiate with respect to r but impose all of those constraints that I talked about. And you can do that. It just gets messy. You have to impose that r transpose r is the identity. And you have to impose the determinant of r is plus 1. Now, that's the hard one. OK, so let's not do it that way. Let's use our new fangled quaternion notation. So here, we have the dot product of two vectors. And we can express that here as the dot product of two quaternions with 0 scalar part. So we map this r li prime into a quaternion, r li prime. And we map this R r i prime into this quaternion. And all you've done is add the 0 scalar part. I've also flipped them around, because dot products commute-- so can do that. OK, and this is where I used that trick. Remember when I said that sometimes you want to move something from one side of a dot product to the other. Here, I've taken this quaternion, flipped it to the other side. But I had to conjugate it. Now, it's already conjugated. So conjugate of the conjugate is the thing itself. So I get this expression. And then I make use of the property that I can write this quaternion product as a matrix times vector. And I can write this as a matrix times vector. So I get that. 
And then I make use of the fact that a dot product is just the left thing transpose multiplied by the right thing. And then finally, I can pull out the q. So this is important. I can separate out the q on both sides. So it doesn't vary with i. And so I end up with this product. So this is where all my data is. All my measurements are buried in here. This is a 4 by 4 matrix that I get from my measurements. And now, all I want to do is make this thing as large as possible-- well, not quite, because I can make this as large as I like by making q large, right? So I have to be careful to impose that constraint. And so quaternions are now a redundant representation, because they have four numbers. And you have to impose this extra constraint. OK, and then there are different ways of doing this. So first of all, let's see. What is N? Well, N is, again, got from the data. And there are two ways to do this. If you know about Lagrange multipliers, you can add another term that imposes the constraint. So the constraint is that q dot q equals 1, or q dot q minus 1 equals 0. So you can impose it that way. And we'll do it another way that doesn't involve needing to know about Lagrange multipliers. Anyway, in either method, we can now differentiate with respect to q, which is amazing. We can differentiate with respect to rotation. Of course, in the case of translation, no one's surprised you can differentiate with respect to translation or position. But for rotation, that's quite something. If we do, we get this. And this is just using the ordinary rules for differentiation of a dot product and differentiation of a matrix times a vector, which by the way, are in the appendix of the book, which by the way, is on Stellar. So if you need to refresh your memory about differentiating with respect to a vector, it's all there. And so what does this say? Well, this says that Nq equals lambda q. Well, that should remind you of stuff we've done before, right? It's an eigenvector.
So q has to be an eigenvector, and lambda is the eigenvalue. And since we want to make this as large as possible, we pick the eigenvector that corresponds to the largest eigenvalue-- which is sort of unusual. In the past, we've always wanted to minimize something; we wanted the smallest eigenvalue. Here, because of that sign flip, we want the largest one. And what is it the eigenvector of? It's a 4 by 4 real symmetric matrix, which is constructed from the data. And so a couple of notes on that-- so one of them is we said for 2 by-- for a 2 by 2 matrix, we actually explicitly gave the eigenvalues and eigenvectors, because we know how to solve quadratics. And I mentioned last time that for 3 by 3, it can be done, because the characteristic equation is a cubic polynomial in lambda. And someone knows how to solve cubics in closed form. By the way, this used to be a big game, particularly amongst the Italian mathematicians. They would-- these problems were a real puzzle to them. And they were very proud when they solved them. But there wasn't the same sort of modern publication and tenure and whatever. It's like you've solved this tough problem. And you're very proud of it. What do you do with it? You don't want to just give it away. And yet, you want to be able to say later, you know, I got that 10 years ago. So they've come up with all these weird ways of encoding their solution, like in a story or as a sort of parable or some sort of other mathematics. And then later on, Ferrari, who was one of the guys who did this, could say that, oh, I solved this years before Cardano, who is another guy, because look at that poem I wrote. It tells you how to do this. So anyway, they discovered how to solve cubics. Here, we're going to have a quartic, right, because it's 4 by 4. It's going to be fourth order. And guess what. They have closed-form solutions. And those Italian mathematicians and others figured out how to do it.
Basically, how do you do any of these things? Well, the usual trick-- you try and find a way of reducing third order to second order, because you know how to do second order. And quartics-- first step is reduce it to a cubic. Solve the cubic, and then the rest is easy. And so can you solve fifth order? No, OK, so it's good that some of you know that that's it. You can't-- now, of course today with computers, we can always numerically solve any of these. But if you want to have a, quote, closed-form solution, then you do need to know that you can solve 1, 2, 3, 4. But that's it. Then higher order, you can't solve in closed form using addition, subtraction, multiplication, and roots. OK, so this matrix down here-- it looks more familiar to you than N. That's because this is simply the dyadic product of the vector in the left coordinate system and the vector in the right coordinate system. So this is a 3 by 3-- each of these is a 3 by 3 matrix that's a dyadic product. And you just keep on going, stepping through your point pairs and adding them up. And you get a 3 by 3 matrix, which is not symmetric. And so it has nine independent quantities. And let's think about N. Well, N is 4 by 4. So it has 16. So how does that work? It turns out that N is rather special. It turns out N is symmetric. And it turns out that it's not that easy to prove. The one mistake in this paper was that I think I said something like, it's obvious that it's symmetric. But it takes-- anyway, it's symmetric. So that means 4 by 4 symmetric-- we've got 4 on the diagonal, and then we have 6 more off the diagonal-- 10 independent quantities. So it's not 16. It's 10. But it's still too many, right, because we only have 9 in M. Well, it turns out that this matrix is very special, because the determinant-- the characteristic equation up here has this important property, that that cubic term is 0, which actually makes it easier to reduce it to a cubic.
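The whole rotation recipe can be checked end to end. This is a minimal sketch, not the lecture's closed-form quartic: it accumulates the dyadic products into M, assembles the symmetric traceless N from M's nine elements, and then finds the eigenvector of the most positive eigenvalue by shifted power iteration (a library eigensolver, or solving the quartic, would give the same q). All function names are mine:

```python
import math

def qmul(p, q):
    p0, p1, p2, p3 = p
    q0, q1, q2, q3 = q
    return (p0*q0 - p1*q1 - p2*q2 - p3*q3,
            p0*q1 + p1*q0 + p2*q3 - p3*q2,
            p0*q2 - p1*q3 + p2*q0 + p3*q1,
            p0*q3 + p1*q2 - p2*q1 + p3*q0)

def rotate(q, v):
    qc = (q[0], -q[1], -q[2], -q[3])
    return qmul(qmul(q, (0.0,) + tuple(v)), qc)[1:]

def centered(pts):
    n = len(pts)
    c = [sum(p[m] for p in pts) / n for m in range(3)]
    return [tuple(p[m] - c[m] for m in range(3)) for p in pts]

def best_rotation(left, right):
    l, r = centered(left), centered(right)
    # M: sum over point pairs of the dyadic product of left and right coordinates
    M = [[sum(a[u]*b[v] for a, b in zip(l, r)) for v in range(3)] for u in range(3)]
    (Sxx, Sxy, Sxz), (Syx, Syy, Syz), (Szx, Szy, Szz) = M
    # the symmetric, traceless 4x4 N assembled from the 9 elements of M
    N = [[Sxx+Syy+Szz, Syz-Szy,      Szx-Sxz,      Sxy-Syx],
         [Syz-Szy,     Sxx-Syy-Szz,  Sxy+Syx,      Szx+Sxz],
         [Szx-Sxz,     Sxy+Syx,     -Sxx+Syy-Szz,  Syz+Szy],
         [Sxy-Syx,     Szx+Sxz,      Syz+Szy,     -Sxx-Syy+Szz]]
    # shift by a Gershgorin bound so all eigenvalues are positive; then power
    # iteration converges to the eigenvector of N's most positive eigenvalue
    s = max(sum(abs(x) for x in row) for row in N)
    v = (1.0, 0.0, 0.0, 0.0)
    for _ in range(2000):
        w = [sum(N[u][m]*v[m] for m in range(4)) + s*v[u] for u in range(4)]
        n = math.sqrt(sum(x*x for x in w))
        v = tuple(x/n for x in w)
    return v   # unit quaternion (up to sign, which is the same rotation)

# synthetic check: points rotated by 90 degrees about z
q_true = (math.sqrt(0.5), 0.0, 0.0, math.sqrt(0.5))
left = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0), (1.0, 2.0, 3.0)]
right = [rotate(q_true, p) for p in left]

q = best_rotation(left, right)
for a, b in zip(left, right):
    assert all(abs(x - y) < 1e-6 for x, y in zip(rotate(q, a), b))
```
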
And so N is a symmetric 4 by 4 matrix with a trace of 0. So that means we've got 10 minus 1 is 9. So now, it matches. M has 9 independent values. OK, we don't need to really know-- and if you're actually going to implement this, you might want to know this, but just do know that you can compute all the coefficients of the characteristic equation. And then you go off and solve your quartic. So the main application is going to be absolute orientation, but it also has lots of other applications that we already talked about. And if you go over to Draper Lab, where they do all this spacecraft control, you'll find people, who know about quaternions, because that's what's used there. Desirable properties-- so we have this table. Let's see how we're doing. Ability to rotate vectors-- yes, we-- qr q conjugate. Ability to compose rotations-- yes, p times q is the composition. Intuitive, nonredundant representation-- well it's almost nonredundant. It has four numbers to represent 3 degrees of freedom. And but the redundancy is very simple. It's just the unit vector-- unit quaternion. Is it intuitive? Ah, one part-- the vector part is in the direction of the axis. That's sort of intuitive. Computational efficiency-- we haven't talked about that yet. How is it compared with matrix operations? Can we do things like interpolate orientations? Yes, suppose that you have-- your artist has the ballerina doing a turn and got it-- got her in this position and then in that position rotate it, how do you do all the inbetweening? Do you take the initial rotation matrix and the final rotation matrix-- take some sort of weighted average? Well, that's not going to be orthonormal. So here, it's trivial, right? You just take that angle. Suppose that the rotation is about an axis through an angle of 90 degrees. You just split it up into however many frames you need and compute the quaternion correspondingly. We can take averages of a range of rotations. 
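The inbetweening point is easy to make concrete: a rotation of theta about an axis splits into fractions just by scaling the angle in the half-angle formula. A small sketch (names mine):

```python
import math

def qmul(p, q):
    p0, p1, p2, p3 = p
    q0, q1, q2, q3 = q
    return (p0*q0 - p1*q1 - p2*q2 - p3*q3,
            p0*q1 + p1*q0 + p2*q3 - p3*q2,
            p0*q2 - p1*q3 + p2*q0 + p3*q1,
            p0*q3 + p1*q2 - p2*q1 + p3*q0)

# frame t of the motion: q_t = (cos(t theta / 2), sin(t theta / 2) axis)
def frame_q(axis, theta, t):
    c, s = math.cos(t*theta/2), math.sin(t*theta/2)
    return (c,) + tuple(s*a for a in axis)

theta = math.pi/2                    # a 90-degree turn about z
axis = (0.0, 0.0, 1.0)
qs = [frame_q(axis, theta, m/4) for m in range(5)]   # 5 evenly spaced frames

assert qs[0] == (1.0, 0.0, 0.0, 0.0)     # t = 0 is the identity
full = qmul(qs[2], qs[2])                # the half turn composed with itself...
assert all(abs(a - b) < 1e-12 for a, b in zip(full, qs[4]))   # ...is the full turn
```

Every frame is a unit quaternion by construction, which is exactly what a weighted average of orthonormal matrices would fail to give you.
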
If you want to know the average loading on the seat belt as your car is tumbling in all possible orientations, you can easily do that. And we can take the derivative as we saw. So we can do optimizations, least squares. If there's no closed-form solution, we at least have a way of sampling that space of possibilities in a uniform or random way. Oh, OK, well, let's see. How late is it? Well, let's do a couple more things. OK, so this is where we're going to go after absolute orientation. This is relative orientation. And this is kind of more-- so absolute orientation is very important for photogrammetry, making topographic maps from aerial photographs. But in terms of understanding human vision, one of a dozen depth cues is binocular stereo. And so relative orientation there is more important. And so that's the problem of, we've got two eyes. And the eyes don't get depth. They only get directions. So we have the directions in each of those coordinate systems to points out in the world. And I show 5 here. And it turns out 5 is the minimum number you need to solve this problem. And your problem is to find out what that baseline is and what the relative orientation of this coordinate system is relative to that. In other words, we have a translation. And we're going to have a rotation. So in that respect, it's the same problem. It's just that our data now is not as good. But-- in absolute orientation, we actually know 3D coordinates of all of these points. And that makes it possible to get a closed-form solution. Here, we only know the directions. So that's the problem we're going to solve after absolute orientation. Then if you're trying to describe the kinematics of a robot manipulator, well, these days people are lazy. You just buy it. And the software does all that for you. But in the old days, we had to walk in the snow. There was no bus. Well, in the old days, you actually had to understand the geometry of these devices.
And that means that as you go through each link, you are doing a translation and a rotation. And so if you want to avoid problems with coordinate systems lining up, particularly in the wrist, then quaternions are a good way to represent the rotation. So in the wrist, for example, you can imagine that if this part now comes up, there will be a time, where this axis in here is parallel to that axis. And you've lost a degree of freedom. So computational issues-- now, I guess these days computers are pretty fast. So we don't worry about this as much. But if you have to do this for every point in some object in your graphics representation, this could be an issue. So let's see how expensive it is. So again, there are two things we want to do-- compose rotations and rotate vectors. So for composition, we know it's just multiplying two quaternions. And if you do it the naive way, just following that formula, you get 16 multiplies and 12 adds. And if you do it with an orthonormal matrix, of course, it's a 3 by 3 matrix product, which is 27 multiplies and 18 adds. So we win-- definitely win here. Let's go to rotating a vector. So we use this formula. And we can expand out the part we want, which is the vector part-- 22 multiplies, 16 adds. Compare the matrix-- oops, 9 multiplies and 6 adds. So here, we lose. But this can be rewritten in this form, which has the advantage that you compute the qr once, and you use it in two different places. And you get it down to 15 multiplies and 12 adds. And this is naive stuff. People that get serious about this go further. For example, with a 3 by 3 matrix, they don't represent it as a 3 by 3 matrix. At most, you take the first two rows, because the other one's implicit. It's a cross product. And so both-- so you could have a competition, where you just keep on going on both sides. And they end up being similar, but with the advantage for composition going to quaternions and the advantage for rotating vectors going to orthonormal.
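The two operations being costed here, composing rotations and rotating a vector, can be sketched in a few lines of Python. This is an illustrative sketch of the formulas, not the operation-count-optimized code the lecture alludes to; the (w, x, y, z) tuple layout is just a convention chosen here.

```python
import math

def qmul(a, b):
    """Quaternion product: composition of the rotations a and b.
    Done naively like this, it is 16 multiplies and 12 adds."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def qconj(q):
    w, x, y, z = q
    return (w, -x, -y, -z)

def rotate(q, v):
    """Rotate vector v by the unit quaternion q via q (0, v) q*."""
    w, x, y, z = qmul(qmul(q, (0.0,) + tuple(v)), qconj(q))
    return (x, y, z)  # the scalar part comes out zero for unit q

# A 90-degree rotation about the z-axis: cos(45) + sin(45) k.
q = (math.cos(math.pi/4), 0.0, 0.0, math.sin(math.pi/4))
v = rotate(q, (1.0, 0.0, 0.0))   # the x-axis should map to the y-axis
half_turn = qmul(q, q)           # composing it with itself: 180 degrees about z
```

Note that `rotate` as written spends two full quaternion products; the 15-multiply version mentioned above comes from expanding the product symbolically and reusing common subexpressions.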
OK, how about renormalizing? So one of the things that happens-- suppose you have a graphic program that's trying to do some motion sequence. And you're composing small incremental rotations. And if you represent them, let's say, as orthonormal matrices, you can imagine that because of the limitations of floating point arithmetic, that errors will accumulate. And after a while, your 3 by 3 matrix is no longer orthonormal. And so you'd want to sort of square it up. And that can be done. But it requires doing this. You take your matrix, and you multiply-- you take its transpose, multiply by M. And you take the inverse of the square root of the matrix. So right there I'd imagine that you may not know how to do that. But it can be done. But that just suggests that this is not an operation that you'd normally want to do. It's expensive. With quaternions, sure enough, you take the unit quaternion, and you multiply it by another unit quaternion and so on. And because of the limitations of floating point arithmetic, after a while it isn't a unit quaternion. How do you square it up? Well, trivial-- so this is one other small difference between them. So just about at the end-- so we talked about the space of rotation being the 3 sphere, unit sphere in 4D with antipodal points identified. And it also happens to be identical to the projective space p3 although we don't use that. And it's not as handy. Then sampling regular and random-- so how do we sample this space? Well, in 3D, we can easily come up with a regular sampling of space or even the random sampling. But it's not so obvious how to do it here. And one way is to find regular patterns. If we're-- say we want to sample a sphere regularly. That's not so easy either. Certainly, latitude and longitude is not going to do it. But one thing we can do is project a polyhedra onto it, because polyhedra are completely regular. 
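The asymmetry between the two fix-ups is easy to see in code. Renormalizing a drifted quaternion is a single division by its norm; squaring up a drifted rotation matrix needs something like a polar-decomposition step. The iteration M <- M (3I - M^T M) / 2 used below is one standard trick for nearly orthonormal matrices; it is shown here as an assumed illustration, not as the exact M (M^T M)^(-1/2) computation written on the board.

```python
import math

def qnormalize(q):
    # The quaternion fix-up is trivial: divide by the norm.
    n = math.sqrt(sum(c*c for c in q))
    return tuple(c/n for c in q)

def matmul(a, b):
    return [[sum(a[i][k]*b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def transpose(m):
    return [[m[j][i] for j in range(3)] for i in range(3)]

def reorthonormalize(m, steps=5):
    """Iterate M <- M (3I - M^T M) / 2, which pulls a nearly
    orthonormal matrix back onto the rotation group (an assumed
    iterative stand-in for the matrix inverse square root)."""
    for _ in range(steps):
        s = matmul(transpose(m), m)
        corr = [[(3.0*(i == j) - s[i][j]) / 2.0 for j in range(3)]
                for i in range(3)]
        m = matmul(m, corr)
    return m

# A rotation about z by 30 degrees, scaled by 1.001 to simulate
# accumulated floating point drift.
c, s = math.cos(math.pi/6), math.sin(math.pi/6)
drifted = [[1.001*c, -1.001*s, 0.0],
           [1.001*s,  1.001*c, 0.0],
           [0.0,      0.0,     1.001]]
fixed = reorthonormalize(drifted)
```

The point of the comparison survives in the code: the matrix route needs matrix products and an iteration, while `qnormalize` is one square root and four divides.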
So if we project, say, an icosahedron or a truncated icosahedron, which is a soccer ball-- anyone here know about soccer-- then we end up with a pattern on the sphere that is regular. But there are only some special ones, like regular solids. There are only a few of them. Well, similarly here, except now, we're in 4D, and the corresponding things are rotations rather than polyhedra. And so it turns out that if you look at the rotations of polyhedra, they will give you a nice, even sampling of that space and the rotation groups-- so the tetrahedron has 12 rotations that will align it with itself. The cube and the octahedron have 24. And the icosahedron and dodecahedron have 60. So these give you a perfectly uniform spaced sampling of that space. Unfortunately, that's not a very fine sampling, right, because 60 in the four-dimensional space. So there are tricks to give you a finer sampling. But they all start with those simple ones. And just while I have the slides here, if you pick the coordinate systems right, you get very simple expressions for these rotation groups. So each of these is a rotation. This one, of course, is the identity rotation-- does nothing. This is the rotation about the x-axis by 180 degrees. And this is about the y-axis and so on. So this is the group for the tetrahedron. They provide a regular spaced sampling of that space. This is for the hexahedron and the octahedron. And then the prize is this one. This is the rotation group for the dodecahedron and the icosahedron, where these a, b, c, d's are given by these expressions up there. These should look familiar. This only works if you line it up nicely with the coordinate system. Anyway, this would be a sampling. So you could try these 60 orientations for your search or your averaging. And then you can interpolate further. OK, well, we've run out of time. So that's it for today.
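The 12-element rotation group of the tetrahedron mentioned above can be written down as unit quaternions and checked for closure directly. With antipodal quaternions identified, the elements are the identity, the three 180-degree rotations about the coordinate axes, and the eight 120-degree rotations about the body diagonals; this sketch just verifies that the set is closed under composition up to sign.

```python
from itertools import product

def qmul(a, b):
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def canon(q):
    """Identify antipodal quaternions: flip the sign so the first
    nonzero component is positive, rounding to kill float fuzz."""
    q = tuple(round(c, 9) for c in q)
    for c in q:
        if c != 0:
            return q if c > 0 else tuple(-x for x in q)
    return q

# Identity, three half-turns about the axes, and eight 120-degree
# turns about the body diagonals of the tetrahedron.
group = {canon(g) for g in
         [(1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)] +
         [(0.5, sx*0.5, sy*0.5, sz*0.5)
          for sx, sy, sz in product([1, -1], repeat=3)]}

closed = all(canon(qmul(a, b)) in group for a in group for b in group)
```

The analogous sets for the cube/octahedron (24 elements) and the dodecahedron/icosahedron (60 elements, with the golden-ratio coefficients on the slide) can be checked the same way.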
MIT_6801_Machine_Vision_Fall_2020
Lecture_9_Shape_from_Shading_General_Case_From_First_Order_Nonlinear_PDE_to_Five_ODEs.txt
[SQUEAKING] [RUSTLING] [CLICKING] PROFESSOR: We've been talking about recovering the shapes of objects using image brightness measurements. And we talked at some length about photometric stereo, which was a method that gets us the result. But it's kind of unnatural in that it requires multiple exposures. And we're about to transition into shape from shading, which is a method for doing this with a single image. And we've already talked about a few different types of surface materials and their reflecting properties. In particular, we talked about the Lambertian surfaces, and we started to talk a little bit about Hapke, which is a model for the reflection from rocky planets. And I want to introduce a third one, which has to do with microscopy. So that's a scanning electron microscope. But first, for comparison, here's some pictures of transmission electron microscopes. As you probably know, they allow you to achieve a huge magnification, much larger than with visible light, because you're limited by the wavelength of light. And the wavelength of electrons at kilovolt energies is very much shorter, potentially allowing you to image large molecules. And they haven't yet achieved their limit. They theoretically would be able to resolve individual molecules and atoms, and in some cases, they do. Anyway, if you do a search on transmission electron microscope and you click on Images, you get all of these pictures of these types of microscopes. And then eventually, you get some pictures of things that might have been imaged with an electron microscope, like this thing on the right. And this is a cell with a nucleus and some vacuoles and so on. And it's an extremely useful technique for seeing fine detail, but it has its limitations in terms of interpretation. So you look at-- well, what we will do in a moment is look at the comparison with other kinds of microscopy. So let's see. I was going to click on this thing.
So I guess you can make sense of it, but it's a very thin slice of something, and you'd need to know the details of what that is. Now if in comparison, we do a search on the scanning electron microscopes and click on Images, we get this. And you have to scroll down before you get a picture of a microscope. Why is that? Well, because scanning electron microscopes make these great pictures that people love. The transmission electron microscopes make great pictures that scientists involved in the research love, but they're not easily interpretable whereas these scanning electron microscope pictures, they're very popular. They're just fabulous. I mean, look at that. I mean, you can immediately see the head of this jumping spider, and six of its eight eyes and so on. And why is that? Beautiful pictures that we find very easy to interpret, know exactly what that is. And this is a surface of who knows what. But you can immediately tell the shape and where the crevices are and so on. So the point is that these scanning electron microscopes produce images that we find easy to interpret. So I don't know. Can you think of a reason why we don't find the transmission electron microscope images so easy to interpret, but these we can given where we are in the course? Well, if you look at these pictures, you see that there's a variation in brightness. So that's pretty obvious. But it's a variation that's sort of special. As you approach the edges, things get brighter whereas if you're looking at the frontal surface, they're darker. So there's a variation in brightness depending on surface orientation. Ta-da, that's what we're talking about. So the reason we find these images so appealing and easy to interpret is because they have shading. They have a dependence of brightness on surface orientation. And so this is obviously the head of some moth. You can see the compound eyes. You can see the schnozzle there all rolled up. So it's so easy to see what's going on.
There's an alien landing on-- and so the only thing that's a little bit odd is if you look at these shapes, so this is obviously some ovoid football-like shape, it's darkest in the middle, and it gets bright towards the edges. So that's sort of anti-Lambertian. With Lambertian, we get the brightest surface reflection where the surface is perpendicular to the incident light. And so we expect that as you approach the occluding contour, it gets darker. So if we look at the isophotes on, say, the image of a sphere, it'll be concentric curves that sort of drop off towards the edge. And these go the other way. And so that's kind of interesting because if we grew up in a world where most things were sort of Lambertian, we'd get very good at interpreting shape of things that have that type of surface. And yet we're able to interpret the shape of this very well. And at one point, I had a UROP take a bunch of these pictures and reverse the contrast because they should be much easier to interpret. Why didn't anyone think of that? And well, it was not a very successful UROP assignment because with the reverse contrast, the pictures didn't look any better. In fact, they were marginally harder to interpret, and we had some explanations for that. So it shows a couple of things. First of all, this modality of microscopic imaging is very nice from our point of view because we can understand these images. We don't need some complicated calculation. And the other one is that the human visual system apparently can do this as well, and it's not hardwired to have any particular idea about what the surface material is. It can adapt and deal-- I mean, we can see that this is not a Lambertian surface. At the same time, we can have a pretty good idea of what the shape is. I mean, maybe not metrically accurate, but that's another story. OK, so that's what we're going to talk about next. So let me get rid of that. How do we do this? How do we implement this? Yes, I want to shut down. 
So first, a little note on how these scanning electron microscopes work. You're all probably too young to remember cathode ray tubes, but people used to create images by deflecting electron beams and having them impinge on phosphors, which would then glow. And those devices had a source of electrons. Basically, you're sort of boiling electrons off some surface. So that's a little heated coil. So that's the source of electrons. And they just sort of bubble off the surface. And then you accelerate them by having some electrode, which we'll call the anode. And, well, a lot of the electrons end up there. But other electrons are accelerated through. And then we have lenses. And these typically are magnetic. And they can also be electrostatic lenses. And the idea is that we're going to focus this beam onto some object. So make some space for some object down here. So with suitable cleverness and enough money, you can focus that beam down to a very small point. And then what do you do? Well, the electrons hit that object, and some of them bounce off. So we have backscatter, backscattered electrons. But most of them penetrate. And in the process, they lose energy and create secondary electrons. So you can sort of think of them as bumping into molecules and losing some energy and creating-- bumping electrons off ionizing things. And some of those secondary electrons come out of the object, and they're gathered by some kind of electrode, which is also-- there are other effects. For example, there will be some fluorescence at X-ray energies. So some people measure the X-rays coming off. And there may also be visible light and so on. But we're going to focus on the secondary electrons because that's what's used in imaging and creating those images that I showed you. So why does the secondary electron current, which we measure here, vary with the surface? Well, first of all, the way I've drawn it, it's not a whole lot of use because we just get one measurement.
So what we really want to do is scan this. So some of these electrodes-- well, typically, it's done magnetically. As you know, an electron in a magnetic field will turn. And so using two orthogonal magnetic fields that you can control from the computer, you can direct that beam anywhere you want, and you can scan the object in a raster-like fashion, for example. So this always reminds me of Philo Farnsworth, who has-- his claim to fame is that he has a dirt road named after him a few miles from where I live. He's the guy who invented television. Nobody's giving him any credit for it, and that's because he was a very clever person but not a good businessman, and he got shafted by the people who ran RCA, et cetera. Anyway, probably never heard of him. But when he was asked by a reporter, you know, how did you come up with this idea-- I mean, to us now, it all seems pretty obvious. But at the time, what was available? Well, there was telegraph, the beginnings of telephone, and the big question was audio is a one dimensional signal. So we understand how to measure that, how to transmit it, how to reproduce it. But this image is 2D. How do you do that? And there were all kinds of crazy, harebrained schemes. And supposedly, his inspiration was the furrows in his father's farm because I guess he had to walk behind the oxen and hold the plow. And he decided that, well, just as I can plow this whole area by turning it into a 1D problem, I can do the same with images. To be honest, I think that's an explanation he came up with when a reporter asked because a lot of times, people need some way of understanding how on Earth you ever got to the solution. And then you have to dream up some reason. So anyway, that's what he did. And that's applied here. So we scan in a raster fashion. And then what do we do? We use the current measured here to modulate a light beam in a display, either another CRT or read it into a computer and display it on an LCD.
And then you can control the magnification. How do you do that? Well, you just control the deflection. So if you deflect the beam a lot, you don't get much magnification. And as you reduce the deflection, you get more and more magnification. And so you can get thousands, tens of thousands compared to optical microscopy, which maxes out below 1,000 pretty much despite all sorts of incredibly clever tricks. OK, why do we create shaded images? So we have to look at how the surface interacts with the beam. So here's the beam coming in, electrons at several kilo electron volts. And they hit this material, and they start interacting. They bump into things, and they create secondary electrons. And they lose energy as they go. So the first few are rather high energy, and then they get lower energy, and those guys bump into things. And most of those electrons just disappear in this object, but some of them that are near the surface escape and are measured by our secondary electron collector. And so basically, what we're displaying is what fraction of the incoming beam actually makes it back out again. By the way, one of the limitations is this thing better be conductive because you're pumping all these electrons. It's not a huge current, micro amp or less. But still, this insect is in there, and you're putting a micro amp of current, and pretty soon, it'll charge up to be some significant fraction of a Coulomb, and it'll explode. So in order to do this, you typically gold plate these objects so that they're conductive. Why gold? Well, because it doesn't outgas. This whole thing has to be done in vacuum. OK, so that's what we have in the vertical situation. Now imagine that we have a highly inclined surface element. Well, now many more of the electrons are going to escape because they're generated closer to the surface. So that means the current we measure will be higher when we're dealing with a surface that's inclined. And so that's it.
That's the mechanism for turning surface orientation into brightness. And our job is to invert that. So one way we can understand this is to plot the reflectance map, which we've already done for Lambertian surfaces and Hapke surfaces. Brightness versus orientation-- so in this case, coming straight down means that we get a relatively dark image because a lot of the secondary electrons are just never going to make it out of the surface. Then as we go to higher surface inclination, it gets brighter because more of the secondary electrons escape. And in typical arrangements, the thing is axially symmetric. So whether these electrons come out in this direction or that direction, if they get out of the surface, they're going to be counted. And so that means that this is going to be symmetric. And so if we plot the isophotes, we're going to get something like that. Put numbers on it. Where it gets bigger, the further out we get, the more inclined the surfaces. And yes, you can modify this. You can remove some of the collectors or wire this collector to a different sensor and so on and make this asymmetrical. But the standard way of using it is just to measure the total secondary electron current. And that gives us that type of reflectance map. OK, so now if I measure the brightness at a particular point in the image, I get the slope of the surface. So I can directly translate that. If it's 0.7, then the slope is whatever that distance from the origin is. But I don't get the gradient. Right, what I'd like to know is what's the surface orientation. And as usual, we got two unknowns, p and q. We only have one constraint. We measure the brightness. And so yes, we've reduced the number of possibilities, but we still have a one dimensional, infinite set of possibilities. So the slope typically is thought of as a scalar whereas this is a vector. So it's a little bit like speed versus velocity. I mean, not everyone agrees on those definitions. 
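A common idealized model for this kind of reflectance map, which I'll use here as an assumption since the lecture hasn't committed to a formula, is that secondary-electron emission grows like the secant of the incident angle, so brightness depends only on slope: R(p, q) is proportional to sqrt(1 + p^2 + q^2). The sketch below shows the circular-isophote property described above: two different gradients with the same slope give the same brightness, which is exactly why one measurement fixes the slope but not the gradient.

```python
import math

def sem_reflectance(p, q):
    """Idealized SEM reflectance map (an assumption, not from the
    lecture): brightness grows like 1/cos of the incident angle,
    i.e. like sqrt(1 + p^2 + q^2), so it depends only on slope."""
    return math.sqrt(1.0 + p*p + q*q)

# Two different gradients with the same slope magnitude 0.5 ...
b1 = sem_reflectance(0.3, 0.4)
b2 = sem_reflectance(0.5, 0.0)

# ... land on the same isophote: brightness alone gives the slope
# (a scalar), not the gradient (a vector).
slope = math.sqrt(b1*b1 - 1.0)
```

Under this model a flat, frontal surface (p = q = 0) is darkest, with R = 1, and brightness increases toward the occluding contour, matching the anti-Lambertian look of the SEM images.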
But to me, speed is a scalar quantity going 50 miles an hour whereas velocity is a directed vector. So OK. So what to do? Well, it's just another example of this shape from shading problem. We're going to have some distribution of some pattern of brightness. And our job is to estimate what the surface shape is, and we have a reflectance map to help us with that because it allows us to look up for any particular brightness we measure what the slope is, unfortunately not the gradient. If it was the gradient, then we would be in better shape. And we're going to do this for various special cases and then for the general case, where we can use any reflectance map we like. And that's important because we don't want this to only work for, say, Lambertian surfaces or the lunar surface or whatever. So OK. So when you get a chance, look up scanning electron microscope images and just marvel at the wonderful world that exists there and how we're able to understand it so well. And, you know, puzzles like why is it that if you reverse the contrast, it's not easier to interpret the image, which is what you'd expect. You'd expect that a surface that's brightest where it's facing you would be easier to understand than one that's darkest. OK, so in preparation for this, I want to talk a little bit about how to go from a needle diagram to shape. Now where are we going with this is we're going directly to a method that takes us from a single image to a shape. But meantime, we have some tools, and I want to exploit those tools. So for example, what's a needle diagram? Well, it's just surface orientation at every pixel. And what are the needles? Well, you can think of each facet of the surface as having a normal that's sticking out. And you're looking down at that normal. That's the needle. And the view of it, you may remember from the very first class, if you're looking straight down perpendicular to the surface, then that needle is just a point.
And if it's not, then it'll be pointing in a direction that tells you what the surface gradient direction is. And the length of it will tell you how steep the surface is. So where do we get these? Well, we're not going to get them from there. So there's more work to be done. But for example, we get this from photometric stereo. So in photometric stereo, we're not computing z as a function of x and y. What we're getting instead is p and q as a function of x and y, of course remembering that p is dzdx and q is dzdy. Well, it's our estimate based on our image brightness measurements. OK, so that seems actually like a very simple problem. Why? Well, in the discrete case, we have a number of unknowns equal to the number of pixels. We're trying to recover z at every pixel, let's say. So there are millions of unknowns. But at every pixel, we also have two pieces of information-- p and q. So if we have a million pixel image, we have 2 million pieces of information. So it's overdetermined. We actually have more information than we need. And as we saw, that's always handy because that means that we would be able to reduce noise and get a better result than if it wasn't overdetermined. So in fact, we have twice as many constraints as there are unknowns. OK, so what can we do? Well, one thing we can do is suppose we start here and just integrate out p, which is dz dx along this axis. And what we'll get is the change in height. So z of x is z of, let's say, 0-- suppose we start at the origin-- plus the integral from 0 to x of p. Right, because dz dx is the derivative. You integrate the derivative here. You know all that. So OK, so I can get the height anywhere along the x-axis. And similarly, I can get the height anywhere along the y-axis by integrating the other thing I know. So then from there, I can combine them, and I can go partway up on the y-axis and across on the x-axis. So I can get partway on the x-- so I can fill in the whole area in various ways like this.
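The along-the-axes integration being described is easy to try numerically. Below I make up a test surface z = sin(x) + cos(y) (my choice, not the lecture's), take its true p and q, and recover the height at a target point by integrating p along the x-axis and then q up the column, using the trapezoidal rule.

```python
import math

# Hypothetical test surface (an assumption for illustration):
# z = sin(x) + cos(y), so p = dz/dx = cos(x), q = dz/dy = -sin(y).
p = lambda x, y: math.cos(x)
q = lambda x, y: -math.sin(y)
z_true = lambda x, y: math.sin(x) + math.cos(y)

def integrate_height(xt, yt, n=2000):
    """z(xt, yt) = z(0, 0) + integral of p along the x-axis,
    then integral of q up the column at x = xt (trapezoidal rule)."""
    z = z_true(0.0, 0.0)
    dx = xt / n
    for i in range(n):
        x0, x1 = i*dx, (i + 1)*dx
        z += 0.5*(p(x0, 0.0) + p(x1, 0.0))*dx
    dy = yt / n
    for i in range(n):
        y0, y1 = i*dy, (i + 1)*dy
        z += 0.5*(q(xt, y0) + q(xt, y1))*dy
    return z

z_est = integrate_height(1.0, 1.0)
```

With exact p and q the recovered height matches the true surface; the interesting failure mode, pursued next in the lecture, appears only once p and q carry measurement noise and different paths disagree.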
And actually, I could take a curve. So let's take that curve. And so along that curve, what we're looking at is a combination. Right, so what is this? Well, that's just the dot product of the gradient pq dot dx dy. So that just tells me the change in height. So this is actually delta z, if you like. That's a change in height if I take a small step in the x-direction and then a small step in the y-direction. Pretty obvious. And so I can do that for all these points. I just pick contours and integrate up. Well, that's OK, but what if somebody else comes along and says, oh, I don't like the contour you chose? I'm going to go this way. And what you'd hope, of course, is they get the same answer. But there's no guarantee of that because these p and q's are determined experimentally. They're subject to measurement noise. So they're not perfect. They're not actually the derivatives of z with respect to x and the derivative of z with respect to y. They're our estimates which include some noise. So OK, well, what we'd hope is that they come out the same. And what that means is that actually, what we're hoping is that if you go around the loop, if we go this way, da da da, up there, and then we come back, what should this equal? STUDENT: Zero. PROFESSOR: So that should be zero. Yes, thank you. And every jogger knows that that's not true. If you jog in a loop, you seem to be going uphill all the way. OK, I guess we don't have many joggers here. So-- oh, it's like-- well, OK. So that should be true. So there's no guarantee that when we measure p and q experimentally, we estimate them, that this is going to be true. So what do we do with that? Well, one thing we can think about is turning this into some sort of condition on p and q. That is, p and q have to satisfy some constraint for this to be true. And this has to be true for any loop, not just that one. And I can decompose a large loop like that into small loops. So suppose it's true for that loop.
Well, then I can put another loop next to it and another loop. And now rather than go around these four loops, I can eliminate the inner parts, and they cancel each other. See, if I am going from here to there and then I'm going from there to here, I'm back at the same place. So this is a crude way of proving that if it's true for a small loop, for any small loop, I can decompose any large loop into lots of small loops. And then it's going to be true for the large loop as well. So OK, so let's see what-- so let's have a small loop of size delta x in this direction and delta y in that direction. And let's suppose that this is the point xy. And let's see how the height changes as we go around this loop. Start down here, say. Well, then we need to know what the slope is in the x-direction. So that's p. And let's take the slope at the center of this stretch, which is going to be p of x and y minus delta y over 2. And that's the slope, and we're going delta x with that slope. So we make this loop small enough so that we can use this linear approximation, that the slope is pretty much constant over that stretch. Then we go up. OK, so that's going to be-- we need the slope in the y-direction, which is q. And we'll take the slope and estimate it based on the center of this line. So that's at x plus delta x over 2 and y, times delta y. And then we go across the top, minus p of x, y plus delta y over 2. Oh, this is times delta x. And then we go down. So this is minus q. And that should be 0. We should be back at ground zero. So what we get here is if we do the Taylor series expansion of that, we get-- and down here, we get p of x comma y plus higher order terms. And then we expand this. We get q of x comma y plus delta x over 2 times qx-- let's see-- plus. OK, and of course, these terms cancel. So we end up with py delta x delta y minus qx delta x delta y is 0. Or in other words, these two are the same and cancel out on the sides of this little box. And so we get py is qx.
So if this is true of very small areas, then that condition has to be true. And that makes sense because p was supposed to be our measurement of dz dx and q was supposed to be our measurement of dz dy. And so that's z sub x y, and that's z sub y x. Now it's slightly confusing because that's obviously true of the original surface as long as it's smooth enough that the mixed partial derivatives don't depend on the order. OK, so if p and q really were the derivatives of the surface z of xy, then this would be true. But we're measuring them from experimental data. So this typically won't be true, and if we take this integral, we will get different results depending on different directions. And so this is like overdetermined linear equations. There is no solution. All we can do is find some least squares approximation. So same here-- there is no surface z of x,y that will give us exactly the p and q that we measured from photometric stereo or some other visual method. OK, now actually, we can get there a more elegant way, which is handy because we'll use this again later. So like many important theorems, this one has many names. I guess it's attributed to Gauss, and it's a special case of more general theorems used in fluid dynamics. But what it does is it relates a contour integral to an area integral where-- let's draw some sort of shape. So L is this contour and D is that area. So why do we care? Well, in machine vision, often we have computations that take us over all pixels. And there are a lot of pixels. Any time you can reduce that computation to a computation about the boundary of a region alone, you've got a huge win. You're down from millions of calculations to thousands of calculations. So there are several places in machine vision where we use a trick like this to turn an integral over an area into an integral over just a curve with a huge benefit. Of course, it's a very specialized thing.
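The small-loop calculation above can be checked numerically: walking around a tiny square with the midpoint slopes, the height change comes out proportional to (qx - py) times delta x times delta y, which vanishes exactly when the field is integrable. Below, a consistent gradient field (from z = xy, my own test case) gives zero, while an inconsistent one, p = y and q = 0, gives about minus delta x times delta y.

```python
def loop_height_change(p, q, x, y, dx, dy):
    """Walk counterclockwise around a small square centered at (x, y),
    using midpoint slopes, exactly as in the Taylor-series argument."""
    return (p(x, y - dy/2)*dx      # along the bottom
            + q(x + dx/2, y)*dy    # up the right side
            - p(x, y + dy/2)*dx    # back along the top
            - q(x - dx/2, y)*dy)   # down the left side

# Integrable field: the true gradient of z = x*y, so p = y, q = x.
good = loop_height_change(lambda x, y: y,
                          lambda x, y: x,
                          0.3, 0.7, 1e-3, 1e-3)

# Non-integrable field: p = y, q = 0, so py - qx = 1, not 0.
bad = loop_height_change(lambda x, y: y,
                         lambda x, y: 0.0,
                         0.3, 0.7, 1e-3, 1e-3)
```

The non-integrable case is the jogger's loop: the walk does not return to its starting height, and the discrepancy is just (qx - py) times the area of the loop.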
There are only that many things that have the special property, namely things that can be written in this form. But you know, suppose, for example, you want to compute the area of a blob. Well, you can count the pixels. You can just go row by row by row, see, OK, this pixel's in the blob, that one's not, and so on. Or you can just trace the outline. Turns out you can compute the area just by tracing the outline. In fact, there used to be a wonderful instrument which is fiendishly difficult to understand. But it allowed people to compute integrals in an analog fashion. You should look it up. It's an instrument that often was made out of precious metals because it was only for special people like surveyors. And you would basically hold one part of it and trace the outline and compute the area. Once you complete the loop, you've computed the area. You can read it off the instrument. And that's an example where we're turning an area integral, the integral of 1 over that area, into a contour integral. As you go around the edge, there's a little wheel that slips on the paper in just the right way and so on. So this isn't just something useful in the digital domain. It's been used in the analog world as well. So area is an example of something where this is a useful formula because we can compute area by doing something on the boundary instead of doing something in the interior. But it turns out that there are lots of other things we want to do. For example, in vision, often we want to know where something is. And in the simple case, it's a blob, and where is it? Well, the centroid is a very good way of talking about the position of a blob in an image. And so how do you compute the centroid? Well, again, you can go pixel by pixel and accumulate x times something, throw it in an accumulator, and when you're done, you divide, and you get the centroid. Or you can go around the boundary, which is much faster.
And continuing along that line of reasoning, there are things called moments of which the centroid computation is the first and the area computation is the zeroth. But generally, you can compute moments by just going around the boundary. And moments are useful in describing shapes. So so far, we talked about area and position, centroid. But you might also want to be able to say something like, oh, it's elongated. And so these moments are useful for that, and they can all be computed in this clever way. So let's apply Green's theorem. There's a 3D version of that, turning volume integrals into surface integrals. But since our images are 2D, we typically don't need that. OK, so we apply this to our problem. So we're trying to match something in this formula with what we were doing, and I guess we had pdx plus qdy up here. So let's try that. Let's try using Green's theorem. So M is q and L is p, and the integrand is dq dx minus dp dy. And we're saying that that is zero, right? We're going around the loop. OK, and this has to be true for all, not just a particular loop, but any loop-- small loop, large loop, whatever. And that can only be true if dq dx equals dp dy because suppose that quantity, that integrand, was non-zero at some point. All I need to do is construct a loop that includes that point, and I'd have a contradiction. So that again confirms the conclusion we arrived at painfully here by making a small square and then using Taylor series. This is a slightly more elegant way of getting to the same thing. OK, so what to do? p and q do not satisfy that constraint. So we could try to introduce that as a constraint. You could say, OK, find the z of x and y that has these p's and q's as derivatives approximately and also satisfy that constraint. Well, we haven't talked about how to solve constraint optimization problems. That's a little harder than the least squares we've done so far. So let's try a different tack. So let's basically do just brute force least squares.
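The area-and-centroid trick described above can be shown in a few lines: both come out of a single walk around the boundary, with no visit to the interior pixels. The polygon formulas below are the standard discrete form of the Green's theorem identities (the shoelace formula and its first-moment cousin), demonstrated on a made-up rectangle.

```python
def area_and_centroid(poly):
    """Area and centroid of a simple polygon from its boundary alone,
    via Green's theorem (vertices given counterclockwise)."""
    a = cx = cy = 0.0
    n = len(poly)
    for i in range(n):
        x0, y0 = poly[i]
        x1, y1 = poly[(i + 1) % n]
        cross = x0*y1 - x1*y0       # the contour-integral element
        a += cross
        cx += (x0 + x1)*cross
        cy += (y0 + y1)*cross
    a *= 0.5
    return a, cx/(6.0*a), cy/(6.0*a)

# A 4 x 3 rectangle: area 12, centroid (2, 1.5), and we never
# looked at a single interior point.
a, cx, cy = area_and_centroid([(0, 0), (4, 0), (4, 3), (0, 3)])
```

For a blob traced out of an image, the boundary has on the order of the square root as many points as the interior has pixels, which is the "millions down to thousands" win mentioned above.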
So we try and look for a surface z such that that is the smallest possible. Right, so ideally, if there was no measurement error and we had the correct surface, then there was a z such that its x derivative matched the p we computed from the image and its y derivative matched the-- sorry, the p that we got from the image and this one matched the q that we got from the image. But because of measurement noise, we expect that will not be possible. So let's at least try and make it as small as possible. OK, so that's the least squares problem in a nutshell. And well, we solve it as usual. We just differentiate with respect to z. Yes, can we do that? What is z? Is z a parameter? Right, so now unfortunately, we can't do that. If we had a finite number of knobs, we could turn-- parameters, variables. Then we could just differentiate with respect to each of them and set the result equal to 0 and we're done. That's what we've been doing. Unfortunately, z here is a function. And so there's no sort of sense of a way of talking about the derivative with respect to a function. So it has not a finite number of degrees of freedom. It has an infinite number of degrees of freedom. And there's a subject in mathematics that deals with that-- calculus of variations. And it's called that because basically, you say something like suppose that z is the solution. Then if I make any small variation on z, that integral should go up. And based on that very sensible idea, you can come up with some equations for solving that problem. But we're going to remain in the trenches and work with discrete version for a moment because-- and this particular problem is pretty simple. The reason to work through it in detail is because it hides a lot of the pitfalls that you might run into when you solve a more complicated problem. OK, in a discrete case, we don't have a function of x and y. What we have is a bunch of discrete values on a grid, so a finite number of unknowns-- maybe huge, maybe a million. 
But we can use calculus, ordinary calculus. Just differentiate with respect to each of them. Set the result equal to 0. So true, we're going to get a lot of equations, but our methods apply. We don't need to invent something new. So this is what we're looking for, and what we're given is again a set of gradients on a grid with measurement noise in them. And we're trying to find a value of z at every grid point that will make the error as small as possible where the error is the discrete equivalent of this thing. So let's look at that. So i and j are row and column numbers. So slight annoyance here for computer scientists and mathematicians, which is that you would like the first discrete index to correspond to x and the second discrete index to correspond to y. And as a student, I had a real hard time with that. But of course, in mathematics, we count rows down and columns across, and we write them ij in the other order and also with i reversed. OK, so this is-- what is this? We're trying to estimate the derivative in the x direction. And that's why we were varying j rather than i, right? And so this is our estimate of the derivative in the x direction. And that should be small. It should match what we observed from the image. And we also need to add in the other term. We also want to match the derivative in the y direction. And now typically, there isn't a set of values of z that will make both of those terms zero. So we kind of compromise. We just want to make them as small as possible. So we're going to minimize this. And what can we vary? Well, we have the set of zij's. OK, so we then differentiate and set the result equal to 0 for all possible values of ij. So if the image has a million pixels, there will be a million equations coming from that. Fortunately, because we picked least squares, the equations are all going to be linear.
So as much as there are disadvantages to using least squares, such as not being robust against outliers, we can solve these equations because they're just going to be linear equations. OK, so here, pay attention. Big problem-- if you differentiate with respect to zij, you get the wrong answer. Why is that? Well, ij up here, these are dummy variables. If I replace ij with alpha and beta, it's the same sum. But then if you differentiate with respect to zij, so another way of thinking about it in programming language terms-- you know, you've got an identifier collision. You're using i and j to mean one thing in here, and now you're trying to make it mean something else. So here we have a sum over all possible ij's, and now you're going to differentiate. So long story short, pick some other identifier names. And you might think, OK, that's kind of obvious. Yes, but there have been papers published that got this wrong. So it's possible. It can happen. OK, so, well, this sounds kind of messy because we have a sum over a million terms, and now we're going to differentiate. But the good thing is that, yes, we have a lot of equations, but they are all very sparse because if you think about, OK, anything here having to do with row 1,000, when you differentiate with respect to z in row 990, it's not going to have any effect. So all of those derivatives are 0 except a small number, the small number where kl matches up with an index in here, ij, or kl matches i, j plus 1, or kl matches i plus 1, j. That's it, three terms. So we're going to have a lot of equations each of which has three terms in this case. OK, so let's try that. So let's first try where we match kl is i comma j. So there's a square. So differentiating that, we get 2 times that times the derivative of the thing inside with respect to zkl, which is going to be minus 1 over epsilon. Right, so it's a square. We get twice that term. And then we have to differentiate what's inside with respect to zkl.
And we said that it matches this term. So it's going to be minus 1 over epsilon. And sorry if I'm going over this in a very pedestrian way, but you'll need to do this in a more complicated example, and it's easy to not do it right. So OK, so that's that one. But there's also this match. So we have to add to this 2 times this term. So we get twice this term and then differentiate this with respect to zkl, which is matching that. So that gives me minus 1 over epsilon again. OK, so that's-- there are two terms. Then we also have to consider where kl is i, j plus 1. In other words, k is i and l is j plus 1, or conversely, j is l minus 1. Right, so we differentiated with respect to that. We get 2 times this-- OK. So we get twice that term, and we have to differentiate what's inside with respect to zkl, which matches that one. So it's going to be plus 1 over epsilon. And we're almost done. Just one more term. This one here, now we have k comma l is i plus 1, j. Or in other words, k matches i plus 1, meaning i is k minus 1. And l is just j. So we get that one. That's going to be zkl minus zk minus 1, l, over epsilon, minus qk minus 1, l-- times plus 1 over epsilon. And all of that's supposed to be 0, right? That's the least squares method. And this is going to be true for all points in the image. So fortunately, this simplifies. So first of all, of course, I can ignore the two because if twice something is 0, then something is 0. So forget the 2. I could also cancel out one of the epsilons, but I'll keep them just for reasons that will become apparent. OK, so now I'm going to gather up the terms. Let me first gather up all the terms in p. Well, there are only two of them. So that's pretty straightforward. So I get pkl minus pk, l minus 1, divided by epsilon-- so those two terms. And let me gather up all the terms in q. So I get qkl minus qk minus 1, l, over epsilon. And then let me gather up all the terms in z. Well, there are a bunch more of those. And they are 1 over epsilon squared.
So let me write that out front. So we're going to have minus zkl plus 1 minus zk plus 1l minus zkl minus 1 minus zk minus 1l. And then zkl occurs four times. Well, let me just keep the epsilon squared. And all of that's supposed to be 0. So what are those terms? Well, this looks awfully like the x derivative of p, right? Because we take the value of p at positions, separate it in x, and we divide by epsilon. So we can think of this as p sub x. And this looks awfully like y derivative of q. And then I claim that this thing here corresponds to the Laplacian of z. Let me see why. OK, so I can draw a little diagram here and gather up all these terms. So I get 1 over epsilon squared, and I have four for kl and then minus 1 for all of the neighbors. And now z sub xx OK, so those things I'm drawing down there are variously called computational molecules or stencils or something like that, and they're a graphic way of showing how to compute an estimated derivative. Now I didn't bother for these, but I could. So I could, for example-- that's the computational molecule for the x derivative. And then here's the computational molecule for the y derivative. It's so obvious, we don't even bother with that. But when we get to higher derivatives, it's handy to draw these diagrams. So if we could convolve this with itself-- remember 603-- then you get that. And if you convolve this with itself, you get this second derivative. So just how do you convolve? You flip and shift. Anyway, you could also get this from Taylor series or any number of other methods for estimating derivatives. OK, so then if we combine these two, if we add these two, we get that diagram-- well, except for the sign. So this is actually minus the Laplacian but it occurs on the same side of the equal sign. So when we move it to the other side, it becomes plus the Laplacian. So the Laplacian is actually heavily used in machine vision, particularly pre-processing. So we should become familiar with it. 
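The "flip and shift" construction of the second-difference molecule, and the sign of the combined 5-point stencil, can be checked numerically. A small sketch using NumPy (the quadratic test surface is my own example; np.convolve performs exactly the flip-and-shift convolution described here):

```python
import numpy as np

# First-difference molecule; convolving it with itself ("flip and shift")
# yields the [1, -2, 1] second-difference molecule.
d1 = np.array([1.0, -1.0])
d2 = np.convolve(d1, d1)

def minus_laplacian(z, i, j, eps):
    """Apply the 5-point molecule from the lecture: 4 z_ij minus the four
    neighbors, over eps**2.  Note this estimates MINUS the Laplacian."""
    return (4 * z[i, j] - z[i + 1, j] - z[i - 1, j]
            - z[i, j + 1] - z[i, j - 1]) / eps**2
```

On z = x² + y², whose Laplacian is 4 everywhere, the molecule returns exactly −4, confirming the sign remark above.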
By the way, another way of writing it is that. And engineers have a different point of view on that than mathematicians. I gather mathematicians prefer the first way of writing the Laplacian, and fluid dynamics people like the other one. But whatever. It's a Laplacian. So the Laplacian is an interesting operator from many points of view. It's a derivative operator. We've already seen the brightness gradient plays an important role. First derivatives-- for some operations on images, we would like things to be rotationally invariant. In other words, if you are performing some image operation-- enhancement, edge detection-- you don't want to depend on your choice of xy-coordinate system, so that when you turn the coordinates, you would expect things to work out pretty much the same. Of course, they'll have different coordinates, but they'll be in the corresponding place. So one of the big questions is, are there derivative operators, which are very useful in edge detection, that are rotationally symmetric so that when you rotate the coordinate system, they do the same thing? Well, the Laplacian is the lowest order linear derivative combination that does that. It doesn't look like it, does it, because we're adding the second partial derivative with respect to x and the second partial derivative with respect to y. But if you go through the pain and agony of changing coordinate systems and computing the first derivatives and then computing the second derivatives and you add them up, then magically, in the new coordinate system, you get exactly the same thing. So suppose we have a coordinate system like this, and then we go to a rotated coordinate system. It turns out that zx prime-- that in that rotated coordinate system, the Laplacian is the same. I'll spare you the details. It's pretty boring. But this makes the Laplacian of a lot of interest in all sorts of image processing operations because it is rotationally symmetric.
Of course, on a discrete grid, we are going to be somewhat biased anyway. Like, does this look rotationally asymmetric? Well, not terribly. So there are actually better approximations if you're on a square grid, and we'll talk about them. And if you're not on a square grid, you can do even better. For example, if you're on a hexagonal grid, you can get a better approximation. Anyway, back to this-- so what we've done over here is we've bypassed the calculus of variations for the moment, and we somewhat painfully went into the discrete world of discrete pixels. And we found an equation that has to be true at every pixel in order for this to be a least squares solution. That is, this is a solution that minimizes that error. It's the best fit. It's as good as we can get. And then sort of looking at this formula here, we decided, oh, maybe that's the discrete version of that continuous equation. And that's exactly right because if we use the calculus of variations, we go directly to that in one step. So why didn't we do that? Well, because there's this huge learning curve. So it's one step after you do a lot of stuff, and I didn't want to do that right now. So anyway, what do we do with these equations? How do we solve them? Of course, we can-- they're linear equations. So we can use Gaussian elimination or some perhaps slightly more sophisticated algorithm, all of which take a long time for lots of equations. Right, typically, they go something like n cubed or, if you're a complexity theory person, n to the 2.76 or whatever with a very large multiplier in front. But whatever. Once n is a million or 10 million, that's a large number, and it's not likely you're actually going to be doing that. So what to do instead-- well, the great thing is that we have a sparse set of equations. So we got a million equations, but each of them only has a handful of terms in it. So they only involve-- each equation only involves five values of z. So we got-- you think of writing them out.
There's a million rows of these, but each row only has five non-zero elements. Well, it turns out we can then solve this iteratively pretty simply. And of course, it's easy to propose an iterative solution. It's hard to show that it converges. So we leave out that part, and we'll just appeal to the textbooks that say that it does converge. So what can we do here? Well, we can pull out one of these terms and pretend that we know-- we have an initial guess. So we know what these values are at the moment. And we're going to compute a new value for this one. So let's see if I got it written down or not. So I'm going to use subscript in parenthesis as a way of indicating the iteration step. So this is step n plus 1. And that's step n. OK, I almost wrote superscript parenthesis n here for the n step. But no, these are from the image. They're fixed. Once we've processed the images, that's given. The only thing that might change is this lot. And what is that? Well, that's local average. And in our case, with that computational molecule, that, looks like that. So the iterative step is very simple. We go to a particular pixel, and we get a new value by taking the average of the neighbors from the previous step. And then we add in this correction here, which is based on the image information. And what does this do? Well, it brings us closer to satisfying the Laplacian condition because this is the estimate of p sub x and this is the estimate of q sub y. and the difference between this and that is our estimate of a Laplacian. So as we go along, we adjust that pixel value using that very simple equation. So here's where the sparseness comes in. This wouldn't be very useful if we had a million terms in this. But we've only got four neighbors to consider. And you can show that this converges. We won't do that. And it's going to be very much faster than trying to solve it with Gaussian elimination, particularly since you don't need infinite precision. 
You can stop after a certain point and relax. So a couple of things to point out-- first of all, this is closely related to solving the heat equation, this type of iteration, because the heat equation is a second order PDE just like the equation we have. It's also the diffusion equation. And so there are loads of techniques available from other fields that tell us how to do this efficiently, how to do it in parallel, and whatever. So again, the idea is you step through the pixels. And at each pixel, you compute the average of the neighbors and then add in a correction to them. And that's the new value. And you just keep on doing that. Now for a start, you don't have to do this sequentially as Farnborough did the rows of his field. You can do this in any order you want. And in fact, there are advantages in terms of convergence if you do that. Also, you can parallelize this. You can do as many of these as you like in parallel as long as they don't touch each other, right? So if you are making a change to a pixel based on some neighbor, you do not want the computation that changes the neighbor at the same timestep. But that's huge because we can divide the image into, I don't know, nine sub-images, little 3 by 3 blocks. And while we're operating on this one, we're not allowed to touch these others. But that's fine. That means we can have huge parallelism, a million pixels divided by 9. Well, we probably don't have 110,000 processors. So we can do something slightly more intelligent. Anyway, methods for numerically solving these equations are well-known, and it's a very stable problem in the sense that if there's noise in your measurements, it'll still converge. It'll still work, and everything will be cool. OK, so-- OK, we have a bit of time. So what have we done? Well, what we've done right now is learned some techniques because in a way, this isn't directly useful except for the case of photometric stereo. And we now want to move on from photometric stereo.
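The iteration just described — replace each interior pixel by the average of its four neighbors plus a correction built from the image-derived p and q — can be sketched as follows. This is a hypothetical minimal implementation (Jacobi-style, with the boundary pinned at zero and backward-difference estimates of p_x + q_y); a real implementation would test for convergence rather than run a fixed number of sweeps:

```python
import numpy as np

def reconstruct_depth(p, q, eps=1.0, iters=2000):
    """Least-squares depth from gradient estimates p (~ z_x, along columns)
    and q (~ z_y, along rows): each sweep sets every interior pixel to the
    average of its four neighbors minus (eps**2 / 4) * (p_x + q_y)."""
    # backward-difference estimate of p_x + q_y at interior pixels
    f = np.zeros_like(p)
    f[1:-1, 1:-1] = ((p[1:-1, 1:-1] - p[1:-1, :-2])
                     + (q[1:-1, 1:-1] - q[:-2, 1:-1])) / eps
    z = np.zeros_like(p)
    for _ in range(iters):
        # local average of the four neighbors (Jacobi: uses the old z)
        avg = 0.25 * (np.roll(z, 1, 0) + np.roll(z, -1, 0)
                      + np.roll(z, 1, 1) + np.roll(z, -1, 1))
        z[1:-1, 1:-1] = (avg - (eps**2 / 4.0) * f)[1:-1, 1:-1]
    return z
```

With p and q computed by forward differences from a surface that is zero on the boundary, the iteration converges to that surface (the pinned boundary fixes the free additive constant).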
But the kind of tools like this discretization and avoiding the conflict of identifiers and then looking at the terms and seeing, oh, this is an approximation of the derivative-- those will reoccur. So it was good that we did this. But beyond that, this shows you if you have photometric stereo data, you can reconstruct the surface in a least squares way and get a reasonable solution that matches the experimental data. OK, now we're going to go back to the somewhat more challenging problem of single image reconstructions. And we solved one case already, which is if the surface had these very special properties. Right, so let's just go over that a little bit. So this was where we had-- brightness was a function of that, which was incident. And in this case, the reflectance map was very simple. We had these parallel lines. They won't be equally spaced because of the square root, but they're parallel lines. And each of them has 1 plus ps p plus qs q equals a constant. So the isophotes in the reflectance map are these parallel straight lines. And that's actually what allowed us to solve the problem because then we said, well, suppose I rotate to a somewhat more useful coordinate system. In that coordinate system, I can just determine the slope in that special direction. So if I read off a brightness value like this, in the original coordinate system, it's a linear relationship between p and q. But in the new coordinate system, it's conveniently giving me an exact value of p-prime. And it tells me nothing about q prime. So I've sort of maximized the information in one direction and made it worse in another direction. And I'm just going to write this down. I'm not going to do this carefully, but it turns out that we will need to deal with these coordinate transformations at some point. So-- oops. So-- oh, and what is this angle? Let's call it theta s.
Since that ray, that axis p prime, is perpendicular to all of these lines-- right, I'm going to have that p comma q dot ps qs. It's going to allow me to determine that angle, which is given by this triangle-- qs-- so I can-- well, might as well write it down. So whatever the light source position is, I can figure out what that angle is. So actually, the light source is somewhere along this line. Let's say it's up here. OK, now what I want to do is rotate the image ahead of time so as to make this unnecessary later. So what's the relationship? Well, I'm sure you know this. And we can call it theta s or just theta for simplicity. OK, but what we need is a relationship between p and q and p prime and q prime. So we need dz dx prime is dz dx dx dx prime plus dz dy dy dx prime. And of course, this is p prime that I want to go for. This is p. This is q. And so I'm left with that. Well, annoyingly, that's going the wrong way. So I actually need the inverse transform of this. So what's the inverse of rotating through theta? Rotating through minus theta, right? So x is x prime cos theta minus y prime sine theta, and y is x prime sine theta plus y prime cos theta. OK, so then this is dx dx prime, which is cosine theta. And this is dy dx prime. So that's sine theta. So actually, I get p prime is p cosine theta plus q sine theta. And similarly, I'm going to get q prime is minus p sine theta plus q cosine theta. So this is nice. And you might say, well, I could have guessed that, right, because it has the same form. Well, think again because when you get to the second derivative, this doesn't work. You can't just guess the answer. And that may be relevant because we will be talking about-- we're talking about the Laplacian. So right there, if you were trying to prove that theorem I said about rotational symmetry, you'd need to get second derivatives, and then this wouldn't work. Anyway, so I'm going to do that.
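The transformation just derived — p' = p cos θ + q sin θ, q' = −p sin θ + q cos θ — can be sanity-checked numerically on a plane z = a x + b y (a small check of my own, not part of the lecture's derivation):

```python
import math

def rotate_gradient(p, q, theta):
    """Gradient components (p', q') in a coordinate system rotated by theta."""
    return (p * math.cos(theta) + q * math.sin(theta),
            -p * math.sin(theta) + q * math.cos(theta))
```

Expressing the plane in the rotated coordinates and taking finite differences along x' and y' reproduces these formulas.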
I'm going to rotate these coordinate systems, and then I'm going to end up with something like-- which relates the image measurements to the slope. And the details of the equation aren't so important. What's important is that I can be at a particular point in the image. I read off the brightness, and I compute this quantity. The other things are all known. I know where the light source is. And I've got the slope, which is amazing. I have the slope of the surface in a particular direction, and that's enough. We can now reconstruct the surface. But this did require that we rotate the coordinate system, that we have a particular direction. And that's going to be true in the general case. So this is only for Hapke, where we have nice, straight line isophotes in the reflectance map. But we'll generalize that. And OK, so what do we do with that? Well, we went over this a little bit last time because we can just integrate out z of x is-- right, so we have the slope. We just integrate it out. In terms of discrete implementation, we're here. We go a small distance delta x-- like, I don't know, a pixel. We know the slope. So we know how much z is changing. Right, so we have a new value of z. And then we look at the image and say, what's the brightness here? And using that formula, we find a new value for p prime, and we continue. We take another little step delta x, and we get there and so on. And so we can build up a shape in the discrete world. And we can go the other direction as well. We can go minus delta x and build up a profile this way. So that's kind of amazing that you can do that. Well, that's a single profile. But now in the image, we have the y direction to deal with as well. But we can do this for any y, right? But in each case, we're confined to running along a row. There's no interaction between the rows. We just treat it as a 1D problem along each row. We read off the brightness at each pixel.
We convert it to the slope using that formula and then use that slope to see where we go. So there are couple of things about this that are interesting. So one is that if I add a constant to z, does anything change? So in fact, let's go back to this one up here, this overdetermined problem which has now sort of disappeared. If you add a constant to z, does the Laplacian of z change? No, right? So that's interesting because that means that, OK, you can recover z. But actually, the absolute height of z you don't know. And that's reasonable because if brightness depends on surface orientation, then moving the surface up and down shouldn't have any effect. And in particular, in the case of orthographic projection, it doesn't change in any way. It's not even magnifying or minifying or something. And so similarly here, the shading information tells us nothing directly about depth, only about relative depth. You know, what's the slope of the surface? And so here, to actually get a reconstruction, I need an initial value for z. And, well, that's a problem. Now if it was just-- like in the case of the Laplacian solution, if it's just an overall change in height, that would be easy to deal with. That's just one unknown. But here, we potentially have a different initial value for every row that we're integrating along. So you can imagine that we're computing these solutions, and we have an initial value, say, here. And we integrate it out. But we can pick those independently, or somebody could go there and measure it. But that defeats the purpose. We're trying to find the surface shape of the moon before anyone goes there. And so having someone go there and survey it kind of defeats the purpose, although it does mean that you only have to survey one curve, and then everything else sprouts off from that. And we already talked about various heuristics to try and build something even though we don't know that. 
So we don't need to have these initial points along the y-axis, though. We could have initial points along any curve. So that means what we actually need is an initial curve. Doesn't really matter what the curve is as long as it's not parallel to the x-prime axis. OK, and then now, we can map this back into our original unrotated world. And these profiles now will run at an angle. And we have some sort of curve here. Now let me introduce coordinates. Let's say the initial curve is some function of eta. So the initial curve is x of eta, y of eta, z of eta. Right, so we'll assume that's given. And then we go away from that using some other parameter, [INAUDIBLE]. And that's our x prime in this case, but we want to generalize that. So this is going to be very similar to the general method. And so we take a step delta x along this curve, which would be a step in x prime. And because of the rotation-- so this is the scheme we're following. So we're exploring the surface by moving along these curves, and from step to step, this is what we do. We have delta x and delta y change in a way that's at an angle, theta s, in this original xy coordinate system. I've gone back to that system because in general, we won't be able to use this trick. It's only for Hapke surfaces that it works. And so you can see the numerical calculation is very straightforward. We pick up the image brightness. We plug it into this formula, and we get a step in the z direction. And so we just continue doing that and explore that curve. And then we do the same for the next curve, and they can be computed independently. So you can do them in parallel if you want. Just one little note-- you can change the speed that you explore the solution in because if this tells me to take a step delta x, delta y, delta z, maybe I'll take 2 delta x, 2 delta y, 2 delta z or a half. And how would I decide? Well, suppose that you only move 1/100 of a pixel. That's probably a bad choice for increment.
Or suppose you move 10 pixels. That's a bad choice for increment. You want to reduce it. And the convenient thing is we can easily change the speed. We just multiply all three of them by the same constant. And so if you want simple equations, we can just multiply by that constant, and you get a somewhat simpler looking expression. Right, so we're moving in the x direction proportional to qs. We're moving in the y direction proportional to ps. And those are fixed. We're just driving along. And then in the z direction we have this simple formula. And so what we'll do next time is generalize this to arbitrary reflectance map, not just Hapke-type surfaces. And please do have a look online for pictures of images taken by scanning electron microscope. They're really neat, and they provide nice examples of shading and our ability to interpret shading. So--
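The marching scheme described above — step a small amount in the characteristic direction, read the brightness, convert it to a slope, accumulate z — can be sketched generically. Everything below is a hypothetical skeleton: brightness stands in for the image lookup, slope_from_brightness stands in for the (rotated) Hapke formula, and theta_s sets the fixed characteristic direction:

```python
import math

def integrate_profile(brightness, slope_from_brightness, x0, y0, z0,
                      theta_s, step, n_steps):
    """March along one characteristic curve: at each point, sample the
    image brightness, convert it to the surface slope in the step
    direction, and update (x, y, z) accordingly."""
    x, y, z = x0, y0, z0
    path = [(x, y, z)]
    dx, dy = step * math.cos(theta_s), step * math.sin(theta_s)
    for _ in range(n_steps):
        m = slope_from_brightness(brightness(x, y))  # slope dz/ds here
        x, y, z = x + dx, y + dy, z + m * step
        path.append((x, y, z))
    return path
```

Scaling step by a constant changes the "speed" at which the curve is explored without changing the curve itself, matching the note above about multiplying all three increments by the same constant.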
MIT_6801_Machine_Vision_Fall_2020
Lecture_3_Time_to_Contact_Focus_of_Expansion_Direct_Motion_Vision_Methods_Noise_Gain.txt
BERTHOLD HORN: We'll start off today by talking a little bit about noise gain. In other words, the relationship between errors in measurement and errors in estimation of quantities you're interested in about the environment. And I just happen to have this example on my computer. It's not vision-related but illustrates some of the points. So what is this? This is an indoor equivalent of GPS where instead of using the timing of signals from satellites, we use the timing of signals from Wi-Fi access points. And it's been in the works for several years, but it doesn't really exist yet. For example, at this point, you need Android 9. And the only phone that runs Android 9 is Pixel. And most of the Wi-Fi access points don't-- well, all of the access points don't support it. I've been driving around looking for them. I've registered thousands, and I found one. Anyway, the idea is that in the future, we will be able to measure distances to access points. And of course, you can imagine that the accuracy is somewhat limited, because electromagnetic radiation travels at a nanosecond per foot. So you have to really measure these round-trip times from phone to access point and back with very high precision. But suppose you do. So you have a bunch of access points. I mean, we've got four right here in this room. And then, your job is to determine the accuracy of your location. And just as in GPS, you may be able to measure the distance to the satellite to, let's say, 10 meters. That doesn't mean that the accuracy you can expect of your x, y, z coordinates-- latitude, longitude, and altitude-- is 10 meters. In fact, it's typically a lot worse. And in GPS terminology, that's called dilution of precision. And in GPS, it's different for horizontal than for vertical. That's another interesting point, that you can determine your horizontal position with higher accuracy than your vertical position.
So like this morning, my phone's GPS said I was at minus 36 meters. And I know Boston is at sea level and the water level is rising, but that's probably not accurate. So the point of that is that when we do this noise gain analysis of some machine vision process, it's probably going to be different in different directions. So it's not just a single number that says your accuracy is 1 meter. So back to this one here. So the green points are four Wi-Fi access points-- nicely, symmetrically placed. The red points are possible places where your cell phone can be. And the circles, or ellipses, are areas of-- or contours of constant error. So in other words, if you move out to that curve, you will be in error with respect to the distances you can measure. And so turning that around, it tells you the accuracy you can expect. And it looks very nice in the middle there. They're small circles, meaning, A, that you will be able to determine your location pretty accurately and B, that there isn't a difference in different directions. It's not like you can determine x better than y. But then, when you go outside the convex hull of the responders, you'll see that things become elongated. We have these ellipses, and they get larger down here. What does that mean? Well, it's that you won't be able to determine your position very well in this direction. But you can still determine it pretty well in that direction. And why is that? Well, if you think about the distances to the responders and how they change as you move around, if you move in this direction, you're moving at right angles, pretty much, to those vectors. And so you're not changing the length of those vectors by much. And so correspondingly, you won't see a big change in the signal. And so correspondingly, you won't be able to accurately determine the position. Whereas if you move radially, if I move in this direction, I directly have an impact on the distance to that one and the one up there.
And with this one, I'm at an angle. But it's like 45 degrees, so that's square root of 2-- 1 over square root of 2, so that's 70%. Yeah, right. It's the latter. So the idea is that the red is a possible position for my sensing equipment and the circle around the red is how far I have to go before the error is a certain threshold value. So in other words, if I'm here and I measure these distances and there is no measurement error, then the sum of squares of errors will be 0. If I move a little bit away, my distance from the responders is wrong. And I can-- in this case, I have four of those numbers. And I look at how big they are, and I add up the sum of squares as an overall measure of error. So when I'm drawing the circle, that's the locus of all the points that have the same size error. And out here, you can see that I have to go further away before I see that error. And conversely, I'm less sensitive to the error and I will not be able to determine the position as accurately. So that one's kind of pretty clean. But let's suppose that we have three transponders. Now, you can see it gets a bit messier and slightly surprising results. One is that out here, these things that look like ellipses become non-elliptical, because they are ellipses only in the limit of very small displacements. But for larger displacements, the curve can have any shape. But in most cases, we'll be focusing on the infinitesimal case-- make life simple. So what's interesting about this? Well, one of the things is that it's pretty accurate down here, which is away from the responders and away from the centroid. You'd think, well, maybe it should be pretty good at the centroid, or at the intersection of the bisectors of the angles, or the intersection of the perpendiculars dropped onto the opposite sides of the triangle-- something like that. But that's not the case. And so this is interesting, because it means that you may be able to do things away from where the responders are.
So the responders could be like down a hallway, and it'll still give you good accuracy away from that. At the same time, we can show that if you put them in a line, that's a really bad idea. And of course, that's what they are here-- everywhere. They're just-- anyway. And again, the idea here is that if I'm over here and I move a little bit to the side, I'm not changing the distance to those three responders much, because I'm moving pretty much at right angles to those three vectors. And therefore, I won't really notice that I've moved. And therefore, conversely, I can't determine my position in that direction very well, either. So that's with three. And let's go one more. So that's with two responders. And you can see that there are some areas where it's working just fine over here, but in between them, not so good. Why? Well, because if I move horizontally, I'm not really changing the distance very much. It would be second order-- it would be quadratic. It would be changing as x squared. And so for small x, it's a very small change. And even worse out here. And then, you start to see something. Maybe you can see a locus of places where it's working pretty well. And their property is that from each of those positions where it's working well, the two responders appear to be at right angles to each other. And that makes sense, because basically you're saying, OK, I have a constraint in a direction and a direction at right angles. If I rotate the coordinate system, I can make that x and y. I have a constraint in x, and I have a constraint in y. That's like the ideal condition. If the two directions are very similar, then I can move at right angles to the average and not change things very much. So it turns out that points where the responders appear 90 degrees apart are particularly good. And what is the locus of all such points?
Geometry, theorems about circles-- so from a point on the periphery of a circle, the endpoints of a diameter appear a right angle apart. That's not quite the way to state the theorem, but I think you know what I mean. So if I draw a circle with the line connecting the two responders as diameter, points on that circle will have that property. And so not surprisingly, those are the points where I'm getting good accuracy, even in this simple case. OK, enough of that. Let's go back to-- So the idea really is very simple. It's that we have some sort of forward transformation that takes us from a quantity in the environment we want to measure-- I don't know, distance, velocity, whatever-- to something that we can observe in our instrument, the camera, say. And so the forward problem is that we-- let's just take the simple case of a scalar. There's an x. And we don't observe x directly. We can measure f of x. And of course, in vision, our problem is to go the other way. If I measure f of x, what is x? And if f is invertible-- a nice, smooth, monotonic function-- we can do that. But then the second question is, OK, my measurements aren't perfect, so I don't really know f of x perfectly well. What does that mean about x? And OK, that means x won't be accurate, which isn't catastrophic, but how much bigger is the error? And of course, the answer is very simple. So here's our function. And let's say we have a certain x. Then the forward direction is we take the x, we go up to the graph, we read off a certain value, y. And what we're doing is we're getting y from our camera or whatever, and we go in the opposite direction. So we invert that function. Of course, there'll be problems if it's multi-valued. Then there won't be a unique answer. But for the moment, let's assume that we can invert it. OK, so then the next question is, what if there is some error? Well, in the forward direction, it's pretty straightforward.
Suppose that I take a second value here, then there will be some error up here-- delta y. And what's the relationship between delta x and delta y? Calculus, derivatives-- OK, you know what the answer is. It's delta y over delta x is basically the derivative of f of x, or at least in the limit as we make those very small. So the relationship between the noise in the quantity we're interested in and the noise in the measurement is just the derivative of that transfer function. And so conversely, we're trying to go the other way around. We'll just turn this on its head. And now we see that the noise in our result is related to the noise in the measurement by this 1 over f prime. So clearly, f prime equals 0 is bad. And that's not too surprising. If this continues and becomes flat, that means that the thing we're measuring is not responding to the quantity we're interested in. So no surprise we can't recover it. But also, if f prime of x is small, that's not so good. Because if we're up here, and we make some change in x, that's going to be an almost imperceptibly small change in y. And conversely, if there's any noise in the measurement, I don't really know where I am. Because suppose the noise in the measurement is that, well, that means we have an uncertainty of that size in the quantity I'm trying to estimate. And that's it. It's just-- this is the noise gain. In a lot of situations, we don't worry about it too much. We have to worry about it in machine vision, as I mentioned, because of the noise in the measurements. OK, so that's the real simple case. We've just got a scalar quantity. Let's extend this a little bit. We're recovering a vector. So let's suppose-- so this is the forward direction. There's something we're trying to measure, like the location of a robot manipulator arm endpoint in space-- x, y, z. And we're using a camera or a couple of cameras to image it.
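As a small numerical check of this 1-over-f-prime rule, here is a sketch in Python. The transfer function atan is just a made-up example of a curve that flattens out for large x; nothing here is specific to any particular sensor.

```python
import math

def noise_gain(f, x, h=1e-6):
    """Error amplification 1/f'(x), with f'(x) estimated by a central difference."""
    fprime = (f(x + h) - f(x - h)) / (2 * h)
    return 1.0 / fprime

# A made-up transfer function that flattens out for large x
f = lambda x: math.atan(x)

g_steep = noise_gain(f, 0.0)   # f'(0) = 1, so measurement noise passes through unchanged
g_flat  = noise_gain(f, 10.0)  # f'(10) = 1/101, so measurement noise is amplified ~101x
```

On the flat part of the curve, the same measurement noise produces roughly a hundred times the uncertainty in the recovered quantity, which is exactly the 1 over f prime effect described above.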
And so we're measuring something in the image that we'll call x, which is a transformation of the quantity we want. Now, in that case, it won't often be that linear, but let's suppose we'd have a simple case where there's a linear transformation. Well, then of course, we just do that. If we can invert that matrix, that's our answer. We've estimated where the endpoint of the robot manipulator is. But of course, we are also interested in, how good is that answer? And so we'd want to know if I change x a little bit, how much does b change? And so a crude way of talking about gain would be just to take the magnitude of the change in the result and divide it by the magnitude of the change in the measurement. But that's not-- it doesn't take into account anisotropy. Like we saw in those diagrams, the error may be very low if you go in a certain direction. But it might be quite large in another direction. So the answer is a little bit more nuanced. We should be a little bit more careful. So let's say a little bit more about this. How do we solve linear equations? We use Gaussian elimination. And if you do that, you come up with some formula, which includes the inverse of the determinant. And so the conclusion is that determinant equals 0 is really bad, because then you can't do this. And actually, the magnitude of the determinant being small is also not that good. Why? Well, because you're going to take 1 over some small number, gives you a large number. And then multiply your experimental measurements by that large number. And so any tiny, little deviation in that is going to be magnified by the inverse of the determinant. And let's just go back to an example that we did do, which was 2D. So our matrix in this case is 2 by 2. So as a start, just to refresh your memory-- and I did this last time, but I think I got a lot of blank looks, and so I'll do it again. So what's the inverse of this 2 by 2 matrix? It's that. And how do I know that, or how can I verify that? 
Well, you just need to multiply. And what do you get? ad minus bc, and then minus ab plus ab. And here we get cd minus cd. And then finally, we get minus bc plus ad. So these, of course, are 0. And these are the same as that. And so for 2 by 2 matrices, we can explicitly get an inverse. And of course, we've got a lot of interest in that quantity. Because if that quantity is small, then our calculation is subject to amplification of error. And this came up because we were looking at computing image motion. So we had-- at two pixels-- we had our constraint equation, which looked like that. So we had two of these. And now, we're solving for the motion u and v. And so of course, we just get-- just using the formula up here and so on. So we just plug that in. So there's an example where we can apply this idea of noise gain directly. And we know that if this quantity is small, we're going to be amplifying noise a lot. And we already went through the argument that means that the brightness gradients at those two pixels are similar. They are oriented in the same or almost the same direction. And so that means they're not providing very different information. It's almost as if you'd only made one measurement. So no wonder the answer's flaky. And we can go into the velocity space diagram. Each of these constraints-- this is a straight line. Why? Well, because it's a linear function of u and v equal to 0. So that's going to give us a straight line. This is going to give us a straight line. We're basically looking for a point in the plane that's on both lines. So we're looking for the intersection of the two lines. If the two lines are at right angles, that's a very well-defined point. If the two lines are almost parallel, then you can imagine that any small shift can move the point of intersection a lot. So this is the case that's not so good. And if I move this one a little bit, say I'll move it over here, there's a huge change in the intersection point.
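Here is a small sketch of that two-pixel calculation, with made-up gradient values, showing how a nearly vanishing determinant amplifies measurement noise:

```python
import numpy as np

def flow_from_two_pixels(grads, ets):
    """Solve Ex*u + Ey*v + Et = 0 at two pixels for the image motion (u, v)."""
    A = np.array(grads, float)                   # rows: (Ex, Ey) at each pixel
    b = -np.array(ets, float)
    det = A[0, 0] * A[1, 1] - A[0, 1] * A[1, 0]  # ad - bc from the 2x2 inverse
    return np.linalg.solve(A, b), det

# Gradients at right angles: well conditioned, det = 1
uv_good, det_good = flow_from_two_pixels([(1, 0), (0, 1)], [-2, -3])

# Nearly parallel gradients: det = 0.01, so errors are amplified by ~1/det
uv_clean, det_bad = flow_from_two_pixels([(1, 0), (1, 0.01)], [-2, -2.03])
uv_noisy, _       = flow_from_two_pixels([(1, 0), (1, 0.01)], [-2, -2.031])
# a 0.001 perturbation in Et moves the v estimate by 0.1
```

In velocity-space terms, the second case is the two almost-parallel constraint lines: the true answer (u, v) = (2, 3) is still recoverable with perfect data, but a tiny shift of one line slides the intersection a long way.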
So that corresponds to the case where the two gradients are almost parallel, and that makes this quantity small. Notice that not all hope is lost. It's true that the component in this direction-- we don't really know very well. But we've got very good constraint in that direction. So that's another lesson, which is we may have a situation where the noise gain is high, but it may not be equally high in all directions, as we saw in the diagrams I showed you. And it's good to know which component can you trust. If I'm going to have my robot move over and pick up a part, that's a useful thing to know. OK. So let's review a little bit, and I want to go on to something called time to contact. We're kind of cutting across the material in a diagonal way. I mean, it would be nice to present all of this stuff first and all of that stuff second. But you'd probably fall asleep. I want to motivate you by showing you that if you put these pieces together, you can get some pretty powerful results. So let's go there. Let's see if we can, first of all, review what we've done so far. So we have this idea-- the constant brightness assumption. And we have that image solid, which was a function of x, y, and t. And we followed some curve through here, which is perhaps the image of some particular thing in the environment that moves or the camera moves. But we're assuming that as it moves, its physical properties don't change, so it's still going to be imaged with the same brightness. So along this curve, this holds true-- the total derivative. And from that, we got our constraint equation. You're going to get probably pretty tired of seeing that one. And so this is the constant brightness assumption. And from it, by just chain rule, we get this constraint, which is also called the brightness change constraint equation. And we use this in many different ways. I mean, it's fundamental to what's happening in an image when there's motion. 
And then, we looked in particular at the optical mouse problem, which is a simplified version of things we'll be looking at later. It's simplified in the sense that we're assuming that the optical mouse is working on a flat surface and that the whole image is moving as one. In other words, u and v is the same for the whole image. And that's obviously a very nice and simple case. And we said that one way of dealing with that is to turn it into a least squares problem. So we're going to add up over the whole image in the x and y direction, we're going to integrate-- or in the discrete case, take sums over the rows and columns-- of this quantity, which supposedly is 0. So we should get 0. Why might it not be 0? Well, if you plug in the wrong values for the velocity, u and v, you won't get 0. And so one way of trying to find the right velocity is to find the minimum of that integral. Now, in practice, your measurements-- Ex, Ey, and Et-- will be corrupted by noise. And so you'll never actually get this integral to be 0. So the answer isn't we're going to find the place where it's 0. The answer is, we're going to make it as small as possible. And that's our estimate of the correct value for u and v. And we could justify that by some hairy, probabilistic statistical argument, but I think it probably is not beneficial to go there. We'll just-- it seems like an intuitively right thing to do. OK, so that's what we're going to minimize. And we only have two parameters, so clearly this is a calculus problem. We just take the derivative and set the result equal to 0. So we're assuming that this is varying smoothly with u and v, as it obviously is. Yeah? Well, OK, so let's put it this way. So suppose that I have measured Ex, Ey, Et, and now you tell me u is 1 and v is 2. And I can plug it in to the equation, and I get some large number. I'm likely to say, I don't think you're right. And then you say, well, u is 1 and v is 3. And I plug it into this equation, I get a smaller number.
Which one would you trust? So it's-- ideally, this should be very small, or even 0. And-- AUDIENCE: And why is that the case? BERTHOLD HORN: Sorry? AUDIENCE: Why do you want a smaller number for that? BERTHOLD HORN: Oh, well, because if there was no measurement error-- if there was no error either in the measurement or in your knowledge of u and v, then it would be 0 at every pixel, and so then integrating over the whole image. OK. All right, and I'm not going to do that, because that's too close to what you did in the homework problem. So we'll stop there. OK, what are we going to do? Well, a very simple case is where the velocity is the same everywhere-- the optical flow case. And we did that. The other simple case is where u and v are not constant, but they vary in a very predictable way in the image. And we saw that last time when we were talking about the focus of expansion. And so let's, again, review and go all the way back to perspective projection. Right, so we had a relationship between world coordinates and image coordinates. So the capital letters are coordinates in the world, and the small ones are the corresponding coordinates in the image. So that was perspective projection, again, in a camera-centric coordinate system. And then we said, well, now suppose things are moving-- big X, big Y, big Z are changing. And how are little x and little y changing? Well, we just differentiate with respect to time. And we get 1 over f times dx/dt is 1 over Z times dX/dt. And then, unfortunately, we have this other term as well. Or written in a slightly nicer form-- right? Because dx/dt is the velocity in the x direction. And that's what we've been using the symbol u for. And similarly in 3D, we'll use the big U to stand for the velocity in the x direction in the world. And similarly, so we have that. And then, we looked at the situation where u is 0 and v is 0, and we called that the focus of expansion for that particular motion.
And so if we plug in little u is 0, we get a relationship here that is-- oh, where's my x0? So let's do it. So we get 0 is 1 over Z times U minus-- and now, the big X over big Z we know is x over f. And this is x0. This is where the velocity is 0. And similarly for the y direction. So we get x0 is f times U over W. So we have a relationship between the velocities, the distances, and where the focus of expansion is. And then, we talked about that quantity that occurs there, which is W over Z, and what is that? Well, it's easier maybe if we turn it on its head, as we did last time. And of course, big W is the component of motion in the z direction. So it's dZ/dt here. So this is a distance over a speed. And so we last time said the units would be meters over meters per second, or seconds. So this is actually the time to contact-- how long it's going to take before we crash into that object if nothing changes. And that's obviously a useful quantity to try and find. And so we'll try and find it from a sequence of image measurements. Now, actually, we're going to give this a name, c. And c is actually 1 over the time to contact. And the reason for that is mostly to make the algebra simpler. After we find c, of course, we can just find its inverse. But it has the advantage that if the motion is very slight, then the time to contact can be huge, whereas c just becomes close to 0. Yeah, sorry. No, I hesitated, because it didn't look right. But thank you. OK. OK, so let's start real simple. Let's suppose that there's only motion in the z direction. So I'm moving towards the wall. And of course, the image is expanding as I approach the wall. And what is the focus of expansion? Well, if I'm moving straight at the wall, the focus of expansion will be right down the barrel. It'll be at 0, 0. So let's take that special case. And then, according to the formulas for x0 and y0-- and if I draw the diagram for the motion field, it's going to look like that.
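In code, those two relationships-- the focus of expansion at f U/W, f V/W, and the time to contact Z/W-- are one-liners. A small sketch with made-up velocities, using the lecture's conventions (W is the closing speed along the optical axis, f the focal length):

```python
def focus_of_expansion(f, U, V, W):
    """Image point where the motion field vanishes: x0 = f*U/W, y0 = f*V/W."""
    return f * U / W, f * V / W

def time_to_contact(Z, W):
    """Distance over closing speed: seconds until contact if nothing changes."""
    return Z / W

# Camera 10 m from a wall, approaching at 2 m/s, drifting sideways at 0.5 m/s
x0, y0 = focus_of_expansion(f=1.0, U=0.5, V=0.0, W=2.0)
ttc = time_to_contact(Z=10.0, W=2.0)
c = 1.0 / ttc   # c is the inverse time to contact, the quantity estimated below
```

With no sideways drift (U = V = 0), the focus of expansion sits at (0, 0), right down the barrel, which is the special case treated next.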
And so a couple of things. One is, well, if I can measure those vectors, then I'm done. Why do I need to do all this other stuff? Well, the thing is, we don't know those vectors. All we have are images. Images are brightness patterns. Images are brightness as a function of x and y. There are no vectors in there. So one approach might be we'll find the vectors, and then we can intersect them to find where that point is. So for us, this vector field is a useful tool to visualize what's going on. But the actual experimental data is there's an image, and then we take another image. And the other image is either expanded or shrunk. And our job is to solve this problem. There, I had the arrows pointing inwards, which means that's actually a focus of compression, rather than a focus of expansion. But of course, that's just depending on the direction of motion. If I'm standing here and the wall is receding with a positive velocity, then it's going to compress in the image, and I get this. On the other hand-- and there's no danger. The time to contact is negative. I left the surface a while ago. But the case we're going to be more interested in is where the velocity of the wall relative to me is negative. And then this diagram is reversed, and there is a focus of expansion. OK. So what can we do with this? So in this case, we go back to the same old equation. And now, in this particular case, we've made the capital U and the capital V 0. So all that's left is that second term. And so u and v take this form. OK, so stick that in there. We get, let's see-- we get c times x Ex plus y Ey, plus Et is 0. So I've just combined these two terms. And so if I wanted to-- Just as before, where we could measure one-dimensional motion from a single pixel, similarly, here we can measure the inverse of the time to contact just from brightness derivatives at one point in the image.
Of course, it's not likely to be very good, because all of these quantities are subject to noise. And in fact, of course, estimating derivatives makes things worse. So we've got two numbers that are subject to noise. We subtract them. If they're similar in magnitude, about all you've got left is the noise. So this is not going to be a very accurate method. Before we go on, though, let's look at this term over here. And I'm going to call this the radial gradient. And I put it in quotation marks, because it's not the greatest notation. But we use it so much, we need some notation. And we can think of it as that dot product. We've got two vectors. And what are those? Well, this vector is the brightness gradient. And we keep on seeing the brightness gradient. And it's just pointing in the direction of the most rapid change of brightness in the image. One way to think about images, E of x and y, is as a topographic map. So if you think of translating brightness into height, then you can visualize the three-dimensional thing. And the image is some surface in that three-dimensional picture. And the brightness gradient is just the gradient of the surface. It's a vector in the direction of steepest ascent. If I want to get up this mountain as fast as possible, I go up the gradient. And if I want to get down as fast as possible, I go in the opposite direction. So that's Ex, Ey. It's the gradient of the brightness. And then what's x and y? Well, that's the radial vector. It's like in a polar coordinate system. I could imagine erecting a polar coordinate system in the image with the origin at the center, and that's that vector. And so what this is doing is it's taking the dot product of those two. So for example, if they're at right angles, then this is going to be 0. And if they're parallel, then it will be as large as possible. Now, we could-- let's see, which way do I want to go? Let's try this. We could normalize this, and turn this one into a unit vector.
So first of all, you recognize that by dividing by the square root of x squared plus y squared, I turn it into a unit vector. Because now, if you take the dot product with itself, you get 1. OK, that clear? So there's a unit vector. Unit vectors are useful for indicating direction. So this doesn't have-- well, the magnitude's 1, so the magnitude doesn't tell me anything useful. But it's a way of talking about different directions. And then, taking the dot product with another vector does what? Well, it gives you the component of that vector in that direction. So here's a vector. And I'm interested in how much of it is going in this direction. I take the dot product, and I get that component. So what this is really computing is how much of the gradient is in this radial direction. And that's why we use that term, radial gradient. And the reason I am putting it in quotation marks is because it isn't quite right, because there's this factor that I've left out-- that it's not just the radial gradient, but it's multiplied by the radius. But it's basically measuring how much of the brightness variation is in the outward direction from the center of the image. OK. And now I'm ready to solve the problem. Because I have this equation here, and I've solved it for a single point, except I don't really trust that. So what I'm going to do is, as before, use least squares. So we're going to minimize-- and this is over the whole image. And again, keep in mind that we're dealing with very simple cases where the motion in the image is not some arbitrary vector field, but it's defined by a small number of parameters. Yeah? This here? Oh, so that comes out of the equation up there. So there's an equation: 1 over f times u is 1 over Z times big U minus W over Z times x over f. And first of all, big U is now assumed to be 0. We're assuming that there's no motion in the x or y direction, so those terms drop out. And the other term-- we've defined c to be W over Z.
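The radial gradient G = x Ex + y Ey is easy to compute over a whole image. A sketch, assuming NumPy's finite-difference gradient as the derivative estimator and the image center as the origin:

```python
import numpy as np

def radial_gradient(E):
    """G = x*Ex + y*Ey, with (x, y) measured from the center of the image."""
    h, w = E.shape
    Ey, Ex = np.gradient(E)          # np.gradient returns d/drow (Ey), then d/dcol (Ex)
    y, x = np.mgrid[0:h, 0:w]
    x = x - (w - 1) / 2.0            # coordinates relative to the image center
    y = y - (h - 1) / 2.0
    return x * Ex + y * Ey

# For a radially symmetric pattern E = x^2 + y^2, the gradient points outward,
# so G = 2*(x^2 + y^2): zero at the center, growing with radius.
h = w = 9
y, x = np.mgrid[0:h, 0:w]
x = x - (w - 1) / 2.0
y = y - (h - 1) / 2.0
G = radial_gradient(x**2 + y**2)
```

The test pattern makes the dot-product interpretation concrete: where the brightness gradient is purely radial, G is as large as the radius and gradient allow; rotate the pattern so the gradient is tangential and G would vanish.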
And then, the big X over Z is little x over f, because of the perspective projection equation. OK, so yeah, there's probably more than one step going from up there to down there. But it's applying the perspective projection equation in the special case that there's only motion along the optical axis. There's no motion in other directions. OK, so again, what's going on here is if we had the correct value of c and there was no measurement noise, this would be 0 at every pixel. And so if we add it up over all the pixels, it should still be 0. If we plug in the wrong value of c, it ought to be non-zero-- grow with our error in c. And so finding the c that makes this as small as possible is our way of estimating what it is. And yeah, it can be justified in terms of-- if you assume the noise is Gaussian and do all that statistical, probabilistic stuff. But I don't want to do that. I think it's intuitive that if this is supposed to be 0 when I have the correct information, then making it as small as possible in the presence of noise is a sensible thing to do. Now, I just-- calculus. There's only one unknown in there, only one knob I can tweak. So I'm just going to see where that has a 0 derivative. And so well, what do I get? I get 2 times-- there's a square in there. So I'm going to get twice whatever is under the square. And then I have to differentiate that term, as well. So multiplied by-- and now, the derivative of this with respect to c is, of course, just that part. So multiply by-- and if it's equal to 0, I don't really care about the 2, so get rid of that. And now, I can split this up into two integrals. And I'm going to leave out the dx dy pretty soon, because it's implied if I have the double integral over the image. OK, so I can solve for c: it's minus the integral of G Et divided by the integral of G squared, where G is this radial gradient, which will occur so often that I get tired of writing it out.
So there's a way of estimating time to contact, because c is the inverse of the time to contact. And it's kind of interesting, because there's no high-level stuff in here. We're not detecting edges, or tracking points, or doing anything sophisticated-- recognizing poodles behind palm trees. We're just doing this brute force number crunching. And it's very effective. We've got a million or 10 million points, each of which is lousy. But we combine them this way, and the result is good to one part in 1,000. So when you implement this, what do you actually do? Well, you have to take the image and estimate the brightness gradient, which is trivial. We just take neighboring pixels in the x direction, subtract them. There's our Ex. And then take neighboring pixels in the y direction, subtract them. That's Ey. And we need Et, so we take two frames, one after the other, and we take corresponding pixels and subtract them. That's our Et. Or there may be some scaling, but that's the basic idea. From those, we then compute this radial gradient, which is easy to do. x and y are the position in the image relative to the center of the image. We'll talk about that some more later. And then, we just compute these two sums. So these double integrals are just sums over all of the pixels. We just run through all of the pixels adding up these things. And so really, well, if you want to do it sequentially, you need two accumulators. You set them to 0, then you run through the image row by row, pixel by pixel. And at each pixel, you compute G-- x Ex plus y Ey. And then for one accumulator, you multiply G by Et and add that to that accumulator. For the other accumulator, you take G squared, and you dump it into that one. And then, when you've come all the way through the image, you've got these two sums. You divide them, and you're done. So it's totally brainless. There's no intelligence there, artificial or otherwise. And I call this a, quote, "direct computation."
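That accumulator loop can be written in a few vectorized lines. This is a sketch of the direct method under its stated assumptions (pure motion along the optical axis, small inter-frame motion); the synthetic test pattern and 1% zoom factor below are made up, not from the lecture demo:

```python
import numpy as np

def estimate_c(E1, E2):
    """Direct estimate of c = 1/(time to contact), in units of 1/frame:
    c = -sum(G*Et) / sum(G*G), with G = x*Ex + y*Ey the radial gradient."""
    h, w = E1.shape
    Ey, Ex = np.gradient(E1)             # brightness gradient from frame 1
    Et = E2 - E1                         # time derivative from the frame pair
    y, x = np.mgrid[0:h, 0:w]
    x = x - (w - 1) / 2.0                # coordinates relative to the image center
    y = y - (h - 1) / 2.0
    G = x * Ex + y * Ey
    return -np.sum(G * Et) / np.sum(G * G)

# Synthetic frame pair: frame 2 is frame 1 expanded about the center by 1%
def pattern(x, y):
    return np.sin(0.2 * x) + np.cos(0.3 * y) + np.sin(0.1 * (x + y))

c_true = 0.01
h = w = 64
y, x = np.mgrid[0:h, 0:w]
x = x - (w - 1) / 2.0
y = y - (h - 1) / 2.0
E1 = pattern(x, y)
E2 = pattern(x / (1 + c_true), y / (1 + c_true))   # zoomed view of the same scene
c_est = estimate_c(E1, E2)               # recovers roughly 0.01 per frame
```

Note there is nothing high-level here: two elementwise products, two sums, one division, exactly the two accumulators described above.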
Because there are lots of other ways of approaching this problem. For example, you might measure the size of-- like you're coming out of a parking lot and there's an MIT bus in front of you. Then you can measure in the image, how many pixels is the image of the bus? And how many pixels per frame in time is it changing, and so on? You can do that. But it means you have to detect the bus. You have to find the edges. You have to estimate the beginning and end of the bus. And you better measure it with very high precision, like a hundredth of a pixel. And it's not too hard to find out where the edge of the bus is in terms of pixels, but hundredth of-- and why do you have to measure that accurately? Because it doesn't change much from one frame to the next. So there are other ways of doing this. But this is-- it's brute force, mindless, and elegant. That's my view of it. OK, now of course, this is very specialized. We're only dealing with a case where we are running straight into the wall. And so we'll look at more interesting cases in a minute. But what I'm hoping to do, if I can get the projector going again, is to demonstrate some of this. So what I'll show you is a little program called MontyVision, which allows you to create image processing applications by a graphical process of basically plugging together quote, "filters." So let me bring this over here and show that to you for a second. So this is what it looks like. And plug together the left most thing is a file source, which, in this case, is .avi file, video. Then it goes into a splitter that cuts it up into streams, including going from color to gray level. There's a decompressor, because all the video is compressed, otherwise it would be ridiculously large. And then it goes into the time to contact box, and it comes out the video renderer over there. And this time to contact box has a set of parameters that show up somewhere off screen right now. OK, so I'll put this back where I can see it on my screen. 
And let's hope this works. Oh. OK, so I'll run this several times, because this goes by pretty fast. So here we are running into a truck that's used in the mining industry. It's kind of a large truck. And what we're seeing are a whole number of different things. I'll also try and get rid of this. Well, let me just run it again. So you be seeing a circle-- red circle-- that denotes the focus of expansion, the estimate of the focus of expansion. And you can see that we're going to hit that tire or that wheel. And then, you will see the three bars on the right. So the third one is c, the thing we just calculated. And you can see that it's red meaning bad, and it's growing upwards as we approach. It's the inverse of the time to contact. And the other things, like the time to contact is over there in frames. And we can plot to see how accurate it is compared to when we actually hit the target. So this is slightly different from what we did, because this allows some motion in the x and the y direction, and we'll do that in a second. And as a result, it has three qualities its computing, which we'll call a, b, and c. And those are the three bars you see on the right. And if we were going straight down the barrel, the first two would be 0. They wouldn't be either up or down, but they're not. There's an x component and a small y component that-- and of course, this is noisy. So it's not perfect. OK, let's look at a different one then. Newman-- Well, I guess I'm obsessed with running into trucks-- one of my nightmares. So here's another truck we're running into. The circle, again, indicates the focus of expansion. The three bars are a, b, and c. And the rightmost one is the one that we now know how to calculate. And I'll show there's a couple more times. And it's growing as we go, because the time to contact is getting shorter so 1 over the time to contact is getting larger. The time to contact computed is down on the right-hand side. 
Now, it turns out that if we were to do this frame-by-frame, you would notice that near the end it just is wrong. And so why is that? Well, there's a whole bunch of different reasons. One of them is that the image becomes out of focus and so the information is changing. We're kind of assuming that the image is just zooming, but in the system that has a lens, unless we adjust the lens to stay in focus, we're going to go out of focus. Another point is that initially, the image motion is very small. It's a fraction of a pixel between frames. But then, of course, as we get close to the object, things are really exploding. And so our whole assumption about estimating derivatives by taking neighboring pixels and subtracting them and all of that stuff is not working very well. And so how do you deal with that? Well, one way is to combine pixels so that on the super pixels, the motion is a small number of pixels. And we know that's when this works. So what we could do is run this at multiple scales. So you have the full resolution of the image. And when things are far away, that's the one that's going to give us the best information. Then we, for example, average 2-by-2 blocks of pixels. Now we have an image which has a quarter as many pixels. And it can cope with twice the image motion, because a certain amount of image motion in the original image now corresponds to half as many pixels in this reduced one. And then, we do that again. We take that reduced picture, we again-- 2-by-2, average and now we're down to a 16th. So this gets very cheap to compute compared to the original image. And so there's no real cost-- additional cost to this. Well, some. So what is 1 plus a quarter, plus a sixteenth, plus a sixty-fourth plus-- OK, well, you know how to sum geometric series. And the answer is-- I don't know-- 4/3 or something. So yes, it costs more, but it's not a huge cost. You can afford to do this at multiple scales. 
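A minimal sketch of the 2-by-2 averaging pyramid just described, in NumPy; the number of levels is an arbitrary choice here:

```python
import numpy as np

def pyramid(image, levels=4):
    """List of images, each the 2x2-block average of the previous one.

    Total pixel count is 1 + 1/4 + 1/16 + ... < 4/3 of the original,
    so running the estimator at every scale costs little extra.
    """
    out = [np.asarray(image, dtype=float)]
    for _ in range(levels - 1):
        a = out[-1]
        h, w = a.shape[0] // 2 * 2, a.shape[1] // 2 * 2  # crop to even size
        a = a[:h, :w]
        # Averaging 2x2 blocks halves the image motion at each level.
        a = 0.25 * (a[0::2, 0::2] + a[1::2, 0::2] + a[0::2, 1::2] + a[1::2, 1::2])
        out.append(a)
    return out

levels = pyramid(np.arange(64.0).reshape(8, 8), levels=3)
```

At run time you would pick, per frame, the level whose inter-frame motion is around a pixel or less, since that is where the finite-difference derivative estimates are trustworthy.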
And that allows you, then, to cope with all of the velocities, starting off with a very slow sub-pixel motion to where things are really blowing up. OK, well let me do one more thing and hook this up to the camera. Hopefully that'll work. Oh, don't do that to me. OK, so here's a web camera. And I guess it's relatively dark in here. Let me try the paper. Out of focus. Trying to find something that will give it a nice texture to work with. I guess there's a chair which has lots of holes in it. Let's try that. OK, so I'll try and hold it as still as possible. So again, the three bars are a, b, and c. And they're all over the show, but relatively small. Now, if I move the camera up, that third bar should go down and be green for safe, meaning the time to contact is negative. If I move it down, that third bar will go up and be red, meaning I'm about to crash into something. And I think the reason it's got this jerky motion is because it doesn't like this projector. So it's got a very slow cycle time. It should be running at 28 frames per second, and it's obviously not. OK. Now, if I try to move in x-- let me see if I can move in x-- that first bar should go up. Oh, I'm not holding it straight, so I'm getting the crosstalk between x and y. Right now I'm moving in y. And I'm moving in the opposite direction in y. So that second bar goes up, and the second bar goes down. And here, I can move in the x direction-- the first bar goes up, and the first bar goes down. And I mean, to me, this is fascinating, because it's such a neat instrument. And it doesn't have any magic in it, and we can understand its limitations. We can calculate what the error is and so on. It's not like something that's very elaborate and has some hidden code in it. So let me show you that version. Let's see. Oh. Oh, OK. Oh, I see. It's closed. So one of the features of this code is it can show you some intermediate results.
I need to look at the control panel for this to see what it's showing right now. So right now, it's sub-sampling 2 by 2. And let me show Ex. Let's get rid of this. OK, so Ex is just the derivative in the x direction. And positive is shown green, negative in black. And it's fairly weak in most parts of the image, except where the black line is. There, on one side of the black line, there's a rapid change in brightness up. And on the other side of the black line, there's a rapid change in brightness down. So that gives you the green and the red fringes in the x direction. Now, if I turn it so that that black line runs horizontally, there'll be less, because the x derivative now is very small. I see that, because it's more or less constant in the x direction. Whereas, if I look at the y derivative-- oh, I can actually show x and y together in a slightly different presentation. So here, the x derivative is controlling red, and the y derivative is controlling green. And you get this funny kind of almost three-dimensional feeling about the result. But the gradient is showing the direction of most rapid variation. So think again of what we said about the analogy with a surface-- a topographic map. And that's why this looks a little bit to us like it's three-dimensional. OK, so those are the basic things we compute. I can also show, instead, Et. Now, Et, if I was solid as a rock, would be 0 everywhere. And obviously, I'm not. Let's see if I can hold on to something to make it more constant. So you can see it decreasing in magnitude, although I can't seem-- OK. So that's the time derivative. We're subtracting successive images. And let's see, let's look at G. So G is that radial derivative, x Ex plus y Ey. And what can you say? Well, it goes outwards. You see that square box? It's green all the way around, which is not what Ex would do, because Ex would be opposite on the two sides.
But because we're multiplying by the vector from the center of the image, it actually ends up being green all the way around. Now, we can do one more thing, which is to multiply it by the time derivative Et. So in that formula, you may remember that there's a product of Et and this radial gradient. And obviously, if that is 0, it will say there is no motion. If there is motion, then it'll be non-zero. And in particular-- hmm. It's unfortunate that it's not-- oh, OK. So here you can see that in most places-- I wish it was running faster, because it's very hard to-- in most places it's now green, because I'm moving away from the surface. When I do this, in most places, it'll be red. And the reason for the other areas is that it's jiggling back and forth. I'm not a very good manipulator. If I had a robotic manipulator, we could do this better. OK, so anyway, anything else we want to see on this? So before I put that away, are there any questions? Yeah. You've got a very limited, very constrained problem, which is the problem of the fly landing on the surface or someone running into a wall. If there are multiple motions, this isn't going to work. And we're going to do that. So we're slowly expanding. We started off with one unknown, which was just the inverse of the time to contact. And now we're going to do a little bit more. But ultimately, what we'd like to do is deal with arbitrary motions, where u and v are not constrained by just a few parameters, but they could be different all over the image. So u and v, right now, are some simple function of some constant. Where is the equation? Over here. And what we'd like to eventually get to, after a few more stages, is where u and v can vary arbitrarily over the whole image. And you can see how that's going to be problematic, right? Because at every pixel, we have our magic equation up there, the brightness change constraint equation. At every pixel, we get one constraint.
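The quantities shown in the demo, Ex, Ey, Et, and the radial gradient G = x Ex + y Ey, can be estimated from a pair of frames with simple first differences. This is a bare-bones sketch, not the MontyVision code; putting the coordinate origin at the image center is an assumption that matches the focus-of-expansion discussion:

```python
import numpy as np

def flow_quantities(e0, e1):
    """First-difference estimates of Ex, Ey, Et and radial gradient G.

    e0, e1: two successive gray-level frames (equal-shape 2D arrays).
    Estimates live on the overlapping (rows-1) x (cols-1) grid, and
    image coordinates are measured from the center of the image.
    """
    e0 = np.asarray(e0, dtype=float)
    e1 = np.asarray(e1, dtype=float)
    # Average over the two frames so all estimates refer to the same instant.
    ex = 0.5 * ((e0[:-1, 1:] - e0[:-1, :-1]) + (e1[:-1, 1:] - e1[:-1, :-1]))
    ey = 0.5 * ((e0[1:, :-1] - e0[:-1, :-1]) + (e1[1:, :-1] - e1[:-1, :-1]))
    et = e1[:-1, :-1] - e0[:-1, :-1]
    rows, cols = ex.shape
    y, x = np.mgrid[0:rows, 0:cols]
    x = x - (cols - 1) / 2.0              # coordinates from the image center
    y = y - (rows - 1) / 2.0
    g = x * ex + y * ey                   # radial gradient G
    return ex, ey, et, g

# Sanity check: a static brightness ramp in x has Ex = 1, Ey = Et = 0.
frame = np.tile(np.arange(6.0), (6, 1))
ex, ey, et, g = flow_quantities(frame, frame)
```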
So if we have a million pixels, we have a million equations. But we have 2 million unknowns, because at every pixel, we'd like to know u and v. So that doesn't sound very good. So we'll have to introduce some additional assumptions to solve that problem. Because right now, it looks like we have a million equations and two million unknowns. We know what the answer to that one is. But fortunately, in most cases, there is additional information. For a start, when I'm moving around the room and you're moving in your seats, in most places in the image, neighboring points are moving not the same, but very similar. And so if they were moving the same, then we could easily integrate that with our approach. If they're similar, it's a little bit harder. But that's what we're going to do. We're going to say that we're trying to recover this vector field in the situation where things can move independently but there is some-- it's not like you're looking at-- oh, you don't remember this. But it used to be that late at night TV stations would go off the air, and there was no signal. And then the radio receiver would be basically putting white noise on your screen. So you'd just see this-- every pixel is totally unrelated to every other pixel. So that's a situation we're not going to be dealing with, because that's not a natural situation. As I'm doing my daily business of eating and whatever, I'm dealing with a situation where neighboring pixels typically have almost the same velocity. OK, well let's go back to this and try and generalize it a bit. So the next most general thing to do is to say, OK, let's have motion in U and V in the x and y direction, as well. So no longer will U and V-- big U and big V up there-- be 0. But there'll be-- we'll allow them to vary, as well. So let's see. Before I go there, let me just say something about-- I totally missed this. But everything we did so far, we got the answer, and then I said, oh, but how is this going to fail? 
We haven't done that here. So you remember that this was c equals the ratio of two integrals. And of course, it's going to fail if the integral of G squared is 0, for example. That's one way it can fail. So what does that mean? Well, that means that there's 0 radial gradient everywhere in the image. And one way for that to happen is you're in a coal mine without lights. E is 0. That's not a particularly interesting case. The other one is that the brightness-- the radial derivative-- is 0 everywhere. That means that the component of the gradient in the radial direction is 0. So we have this expression over here. This is 0. And that means that-- let's see. That means that the radial direction and the gradient are at right angles to one another. That's how we get the dot product to be 0. So let's try and draw a picture like that. So here's an image. And say we're there. Then the radial direction is that. And we're saying that the gradient has to be at right angles to it so that the brightness can change in that direction, but it can't change in this direction. So in this direction we can change. We can't change in that. And similarly for any other direction-- we can have the brightness change this way but not radially. So what sort of pattern would do that? How about a bullseye? Is that-- it will have variation in the radial direction. So that doesn't work. So if it's not a bullseye, turn it 90 degrees everywhere. An x will do. Sorry, what did you say? AUDIENCE: Rotate it. BERTHOLD HORN: If we rotate it-- hmm. If we draw a slice and rotate it-- pie charts. So if we have a pie chart, then everywhere the gradient is in the rotating direction, there's no variation in brightness inwards and outwards. So this whole pie sector is one color. This is another color, and so on. And so that's a case where this will fail. And does that make sense? I mean, would you expect it to fail? Why-- could you do better? 
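In discrete form, the ratio of two integrals, together with the failure test for when the denominator vanishes (the pie-chart case), might look like the following. The sign convention, c G + Et ≈ 0 with c = 1/TTC, is one common choice; the lecture's blackboard convention may differ by a sign:

```python
import numpy as np

def inverse_ttc(ex, ey, et, eps=1e-12):
    """Least-squares c = 1/TTC for pure motion along the optical axis.

    Minimizing  sum (c*G + Et)^2  over c gives c = -sum(G*Et)/sum(G*G).
    Returns None when sum(G*G) is ~0: no radial brightness variation
    anywhere, as in the pie-chart pattern, so c is unrecoverable.
    """
    ex = np.asarray(ex, dtype=float)
    ey = np.asarray(ey, dtype=float)
    et = np.asarray(et, dtype=float)
    rows, cols = ex.shape
    y, x = np.mgrid[0:rows, 0:cols]
    x = x - (cols - 1) / 2.0
    y = y - (rows - 1) / 2.0
    g = x * ex + y * ey
    gg = float(np.sum(g * g))
    if gg < eps:
        return None
    return -float(np.sum(g * et)) / gg

# Synthetic check: build Et consistent with c = 0.5 and recover it.
rng = np.random.default_rng(0)
ex = rng.standard_normal((8, 8))
ey = rng.standard_normal((8, 8))
yy, xx = np.mgrid[0:8, 0:8]
g = (xx - 3.5) * ex + (yy - 3.5) * ey
c = inverse_ttc(ex, ey, -0.5 * g)
```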
So you're looking at-- you see nothing else in the world except this pie chart, and you're moving towards it. Well, the-- shaking of heads. The image doesn't change. I mean, we're making assumptions, like you can't see fine, little specks on the surface that would give you a clue. Assuming it's perfectly smooth and you just have these, if you magnified by a factor of 2, it looks the same. So that's perfectly consistent. This method isn't going to work. And it can't. It doesn't-- there's no information there for it to work on. So we'll come back to this issue when we generalize it. Again, looking at the case where it'll fail, and is it reasonable for it to fail? Could we do better? Now, in a lot of cases, we can do better, just because, if I look at a piece of paper, it's not perfectly smooth. There are tiny fluctuations, there are little fibers. And so this overall pattern might look like this, but there are tiny little specks in there, and I can pay attention to those. And similarly, this algorithm could, if the contrast of those little specks is high enough, pick up a non-zero Ex, Ey and it would calculate the time to contact. OK, so more general-- slightly more general case. OK, so now we're allowing u and v to be non-zero. And so we've got u of f is-- I'm just copying the equation from over there. And for convenience, I'm going to multiply through by f. OK. So I'm just defining two new variables. So we've already had c was w over z, which is the inverse of the time to contact. And now, we're defining two more, which are fu over z, which is f times x0. And fv over z, which is f times y0. Why is that? What does that mean? Well, what we're going to find is time to contact and FOE. But it's more convenient to define these variables that are functions of time to contact and FOE, because then the equations are simple. We only have a few terms in them.
So this one here is just f times the x component of the focus of expansion, and this one is just f times the y component. So once we know a and b, we can calculate where the focus of expansion is, assuming that we know f. One thing that I didn't mention is in our formula here for c, f doesn't show up. So that's kind of interesting, because it means that we don't need to know certain properties of the camera in order to do that computation, which is surprising, because for many purposes, you do need to know f. For example, if you take the approach, OK, time to contact is distance divided by velocity. Well, to compute the distance, you need f. To compute the velocity, you need f. So it's a very pleasant surprise that to compute the time to contact, we don't need it. Intuitively, what's going on is that if I approach a wall, it's going to loom outward. And if I approach it with a telephoto lens, it's still going to loom outward. And if you actually go through the perspective projection equation, it's going to increase in size at the same rate. And you're right that that is big X over big Z. So I've applied the perspective projection equation here. OK, so now we have u and v as functions of the unknowns, and we plug them into our favorite equation. And let's see. We get-- So that's our brightness change constraint equation. And I'm going to, again, call this the radial gradient G, to cut down on the amount of writing. And I'll use the same method to try and find the answer. OK, so we have this expression, which, if we had the correct values for a, b, and c, is 0. If we integrate over the image, it should still be 0. If we have the wrong values of a, b, and c, it won't be 0. And so we formulate the problem as making this as small as possible. Now, in the absence of measurement noise, we could actually make it 0. In practice, Ex, Ey, and Et will be subject to measurement noise, so we won't be able to quite make it 0.
But it's still a valid approach to pose it as this kind of least squares problem. OK, and of course, only a finite number of parameters-- a, b, and c. So it's a calculus problem. I keep on saying that, because later on, we're going to be looking for functions, not parameters. And then we can't use calculus-- so can't differentiate with respect to a function-- is the short version of that story. OK, so what happens if we differentiate this? Well, the derivative of that integral is the integral of the derivative of the integrand. The integrand is this square thing. So we get 2 times that. So we're going to get 2 into-- So we have 2 times that, and then we have to take the derivative of that term in here with respect to a. And the only thing here that depends on a is this part. So that's just Ex. And then, we repeat that. And again, we take the derivative of the integrand. So we get twice this thing. And the derivative of the term here with respect to b is obviously just Ey. So and then the third step is-- I'm not going to write this out again. And the only thing that depends on c is cG. And so we take the derivative and we just get G. So we get three equations, and magically, we can actually solve them. So these simple cases have a wonderful feature that there's a closed-form solution. I can write out what the answer is. And that's invaluable, because yeah, you can always numerically calculate stuff. But it's very hard to say things about the solution. Like, suppose I want to tell you that there's only one answer. Well, if you have some numerical way of finding the minimum, there could be another minimum somewhere else. And if I have an analytic solution, the closed-form solution, I can say things like, how sensitive is it to noise? Well, I can just differentiate with respect to the thing that has noise in it. If I do it numerically, I'll have to recompute. And then you say, OK, that's fine. That answer is OK for this set of parameters. 
What about a different set of parameters? You have to recompute. Whereas if it's analytic form, there's the formula. There it is. So unfortunately that doesn't happen all the time. Once we make things complicated enough, we typically can't find a closed-form solution. OK, so what do we do with this? Well, the integral of a sum is the sum of the integrals. So we can rewrite this like this. I'm leaving out the 2, because that doesn't make any difference to anything. So I'm just multiplying out this term by Ex and then splitting it up into several integrals, because the integral of a sum is the sum of the integrals. So that's going to be GEx. OK, so that's one equation. And you'll notice it's an equation that's linear in a, b, and c. So that right away gives us hope that we can solve this problem pretty easily. And if I repeat for the second equation-- So three linear equations and three unknowns. And before we go on, a couple of things. One of them is, you should recognize parts of this. That's what we had before when we were going straight down the optical axis where a and b are 0. Then we're just left with this part. We just had one equation and one unknown, so that was nice and easy. Another thing is that the coefficient matrix is symmetrical. So this coefficient is the same as that one. And that coefficient is the same as that one. And this one is the same as that one. And often having a symmetrical matrix gives some advantage in terms of understanding the stability of the solution. So what are these things? Well, how do we do this, before we even start to solve the equations? So let's look at the coefficient of a up here. Well it's just the integral of Ex squared over the whole image. So we go through the image, and at every pixel, we look at the neighboring pixel. We subtract the gray levels to estimate E sub x. Square it and add it to an accumulator. And we do this for all the pixels.
Then we have another accumulator where we estimate Ex and Ey by looking at neighboring pixels in the y direction, multiplying those two differences. And then we have a third accumulator. And G, we have to do x times Ex plus y times Ey. And then we take the result, multiply by x. We throw that into that accumulator. Now, we don't need to do this one, because it's the same as that one because of the symmetry of the coefficient matrix. We do have to accumulate that one. We have to accumulate this one and that one. And again, these two, because of symmetry, we don't need to do those. So we run through the image, and we have six accumulators that we just add to as we go. And of course, you could parallelize this, because they don't interact. I mean, the operation I do on this pixel, in this case, is completely independent of the operation I do on that pixel. So if you have a GPU, you can do a whole bunch of pixels at once. You can dramatically speed this up. Although, you saw it running on a silly ThinkPad, and I was not doing anything fancy. It's using software that's five years old. So even with that, if it's not hooked up to this projector, it gets 28 frames per second. So it's not critical that you parallelize it, but if you wanted to, you could. And then you could run it at thousands of frames a second, which, by the way, is what optical mice run at. They're typically running at 1,800 frames per second, or in a gaming mouse, even higher. But the images tend to be smaller. The images are often only 32 by 32. OK, so we accumulate those six. Notice that these do not depend on any changes in time. These only depend on the brightness pattern, the texture that's in the image. Then we need three more which do depend on changes in time. And that's it. So we have a total of, let's see, nine accumulators we have to keep track of. And then we get to the end, and then we have to solve three equations and three unknowns. And I guess I ran overtime.
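The whole recipe (six sums for the symmetric coefficient matrix, three for the right-hand side, then solving three equations in three unknowns) fits in a short function. This is a single-scale sketch under one common sign convention, a Ex + b Ey + c G + Et ≈ 0, not Horn's production code:

```python
import numpy as np

def ttc_and_foe_params(ex, ey, et):
    """Solve for (a, b, c) minimizing  sum (a*Ex + b*Ey + c*G + Et)^2.

    c is the inverse time to contact; a and b carry the focus of
    expansion.  Six sums fill the symmetric 3x3 coefficient matrix,
    three more fill the right-hand side.
    """
    rows, cols = ex.shape
    y, x = np.mgrid[0:rows, 0:cols]
    x = x - (cols - 1) / 2.0
    y = y - (rows - 1) / 2.0
    g = x * ex + y * ey
    M = np.array([
        [np.sum(ex * ex), np.sum(ex * ey), np.sum(ex * g)],
        [np.sum(ex * ey), np.sum(ey * ey), np.sum(ey * g)],
        [np.sum(ex * g),  np.sum(ey * g),  np.sum(g * g)],
    ])
    rhs = -np.array([np.sum(ex * et), np.sum(ey * et), np.sum(g * et)])
    return np.linalg.solve(M, rhs)        # (a, b, c)

# Noise-free synthetic data with a = 1, b = -2, c = 0.5.
rng = np.random.default_rng(1)
ex = rng.standard_normal((10, 10))
ey = rng.standard_normal((10, 10))
yy, xx = np.mgrid[0:10, 0:10]
g = (xx - 4.5) * ex + (yy - 4.5) * ey
et = -(1.0 * ex - 2.0 * ey + 0.5 * g)
a, b, c = ttc_and_foe_params(ex, ey, et)
```

Setting a = b = 0 recovers the one-unknown, straight-down-the-axis case: only the bottom-right entry and the last right-hand-side sum survive.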
So are there any questions before we depart?
MIT 6.801 Machine Vision, Fall 2020
Lecture 12: Blob Analysis, Binary Image Processing, Green's Theorem, Derivative and Integral
[SQUEAKING] [RUSTLING] [CLICKING] BERTHOLD HORN: Quiz one will be out today. And the rules for that are slightly different from the homework problems. It counts twice as much as a homework problem. And it's longer. Not twice as long, but something like that. And so you have also a little bit more time to do it. It's not just one week. But it's a bit of work, so don't leave it too late. Please start on it fairly soon. And this is where you're supposed to work on your own, not collaborate. And it covers whatever we did up to this point, with a bit more emphasis on recent material. And I guess only the last question has to do with the patent we're discussing, so you should be able to deal with the other four right away. Little sidebar here about intellectual property, and starting off with the fact that some people don't like that terminology. And how can an idea be property? A little bit like, how can a company be a person? Well, it's an awkward thing in law that you have to come up with some way of formalizing these things. And so in this case, ideas are treated as property to some extent. There are several different types of intellectual property. First of all, we have patents, which is what we'll be talking about. And there are different types of patents. The so-called utility patents, i.e., useful patents. Not to say the others aren't useful, but they're design patents. So for example, Jerome Lemelson's first patent was basically a design patent for a baseball cap with a hole in it and a tube that you could blow through. So this is the inverse of the beer baseball cap. And there would be a propeller, and as you blow through it, the propeller would turn. Another famous design patent is Apple's design for a cell phone. And their design patent says that it should be a rectangle with rounded corners. And there was a lawsuit where they sued Samsung because their phone was a rectangle with rounded corners.
And the jury awarded Apple a billion dollars, so big money involved here. And you might say, well, what should a cell phone look like other than this? This is it. But it's not completely crazy because at the time they invented it, phones were these big concrete things you held in your hand. Anyway. Not to argue about the merits of this. And of course, it's being appealed, and so in the end I don't know what'll happen. I guess some large amount of money will change hands, an amount of money which is astronomical to us mere mortals, but probably for these companies it's small change. So we shouldn't feel too sorry for them. So patents. We mentioned last time this was the social contract where you explain exactly how to do something in return for having a limited monopoly that lasts a certain number of years. And the rules change slightly as time goes on. By the way, you are supposed to explain how to do it. And in particular, if you know a good way of doing it and in your patent you only describe a lousy way of doing it, that could be grounds later on for invalidating a patent. And that's called best mode. So there are lots and lots of terminology. And in the litigation, there are standard things that come up. One of them is best mode. So in one machine vision patent for a wheel alignment-- so that one has cameras and LEDs, and it determines the axis of rotation of the wheel and the axis of steering that it turns around when you turn your steering wheel. And the patent that was claimed to have been infringed did not disclose the best mode. It disclosed the method that we see actually implemented. It gives you the correct answer if you have perfect measurements. But with realistic measurements, it would have an error of a degree or two, and that's not good enough. If you have a car like a BMW, they specify those angles to 0.1 degrees. Anyway. So they had a problem there because they did not disclose the best mode. Utility patents. 
And we talked about the structure of those. I'll just briefly talk about some of the others. So copyright. You can protect artistic expression using copyright. So if you write a book, there's a copyright on that. If you record a song, there's a copyright on that. If you choreograph some ballet, there's a copyright on that. And there are exceptions. So for example, if I want to talk about some topic, I am allowed in class to present a portion of that material without violating the copyright, without having to ask the author, how much do you want from me to use your product? There are also exceptions where you use extracts. So for example, if you're writing a news article, or say you've got a blog about a movie and there's a particular part of the movie that you'd like to talk about, under certain circumstances you can clip that out and use it. And of course, you can imagine, there's a lot of fun that lawyers have with this because a lot of music is extracted from other music and put together, and is that legal or is that copyright infringement? Copyright used to be basically for the lifetime of the author. If you write a book, you should benefit up to a certain point. When you're not around, then you don't benefit. Well of course, the heirs of famous authors didn't like that, so that was changed. And now we have the Sonny Bono system of copyright. Basically, he was the person who convinced Congress to change the laws, and now it's author's life plus 75 years. And they've been periodically updating it. So it's basically author's life plus infinity, because every time we get close to the limit, they say, oh, well, let's upgrade these poor heirs. They can't live without the royalties from this. Excuse me if I'm being sarcastic. So that's copyright. And by the way, that was very important for a while, and still is, in software. Because again, the rule was you couldn't patent mathematics or an idea, abstract idea. 
And so the courts typically held that that's what programs are. They're ideas. They're abstract stuff. They're not-- you can't hold them and weigh them. And so the way people protected themselves was to send their programs into the copyright office and register them there. And you can imagine that, say you have some operating system like IBM 360 operating system and you send in your 750 million lines of code to the copyright office. That was quite an exciting event. And certainly, if someone makes an exact copy of your program, then you can sue them under this law. Of course, with all of these, there are people who are working to get around these laws. And so in the case of copyright, there was this notion of a clean room. So what you would do is you would have a bunch of people that understood programs well analyze what someone else's program is doing, kind of reverse engineering it. But they weren't allowed to show it to another group of people. They were only able to tell them what it does. And then this other group of people would write the code. And supposedly, because they didn't see the original code, there was not a copyright violation. I'm not sure that's-- and it's not totally unreasonable in that, if you look at the code line by line, it's likely to be totally different. They'll use different variable names and stuff. Then there's trademark. So trademark is much more restricted. It's just, if you want to call your company Dunkin' Donuts, you can trademark that. But it has to be unique in the field, and it has to not be-- there are a whole bunch of rules. But basically, you can't use a common word, like you can't call your company Time Space because those are common words. And so a lot of company names are slightly misspelled versions of common words. The other thing is that the trademark may include particular shapes, particular markings-- so you can distort some letter-- and color. 
So Dunkin' Donuts has two colors that they've trademarked, and I'm proud to say I have hats that have both of those colors. They're very useful in hunting season, which is approximately what's happening right now where I live. So you can use color to protect yourself. They have to be unique to the field. So you may have a trademark on a name in the rubber industry and someone else has it in the semiconductor industry. They don't conflict. And there was a famous case about that where-- again, Apple-- Apple sued the Beatles. Why? Well, because when the Beatles started, they formed their own company called Apple. You may not know that. But in any case, Apple sued them. And they lost because there's no confusion of the client, the customer. They're in two totally different fields. One is music, and the other one is computers. And of course, Apple claimed, oh, well, but we distribute music, so it is the same thing. Well, whatever. Anyway. Then, the last thing is trade secret, which is no protection at all. You just hide what you're doing. You don't tell anyone exactly how you're doing it. And classic example of that is Coca-Cola. There's a safe down in Atlanta, Georgia, which has the formula for the ingredients in Coca-Cola. And supposedly, very few people know what's in there. And the danger, of course-- the good part is that it's unlimited. There's no expiration date, unlike patents. And the bad part is, if it ever comes out, there's no recourse. There it is. Everyone now knows what to mix up to make Coca-Cola. So it's a risky thing, but it's certainly much cheaper than pursuing some of the other avenues. You don't have to pay lawyers for it. And there is a certain legal recourse. If somebody signs a non-disclosure agreement when joining your company and then they walk off with the formula for Coca-Cola and tell Pepsi Cola how to do it, then you can sue them. But the cat's out of the bag. It's not so easy to recover from that kind of loss.
Just a quick overview of what's called intellectual property. And Richard Stallman would definitely complain that I use that terminology because he doesn't think that ideas should be treated as property, just as some people don't think that companies should be treated as persons under the law. But there you are. Back to the particular patent. So this particular patent is real low level. I wanted to start with something very simple, finding edges very accurately. So where are we going with that? Well, the idea is that once we've found edges and we're describing images using edges, we can do more elaborate things like recognition and determining the position and determining the attitude of an object. And most of what we'll be doing after finishing with this patent is in 2D, where-- the world, of course, is 3D, but there are lots of cases where 2D machine vision is incredibly successful. And one thing that's important to point out is that it has to be incredibly accurate. If your thing works 70% of the time, forget it. It's got to be working 99.999% of the time. So these methods are very carefully thought out and attuned to have extremely good performance. And what we'll find is that in the 2D world, this is possible. It's a little harder once you get to 3D. So once we've done position and attitude in 2D, we'll progress to 3D, which is a more interesting problem. But let's get some of the basics sorted out. So I'll just quickly review what this patent did. And so we'll start-- so this is-- and the first idea was to look at the brightness gradient, and so just a quick story on that. So here we've got brightness as a function of x. And there's a point where the curvature becomes 0 and changes sign, and that's called an inflection point. And that's what we're looking for. So the edge is actually spread out over several pixels, but we're trying to identify very accurately a particular point on the edge.
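One standard way to pin down a point on a spread-out edge to a fraction of a pixel is to locate the peak of the first derivative by fitting a parabola through three samples around the discrete maximum. This is a common sub-pixel trick, sketched here; whether the patent uses exactly this interpolation is not established by this passage:

```python
def subpixel_peak(y, i):
    """Refine a discrete peak at index i with a parabola through
    (i-1, y[i-1]), (i, y[i]), (i+1, y[i+1])."""
    denom = y[i - 1] - 2.0 * y[i] + y[i + 1]
    if denom == 0:
        return float(i)                   # flat: nothing to refine
    return i + 0.5 * (y[i - 1] - y[i + 1]) / denom

# Derivative profile sampled from a parabola whose true peak is at 1.3;
# the discrete maximum is at index 1, and the refinement recovers 1.3.
samples = [-(x - 1.3) ** 2 for x in range(4)]
peak = subpixel_peak(samples, 1)
```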
Then we looked at the derivative, and there we're looking at a peak-- we're searching for a peak. And some methods use second derivative and look for 0 crossing. But importantly, this is in the gradient direction. So this is, of course, a 1D cross section and we're dealing with images, so how do we take the cross section? Well, we're only interested in the direction that is perpendicular to the edge, and so these graphs and these ideas correspond to that. Then we talked a little bit about things that are sometimes called stencils and sometimes called computational molecules. And what we're trying to do is, in the discrete world, we're trying to estimate derivatives. And so, of course, there are some obvious ones. So there's a way of getting an approximation for e sub x. And here's another one. And these are all some of the ones that we already looked at. So there are lots of ways of estimating the derivatives. And how do we know which one to choose? Well, there are trade-offs. And we saw that one way to progress is to use Taylor series expansion to see what the lowest order error term is. Because the higher we can push that, if it's a third order error term, it's better than if it was a second order error term. And if two methods have the same order of error term, then we look at the coefficient. And if it's lower-- like these two have the same lowest order error term. And also what was very important was, where are we trying to estimate the derivative? And we decided that we get the best results at those particular points. Now, some of those points are offset by half a pixel from the pixel grid, and that's why people don't use them. But that's silly, because what's wrong with having some quantities on the pixel grid, the image, and some quantities on the grid that's offset by a half? As long as you know that it's offset by a half, you can translate any result on that grid into a result on the other grid. 
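As a concrete illustration of that Taylor-series comparison, here is a small Python sketch (mine, not from the lecture) contrasting a forward difference, whose lowest order error term is first order, with the symmetric central difference, whose lowest order error term is second order:

```python
import math

# Two computational molecules for the first derivative of f at x:
#   forward difference  (f(x+e) - f(x)) / e      -> error O(e)
#   central difference  (f(x+e) - f(x-e)) / (2e) -> error O(e^2)

def forward_diff(f, x, eps):
    return (f(x + eps) - f(x)) / eps

def central_diff(f, x, eps):
    return (f(x + eps) - f(x - eps)) / (2.0 * eps)

true_value = math.cos(1.0)  # d/dx sin(x) at x = 1

# Shrinking eps by 10x should shrink the forward error by about 10x
# and the central error by about 100x, matching the Taylor analysis.
err_fwd = {e: abs(forward_diff(math.sin, 1.0, e) - true_value) for e in (0.1, 0.01)}
err_cen = {e: abs(central_diff(math.sin, 1.0, e) - true_value) for e in (0.1, 0.01)}
```

Note the connection to the half-pixel point above: the central molecule is most accurate at the grid point itself, while a two-point difference like (f(x+e) − f(x))/e is most accurate halfway between the two samples.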
We can also analyze these in the Fourier transform domain in terms of how they affect higher frequency content. And we won't do that right now, but that's a second set of methods we can use to decide which of these is better. Now, these derivative estimators can become quite complicated if you're looking for high precision. So for example, in analyzing muscle electrical signals, there's quite a bit of noise and there's quite a lot of distortion of the signal as it progresses through tissue. And so when people are trying to estimate the first derivative, in one case, one paper I saw used a 39-degree approximation. So they use a pattern like this that's 39 elements long, and they feel that that way they can control the trade-off between suppressing the noise and getting very accurate results. So we're using very low order, partly because it's easier, and partly because one other trade-off is if you make them too long, then they start to have different features interacting with each other. So here we are trying to detect an edge. Now, if you use an edge operator that's 100 pixels long and there's another edge within that 100 pixels, it's going to interfere with the results. So we try to compromise. On the one hand, we get better results with bigger support, better noise suppression. But at the same time, we run more into the problem of what happens when two edges get close to each other. And in particular, here's an image of the corner of a cube. And over here, we could have potentially quite a large support, but when we're back here, edges get pretty close together. And then a large support means that you're combining information about different edges and you won't get the results you would like. That's the first derivative. And if our model is we find the peak in the gradient direction of the first derivative, then that's all we need. But for some purposes, we might want higher order derivatives, and we'll see some examples of that later. 
So now, one way to think about this is that the second derivative is just the first derivative applied twice. And so we can run our numerical approximation a second time. And that corresponds to convolution. And the result is-- so that we can easily compute second order derivatives this way. Let me just make sure we understand what this is. So these computational molecules, the way they work is that you put them down on the image grid and then you multiply the gray levels by whatever the weight is at this point. So multiply that by 1. Put it in an accumulator. Take that one, multiply it by minus 2. Add that to the accumulator. Take that one, multiply it by 1, add that to the accumulator. And then, finally, take the result and divide by epsilon squared. Well, often we don't care about constant factors, so we might drop that. But if we're actually thinking about derivatives, we should do that. So that's one thing. And we're really talking about convolution. So we apply this to every place in the image-- slide it along if you like-- to produce a new image. It gives results at a bunch of points. Now, how do we know that's right if we want a sanity check? Well, a way of making a check on this is to try it on some function where we know what the answer is. So we know that, in this case, the answer should be 2. So let's apply this. Well, we have to decide where we're applying it, so let's suppose that this is 0, this is 1, this is minus 1. So we're plunking this down. And so what do we get? Well, in this case, epsilon is obviously 1, so that's 1 times minus 1 squared, which is 1. Then we have minus 2 times 0 squared, which is 0, and plus 1 times 1 squared. And so that's 2. So apparently it works. So there are different ways of designing these computational molecules. Certainly, convolution is one. And then you'd want to check them. So another thing you might want to check is, well, what if f of x is x? 
Well, then we're going to get 1 times minus 1, 0, 1 times 1. The answer is 0, which is what it should be. What if f of x is 1? So you want to check it for polynomials of low order up to the order of the derivative you're trying to get. So if f of x is 1, then of course we get 1 minus 2 plus 1 is 0, which is again what it should be. The second derivative is 0. So one of the constraints on these operators if they're supposed to be derivative operators is that the weights add up to 0. That makes sense. So that's just one dimension, so to speak. What about d2, dx, dy? We can just do that. So because this is the x derivative and that's the y derivative and composition of differentiation corresponds to convolution-- so we're sneaking in some stuff here. One of them is that we're used to dealing with linear shift invariant systems. But it turns out that, of course, derivative operations are linear shift invariant, in that if I take the derivative of a function and then take the derivative of the same function shifted, what I'm going to get is the derivative shifted. If I take the derivative of the sum of two functions, I'm going to get the sum of the derivatives. So that's a very important thing to exploit, is that taking derivatives can be considered convolution. And so all of the good stuff we know about that applies. So let's see what this gives us. Now, what we're going to do basically is take one of these and flip it and superimpose it on the other one. And if I superimpose it over here, of course, I get 0 because I'm multiplying-- we assume the background is 0. We assume that the only values we're showing are the non-0 values. But then what if I move it here? Well, then there's an overlap. And then I move it over there. There's an overlap. And I can move it down here. There's an overlap. And I move it down here and I get-- so I expect to get four values out of this convolution. And so I'm going to get something like this. 
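The sanity checks just described can be scripted directly. This sketch applies the 1, minus 2, 1 molecule to the low-order test polynomials from the lecture:

```python
def second_derivative(f, x, eps=1.0):
    # The 1, -2, 1 computational molecule, divided by eps squared.
    return (f(x - eps) - 2.0 * f(x) + f(x + eps)) / eps**2

# Test on polynomials up to the order of the derivative we want:
check_quadratic = second_derivative(lambda x: x * x, 0.0)  # should be 2
check_linear = second_derivative(lambda x: x, 0.0)         # should be 0
check_constant = second_derivative(lambda x: 1.0, 0.0)     # should be 0 (weights sum to 0)
```

The constant case is exactly the "weights add up to 0" constraint: applying a derivative molecule to a constant function just sums the weights.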
And so my 2 by 2 stencil for estimating the mixed derivative is just that, and it makes perfect sense. One thing to watch out for is that, in convolution, we flip one of the two functions and then slide it across and try it in all possible places. Over here, sometimes we're using computational stencils that are not flipped, so we may end up with some sign reversal. But you want to-- you can always check on a simple polynomial to see whether it's working the right way around. By the way, this one here, we can take a diagonal view of this. It's a little bit like, if I project this down-- so there's a plus 1 far out on this side. There's a plus 1 far out on that side. And then there's a minus 2-- there are two minus 1's in the middle. And this looks awfully like a second derivative, like the one we had over here, just rotated 45 degrees. And of course, this mixed partial derivative is in a rotated coordinate system, just d2, dx squared. So if I take-- so this is my original coordinate system. And then I look at the world in this coordinate system. Then the second derivative, ex prime, x prime, is the same as that mixed derivative. So we think of exy as a very different animal from exx and eyy, but it isn't. Just a little bit more of this before we go on. And we already mentioned the Laplacian. Let's bring that in here again. So we had a second derivative operator here. And so in the case of the Laplacian, we're adding two of them in different orthogonal directions. So we could just do-- so that's just this thing plus the same thing rotated 90 degrees. And so that's one way of writing the Laplacian. But that turns out to also be a good approximation to the Laplacian. How do I know? Well, there are lots of ways of checking. One is Taylor series. Another way is apply it to test functions, see if it gets the right result. And another way is Fourier transform. 
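Since differentiation composes by convolution, the 2 by 2 mixed-derivative stencil can be generated mechanically by convolving a first difference in x with a first difference in y. A sketch (the helper names are mine):

```python
def conv2d_full(a, b):
    """Full 2D convolution of two small stencils given as lists of rows."""
    ra, ca = len(a), len(a[0])
    rb, cb = len(b), len(b[0])
    out = [[0.0] * (ca + cb - 1) for _ in range(ra + rb - 1)]
    for i in range(ra):
        for j in range(ca):
            for k in range(rb):
                for m in range(cb):
                    out[i + k][j + m] += a[i][j] * b[k][m]
    return out

ex = [[-1.0, 1.0]]           # d/dx as a first difference (a row)
ey = [[-1.0], [1.0]]         # d/dy as a first difference (a column)
mixed = conv2d_full(ex, ey)  # the 2x2 stencil for d2/dxdy

def apply_mixed(f, x, y, eps=1.0):
    # Plunk the stencil down with its corner at (x, y) and accumulate.
    s = 0.0
    for i, row in enumerate(mixed):
        for j, w in enumerate(row):
            s += w * f(x + j * eps, y + i * eps)
    return s / eps**2
```

Checking on a simple polynomial, as the lecture suggests: for f(x, y) = xy the mixed derivative is 1 everywhere, and for f(x, y) = x squared plus y squared it is 0.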
But those-- and you'll see that this is really the same pattern rotated 45 degrees, but now the separation is square root of 2 times as large. So I end up having to change the weight. Now, I mentioned that the Laplacian is the lowest order linear differential operator that's rotationally symmetric. Well, neither of these looks particularly rotationally symmetric. So can we make one that's a bit more-- on a square grid, you can't, but we can do better than these. And one way is to combine them, to take a weighted sum of these two. So I've taken-- I don't know-- 4 times this one plus 1 times this one, and I get this operator. And it's a little bit smoother in terms of rotational symmetry. The corner ones aren't 0, but they also don't have the same weight as the up and down ones, because they're further from the center. So how do I know that's a good combination? How did people get to that one? Well again, you look at the Taylor series, and it turns out that the lowest order error term for this one is one larger than the lowest order error term for either of these two. So it's better. More work. If you apply this to the image, you're doing not quite twice as much work, but you're doing more work because you have to take into account all of the corner ones, which this operator doesn't. Or conversely, the up and down and west and east ones there. And if we wanted to, we could do a more detailed analysis of the error terms, but that's pretty boring so we won't do that. Anyway. So those are all of the computational molecules we're going to need. And as you can see, there are options-- different versions that are trying to compute the same thing. Of course, on the hexagonal grid, this looks much better because we can do something like this. Which looks nice and rotationally symmetric. Yeah? AUDIENCE: What's the one in the middle of the-- BERTHOLD HORN: Oh. Minus 20. 
So can anyone think of how you could determine that it should be minus 20 without actually doing a lot of hairy algebra? AUDIENCE: They have to sum to 0. BERTHOLD HORN: They have to sum to 0, right. Why is that? Well, because when you apply this operator to f of x equals 1, [INAUDIBLE] Laplacian is 0. And so if you apply this to the function that's 1 everywhere, obviously you'll just be adding all the weights. And so they need to add up to 0. Now, how do I know it's 1/6? Well, one way to get it is to apply this to a test function like f of x is x squared plus y squared. Well, I know that the answer is 4. And then go from there. Or I can get it from this argument, which is the weighted argument that I'm taking 4 times 1 of those and 1 of those. And this one has a factor of 2 in there. So you end up with-- I don't know-- a 1/2 plus a 1/4. No. Anyway, it comes out to 6 epsilon squared. So it's annoying that we don't have hexagonal pixels. By the way, there are some situations where people are very concerned about efficiency, like trying to image the black hole at the center of our galaxy using radio frequencies. Each antenna you put up costs a pile of money, so you want to make sure that this grid of antennas is the most efficient way of sampling the Fourier transform space in that case. And so it turns out that this way of sampling is 4 over pi times as efficient as that way of sampling. So there are places where people do this. And for example, briefly, almost all chips are laid out on a rectangular pattern because that's very easy to do and check. But if it comes down to packing density, and particularly if you have something that has a very simple repeating pattern, then sometimes there's an advantage. So there were memory chips for a while that used the hexagonal layout, but they've since disappeared because now we're stacking things vertically. Right now, it doesn't seem to be an efficient way to go. Oh. 
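Both of those checks can be done mechanically. Here is a sketch testing the 9-point Laplacian stencil, with the minus 20 center and the 1 over 6 epsilon squared factor, against the test functions from the lecture:

```python
# 9-point Laplacian: 4x the 5-point version plus 1x the diagonal version,
# with an overall factor of 1/(6 eps^2).
STENCIL = [[1.0,   4.0, 1.0],
           [4.0, -20.0, 4.0],
           [1.0,   4.0, 1.0]]

def laplacian(f, x, y, eps=1.0):
    total = 0.0
    for i in (-1, 0, 1):
        for j in (-1, 0, 1):
            total += STENCIL[i + 1][j + 1] * f(x + j * eps, y + i * eps)
    return total / (6.0 * eps**2)

# Weights must sum to 0 (Laplacian of a constant is 0) ...
weight_sum = sum(sum(row) for row in STENCIL)

# ... and on f = x^2 + y^2 the Laplacian is 4 everywhere, which pins
# down the 1/6 normalization.
check = laplacian(lambda x, y: x * x + y * y, 2.0, 3.0)
```

As a bonus, a harmonic function like x squared minus y squared should give exactly 0.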
So while we're here, I should mention-- I mentioned already that the Laplacian is the lowest order linear operator-- differential operator that's rotationally symmetric. Here is a non-linear one. So this operator is rotationally symmetric. What do I mean by that? I mean that if you rotate the coordinate system, for example over here, and you compute the square root of e sub x dash squared plus e sub y dash squared, you'll get the same answer-- so it doesn't depend on the orientation of the x-axis. And so this is lower order, of course, than the Laplacian because it's first order, but it's not linear. Nevertheless, we run into this quite a bit because, remember, Roberts used these stencils, which you can just think of as ways of computing the derivatives in the rotated coordinate system. And then he took the square root of the sum of squares, and that was his edge detector. And so it's equivalent to doing this. And for his purposes, that took less computation. And he already knew in 1965 that what you want to do is make sure that your ex and ey refer to the same point in pixel space, a lesson which has since been forgotten, except here. So that was the front end. And this has to be very efficient because it's run on every pixel. And it also lends itself to special purpose hardware, of course. So the next step was our subpixel edge detection method. So we used what is called non-maximum suppression. So this is weird terminology. Why not just say finding the maximum? But there it is. So where did that terminology come from? Well, the idea is that we apply this edge operator everywhere, and in most places it has a feeble response, but on the edges it really kicks up. And so one approach would be, let's just threshold. So if we get a strong response from one of these molecules here, then we're on the edge, and if not, then we're not. Well, that involves early decision making, because once you've made that decision, that's it. 
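A quick numerical check of that rotational symmetry claim (a sketch, not from the patent): the gradient magnitude is unchanged when the gradient components are expressed in a rotated coordinate system, e.g. the 45-degree rotation implicit in the Roberts cross:

```python
import math

def gradient_magnitude(ex, ey):
    # sqrt(ex^2 + ey^2): non-linear but rotationally symmetric.
    return math.hypot(ex, ey)

def components_in_rotated_frame(ex, ey, theta):
    # The same gradient vector, expressed in axes rotated by theta.
    return (ex * math.cos(theta) + ey * math.sin(theta),
            -ex * math.sin(theta) + ey * math.cos(theta))

ex, ey = 3.0, 4.0
m_original = gradient_magnitude(ex, ey)
m_rotated = gradient_magnitude(*components_in_rotated_frame(ex, ey, math.pi / 4))
```

This is why Roberts could compute his derivatives in the diagonal coordinate system and still get the same edge strength.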
You'll throw away that edge point and never do any computation with it again because you've decided it's below threshold, or vice versa, you picked it as being above threshold. So in the patent, Bill Silver makes a big fuss about avoiding thresholds if possible and not making decisions too early. That's his main motivation for not using thresholds. So some previous edge detection work did work that way. You apply some operator which has a strong response on the edge, and then you threshold. And now you get responses. But they're not just right on the edge because we saw that the edge is a slow, smooth transition. So there'll be neighbouring points that also have a strong response. Plus, with noise, there will be points in the background where noise just happens to add up, and now there seems to be an edge there. So that's undesirable. So the previous methods worked by thresholding and looking for things that had a strong edge response. And here, instead what we're doing is we're going to remove everything except the maximum. But maximum in what sense? Well, again, just in the gradient direction. And so here we deal with the unfortunate fact that we're going to quantize the gradient directions. So we're only going to have compass directions-- east, northeast, north, northwest, west, et cetera, eight of them. And let's suppose that this is a quantized gradient direction. Then we step through the image and we consider those three values. And the non-maximum suppression says that we will accept this as a potential edge point only if this is true. Because if g minus was bigger than g0, well then that's going to be the edge point. And remember that the edge is running at right angles to this lot, so the actual underlying edge is like this. And we're looking in the gradient direction, and the gradient direction is of course perpendicular to the edge. 
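In code, the acceptance test might look like the following sketch. The exact comparison convention is my assumption; the lecture only specifies that the center value must beat its neighbours along the gradient direction, with an asymmetric tie-breaker, discussed next:

```python
def is_edge_candidate(g_minus, g0, g_plus):
    """Non-maximum suppression along the quantized gradient direction.

    Accept g0 only if it is at least as big as the neighbour behind it
    and strictly bigger than the neighbour ahead.  The asymmetry (>= on
    one side, > on the other) is a tie-breaker: if two neighbouring
    responses are equal, exactly one of them gets elected.
    """
    return g_minus <= g0 and g0 > g_plus

# Note there is no threshold here: feeble responses in the background
# also produce candidates, and deciding about them is deferred.
```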
And then we talked about how there's this asymmetry, because occasionally we're going to find that g0 is equal to g minus or g0 is equal to g plus, and we don't want to declare both of them to be edge points. We want to have a tie breaker so that only one of them gets elected. And which one? Well, it's arbitrary. You could equally well have done it the other way, as long as there's a way of breaking that tie. And then we said now we can plot the profile of this edge response along the gradient direction, and we get a picture like this. And then we can fit some curve to it. For example, we can fit the parabola to it, and we find the peak of the parabola. And that's our subpixel edge position. So several points there. One of them is, why fit a parabola? Well, it's arbitrary. The shape of that curve depends on the optics, the image sensor, the thing you're looking at. But we only get three samples of it. So treating it as a smooth curve, the lowest polynomial that will work is second order, so one option we have is to use that. Not to say that that's the only option or the best one, but it's a pretty good guess. Next thing is, what's s? Well, s is the displacement from g0. So in here, s equals 0 would be that point. And then if, say, we get over here, then s is a 1/2. We're halfway to that point. And obviously, it doesn't make sense for s to be bigger than a 1/2 because then this would have been the maximum. And same in the other direction. Notice that s is not in units of pixels because, in this diagonal case, s equals 1 is that distance, which is the square root of 2 times the pixel spacing. Whereas if we happen to get the quantized direction over here, then s would be the pixel spacing. So that's something to keep in mind, that it all depends on the actual gradient direction. Avoid thresholds. Yeah. And so now we have a potential edge point. And notice we're not doing any thresholding. 
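Fitting the parabola through the three samples gives a closed-form peak offset. A sketch, with s measured in steps along the (possibly diagonal) gradient direction, as noted above:

```python
def parabola_peak_offset(g_minus, g0, g_plus):
    """Offset s of the peak of the parabola through (-1, g_minus),
    (0, g0), (+1, g_plus).  If g0 survived non-maximum suppression,
    s always lands in [-1/2, +1/2]."""
    denom = g_minus - 2.0 * g0 + g_plus
    if denom == 0.0:
        return 0.0  # flat profile: no curvature to interpolate
    return 0.5 * (g_minus - g_plus) / denom
```

The fixed points match the lecture: a symmetric profile gives s = 0, and if one neighbour equals g0 the peak lands exactly halfway toward it, s = plus or minus 1/2.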
So we're going to get these points all over the show, not just on the edge. But we haven't done any thresholding. So here, we mark this place based on square root of 2 s delta, where delta is the pixel spacing, so just to be precise. So that's where we think the edge is. So I've drawn the edge here, but actually, now with subpixel interpolation, we find it's there. So if we just were to go with a peak in that curve on the discrete grid, then we'd find that the edge runs through that point, but now we know it's over there. But in order to get there, we quantized the edge direction. And so we can improve things slightly. So I suppose I should draw that again and make it less messy. So here is our quantized direction, and here is some point we found that's an edge point. And then suppose that the actual gradient direction is slightly different. It can't be hugely different because then we would have picked a different compass direction. So now that's the gradient direction, so the edge has to be perpendicular to it. So I can draw a perpendicular to the green line that passes through this point. So the edge actually is like this. And when I report a potential edge point, I can report any point on that line. And which one do I pick? Well, the simplest one is just to pick the one I calculated. I can, however, slightly improve the result by instead picking this one. Why? Well, it's closer to the origin. It's closer to the actual peak, and so it's less likely to be subject to noise. It's not a huge improvement, but they decided this was a worthwhile additional step. It also actually aids in a step that we will be getting to in a second where we chain together the edges. So that's the idea. So we project from the quantized gradient direction down onto the actual gradient direction there. That's the plane position [INAUDIBLE]. What next? Oh. So then we get to the bias compensation. So we said that we somewhat arbitrarily picked the parabola. 
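That projection step can be sketched as follows (names are mine, not the patent's): the candidate point p was placed along the quantized direction, the true edge line passes through p perpendicular to the actual gradient g, and we report the point on that line closest to the origin, which is just the projection of p onto the gradient direction:

```python
def project_onto_gradient(p, g):
    """Point on the line {through p, perpendicular to g} that is closest
    to the origin: the projection of p onto the direction of g."""
    px, py = p
    gx, gy = g
    t = (px * gx + py * gy) / (gx * gx + gy * gy)
    return (t * gx, t * gy)
```

Moving to this point changes nothing along the edge itself; it only removes the component introduced by quantizing the gradient direction, which is why it is a small, safe correction.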
And then, as you know, in the patent there's a second method which uses a rooftop triangular shape. It uses that. It fits that and finds the peak of that, and that gives you another possible position for where the edge really is. And we said that in certain circumstances one may be more accurate-- give you a more accurate answer than the other. But what you can imagine doing is actually experimentally moving the edge by very small increments, tiny subpixel increments, and seeing what your method gives you and then plotting that against what it should have been. So this is an experiment where you have a camera looking at an edge, and then you move the edge or the camera by some tiny amount in increments and you measure. And now, ideally, your magic method for peak finding should always give you the correct peak value. But it may not. So ideally, you'll get a 45-degree line. And again, we're only interested in the range from minus 1/2 to plus 1/2. And all of these methods should give you the correct answer in three cases, no matter what you do. If your peak is actually at g0, then it should return that position-- s equals 0. And similarly, if you were halfway between pixels, it should give you the-- and both of these methods do, they satisfy that requirement. But what happens in between? Well, as I mentioned, that depends on exactly what shape the edge has. And you might end up with something like this, or maybe something like that. So typically, the departures from the diagonal will be quite small, and typically they'll be quite smooth. And so you could keep a lookup table of this or something to compensate for it. But it's not really worthwhile. It's a relatively small correction again. But in going down to-- the aim is 1/40th of a pixel accuracy. So if you think about it, that's quite amazing that you can do that. And one part of it is this plain position correction, and one part of it is this. 
And so what is done is to approximate whatever that shape is with this function. And so, for example, for b equals 0, we get s prime equals this. So that's just the diagonal line. That's the ideal case. For b greater than 0, then that means that s prime is s raised to a power greater than 1. And so that means it's going to bow this way as the red curve does. And if b is less than 0, that means that s prime is s raised to a power that's slightly less than 1. So that's approximating square root. So that's this bowed upper curve, which I'll show in green. So that's green. And the other one is red. So what's the point? The point is that this is a small correction, so it doesn't really pay to try and be too fussy. But you want to do it. And this is a one-parameter fit to the type of curve that-- B is the one parameter. And so you could calibrate it based on this method, get one value of b. You could calibrate it based on this method, get a different value of b. Then you notice that if we happen to be working east-west, the spacing between pixels is much less than if we're working north-east. And so it turns out you want a different value of b for this case than you do for that case. Fortunately, there are only two cases. They're the ones that are east-west, north-south, and then the ones in between. But you can use a different value of b for those two. And you again, you could do something more elaborate, but since it's a small correction, it's only going to affect a small fraction of a pixel. Also, you don't want to be too clever here because this curve is going to change a little bit with circumstances. If your camera is slightly out of focus, you'll get a slightly different result. If the corners of the cardboard box you're looking at are somewhat damaged, then you'll get a different edge response, and so on. So you don't want to be too overly clever. Now, a lot of this depends on the actual edge transition. 
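As a sketch of that one-parameter family (this exact functional form is my assumption; the lecture only says s prime is s raised to a power controlled by a single parameter b, with the endpoints pinned), one convenient choice keeps the fixed points at s = 0 and s = plus or minus 1/2 for every b:

```python
import math

def bias_model(s, b):
    """One-parameter bias curve: s' = sign(s) * 0.5 * (2|s|)^(1+b).

    b = 0 gives the ideal diagonal s' = s; b > 0 bows the curve below the
    diagonal, b < 0 above it.  s = 0 and s = +/- 1/2 map to themselves
    for any b, as required of any peak-finding method.
    NOTE: an illustrative assumption, not the patent's exact formula.
    """
    if s == 0.0:
        return 0.0
    return math.copysign(0.5 * (2.0 * abs(s)) ** (1.0 + b), s)
```

In use, you would calibrate one value of b for the east-west/north-south directions and another for the diagonals, exactly as described above, since the sample spacing differs by a factor of square root of 2.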
And we drew one just by hand and then came up with these methods for finding out where the edge actually is. But realistically, what's causing these edges to be fuzzy? We already said that it's a good thing they are fuzzy. Otherwise, we wouldn't be able to do the subpixel recovery. We'd be suffering from huge aliasing problems. So it's a good thing they're fuzzy, but why are they fuzzy? Well, one reason is the defocus. So let's just look at that as a special case that's of interest for other reasons. So here's our lens, and the object's up there. And suppose that it's a point light source. And this is the in-focus plane where that distant light source, star maybe, is imaged as a point. But our camera has the image plane slightly off, and so the picture will be slightly out of focus. So what did I call this? So this is f, and I guess I call this delta. So when I look at the image of that star, it's no longer an impulse, a point. It's a circle. So uniform brightness. And if I want to plot it as a function of x and y, it would look like this. And I don't know, I call this a pillbox. I guess people don't have pillboxes anymore, but it's a cylinder of constant height. And if I want to describe it mathematically-- so what's that? Big R is the radius of the little pillbox. And I divide by pi R squared because the same energy is being deposited into that area no matter how out of focus I am. So if I'm in focus, it's all concentrated at one point. If I'm out of focus, that same amount of light is spread into a larger and larger circle. And so I compensate for that by dividing by the area of the circle. And then what's this? So this is the unit step function. So for little r less than big R, this will be u of some minus quantity. So that'll be 0. So I get 1 minus 0. And so this will just be 1. And then when I get out to the radius, if I go past the radius, if little r is bigger than big R, the step function is 1 because this will be greater than 0. And so I get 1 minus 1 is 0. 
So it's just a fancy way of saying the same thing as that diagram. The other thing I need is what is big R? How big is it? Well, it's obviously going to depend on how far out of focus I am. So it's going to depend on delta, and it's going to depend on the size of the lens. Something like that. It's just similar triangles again. Oh, f, which is this distance here from the lens to the in-focus image plane. So obviously, as I go more out of focus, as I move the image plane further up, this radius gets bigger and the brightness gets less because I'm now putting the same energy into a larger area. And so this is called the Point Spread Function, PSF, for this system when it's out of focus. And this is used a lot in understanding the effect of being out of focus. We think of it as blurring. And of course, we can think of it in terms of the Fourier transform, in terms of removing some or suppressing higher frequency content. And it's the higher frequency content that makes things look sharp and, in our case, blurs the edge. So let's see what the effect is on the edge. Well basically, we're going to have a response which we can calculate geometrically by superimposing the edge and the circle. So let's take a simplest case where there's black on one side of the edge and white on the other. And now we have a circle of a certain radius. And what are we looking at? Well, we're looking at this overlap. That's going to be what controls how bright things appear. And then we're going to move this thing across the edge to see how the response varies. So we take this disk and we slide it across. And obviously, until it touches, nothing happens. There's no output. And also, obviously, once we get over here, the output is constant. It's 1. Nothing more changes. So there's a transition between x being minus R and x being plus R where there's some change between 0 and 1. And that's what we're trying to calculate. Well unfortunately, it's not quite that simple. 
So we can just write the answer out by inspection, but we can get it this way. There's probably a formula for a sector of a circle like that, but I don't know what it is, so let's do it using things we do know. So a couple of things we know. We know how to compute the area of a triangle. Well, probably know several ways of computing the area of a triangle. And we also know how to compute the area of a sector of a circle. So let me draw that here. So that's a sector of the circle. That's the triangle I'm looking at. And it's obvious that the quantity I want is the difference. It's what's left over when I subtract that triangle from that area. So this is R, and I'm going to call this angle theta. And this thing here is obviously the square root of R squared minus x squared. So x is the position. x equals 0 means that you're right on top of the edge, that the circle is bisected by the edge. And then x can get as large as plus R before it saturates or minus R before it saturates. So the area of the sector is R squared theta, this sector here. And we can check. Theta is the half angle of the sector, and so it can only get as large as pi. And if it's equal to pi, that means we've covered the whole circle and we should get the area of the circle, which is pi R squared. So R squared theta checks out. And then we have to subtract from that the triangle area, which is-- it's 1/2 the base times the height. So the base is 2 times this quantity because this quantity is just that section. So that's the base. And then the height is x. And so 1/2 base times height gives me that. And theta is given by this quantity. And so I end up with-- the details aren't too exciting here. I'm not going to expect you to remember this. But we can plot that and see what transition it gives us. 
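Putting the sector-minus-triangle computation into code (a sketch, with my sign convention that x is positive on the bright side of the edge), the fraction of the blur disk lying on the bright side gives the edge response:

```python
import math

def defocus_edge_response(x, R):
    """Brightness across a unit step edge blurred by a pillbox PSF.

    x is the position of the blur circle's center relative to the edge
    (positive on the bright side), R the blur-circle radius.  The bright
    area is a sector of area R^2 * theta minus a triangle of area
    x * sqrt(R^2 - x^2) (up to sign), normalized by the disk area pi R^2.
    """
    if x <= -R:
        return 0.0
    if x >= R:
        return 1.0
    return (R * R * math.acos(-x / R)
            + x * math.sqrt(R * R - x * x)) / (math.pi * R * R)
```

The transition runs from 0 at x = minus R to 1 at x = plus R, passing through 1/2 when the circle is bisected by the edge, and its slope at the center is 2 over pi R.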
And what's more, we can then feed it into this algorithm of this patent to see how accurately it will determine the edge position. And in particular, we can plot the diagram that I just erased here which had s versus s prime. In other words, if this is how the edge is formed, if this is why the edge has that smooth transition, then this will allow me to calculate the error. And the error ought to be pretty small, but it's non-zero, and if you want high accuracy, you have to take it into account. So another way of looking at this is to plot this diagram. So where does that come from? That looks like a circle, except it's elliptical because it's been increased in height by 2. Well, if you think about it, when I move this edge or the circle, then I am adding or subtracting an area-- infinitesimal area that has a height equal to this quantity. Or in other words, just twice the height of the circle there. So actually, for some purpose, I don't need to do all that hairy math. I can just look at that diagram and immediately write down that the brightness derivative is that. And oh, it has a peak at 0. Well, we expect that. While I'm here, I can look at the second derivative. And it looks like that. And it's minus 2x over pi R squared times the square root of R squared minus x squared-- it's just the derivative of that, which is pretty easy. But what's E? Well, it's the integral of this thing. And we just did that integral in a somewhat painful way. But it's probably just as well because I don't remember what the integral of that is, and there it is, I think. So what can we do with this? We can now feed this into the algorithm and say, if this is how-- why the edge is smoothly varying because it's out of focus, then this is the relationship between the true position of the edge and the one I compute by, say, using the parabola argument. And therefore, I can compensate for it. Now of course, in a real imaging situation, there'll be more than just the defocus of the lens, so it probably doesn't pay to be too careful about this. 
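We can actually carry out that exercise numerically. The sketch below samples the elliptical derivative profile at three grid points around a known subpixel edge position, runs the parabola fit, and measures the bias. The R = 2 blur radius and all the names are my choices:

```python
import math

def gradient_profile(x, R):
    # First derivative of the defocus edge response: half an ellipse.
    if abs(x) >= R:
        return 0.0
    return 2.0 * math.sqrt(R * R - x * x) / (math.pi * R * R)

def parabola_peak_offset(g_minus, g0, g_plus):
    denom = g_minus - 2.0 * g0 + g_plus
    return 0.0 if denom == 0.0 else 0.5 * (g_minus - g_plus) / denom

R = 2.0          # blur-circle radius, in units of pixel spacing (assumed)
s_true = 0.2     # true subpixel edge position relative to pixel 0

# Edge operator responses sampled at grid offsets -1, 0, +1:
g = [gradient_profile(i - s_true, R) for i in (-1, 0, 1)]
s_hat = parabola_peak_offset(*g)
bias = s_hat - s_true   # small but nonzero: the parabola was arbitrary
```

The estimate lands close to 0.2 but not exactly on it, which is precisely the departure from the 45-degree line that the bias compensation step is meant to remove.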
Now suppose that you're a patent infringer, or you're trying to infringe this patent. Then there are a number of ways to go. Basically, you look at the components of each claim and you see if there's one of the components that maybe you can avoid. Maybe you can do it in a different way. Or maybe you can do it in a better way. But just arguing whether something's better or not doesn't help. It has to be different. Now, there are some things here that aren't that pleasant. One of them is this quantization of gradient directions. And the reason it's not that great is because it introduces the awkwardness that the spacing comes in two sizes, pixel spacing and square root of 2 times pixel spacing. And these effects of defocus, et cetera-- are they on the pixel grid? They're in units of pixel spacing. And so now we're sampling it in two different ways, so we expect to get slightly different error contributions. So how can we avoid that? So here are a couple of ways. So the idea is we don't want quantized gradient directions. And so suppose our gradient is-- now, if we follow the preferred method in the patent-- Again, notice that that method doesn't show up in any of the claims, which is good because that means that you could easily circumvent the patent. But it is the preferred method that's shown in the specification. Then we would quantize this, and we would work with the diagonal. But let's instead say, well, how about this? How about if I figure what-- if I knew what the value was there, I know what the value is here. How about this arrangement? So this could be G0, and that's G plus, and that's G minus. Then I would avoid the quantization of gradient direction. And how do I do that? Well, I only know the values on the actual pixel grid, and so-- ta-da-- I interpolate. So I use interpolation to go from the values on the grid to the value over here. And because I'm interpolating along this line, I can actually easily just use a 1D linear interpolation.
So how does that work? Well, if I have a function, then I can say, well, I know the value here, I know the value there. And let's say it's a straight line in between. And then the formula for this is-- you can come up with that yourself. But you can easily check it. First of all, it's linear in x. And then at x equals a, we get only this term. And the b minus a cancels the b minus a. And at x equals b, this term drops out, so we only have that, and the b minus a cancels again. So it's easy to do that in 1D. And of course, you can extend that to so-called bilinear interpolation in 2D, but we don't actually even need that here. So then we approximate the value here by interpolation. And we can use more sophisticated methods of interpolation. For example, we can use cubic spline interpolation, which we won't talk about here, but it involves more points. That gives you an interpolation that's a smooth cubic curve, which in some cases is more accurate. And then we perform whatever we did, and we end up somewhere there. And we don't need the plane position step because we're actually on the gradient direction. We're exactly where you want to be. So you wonder, why didn't they do that? Well, they don't say, but two reasons that I can think of. One is that before, we complained because sometimes we were using pixel spacing and sometimes square root of 2 times pixel spacing; well, now we can have anything in between, not just those two values. And that correction graph, the bias graph, will be different for all of those values. So you've got to-- I don't know-- quantize it, build a lookup table. Not insurmountable problems, but an extra hassle. And you wonder, is it worth it? That's one reason you might not choose this. The other reason is that you did an interpolation, and how accurate is that? You've gone away from the values that you actually know for sure to something that you interpolated using a method that you chose in some arbitrary fashion. How good is it?
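The interpolation step just described can be sketched in code. The grid layout and the names G and theta are my assumptions (not the patent's notation): with the gradient direction within 45 degrees of the x-axis, the ray through the center pixel crosses the neighboring pixel columns between two grid points, and a 1D linear interpolation gives the values G plus and G minus there.

```python
import math

def lerp(fa, fb, t):
    # 1D linear interpolation: t = 0 gives fa, t = 1 gives fb
    return (1.0 - t)*fa + t*fb

def neighbor_gradients(G, theta):
    # G[row][col]: 3x3 gradient magnitudes, center at G[1][1], row 0 on top;
    # assumes 0 <= theta <= 45 degrees, so the ray crosses the columns at x = +/- 1
    t = math.tan(theta)                   # vertical offset where the ray crosses x = +1
    g_plus  = lerp(G[1][2], G[0][2], t)   # between the center row and the row above
    g_minus = lerp(G[1][0], G[2][0], t)   # the symmetric point on the other side
    return g_plus, g_minus
```

For a gradient field that is linear in position, the interpolated values are exact, so any bias comes purely from the curvature of the real data.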
So that'll introduce some error as well. Now, in many cases that error wouldn't matter. It'd be so small. But if you're going for a 40th of the pixel accuracy, even that small error can be significant. So that's one method, one alternative. And one of its problems is changing size of this step from a pixel to square root of 2. Well, what if we get rid of that and we just have a fixed size? So here's again our pixel grid. We know the values at all of the intersections of these lines. And this is G0. And now we say, well, let's just draw a circle. And again, we'll pick some gradient direction. And now, we use that value and that value and that value. And those are equally spaced. They're always the same distance from each other, no matter what the gradient direction. So I've now dealt with two issues. The one is there's no quantization of the gradient direction, and there's none of this change of the-- what does x equals 1 mean? Before, that could take on two values, and over here it could take on a range of values. Well, here it's always the same. It's the pixel spacing. Of course, I don't know the value here, so I have to use interpolation, which is now a 2D interpolation. And so I can use bilinear, or I could, for example, use bicubic. And again, why didn't they do this? Well, it's extra work. Particularly with the bicubic, we need to take into account more pixels than we have here, go out to a 5 by 5 grid, not just the 3 by 3 grid. Bilinear is not terribly accurate, so we may not want to use that. Anyway. So you can see that, even though a lot of things are described in the patent, there is also some hidden stuff going on. They probably experimented with all of this and picked something that was very accurate and yet simple, and that's how we got to that method. Now, the last thing I want to talk about today is multiscale. They do point out in this patent that you might want to do this at multiple scales. They don't make a big fuss of it here.
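The circle variant needs a true 2D interpolation, since the sample points no longer lie on a grid line. A generic bilinear sketch (this is the standard formula, not anything specific from the patent):

```python
import math

def bilinear(F, x, y):
    # F[row][col] on an integer grid; interpolate at (x, y), 0 <= x, y < size - 1
    c0, r0 = int(math.floor(x)), int(math.floor(y))
    dx, dy = x - c0, y - r0
    return ((1 - dx)*(1 - dy)*F[r0][c0]     + dx*(1 - dy)*F[r0][c0 + 1]
          + (1 - dx)*dy      *F[r0 + 1][c0] + dx*dy      *F[r0 + 1][c0 + 1])

def sample_on_circle(F, cx, cy, theta):
    # sample one pixel spacing away from (cx, cy) along the gradient direction theta
    return bilinear(F, cx + math.cos(theta), cy + math.sin(theta))
```

Bilinear interpolation reproduces any field that is linear in x and y exactly; the inaccuracy the lecture mentions shows up only when the data has curvature, which is where bicubic (and its 5 by 5 support) earns its extra cost.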
But we already mentioned that that's something you want to do. There are some edges which are very sharp, and they will be best found at the highest resolution. And then there are some other edges that are kind of blurry, either because they are out of focus or because their object doesn't have a sharp transition. If the object has a smoothly turning curvature, then there will be a transition in brightness from whatever the brightness is for one planar surface to the brightness for a different planar surface. But there are all these in-between values. So the transition won't be on one pixel. It'll be spread out over many pixels. Let's see. Do I want to do this now, or-- we haven't talked about CORDIC, so-- I'm just looking at the time. We don't have much time, so why don't I do CORDIC? So this was the method to go from Ex, Ey to E0, theta-- so it's just a Cartesian to polar coordinate transformation. And square roots take some computing. You'd have to probably use a lookup table to speed that up. So what is their preferred implementation of this? So the idea is that we rotate the coordinate system. So we have some gradient, Ex, Ey. And if we knew that angle, then we could just rotate it down, and then the length of it, Ex prime, would be E0, and Ey prime would be 0. So we're aiming for the situation where E0 is just Ex prime and Ey prime is 0. Now, we don't know theta, but we can rotate using some test angles and, by an iterative method, keep on improving, keep on coming closer to this situation where the y component of the gradient is reduced to 0. So the superscripts in parentheses again refer to the iteration number rather than being powers. And of course, we know in 2D a rotation matrix is just very simple. And so one thing we can do is have a sequence of thetas that we try.
And when we're finished with this iteration, when we decide to stop, the answer for the angle is just the sum of all of those increments we make. So you can imagine various strategies. Like you can try an angle, and suppose that this goes the wrong way. Well, then maybe take 1/2 that angle and try that. Or you could try turning through that angle positive and through the angle negative and compare the two results, see which one works better. And then keep on reducing the size of the angle until you converge, or at least until you're happy enough with the results. So each time, reduce-- each step, reduce the magnitude of ey, and it will increase the other one. Because as we rotate closer to the x-axis, the projection of this onto the x-axis will increase. So that's the iteration. And how do we pick the thetas? Well, we could do this. Theta 0 is pi over 2. Theta 1 is pi over 4. Theta 2 is pi over 8. And so on. That would be one obvious approach. So we try and turn through these different angles and see if it-- and we accept the change if it satisfies this condition, that magnitude-- that ey is reduced. It doesn't have to be positive. So it could overcompensate as long as the magnitude is reduced. And you can always flip it to make it positive again if you want to. So you can do that, but it's expensive for two reasons. One is you got the cosines and the sines, but you could build a table. Well obviously, you just store the cosines and sines of these angles. The other one is each time you need four multiplications and two additions. And it doesn't sound like much, but remember, this happens several times because of the iterations, and it happens at every pixel. So it's expensive. So how do you avoid multiplication? Well, what we can do is pick the angles very carefully. Suppose that we picked the angles so that they are inverse powers of 2. Then that matrix there, the rotation matrix, just becomes this. 
And the matrix part of this is very easy to compute, multiplication by 1. Doesn't cost us anything. And this is just a shift, and shifts are extremely cheap. And then we have an addition. So it reduces the whole thing down to two additions from four multiplications and two additions. Well, then we have-- and we do this repeatedly. And you can see that the angle we're turning through gets smaller and smaller. What is that angle? Let's see. We can compute it from that formula. After a while, it basically gets halved. Initially, because of the non-linear nature of trigonometric functions, it's not halved, but eventually it becomes half. And then when you're done, you end up over here. You end up with a product of all of these cosine theta i's. And so what do you do with that? Well, maybe you don't care, because it's just a constant multiplier. But suppose you do care. Then you can pre-compute it. And actually, it converges very fast-- the product of the cosines comes out to about 0.607, so its reciprocal, the so-called CORDIC gain, is about 1.647. So you can just use that constant. So that's the basic idea of CORDIC, that we rotate, but through special angles that have this property, where the tangent of theta i is 1 over 2 to the i. Well, it takes a bit longer to explain it in the paper, but that's the basic idea. And so the iterative process is you rotate through that angle, and if it improves the answer, you keep it. And then you keep track of whether it got negative or not. And the first thing you have to do is get it to the first octant. The whole idea here is that you're working in this regime, but that's obviously trivial. You just look at the signs of x and y and whether y is greater than x, and you can reduce it to the first octant. So next time, we'll talk about multiscale, and we'll talk a little bit about sampling and aliasing. Of course, that's part of the multiscale story. So again, please start early on the quiz. It's more work than the typical homework problem.
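Putting the CORDIC discussion together, here is a minimal vectoring-mode sketch. It uses floating point just to show the idea; a real implementation would use integers, where the multiplications by 2 to the minus i become shifts.

```python
import math

def cordic_polar(x, y, n=24):
    # vectoring-mode CORDIC: rotate (x, y) onto the x-axis using pseudo-rotations
    # with tan(theta_i) = 2**-i; assumes x > 0 (reduce to the first quadrant first)
    angle = 0.0
    for i in range(n):
        t = 2.0**(-i)
        if y > 0.0:                      # rotate clockwise; the input angle was positive
            x, y = x + y*t, y - x*t
            angle += math.atan(t)
        else:                            # rotate counterclockwise
            x, y = x - y*t, y + x*t
            angle -= math.atan(t)
    # accumulated gain: product of sqrt(1 + 2**-2i), about 1.6468 for large n
    gain = math.prod(math.sqrt(1.0 + 4.0**(-i)) for i in range(n))
    return x / gain, angle               # (magnitude E0, direction theta)
```

Each pseudo-rotation skips the cosine factor, which is why the result must be divided by the precomputed gain at the end; that constant is the reciprocal of the product of cosines mentioned above.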
MIT 6.801 Machine Vision, Fall 2020
Lecture 20: Space of Rotations, Regular Tessellations, Critical Surfaces, Binocular Stereo
BERTHOLD HORN: We talked briefly last time about tessellating spheres in various dimensions. We found that representing rotation as unit quaternions was useful. And so in exploring that space of rotation, we're dealing with a sphere in 4D, and it'd be good to divide it up into equal areas. And in the process, we started talking about 3D, which is slightly easier, but which is also going to be important for us. And tessellations of the surface of the sphere can be based on the Platonic solids, which there are five of. And they have equal area projections on the sphere. So they're very nice from that point of view, but the division's kind of coarse. And so typically, we'll divide it more finely. So one way is to go to the Archimedean solids. And so here, we've got all of those. I guess in this view, there are 13 of them. As I mentioned last time, there are potentially 14 because this one is not equal to its mirror image. So you could have two of those with different orientations. And they have, again, regular polygons as facets, but this time, you're allowed to use more than one flavor of polygon. So here, we even use triangles, squares, and pentagons, and as a result, the areas aren't equal. So this is a pretty extreme example where we have triangles and dodecagons. So those triangles obviously have a much smaller area than the dodecagons, and so on. Anyway, we can base a tessellation on that, dividing up the sphere. Also, this is relevant to-- when we're talking about rotations, we're interested in the rotation groups of these objects. And let's see, is there another one somewhere hidden behind here? No? Well. And so there are 12 elements of the rotation group for the tetrahedron. There are 24 for the hexahedron. And there are 60 for the dodecahedron. So what about the octahedron and the icosahedron? Well the octahedron is the dual of the cube, so it has the same rotation group.
The icosahedron is the dual of the dodecahedron, so it has the same rotation group. And we'll define more precisely what "dual" means, but roughly speaking, replace a face with a vertex, replace a vertex with a face, replace an edge with an edge at right angles. So you can see how you can construct one of these out of the other. So that's not much-- we got groups of size 12, 24, and 60. And we're talking about the surface of a sphere in 4D, so that's a three-dimensional thing. And we're only dividing it up into 60 regularly spaced points. So we'll need more. Oh, by the way, one of these Archimedean solids is one that's commonly used and kicked around. Now, if we divide them up, we'll often start off with one of these solids, either Platonic or Archimedean. And here, for example, we've taken an icosahedron and divided it up into lots of triangular areas. And that's a popular thing. So here is a geodesic dome. And you'll notice that many of the vertices have six edges coming in, but there are a few that have five. And the precise rules affecting that-- there will be 12 of those with five, and everything else will be hexagonal. And it looks very regular, but actually, the facets aren't all the same size. Here's another one. So this one, actually in a way, is better for our purposes. When we divide the surface up, it's kind of making histograms on the sphere. And if we make a histogram in the plane, we'll often just use square tessellation, which isn't the best, but pretty straightforward. Over here, it's not so obvious what to use. But one of the things that's convenient about this type of tessellation is that it's not very pointed. So you don't have parts of a cell that are far away from the center compared to other parts. So actually, the tessellation, the triangular ones aren't that great, the ones I've shown you. This is a dual of a triangular tessellation. We replace the center of each triangle by a vertex, and so on.
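The "12 vertices with five edges, everything else with six" rule can be checked with Euler's formula. For a frequency-n subdivision of the icosahedron (each face cut into n squared small triangles before projecting to the sphere):

```python
def geodesic_counts(n):
    # frequency-n icosahedral geodesic sphere: 20 n^2 faces, 30 n^2 edges
    F = 20 * n * n
    E = 30 * n * n
    V = 2 - F + E          # Euler's formula: V - E + F = 2
    return V, E, F

for n in (1, 2, 3):
    V, E, F = geodesic_counts(n)
    # 12 vertices have 5 edges, the rest have 6; each edge has two ends
    assert 5*12 + 6*(V - 12) == 2*E
    print(n, V, E, F)
```

So the vertex count is always 10 n squared plus 2, and exactly 12 of those vertices are pentavalent no matter how finely you subdivide, which is why every geodesic dome has its 12 five-way hubs.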
And so in that respect, this one is a better tessellation. And of course, it gets bigger. So this was, I guess, the 1967 Expo in Montreal. They built this geodesic dome. And what was interesting about it was that it was union-built. And the architect had made sure that all of the links were labeled because they weren't all the same. But the difference was small. So the people working on it said, that's stupid. Let's just work with it. And it started not becoming a sphere because they hadn't followed the labeling. And so they had to actually take it apart again and start over again. I think this still exists. I was there a few years ago, and it was still there. And that, of course, is Buckminster Fuller. Here is, actually, a page out of Buckminster Fuller's original patent application of 1965, where he describes how to build these geodesic domes. And again, the geodesic dome for us has a number of features. One of them is that the facets aren't all exactly the same size, which is undesirable, and then they're triangular, which is undesirable. So we're likely to instead use the dual, which would have a large number of hexagonal or approximately hexagonal facets, and some number-- well, 12-- pentagonal facets. Enough of that. Then right now, we'll be talking about-- since I have the projector here, I thought I'd do this. We're going to run across certain types of surfaces that make some machine vision problems hard, called "critical surfaces." And they are hyperboloids of one sheet. So I need to talk a little bit about-- it's not very well-- I'll blow it up. So quadrics are surfaces defined by second-order equations. So we've got x squared over a squared plus y squared over b squared plus z squared over c squared is 1. That's an ellipsoid, shown over here. Special cases where a, b, and c are the same-- you get a sphere. And we're all familiar with that American football shape. We're going to be interested in this surface, which is a hyperboloid of one sheet.
And it's x squared over a squared plus y squared over b squared minus z squared over c squared equals 1. And this is a shape you may be familiar with from certain kinds of furniture made out of rods. And it has the feature that it's ruled-- that is, we can embed straight lines in the surface, which obviously, you can't do here. And actually there are two sets of rulings that are at an angle to each other. And so you can make this beautiful, smooth, curved surface out of straight sticks, which seems pretty strange. But there are parts of the world where this is used widely to make chairs, and tables, and so on. And then we have hyperboloids of two sheets, where there are two minus signs. So basically, you can classify them based on the signs. These are all positive. Here, there's one negative, and here, there are two negatives. And obviously, it's called "two sheets" because there are two surfaces. You can't get from one to the other. And let's see-- signs, so what's the other possibility? We could have no negative signs, one negative sign, two negative signs. How about three negative signs? What type of a surface do we get then? Well, we get an imaginary ellipsoid because it's all negative. How can it be equal to 1, unless you allow complex numbers-- hence the term "imaginary." So these are the sort of generic ones. Now annoyingly, there are a lot of special cases. And we already talked about the sphere. That is a special case, and that's pretty obvious. But then there's a whole slew of other special cases. So let's see-- so that's just the ellipsoid. We already saw that. Then there's a special case of that, where we're dealing with a cone. And then, we have where that neck of the hyperboloid of one sheet becomes infinitesimally small. And then, we have elliptic paraboloid. And this one we've already seen, the hyperboloid of one sheet. Then we have hyperbolic paraboloids. So these, you see, don't have a quadratic term in z. They just have a linear term in z.
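The ruled property is easy to verify numerically. On the unit hyperboloid of one sheet, x squared plus y squared minus z squared equals 1, one standard family of embedded lines starts at a point of the waist circle and moves along a fixed direction; every point of such a line stays on the surface:

```python
import math

def ruling_point(t, s):
    # point on the line through (cos t, sin t, 0) with direction (-sin t, cos t, 1);
    # t picks the line in the family, s is the parameter along the line
    return (math.cos(t) - s*math.sin(t), math.sin(t) + s*math.cos(t), s)

for t in (0.0, 1.0, 2.5):
    for s in (-3.0, 0.0, 4.0):
        x, y, z = ruling_point(t, s)
        assert abs(x*x + y*y - z*z - 1.0) < 1e-9   # lies on x^2 + y^2 - z^2 = 1
```

Replacing the direction by (sin t, -cos t, 1) gives the second family of rulings, tilted the other way, which is why the stick furniture has two crossing sets of rods.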
And then, that's the other one. And there are more special cases. And most surprisingly, one special case is planar, which seems weird because you've got a second-order equation. How can that be? Hang on a second. Well, imagine if you have two planes that intersect. How can you describe that surface? Well, one way, you could take the equations for one of them-- some linear thing equals 0-- and take the equation for the other one-- some linear thing equals 0-- multiply them, and that would describe that object. And obviously, when you multiply the two linear equations, you get quadratics. So enough pretty pictures. Let's talk about relative orientation and binocular stereo. So the problem we're interested in is computing 3D from 2D using two cameras. And we already said that if we know the geometry of these two cameras, it's relatively easy because if you have a point in the image plane where there's some feature, you can connect it to the center of projection. Gives you a line. You extend that line out into the environment, and the object is somewhere along that line. And now you do that for the left camera, and you do that for the right camera. You have two rays. And ideally, they'll intersect, and that gives you the 3D position of the thing you're looking at. So that's the easy part. In order to do that, you need to know the geometric relationships between the cameras. And that's what relative orientation is about. And so that's something that you would do ahead of time before you install this thing in your car and use it for autonomous vehicle purposes. But you might have to do it again on the fly because if the baseline isn't something incredibly rigid, it's quite possible that things get misadjusted when you use that vehicle. So we may need to do this calibration again. It's not just binocular stereo, though. The whole same machinery applies to structure from motion, where instead of a left and the right image, we have an image before and an image after. 
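The "extend the two rays and intersect them" step can be written as a small least-squares problem. With measurement noise the rays are skew, so a common choice (a generic sketch, not a specific package API) is the midpoint of the shortest segment between them:

```python
import numpy as np

def triangulate_midpoint(o1, d1, o2, d2):
    # rays p1(s) = o1 + s*d1 and p2(t) = o2 + t*d2; choose s, t to minimize
    # |p1(s) - p2(t)|, then return the midpoint of the connecting segment
    o1, d1, o2, d2 = (np.asarray(v, dtype=float) for v in (o1, d1, o2, d2))
    A = np.array([[d1 @ d1, -(d1 @ d2)],
                  [d1 @ d2, -(d2 @ d2)]])
    b = np.array([(o2 - o1) @ d1, (o2 - o1) @ d2])
    s, t = np.linalg.solve(A, b)          # normal equations of the 2-variable problem
    return 0.5*((o1 + s*d1) + (o2 + t*d2))
```

When the rays really do intersect, the midpoint is exactly the intersection; when they miss slightly, it is a reasonable compromise, though as noted later the error one really wants to minimize lives in the image plane, not out in space.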
And again, there's a duality-- did the camera move or the world move? Doesn't matter-- the same math. So let's look at the binocular stereo case, keeping in mind that it's just the same as the finite motion case. So infinitesimal motion is easier, but we're talking about a substantial amount of motion. So suppose that we have a left center of projection and the right center of projection, so those are the principal points of our cameras. And then there's a baseline. And we'll have a vector B that describes that baseline. And now, we're looking out at a point in the world, and we can determine where it is in the two cameras. From the individual camera images, we don't know how far along that ray is. But of course, if we have both, then we can find the intersection. And here, I'm working in the right-hand coordinate system. I have to pick a coordinate system. And I have several choices-- I could pick left, I could pick right, or for symmetry, I might pick the center one. But I do want to make sure, when I get the results, that they're not biased by my choice of coordinate system. So if instead I had picked the left coordinate system, I should get the same answer as if I picked the right coordinate system. So we'll check on that. So in this case, then, the baseline is measured in the right coordinate system. Then our r sub r is, of course, measured in the right coordinate system. And the prime on r sub l indicates that we've converted it from the left coordinate system to the right coordinate system. Let's talk about the geometry of this thing. So if I have a point out in the world, and I connect it to the baseline, that defines a plane. And I can think about the image of that plane in both camera systems. So suppose we have an image plane here and an image plane here. Then we expect to see something like this-- that this plane, which is defined by these three points, L, R, and P, projects into a line. And the point P is imaged somewhere along that line.
Or think of it another way-- suppose that I go into the left image, and I use my interest operator, like SIFT or SURF or something, and I define this point. What do I know? Well, I know that the world point has to be along that straight line. And then I wonder, what does that mean for the right camera? Well, I project that straight line into the right camera. And of course, I just get a straight line in the right camera. And so one conclusion is that when we search for correspondences, we only have to search along a line. So instead of trying to look for that interesting point all over the right-hand image, once we've figured out the geometry, we only have to search along that line. And that gives us some measure of disparity, which we can turn into a distance measurement. So those lines are called epipolar lines, where there's a correspondence between two sets of lines. One way to think about this is that there's some line here between L and R. And now, imagine there's a plane through it. And now we draw other planes. So for example, there would be another plane here. So think of all of the planes that have the property that they pass through that line. Or we could take one plane and just rotate it. Those are all the epipolar planes. And when we look in the images, we're intersecting the image plane with this arrangement. And so we're going to get a set of lines, each of them intersected with the image plane gives us a line. But the lines won't be parallel. So we're going to end up with-- if I look at the actual images, it'll be something like this. This is my left image. So if they're not parallel, they're going to intersect. And actually, they all intersect in the same place, if I've drawn it properly. And so this is my left image, and that's my right image. So what is this point in the left image, if you think about this geometric arrangement of the sheaf of planes in 3D? So that must be where all these planes come together. 
So that's the image of the right camera. So this is the image of R. And this is, correspondingly, the image of the left camera. Now, they don't necessarily have to actually appear in the part of the image that you're scanning. They could be outside the frame. And typically they will be, particularly if we make the camera's face more or less parallel out at the world. Then we would have this point move out that way, and this point move out that way, and these lines would be more like parallel. So here, it's like your eyes are converging. If you're looking at a nearby object, the two eyes are not parallel, but they're converging. So the way I've drawn it, I'm assuming they are converged slightly. They could also be parallel, in which case these points would move out. Or they could, in fact, be diverging, except that's not the greatest thing. Because then the overlap-- the part of this image that's actually appearing in that image-- is reduced. So part of the point of actually trying to converge is that you try and have as much overlap between the two images as possible. But it's not necessary. It just means that you don't have a lot of extraneous image information that's of no use to you for recovering depth. Now, actually, I said that's the image of the right-hand side. It's actually the whole baseline. The whole baseline images in one point in the left image, and in this point in the right image. And the other projection center happens to be on the baseline, so it ends up there, as well. And so one of the properties that we mentioned was that if we know the geometry, then there's a correspondence between these lines. And if you find anything on the line in the left image marked in red, then you only need to search along the corresponding line in the right image. So we're trying to find the relationship between these two cameras. And that involves, obviously, the baseline. That's a translation. 
In the equivalent motion vision case, there's a translation from that position to that position. And the other thing is the relative rotation of one camera relative to the other. And so we know that rotation has 3 degrees of freedom. So we might think that as in absolute orientation, it's a 6 degree of freedom problem. But it isn't quite, and the reason is the scale factor ambiguity we talked about. So if I take the whole world, and I just expand it by some arbitrary factor, the image positions won't be changed, because in perspective projection, we're dividing x, y, z, and if you expand both x and z, the result doesn't change. So from the binocular situation here, we can't get absolute size without some additional factor. And so that means that we can't get the absolute length of the baseline unless we have some additional thing. Now of course, in images, there might be other things like you've imaged a Walmart and you have an idea about how many acres of land Walmart occupies. Then you can scale everything according to that. But without that, we have to treat the baseline as a unit vector. And so we only have 2 degrees of freedom for that component. So there's a total of 5 degrees of freedom. So we have five unknowns, unlike six for absolute orientation. Then comes the question of how many measurements do I need? What is the minimum number of correspondences to pin this down? And it's hard to come up with a good heuristic argument. What's the mechanical analog? Well, the mechanical analog is that we take the image points, and we create a wire that goes in that direction, a wire that goes in that direction. And they have to intersect, but we don't know how far along. So imagine passing these two wires through a collar that forces them to intersect somewhere, but the collar is free to move all along. So if that's all I have, two rays, one correspondence, one ray from the left and one ray from the right, then it's obvious that I can play all kinds of games. 
I can change the baseline. I can move this around, rotate it. I mean it's constrained, but it's only one degree of freedom constraint. So that obviously won't do. So let's add a second correspondence. So we have a second image point, and we have a collar around there. And that's going to be more constrained, but you can already see a number of ways that this can't be enough. For example, you can draw this axis. And you can rotate the right camera and all of its wires coming out about this axis, and they will still pass through. So that reduces the degree of freedom, but it's not enough. So the answer is 5. And one hand-waving argument is that each correspondence gives you one constraint. And why is that? Well, if you compare the two images, you will often find that there is a disparity, and the disparity has two components. So this is a situation where we're superimposing the two images, left and right. And we're comparing the image position of some particular point. And let's suppose that we approximately line things up-- that the optical axes are parallel, more or less, and they're more or less perpendicular to the baseline, so that if things are infinitely far away, they would emerge in more or less the same place in the image plane. But in practice, they won't. And so there'll be an error, disparity, in the horizontal and the vertical direction. Now the horizontal "disparity," as it's called-- should put that in quotation marks, but anyway-- corresponds to depth. The closer the object is, the more those two rays have to cross over. The angle between them gets larger, and so the image position changes. If I don't converge my eyes-- I just keep my eyes parallel-- and then I bring an object closer and closer, then the difference between where it images in the two eyes gets further and further apart. And we started there. We had a formula where the depth was inversely proportional to that disparity. And so that's the horizontal disparity. 
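The "depth inversely proportional to horizontal disparity" formula from earlier in the course, as a one-liner (the symbols f, B, and d are the usual parallel-axis pinhole-stereo names, assumed here):

```python
def depth_from_disparity(f, B, d):
    # parallel-axis pinhole stereo: Z = f * B / d
    # f: focal length in pixels, B: baseline length, d: horizontal disparity in pixels
    return f * B / d

print(depth_from_disparity(500.0, 0.5, 125.0))   # -> 2.0 (same units as B)
```

This is also why the vertical disparity carries no depth information: it should be identically zero once the relative orientation is correct, so any vertical disparity left over is purely a constraint on the five unknown parameters.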
So that's the thing we're trying to use to get depth. But there could also be a vertical disparity, and what's that from? Well, it shouldn't be there if we had things properly lined up. It's an indication that the two cameras are not oriented the same way. And so the vertical disparity provides the constraint. And if we figure out the rotation and the baseline, we should be able to zero out the vertical disparity everywhere so that we don't ever see a vertical disparity. In actual use, once we've got everything set up, we should only have horizontal disparities, which are then inversely proportional to depth. And in order to set the instrument up, one thing we can do is tune out those vertical disparities. And that's how it actually worked for decades. People had very carefully made optical equipment. They'd plonk in two glass plates that were photographs taken from the plane, and there was a binocular-like imaging arrangement. And you set it up by noticing that this church tower is slightly higher in this image than that. And there was a sequence of moves where you would tweak out these five parameters in sequence, and iteratively. And one of the nasty features of it was that it might not converge. And so there was an additional component where you would record the five adjustments you made and plonk them into a formula which told you how to adjust things so that they do converge. So it was pretty painful. And it's too bad that there wasn't some easy solution. But we're going to find a solution-- unfortunately, not closed form. But in any case, nobody does it that way anymore. But it was interesting to think that there's this complicated mechanical Heath Robinson-like machine that had these knobs that you could tweak to find the parameters of this transformation in three-dimensional space. So that's the minimum number of correspondences we need. In practice, of course, we want accuracy. And so of course, we would like to have more points. 
And then there won't be any arrangement that makes them all work, so we'll do some sort of least squares thing, in that we will have the image positions match as closely as possible. So it's important that the error is in our measurement of image position. And so the thing we want to minimize is going to be the sum of squares of errors in image position, not something out in space. And we'll come back to that point. So in practice, we use more than five points. Five is the minimum. And there's an old problem, which is it's nonlinear. So how many solutions? And roughly speaking, we're dealing with second-order equations. And we end up with seven second-order equations, these five plus two more having to do with the baseline being a unit vector and so on. And so by Bezout's theorem, we might have as many as 2 to the 7 solutions, 128, which is kind of scary. And that's also one reason why you typically don't use five correspondences. You use more to try and get rid of that ambiguity. Anyway, it was for a long time not known what the actual answer was. And also, there are different ways of counting. But the true answer is 20. Now, Kruppa, a century ago, showed that there can't be more than, in his way of numbering, 11, which would be 22 by our counting. And it took almost a century to get it down from 22 to 20. But in a way, for us this is kind of a curio. It just shows you that it's nonlinear. And it shows you that you have to be careful when you get the solutions. But in practice, when you take many more measurements than needed, and when you have a rough idea of the arrangement-- I mean, you built this thing, typically, so roughly what its geometry is-- then it's not really a problem. Except for people interested in obscure theoretical-- not that there's anything wrong with it. It's a lot of fun trying to figure this out. So how do we find the baseline and rotation from correspondences? So we have our baseline again, L and R. And then we have some point out there.
And this is r l, and this is r r. This is the baseline. Now, one thing we can do is notice right away that those three vectors are coplanar. That's one of these epipolar planes. So what does that mean about the parallelepiped that I can construct from those three vectors? So maybe I should put arrows on these things. So we said that the volume of the distorted brick shape that we get by using those three vectors as edges-- so here is the construction I'm talking about. So you take these three vectors, and you build a solid with parallelogram faces, and the whole thing is called a "parallelepiped." And its volume is the triple product. And so what do we expect for that triple product based on that argument? It's a flat thing, so it should have no volume. Because these three vectors are coplanar. So this object isn't something that has three-dimensional volume. It's flat. And so in the ideal case, we expect that to be 0. And that's called the "coplanarity condition," and it's the basis of all of this stuff. So right away, you can imagine a potential least squares method. All we need to do is for every correspondence we have, we compute this triple product. And it might be positive or negative, depending on the directions of these arrows-- do they form a right-hand coordinate system or not? So we can't just add them up. But we can add up their squares. And supposedly, if there are no errors, the sum of those squares should be 0. So a potential method here would be: given our measurements with their errors, let's minimize the sum of those squares. So we could take this triple product, take the sum of squares. And if our estimate of the baseline and our estimate of the rotation, which is buried in here, is correct, then it should be 0. Now, that's a feasible approach. It's not particularly good because-- well, let's put it this way-- if you have absolutely perfect data, you will get the perfectly correct answer. But it's not a good method.
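The coplanarity condition is easy to check numerically. This sketch uses a synthetic camera pair and scene point (all numbers made up) and evaluates the triple product of the baseline and the two rays; the naive least squares method described here would simply sum the squares of this residual over all correspondences.

```python
def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def coplanarity(b, r_l, r_r):
    """Triple product [b r_l r_r]: the volume of the parallelepiped
    spanned by the baseline and the two rays.  Zero when they are
    coplanar, i.e. for perfect data and the correct b and rotation."""
    return dot(b, cross(r_l, r_r))

# Both rays to a single scene point P lie in one plane with the baseline:
L = (0.0, 0.0, 0.0)          # left camera center
R = (1.0, 0.0, 0.0)          # right camera center
P = (0.3, 0.2, 5.0)          # some scene point
b   = tuple(R[i] - L[i] for i in range(3))
r_l = tuple(P[i] - L[i] for i in range(3))
r_r = tuple(P[i] - R[i] for i in range(3))
res = coplanarity(b, r_l, r_r)   # exactly zero for this perfect data
```

With noisy measurements the residual is nonzero, and the naive objective is just the sum of res squared over all correspondences.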
And the reason is that it has a very high noise gain. And this is typical of quite a number of bad methods that have found their way into machine vision, where mathematically, they're right: if you give them the correct measurements, they give you the correct answer. But they are not practically useful because with a small error in the measurement, you don't get the correct answer, and you often get an answer that's wrong to quite a large degree. And so we need to do a little bit better than that. But we can start with that as a basis. So what are we really trying to minimize? Well, as I mentioned, the key is the image. This is where we make the measurements. And when we make the measurement, then we know that that point is known, but only within a certain accuracy. And for our edges, we were saying that if you do really well, we can determine them to 1/40 of a pixel. But whatever it is, whatever that quantity is, it's in the image that we are trying to match things up, not some triple product volume of something. It's true that if everything is correct, that volume will be 0, but it's not true that that itself is a good measurement of error. It is proportional to the error. So if we figure out the proportionality factor, we could use it. Well, let's go a little bit further, and say, so the measurements aren't perfect, or our baseline and rotation aren't perfect. And then those two things won't intersect. So let's see how that works out. So we have r l prime here. So here, the two rays are not intersecting anymore. And this minimum separation, that's a measure of error, so we could think about minimizing that. So let's take that idea a little further. So it's pretty easy to prove-- I won't bother-- that this minimum-separation segment has the property that it's at right angles to both of these. Because if it isn't, then you can move it closer and get a smaller distance. And so that means that this direction is perpendicular to both of these two vectors.
So it's parallel to the cross product. And so I can draw an equation that goes around this loop. So I start over here, and then I go around this, along this vector. Now, I don't know how far I have to go because I've only got the direction, remember. And I'm going to then add to that a vector that goes in this error direction. And that's going to be equal to going the other way, which is b plus, and then coming out along the right ray. So all I'm saying is if I add up the vectors on the left here, then they must equal the vectors on the right. Or I could have just gone around the loop, and said the result is 0. So that's an equation I'm interested in, partly because I would like to make this small. I would like those rays to intersect or almost intersect. So there I have got a vector equation. What do I do with vector equations? Well, the appendix teaches you some tricks. One of them is convert it to things we know a lot about, scalar equations. And how do you do that? Well, you take dot products. So we can turn this into a scalar equation. And actually, it's a vector equation. So it provides three different constraints. So we should be able to do that three times, and so take a dot product with three different things, and get three scalar equations, which correspond to that single vector equation. Now, when we do that, we'd like to pick something that makes a lot of these terms disappear, to simplify the equation. So we want to pick something that makes a lot of these terms drop out. And so for example, if we take a dot product with-- why is this good? Well, r l cross r r is perpendicular to r l and perpendicular to r r. So when you dot it with this term, that term goes away. When you dot it with that term, it goes away, because it's perpendicular to r r. So that's a simple one. And we get-- so the first and the last term drop out. And we're left with this. And this is nice. It's sort of intuitive.
It's saying that gamma, which is measuring the error, the gap between the two rays, is proportional to the triple product. And we know that when things work correctly, the triple product is 0, and then we expect gamma to be 0. So this is a way of calculating the discrepancy in the gap there. Then we can also use the same sort of idea. Let's multiply by something that makes a lot of terms go away, i.e.: it's perpendicular to some of these things. We've already taken care of r l cross r r. That knocks out the first and last term. What if we try and knock out these two terms? Well, that means we need to take a dot product with the cross product of those. And that gives us-- and so that's a way of calculating beta. So that's going to allow us to determine how far out along the ray we get to the intersection, or almost intersection. And similarly for alpha, that'll tell us how far we are out along the other one. And so this is the third of our dot products, and it gives us-- so this allows us to calculate the three quantities, alpha, beta, and gamma. And they're important. So first of all, gamma is the thing we're trying to make small. And then there's alpha and beta. And they give us the three-dimensional position. But there's another important point here, which is, are these things positive or negative? Now, in the case of gamma, it just means does the right ray go below or above the left ray? So that's not that interesting. We just want to make it small. But for alpha and beta, being negative is a real problem. See, we're dealing with equations for lines. Well, they don't stop or start anywhere, so we're actually looking at the intersection or near intersection of not just these line segments, but these two infinite lines. And depending on the geometry, it may end up that-- suppose these are turned outward-- they're actually intersecting behind you. And so mathematically, you get a negative alpha and a negative beta. 
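The loop equation and the three dot products can be bundled into a single small linear solve: alpha times r_l plus gamma times (r_l cross r_r) equals b plus beta times r_r is three scalar equations in the three unknowns alpha, beta, gamma, so instead of writing out each dot-product formula we can hand it to a 3-by-3 solver (it gives the same answer). A sketch with made-up rays, including the positivity check on alpha and beta:

```python
import numpy as np

def ray_gap(b, r_l, r_r):
    """Solve the loop equation
        alpha * r_l + gamma * (r_l x r_r) = b + beta * r_r
    for (alpha, beta, gamma): go alpha out along the left ray, cross the
    gap along the common perpendicular, and that equals going along the
    baseline b and beta out along the right ray.  Returns alpha, beta,
    and the signed minimum distance between the two (infinite) lines."""
    c = np.cross(r_l, r_r)
    # Columns are the coefficients of alpha, beta, gamma respectively:
    A = np.column_stack([r_l, -r_r, c])
    alpha, beta, gamma = np.linalg.solve(A, b)
    return alpha, beta, gamma * np.linalg.norm(c)

# Two rays that nearly, but not exactly, intersect (made-up numbers):
b   = np.array([1.0, 0.0, 0.0])
r_l = np.array([0.3, 0.0, 1.0])
r_r = np.array([-0.7, 0.01, 1.0])   # slight vertical (y) error
alpha, beta, gap = ray_gap(b, r_l, r_r)
# alpha > 0 and beta > 0: the near-intersection is in front of both
# cameras, so this candidate solution is physically plausible.
```

A solution with negative alpha or beta would put the "intersection" behind a camera, which is how implausible roots get discarded.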
And then you have to decide if that's physically reasonable in your case. It typically won't be because it means that you're imaging something behind the camera. So remember the formula-- big X over big Z-- if you make big X and big Z both negative, you still get some result. But what does it mean? You typically don't image stuff that's behind you. So we typically check whether in the solution, alpha and beta are greater than 0. And remember, I said that there could be as many as 20 solutions. Well, one way to throw out most of them is to calculate this. And a whole bunch of them will have negative values for alpha, or negative values for beta, or both. And so yes, in the mathematical sense those equations have up to 20 solutions, but some of the 20 solutions won't make physical sense. And so you can just discard them. So we've got gamma. Oh, I guess we haven't got the actual distance. Gamma is just the multiplier. So the actual error distance, d, is gamma-- but we keep on coming back to that same triple product. That's the key to everything. Now, what we'd like to do is have a closed form solution, where we give five correspondences, and we end up with five equations saying gamma equals 0, and solve them. But unfortunately, they're second order. So we've got five quadratics, and so far, nobody's come up with a closed form solution. And there probably isn't one because you can, somewhat painfully, reduce those five equations to a single fifth-order equation, a quintic. And we know that quintics don't have solutions in terms of addition, multiplication, division, and taking roots. So now, of course, from a number-crunching point of view, who cares? If you have a fifth-order equation, you can find its roots. But in the strict sense of asking whether there is a closed form solution, probably not. Although I haven't seen a paper making a clean proof of that. So what do we do?
Well, we know that we would like to minimize the error in the image, not this error out in the world. You can imagine that this could get huge. For example, if there's some object that's very far away, then that gap obviously gets large, even for a small error in image position. So this d, this quantity, is not the one we want to minimize. But it's proportional to it. And if the image position is correct, then d will be 0, and vice versa. So they're related, and we can take care of that just by some weighting factor. So triple product, and we're going to add it up. We're going to square it because it could have different signs. So we're going to minimize sum of squared error. And we're going to-- so what's that weighting factor? Well, the weighting factor is the conversion factor from an error out here to an error in the image. And if I know these distances, I can figure out what that weighting factor is. However, the formula for it is a mess, so I'll just not give it to you. It's in the paper if you really want it. And then I solve this problem. So the problem here is, minimize this with respect to b and the rotation, and subject to-- so unfortunately, it's not an unconstrained optimization problem. Now, how do I actually do this, because this weight depends on the solution. So I'm going to do this iteratively. So basically, that conversion factor between error in the world and error in the image changes with my solution. And so I will use this particular set of weighting factors, and solve this problem, and then recompute the weighting factors, and solve that problem. And fortunately, in most cases, it converges very quickly. And that makes the whole thing feasible. Without that, it's just intractable. Yeah, what is-- oh, sorry, it's my way of saying rotation, without specifying that it's a rotation vector or rotation matrix. So it's r parenthesis, dot, dot, dot. So this is a square of a triple product. 
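The solve-then-reweight loop described here has the same shape as iteratively reweighted least squares. Below is a deliberately tiny illustration of that loop on a one-dimensional problem; the 1/(1 + r^2) weight is just a stand-in robust weight for illustration, not the geometric image-error conversion factor from the paper.

```python
def irls_mean(xs, iters=20):
    """Toy version of the iterate-on-the-weights idea: solve a weighted
    least squares problem with the weights held fixed, then recompute
    the weights from the new solution, and repeat.  (The weighting rule
    here is made up for the demo.)"""
    mu = sum(xs) / len(xs)                       # initial guess: plain mean
    for _ in range(iters):
        w = [1.0 / (1.0 + (x - mu) ** 2) for x in xs]       # weights from solution
        mu = sum(wi * x for wi, x in zip(w, xs)) / sum(w)   # weighted solve
    return mu

data = [1.0, 1.1, 0.9, 1.05, 10.0]   # one gross outlier
mu = irls_mean(data)
# The reweighting pulls the estimate close to 1, down-weighting 10.0,
# whereas the plain mean is dragged up to about 2.8.
```

The relative orientation case is the same pattern: fix the weights, minimize the weighted sum of squared triple products over b and the rotation, recompute the weights from that solution, and repeat until it settles, which it typically does in a few iterations.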
And of course, we can expand the triple product in many different ways. For example, let me expand it some other way that's more useful. And now, we get to quaternions because we're dealing with this quantity, which is being rotated from the left coordinate system into the right coordinate system. And so we can introduce-- so if r l is the thing that I'm actually measuring in the left coordinate system, since everything over there is expressed in the right coordinate system, I have to rotate it into the right coordinate system. And I can do that using quaternions. So let's call this triple product t for convenience. And so I'm going now from vectors to quaternions, but these are quaternions with the special property that they have zero scalar part. So remember that if I have two quaternions that represent vectors, the formula for multiplication simplifies. It's just the dot product and the cross product. So then one of the lemmas we stated without proof was a way of moving one of these multipliers to the other side by taking its conjugate. And the next thing I'm going to do is define a new quantity, for convenience, which is the product of the baseline and the rotation. It sounds really weird, but this is a very useful quantity. So I take a quaternion representing the baseline, and why do I do this? Well, because it simplifies it and makes it symmetric. Yeah, I guess I'm mixing notations here. I'm sorry, so this is actually r l, and similarly here. It was coming from two different papers which use different notation. So what's my job? My job is to find d and to find q. And why is that enough? Well, because if I find d, then I'm done. I can recover-- so when I'm all done, I can recover b by doing this. So when I'm done, I just multiply d by q star. And that's equivalent to this expression. And then, of course, q times q star is the identity quaternion with zero vector part. And b times the identity is just b.
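The simplified multiplication rule for "vector" quaternions (zero scalar part), namely that the product has scalar part minus the dot product and vector part the cross product, can be checked directly against the general Hamilton product:

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def qmul(p, q):
    """General Hamilton product of quaternions given as (scalar, vector):
    (ps*qs - pv.qv,  ps*qv + qs*pv + pv x qv)."""
    ps, pv = p
    qs, qv = q
    cv = cross(pv, qv)
    return (ps * qs - dot(pv, qv),
            tuple(ps * qv[i] + qs * pv[i] + cv[i] for i in range(3)))

# Two quaternions with zero scalar part ("vector" quaternions): the
# product collapses to just (-dot product, cross product).
r = (0.0, (1.0, 2.0, 3.0))
s = (0.0, (4.0, 5.0, 6.0))
prod = qmul(r, s)
# prod[0] == -32.0 (minus the dot product), prod[1] == (-3.0, 6.0, -3.0)
```

This is the simplification that makes the quaternion form of the triple product manageable.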
So I have replaced the problem of finding b and q with the problem of finding d and q. That doesn't sound right, because those quaternions have lots of components. So b has four components and q has four components. So it sounds like we've got eight unknowns. And we know that the whole problem's constrained by 5 degrees of freedom. So what's going on? Well, there are some constraints. For example, we know the baseline is a unit vector. And it turns out we can constrain d to be a unit quaternion as well. And we can show-- we're not going to do that, but-- that these are perpendicular to each other. That's fairly easy to show. Oh, well, this is really the same. Forget that [INAUDIBLE]. So we've got that constraint, that constraint, and that constraint. So now the counting is right. We've got eight quantities, eight variables, and we have three constraints. So there are only 5 degrees of freedom. But it does make it much worse than absolute orientation, where the only constraint we had was that q be a unit vector, which was very easy to implement. Here now, we've got three quantities, three constraints. There are some interesting symmetries here that I want to just briefly talk about, including the strange thing that you can replace-- you can interchange the left and right coordinates, the left and right ray directions, which doesn't make any sense. How can that be? It's like somebody screwed up and gave you the data for the left eye and switched it to the right eye. Yeah. Well, I'm not going to say how to calculate the weight. Unfortunately, it's non-trivial. So again, what is the weight? The weight is the relationship between the error in 3 space where the two rays intersect and the error in image position. And we can obviously calculate it based on the rays and all of those things. But it's a slightly complicated formula. But the only important thing to remember is that we have to adjust the weight. So we do this calculation, we get an estimate of the baseline b and d.
And based on that, we recalculate the weights. Because depending on the orientation, the relationship between error in 3D and error in the image will change-- hopefully not a lot. And so this is, obviously, then dependent on having a good first guess. And so we'll have to talk about that because that's always a handicap if your iterative algorithm needs a good first guess. So we have a good first guess, we calculate the weights based on that. We solve this optimization problem. We go back, recalculate the weights, do it again. And typically, we don't have to do it too many times. So what's this about interchanging left and right? Well, it has to do with the idea that we're intersecting lines, not line segments. So when you interchange the left and right rays-- so here's our left and right ray drawn correctly-- and now suppose that somebody says, actually, I've mixed up the data, and I'm giving you that ray for the right eye and that ray for the left eye. Oh, they intersect-- if these intersect, these intersect. So that happens to be behind the camera, and so you would calculate alpha and beta and say they're negative, so this is not right. But there are several symmetries like this which can be useful in the numerical calculation. And so the triple product we're interested in is this thing. Hopefully, you got it there. And surprisingly, it's also that, where we've interchanged the d and the q. It's sort of like interchanging rotation and translation. That seems pretty weird. And if you don't believe it, you can expand it out in terms of components. This is an expression in terms of quaternions, but we can rewrite it in terms of the components of the quaternions, which of course are these vectors. And then you'll see that it's perfectly symmetric. So if you look at this, you will see a number of symmetries. One of them is between r l and r r. If I interchange left and right, nothing changes.
And the other one is if I interchange the rotation and the translation, d and q appear symmetrically everywhere. And why is this of interest? One reason is that it means that if you're searching the space of possible solutions, and you have an approximate solution, you can immediately generate other approximate solutions by making use of this symmetry. So altogether, I think there's a symmetry of eight, so that you will find your solution more quickly because everything you've worked out has eight different interpretations. And so by the way, that means if this is a solution, we know, of course, that this is a solution. Because minus q represents the same rotation as q, so that's not useful to us. But out of the formulas, you're going to get those as well. Then this is a solution and this is a solution. So this gives us a factor of 2 and another factor of 2-- together with the minus q, I guess that's where the 8 comes from. So when you solve this numerically, you may find that there are up to eight solutions. Now on Stellar, there are two papers that describe this process. And it's kind of annoying that it's this messy. It would have been wonderful if someone had figured out a closed form solution. I have to admit that I don't think there is one. I'm fairly convinced. And so we're stuck with this kind of numerical calculation. So how do you actually implement this? Well, one approach is to assume you know one of these two unknowns. It turns out if you fix one of them, then instead of being quadratic, it's linear. There's a simple least squares solution, closed form solution. And then assume that d is known-- you just calculated it-- and solve for q. And it's symmetric, so you would expect this to also be a simple least squares. And then of course, you'll have to do this again because now q has changed. And giving a recipe like this doesn't prove at all that it's going to converge. But it does. And I'm not going to prove that it does.
So that's a very heuristic, very simple method that works. You can do better by using some nonlinear optimization package. And a popular one is this one, mostly because there are free implementations, freely available on the web. And I briefly had a roommate when I started, Marquardt, and we had an interesting adventure where I drove him across the border to Botswana in the middle of the night. And then I never was in contact with him again. And he may be this Marquardt, but I don't know because he's apparently dropped out of sight, and not pursued science, and found something more lucrative to do or whatever. Anyway, this is a very nice package which allows you to solve nonlinear optimization problems. And basically, you just have to give a bunch of equations that are supposed to be 0. And you have a bunch of knobs, a bunch of parameters. And it will tune the parameters until those equations are as close to being 0 as possible. So it's like a black box-- you throw in your equations and hope for the best. But it requires that you have a non-redundant parameterization. I only mentioned this one now because so far, we've been able to do closed forms, so we didn't need to do anything like this. But this is useful not just for relative orientation. Lots of these kinds of optimization problems succumb to this approach. So what's the problem? Well, the problem is rotation. If we do it as a normal matrix, we've got that problem-- nine numbers and only three degrees of freedom. So one answer that is commonly given is Euler angles. And you know what I think about those. So let's x those out. One that's actually used some is the Gibbs vector. So the Gibbs vector is tan theta over 2 times omega hat, where omega hat is the unit vector in the axis direction. And obviously, it's a vector, has three numbers, 3 degrees of freedom. And unfortunately, it has a singularity at theta equals pi. 
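The singularity is easy to see numerically: the Gibbs vector's magnitude, tan(theta/2), blows up as theta approaches pi. A short check (axis and angles made up):

```python
import math

def gibbs(theta, axis):
    """Gibbs (Rodrigues) vector: tan(theta/2) * omega_hat, where
    omega_hat is the unit vector along the rotation axis."""
    t = math.tan(theta / 2.0)
    return tuple(t * a for a in axis)

small   = gibbs(math.radians(10.0),  (0.0, 0.0, 1.0))   # well behaved
near_pi = gibbs(math.radians(179.9), (0.0, 0.0, 1.0))   # magnitude > 1000
# A rotation by exactly 180 degrees is perfectly legitimate physically,
# but it is unrepresentable in this parameterization.
```

Three numbers, 3 degrees of freedom, but with this hole at theta equals pi, which is why unit quaternions are preferable despite their one redundant component.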
And so that's a potential problem because rotation about 180 degrees is a perfectly legitimate rotation. Now in this particular case, you typically don't have the right camera rotated 180 degrees relative to the left camera. So in some sense, go ahead, use the Gibbs vector. It'll probably work. Of course, what I'd recommend instead is unit quaternions. And we've worked out the details over there using unit quaternions. The only problem is they're redundant. There are four numbers, 3 degrees of freedom. But this package allows you to add additional constraints. So you just pretend that this is another equation. And it will try and come close to satisfying that equation, while satisfying the other equation. You may need to play with weighting of different components, and that works very well. It converges pretty rapidly. Just wondering where to go next, given that we don't have a lot of time. We talked about that. And this probably doesn't come as a surprise: if you know the rotation, there's a straightforward least squares method to find the baseline. And that's along the lines of what we said over here, so I probably won't bother with that. Well, not much beyond. We have this formula here. So one of the ways we've been thinking about rotation is axis and angle. And that's a pretty intuitive way of thinking about it. I mean, some people think Euler angles are intuitive. I think this is more so. And from there, we can go to this pretty straightforwardly. So the vector part is, in fact, in the direction of the axis. And it's just scaled according to the amount of rotation. And then the scalar part-- not sure what else you can say about it. It has the nice property that if we combine two rotations, we just multiply these in this form. And the transformation of a vector is a little bit more complicated, but we saw the formula for that. Then we use quaternions to represent vectors by making the scalar part 0, so that's a useful thing.
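The two facts just mentioned, building a unit quaternion from axis and angle and representing a vector as a quaternion with zero scalar part, are enough to rotate a vector via q (0, v) q-star. A self-contained sketch:

```python
import math

def qmul(p, q):
    """Hamilton product of quaternions stored as (s, x, y, z) tuples."""
    s1, x1, y1, z1 = p
    s2, x2, y2, z2 = q
    return (s1*s2 - x1*x2 - y1*y2 - z1*z2,
            s1*x2 + x1*s2 + y1*z2 - z1*y2,
            s1*y2 - x1*z2 + y1*s2 + z1*x2,
            s1*z2 + x1*y2 - y1*x2 + z1*s2)

def quat_axis_angle(theta, ax, ay, az):
    """Unit quaternion (cos(theta/2), sin(theta/2) * axis); the axis
    passed in must itself be a unit vector."""
    c, s = math.cos(theta / 2.0), math.sin(theta / 2.0)
    return (c, s * ax, s * ay, s * az)

def rotate(q, vec):
    """Rotate vec by the unit quaternion q, computing q (0, vec) q*."""
    qc = (q[0], -q[1], -q[2], -q[3])          # conjugate of q
    return qmul(qmul(q, (0.0,) + tuple(vec)), qc)[1:]

q = quat_axis_angle(math.pi / 2, 0.0, 0.0, 1.0)   # 90 degrees about z
v = rotate(q, (1.0, 0.0, 0.0))                    # x-axis goes to y-axis
```

Composing two rotations is then just qmul of their quaternions, in this same form.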
And if you wanted to, you could use them to represent scalars by making the vector part 0. But that's not that interesting. So not sure if that answers your question. STUDENT: Yes, definitely helped. BERTHOLD HORN: Also, remember that there's a short, four-page blurb that summarizes everything you'd want to know about quaternions. Quaternions can be intimidating. I mean, when I looked at Hamilton's book, it's like 800 pages of dense math. And then I took an engineering approach, and said, what do I do with this? How do I actually use this? And it turns out that this is all I needed to know about it. I don't need to know all of the really, truly esoteric stuff. Yeah, that's a great question. So we've got several things we could use as an error measure. They all have the property that if things line up perfectly, they're 0. So why prefer one over another? And I cautioned against using the triple product in its raw form. And the reason is that they have different amplification factors. So suppose that the rays, for example, are almost parallel, and the object is one light year away. Then an error of one arc second is going to be hundreds of thousands of kilometers. And you're going to try and minimize that, where some other part of your image might be an object that's closer by. So you want to compensate for that. And this weight w is simply the relationship between the two-- so here, very roughly speaking, with depth z and focal length f, the weight w is f over z. So the triple product measures this error. And that can be huge just because z is large. But we're interested in the error in the image position that is the result of whatever algorithm we use to find image position. And so we can reduce that triple product error into an image-relevant error by taking that. And here, the calculation is trivial. Unfortunately, in our case, we've got rays going at different angles. And the calculation is in the paper online if you want to see it.
So the thing that comes next, which I'll just touch upon, is when does it fail? We always get to that point. So now we have a method-- well, we already know that we need five correspondences, so if we don't have five correspondences, it will fail. But is it possible that there are certain kinds of surfaces where, if we look at them with two eyes, there are ambiguities and/or high sensitivity to error, so that we can't quite figure out what's going on? And so those things were discovered pretty early on, like over a century ago, in an original paper called "Gefaehrliche Flaechen," which I guess properly translated means "dangerous surfaces" or "planes." And I guess the English terminology is "critical surfaces." So the idea is that there may be cases where you're looking at an object of a certain shape that makes this problem hard, or actually makes your method fail, in that there is a lot of ambiguity. And to make that plausible, imagine we have a U-shaped valley, so like that. And then we have an airplane up here. And we have some landmarks. And you remember, our job here is to figure out where the airplane is. It's the relative orientation. We have the plane flying along taking measurements, and we're trying to relate those measurements. Well, we have an ambiguity, in that if we move that airplane along the surface of this circle, we don't change this angle. So what am I measuring in the image? I can only measure angle. I don't know distances. So if I move and the angles don't change, then I have a problem. And I've done it for A and C, but obviously, the same is true for A and B, and any other number of ground points you want to add. So in this situation, where you're flying right above the axis of a semi-circular valley, it's going to be impossible to distinguish different positions along here. Now, this is a cross-section. This is 2D. So this isn't the whole story. But it gives you a plausible way of understanding why there might be a problem.
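The 2D cross-section argument is exactly the inscribed-angle theorem: from any viewpoint on the circular arc, the chord between two landmarks subtends the same angle. A quick numerical check with made-up landmark and viewpoint positions on a unit circle:

```python
import math

def subtended_angle(view, a, c):
    """Angle at 'view' between the directions to ground points a and c."""
    ta = math.atan2(a[1] - view[1], a[0] - view[0])
    tc = math.atan2(c[1] - view[1], c[0] - view[0])
    d = tc - ta
    return abs(math.atan2(math.sin(d), math.cos(d)))   # wrap into [0, pi]

def on_circle(deg):
    return (math.cos(math.radians(deg)), math.sin(math.radians(deg)))

# Landmarks A and C on the circle, camera positions on the same circle:
A = on_circle(200)
C = on_circle(340)
angles = [subtended_angle(on_circle(t), A, C) for t in (60, 90, 120)]
# All three viewpoints see A and C under the same angle (70 degrees here,
# half the 140-degree central angle), so angle measurements alone cannot
# tell those camera positions apart.
```

That constancy is the cross-section of the ambiguity: moving along the arc changes nothing that an angle-measuring camera can observe about A and C.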
And this is why when they lay out the flight plans for mapping, they never do that. They prefer to do this-- fly the plane over a ridge rather than over a valley. Because here, when you move over, the angles between different images of different surface features do change a lot. And so what we would like to do is get the 3D generalization of this problem. And that's-- well, I kind of gave away the answer by showing you those pictures. It's a hyperboloid of one sheet. So we'll see that it's a second-order, a quadric surface. And it happens to be one with the right number of minus signs, for the one sheet-- I guess it's one minus sign. And so if we're looking at a surface like that, then there's going to be a problem. You say, well, who cares? I mean, how many surfaces? How likely is it I'm going to be looking at a hyperboloid of one sheet? Well, that's true, but if you're looking at a surface and you only have a small portion of it, the difference between it and a section of a hyperboloid of one sheet might be small. And in that case, you will be approaching this dangerous situation. That's one argument. And the other argument is that there are other special cases of quadric surfaces, and in particular, of the hyperboloid of one sheet, which are more common. And the most common one is the plane. So it turns out that one version of this quadric surface is just two intersecting planes. And you can see why that could be. So the equation for one plane could be a linear equation like this, equals 0. So that's one plane. And this is the equation for a second plane. I guess I wrote it in 2D, but just add z. So each of these planes has a linear equation like this. And I can talk about it by saying, this is the plane where ax plus by plus cz plus whatever is 0. And if I multiply them together, of course, the product is 0. And so what surface is that? Well, this times that equals 0 is the combination of two planes.
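That "product of two planes" observation can be checked in a few lines: multiplying two linear forms gives a second-order polynomial whose zero set is the union of the two planes. The coefficients below are made up.

```python
def plane1(x, y, z):
    # First plane: a1*x + b1*y + c1*z + d1 = 0 (made-up coefficients).
    return 1.0 * x + 2.0 * y - 1.0 * z + 0.5

def plane2(x, y, z):
    # Second plane, also a linear form.
    return -2.0 * x + 1.0 * y + 3.0 * z - 1.0

def quadric(x, y, z):
    """Product of two linear forms.  Expanding it gives a second-order
    polynomial, so quadric(...) = 0 defines a quadric surface, and that
    surface is exactly the union of the two planes: the product is zero
    precisely when one factor or the other is zero."""
    return plane1(x, y, z) * plane2(x, y, z)

v1 = quadric(0.5, 0.0, 1.0)   # this point lies on plane 1
v2 = quadric(0.0, 1.0, 0.0)   # this point lies on plane 2
v3 = quadric(1.0, 1.0, 1.0)   # this point lies on neither plane
```

So a pair of intersecting planes really is a (degenerate) quadric, which is how planar scenes end up on the list of critical surfaces.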
And in our case, one of the planes goes through the center of projection, so it projects into a line. So if I have a plane, and I'm looking straight along the plane, I just see a line. So it's even weirder, because it means that a planar surface is a problem. And what's more common than the planar surface? Well, maybe not in the topography of Switzerland or something, but man-made structures other than [INAUDIBLE] have lots of planar surfaces. And so we can't dismiss this entirely. We've got to talk a little bit about Gefaehrliche Flaechen, which we'll do next time. And there's a new homework problem, sadly. I almost missed it because it seems like we just had one. But then someone reminded me that it shouldn't be due on Thanksgiving, and so it isn't. It's, again, due the Tuesday after.
MIT 6.801 Machine Vision, Fall 2020
Lecture 14: Inspection in PatQuick, Hough Transform, Homography, Position Determination, Multi-Scale
[SQUEAKING] [RUSTLING] [CLICKING] BERTHOLD HORN: "Quick," meaning pattern recognition quickly. And that's in distinction from another patent we'll look at later, which is slower but gets a more accurate answer. So a number of terms were defined there. One of them was that of a model. So there's a training step that produces a model. And the model consists of probes. And the probes are places where we will be collecting evidence for a match to produce a scoring function. And we think of that score as a surface in a multidimensional space which has as its axes the various degrees of freedom. And we're looking for a peak in that score surface that's above a threshold as a way of indicating that there's an instance of that pattern in the image. And there may be more than one, and there may be none. And at the highest level, we look at quote, "all poses," meaning we're going to quantize them, but we have to consider the full range of rotations, translations, scaling, whatever. And so the complexity goes exponentially with the number of degrees of freedom. So even if we quantize fairly coarsely, say somewhere between 10 and 100 steps, that means we're multiplying the amount of work by 10 to 100, something like that. So normalized correlation was expensive and couldn't handle more than two degrees of freedom, translation. Well, this method is looking at very few points in the image. And so it can afford to deal with a large number of degrees of freedom. OK. That's a great question. So at one level, we're building up. So we're starting off looking at very low-level issues, and then we're building on top of them. Then as to why did we pick these patents, well, one reason is that it's a package of things that are related. So we don't have to re-learn terminology and whatever. So for example, the front end for this one, as you saw, is the patent we talked about before. And the next one we're going to talk about is closely related to this.
It uses this one to get a first guess at the answer. And then the one after that is a fast way of computing a step that we need to do the subsampling. So is it a random sampling? Well, kind of, because machine vision has exponentially grown. And there was a time where there wasn't enough material to fill a whole term. So I taught this course, which I called "Making Machines See and Feel," because half of it was vision, and half of it was robotics manipulation. And we can't do that anymore. Now we've got multiple courses on machine vision. So this is a particular approach to machine vision, which some people refer to as the physics-based approach to machine vision. By the way, they made me change-- the head of department contacted me and said I couldn't have a frivolous title like "Making Machines See and Feel." And that's why now it's called "Machine Vision." So the patents we're looking at hang together, which makes it easier to discuss them. They're patents of the most successful machine vision company. So there's some weight there. Is it what they're doing today? Well, a lot of the machines they sell do this. Others do other things, which we have yet to find out, because we don't know their trade secrets. And the things we're talking about in the patent discussion are mostly 2D. So it's restricted to situations where we're dealing with two-dimensional surfaces, like integrated circuits, printed circuit boards, conveyor belts, the ground outside an autonomous car, things of that nature. So after this, we'll go on to higher level still. So it's part of a progression. And yes, we're not covering everything possible. And so we're taking stabs at things that give you a feeling for what's there rather than try to cover everything. So in this-- back to this. So there's a training step where we show it an image, and it automatically computes this model.
And that was very important for them, because we can send someone to MIT and get a degree and handcraft some code to do something. But that's only worthwhile if you have a huge application for that. I forget what the example was-- pencils or something, earphones-- toothbrush, sorry. There was someone who was asked about supporting startups, and his test was the toothbrush. So if he can sell a billion of them, then maybe he'll fund it. But if for every visual task you have to expend that effort to program it, that's not good. So the training method is very good, because someone with not a great deal of experience can show the system the object, hopefully a good sample of it, and get a training image. There are some refinements you can do. As we discussed, for example, you can have negative weights. But those are things that a person has to-- can't think of an automated way of doing that yet. And so that's actually something that's never done, because it's much easier to just go with that if it's satisfactory. So what do we do with this model? Well, then we map the model onto the real-time image. And in the process, we use the pose, which is translation, rotation, and all these other things. So our model now is mapped on top of the image. And then at each of the probe positions, we look at the brightness gradient. So actually, we pre-process-- we don't really go back to the image. We go to the brightness gradient. And we perform a test to collect evidence, to collect evidence, that this object is actually there. And the evidence is cumulative. So we just add it up. And so that's the idea of doing a lot of local operations, where the final result is just the sum of those, a little bit like the binary imaging processing-- binary image processing methods that I mentioned. OK. And then we do this for quote, "all poses," meaning that we need to quantize the pose space. 
And that's where the limitation of this method comes from, which is that we will be limited in accuracy by the quantization of pose space that we have. So what are all of these components of pose? Well, obviously translation. And when we were looking at the patent, there was a whole page. But they were redundant in that we had a scaling one that worked in a logarithmic scale, and one that worked in the linear scale, and so on. But let's just list the ones that aren't redundant. So there's going to be translation, rotation. And again, depending on the application, not all of these may be necessary. Then there's scaling. Now, scaling may not be necessary, because maybe your objects are all the same size, and maybe you're not dealing with the change in size due to change in distance, because you're using a telecentric lens so there won't be a change in size. But we allow for that. Then there's skew, where, if you like, instead of rotating both x- and y-axes by the same angle, we rotate one by a different angle. And again, that may or may not apply in a particular situation. And then we have aspect ratio. So in a way, this allows a certain degree of generalization. So you have a pattern. But if you want to, you can also deal with the same pattern in a different size, the same pattern that has been squashed in one direction and maybe extended in the other direction. So translation, two degrees of freedom, rotation one in 2D, scaling one, skew one, aspect ratio one, so a total of six independent parameters. And so we're dealing with a potential six-dimensional space. And if you quantize that even to 10, then you are up at a million. And if you quantize it to something more reasonable like 100, then you're up to 10 to the 12-- really? Yeah, 10 to the 12, which is impractical. So in most applications, we don't use all of these. By the way, if we put them all together, we get a general linear transformation.
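The counting argument a moment ago is worth making concrete. A quick sketch (the step counts are illustrative; real systems prune most of this space rather than scoring every cell):

```python
# Back-of-the-envelope cost of exhaustive pose search: six degrees of
# freedom (translation x and y, rotation, scale, skew, aspect ratio),
# each quantized independently.
def pose_count(steps_per_dof, n_dof=6):
    """Number of quantized poses to score in an exhaustive search."""
    return steps_per_dof ** n_dof

coarse = pose_count(10)   # 10 steps per axis: a million poses
fine = pose_count(100)    # 100 steps per axis: 10^12, impractical
```

Dropping degrees of freedom helps enormously: with only translation (two degrees of freedom) at 100 steps each, the count falls from 10^12 to 10^4.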
I'm doing this partly because when we get to 3D, it'll be handy to be familiar with these terms. So that's a general linear transformation. We've got six parameters, and it's an affine transformation. And we could say, you know, forget this categorization. We're just going to think about the transformation in terms of those six coefficients, which is fine, except that for us people, it's easier to think about rotation, translation, and scaling, because the individual-- what does it mean if I change a1,1, and I keep the other coefficients constant? And we'll talk some more about that transformation. OK. So I want to talk next about the scoring functions. So it's clear what we're doing. We create this model. We map the model onto a runtime image. We collect evidence. We get a score. We find the peak in the score surface in that multidimensional space. And if it's above a threshold, then there's a potential candidate-- there could be more than one, or there could be none. And so that's the overall process at the top level, where we explore the whole space. And we talked about the way we perform the scoring, which is mostly based on the direction of the gradient. But it can take into account the magnitude of the gradient, as well. And we had this issue that it's possible that we get a match even when we're not on the right part of the object, just because there's a certain range of angles which are going to be accepted. And so there's going to be some probability that a random gradient in the background texture is going to match, and we-- So this was the function used for grading the quality of match in terms of the direction of the gradient. And this was, of course, just a sample. But it's the one that's highlighted in the patent. And for this one, this is 3 over 32. So there's roughly a 10% chance that even if we plunk the probe down in some totally wrong place that we'll get a match. And so in the patent, this is referred to as noise.
A little-- I guess it's background noise. I don't know, it's a little confusing. It'd be better if they didn't use that term. If we have the version that ignores polarity, then, of course, it'll be twice that. So that's one disadvantage of that version of the matching function. It allows you to reverse the contrast. But at the same time, the noise contribution goes up. OK. So then they define a number of scoring functions. And they're kind of trade-offs between accuracy, speed, whether it's normalized, whether the actual value of the result is meaningful, or whether it's just something that's bigger if you have a better match. And so let's look at that. So we've got-- Now, if you remember, the probe was something that had a position, a direction, and a weight, and possibly some other stuff. But those were the top things. So here is the weight of the probe. So we step through the probes-- i steps through the probes-- and that's the weight. And what does that do? Well, it discards things that have negative weight. So just in case you've used that capability, for this scoring function, you throw those out. This is R dir, the direction scoring function. And it's looking at the difference between the angle, the direction of the probe-- so that's the direction of the probe-- and the gradient at the position that the probe is placed in, position-- And big D is just a function that gives you the arctangent of ey over ex. It's the direction of the gradient in the runtime image. So let's look at that. So what happened to the other components of the pose? Well, at this point we're dealing with compiled probes. So here we're just varying the translation. We're only varying the translation a. We've already mapped the probes according to the other components of the pose. Yeah? Oh, that's a vertical bar, meaning absolute value. Sorry. And in some sense, that's just to take account of the fact that this wraps around.
And we could have done that some other way, like mod 360 or something. OK. And this version is normalized in that we divide through by the sum of weights, such that if we get a perfect match the result will be 1. And so the absolute value of this score, of this particular score, has a meaning. It's not just that it gets bigger if you have a better score. And this is used in the coarse step of the algorithm. What else can we say about it? Then there's a second version. What's the other version? It's 1b. Sorry, this is S1a. This is S1b. And this one doesn't multiply by the weight. And this is a slightly weird notation, but it's kind of clear once you think about it. So this is just a predicate that computes whether the weight is greater than 0, whether it's positive weight. And it's 1 if that's the case and 0 if it's not the case. So that's just a funny way of saying we're only going to process probes that have a positive weight. And the advantage of that is we don't need that multiplication. And so it's cheaper than the other one. And so this is faster. We haven't done anything with that quote, "noise term." So when we get to the preferred embodiment-- So it's very similar to 1b, except that now we're saying, well, if we apply this in a random place in the image, we'll get a non-zero result because of this random matching. And so by subtracting out that component N, which is an estimate of the number that will be matched randomly, we can improve the result. Yeah? AUDIENCE: What's the D function? BERTHOLD HORN: This function, OK. So D is the function that tells you what the gradient direction is in the runtime image. Gradient-- sorry, gradient direction at a plus p i. So I didn't underline these, but they are vectors. a is a translation, and p i is a vector that's the position of that probe, so. And so then when I subtract the little di, get the error in the direction. 
And then I use that to access this function to decide how much to penalize the result based on that error. OK. So this one now has the feature that if you're throwing the probe-- throwing the model down at a random place, on a random texture, the answer will tend to be 0, because we've taken out the random coincidental matches. And so unlike this one, where, yes, if you have a perfect match you get a 1, but if it's just garbage background, you don't get a 0; you get whatever N is. So this is slightly better in that respect. It's slightly more computation, not much. So that's the quote, "preferred" method. And if your implementation doesn't use that, that doesn't help you, because they don't say that you have to use that. They give you other alternatives. And they just say that, well, this is the one that we think is the best. So OK. So then we get to S2. Now, this one is not normalized. So it will not-- its absolute value will not be significant. It's just going to be larger if you have a better match. And so what's this? So this is similar to this function; M of a plus p i is the gradient magnitude, which we haven't used so far. So this one actually takes into account the gradient magnitude, and it uses the magnitude directly. And it means there's another multiplication. And this version is used in the-- first of all, it's not normalized and is used in the fine scan step. And then finally-- so a lot of this is used in getting to a potential candidate solution. And then there's a fine scanning step-- there's a scoring step that gives us an actual value. And this one is more work to compute. So this one, again, is normalized. And it uses the magnitude of the gradient, but it uses it according to the scoring function that we defined, which was that it saturates. So that's the direction scoring function, and this is the magnitude scoring function. So in S2, we would just keep on going up, multiplying by the magnitude. In that one, we limit the contribution.
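To make the structure of these scores concrete, here is a rough sketch of a normalized direction score in the spirit of S1a. The probe representation and the all-or-nothing direction-match function are simplified stand-ins, not the patent's actual piecewise scoring function:

```python
import math

def angle_diff(a, b):
    """Absolute angular difference, wrapped into [0, pi] -- the
    "vertical bar plus wrap-around" from the formula."""
    d = abs(a - b) % (2 * math.pi)
    return min(d, 2 * math.pi - d)

def r_dir(err, accept=math.radians(11.25)):
    """Toy direction-match function: 1 inside the acceptance cone, 0
    outside. (The patent uses a tapered piecewise function instead.)"""
    return 1.0 if err <= accept else 0.0

def s1a(probes, gradient_dir, a):
    """Normalized direction score: weighted sum of direction matches over
    positive-weight probes, divided by the sum of weights, so a perfect
    match scores exactly 1.
    probes: list of ((px, py), direction d_i, weight w_i)
    gradient_dir: maps an image position to its gradient direction
    a: candidate translation (x, y) being scored."""
    num = den = 0.0
    for (px, py), d, w in probes:
        if w <= 0:            # this score discards negative-weight probes
            continue
        g = gradient_dir((a[0] + px, a[1] + py))
        num += w * r_dir(angle_diff(g, d))
        den += w
    return num / den if den else 0.0
```

Because of the normalization, the absolute value of the result is meaningful: 1 for a perfect match, near 0 on random background (before any noise correction of the kind the preferred embodiment subtracts out).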
So you don't have one single very good edge dominating everything else. OK. And there are lots of details on how to make this run fast and so on, which we won't go into. I do want to talk about a couple of interesting points. Another thing we're not going to talk about is the granularity. So we mentioned this, that we might work at a scale where these computations are cheaper as long as we can get a satisfactory result. And mind you, what's a satisfactory result here is very different from some machine vision, where if you can improve the recognition accuracy from 71% to 73%, you've got a paper. This is more like this has to work 99.9% of the time. And so granularity is determined basically by decreasing the resolution until it doesn't work well enough anymore. So if for your task you know what "well enough" is, you can perform that task. And basically, for each granularity you have to go through the whole process. You have to build the model, because the probes will be different depending on the resolution. And then you run through, and you see how well it works. Yeah? AUDIENCE: When wouldn't you want to normalize? BERTHOLD HORN: Mostly when you're in a hurry. So it's just a computational issue. If you're at a part of the computation where you have to do it for every possible pose, you might want to save that step. If you're at the end, where you're near the correct answer and you only have to do it a few times, then you can normalize. OK. So the granularity-- and they have a very sophisticated method for doing this. And I think we've seen enough of that part of the patent, so I'm not going to go through it. But it is, obviously, in practical terms an important component, because you want this to run as fast as possible. If you're at Amazon, and you watch the boxes come down the conveyor belt, and it's reading all kinds of stuff off the box and determining its position and orientation, you realize that you don't have a lot of time to do the computation.
These things run at a hundred frames a second. And all of this computation can be done in that time in a relatively cheap processor. OK. But I do want to talk about a couple of minor issues that may not be obvious. One of them is getting the directions right. So we're really focused on gradient directions. So we need to make sure that when we perform these transformations that we transform the gradient direction correctly. And so here's the issue. So here's my edge. This line is the isophote. And here's my gradient, which, of course, is perpendicular to the isophote. And now suppose that I have an operation that changes the aspect ratio. So suppose I squash it. So I'm changing the aspect ratio. Then of course, this line is going to have a lower slope, because the top is moving down, the bottom is moving up. So OK. And hmm, the gradient also is going to have a lower slope. Right, so it's no longer perpendicular to the isophote. So that is kind of obvious. And now if you're transforming x and y, it may be that the derivatives with respect to x and y transform in a different way. And so what do we do about this? Of course, this doesn't affect translation. This doesn't happen for translation, doesn't happen for rotation. You rotate, and the right angle is preserved. Doesn't happen for scaling. But it does happen for the other two. So how do we deal with this? Well, we have the gradient direction. That's our input. That's what the box we have in front of all of this computes. OK. So that's where we start. And now all we need to do is compute the isophote. So from the gradient direction, we get the isophote. Then we transform that. And then we construct something at right angles to the isophote. So here's our new gradient direction. So the solutions are kind of obvious, but it's something that one might easily forget. How expensive is it? Well, not terribly, because 90-degree rotations are simple. So cosine of 90 is 0, sine of 90 is 1, minus sign.
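The recipe just described-- rotate the gradient 90 degrees to get the isophote, apply the transformation, rotate back-- can be sketched in a few lines, assuming the pose transform is given as a 2x2 matrix A:

```python
import math

def transform_gradient_dir(theta, A):
    """Transform a gradient direction theta under a 2x2 linear map A.
    Under skew or aspect-ratio changes, the naively transformed gradient
    is no longer perpendicular to the transformed isophote, so instead:
    1. rotate 90 degrees to get the isophote (tangent) direction,
    2. transform the isophote with A,
    3. rotate back 90 degrees to recover the gradient direction.
    The 90-degree rotations are just a swap and a sign flip -- no
    multiplications needed."""
    gx, gy = math.cos(theta), math.sin(theta)
    tx, ty = -gy, gx                      # rotate +90: (x, y) -> (-y, x)
    ux = A[0][0] * tx + A[0][1] * ty      # transform the isophote
    uy = A[1][0] * tx + A[1][1] * ty
    return math.atan2(-ux, uy)            # rotate -90: (x, y) -> (y, -x)
```

For translation, rotation, and uniform scaling this agrees with transforming the gradient directly; it only makes a difference for skew and aspect-ratio changes, which is exactly the point.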
So they just-- we don't even need to do multiplication. We just interchange x and y and flip the sign of one of them. And then after the transformation, we have to do the inverse, which is the transpose of that matrix. So again, we just interchange x and y and flip the sign of one of them and we're done. But it's something that's easy to forget. And until you run into the last two operations, it doesn't matter. OK. Another thing which was almost like an add-on that they thought of, because it's not discussed much in the specification but it shows up in the claims-- and there's some discussion at the end of this, which is inspection. So first of all, the main focus here is figuring out where something is-- position, pose. A second aspect is recognition. And how does that work? Well, you just look at the score. If it's close to 1, you've got it. And again, this may occur in more than one place in the image. Typically, "recognition" means that you're distinguishing different things like cats and dogs. And so how does that work? Well, you have a library of models. Suppose that you have different types of screw heads; then there's a model for each of them. You run this process, and you see where there is a match. And if necessary, you can compare the match score to figure out what type exactly it is. So that's recognition and position. And they also talk about inspections. So inspection is based here on a fractional match against the runtime image. OK. So the model consists of probes. We put the probes down on the image. And for each place, we can determine whether it's a reasonable match or not. And in the process, we can calculate a percentage. And if the object is partly obscured, well, then that fraction will be lower. If, say, you're looking at a gear and some of its teeth are missing, then you'll get a match, but the score won't be 1. And so that's one direction.
The other one is-- so we can look at the runtime image, and see where it has edges, and then discover how many of those are actually matching something in the model. Now of course, if there's clutter and there's some background, then there would be a lot of edges that will not match the model. But that's also useful information. So that's a way of measuring clutter, basically that. Yes. And there are some more things to say about that, but that's the basic idea. OK. A couple of more things I want to do here, partly in preparation for the 3D work we'll be doing. And that's to elaborate a little bit more on these transformations and to vaccinate you against some bad ideas, hopefully. So let's go 3D but in a kind of simplified world. So we know already a lot about that. We know about perspective projection. So let's think about the projection of a plane. And this comes up a lot. I mean, we could be talking about tabletop, conveyor belt, integrated circuit, printed circuit board. And it's all in 3D. Now, we can carefully line up the optics and so on so that we get an orthographic projection, in effect. Or we can deal with the real full 3D world. Could also be the road in front of a car. Lots of examples of flat surfaces. OK, so two things. One thing we remember is good old perspective projection. And the other thing is that there is a camera coordinate system where this applies. And then there's other coordinate systems. Let's call them world coordinate system. And what's the relationship? Well, there's going to be a translation. The camera has its coordinate system has its origin at the center of projection. And the world has wherever. If it's a robot arm, it might be the base of the robot arm. If it's a box, it might be a corner of a box. And then there's a rotation, so in world coordinates and in camera coordinates. And we talked about how that simple formula for perspective projection only applies if we're dealing with a camera-centric coordinate system. OK. 
And this matrix here is an orthonormal matrix that encodes the rotation. And we'll talk quite a bit about that later. OK. Now, what I want is the transformation from world coordinates, object coordinates, to image coordinates. And so I combine these two. So what I'm going to get-- So this is just multiplying out that matrix. And this is, again, the same thing. So as we know, perspective projection has this nasty property that it involves a division that's nonlinear, and it's kind of a bit messy. So that's a general case. And we'll find ways of dealing with that. But I want to look at the particular case where the thing we're looking at is planar. And that means that we can erect our coordinate system in that object. Here's our planar surface. Rather than pick some arbitrary coordinate system, let's pick one where z is 0 so we only have two of these coordinates to deal with. So what we're sort of doing is we're kind of halfway between 2D and 3D, because we started off looking at a 2D surface embedded in 3D. And where we're going is some mapping from the 2D surface out there into the 2D surface of the image. Yes? AUDIENCE: Where does the e w come from, the top-- numerator? BERTHOLD HORN: E-- sorry, which-- e w? AUDIENCE: Oh, yeah, next to the yw is a e w. Or is that a z? BERTHOLD HORN: Oh, sorry. Oh, here? AUDIENCE: Up top. BERTHOLD HORN: Oh, sorry. That's my bad writing, yeah. zw. OK, and that's the one that's actually 0. So when I do that matrix multiplication, that means that I can ignore the third column, because it's going to get multiplied by zw. And so I can make that anything I want. So this column here doesn't matter, because it's going to get multiplied by zw. And then I can kind of conveniently fold in the translation. So one of the annoying things about Euclidean transformations, real-world movement, is there's rotation and there's translation. And it's hard to treat them as one thing. It'd be nice to have notation, some magic thing.
Let's call it a glob. So the glob, you multiply it by your vector, and out comes another vector. And that single thing encodes both translation and rotation. So the way we've written it there, we've split those two. Well, it'd be really nice to have them as a single operation-- multiplication of something, for example. Well, we can do that here, because if we drop out that third column, we can bring in the x0, y0, and z0, and we end up with-- let's see. So we've got-- I called it r1,1. And-- So you see what we've done here. We've taken advantage of the fact that we don't need that third column for zw, because zw will be 0. And then we can fold that addition in by just multiplying that last column by 1. So in this particular case, where we're dealing with a planar surface out in 3D, we can fold rotation and translation into a single matrix. And let's call this matrix T. Let me call that matrix up there R. So R has some special properties that we'll talk about later. One of them is that it's orthonormal. And that means if you take its transpose and multiply it by R, you get the identity matrix. And the other one is that its determinant is plus 1. T doesn't have those properties. So that's very important, because how many degrees of freedom do we have? Well, in 2D it was a little easier to figure out. We had two for translation, one for rotation. So those already give you three degrees of freedom. And it's pretty obvious how you could represent them-- some x offset, some y offset, and some angle. In 3D, it's a little bit harder. We have three degrees of freedom for translation-- a change in x, a change in y, a change in z. And then there's rotation and the various ways of thinking about that, which we'll get to later. But basically, there are three rotations. There's one that preserves the xy-plane, there's one that preserves the yz-plane, and there's one that preserves the zx-plane.
And as a mental shortcut, you can say there's rotation about the x-axis, rotation about the y-axis, rotation about the z-axis. That's actually a very bad way of thinking about it. But it gives you the right number, which is 3, so. OK. So that means that if we have translation and rotation, there should be six things-- six degrees of freedom. And then you look at that matrix up there and stuff, and there's already nine just in the matrix, and then there's another three for the translation. So there's 12. So there's a redundancy. There's something wrong. We've got many more variables than there are degrees of freedom. And so that's because there are these constraints. So that R matrix isn't just any old 3-by-3 matrix. It has to satisfy all of those constraints. And it's the usual thing. You have a number of variables, a bunch of constraints. You subtract them, and you get the degrees of freedom. So in this case, when we subtract out the constraints, it turns out we get 9 minus 6 equals 3 for rotation. The 9 is because it's a 3-by-3 matrix; there are nine numbers. What's the 6? Well, each row has to be a unit-size vector, so that's 3. The rows have to be mutually perpendicular. Well, there are three ways of pairing them up, so that's another 3, for 6. So 9 minus 6 is 3.
So that first column should be a unit vector. The second column should be a unit vector. The third column we lost. We don't know what that is, although, of course, we can reconstruct it. Because it's perpendicular to the first two columns, so it's their cross product. That's, by the way, a useful programming trick. You don't really need to keep a 3-by-3 matrix for rotation. You just keep two of its rows or two of its columns, because you can always compute the other one. And in fact, in a sense, it might be more consistent to work that way. OK, two constraints. Well, there's another one. These have to be orthogonal. So r11 times r12, plus r21 times r22, plus r31 times r32, has to be 0. OK. So this matrix T should really satisfy those three constraints. And let's add things up. It's 3 by 3, so it has 9 elements. Three constraints-- that leaves us six degrees of freedom. That's correct. So that's all true and wonderful. It's just that typically, we don't enforce that constraint. Why? Well, because the rest of it you can do with linear least squares to fit to your calibration data. But how do you enforce these second-order equations? And so you will see a large number of publications that expound this approach, which would be valid if only they had enforced this. So that's like a kind of attempt at generalization from what's covered in this patent, namely 2D transformations, to where we'll be going later, which will be 3D transformations, and a kind of warning about this stuff-- that we have to be careful. So by the way, if I'm going to use this to predict where I am in the image, I'm just going to use-- of course, the equations, I'm just repeating the equations from up there. And so that means that if I take the camera coordinates and multiply them by some arbitrary non-zero factor, nothing changes. And of course, that's the scale factor ambiguity we already talked about. What does that mean?
Well, one thing it means is that you can take this matrix and multiply it by some arbitrary constant, non-zero constant, and nothing changes. So that's another clue that this is a funny kind of matrix. And yes, you could try to adjust its size by trying to impose these constraints. But-- OK, oh, this is called-- I should say this is called a homography. And we'll talk later about its use in photogrammetry or misuse in photogrammetry. OK. The other thing I wanted to talk about was, you may remember right when we started talking about this patent, I said, well, there's prior art. And we listed blob analysis or binary image processing, and we listed binary templates, and we listed-- what else? Anyone remember what else we listed? And there was a fourth one, which we didn't discuss. The other three we did discuss. And the fourth one is the Hough transform and its generalization. So let's just briefly talk about that. So that's because we picked our world coordinate system so that in the plane-- it's only working for this plane. And in that plane, zw is 0. What it means is that, suppose I want to deal with an image of this table, and then I want to refer to where my phone is on the table. I'm going to pick a coordinate system that's, say, lined up with the edges. But the important thing is that z is perpendicular to that surface, so that whatever I'm-- where is this paperclip? I only need to give x and y, because if it's in that plane, z is 0. And so this will not apply if I have a true three-dimensional object. It's only the case where I'm confining attention to a plane. And so it comes up in industrial machine vision, because-- in many ways. One is, obviously you want to try and get your optical axis as perpendicular as possible to the surface you're interested in, whether it's a conveyor belt, or the surface of the road, or whatever. But inevitably there will be some small error, and inevitably someone's going to bump into it and change it slightly.
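Stepping back to the planar-surface matrix T for a moment: folding rotation and translation into one 3-by-3 matrix, the cross-product trick for the lost third column, and the scale-factor ambiguity can all be sketched directly. The rotation, translation, and test point here are made up for illustration:

```python
def planar_projection_matrix(R, t):
    """Build the 3x3 matrix T for a planar world surface (z_w = 0):
    drop the third column of the rotation matrix (it would multiply
    z_w = 0) and put the translation t in its place."""
    return [[R[i][0], R[i][1], t[i]] for i in range(3)]

def project(T, xw, yw, f=1.0):
    """Map a world-plane point into the image: apply T in homogeneous
    coordinates, then divide by the third component -- that division is
    exactly the perspective division."""
    h = [T[i][0] * xw + T[i][1] * yw + T[i][2] for i in range(3)]
    return f * h[0] / h[2], f * h[1] / h[2]

def third_column(T):
    """Reconstruct the lost third rotation column as the cross product
    of the first two -- the programming trick mentioned above: store two
    columns, compute the third."""
    c1 = [T[i][0] for i in range(3)]
    c2 = [T[i][1] for i in range(3)]
    return [c1[1] * c2[2] - c1[2] * c2[1],
            c1[2] * c2[0] - c1[0] * c2[2],
            c1[0] * c2[1] - c1[1] * c2[0]]

# Identity rotation, camera 5 units back along the optical axis
R = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
T = planar_projection_matrix(R, [0.0, 0.0, 5.0])
x, y = project(T, 1.0, 2.0)                    # (0.2, 0.4)

# Scale-factor ambiguity: scaling every entry of T changes nothing
T3 = [[3.0 * v for v in row] for row in T]
```

A T fitted by unconstrained least squares will generally not have unit, mutually orthogonal first two columns, which is exactly the missing-constraint pitfall the lecture warns about.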
And this takes care of it, because this slight generalization allows you to deal with other camera orientation so that you don't have to have a perfect orthographic projection. You can have a slightly skew one. Thinking of other applications, in some countries there's like a surveillance camera at every intersection, and it's looking down at the road. Well, if you're mapping points on the road, you don't need the full 3D transformation. You just need this. But you just have to remember that now your world coordinate system has the constraint that it has to be lined up with the surface of the road so that zw is 0-- assuming the road is really flat, but. OK. Hough transform. So when I was a student, which is centuries ago, NASA had a large part of one of the Tech Square buildings, one of the original Tech Square-- they've since been renovated, so. I guess it was 535. And there was a floor or maybe two floors dedicated to the following thing. At that time, people were very interested in various kinds of processes of elementary particles. And you studied them by shooting them into a cloud chamber, Wilson's design of a cloud chamber. You shoot them in, you quickly decrease the pressure, and it's saturated with some vapor-- alcohol or something. And the ionized points in that space then form nucleation points for the liquid. And then you take a picture of it. And you find that there's particles zooming through in a certain direction. And then you can see how that depends on the magnetic field that's applied. If it's charged particle, it'll be curling around. And so people spent many, many, many man-years and women-years poring over a table like this, where these images were projected down, and then they were fitting lines to them or arcs. And so there was a huge interest in automating this process, because it was clear that it was only going to get worse. 
Because the higher energy you get to, the more pictures you have to look at before you find the interesting stuff, and the more complicated the pictures get. So it was-- doing it manually was hopeless. So edge detection, line detection was important. And then what do you do with the lines? Well, Hough came up with this thing, and-- this one idea. And this is probably-- as far as I know, this is the first machine vision patent. I think this was submitted in 1960 and issued in 1962. And it's pretty short. I don't remember whether I loaded it up on Stellar. It's not particularly interesting, but we'll go through it. And at that time, there was quite a bit of negative feeling in the machine vision community, which was, I don't know, a dozen people at that time. You know, why patent this thing? Anyway, he patented it. So what is it? And again, the context is we're trying to see lines in photographs of Wilson bubble chamber pictures. And so he's looking for possible lines. And you say, well, just use the edge detection methods we've talked about already. Well, the trouble is that these lines were little dotted lines. They were little bubbles. And the bubbles were not spaced equally, and they were not all the same size. So it wasn't clean. And also, because they weren't edges, they weren't transitions from dark to bright. They were transitions from something to something and then back again. But that would have been not too hard to deal with. So he came up with this idea that we map from image space. So here's our image space. And now in this image space, we can have all kinds of lines. And we map to a parameter space for lines. So how many numbers do we need to describe a line? Well, we already did that. So one way, which I'm not too excited about, but there it is, is y equals mx plus c. So it apparently takes two numbers to describe a line. And then we-- did I use m here? Let's see here. OK.
And so if I have a line in this space, that maps into a point in this space. And so if I have some local evidence for a piece of one of these bubble tracks, maybe I can map that into this space and accumulate evidence for different possible lines. And so like, that fragment might go there. And then this fragment, that's-- I can fit a line to that. And then hopefully, that'll come to near the same place, and so on. So the whole idea, then, is to have an accumulator array and count the evidence for each of the possible parameter combinations, and then look for peaks. And those will correspond to lines in the image. And this can be generalized to other shapes, but let's stick with lines. So here's another thing. So suppose that I have a line in this space. What does that correspond to in the original image space? Well, it turns out because this equation is kind of symmetric-- if you write it the right way. Let's rewrite it. I can write the equation of the line this way or that way. And then it's clear that really, those two spaces are symmetric. They're complementary, mapping from one to the other. And so what does that mean? Well, that means that a line in this space on the right corresponds to a point in the space on the left. So that corresponds to, let's say, that. And so what use is that? Well, that's pretty exciting, because I may not have a good estimate of the line at all. Maybe all I've got is a bubble. I don't really know what it is in the larger scheme of things. Well, suppose this is my bubble. What does it tell me? It does tell-- it doesn't tell me what the line is. But it's one of those things, where it constrains the set of possible lines. So all of the lines that this could give evidence for will go through that point. And those are all of the lines along here. OK, so that's an important insight, that if I have a nice clean edge, I can fit a line and I'm done. Well, if I have a bubble chamber picture, I just have these bubbles.
But each bubble gives me evidence about a possible line. I don't know exactly what it is. I've reduced the problem from a 2D unknown to 1D. Should sound familiar. And so what do I do? Well, I just take every bubble, and I find its transform in this space, and I use that to accumulate some total. So there's a bubble. Now I'm going to get another bubble. And maybe that will give me that line. And then I'll get, I don't know, another bubble, and that'll give me this line, and so on. And you can see how each of these bubbles contribute some evidence. And then if I keep track of that by having accumulators over here, I can look for the peaks in this accumulator array. And they will correspond to bubbles that line up along some line. So that's the basic idea of the Hough transform. And the key idea is that there's this mapping, and that it's symmetric. Let me just actually do an example. Oh, let's see. How does this work? So here's my image. And I have a line here. This was supposed to be 0, 1, and this is minus 1, 0. And this point would be minus 1/2, 1/2. And let's put one more in. Point 4 is 1 comma 2. OK. So let's suppose we have bubbles at those points. Then we go to the transform space. And let's take number 1. So number 1 says x is 0 and y is 1. So that's this one, point number 1. And if I write that into the equation, y equals mx plus c, that means that c is 1. So then I can plot that in here, and I-- do I have it reversed? m going up, yeah. OK, c equals 1 in this space is that line-- oops. That's the line c equals 1. So this point gives me evidence that the line I'm looking for is one of those lines. Well, each point here corresponds to a line in that space. OK. Now suppose I pick another one. So let me take point 2, x equals minus 1, y equals 0. That's this one, point 2. And if I plug that into y equals mx plus c, I get m equals c. And so that-- so this is line number 2. So that line looks like this. This is line number 2.
And so I'll go into the accumulator array, and all of these cells will be incremented. Let's just do one more. Let's do this one, point 4. So for 4, we have x is 1, and y is 2. And for that, the equation is m equals 2 minus c. So that line, 2 minus c. So that'll start at 2 and then go negative, go like that. So that's line 4. And you can see what's happening, which is that there's one accumulator that will be repeatedly updated. Now, in the presence of noise they won't be all perfect. And instead of just always hitting that accumulator, we'll be hitting neighboring cells a little bit. But the idea is that the cell that has the right parameter in it will be incremented the most. And you can do more sophisticated things. But the key idea is that we have a space for the possible parameters of the transformation. We have each point in here corresponds to a particular line. And then, by the way, each line in here corresponds to a point. And so we can gather up the evidence. And even if the line is really ratty, just these bubbles distributed in unequal intervals, we can get a large count there. So here it's used in line detection, but it could be equally used in edge detection. And it was in some cases. It's not used much in edge detection-- in line detection now, because we have-- if the image is reasonable, we have much better methods. Their problem was that at high magnification they just had these isolated bubbles that weren't quite on the line and were different-- and so this is a good way of trying to deal with that noise, if you like. So. Absolutely. You've just invented the extended Gauss Hough transform. So yes. So that's definitely a way to go. And there are going to be, obviously, trade-offs. Like, if you have something that has a lot of parameters, that space gets large. And then you have to deal with the cost of storing the quantized version of it. And there are little tricky things like, if you divide up the space, you quantize it into little boxes.
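The worked example with the four bubbles can be replayed in code. Here is a minimal accumulator over (m, c); the grid range and the 0.5 spacing are arbitrary choices for the demo. All four points lie on y = x + 1, so the votes pile up at m = 1, c = 1:

```python
# Hough transform for lines: each image point (x, y) votes along the line
# c = y - m*x in (m, c) parameter space. Points that lie on a common line
# pile their votes into one accumulator cell.

points = [(0.0, 1.0), (-1.0, 0.0), (-0.5, 0.5), (1.0, 2.0)]  # all on y = x + 1

# Quantize m and c from -2 to 2 in steps of 0.5 (an arbitrary choice).
m_values = [i * 0.5 for i in range(-4, 5)]

accum = {}
for x, y in points:
    for m in m_values:
        c = y - m * x                  # the line this point votes for at slope m
        j = round(c / 0.5)             # snap c to the nearest grid value
        if -4 <= j <= 4:
            key = (m, j * 0.5)
            accum[key] = accum.get(key, 0) + 1

best = max(accum, key=accum.get)
print(best, accum[best])   # peak at (1.0, 1.0) with 4 votes: the line y = x + 1
```

Each point contributes one vote per slope value, so the accumulator rows are the parameter-space lines from the blackboard example, and their common intersection is the peak.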
If you make a tiny change in the data, you may go from one cell to the neighboring cell. And what do you do about that? So there are tricks for dealing with that. But let me do another simple example, which is circles. And let's suppose we know the radius of the circle. So what we're looking for, again, is just x and y, just two parameters, center of the circle. So one way to introduce this story is that in LTE-- Long-Term Evolution-- in cell phones, the signal from your cell phone arrives at a tower at a time after you sent it, of course, by an amount which depends on how far away you are. And LTE, unlike CDMA, uses time division multiplexing. So you get a slot to send your data, and somebody else gets a slot to send their data, and somebody else gets a slot to send their data. And if they don't fit in that slot, they're going to interfere with each other. So it's very important that your phone use a timing advance-- in other words, that it knows my signal is going to take 5 microseconds to get to the tower, so I have to send it 5 microseconds before the appointed beginning of the time slot. So actually, it's even worse than that, because it doesn't know the time. I mean, it's got its own internal clock, but it's not accurate to nanosecond accuracy. There's a different clock in the cell tower which is. But so how does this work? Well, the cell tower sends out a signal, and that arrives at the phone. And then the phone sends something back. So it is possible to get the round trip time, even if the two clocks are not synchronized. So details we'll leave out, but basically, you can get the timing advance, which is what the cell phone needs to use so it correctly inserts its message into the string of messages. And that, then, obviously tells you how far away you are. So in Android, you can write an app which-- well, apparently with chalk on my fingertip, the fingertip sensor doesn't work. Anyway, we can get the timing advance, and then you can do various games with it.
One of them is if you know where cell towers are, and you have the timing advance from more than one of them, you can determine where you are. Or you can turn it around and say, I don't know where the cell towers are, but I'd like to know where they are, and you wander around. And you measure the distance from various places that you know from, say, GPS where you are. So there are problems of that type which involve circles. So let's see how we might use an extension of the Hough transform to do that. And keep in mind that we know the radius. So the timing advance gives us the radius. So that makes it a little simpler. We'll talk about the more general case in a second. OK, so I'm here and my cell phone told me that the timing advance was a microsecond. So the tower is, what, 300 meters away. And actually, the quantization isn't very good. It's like 150-meter increments. But anyway. So that means that I don't know where the tower is, but I know it's on the circle. And that's just the same thing we've been doing all along, that we get a measurement. It doesn't give us the answer, but it constrains the answer. And then we take another measurement. So now you can imagine that we could move around. And say now we're here, and we get a different radius. Well, it's not a-- move it there. So that's radius 1. And you'll say, oh, well, obviously the answer is here and there. Well, yeah, OK. That's assuming that you make these measurements very accurately, which you don't. And it leaves you with a two-way ambiguity. So a better way is to just keep on going. So we've got x2, y2, draw a circle based on that radius. And in this case, they don't intersect, so something's gone wrong, which can happen. And in practice, you would have a situation like this. So that's a Hough transform-like method. All we need to do is have an accumulator array that covers the possible geographic position. And then we use this method to update it. So we reset it all to 0.
The first measurement we get, we go around the circle, and we increment all of the accumulators we hit on that circle. Next measurement, we go around that circle. And hopefully, the true location of the cell tower will then be the one where there's a very large accumulated total. So that's a simple generalization. And a more interesting generalization, you know someone mentioned higher-order polynomials, and that's certainly a possibility. In that direction, what if we don't know the radius? Suppose we have a circle searching-- this is now a different problem. We're searching for-- So there's a circle with the center x0, y0, and radius r0. And we get these measurements. Now, the parameter space is larger. So over here, conveniently the parameter space was two dimensional, just like the data space. Well, here we're going to have a three-dimensional parameter space. And we can set up an accumulator right here. And each point in this space corresponds to a circle at a particular position in the plane and a particular radius. And so we can just collect up the evidence for it. And every time we find a point that is on the circle, we update all of the accumulators in a cone, because, OK, it could be zero radius, meaning it's right where I am, or it could be slightly larger radius and it's on a small circle, or it could be a big radius and on a big circle. I don't know. But now-- and again, I don't get the answer from that single measurement-- I do this again and again, and I get a bunch of cones. And they all intersect in some complicated way, and, you know, I don't really care. All I care about is that the answer is going to have a lot of contributions. A lot of these cones-- well, ideally all of the cones will go through the answer with noise-- most of them will, not all of them. And so that's kind of the idea of the generalized Hough transform, that we can pick some other shape. And just the key idea is that we have the original space, and we have the parameter space.
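The known-radius version of this scheme can be sketched as follows. The tower position, the phone positions, and the grid are all invented for the demo, and the range measurements are noise-free:

```python
import math

# Hough-style localization of a cell tower from range (timing-advance)
# measurements. Each measurement constrains the tower to a circle around
# the phone's position; we vote over a grid of candidate tower locations.

tower = (5.0, 5.0)                                  # ground truth (for the demo)
phone_positions = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
ranges = [math.dist(p, tower) for p in phone_positions]   # ideal ranges

# Accumulator over a 21 x 21 grid of integer candidate positions.
accum = {(x, y): 0 for x in range(21) for y in range(21)}
for (px, py), r in zip(phone_positions, ranges):
    for (x, y) in accum:
        # Vote for every cell that lies (approximately) on this circle.
        if abs(math.hypot(x - px, y - py) - r) < 0.5:
            accum[(x, y)] += 1

best = max(accum, key=accum.get)
print(best, accum[best])   # the peak lands on the true tower cell (5, 5)
```

With noisy ranges the votes smear over neighboring cells, but the peak still marks the most consistent tower location, which is exactly the robustness the accumulator buys you over intersecting two circles directly.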
And we collect evidence, just as in the patent, and we accumulate the evidence. And we have a score surface. And we find the peak of the score surface. And if it's above some threshold, then we accept that as the answer. Think about the r equals 0 plane. Well, if r equals 0, then I'm there. It's a circle of zero radius. So I can just contribute to this cell. But now suppose that r has some non-zero value. Well, that means that I'm not at the correct answer, but I'm higher up in here. And the possible locations for the-- so I'm here. And the possible locations are on the circle. And then when I change the radius, I'm going to get larger and larger circles. And these are the sections of that. So the parameter space can get complicated. These are simple examples. And if it's a high-dimensional problem, the parameter space can be expensive to maintain and, in the presence of noise, may not work that well. So the Hough transform is typically not used in its normal, originally defined mode to find bubble chamber tracks or lines or edges. But the idea of the Hough transform is used often as like a subroutine of something more advanced. And the key idea, again, is just that we have this parameter space. And we quantize it, and we accumulate evidence in it. And it doesn't need to be about edges or something like that. OK. Now, we didn't say much about it in this patent, but it's an important part of the patent. And right in figure 1, it started off by saying, basically, low-pass filter, subsample. So let's talk a little bit about sampling, subsampling, low-pass filtering. And that was part of the working at multiple scales. Now, there are different motivations for working at multiple scales. One is that you can reduce the computation by working at lower resolution. So if you can get the result you want at lower resolution, it makes a lot of sense to do so.
Another reason is that sometimes, features like edges or texture are apparent at one particular resolution level, but they're not so obvious at other resolution levels. So we think of an edge as a kind of ideal step edge. But we saw, for example, in defocusing, it won't be an ideal step edge. And there will be a scale at which it's the most apparent. And at other scales, it might be such a slow, smooth transition that it's hard to detect. So let's talk about multiple scales and subsampling. So we already hinted at this, that it's often not that costly. So let's suppose we reduce the number of columns and the number of rows by a multiplication by a factor r, where r could be 1/2. But let's keep it general. So then the amount of work, and also the amount of space if you keep things around, is going to go as nm plus r squared nm plus r to the fourth nm, et cetera, a geometric series that sums to nm over 1 minus r squared. So the overall work is larger than what we have for just the higher-resolution image, but not by much. So let's look at r equals 1/2. I think we already looked at that case. Then we get 4 over 3 nm. So in that case, it's just like 30% more work. It's not huge. How about r equals 1 over square root of 2? Then r squared is 1/2, so 1 minus r squared is 1/2, and flipping that, we get 2nm. So that's more work. But this turns out to be important. So how do we subsample? The obvious thing is we just take, for example, 2-by-2 blocks and take the average, and that's the new value. And we find that that's actually not a very good way of doing it. But conceptually, we can think of that as a very simple subsampling method. And so why would we go to something else? Well, it turns out that's very aggressive. And you can find serious aliasing problems when you squish things down that much. So it might be better not to be as aggressive. And so 1 over square root of 2 is halfway there. And that one is used. So how do you do that? Well, here's one way. You know, what we're going to do is basically reduce the number of cells by 2.
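That geometric series is easy to check numerically. A quick sketch, normalizing the full-resolution work to 1:

```python
# Total work for an image pyramid: nm + r^2*nm + r^4*nm + ...,
# a geometric series that sums to nm / (1 - r^2) for r < 1.

def pyramid_work(nm, r, levels=60):
    """Sum the per-level work over a (deep) pyramid with scale factor r."""
    return sum(nm * r ** (2 * k) for k in range(levels))

nm = 1.0                                # full-resolution work, normalized

half = pyramid_work(nm, 0.5)            # r = 1/2: halve rows and columns
root2 = pyramid_work(nm, 2 ** -0.5)     # r = 1/sqrt(2): the gentler pyramid

print(half)    # about 4/3: only ~30% more than one level
print(root2)   # about 2: twice the single-level work
```

So the gentler 1/sqrt(2) pyramid costs twice the base level instead of 4/3 of it, which is the price paid for the reduced aliasing.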
So over here, we reduce the number of cells by a factor of 4 going from the 2-by-2 blocks to the 1. So can you think of a way of subsampling this grid so that you reduce the number of cells by 2 rather than 4? Anyone play chess or draughts? And so a checkerboard will do it. So we can do a red, black or red, white-- red, black. So if we just retain one of the colors, one of the two colors, by definition, then, we've halved the number of cells. And it's probably not the first thing that came to your mind, because it doesn't look like it's a nice rectangular grid. But it is, because all we need to do is think of it as a grid running in that direction. So because these cells follow a regular pattern here, and then the next row is here, and same in this direction. So if we're willing to deal with a 45-degree rotation, this gives us a square root of 2 reduction. And the other thing it does is it increases the spacing by square root of 2. So in this case, the spacing increased by a factor of 2. So now the spacing between neighboring cells is multiplied by the square root of 2. And so that's a little odd 45 degrees. But think of it. You do it the next time, you're back in sync with the original. So on a related point, there's a method called SIFT, which is used for finding corresponding points in different images of the same scene. And it's used widely to patch together multiple images to produce 3D information about, for example, popular sites that tourists go to where you have thousands of pictures. And you are trying to take two pictures taken from two different positions, with different cameras, under different lighting conditions, and you're trying to match them up. There's this method due to Lowe called SIFT, which gets descriptors of points in the image that are attempting to be as unique as possible. And he uses much smaller-- much less aggressive. So he uses multiple steps per octave. So we've gone down a whole octave in one step, you know, a factor of 2. 
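The checkerboard subsampling described a moment ago is also easy to check in code: keeping one color of an n-by-m checkerboard leaves exactly half the cells, and the surviving cells form a square grid rotated 45 degrees with spacing sqrt(2). The 8-by-8 grid here is an arbitrary choice for the demo:

```python
import math

# Checkerboard (quincunx) subsampling: keep only cells where i + j is even.
# This halves the number of cells, and the kept cells form a square grid
# rotated 45 degrees with spacing sqrt(2) times the original.

n, m = 8, 8
kept = [(i, j) for i in range(n) for j in range(m) if (i + j) % 2 == 0]

assert len(kept) == n * m // 2   # exactly half the cells survive

# Nearest neighbors of a kept interior cell are the diagonal moves,
# at distance sqrt(2).
i, j = 4, 4
for ni, nj in [(5, 5), (5, 3), (3, 5), (3, 3)]:
    assert (ni + nj) % 2 == 0    # still on the kept color
    assert abs(math.hypot(ni - i, nj - j) - math.sqrt(2)) < 1e-12

print(len(kept))
```

Applying the same trick twice brings you back to a grid aligned with the original, now subsampled by a full factor of 2 in each direction, which is the "back in sync" remark above.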
Here we go down a half an octave. It takes two steps to go down a whole octave. And I forget what he recommends in his patent. It's, I don't know, six or something. So it's more like a musical scale, where an octave is divided into-- eight notes? Anyone know? OK. And so they're not equally spaced, but that's the idea. So yes, it might seem a little odd that we're not going so aggressive with a factor of 2. But actually, there are good reasons not to always do that. And, you know, SIFT is a classic example of a situation where we do not subsample as aggressively. But we're out of time, so sorry. OK. I've got proposals for those of you in 6.866, from almost everyone. If you're one of those people who hasn't yet sent in a proposal, please send it in. And yesterday, I opened a cookie, and it said, slaying the dragon of delay is no sport for the long winded. So please take that to heart if you're one of the people who has not sent this proposal in yet.
MIT_6801_Machine_Vision_Fall_2020
Lecture_10_Characteristic_Strip_Expansion_Shape_from_Shading_Iterative_Solutions.txt
[SQUEAKING] [RUSTLING] [CLICKING] BERTHOLD HORN: Let's start with some announcements. We've had three homework problems and we're ready for the first take home quiz. The rules, again, are that it'll count twice as much as the homework problem and there's no collaboration, unlike homework problems. So that's next week. And then those of you in 6866, there's a proposal due on the 22nd and that's meant to be short and that's where you tell me what you'll be doing for the term project. And so the idea of the term project is to take some machine vision problem, preferably something we discussed in class, and implement it in some way, your choice. It could be, I don't know, Windows, MATLAB, Android, whatever. It could also be something more theoretical. It could be some mathematical solution to some machine vision problem. Most people opt to do some kind of implementation and it's very flexible. I mean, if you find that you can use OpenCV to implement part of what you want to do, go ahead and do that. But of course, then your contribution has to be on top of that-- not just using OpenCV, but doing something useful with it. And if you have problems coming up with ideas, send me email. What will happen with the proposal is I will take a look at them and if I can point you to sources that might be helpful in implementing that project, then I will do so. OK. And we're just about to finish our discussion of how to extract stuff from image brightness and in particular, shape from shading. And it's a little bit abstract, a little bit mathematical, and we'll soon have a big change of pace when we start talking about industrial machine vision. And of course, we can't cover everything, and we'll take a different approach to covering it. So rather than use published papers or textbooks, we'll look at patents. And part of the reason for that is that in our world, you publish papers. That's what you get credit for. In their world, you don't publish. That's what you get credit for. 
So when you do see what they're doing, it's in the patents where they're trying to cover themselves, protect themselves from somebody else using that same idea. So that'll be a big change of pace. And in the process we'll learn a little bit about patents and patent language, since that's an important topic if you're an entrepreneur involved in a startup or something of that sort. So obviously that's going to be a little different from partial differential equations and some of you may be looking forward to that. But let's finish with the partial differential equations. So where are we? Oh, OK. Well, for example, you could implement time to contact on your Android phone and in that case, you'd use Android Studio and I could supply you with a dummy project that's just a shell so that you don't have to write all of those files for Android Studio. So that's an example. You could implement some of the subpixel methods that we'll talk about for edge detection and use whatever, MATLAB, whatever's convenient. So that's an example of a project. Another example, more theoretical, would be we've talked about shape from shading in the context of particular types of reflectance maps like Hapke and you could implement-- you could work out the details for a different type of reflectance map. That would be a more abstract mathematical project. And I think what I'll do is I'll pull together some of those and maybe say something about them on the Stellar website. OK, so the first part of the term we were focusing mostly on image projection, perspective projection equation, and derivatives, motion, motion in the world, motion in the image, and then we switched to one thing that we can do with image brightness measurements, which is the other half of the image formation. And in particular, we're looking at shape from shading. And so far we've solved the problem for a very particular case, which is the Hapke model of surface reflectance.
And at the core of all of it is the image irradiance equation, which basically says that brightness at some point in the image is the reflectance map corresponding to that surface orientation. So here we're focusing on the dependence of brightness on surface orientation. And as we mentioned, it depends on illumination, it depends on surface material-- that's where the reflectance map comes into it-- and it depends on the geometry, in particular, surface orientation. And it's a local thing, so that's good. It means that the brightness measurement at a particular point in the image typically depends on what's happening at the corresponding point on the object. And the reflectance map was our way of summarizing the detailed reflecting properties which, ultimately, are captured in the BRDF. So we had the bidirectional reflectance distribution function, but we built on that to get a somewhat easier-to-manage reflectance map. So the BRDF depends on four parameters. The reflectance map only depends on two, but it's built on top of the BRDF. OK. So we looked at a particular case which is that of reflecting properties of the moon, the maria of the moon, and the other rocky planets and in that case, we found that we could solve this problem in a particular direction. So in the case of the moon, if we take the ecliptic plane, then that's the direction along which we can actually determine the surface orientation. So we can integrate out in certain directions and we can't integrate at all in the direction at right angles and I mean, typically, we'd be looking at some small patch, but this gives us an idea of what directions we can perform this in and what not. OK. And what we ended up with is a set of equations that take us from point to point on the surface and first of all, x and y are varying in this way-- and I have to apologize. I think maybe the last time I ended up with those reversed.
I had the first one being qs and ps, and this corresponds to that angle of rotation where we rotated one coordinate system to the other. And then what's the change in height? Well, according to the chain rule, which looks intimidating, it's just p times the step in x plus q times the step in y. So we get-- well, just ps p plus qs q. OK, so this is the rule we can use to take a small step. We take a small step in the image based on that and that corresponds to a small step in height. And I forgot to mention it here, but we're now assuming orthographic projection. So that's a point that's important which is that once we switch to dealing with brightness, we switch to orthographic projection. Why? Well, because it makes everything much easier. All of this can be done with perspective projection and it originally was, but the math gets messy. And so the thing to remember is that pretending we have a telecentric lens-- we talked about telecentric lenses-- and using an orthographic projection corresponds to being very far away, as we are when we're looking at the moon and the light source is very far away, and we have a very small visual angle. And it simplifies things. It's just like Lambertian, you know? It's not that these methods are restricted to Lambertian-- well, in fact, we're talking about non-Lambertian here. It's just that you can do some interesting things if you make that assumption and then generalize from there. OK, so that's the rule. And basically we have three ordinary differential equations that we're going to solve numerically, and we don't need very sophisticated methods like eighth order Runge-Kutta or something. We're just going to do forward Euler or in other words, if you have the slope and you have a step size, you just multiply the slope by the step size to see how much higher you've come. So that's the method we're going to use. And of course, it's not terribly accurate, but if we make the step small enough, it's good enough.
And the measurements we're dealing with are themselves noisy, so it doesn't make sense to apply a method that's good to 12 decimal places when we start off with things based on image brightness measurements. OK. Now, how do we employ this? Well, we need to tie it to brightness. So somehow brightness has to feed into this equation. Well, for the Hapke type surface, we have this kind of dependence-- or actually it was cosine theta i over cosine theta e. And in our arrangement for orthographic projection, it's very convenient to have the viewer up along the z-axis. OK. And then we can plug in terms of p and q. So that's in terms of angles and in terms of unit normals, but we would like to express it in terms of p and q. Then you remember that n dot s came out to be this thing. And let's see. We're going to take the square root of that and then we're going to divide it by n dot v, where v is the same as z, so that's n dot z. So that's 1 over that and that conveniently cancels out so we'll end up with that. So this is our r of p and q and by the assumption here about the image irradiance equation, that's e of x and y. So we can write e of x and y is this thing. And you can see the term we're looking for is right in here, ps p plus qs q, so we need to square the whole thing, get rid of the square root, and then multiply through by rs and subtract the 1. So we end up with-- OK. So for this particular surface, there's a direct relationship between the quantity we can measure, e, and the thing we need to continue the solution. Right? So what's rs? Well, rs is just dependent on the source position. It's a constant. It just avoids having to write that all the time. So we just take the brightness, we square it, we multiply it by rs, we subtract 1, and there's the slope of z in the direction we're marching, and that's it. We just march along from one point to the next, adjusting x, y, and z as we go. And at each point we-- do we know the surface orientation? Well, a bit of it.
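Here is a numerical sketch of that marching rule for a Hapke surface. The test surface z = (x^2 + y^2)/2 and the source parameters ps = 1, qs = 0 are my own choices for the demo; brightness is generated from the true gradient, and then only the brightness E is used to rebuild the height profile:

```python
import math

# Shape from shading for a Hapke surface, along one characteristic.
# Test surface (assumed for the demo): z = (x^2 + y^2)/2, so p = x, q = y.
# Brightness: E = sqrt((1 + ps*p + qs*q) / rs) with rs = sqrt(1 + ps^2 + qs^2),
# so E^2 * rs - 1 = ps*p + qs*q, exactly the slope needed for the step.

ps, qs = 1.0, 0.0                       # source direction parameters (assumed)
rs = math.sqrt(1.0 + ps ** 2 + qs ** 2)

def brightness(x, y):
    p, q = x, y                          # true gradient of the test surface
    return math.sqrt((1.0 + ps * p + qs * q) / rs)

# Forward Euler along the characteristic: dx = ps*dxi, dy = qs*dxi,
# dz = (E^2 * rs - 1) * dxi. Start at the origin with z = 0.
x, y, z = 0.0, 0.0, 0.0
dxi = 0.001
for _ in range(1000):                    # march until x reaches 1
    E = brightness(x, y)
    z += (E * E * rs - 1.0) * dxi        # brightness gives the slope directly
    x += ps * dxi
    y += qs * dxi

print(z)   # close to the true height 0.5 at x = 1 (forward Euler error ~ dxi/2)
```

Note how the brightness measurement feeds in only through E^2 * rs - 1; no knowledge of the surface gradient at the current point is needed, which is the special property of the Hapke case.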
We know the slope in that direction. I mean, that's what we're exploiting. But as we indicated, we don't know anything about the slope in the other direction. So no, we don't know the surface orientation based on this and we need something else to do that. Now since each of these profiles is going to be independent to actually get z as a function of x and y, a real description of the surface, we need to somehow have an initial condition for these differential equations. So x and y, well, we pick some point in the image to start, but what about z? Well, under our assumption of this image formation model, there is no dependence on z. The dependence is on the slope of z, right? So actually if we moved this object in the z direction, its image wouldn't change. Well, under perspective projection it would change in size, but we are not dealing with perspective projection, we're dealing with orthographic projection and so its size doesn't change, so there's an ambiguity. So for each of those curves that we're computing we need an initial condition, so actually, we need an initial curve. And so in 3D, how do we do that? Well, here's a way of defining a curve. We have some parameter that varies along the curve. Could be arc length or some arbitrary parameter, eta, and for each eta we give a position in space x, y, and z and that's a curve. And so let's assume we have that initial curve, so some sort of curve like this. And then we can start at any point on that curve and integrate out those equations numerically, and there's our surface. And as we mentioned, we can actually go in both directions from the initial curve. And so we end up with z of x,y or actually z of eta and psi because the way we've parameterized it is one parameter goes along the curve, the other parameter goes along the initial curve. So it's a surface in 3D and it takes two parameters to parameterize that. 
So that's pretty straightforward, I hope, that in that particular case, we have some very special properties. One of them is that we can locally determine the slope in a particular direction and that means, of course, we can go in that direction and build up a curve. And that's not going to be true in the general case, so what do we do about the general case? So we'll still start off with image irradiance equation, which says that the brightness at a particular point in the image is dependent on the surface orientation at the corresponding point on the object. And we'll try and follow this model here. So suppose we had some particular point, x, y, z, and then we take a small step. And in the image, let's suppose the step size is delta x, delta y, and for the moment we won't say which direction we're going, we'll just leave that unknown. And to construct the solution, what we need to know is, what's z? How is z going to change? And so of course, we have that relationship. The change in z is dz dx times delta x plus dz dy times delta y and so we can calculate the change in height if we know p and q. And suppose we know p and q, then we're at a new point on the surface and we can repeat. We take a small step in x and y and now over here we kept on going in a certain direction. In this case, we may need to choose a direction in some particular way, but we need to know p and q. OK, well, we can assume that we start off not only knowing x, y, and z, but also the surface orientation. So we could have x, y, and z and p and q, but then how do we update every step we need to update p and q? So here we have updates rules for x, y, and z. For x and y, we're the ones controlling the step and then z, the change in z, is given by this equation. So well, we can use the same chain rule trick. We can say that delta p is p sub x delta x plus p sub y delta y and delta q is q sub x delta x plus q sub y delta y so that we can update p and q as we go along. 
So we're not only updating x, y, and z, but we're also updating p and q. So this is kind of interesting. Before, we had a curve in space. We were tracing out x, y, and z as we construct a solution. Now we've got more because at every point we know p and q, which means we know the surface orientation. So what we're really constructing now is a strip. Well, not a very elegant word. And this is called a characteristic strip, the characteristic of that differential equation. And that means that we're carrying along surface orientation so if I wanted to, I could erect surface normals as I go along. So that's obviously more information than just a curve. So that's what we'll be doing. We'll update not just x, y, and z, but p and q, which we didn't have to do over here because of the particular properties of the Hapke type model. OK, but how do we do this? Well, in order to update we need to know p sub x, p sub y, q sub x, and q sub y. And we can write this another way in matrix form. So there are two linear equations, two unknowns, we can write it with a 2 by 2 matrix. And so what are these r, s, and t? So r is p sub x, which is really the second derivative of z. s is p sub y, which is q sub x, which is this one. So the quantities we need in order to use this update rule are the second partial derivatives of height, and those are interesting because they correspond to curvature. So the first derivatives have to do with surface orientation and the second derivatives have to do with how quickly the orientation is changing and that, of course, is curvature. And for a 3D surface, curvature is a little bit more complicated than it is for, say, a curve in the plane, and you need three numbers to describe it. So for a curve in the plane, you can just give the radius of curvature or the inverse of that, which is called curvature-- just one number. But for a 3D surface it's a little bit more complicated, and you need this whole matrix of second order derivatives and it's called the Hessian matrix.
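Putting symbols to the board work (my transcription, writing delta for the small steps), the update rules are

```latex
\delta z = p\,\delta x + q\,\delta y,
\qquad
\begin{pmatrix} \delta p \\ \delta q \end{pmatrix}
=
\underbrace{\begin{pmatrix} r & s \\ s & t \end{pmatrix}}_{H}
\begin{pmatrix} \delta x \\ \delta y \end{pmatrix},
\qquad
H =
\begin{pmatrix} z_{xx} & z_{xy} \\ z_{yx} & z_{yy} \end{pmatrix}
```

with r equal to z sub xx, s equal to z sub xy (same as z sub yx), and t equal to z sub yy.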
And in here I assume that the order of differentiation doesn't matter, that z sub xy is z sub yx, and that will be true for some reasonable surface-- won't specify the exact conditions for that. And of course, you can construct pathological things that don't satisfy that, but those are mathematical curiosities rather than real surfaces that we'll meet in machine vision. OK, so that's the curvature matrix. And so if we know the steps and we know the matrix, we can calculate the change in p and q and we can continue the solution. Well, that means that we should add r, s, and t to our menagerie of variables. So we're going to carry along x, y, and z, p and q, r, s, and t. Yeah, we can do that. Now how do we update r, s, and t, the second derivatives? Well, we'd use the third derivatives. So I think you can see where this is going. This is kind of continuing ad nauseam using higher and higher order derivatives and so that's probably not going to work. In fact, we end up with more unknowns. Here we've got-- before we didn't know p and q, two unknowns. Now we don't know r, s, and t, three unknowns. So it's not going in a good direction. But what's neat is that we haven't yet even used our image irradiance equation. We haven't looked at the image. So far we're just playing with derivatives, so that's obviously a flaw in our reasoning here. We're only looking at the derivatives of z, we're not using image brightness measurements at all, so that doesn't make any sense. So let's see what we can do with the image irradiance equation and in particular, we're often interested in the brightness gradient so let's look at the brightness gradient. So which way do I want to write this? Again, by the chain rule, we'll get the derivative of R with respect to p times dp dx plus the derivative of R with respect to q times dq dx. And of course, these are the very quantities that we've run into over here. So this is what we call r, this is s, that's s, and that's t. So that's an interesting analogy with this.
We can write this in matrix vector form. That's the same matrix. So that matrix is important and it makes sense. OK? If you have a surface with constant surface orientation, the image will be constant brightness in this model where brightness depends only on surface orientation. If we are looking for a gradient, we're looking for changes in brightness and those are only going to happen if there's changes in surface orientation. Changes in surface orientation correspond to curvature, right? So at one place I'm going downhill and then it's flat and then it's going uphill. Second derivative is non-zero, and exactly the second derivative matrix controls that. That's the precise statement of how that all works. Well, if we look at these two things juxtaposed, there's that common matrix H. Now, if we could somehow figure out what H was, we could just plug it in here and we'd be golden. We can implement this method because we take a small step in the image delta x, delta y, multiply by this matrix H, and out comes the small change in p and q. So we can have update rules for x, y, z, p, and q and we're done. So I guess the question is, how do you solve this thing for H? So what have we got? So this we can get from the image, the brightness gradient. So that's available. And then this we get from our reflectance model, so this is from the reflectance map. Assuming we know p and q-- and we said we're carrying along x, y, z, p and q, and so if we have a model of how the surface reflects light, we have a reflectance map, we can just take the gradient in the reflectance map, which is R sub p, R sub q. So we have this vector and we have that vector. Can we solve for H? So one of the things we use a lot is equation counting and constraint counting, unknown counting. So what have we got? Well, these are two equations, two linear equations. So two equations, how many unknowns? Three, right? We've got r, s, and t. So no, we can't do that. That's too bad.
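The matrix-vector form being referred to, as I transcribe the board work, is

```latex
\begin{pmatrix} E_x \\ E_y \end{pmatrix}
=
\begin{pmatrix} r & s \\ s & t \end{pmatrix}
\begin{pmatrix} R_p \\ R_q \end{pmatrix}
```

which is two equations in the three unknowns r, s, and t, and that is exactly the counting problem just mentioned.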
We had a very good thing going there, because here we can get this from the image, we can get that from the reflectance map, and if we could solve for H we could plug it in here and we'd be done. OK, so here's the whole trick of the method, which is that because H appears in both of these equations, we can make some progress. We won't be able to solve for H, but that's not really our aim. Our aim is to get an increment in p and q. The only reason we want H is because this is our formula for computing the change in p and q. Well, maybe we can't solve for H, but maybe if we pick delta x, delta y in a nice way we can use this formula. Right? And so how would we pick it? Well, we'd want to pattern match these two things, right? So can you see what's going to happen? What's the direction that we're going to go? What delta x and delta y would you use so that we can actually compute delta p, delta q using that formula? So pattern match. OK, how about this equals that? Right? So let's try. And just so we can control the step size, let's multiply it by some small quantity. So that's it. I mean, before, with Hapke, we had a direction that we had to pick, and remember, we couldn't compute the profile in any other direction. The direction was given. But what was special about Hapke was that direction was the same everywhere, so it was built into the whole solution. Well, now maybe the direction is going to change as we explore the surface. OK. So what happens if we try this? Well, then we plug that into that equation and we get this and we get-- So again, there's a particular direction that we can make progress in and that's this direction. And if we go in that direction, we can figure out how to change p and q and that's it. We're done. So if we want to summarize all of this, we have the equations for dx and dy, that's just from here. And then we've got the ones for dp and dq, let's leave out dz for the moment, and of course, we're interested in z so we need to write that one as well.
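The trick, written out in my notation with step parameter psi, is to take the image step along the reflectance-map gradient, at which point the unknown Hessian collapses out:

```latex
\begin{pmatrix} \delta x \\ \delta y \end{pmatrix}
=
\begin{pmatrix} R_p \\ R_q \end{pmatrix}\delta\psi
\quad\Longrightarrow\quad
\begin{pmatrix} \delta p \\ \delta q \end{pmatrix}
=
H \begin{pmatrix} R_p \\ R_q \end{pmatrix}\delta\psi
=
\begin{pmatrix} E_x \\ E_y \end{pmatrix}\delta\psi
```

so H itself never has to be computed.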
So what we've got is five ordinary differential equations and they're particularly simple first order equations. And so we explore the surface along these curves and actually along these strips, and those are the equations that generate that strip. So it's very simple. As we go along, we have the image brightness, we look at the brightness gradient, and that's going to tell us how to update p and q. Since we're carrying along p and q, we know where we are in the reflectance map so we can compute R sub p, R sub q. That tells us the step to take, the update in x and y. And then, well, there's also this output rule, so to speak, which tells us how much the height is changing. And that's just based on p delta x plus q delta y, same old formula we've used all along. I separate this equation from the rest because the others are our dynamic system where the first two feed into the second two and the second two feed into the first two. So if you're thinking control theory and stability and stuff like that, that's the interesting part, that they're these two systems that are feeding into each other. And it's kind of weird, but what happens is that in the image space and the gradient space, we have this way of going in gradient directions. And so let's plot the isophotes, just some random isophotes, in those two spaces. So I don't know, this one here could be Lambertian. I don't know, this is some-- whatever that is. These are the isophotes in the image. And what this is saying is if we're at a particular point-- suppose we're here in x and y and we're also carrying along p and q, so we're also somewhere in-- let's suppose we're here. Then the step we take is based on the gradient, which is perpendicular to the isolines. And so the step we take here though, weirdly, is dependent on the gradient there. So this is the actual step we take and then the step we take in p and q is strangely dependent on the gradient there. So we actually take a step in that direction.
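Here is a minimal sketch of those five equations under forward Euler. The setup is my own synthetic example, not from the lecture: an SEM-style reflectance map R = 1 + p^2 + q^2 viewing the paraboloid z = (x^2 + y^2)/2, which produces the image E(x, y) = 1 + x^2 + y^2, so both gradients are available in closed form.

```python
# Sketch of the five characteristic-strip ODEs under forward Euler.
# Synthetic setup of my own (not from the lecture): SEM-style map
# R(p, q) = 1 + p^2 + q^2 viewing z = (x^2 + y^2)/2, so the image is
# E(x, y) = 1 + x^2 + y^2 and both gradients are known in closed form.

Rp = lambda p, q: 2.0 * p          # reflectance-map gradient
Rq = lambda p, q: 2.0 * q
Ex = lambda x, y: 2.0 * x          # brightness gradient of the image
Ey = lambda x, y: 2.0 * y

def strip_step(x, y, z, p, q, dpsi):
    """One Euler step: dx=Rp, dy=Rq, dz=p*Rp+q*Rq, dp=Ex, dq=Ey."""
    rp, rq = Rp(p, q), Rq(p, q)
    return (x + rp * dpsi,
            y + rq * dpsi,
            z + (p * rp + q * rq) * dpsi,
            p + Ex(x, y) * dpsi,
            q + Ey(x, y) * dpsi)

# Start on the surface with the correct orientation (p, q) = (x, y).
x, y, z, p, q = 0.1, 0.0, 0.005, 0.1, 0.0
for _ in range(10000):
    x, y, z, p, q = strip_step(x, y, z, p, q, 1e-4)
```

Starting on the surface with the right orientation, the strip stays consistent: p keeps tracking x, and z keeps tracking (x^2 + y^2)/2 up to forward Euler error.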
So it's a little weird. You're not going uphill. You're not just doing gradient ascent or gradient descent, but you're going in the gradient in the other diagram. Anyway, this makes it clear how to implement that. You just have to have these two things, the image E of x and y and the reflectance map R of p and q. And once you plunk down somewhere in there, you just follow this rule and it'll trace out a curve in the image. And indirectly, it'll trace out a curve in 3D and actually, it'll trace out a whole strip because all along we know the surface orientation. And that's a little different from the Hapke case where we don't know p and q as we go along. We only know one component in a certain direction. OK. So we've reduced our image irradiance equation to those simple coupled ordinary differential equations. And this is a partial differential equation. Why is that? Well, because p is dz dx and q is dz dy and we're just making things look less intimidating by using these abbreviations p and q, but this is really a first order nonlinear partial differential equation. And in physics you run into loads of partial differential equations, but they're generally second order. Those are the ones of interest, heat flow, wave propagation. So they're typically second order and they're typically linear and here we got something unusual. We have first order, which you think should be simpler, and it's nonlinear. And so if it wasn't for that, I wouldn't have to explain all of this because you would have learned this in physics. But physics does second order linear PDEs and not first order nonlinear PDEs, so we've just come up with a method for solving those and that's what we need to do in shape from shading since the brightness depends on the first derivative. OK. Now this is general for any R of p and q. Let's just take a look at it for some particular surface properties that we've been studying.
So one of them, of course, is Hapke, and that's a special case we solved up there. But let's just see how the general case reduces if we assume this for the reflectance map. OK, so here. So that's a reflectance map for Hapke. So what do we need? We need R sub p, so we differentiate this with respect to p. It's a square root, so we're going to get a 1/2 divided by the square root. Let's take out the 1 over square root of rs first. It's just a constant. And then the rest is going to be 1 over-- right? Because we have something raised to the 1/2 power so you differentiate that, you get 1/2 times that thing to the minus 1/2 power. And then we have to differentiate this term inside with respect to p and we get ps. And of course, R sub q is very similar. And then the other one we need is p R sub p plus q R sub q, that's just going to be the same thing. Now these three share this multiplier and that multiplier, as we mentioned last time, is really just controlling how fast you go along the curve as you solve it. So you could change that. And I mean, it would change numerical stability and how accurate the solution is, but in the infinitesimal case it wouldn't change the solution. So actually, we could remove these three terms as long as we do it on all three equations. And then we have Rp is proportional to ps and Rq is proportional to qs. And so our update rules-- the update rule for x is just ps, the update rule for y is qs, and that's what we had up there. So the general case reduces down to this pretty easily, particularly since, I guess, this is, let's see, rs E squared minus 1, as we showed up there somewhere. OK, so that's good. The general case reduces correctly to that special case that we solved first. Let's look at some other case. So we said that in the scanning electron microscope, we had dependence on slope. Right? Remember that unless you do something strange to your microscope, it's rotationally symmetric in imaging.
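For the record, the Hapke reduction in symbols (my transcription, with k for the shared multiplier): dropping k only rescales the path parameter, leaving

```latex
R_p = k\,p_s,\quad R_q = k\,q_s
\quad\Rightarrow\quad
\frac{dx}{d\psi} = p_s,\quad
\frac{dy}{d\psi} = q_s,\quad
\frac{dz}{d\psi} = p_s\,p + q_s\,q = r_s E^2 - 1
```

which is exactly the fixed-direction profile scheme from before.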
And so if you look at a reflectance map for that instrument, the brightness is only dependent on the slope, the magnitude of the gradient, not the gradient direction. So OK, so you know, what's f? Well, that depends on the instrument and also the material of the object. And so you need to calibrate that. But let's leave it general. Let's just leave it as f. OK, then to use this method we need R sub p and R sub q. We differentiate with respect to p and we get that. And differentiate it with respect to q, we get that. And so this is going to tell us our update for x and this is going to tell us the update for y and we also need p R sub p plus q R sub q, which is going to be this constant times-- and this is an update for the height. So we can certainly apply this method to scanning electron microscope images. And again, there's this constant multiplier here. We'll talk about this some more, but-- oh, not the whole thing-- but that only affects how fast we move along the solution, so we could actually simplify things by getting rid of that. And then the equations are very simple. What is it telling us? It's telling us that the direction we're going is the gradient. We're going uphill, so the gradient is the direction of steepest ascent. So if I'm standing on the mountain-- you know what the gradient is, so that's where we're going. All uphill. As we said, we can reverse the direction. We can make delta psi be negative and go in the minus p, minus q direction. So it's very simple. And then here's the rule that tells us how much we're updating z. So scanning electron microscope is a little bit simpler than a Lambertian, but rather than solve this one separately after doing Hapke, we just specialize the general case. OK. And you can do the same for Lambertian. Unfortunately, it gets messy because the Lambertian has that square root and 1 plus p squared plus q squared. But of course, you can do it. OK, so a couple of things.
One thing to remember is that we're dealing with a solution that generates characteristic strips. So we're not just exploring the surface along curves, but along the curve we also know the surface orientation. And then another related concept is that of a base characteristic. OK? So the characteristic strip has x, y, z, p, and q along the strip and the base characteristic is just-- let's see-- the projection into the image plane. And to some extent, that's of interest to us because that's what we can observe. We have the image, we're trying to explore it, and for one thing, we want to make sure that we cover much of the image with these curves. And of course, the ultimate goal is the surface in 3D, but we're also quite interested in what happens in the image plane, that we're actually covering that. OK. So what might they look like, these base characteristics? So here's our image and for the moment, we're assuming we have some sort of initial curve and then these base characteristics go up from there and maybe the other direction. So as I mentioned, one reason you might be interested in these base characteristics is because you want to make sure that you're exploring as much as possible of the image and not leaving out some areas. And also you might say, well, this is no man's land. I should really be interpolating another one in here. And in some other areas, conversely the base characteristics might get close together and you might say, well, that's unreasonable. I should really have pretty much the same height there, so just drop one of them or merge them, take their average. So in terms of implementing this, you'd be looking at these base characteristics and interpolating and removing as required. Now another issue is this sounds very sequential, which is unpleasant from the point of view of implementation because it could take a long time to do this, and also unpleasant from the point of view of biological interpretation.
But it turns out that the solutions along these curves are independent. I mean, each of them satisfies a set of differential equations and the only way they interact is that, well, they all sprout from the initial curve. So actually you could have a process running along each of these curves, so it is parallelizable. And that's actually kind of implied by what I said a minute ago because if you're going to interpolate new characteristics, it's best that you do it as you grow and say, oh, wait, these two are getting too far apart so let me interpolate a new one there or if they get too close together, let me merge them. So it's not full parallelism. It's not like you can do something at every pixel at the same time, but it's a significant improvement over complete serial computation. So it's like a wavefront that's propagating outward. So if we have some initial conditions, you can imagine that as the solutions progress we could keep them moving at more or less the same speed and then look at neighboring ones in order to improve and interpolate and what have you. So that means that they ought to move at similar speeds, so that gets us to this question of speed. And I mean, in terms of the numerical solution of these equations, it's just the step size, you know? What step size? Well, clearly if the step size is a hundredth of a pixel, that's overkill. That's not going to work very well because the brightness doesn't change much in the hundredth of a pixel. Conversely if the step size is a hundred pixels, that's probably completely wrong because you're missing all of the in-between brightness variations, so you'd like to have a reasonable step size. And so let's look at what we can do in terms of controlling the step size. And as I mentioned, all we need to do, really, is multiply all of our equations by the same quantity and all it does is it changes the increment. And so let's look at some simple cases. So constant step size in z.
So that's interesting because that means that you're stepping from contour to contour. Think of a contour map. If we implement that, then these would be contours of constant height on the surface and what we're doing is all of these solutions, as they grow, are going from the contour at a thousand meters to the contour at 990 meters to the contour at 980 meters and so on. And so that's an interesting and useful way of controlling the step size. And what do we need to do that? Well, we've got pRp plus qRq in the equation for z, which just disappeared. And so we just divide by that. Why is that? Well, because then dz d-- let's call it psi dash-- is one. Right? So we had this equation over here, dz d psi is pRp plus qRq. Now if we just divide by that then the rate of change is-- the derivative is one and that means that we have constant increments in the z direction. Of course, we have to divide all of the other equations by the same factor. So that's an easy to visualize change that has potential benefits. We just multiply all of the-- divide all of the equations by that, and then we're stepping from contour to contour as we explore the surface and that makes it a little clearer how we're exploring the surface. OK. But we can pick something else. For example, we were talking about steps in terms of pixels. So secondly, we can look at constant step size in the image. So we want delta x squared plus delta y squared to be a constant, and those are proportional to Rp and Rq. So we divide by-- I shouldn't have wiped out these equations yet because it's handy to have them at this point. dx, d-- So that assures that square root of delta x squared plus delta y squared is going to be constant if you make it 1. Oh. OK, so that's another way where instead of moving in constant increments in height, the intervals in the image are fixed in size. And well, a couple of issues with that.
One of them is that those curves may run at different rates so that one of the curves is getting ahead of the other because we're not tying them together in height or anything, we're only tying them together in how far we are from where we started. So that's a problem. And then another problem is we're going to divide by that. Of course, if that's zero then we're out to lunch. And I make a note of that now because we'll need that in a minute. OK, constant size steps in the image. How about constant size steps in 3D? So that means we want that to be 1 and so that means we need to divide by that quantity. And so for that, where's that come from? That's this thing here. So this gives us delta x, this gives us delta y, this gives us delta z. And if we want the sum of the squares of those to be one, then we divide by that. And again, this has the same problem or special case, that if Rp and Rq are zero, then that is zero and so on. So how about if we step along isophotes, contours in the image-- contours of brightness. So here we had contours on the surface. z was constant along each of those curves. But it might be interesting to step in the image from one brightness level to another. So well, I won't go too much into detail of that, but we basically then have to divide by that quantity. Remember the two gradients, the one in the image and the other one in the reflectance map? Well, that's the dot product of those two. I don't know if that means anything, but it's just interesting to note. And we won't go into too much detail, but obviously that's another interesting speed control in that we're moving from contour to contour in the image and one advantage of some of these is that they tend to make it easier to tie together neighboring solutions. So in this case, these curves, these wavefronts, would be just isophotes in the image plane. OK.
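All of these speed controls amount to dividing the five increments by a common factor. Here is a sketch of the constant-image-step variant, my own illustration: the divide-by-zero problem at stationary points of R shows up as the `speed` term.

```python
import math

# Sketch (my own illustration) of speed control: divide all five increments
# by a common factor so each step has constant length in the image plane.
def controlled_increments(p, q, Ex, Ey, Rp, Rq, dpsi):
    speed = math.hypot(Rp, Rq)     # image-plane speed; a stationary point
    s = dpsi / speed               # (Rp = Rq = 0) would break this, as noted
    return (Rp * s, Rq * s,        # dx, dy: a step of length dpsi in the image
            (p * Rp + q * Rq) * s, # dz
            Ex * s, Ey * s)        # dp, dq

dx, dy, dz, dp, dq = controlled_increments(0.3, 0.4, 1.0, 2.0, 0.3, 0.4, 0.01)
```

The other variants (constant step in z, in 3D, or along isophotes) only change what goes in the denominator.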
And in terms of the numerical analysis aspect of it, again, we're not doing any fancy methods for solving ordinary differential equations. We're just saying if the slope is m and we take a step delta, then the change is m times delta which is the lowest order, crudest thing you can do. But as I mentioned, we don't really expect to get much better by using something very sophisticated. You may have heard that there are recent results about the three-body problem. So you all know that if you have two bodies then they orbit in elliptical fashion and the ellipses are stable and all that good stuff starting from Newton and Kepler and Copernicus and so on. If you have three bodies, chaos ensues. All sorts of things can happen, and mostly the orbits are not periodic. And so even if you want to know-- suppose you live in a world with three suns and they're orbiting each other and suppose you want to know whether at some point one of them will run into another and blow up your world, there's no periodicity so you can't use simple methods. Anyway, there's a wonderful science fiction book called The Three Body Problem where people live in such a place. And curiously, just recently, someone has actually put to good use one of these gigantic supercomputers. So you know nations compete with each other in various stupid ways and one of them is I can build a bigger computer than you. And so periodically the US has the biggest and then, I don't know, Japan, and then China. So I think at this point it might be China. Anyway, a lot of times then you ask, well, OK, they've got this fantastic computer. What are they getting out of it? Well, you can do some things. You can do weather simulations better than anyone else. You can solve some quantum equations of more than one particle on it. Well, what he did, this person, was the three body problem. And so he was interested in finding periodic solutions.
And for a hundred years it's been known that there's some periodic solutions, but they were very special. Things go in particular figure eight patterns and asteroid orbits, but there's a very small number of these solutions known. I forget what. I don't know, six or something. Well, he used the supercomputer to find, I don't know, 68, and you know, it's amazing. It's fantastic. They're wonderful orbits. And you might say, well, wait a minute, this is really something that should be done analytically because you can do the numerical simulation but how do you know that it's really, really periodic? Well, having the supercomputer, he is able to do things that mere mortals wouldn't normally do like approximate the Taylor series to a thousand terms, you know? You'd usually stop at two or three and maybe if you're on the computer you might do, I don't know, eight. But he didn't stop at any particular point. He simply kept on adding until he didn't need to go any further. And the same with the calculations, the eighth order Runge-Kutta, that's a very sophisticated method for solving ODEs. Well, he used a thousandth order method and in the process, he discovered these periodic orbits. Anyway, where was I going with this? Well, the point is that there are very sophisticated methods for numerically solving equations and if you're trying to say, for example, figure out how long our solar system is stable, it's very difficult. You need to use much more sophisticated methods than we do. And Gerald Sussman actually built a machine to do that, but because it's chaotic, you can't be sure. But he can say that nothing bad is going to happen in the next hundred million years, so you can rest assured that things will be fairly safe. Fortunately, we have a case where we don't need to have anything like that kind of numerical precision. OK.
We do need an initial curve though, so let's talk about-- which is a nuisance because the whole point is to explore the surface using optical machine vision methods, not go there with a measuring tape. And so having an initial curve is not desirable. I mean, it's better than actually having to measure the whole surface because you're only measuring one curve on it and then the rest is filled in using the image. But actually here we have an even worse problem, which is we're carrying along not just x, y, and z, but we're carrying along orientation as well so we get these characteristic strips. So shouldn't it also be an initial strip? In other words, we're going to be forced to supply not just x, y, and z, but also the orientation, and that makes it even worse. That means you have to measure that curve and also at every point on the curve, measure the orientation. Well, fortunately, that's not necessary and there are two reasons for that. One is that on the initial curve, we have the image irradiance equation. So we have E of x, y is R of p, q. You're on the curve, you look in the image, what's the brightness there? Doesn't tell you the orientation, but it gives you one constraint on the orientation. And if we look at our reflectance map, then it means we're on some curve. So we have one constraint. So it's not like it could be any p and q, it has to be one of those. But then the other thing is that we have this curve and we're told that that curve's actually in the surface. So that means that if I differentiate, dz d eta should be p dx d eta plus q dy d eta. Right? By the chain rule because p is dz dx and q is dz dy. And since someone magically gave me this initial curve, I can compute these derivatives, dx d eta, dy d eta, dz d eta and amazingly, this is a linear equation. So what's my job? My job is to recover the unknowns p and q, and I have two equations, two unknowns, and so there we are. I can solve for p and q.
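For a Hapke-like map, where the irradiance equation is itself linear in p and q, both constraints are linear, and recovering the orientation along the initial curve is just a 2 by 2 solve. A sketch under that assumption, with hypothetical values not from the lecture:

```python
import math

# Sketch of recovering (p, q) on the initial curve (hypothetical numbers).
# Two constraints: the Hapke irradiance equation, linear in p and q,
#   ps*p + qs*q = rs*E^2 - 1,
# and the statement that the curve lies in the surface,
#   p*(dx/deta) + q*(dy/deta) = dz/deta.
def pq_on_curve(E, dx_deta, dy_deta, dz_deta, ps, qs, rs):
    a11, a12, b1 = ps, qs, rs * E * E - 1.0
    a21, a22, b2 = dx_deta, dy_deta, dz_deta
    det = a11 * a22 - a12 * a21           # assumed non-degenerate
    return ((b1 * a22 - a12 * b2) / det,
            (a11 * b2 - b1 * a21) / det)

# Demo: made-up ground truth (p, q) = (0.2, -0.1) and tangent (1, 2, p + 2q).
ps, qs, rs = 0.5, 0.3, 2.0
E = math.sqrt((1.0 + ps * 0.2 + qs * (-0.1)) / rs)
p, q = pq_on_curve(E, 1.0, 2.0, 0.2 * 1.0 + (-0.1) * 2.0, ps, qs, rs)
```

For a nonlinear map like Lambertian the first constraint is no longer linear, which is where the Bézout counting argument comes in: the linear tangency constraint still cuts the solutions down to at most the order of the irradiance equation.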
Well, you might say this first equation is likely to be nonlinear-- you know, you look at the Lambertian equation. But there's one linear equation. So then by Bézout's theorem, what matters is what order is this equation. And if it's second order, that means you might have as many as two solutions, but two solutions is better than an infinite number of solutions. OK. So in practice, we don't really need an initial strip. We can get along with an initial curve because we can find the orientation using those two equations. OK. But we'd really like to get rid of this initial curve business. It's really annoying. And so what do we do? Well, it'd be great if there were some special points on the object where we know the shape, orientation, something. So here is our prototypical object, the image of this prototypical object, and so the question arises, are there some-- so in most places, we don't really know what the orientation is. Like we go here and we measure brightness e, I don't know, 23, and we go to the reflectance map and we get a contour. There's a constraint, but we don't know what the orientation is. So are there any places here where you could tell me what the surface normal is? The edge. Right. So the thing I draw here, I guess the official word is occluding boundary. Sometimes the image version of it is called the silhouette. Why? Well, because that's where the object curls around and the part over here is visible and then the part where it curled around is not visible and the terminator that separates them, I can draw a surface normal perpendicular to this curve and the surface normal at that point on the object will be parallel to that. So what I'm saying is that if I go all along the occluding boundary I can construct a vector in the image plane and on the object, the corresponding surface normal will be parallel to that. So that's different from other places where I don't have local information on surface orientation.
And of course, in perspective projection it's a little different, but we're talking about orthographic projection. OK. So I could perhaps use those as starting conditions. I could start my solutions there. Well, the problem is that the slope is infinite there, right? If you think about approaching that edge, you fall off. dz dx and dz dy become infinite. The slope is infinite, so that's obviously going to be a problem if we try to somehow incorporate that in an equation. So what's interesting is that the ratio is known because the ratio just defines this direction. And so that's a funny thing where p and q are infinite, whatever that means, but we know their ratio. But unfortunately, it turns out that we can't use that. We have these equations that tell us how p and q are changing as we take a step, but if the slope is infinite then that doesn't work. So the occluding boundary tells us something, but we can't start the solution there and we'll get back to using the occluding boundary. So that's number one. Now number two is if we look at-- imagine a beach ball painted white and the sun is behind you and you're looking at it. There'll be some spot on it that's brighter than any other spot and you can, from your knowledge of Lambertian surfaces, say right away what the surface orientation there is. Right? Right? Because its brightness is cosine of the incident angle. The cosine doesn't get bigger than one and it does so for zero angle, so that's when the surface normal and the direction to the light source are the same. So that's kind of a special thing and so unique. I mean, it doesn't happen anywhere else. So let's see how to formalize this unique, global, isolated extremum. So if I go back to, for example, a Lambertian surface, I have a reflectance map like this and here is my unique global isolated extremum. Right? So most brightness measurements don't tell me the orientation. If I measure this brightness, well, it could be any one of those.
If I measure this brightness, it could be any one of those. But if I measure that brightness, I have the surface orientation. So that's very special and these things are called stationary points. And why that? Well, because those are the places where the derivative is zero in the reflectance map. And we'll see there's another reason for calling them stationary points. So maybe we can start the solution there and get rid of this problem about needing an initial curve. Well, if it's an extremum, that means that r sub p and r sub q are zero. Well, if it's smooth at the extremum. And we could consider the case where we have some sort of nondifferentiable r of p and q, but let's not do that. Let's stay real. OK, fine. What's the problem with that? Well, the problem with that is that our five differential equations included these two. Right? So at this particular point, suppose I put my solution, my solver down at that point in the image-- corresponding point in the image. It's not going to go anywhere because r sub p and r sub q are zero. And actually, also if we consider the image itself-- so that's the reflectance map and here's the image itself. Well, corresponding to that point in the reflectance map, suppose here's my beach ball. There's this point. Well, that's an extremum in the image and so here, by the same argument, if e of x and y is smooth and that's supposed to be an extremum, then the derivatives there will be zero. And so what does that mean? Well, that means that dp and dq also don't change. Right? And since z is dependent on those, nothing changes. We're just stuck at that point. I mean, it would have been perhaps that, well, you had that point but p and q changed and after a while there would be a change in x and y. But no, that doesn't work that way. Everything is zero there. So stationary points are very interesting because they give us local information about surface orientation, but they don't directly allow us to start the solution.
We can go on with this. I said extremum rather than maximum because for Lambertian it's a maximum, but for the scanning electron microscope it's not. It's a minimum, right? For the scanning electron microscope, we had a reflectance map that looked like this and this was the magic point and there, the brightness is a minimum. Right? Remember, for those objects, the edges, the occluding boundaries, were bright in the image, and the part facing you was dark? So in the case of the scanning electron microscope, we do also have stationary points but they correspond to minima rather than maxima. But anyway, OK. So what to do? Well, if we can get away-- if we can get a little bit away from this point, then those conditions aren't going to be true anymore and those quantities may be small, but at least we can move. And of course we can control the speed. So suppose the quantities are small, big deal, we just multiply the step size. So as long as we can get away from that point, but how do we get away? So here's our stationary point. One thing we can think of doing is constructing an approximation of the surface. Let's suppose-- OK, so here's a story. We know the surface orientation there, of course, is one of those stationary points and now we want to get away from it a little bit so we can start the solution. So the idea is we want to start the solution from this curve. So we know the orientation so we can construct a small plane and I don't know, make a radius epsilon and then start the solutions there. Is that going to work? Well, if it's a plane, then all parts of it have the same orientation, they all have the same brightness, and we're just exactly the same problem out here than we were there. So that idea doesn't quite work. So the answer is, well, let's have a curved surface. So we still have that special point, but now let's suppose that the surface is curved and we're going to construct a small curved shape around that point and we'll start the solution from there.
So this sounds kind of, I don't know, specialized, weird. Why these points and so on? But actually, these points are very important in human perception as well. So you can do experiments where you show someone a picture of a vase and they have a very good idea of what its shape is. I mean, not metrically accurate, but generally pretty good. And then you Photoshop out the bright spot and they still have a shape in mind, but it's changed. And so actually, it turns out that we use these stationary points as well. Another example is where you have cut out-- so you have some blob in the world like this, but now you're showing someone a picture of only, I don't know, say that. That doesn't include the bright point. It turns out that this is much more ambiguous than if you included that bright point. So it's a real thing. It's not just something that affects our particular method of solution, it's important for there to be a unique solution or a small number of solutions as opposed to an infinite number of solutions. OK. So this is going to be some sort of a curved surface and we need to find out what its curvature is in order to construct it. Right? And so it's like, Oh my God, now we not only need to guess at the surface, but we need to know the curvature of the surface. But actually, it turns out that's possible, and so let's see how that might work. So the idea is that we are going to have a small patch, and I'm going to make this as simple as it can be. So we're going to assume, first of all, that we have an SEM type of reflectance map just to make it really simple, and then let's suppose we have a surface like this, z is x squared plus 2y squared. And this will have a stationary point at the origin. And let's see. So you got p is dz dx is 2x. q is dz dy is 4y. And then the reflectance map gives us p squared plus q squared is 4x squared plus 16y squared. And by the image irradiance equation, that's actually the image. OK?
And I'm going to take the gradient of the image-- it'd be 8x and 32y-- and not surprisingly, the gradient is zero at the origin. And that corresponds to it being an extremum, so that just confirms that we have, in fact, set it up so that we have an extremum at the origin. OK, now can I use the gradient to estimate the shape, local shape? Well, no, because the gradient is zero right at the origin. And so let's take the second derivative. So the plan is the gradient will be zero, so that's useless. Brightness itself we've already used to determine that it's a stationary point, but if we take the second partial derivatives of brightness we get some information about the shape. Then we're going to try and recover the x squared plus 2y squared from this. So we might say, well, how can I measure the second derivative? Well, of course, just apply the first derivative twice. And we also already talked about convenient computational molecules for doing that, so there's one for Exx. So the plan will be we find the stationary point, we estimate the local shape by looking at the second-- not the gradient but the gradient of the gradient, so to speak-- and construct a small cap of that shape around the stationary point and then start the solutions from there. But I didn't quite get done with that, so we'll finish that next time. And then as I said, then we'll have a real big change of pace and we'll start talking about some industrial machine vision methods and the patents that describe them.
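A minimal numerical sketch of the plan just described: sample the image E = 4x squared plus 16y squared from the worked example, estimate Exx and Eyy at the stationary point with the standard second-difference computational molecule, and recover the local quadratic cap. This assumes the SEM-style reflectance map of the example, under which Exx = 2 times zxx squared at a stationary point (when zxy = 0); the sign of the curvature stays ambiguous.

```python
def E(x, y):
    # image of z = x**2 + 2*y**2 under the SEM-style map E = p**2 + q**2
    return 4.0 * x * x + 16.0 * y * y

h = 1e-3
# second-derivative computational molecule: (f(-h) - 2 f(0) + f(h)) / h**2
Exx = (E(-h, 0.0) - 2.0 * E(0.0, 0.0) + E(h, 0.0)) / h ** 2
Eyy = (E(0.0, -h) - 2.0 * E(0.0, 0.0) + E(0.0, h)) / h ** 2
# At a stationary point with zxy = 0, Exx = 2*zxx**2, so up to sign:
zxx = (Exx / 2.0) ** 0.5
zyy = (Eyy / 2.0) ** 0.5
```

For this image the molecule gives Exx near 8 and Eyy near 32, so the recovered cap has zxx near 2 and zyy near 4, matching z = x squared plus 2y squared up to the unavoidable sign ambiguity.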
MIT 6.801 Machine Vision, Fall 2020
Lecture 23: Gaussian Image, Solids of Revolution, Direction Histograms, Regular Polyhedra
[SQUEAKING] [RUSTLING] [CLICKING] BERTHOLD HORN: We're talking about representations for three-dimensional objects-- in particular, those that can't conveniently be represented as polyhedra. And one representation is the extended Gaussian image, and for that, we needed to talk about Gaussian image and Gaussian curvature. And the Gaussian image is a correspondence between the surface of an object and points on the unit sphere, simply based on the equality of surface normals. And we can extend that from points to areas. And if we do, then we can talk about curvature, in that, if the surface is highly curved, then the corresponding area on the sphere will be large. And if the surface is not curved very much, then things will be compressed. In the case of a planar surface, everything ends up in one spot and we have an impulse. So the Gaussian curvature is just the ratio of those two areas. And we saw that in the case of a sphere that obviously becomes 1 over R squared because the area on the object is 4 pi R squared and the area of on the unit sphere is 4 pi. OK, what do we do with that. Well, we're going to actually use the inverse of that quantity, and we will plot that as a function of position on the sphere. And that, in a crude way, can be thought of as defining how much of the surface has a normal that points in that direction. Now, of course, for a convex, smoothly curved objects, there will typically be only one point that has exactly that normal. But if we take a small area around that, we can extend that idea and make that work. One thing we can do with this quantity is take its integral either over the object or over the sphere. So for example, we can integrate the Gaussian curvature over some patch on the object and then change variables. And we get the area of the corresponding patch on the sphere. And this is called the integral curvature. No surprise there. 
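Backing up to the ratio-of-areas definition: for the sphere it can be checked numerically with spherical caps, since a cap of half-angle alpha on a sphere of radius R maps to a cap of the same half-angle on the unit sphere. A quick sketch, using the standard cap-area formula 2 pi R squared times (1 minus cos alpha); the radius and cap size are arbitrary choices of mine.

```python
import math

R = 3.0          # sphere radius (arbitrary choice)
alpha = 0.25     # half-angle of a small cap around some surface normal

# corresponding patches: the cap on the object and its Gaussian image
area_on_object = 2.0 * math.pi * R ** 2 * (1.0 - math.cos(alpha))
area_on_sphere = 2.0 * math.pi * (1.0 - math.cos(alpha))

K = area_on_sphere / area_on_object   # Gaussian curvature as a ratio of areas
# K equals 1 / R**2, independent of the cap size alpha
```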
And one nice feature of the integral curvature is that it applies even when curvature itself can't be computed at a point because of discontinuities in surface orientation, like edges and corners. But before we talk about that, let's go the other direction. Instead of integrating on the object, let's integrate on the sphere. 1 over K. And change variables. And what do we get? It's the area on the object-- area on object-- that corresponds to that. And so that's a quantity, actually, that we're even more interested in. So it's saying that, if I take this quantity, which we'll call G, for Gauss, and we integrate it over a patch on the sphere, what we get is the area of the corresponding part of the object. So that's the generalization of that idea of-- it's the area with that surface normal. Well, here, we allow for the fact that the surface normal can have some variation and all of the points in this patch have surfaced normals that end up in that patch. And so I'll just say something quickly about integral curvature. So suppose I have the corner of a cube. What's the curvature? Well, it's 0 there, 0 there, 0 there, and infinite on the edge, maybe-- infinite at the corner. So how can I talk about curvature? Well, one thing I can do is take the integral of curvature of some area like that, and that captures the total change in orientation in that patch. And that's what this is computing. And so what does this look like on the sphere? Well, we have these three distinct surface normals, and they correspond to three points on the sphere. They are 90 degrees apart in latitude or longitude. And what's the area of that? Well, to understand that, what we want to do is take a file to this cube and smooth off the edges so that, if we take a cross-section through a corner, it doesn't look like this but it looks like that. And then, if you like, you can take the limit as you make the radius of this corner smaller and smaller. 
So in the ideal object with sharp corners, we only have these two surface orientations, and that's it. But if we think about it being a smooth transition of some sort, then we get all sorts of positive linear combinations of those two orientations. So that means that, if we think of this edge, for example, there's going to be some great circle on the sphere that corresponds to surface normals that-- very smoothly going from that one to that one. And similarly, I can think of this edge as corresponding to that great circular arc and that edge here corresponding to this one. And then when I look at the corner, it's harder to draw, but I'll have, also, transitional directions, and they'll all be positive combinations of these three. And so, at the corner, I'm actually dealing with surface orientations that are within this patch. So that means the integral curvature of that corner is the area of this patch. And that is-- it's one octant of a sphere. So the whole sphere has an area of 4 pi units here, and we have one octant. So the integral curvature of the corner of a cube is pi over 2. And now you can imagine this could apply to other things, like, if it's a parallelepiped instead of a cube, you'll get a related but different result. And we can apply it to other objects, like cylinders and cones and so on. So it's a useful concept. We won't be doing much with it. We're mostly going with this integral. That's the one we're interested in. So we're going to end up with some distribution on the sphere-- we will call G. And it'll depend on the orientation, which we can describe in various ways, like surface normal, unit normal. One question is, can we have any sort of distribution on the sphere, or are there some constraints? So we saw earlier, when we were talking about polyhedra, that there might be a constraint.
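The pi-over-2 octant claim above can be checked with the Van Oosterom and Strackee solid-angle formula for a spherical triangle; feeding it the three cube-face normals that meet at a corner should give one octant of the 4 pi sphere. A sketch; the helper name is mine.

```python
import math

def solid_angle(a, b, c):
    """Solid angle subtended by the spherical triangle with unit-vector
    vertices a, b, c (Van Oosterom & Strackee formula)."""
    def dot(u, v):
        return sum(x * y for x, y in zip(u, v))
    def cross(u, v):
        return (u[1]*v[2] - u[2]*v[1],
                u[2]*v[0] - u[0]*v[2],
                u[0]*v[1] - u[1]*v[0])
    numer = abs(dot(a, cross(b, c)))
    denom = 1.0 + dot(a, b) + dot(b, c) + dot(c, a)
    return 2.0 * math.atan2(numer, denom)

# the three face normals meeting at the corner of a cube
omega = solid_angle((1, 0, 0), (0, 1, 0), (0, 0, 1))
# omega == pi/2: one octant of the 4*pi unit sphere
```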
So with polyhedra, we saw that, if we create these vectors which have a length proportional to the area in the direction of the surface normal, that those all have to add up for it to be a closed object. Well, something similar happens here. So the idea is that we have some object here-- let's take the discrete case first. We have facets. So here's a facet with some surface normal, and we imagine the whole thing is a mesh of facets like this. And then suppose we look at it from far away over here. Then the facet will appear foreshortened if the surface normal is not pointing directly at the observer. So let's suppose we call this direction v. So this is the apparent area of-- suppose this is patch i. So that's the actual area of the patch. And that's the surface normal. And the way it appears to us is based on the cosine of the angle. And we can get that just using the dot product. OK, well, now I'll see some part of this convex object, and I won't see other parts. So what am I going to see? Well, only the ones where that dot product is positive. So that's the total cross-section that I see when I'm looking at the object from that direction. So I step through all of the triangles in this mesh, or whatever shape they are, and I pick the ones where the surface normal is within 90 degrees of the viewing direction, and I compute that sum. So-- what? Well, now, if I look at it from the other side, I'll get a similar sum, except now v is replaced by minus v. And the other thing that's changed is that now I'm only seeing things that have a positive dot product with minus v. So we're looking at this object from one side. We see some area. It's like we were projecting it with orthographic projection. And then we look at it from the other side, and, of course, the outline, the silhouette, will be mirror-image reversed. But it should be the same area, right? And so these two are the same, right? I'm looking at it from this side. Something appears foreshortened.
I add all of those up. And presumably, there's no overlap in these sums. There's the borderline case, where this is equal to 0, but then that will contribute 0 to the sum. So we don't care. Well, it turns out that now we've listed all of the facets for a convex object. We can see all of the facets from one side or the other. So each facet will appear either in this sum or in that sum. And then I bring this over to the other side, which flips the sign, and so that means that, actually, now the sum of all facets-- so I move this to the other side, and I end up with 0 over here. So the sum of all the facets is 0 when I take this dot product. OK, now this is true for all possible viewing directions. Doesn't matter which viewing direction I look at it from. If I look at it from the other side, I still have the same cross-sectional area. And so that means that this part better be 0, because, otherwise, if this wasn't 0, I could just pick a viewing direction in the direction of that vector, and I'd have a non-zero result. So that means that the sum of A i s i is 0. That's a vector equation, not just scalar. And so that means, really, that the centroid is at the origin. So if I think of these areas as masses on the unit sphere, each of these facets gets mapped onto the unit sphere at a point that depends on this direction. And I put a mass down there that's proportional to this area. So I have this distribution on the sphere, and this is saying that the center of mass is at the origin, at the center of the sphere. So these EGIs are distributions on the sphere, but there's one restriction. They can't be arbitrary. They have to have this property that the centroid is at the origin, and that corresponds to the surface being closed. For example, I had that geometric object with conical and cylindrical parts. And if I plot that on the sphere, I have a small circle and a great circle.
And if I stop there, then the centroid obviously is not at the center of the sphere because I have this great circle-- yes, the centroid of that is at the center of the sphere. This one-- no, it's over here somewhere along this axis, and so the combination is off center. So what's wrong? Well, what's wrong is that I forgot the plate at the back that closes it off, which would contribute a big mass on the other side of the sphere, just enough to counteract that small circle of mass. And so the overall centroid is at the origin. So that's the discrete case. I won't bother with going over the same argument in the continuous case. But if we take the integral over the sphere of the density G times the direction to the point on the sphere-- that also is 0. So if I think of the EGI as a mass distribution on the sphere, its centroid is also going to be at the origin. OK, now, we're going to look at discrete implementations. We have some surface data, perhaps from some machine vision method like photometric stereo. And we're going to map it onto the sphere in a discrete fashion. But it's also useful to think of the continuous case. And one reason is that, if you have a model of an object, if it's a geometrically-defined object like the thing up there, you can actually exactly compute what its EGI is rather than having to approximate it. And so there's some advantage to doing that. OK, examples. Now, we already know that, for a sphere, the EGI is very simple. It's just R squared because G is 1 over K, and K is 1 over R squared. And we got that from the ratio of the areas of the-- and this is symmetrical. So there's just one value I'm writing down. That's because G is the same everywhere around the unit sphere. So it gets more interesting when we have other objects, because, then, we have to have some coordinate system to refer to points on the sphere. So what's the next most complex object to take as an example?
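The discrete closure constraint above, that the area-weighted facet normals of a closed object sum to the zero vector, is easy to verify on a small mesh. A sketch using a regular tetrahedron, with each face normal flipped outward by comparing against the centroid; the mesh and names are my own choices.

```python
def sub(u, v):
    return tuple(a - b for a, b in zip(u, v))

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

# regular tetrahedron centered on the origin
verts = [(1, 1, 1), (1, -1, -1), (-1, 1, -1), (-1, -1, 1)]
faces = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]

total = [0.0, 0.0, 0.0]
for i, j, k in faces:
    a, b, c = verts[i], verts[j], verts[k]
    n = cross(sub(b, a), sub(c, a))          # twice the face area vector
    mid = tuple((a[t] + b[t] + c[t]) / 3.0 for t in range(3))
    if sum(n[t] * mid[t] for t in range(3)) < 0:   # flip inward normals outward
        n = tuple(-x for x in n)
    for t in range(3):
        total[t] += 0.5 * n[t]               # accumulate A_i times s_i
# total == (0, 0, 0): the area-weighted normals of a closed surface cancel
```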
Well, let's take the ellipsoid since we know what those are since we talked about them in our discussion of critical surfaces. And we're not going to do the whole algebra, partly because we did the ellipse, and this just gets worse. So here's our ellipsoid. And of course, we have the possibility of writing it in this form, as we saw, where the A, B, and C are the semi axes. And so we can get various shapes by changing the ratios of those axes. And of course, if A equals B equals C, then we have a sphere. And so this is an implicit equation for the surface. It's not much use if you're trying to say, generate a visualization of that shape or if you're trying to say, integrate over its surface or something. So there are alternate ways of describing the same surface. And here's one of them. And so that's a parametric description. And so I could generate points on the surface very easily just by sampling theta and phi and computing points on the surface. And these would correspond to latitude and longitude if it was a sphere. Phi and theta are a way of addressing points on that object. OK, so I can always write a vector to any point on the surface by just listing those things. And what do I want? Well, a couple of things I need. One of them is surface normal, and the other one is curvature. So let's start with the surface normal. So how can I get a surface normal? Well, the normal is perpendicular to any tangent. So if I had two tangents, I could just take their cross-product. How do I get a tangent? Well, I just differentiate this with respect to the parameters. So I have two ways of doing that, and that gives me two tangents. And let's see. So there's one. And there's another. And now I can take the cross-product to get a surface normal. And after some algebra, we get that expression. This is not a unit normal, and we don't really care about the magnitude of this thing. We're going to normalize it anyway. So we can get rid of-- we can ignore that.
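The tangent-cross-product recipe can be sketched numerically. This assumes the usual latitude-longitude style parametrization r(theta, phi) = (a cos theta cos phi, b cos theta sin phi, c sin theta); the semi-axes and sample point are arbitrary. The finite-difference tangents' cross product should be parallel, up to scale and sign, to the closed form described next in the lecture, in which A, B, C are replaced by BC, AC, AB.

```python
import math

a, b, c = 3.0, 2.0, 1.0       # semi-axes (arbitrary)
theta, phi = 0.6, 1.1         # the two surface parameters

def r(th, ph):
    # parametric ellipsoid, latitude/longitude style
    return (a * math.cos(th) * math.cos(ph),
            b * math.cos(th) * math.sin(ph),
            c * math.sin(th))

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

h = 1e-6
t_theta = tuple((p - q) / (2 * h) for p, q in zip(r(theta + h, phi), r(theta - h, phi)))
t_phi   = tuple((p - q) / (2 * h) for p, q in zip(r(theta, phi + h), r(theta, phi - h)))
n = cross(t_theta, t_phi)     # normal = cross product of the two tangents

# closed form with A, B, C replaced by BC, AC, AB
m = (b * c * math.cos(theta) * math.cos(phi),
     a * c * math.cos(theta) * math.sin(phi),
     a * b * math.sin(theta))
# n and m are parallel (up to scale and sign), so cross(n, m) is nearly zero
```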
Then if we do that, then it actually looks similar to R itself. Just, we've replaced A with BC and B with AC and C with AB. So it's an interesting substitution. OK, so that gives us the surface normal. So that allows us-- for any point on the surface defined by theta and phi, we can compute the point on the unit sphere that corresponds to that. And then we need curvature. And that's harder because we need to differentiate one more time. Now, we need to look at-- as you move on the surface, how fast does the surface normal move on the sphere? So anyway, it's in the paper, and I will not go through it. It's somewhat painful. I'll just give the result. So I need to define a couple of other things. So on the unit sphere, the way of parametrizing that using latitude and longitude-- let's see. So which way do I want to do it? So it's the same method we used over here to parameterize positions. OK, so this is a unit normal on the sphere. And obviously, I'm going to have to equate the normalized version of this thing with the terms of that. And then, in the process, it's convenient to have yet another vector, which is-- And the significance of that is not obvious. It's a little bit like when we're talking about coordinates on the sphere, which could be either geocentric or based on local surface normal. But the answer now is-- and therefore, the thing we're interested in is-- So aside from computing the curvature, which involves seeing how fast the normal-- a major pain of this calculation is that we don't want the answer in theta phi coordinates. Those refer to points on the object. We want them in terms of coordinates on the unit sphere. And that's what we've done, because s is defined in terms of coordinates on the unit sphere. And so, if you give me the latitude and longitude, I just plug that in, and I get G. OK, what does that look like? So that's some distribution on the sphere, and we'll use that for recognition and finding orientation.
So what does it look like? Well, the first thing to notice is that there are some extrema-- no surprise. We'd expect that, where these semi axes hit the surface, those might be interesting places. And sure enough, we end up with maxima and minima. So those are the extreme values, and they occur at the places where the semi axes intersect the object. And then, if I look at that on the sphere, that means I'm going to have three of these points. And one of them is a maximum, and one of them is a minimum, and they're 90 degrees apart. The surface normals here are pointing in three orthogonal directions. And let's see. What's the third one? Well, it can't be a maximum or minimum. It's a saddle point. Now, when I go to the other side, I expect things to be symmetrical. So the curvature here is the same as the curvature here and so on. So the mirror images of these three points on the other side of the sphere-- so somewhere in the back here is another minimum. And down here, in the back, is another maximum, and here's another saddle point. And there are some theorems that may seem intuitively obvious that tell you that you can't have a maximum and minimum on the sphere without having a saddle point. So it's not too surprising that we have that. So that's the extended Gaussian image of an ellipsoid, and its maximum, minimum, and saddle points are lined up with its axes. We've chosen to line the axes up with the x, y, and z-axis, but that'll be true in general. If I rotate this object in space, what happens to the spherical image? It just rotates exactly the same way. And that's what we meant when we said we're not really looking for rotational invariance. We don't expect things to be constant when you rotate, but we want it to transform in some easily understandable way. And I don't know-- people have some terminology for that. Rather than invariant, they say equivariant. There's no agreed terminology, as far as I'm concerned.
But it's an important idea that we want the change to be easily understandable, unlike-- say if we'd taken the perspective projection of the object. Then when the object rotates there is a very complex transformation of the image, whereas here it's very straightforward. And so we can now use this image with experimental data to both test whether we might be looking at an ellipsoid and to determine what its attitude in space is by taking the library version and trying to bring these two into alignment, and we'll talk about how to do that in a little while. So the sphere is very simple. The ellipsoid is complicated because it has a full three-dimensional shape. Somewhere in between are things that are a little bit easier to handle. And for some purposes, those are of interest. There are certain objects that are in between in complexity. And in particular, if we look at solids of revolution, we find that it's easier-- a lot easier-- to compute the EGI. And solids of revolution, of course, include cylinders, and cones, and spheres and hyperboloids of one sheet, hyperboloids of two sheets, assuming the parameters are chosen appropriately. OK, so how to compute the EGI of a solid of revolution. So in the case of a solid of revolution, there's a generator that we spin around some axis. And so let's suppose that this is a generator, and we're spinning it around this axis to produce some object. And then we're going to map this object onto this sphere. So let's define a couple of things. So suppose we're here on the object. Then we're very interested in the radius, r. There are some other coordinates we might want to use. We might think of that as height. Then we might use the arc length along the generator. So we'll derive formulas for several cases because some are convenient in certain situations, more convenient than others. OK, so surface normal. Here's our surface normal. And then angle with the equator.
So the corresponding point on the sphere would be somewhere where, if you measure this angle at the equator, it would be the same angle. So that's the latitude on the sphere. OK, so that's how points correspond, but we need curvature. So how do we do that? Well, we go for that definition up there, and we look at some element that we can easily figure out the mapping of. So rather than just consider a point, let's consider this whole band. And let's say that the width of the band is delta s, the change in the arc length along the generator. And then the surface normal presumably will change a little bit as we go to the other edge of the band, so that band maps into this band. So the nice feature of the solid of revolution is that it's symmetric both in the object and in the transform, in the EGI. And so it reduces it from 3D to 2D. Yeah. So this really is the longitude on the Gaussian sphere. So in this direction, we have that angle, and then, in this direction, we have eta. Well, it's the angle going around this way. And the great thing about the solid of revolution is that everything is constant in that direction. So we can cheat and forget about it. So we need the area of this band. So the area of the object band is-- so it's 2 pi times r times the width of the band delta s. It may not be obvious. But we could take this band, cut it, and lay it out in the plane and measure it, and this is what you'd get. And correspondingly, then over here, we've got 2 pi. And what's r here? Well, it depends on the latitude: the higher I go, the smaller r is. And in fact, it's the cosine of eta. So if I project this down here-- right angle-- and this is 1, then this is the cosine of eta. So that's the radius times 2 pi. And then I still need to multiply by the width of the band, delta eta. So that gives me K-- the 2 pi cancels, and I'm left with cosine eta delta eta over r delta s. Now, actually, I'm mostly interested in 1 over K. So let me flip this over. Yeah. So what are these things?
So this is the rate of change of direction of the surface normal as I move along the arc. So that's a curvature. That's a rate of turning as I move along the arc. So that's actually the 2D curvature. It's the curvature of the generator. So that's interesting. That means that I've been able to reduce things to-- well, let me stick with this form-- from 3D to 2D. So I've got K is cosine eta over r times KG. And so if I can get the curvature of the generator, then I'm done. The important thing to see is that we used the same idea all along, which is that the Gaussian curvature is the ratio of those two areas. We just need to find corresponding patches and measure the areas. Now, that's one way of expressing it if we know KG. But as I mentioned, this curve may be given in various forms. We may give it in an implicit form, or we may give it as r as a function of s or r as a function of height z. So it's convenient to have different versions of that formula. This one is ds, so the object and the sphere. So if we blow up that place where the narrow band hits the solid of revolution, here we got delta s. That's the step in the arc length. And here's a delta z, the change in height. And here's a change in radius, which, for positive eta, is negative. If eta is a positive quantity, then the curve is coming in towards the origin. So the change in radius is negative. And then where is eta? Well, it's here, because the surface normal is there, and this is eta. And so that better be eta. And then I can read off trigonometric terms from that diagram. For example, I can get sine ADA is minus the minus dr over ds, which is minus rs. The subscript now denotes differentiation. So if I have r as a function of s, I can use this method and just differentiate with respect to this. But what I need in the formula is cosine of eta. So one thing I can do is differentiate with respect to s. And of course, the sine becomes a cosine. 
And there we have the second derivative, which shouldn't be a surprise because we expect curvature to have to do with a second derivative. And so here we have a very convenient formula for the curvature of a solid of revolution if we're given r as a function of s: K is minus r_ss over r. And so right away, we can do an example. In the case of the sphere, we can write that this small radius, little r here, is the big R times cosine of that angle. And that angle, of course, is just the arc length divided by the radius, our usual formula for defining angles in radians. OK, so that's r. The other thing I need is r differentiated twice with respect to s. And of course, if you differentiate cosine twice, you get minus cosine, and the chain rule brings out 1 over R squared, so r_ss is minus cosine of s over R, divided by R. And so then, when we put it together, hopefully we get K equal to 1 over R squared. So we already know that. But it's a good way to check the result. So that's if we're given the generator as r as a function of arc length. Well, it's one of the 12 most common ways of specifying a curve, but it's not in the top three. So let's go a little bit further. So the other thing we can look at is z. It's more likely we're given r as a function of z, because, if we turn this sideways, that would be the normal way you'd specify y as a function of x when you're defining a curve. So let's see. That one also comes out pretty simply. So if we look at that diagram over here again, we can relate z and s to a trigonometric term in eta: we have the tangent of eta. So we can get tan of eta from r given as a function of z, just by differentiating with respect to z. And again, I need secant or cosine or something. So I can differentiate this with respect to z. And I'm going to get secant squared eta, d eta dz-- oh, ds, sorry. And then that's going to be d ds of minus r_z. And that's minus the chain formula times dz ds.
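Working through the derivation above, the arc-length version of the formula comes out to K = -r''(s)/r(s). That is easy to sanity-check numerically; this is a sketch of my own (the function name and the finite-difference step are mine), running the sphere example from the lecture:

```python
import math

def gaussian_curvature_rs(r, s, h=1e-4):
    """Gaussian curvature K = -r''(s) / r(s) for a solid of revolution
    whose generator is given as radius r(s), with s the arc length.
    The second derivative is taken by central differences with step h."""
    r_ss = (r(s + h) - 2.0 * r(s) + r(s - h)) / (h * h)
    return -r_ss / r(s)

# Sphere of radius R: generator r(s) = R cos(s / R); K should be 1 / R^2.
R = 2.0
sphere = lambda s: R * math.cos(s / R)
for s in [0.1, 0.5, 1.0]:
    assert abs(gaussian_curvature_rs(sphere, s) - 1.0 / R**2) < 1e-5
```

The same check works for any generator given as r(s), which is handy when building an EGI library entry for a new solid of revolution.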
And then, from the same diagram, I can read off-- so I'm reading off all of the possible trigonometric things. So cosine of eta is just dz over ds. OK. Putting it all together, I get minus r_zz times cosine to the fourth of eta, over r, for K. I left out a couple of steps there, just to avoid the monotony. One thing you'll need is that secant squared eta is 1 plus tan squared eta. And so in our case, that's 1 plus r_z squared. And when you put all that together, you get K equal to minus r_zz over r times the square of 1 plus r_z squared. This formula is slightly messier than the other one. So that's a second way of getting the Gaussian curvature of a solid of revolution, if the generating curve is given as r as a function of z instead of r as a function of s. And again, we can apply this to an example just to have a sanity check. So in the case of the sphere, we have r is the square root of R squared minus z squared, assuming that z starts at 0 at the equator. And so r_z is minus z over the square root of R squared minus z squared. But we need the second derivative. So we differentiate again, and we end up with minus R squared over r cubed. And let's see. The other thing we need is 1 plus r_z squared, which is R squared over r squared. And if you put it all together, we get the correct result, 1 over R squared. OK, so this gives several methods for generating extended Gaussian images of solids of revolution. One reason we did this is because we're going to look at a particular solid of revolution and study its EGI and talk about how you would use it in alignment and recognition. And that's a donut. So here's a cross-section. And basically, we take a circle as a generator, and we rotate it around this axis. So we generate a torus. And in terms of specifying how big this thing is, there are two things we need. Let's call this rho. We need the small radius, and then we need the large radius, R. And those two define the shape. So is this going to make sense? Can we compute an extended Gaussian image for that?
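The r(z) version, K = -r''(z) / ( r (1 + r'(z)^2)^2 ), can be checked the same way; again the function name and step size are mine, and derivatives are taken numerically rather than symbolically:

```python
import math

def gaussian_curvature_rz(r, z, h=1e-4):
    """Gaussian curvature K = -r''(z) / ( r(z) * (1 + r'(z)^2)^2 )
    for a solid of revolution with generator given as radius r(z),
    z being the height. Derivatives by central differences."""
    r_z  = (r(z + h) - r(z - h)) / (2.0 * h)
    r_zz = (r(z + h) - 2.0 * r(z) + r(z - h)) / (h * h)
    return -r_zz / (r(z) * (1.0 + r_z * r_z) ** 2)

# Sphere of radius R: r(z) = sqrt(R^2 - z^2); K should again be 1 / R^2.
R = 2.0
sphere = lambda z: math.sqrt(R * R - z * z)
for z in [0.0, 0.5, 1.0]:
    assert abs(gaussian_curvature_rz(sphere, z) - 1.0 / R**2) < 1e-4
```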
So it's a solid of revolution, so we should be able to just use one of those formulas we derived over there. What might be a potential problem? AUDIENCE: It's not convex. BERTHOLD HORN: Right, it's not convex. Right. So, so far, we've focused on convex objects. In the case of a convex object, the Gaussian image is invertible-- that is, you can go from the object to the sphere, and there's a unique place to go back to on the object because there's only one point that has that surface orientation. And we know that there are some powerful properties in that case. For one, it's unique. That is, if you have a particular extended Gaussian image, there's only one convex object that corresponds to that. And I didn't mention, but Alexandroff proved that one. And again, it's a nonconstructive proof, but it means that there's no confusion. There's only one. So we're going to lose some of that. We'll see that we can take the extended Gaussian image of this. But some of the nice properties that we talked about won't apply here. So then-- well, we'll wait until we get there. So right away, in terms of the issue of inverting the mapping, we see that, in this case, instead of there being a unique point on the object that has a certain surface orientation, there are two places. So that means that the mapping is not invertible. And those two places differ in an important way, which is that the object is convex here. If we move in the blackboard direction, you can see it's curving that way. And if we move out of the blackboard direction, it's curving in a similar way. And so that part is convex, whereas, over here, again, in the blackboard direction we have this convex shape, the circle. But then when we go out of the blackboard, it's curving the other way. So this is actually a saddle point, and so the curvature here will be negative. So we'll have to deal with that. And so that's, again, just a reflection of the fact that it's not a convex object.
If it was a convex object, the curvature would be everywhere non-negative, and in most cases, just plain positive. OK, so, keeping all that in mind, let's just blindly apply our formulas for computing its extended Gaussian image. So what we're going to need is the radius, the little radius, r. So that's this quantity. And so little r is big R plus rho times cosine eta. To apply the formula we need r as either a function of s or z. So let's do it in terms of s, though, because this is s, and this angle is s divided by rho. So r is big R plus rho times cosine of s over rho. And then I take the second derivative, and, of course, the big R drops out. And I'm going to get a negative sign because of the cosine turning into a minus cosine. And the chain rule makes me divide by rho squared. OK, so that's my second derivative: minus cosine of s over rho, divided by rho. And so, by the way, a number of things become apparent-- so that's the curvature contribution of the generator-- that, as I go from this orientation up to the top, the curvature is going down all the time, because cosine from 0 to pi over 2 goes down until it hits 0. So something interesting happens up there. And then, if I go further, it's going to go negative. And that's the area we're talking about here, where the surface curvature is actually negative, while it's positive here. So that divides the torus into two parts, in a way-- one that's inside this cylinder, where everything is negative curvature, and the other part, which is outside. OK, so combining those using the formula we had over there for the case where r is given as a function of the arc length, we get K equal to cosine eta over rho times the quantity R plus rho cosine eta, and it's for this outer part. So now, what happens on the other part? Well, we can do the same calculation there. And now r is R minus rho cosine eta. And so r is R minus rho times cosine of s over rho. And so the second derivative is going to be plus cosine of s over rho, divided by rho-- minus, minus. So the contribution there has a different sign and also a different magnitude. So what do we do? Well, we have two obvious choices.
One of them is to just add them up. So if we have a non-convex object, one thing we might do is just compute the Gaussian curvature at all of the points that have the same surface orientation and then add those up, and it seems like a reasonable thing to do. And actually, of course, we're interested in the inverse. We're interested in G. So if we think of those as K plus and K minus, it turns out that that's nice because stuff cancels out, and we just get 2 rho squared. So that's not very satisfactory, because it says that, if we define the extended Gaussian image this way, then we get a constant for the donut, which is what you get for a sphere. So this is like a sphere of radius square root of 2 times rho. And it has no orientation dependence. It's constant all the way around, so it's not going to help us in determining the attitude of an object. So no, we don't want to do that. Another way to think about it is that, when we do this in practice, we simply project out the normals wherever they come from without taking into account local curvature. So we could, for example, carve this up into lots of little facets. And for each facet, we compute the area and the surface normal, and then we put a mass equal to the area on the unit sphere at the corresponding place. And there's no account taken of whether the surface curls in or out. And so we don't want to do this. We actually want to do this so that the second term, which is negative, makes a positive contribution. So if we do that, we get G equal to 2 rho R times the secant of eta. And so that's going to be our EGI for the torus. And let's look at that. So that's pretty interesting. So it's not constant. That's good. And actually, it has a singularity at the pole. So when we were dealing with that object in the top right-hand corner, on the EGI, at the back, there's the mass concentration, which, in the limit, is an impulse. So that's certainly one form of singularity. This also has a singularity, but it's not an impulse.
It's just that, as you approach the pole, this keeps on going up, and it's infinite at the pole. And actually, if you want to know how it varies, it's pretty easy, because-- imagine that we embed our unit sphere in a unit cylinder. Let's see. Cosine is adjacent over hypotenuse, and secant is its reciprocal. And so the height of this construction on the cylinder gives us the secant. We start at the equator with a non-zero value, but then it monotonically increases until, when eta approaches 90 degrees, we're off at infinity. So this is a way of visualizing how that varies. It's symmetric in this direction, which is appropriate for a solid of revolution. And you can now imagine that we could use this for alignment, because, if we have a model of this object and we have data from machine vision, then we have these two spheres with the distribution, which is non-zero everywhere, varying smoothly, but it has this rapid growth towards the poles. So we can bring them together to line up those singularities, or, in practice, just very, very large values. Notice that that doesn't give us the attitude completely, because we can still spin this around this axis without anything changing. And that's also appropriate because we're dealing with a solid of revolution. So of course, it's ambiguous what angle it's turned around its axis. Yeah. We briefly mentioned this last time, and somebody sent me email about how to trace that down. So first of all, the proofs are non-constructive-- both Minkowski's proof for the discrete, polyhedral case and Alexandroff's for the smoothly curved surface. The proof does not give you a way to reconstruct the object; it just shows that if this one and that one have the same EGI, they're the same. That's not a construction. And so people have tried to find some way of iteratively reconstructing. And for the discrete case, there are two very slow solutions. So imagine that we have a polyhedron and we know the orientation of each of its faces and we know the area. What do we do?
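The two torus contributions can be put together numerically. This is my reading of the board work above (R is the large radius, rho the small one, eta the surface-normal latitude), checking both the unsatisfactory signed sum and the useful sum of magnitudes:

```python
import math

def torus_inverse_curvatures(R, rho, eta):
    """1/K at the two torus points sharing surface-normal latitude eta:
    the convex outer part (K > 0) and the saddle-shaped inner part (K < 0)."""
    outer = rho * (R + rho * math.cos(eta)) / math.cos(eta)
    inner = -rho * (R - rho * math.cos(eta)) / math.cos(eta)
    return outer, inner

R, rho = 3.0, 1.0
for eta in [0.2, 0.7, 1.2]:
    a, b = torus_inverse_curvatures(R, rho, eta)
    # Signed sum: the constant 2 rho^2 -- indistinguishable from a sphere.
    assert abs((a + b) - 2.0 * rho * rho) < 1e-9
    # Sum of magnitudes: G(eta) = 2 rho R sec(eta) -- the useful EGI.
    assert abs((abs(a) + abs(b)) - 2.0 * rho * R / math.cos(eta)) < 1e-9

# Total EGI mass should equal the torus surface area, 4 pi^2 rho R
# (integrating G over the sphere with dA = cos(eta) d(eta) d(phi)).
n = 100000
total = 0.0
for i in range(n):
    eta = -math.pi / 2.0 + (i + 0.5) * math.pi / n
    total += (2.0 * rho * R / math.cos(eta)) * math.cos(eta) * (math.pi / n) * 2.0 * math.pi
assert abs(total - 4.0 * math.pi**2 * rho * R) < 1e-6
```

The last check is a nice consistency property: the mass spread over the sphere is exactly the surface area of the object, just as with the convex case.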
Well, one thing we can do is we can construct a plane. So if we know a normal, we can construct the plane. And a piece of that plane is going to be part of that object. And then we can move that plane in and out. And as it intersects other planes, its area will change. And so we can set up some big search or optimization problem where the knobs we can turn are the distances of all of the planes from the origin. And the objective is to match as close as possible the areas that somebody told us that each of the facets is. And it's a nasty problem because computing those intersections is a lot of work, and some of those faces may not exist. You may have pushed it so far out that it no longer intersects with the rest of the object. And in many cases, if you think about it, you have some complicated object with many faces, and then you push a face outward. It's likely its area is going to decrease because it's moving out. So here's the thing we've reconstructed. Here's the surface that we're playing with. If we move it out, its area will increase. Well, unfortunately, for some objects, that's not the case. It actually goes the other way around. So if you have some iterative negative feedback scheme, it'll suddenly be positive feedback and blow up. So it's been done, and Katsuya Akiyoshi started that game, and I guess Jim Little is another reference. So short answer, no. But you can approximate it if you're willing to do a lot of computing. So now, what's important for us, though, is that we don't need that because what we're doing is we're working entirely in this space. We don't go back to the object space. So we're doing the recognition by comparing the distributions on the sphere, and we're doing the alignment by trying to rotate one sphere relative to the other until we get a good match. It's intellectually very intriguing. Why can't you just compute the object? But it's actually something, fortunately, we don't need to do-- fortunately, since it's not easily done. 
It's an interesting distribution on the sphere. And there's another way of understanding it that's quite useful, which is the same argument we made about bands on the surface and bands on the sphere, except, this time, in a different direction. So here's a donut. And now imagine that we divide it up into bands. We're cutting it based on the axis: we have a plane that goes through the axis, and we rotate it slightly to generate this band, and then we look at the sphere, and we look at the corresponding band. Now this goes through a full rotation, so we'll have a full rotation on the sphere. But that same plane I was talking about that we used to slice this-- when we slice the sphere, we get a crescent shape, like a slice of lemon, not a band like this. Now, this band on the object is actually not as wide on this side as it is on that side. But if the radius of the donut is large enough, the difference will be small, whereas, on the sphere, the width is obviously a function of latitude, and it goes as the cosine of eta. So the area on the sphere goes as cosine of eta. And so the curvature goes as cosine of eta, and so G goes as secant of eta. You see that? So we're really, as usual, taking the ratio of the area here to the area there. And what's happened is that this has gotten squashed near the poles, and so we have that cosine of eta dependence. Now, you might say, but this band is not of constant width. It's slightly narrower on the other side. Well, combine it with the other band that has the same orientations. So when we cut it with a plane, in this case, the object actually gets cut in two places. And so we need to add contributions from both of them, and this one happens to be asymmetrical in exactly the opposite way as that one is. So between the two of them, the total is perfectly even all the way around, whereas on the sphere it goes perfectly as cosine of eta. So what's next? What's the area of a torus? Anyone have that off the top of their head?
OK, it's 4 pi squared times rho times R. And why is that? Well, one way we can think about it is that we take a circle of circumference 2 pi rho. So that's the circumference of the generating circle. Then we spin it around the axis along a circle of length 2 pi R. And the product gives us the area. So we have that generator that we're sweeping along, and that's a hand-waving argument, but it can be made rigorous. In any case, the area of the torus is that. Now, if we look at the formula here for the Gaussian image, you see that R and rho don't appear separately. Only the product appears. And so what does that mean? Well, it means that two donuts of different shapes but the same area have the same EGI. So that's the price we pay for allowing non-convex objects. We lost uniqueness. And so before, there was only one convex object that corresponded to a valid EGI. Now we have a bunch of them. And so, for example, a bicycle tire with a big R and a small rho might have the same EGI as a scooter tire with a large rho and a small big R, if the product happens to be the same. So that's a shortcoming of this representation, and it may or may not matter in an application. If you're dealing with a car repair shop and you have trucks and scooters coming in, then this might be an issue. You might need to use some other method to distinguish them. If you're in a world where these donuts are mixed in with other objects, like spheres and cubes and bricks and tabletops and so on, and this is the only torus you're going to run into, then it doesn't make any difference. But it shows that, when we extend this to non-convex objects, things aren't quite as nice. And there are other issues with this. For example, if we image this donut-- if we image a convex object from opposite sides, we get the whole surface. There's nothing hidden.
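The loss of uniqueness is easy to demonstrate with the G = 2 rho R sec(eta) expression (my formulation of the result above): two tori with the same product rho times R produce bit-for-bit identical EGI values.

```python
import math

def torus_egi(R, rho, eta):
    """EGI density of a torus at surface-normal latitude eta (combined
    magnitudes of the two contributions): G = 2 rho R sec(eta)."""
    return 2.0 * rho * R / math.cos(eta)

# A thin large torus and a fat small torus with the same product rho * R
# (here 3.0 in both cases) have exactly the same extended Gaussian image:
for eta in [0.0, 0.4, 0.8, 1.2]:
    assert torus_egi(3.0, 1.0, eta) == torus_egi(1.5, 2.0, eta)
```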
Here, there are little pieces that are missing because they're hidden by-- So normally, you will see all surface elements where the surface normal is not more than 90 degrees away from your viewing direction. If it's more than 90 degrees from your viewing direction, it's in the back. It's self-shadowed. You won't see it anyway. But here, the small parts of the surface hidden back in here, where the surface normal is pointing towards you, so they should be counted in constructing the EGI-- and they are in the mathematical version. But if I take this from, say, photometric stereo data, there will be some small error introduced because I'm missing part of the surface. And again, that's because the object is not convex. OK, well, let's talk about how we would do this using numerical data rather than nice, mathematically-defined shapes. So this is a little bit like in our discussion of patents, where the ultimate application is where you have a real image-- training image, and you fit edges and do all that stuff. But there's some utility to being able to also deal with CAD data, where you have an analytic description of an object and now you don't need a perfectly made one because you've got the perfect thing there in the CAD. So similarly here, in practice, we'll be looking at real objects which are imperfect. But if it's possible to put something in the library based on the true shape that this thing is supposed to be, that's valuable. Hard to do it numerically. And for example, if we have photometric stereo data or if we have a mesh model as people use in graphics. So then we have patches on the surface. And in the case of photometric stereo, those patches will typically correspond to pixels. So there's a small, quadrilateral patch on the surface that maps into a pixel, and we know its orientation. But whatever it is we have the same job. We know that facet by, say, its corners. Let's say it's triangular. And we need two things. The one is the surface normal. 
And then the other one is the area. So let's start with the surface normal. Well, that's easy to get because we can take any two edges and take the cross-product-- for example, that. And that looks asymmetrical because we have b appearing twice. Why should b be appearing twice and the others only once? But we can easily show that it's actually equal to the symmetrical form a cross b plus b cross c plus c cross a. OK, so it's easy to compute the surface normal. And then the other thing is we need the area. And so there, the area of the triangle is one half the magnitude of that cross-product. Where does that come from? Well, the magnitude of the cross-product is the product of the lengths of the two vectors times the sine of the angle in between them. And that's exactly what we need for the area. So if we have a parallelogram, the area of that is the magnitude of the cross-product of those two edge vectors. And we don't have a parallelogram. We only have half of a parallelogram. So we get the half. And again, this is asymmetrical, so that seems odd. And of course, you can do this three ways. You can either have two copies of a in the formula or two copies of b and so on. And if you add them all up just for fun, you get the symmetrical form-- with a factor of 3/2, because we added three of them, and each of them comes with a half. So anyway, easy to compute. So this is the normal and the area. And now what do we do? Well, we put a mass on the sphere at the point based on the surface normal, and the mass will be proportional to this area. And then we repeat this for the other facets of the object, and that way we get a mass distribution on the sphere. And the density of that is our G. So this is another way of understanding why, when we add the two contributions from the two sides of the donut, we want to add them the way we do. We don't want to subtract stuff and have it cancel out, because, here, we're not taking into account anything about curvature directly.
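The facet computation just described can be sketched directly. This is my own small implementation in plain Python, including a check that the asymmetrical-looking cross-product of two edges equals the symmetrical form:

```python
import math

def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def sub(u, v):
    return (u[0] - v[0], u[1] - v[1], u[2] - v[2])

def facet_normal_and_area(a, b, c):
    """Unit surface normal and area of the triangular facet a-b-c."""
    n = cross(sub(b, a), sub(c, b))            # asymmetrical-looking form ...
    n_sym = tuple(cross(a, b)[i] + cross(b, c)[i] + cross(c, a)[i]
                  for i in range(3))           # ... equals the symmetrical form
    assert all(abs(n[i] - n_sym[i]) < 1e-12 for i in range(3))
    length = math.sqrt(sum(t * t for t in n))
    area = 0.5 * length                        # half the parallelogram area
    return tuple(t / length for t in n), area

normal, area = facet_normal_and_area((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0))
assert normal == (0.0, 0.0, 1.0)
assert abs(area - 0.5) < 1e-12
```

Building the numerical EGI is then just a loop over facets, placing a mass equal to `area` at the cell of the sphere nearest `normal`.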
It's interesting that that's the effect we get, that if the curvature is high, these guys will be all spread out, but it doesn't matter whether the curvature is positive or negative. They'll be spread out. So how to represent this? So basically, what we're building is the direction histogram. So you can imagine that we would somehow divide the sphere up into boxes, just as we do when we're computing histograms. And then we just count everything that falls into each of those cells. And direction histograms are used in other contexts. They're pretty interesting. So for example, if you look at the fine structure of, say, muscle you'll find that most of the fibers are parallel. And so how do you express that-- which direction are they going, and how many are not in that category? Well, you plot them on the sphere and build this orientation histogram, and then it'll become apparent that there's a strong concentration, a particular point on the sphere that corresponds to the longitudinal axis of these fibers and not so much elsewhere, but there will be some. And this has an application in, for example, neuroimaging. So as with MRI, you can find out the flow directions of water in your brain and thereby determine the dominant axon directions, and then you can plot these connecting cables that go from one part to another. And you can study them by this method of plotting these directions histograms. They do the same with blood vessels. So one method for trying to distinguish tumor from other tissue is to make note of the fact that the tumor needs blood supply to grow. So it puts out stuff that attracts blood vessels. But it's disorganized It's not built the way it is when you're growing from a small cell to many cells. And so it's a mess. It's leaky. But more importantly, from our point of view, blood vessels go every which way. So if you image a tumor and you plot the orientation histogram, you'll get a uniform distribution around the sphere. That's bad. 
If you image real tissue, you'll find that, yes, there are vessels going every which way, but there's typically a dominant direction or multiple dominant directions. And when you plot the directions on an orientation histogram, there will be strong blobs, whereas, in this disorganized tissue, it'll all be spread around. So orientation histograms aren't really a new thing. Most people don't know about them. But they're used in other areas-- cryomicroscopy, and what have you. Now, ordinary histograms are pretty straightforward. In 1D, you just divide things up, and you just count how many things end up in each slot. And then maybe, based on that, you create some estimate of a probability distribution. It's slightly more difficult in 2D. Well, OK, so we just divide it up into cells. And same thing-- we count. Squares? Not so good. Well, one reason is that they aren't round. If we could somehow fill the plane with disks, it would be better, but we can't without overlap or leaving gaps. Why is this bad? Well, take a more extreme case. Suppose your tessellation is triangles. We could certainly use that way of dividing up the plane as well. But you see that, in the case of a triangle, you're combining things that are pretty far away from the center-- for the same area-- compared to if you had a square, or, even better, a more rounded shape like a hexagon. So in the case of the hexagon, the ratio of the largest radius to the smallest radius is very small. In the case of the triangle, it's quite large. And the square is intermediate. So in the case of 2D, depending on your application-- the square is what people generally just use because it's trivial. It separates the problem of dealing with x from dealing with y. The triangle you don't want to use, and the hexagon would be better. But it's extra work, so people typically don't do it. And the improvement is not huge.
It's not like it's twice as good as square. So that's one issue, how to pick the cells. Then there's another one, which even occurs up here, which is-- suppose that something falls there, and with a little bit of noise, it would have fallen there. And so when you look at the histogram, you have to take that into account, that there's some sort of randomness going on here and that when you compare two histograms, you want to be careful to take that into account. And then how do you do that? Well, one way is to have a second histogram that's shifted. And so, in that one, these fall into the same cell. Problem solved. Except now you have to compare a shifted and an unshifted histogram. Well, in 2D, of course, you can do the same thing. But you have to be careful. You end up having to do it four times. And so as you go up in dimensions, this, quote, "solution" gets more and more expensive. You have to shift it to half an x, a half a y, and then shift both x and y. And then together with the original grid, you've got four grids. So that's a common solution for the 2D binning problem. There's another way, which is to say, well, when I deposit my contribution here, I put some of it over there and some of it over there. Basically, you're convolving your distribution with some spread function, and, depending on the implementation, this may be cheaper to do than that. In the case of 2D, you'll have to put it into four places. And again, this is like doing it at the time you enter the data versus doing it at the time you read it out. So there are those issues. But we actually have a worse problem. We have a sphere. And so how do we divide up the sphere? And we already talked about longitude, latitude not being a great way to divide it up. So let's, before we do it, summarize what of the desired properties we want of a tessellation. And in the planar case, people don't even think about it. It's just obvious. But when we have a curved surface, it's more complicated. 
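Before moving on to the sphere, the spread-the-contribution idea from the planar case can be sketched in 1D. This is my own toy version (linear interpolation between the two nearest bin centers, a hypothetical choice of spread function):

```python
import math

def splat(samples, n_bins, lo, hi):
    """1-D histogram that splits each unit of weight linearly between the
    two nearest bin centers, so a sample sitting near a cell boundary moves
    weight smoothly instead of jumping whole counts between cells."""
    width = (hi - lo) / n_bins
    hist = [0.0] * n_bins
    for x in samples:
        t = (x - lo) / width - 0.5      # position in units of bin centers
        i = int(math.floor(t))
        f = t - i
        if 0 <= i < n_bins:
            hist[i] += 1.0 - f
        if 0 <= i + 1 < n_bins:
            hist[i + 1] += f
    return hist

# Two samples straddling the boundary between bins 0 and 1 end up with
# nearly identical, overlapping contributions rather than landing in
# different cells outright:
h = splat([0.99, 1.01], n_bins=4, lo=0.0, hi=4.0)
assert abs(h[0] - 1.0) < 0.05 and abs(h[1] - 1.0) < 0.05
assert abs(sum(h) - 2.0) < 1e-9
```

In 2D the same trick spreads each sample over four cells, which is the convolution-at-entry-time alternative to keeping a second, shifted grid.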
We would like the cells to all have the same area. And again, here, it's trivial to arrange for that, hard to do on the sphere. Then we might want them to be equal shapes. And again, these are all the same shape. No problem. And then we might want the shapes to be rounded. And that refers to the discussion we had about triangular grids and hexagonal grids. And again, on the sphere, it's very easy to build triangular grids, but they're not particularly good. We want something that's more rounded, like hexagons, pentagons, dodecagons, and so on. OK, what else do we want? Equal area, equal shapes. We'd like to have a regular pattern. We want it to be easy to do the binning. So over here, how do we do the binning? Well, we just do an integer division and throw away the fraction. Or we round off to an integer-- both in x and y, if necessary. But that's not so obvious to do here, particularly if we have some interesting pattern with lots of hexagons and pentagons. So if I have a unit vector, where does it go? Now, over here, you don't even think about it. It's so trivial. You just divide by the interval, and there's an integer part, and that's what you use. If you have a bunch of facets on the sphere and you have a unit normal vector, what do you do? Well, you can do something brute force. You can just take the dot product of that unit vector with the unit vector of each of the cells, and then you pick the cell that has the largest value, because it's the cosine of the angle, and the largest value means the smallest angle. But obviously, that's not practical, because it means that, every time you access the orientation histogram, you need to step through all of the cells. OK, easy to bin. Let's see, one, two, three, four, five. I thought I had eight. Let's see what else we need. We want to have alignment on rotation. So what's that all about? Well, in doing the matching of these EGIs, we will need to bring one object into alignment with the other.
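The brute-force binning just mentioned, taking the dot product with every cell center and picking the largest, looks like this (a sketch of mine, using the six face directions of a cube as a stand-in tessellation):

```python
def bin_index(normal, cell_centers):
    """Brute-force binning on the sphere: pick the cell whose center
    direction has the largest dot product with the unit normal (largest
    cosine means smallest angle). O(number of cells) per lookup, which
    is exactly why a cheaper indexing scheme would be desirable."""
    best, best_dot = 0, -2.0
    for k, c in enumerate(cell_centers):
        d = sum(ni * ci for ni, ci in zip(normal, c))
        if d > best_dot:
            best, best_dot = k, d
    return best

# Six cells at the face directions of a cube:
cells = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
assert bin_index((0.1, 0.2, 0.97), cells) == 4   # roughly "up"
assert bin_index((-0.9, 0.1, 0.3), cells) == 1
```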
And again, in the planar case, it's just translation. You just shift things around. It's very straightforward. And in particular, you can shift it around by discrete increments equal to the size of the cell and, each time, test in that full match. And there's no loss of quality because you just take the numbers as they are. Well, that's not going to be so easy in the case of rotation. So let's think about how that might work. So suppose that we have divided up the sphere into-- I don't know. Let's make it a dodecahedron. Dodecahedron. OK, so here's our sphere. We've taken the dodecahedron and centrally projected it out onto the surface. So we have these pentagons. Well, they have curved edges, but they're the result of projecting a pentagon up. And so this is one of the cells. And there's another one. And there are 12 of them, right? Dodecahedron. And so what's my data representation? Well, it's just 12 numbers. Now, of course, this is not really a good example because that's too few, but just to illustrate the point. So my orientation histogram is 12 numbers. Now, if I rotate the sphere so as to bring the facets of the dodecahedron back into alignment with itself, what happens to these numbers? Well, a facet goes to some other facet. So maybe A1 goes over here and A7 comes back here, and A9, A3. So all that happens is that they're permuted and there's no loss in quality, and it's easy to compute. So that's what happens over here. If I shift this whole thing, the entries in the data are permuted, but I don't even worry about that, because I just have an array, and I just imagine that the array starts somewhere else. So that's the advantage of alignment on rotation. So this means that, here, for any rotation in the group of rotations of the pentagon, my data changes in a very systematic way that does not involve any loss of quality. What am I talking about-- loss of quality? Well, suppose that, after rotation, these cells didn't line up but overlapped in some way. 
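The permute-without-loss property can be illustrated with an even simpler tessellation than the dodecahedron: the six face directions of a cube, chosen here only because the cell centers are easy to write down. A 90-degree rotation about z maps each cell center exactly onto another, so the histogram entries are merely shuffled:

```python
def rotate_z_90(v):
    """Rotate a vector 90 degrees about the z axis: (x, y, z) -> (-y, x, z)."""
    x, y, z = v
    return (-y, x, z)

cells = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
rotated = [rotate_z_90(c) for c in cells]

# Every rotated cell center lands exactly on another cell center, so the
# rotation permutes the six histogram entries with no interpolation and
# no loss of quality.
assert sorted(rotated) == sorted(cells)
perm = [cells.index(rc) for rc in rotated]
assert sorted(perm) == list(range(6))
```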
Suppose, after my rotation, it looks like that. Well, that means then I'm going to have to redistribute whatever weight was in here, a little bit in there, and this red cell would pick up a bit of the weight from there, a bit of the weight from there. So I'd be doing some interpolation convolution operation, and the result could be useful. But then maybe I'd have to do it again. And then I'm going to be in the Xerox of the Xerox of the Xerox problem, where each step, I lose a little bit of quality, and, after a while, it's not really useful anymore. So let's see. Regular pattern, regular shape. So then the question is, what patterns can we use? And so the reason we talked about Platonic and Archimedean solids is because those are the starting points for these orientation histograms. And unfortunately, we've run out of time. So we'll talk about that next time. So there is a quiz. And I think we've covered everything that you need to do the quiz. Otherwise, we'll finish on next Tuesday.
MIT 6.801 Machine Vision, Fall 2020
Lecture 19: Absolute Orientation in Closed Form, Outliers and Robustness, RANSAC
BERTHOLD HORN: We're talking about photogrammetry and, in particular, absolute orientation. And we're drilling down into more details for this one than we will for the other aspects of photogrammetry because a lot of it's going to be based on what we're doing here. And we found that rotation is the part that's awkward to handle. So we talked about different ways of representing rotations. And we picked unit quaternions. And a major reason for that is that, given all of the advantages we discussed before, the big thing for us is we can get a closed-form solution to the least-squares problem. So in some sense, we have an objective way of getting a, quote, "best fit answer," which is more difficult with the other notations. Then we talked about how to manipulate these things. There are two operations we're particularly interested in. The one is composition of rotations. If you do successive rotations, what happens? And the other one is rotating a vector. So composition of rotations is just multiplication. And we can represent it in various ways. For those of us who grew up with vectors, maybe mapping it into scalars and vectors is the most intuitive. So we have this formula. And it's actually quite efficient in terms of composition of rotations. In terms of computation, it's a lot less work than multiplying two orthonormal 3 by 3 matrices. But for much of what we do, what's more important is the other operation, which is rotation of a vector. And so for that, we have that formula where we're rotating in four space using q. And then we're unrotating using q conjugate. And somehow, we end up back in a real three-dimensional world. And for that, again, if we want to just think about vectors, one way we can write it is-- so if we don't want to think too much about operations in the quaternion world, we can just do this operation directly using vectors and scalars. And this one, we have a disadvantage.
It takes somewhat more arithmetic operations than multiplying a 3 by 3 matrix by a vector. Although we saw that the ways of rewriting this were we reused the q cross r that reduces the number of operations somewhat. And then we need to connect this to other notations. So we, for example, want to relate it to axis and angle notation. And there, we have the formula of Rodrigues, where omega hat is the unit vector in the direction of the axis. And so that gives us one conversion that we want. So we can use that to-- q dot-- sorry about the handwriting. And when we identify these two formulas, we find that actually q0 is cos theta over 2 and omega hat is-- well, let's see. Other way around. The q vector is omega hat sine theta over 2. Our quaternion for rotation looks like that. And so we can right away read off the axis; it's going to be parallel to the vector part. And if we want to, we can read off the angle by using atan2 on the ratio of the real part and the magnitude of the imaginary part. So that is one way of converting between two of these different forms. Now, we have eight. So we don't want to go through all eight of them. But the other one that's important for us, since we live in a world where orthonormal matrices rule, we need to be able to convert back and forth between those. And so if we expand that formula using that isomorphism with 4 by 4 orthogonal matrices, then we got that. And we expanded that out. And when we look at that matrix, we find that it has skew symmetric parts and symmetric parts. And we can use that to help us in the conversion from one form to the other. So first of all, if we're given the quaternion, this allows us to compute the orthonormal matrix very easily. So in this transformation, it's 4 by 4. But we don't care about the first row and the first column because that is a special quaternion that represents a vector where the scalar part is 0.
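The two operations reviewed here can be sketched in a few lines of Python. This is an illustrative implementation, not the lecture's code: it builds the unit quaternion (cos θ/2, ω̂ sin θ/2) from axis and angle, rotates a vector via q r q*, and checks the result against the Rodrigues formula.

```python
import numpy as np

def quat_from_axis_angle(axis, theta):
    """Unit quaternion [q0, qx, qy, qz] for rotation by theta about axis."""
    axis = np.asarray(axis, float) / np.linalg.norm(axis)
    return np.concatenate([[np.cos(theta / 2)], np.sin(theta / 2) * axis])

def quat_mul(p, q):
    """Quaternion product in scalar + vector form."""
    p0, pv = p[0], p[1:]
    q0, qv = q[0], q[1:]
    return np.concatenate([[p0 * q0 - pv @ qv],
                           p0 * qv + q0 * pv + np.cross(pv, qv)])

def rotate(q, r):
    """Rotate vector r: embed it as a purely imaginary quaternion,
    compute q r q*, and read the vector part back out."""
    rq = np.concatenate([[0.0], r])
    q_conj = q * np.array([1.0, -1.0, -1.0, -1.0])
    return quat_mul(quat_mul(q, rq), q_conj)[1:]

def rodrigues(axis, theta, r):
    """Rodrigues' rotation formula, for comparison."""
    w = np.asarray(axis, float) / np.linalg.norm(axis)
    r = np.asarray(r, float)
    return (np.cos(theta) * r + np.sin(theta) * np.cross(w, r)
            + (1 - np.cos(theta)) * (w @ r) * w)
```

For example, rotating the x-axis by 90 degrees about z should give the y-axis by either route.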
So all the first column and the first row tells us is that if we have a completely imaginary quaternion, then when we perform this transformation, we get back one of those. So we're back in the world of vectors. So that's the one way of conversion. The other way is a little bit less obvious, partly because we're going to have a 3 by 3 matrix. So we have nine numbers. And our answer only has three degrees of freedom. We only want the axis and the angle. And so you've done some of this. But one part we can do is to look at the trace of that matrix, that 3 by 3 sub matrix, and we end up with something like 3 q0 squared minus qx squared minus qy squared minus qz squared-- so it's just running down the diagonal. And of course, q0 we said over there was cosine of half the angle. And then this part, each of these is proportional to sine of half the angle and is a unit vector. So if we take squares of these, we should get sine squared theta over 2. So trace equal to that. And then we can manipulate this in various ways. We could add cos squared theta over 2 plus sine squared theta over 2 equals 1 in order to get rid of that sine squared theta. So we can add that in without changing anything. Or we can subtract it. So if we subtract it, we get that thing that's more symmetrical than cosine and sine. And of course, that's the double angle formula. So that's two cosine of theta. Oh, we need the minus 1. And from that, we can solve for cosine of theta is 1 half trace of R minus 1. So right away, that allows you a quick test on whether a rotation matrix could even be a rotation matrix. Because if your trace ends up being too large or too small, cosine theta can only be between plus and minus 1. So that limits the value of the trace. So that's a lot cheaper than checking whether it's orthonormal. Now, that's a way to get the angle. It's not a good way because of the problems near theta equals 0, because we have that thing.
We're up on the curve and a tiny change in the height potentially corresponds to a large change in the theta. And similarly-- so yes, this is true, but that's not the way to compute theta. So what do we do? Well, we use the off-diagonal elements, right? Because the off-diagonal elements all depend on the sine of theta over 2. And then it depends on whether we're going to use the symmetric part, like q,x, q,y, which would then depend on sine squared of theta over 2 or the asymmetric part, which is q,0, q,z, which is going to depend on sine theta over 2, cosine theta over 2, which is the double angle formula for sine theta. So anyway, we need to get sine theta so that you can then use atan2 and avoid that problem because where this one is bad, sine theta is good. OK, so that's one way to go. But to make it really explicit, let's actually give a full inversion formula. And I do this mostly to vaccinate you against conversion formulas that have been published because they are not good. So in this kind of sense, where, yeah, mathematically you're done. You've got this. But actually, in terms of numerical accuracy, no. OK, so given the three-by-three matrix, R, we can compute various sums. And let's start with the diagonal. So we've got our three-by-three matrix over there. And if we add up the diagonal, we get that if we add 1, right? There's that 1. Then we can combine it in various other ways. So this is just sort of like, you know, try all possible ways of subtracting and adding terms on the diagonal, as if the off-diagonals didn't exist. And ta-da. Now, we just take square roots. So that's one approach. Of course, the problem is there's a sign ambiguity because, in each case, we can compute these sums and differences and then divide by 4 and take the square root, and we've got one component of the quaternion. But we don't know whether it's plus or minus. And yeah, we know that plus q is the same as minus q. But this is more. This allows us 16 different sign choices.
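The atan2 route for the angle can be made concrete. This is a sketch, not the lecture's code: the skew-symmetric part (R − Rᵀ)/2 yields a vector of magnitude sin θ along the axis, while the trace gives cos θ, and atan2 of the pair is well behaved where arccos of the trace is not.

```python
import numpy as np

def rotation_angle(R):
    """Rotation angle of a 3x3 rotation matrix, via atan2 rather than
    arccos of the trace (which is numerically poor near theta = 0)."""
    c = (np.trace(R) - 1.0) / 2.0            # cos(theta) from the trace
    v = 0.5 * np.array([R[2, 1] - R[1, 2],   # skew-symmetric part:
                        R[0, 2] - R[2, 0],   # sin(theta) times the axis
                        R[1, 0] - R[0, 1]])
    return np.arctan2(np.linalg.norm(v), c)
```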
So even if you allow for the flipping of the sign of the quaternion, that leaves us with 8. So that means that we shouldn't rely on this method alone. But what we can do is compute these, and then pick the largest for numerical accuracy, and solve for it. So suppose this sum is the largest, then we'll use it and arbitrarily use the positive version because we have that minus q is the same as q, so we can pick. We can make one arbitrary choice and solve for that. So let's call that q,i. And then, we need to go to the off-diagonals. And fortunately, there are symmetric and asymmetric parts that we can pull out. Now, this is a bit more involved than the published methods, but it works. It's robust. It's resistant to numerical problems. So by adding and subtracting corresponding off-diagonal parts, we get six relationships-- more than we need. But the way this works is, we've picked one of these and solved from the diagonal. And then we go into three of these and solve for the rest. So for example, if we solved for q,y, this involves q,y, this involves q,y, and this involves q,y. So we picked those three to solve for q,0, q,x, and q,z. So then correspondingly, if we have-- so that's a direct way of going from rotation matrix to quaternion. I mean, we can also go indirectly. We can first find axis and angle and then write the rotation matrix in terms of axis and angle. And it's more complicated mostly because we've got nine numbers, and they could be noisy, and we'd like to get the best possible result out of those nine numbers. OK, so out of the 56 ways of converting between different representations, we've done four. And that should be enough. So we can convert back and forth between the things that we're more familiar with-- maybe not between Pauli spin matrices, but we don't use those much, except in quantum mechanics. OK. And you did some of this in the homework, and I don't want to repeat that. But let me talk about something in particular, scaling.
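A sketch of this pick-the-largest inversion, following the diagonal and off-diagonal relations just described (the variable names and layout are mine, not the lecture's):

```python
import numpy as np

def quat_from_rot(R):
    """Rotation matrix -> unit quaternion [q0, qx, qy, qz].

    The four diagonal combinations equal 4*q0^2, 4*qx^2, 4*qy^2, 4*qz^2.
    Pick the largest for numerical accuracy, take its (arbitrarily
    positive) square root, then get the other three components from
    off-diagonal sums and differences, e.g. R32 - R23 = 4 q0 qx and
    R21 + R12 = 4 qx qy."""
    t = np.trace(R)
    d = np.array([1 + t,
                  1 + R[0, 0] - R[1, 1] - R[2, 2],
                  1 - R[0, 0] + R[1, 1] - R[2, 2],
                  1 - R[0, 0] - R[1, 1] + R[2, 2]])
    i = int(np.argmax(d))
    q = np.empty(4)
    q[i] = 0.5 * np.sqrt(d[i])          # arbitrary + sign: q and -q are the same rotation
    s = 0.25 / q[i]
    if i == 0:
        q[1] = s * (R[2, 1] - R[1, 2])
        q[2] = s * (R[0, 2] - R[2, 0])
        q[3] = s * (R[1, 0] - R[0, 1])
    elif i == 1:
        q[0] = s * (R[2, 1] - R[1, 2])
        q[2] = s * (R[1, 0] + R[0, 1])
        q[3] = s * (R[0, 2] + R[2, 0])
    elif i == 2:
        q[0] = s * (R[0, 2] - R[2, 0])
        q[1] = s * (R[1, 0] + R[0, 1])
        q[3] = s * (R[2, 1] + R[1, 2])
    else:
        q[0] = s * (R[1, 0] - R[0, 1])
        q[1] = s * (R[0, 2] + R[2, 0])
        q[2] = s * (R[2, 1] + R[1, 2])
    return q
```

Note how a half-turn (trace near -1), where the naive trace-based square root is worst, simply routes through a different branch.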
So far, we'd assumed that our coordinate system transformations were rotation and translation. And in many cases, that's all there is. If we have actual time-of-flight data measuring distances, there's usually no question about the scale factor because we know the speed of light with more precision than you can throw a stick at, although there's often an error in offset. And so how do we get into scale? Well, it's quite easy to get into scale-- for example, if we get the baseline wrong. So our plane is flying along taking one picture, and taking another picture. And we know the speed of the plane, and we know the time, so we know how far-- we know the baseline. But how accurately do we really know it? And so if we are trying to patch together pieces of the terrain that we obtained with successive-- two camera positions, then we should allow for some small difference in the scaling because we know the distance pretty accurately, but not perfectly. And we're looking for very high accuracy in topographic reconstruction. So how to deal with that? Well, let's assume we've-- this isn't hard. We again find that the centroid maps to the centroid, even with the introduction of scaling. That's very easy. Just differentiate with respect to translation. Set the result equal to 0. And then we move the origin to the centroid, and we get these prime coordinates. So now, let's look at this. So here's a possible version of the problem we're trying to solve: now, we've got rid of translation by moving to the centroid. And now, we have an unknown rotation, and we have an unknown scaling. That's a new part. And so we might want to set up a least squares problem. And of course, the norm is the dot product of this thing with itself. So we can actually multiply it out. We get four terms. And then the sum of those four terms is the sum of four sums. And so we end up with-- so this is very similar to what we've done before. And how do we find the optimum?
Well, of course, we just differentiate with respect to s. This is particularly easy now. It's just a scalar. And we set the result equal to 0. And the first term drops out, and then we get the second term. Oh, here, of course, we use the fact that the size of this vector is the same as the size of the rotated vector. So rotation doesn't change the size. OK, this is equal to 0. And so we can solve for s. Well, let's give these things names. Let's call this Sr, so we don't have to write them too many times, and let's call this D, and call this Sl. So we have s equals D over Sl. We just solve this for s. We can forget the 2. And so we don't, at this point, know the answer because we haven't got the rotation yet. But in the same situation as we were with translation, we can remove the scale factor from consideration now. And then at the end, we go back and use this formula to compute the scale factor. But everything stays coupled-- we can't do it independently of rotation, since D depends on the rotation. Now, we talked earlier about symmetry. Why are we rotating from left coordinates to right coordinates rather than from right coordinates to left coordinates? And a method should really be symmetrical, in that if you compute the inverse transformation, you should get the inverse. So the rotation matrix you get should be the transpose of the rotation matrix and so on. So here, if we now go the other way round, if we look at this problem, then we expect to get a scale factor that's the inverse of that scale factor, right? Because we're going from the right coordinate system to the left. We were going from left to right before. And if you actually work it out, you get that. So again, you can do it, but it's not the same answer. It's not the inverse of that. What we would expect is that s prime is 1 over s, and it's not, in general. So that suggests that this is not a good method. And what's going on? Well, what's going on is that the least squares method is trying to make the errors as small as possible.
And one way to do that is to make the transformed coordinates smaller up to a point. So you can kind of cheat by making the scale factor a little bit smaller than it really should be because you're shrinking things down and making things smaller. And then conversely, if you go in the other direction, you shrink down the other coordinate system. So neither of these is really acceptable. And so we look at another error term, the one that you looked at in the homework problem in the 2D case. And so in this case, we're going to end up with an error that looks like this, with Sr, Sl, and D as defined over there. And that's nice because s doesn't show up in the term that has the rotation in it. We just have these other terms. And so we can just differentiate with respect to s and set the result equal to 0, because the derivative of that D term with respect to s is 0. And so that means that s squared is Sr over Sl. And now, if I go in the other direction, I'm going to get s bar squared is Sl over Sr. And that is the inverse. So that method is much to be preferred. It also has the property that we don't need correspondences. So it's just like translation, where we were able to map centroid to centroid. We didn't need correspondences for that. Here, it's very convenient that we don't need the-- the only term that depends on correspondence is that one. We only need these others, which basically say, you've got this cloud of points. How big is it? And then you have the other cloud of points. How big is it? Well, the scale factor is the ratio of those two sizes. It's very intuitive. OK, so that leaves us with the problem of-- so we can deal with translation and scaling in a correspondence-free way and also free of rotation. We don't need to take into account rotation. Well, we can separate the problems. So that leaves us with the rotation part. And we spent some time on that, so I won't review that in great detail.
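The symmetric scale estimate just derived is easy to sketch: subtract each cloud's centroid, then take the square root of the ratio of the clouds' summed squared lengths. No correspondences and no rotation are needed for this part. (Illustrative code, my naming.)

```python
import numpy as np

def symmetric_scale(left_pts, right_pts):
    """s = sqrt(Sr / Sl): the ratio of the 'sizes' of the two point
    clouds about their centroids.  Swapping the arguments gives exactly
    the inverse scale, which the asymmetric estimate s = D/Sl does not."""
    l = left_pts - left_pts.mean(axis=0)
    r = right_pts - right_pts.mean(axis=0)
    Sl = np.sum(l * l)          # sum of squared lengths, left cloud
    Sr = np.sum(r * r)          # sum of squared lengths, right cloud
    return np.sqrt(Sr / Sl)
```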
Just note that, in the end, we have this to maximize because this was a minimization, but we have the negative sign, so this has to be maximized. And N, of course, is a four-by-four matrix, which we can show is symmetric. It's not obvious that it is. And so now, we have a calculus problem. We just differentiate with respect to q and set the result equal to 0-- well, not quite. We have that constraint. And that's just as well because, otherwise, the answer is 0 because you differentiate that, and you get 2N,q. How do you do that? Well, I'm sure you remember these formulas and so on-- it's all in the appendix. OK, so if we set that equal to 0, well, there's a very convenient answer, which is that q is 0. Or looked at another way-- that's an extremum. Well, we're actually trying to maximize, so we'd go the other way. We would just make q grow. We can make q as large as we like. And so there's no maximum-- well, infinity. So that's obviously not satisfactory. We need to take into account the constraint. And as we saw in the slides last time, one way to do that is using Lagrange multipliers-- a very elegant method, and it's described in the appendix. But this is a special case where we can get away with something simpler called the Rayleigh quotient. So Rayleigh did all sorts of interesting things, including work in optics. And he ran into this problem of trying to find an extremum. And so he came up with the idea, well, how do I prevent this from running away by being made very large? Sorry, this is transposed there, yeah. Well, I just divide by the size of q. So there's no advantage to making q very large, right? So if you think about it as directions in a space, this quantity is constant along any ray because wherever I am, I'm going to divide out by the length of q itself. So that creates a different function, one that doesn't go off to infinity. But that is constant along any ray, and that's exactly what we want.
We want to find the ray, the direction of the ray that makes that as extreme as possible. Yeah? AUDIENCE: What is the matrix, then? BERTHOLD HORN: Oh, so that's the thing that we got out of where-- it disappeared-- where we took the representation of a quaternion product as a four-by-four orthogonal matrix. So that's sort of the key-- AUDIENCE: Can you call it q, really? BERTHOLD HORN: No. Where is there space? Well, let's just take this. Just a quick review of that part-- so I'm not going to copy it all out, just the beginning. So we had this sigma over-- right? This was the thing we were trying to maximize. And then we wrote-- first, we took this over to the other side to make it more symmetrical, so-- oh, sorry. And then we used that notation where we represented this as-- now, before, we expanded it the other way out. We converted q into a four-by-four matrix and multiplied by this quaternion. And now, we do it the other way around. And then because this is a dot product, it's that transpose. And then we get q transpose, R bar l,i transpose R r,i, q. And of course, this is the sum of that. And q doesn't depend on i, so we can pull it out of the sum. So it's that matrix. So it's derived from the data, from the correspondences. OK, so how do we find that? Well, now, it's a pretty straightforward calculus problem. We just differentiate with respect to q and set the result equal to 0. And it's a ratio, so we use the rule for differentiation of a ratio. And so first, differentiate the numerator. We get 2N,q divided by this. And then we need to add the term involving the denominator. So we get minus 1 over q transpose q squared, times 2q, times q transpose N q. So this is just the formula for taking the derivative of a ratio, and this 2q here is the derivative of q transpose q with respect to q. And this is supposed to be 0. So that means that N,q is-- and this, of course, is a scalar, some constant-- times q. And so what does this tell us about q?
That we multiply some matrix by q and we get out some scaled version of q-- so eigenvector, right? So q is an eigenvector. And now, we're trying to maximize this. So we need to know which of the eigenvectors to pick. And it's pretty easy to see that if N,q is lambda q, then this ratio over here is going to be lambda q transpose q over q transpose q, which is lambda. And so to maximize it, we pick the largest eigenvalue. We want to maximize this ratio. And the way to do it is to pick q to be the eigenvector corresponding to the largest eigenvalue. And then this whole thing is equal to that eigenvalue. So obviously, you wouldn't want to pick a smaller eigenvalue than the maximum one. OK, and so the only slight sticking point is the fact that there's this constraint. But the constraint is so much easier to handle than the constraints we had with the orthonormal matrices. So we can actually do this in a very straightforward way-- Rayleigh quotient. So a couple of things that we usually ask after we've, quote, found the solution: one is, when does it fail? And the other one is, how many points do we need? How many correspondences? And so let's start with that. So in all of these photogrammetric problems, that's an important point. And so well, we can approach this from the point of view of the properties of this matrix N. But it's a bit of a bear of a problem to worry about. Is it singular? How many eigenvalues are non-zero? And so on. Let's just do it intuitively. So first of all, we can say, well, how many things are we looking for? We're looking for six. If we're looking for translation and rotation, we're looking for six. If we add scaling, we might be looking for seven. So for each measured correspondence, there's a point in 3D and a point in 3D that we say are the same. So that's worth three constraints, right? So we're looking for six things. So hey, we only need two correspondences, right?
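Putting the pieces together, here is a sketch of the whole closed-form rotation solution. The entries of the 4x4 matrix N in terms of the 3x3 matrix M follow Horn's published closed-form absolute-orientation formulation, which is my assumption for the blackboard expressions the transcript only gestures at; the code names are mine.

```python
import numpy as np

def best_rotation_quaternion(left, right):
    """Closed-form best-fit rotation taking the left point cloud into the
    right one: build M from the correspondences, assemble the symmetric
    4x4 matrix N from M, and return the eigenvector of N's largest
    eigenvalue as the unit quaternion."""
    l = left - left.mean(axis=0)             # centroids removed first
    r = right - right.mean(axis=0)
    M = l.T @ r                              # sum of dyadic products l_i r_i^T
    Sxx, Sxy, Sxz = M[0]
    Syx, Syy, Syz = M[1]
    Szx, Szy, Szz = M[2]
    N = np.array([
        [Sxx + Syy + Szz, Syz - Szy,         Szx - Sxz,         Sxy - Syx],
        [Syz - Szy,       Sxx - Syy - Szz,   Sxy + Syx,         Szx + Sxz],
        [Szx - Sxz,       Sxy + Syx,        -Sxx + Syy - Szz,   Syz + Szy],
        [Sxy - Syx,       Szx + Sxz,         Syz + Szy,        -Sxx - Syy + Szz]])
    w, v = np.linalg.eigh(N)                 # N is symmetric, so eigh applies
    return v[:, np.argmax(w)]                # unit quaternion, up to sign
```

With a symmetric point set rotated 90 degrees about z, this recovers the quaternion (cos 45°, 0, 0, sin 45°) up to the usual q vs. -q ambiguity.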
That's assuming there's no redundancy in that information. Well, let's start off with one correspondence. So here's an object. And then we have a second object. And we want to know how one is rotated relative to the other. And of course, we know the duality between two objects, one coordinate system, or one object, two coordinate systems. So let's do the two objects, one coordinate system. And obviously, this isn't going to do it because I can move this thing. I can rotate it around that point without changing that correspondence. And also, I can rotate it about this axis. It's just very little constraint-- three constraints out of six degrees of freedom. So that's not good enough. So let's pick two. Suppose that our very rough back-of-the-envelope calculation suggests that two should be enough because each of those correspondences gives us three constraints. It's very powerful to say that this point is on top of that point. And so we get three out of one correspondence, three out of a second correspondence. That makes six. We're looking for six degrees of freedom. Maybe that'll do. Well, what do you think? I mean, if I tie two objects together at two points, is that enough to rigidly combine them? Or is there some degree of freedom left? So it's hard to see because it's in 2D. Well, draw that axis, and take one of the objects and rotate it about that axis. So no, that doesn't work. And the reason is that when we make the second measurement, we don't get three new things. The information's redundant. Why? Well, because there's the distance between the two that is fixed. And so it is true we have six numbers, but there's also one that's not worth anything new. And so we only have five constraints. So yes, we've got just one left, and that perfectly makes sense. We only have one degree of freedom left. We can rotate about this axis. So OK, so it takes three. It takes three correspondences to work.
And that method that we deprecated at the beginning used three correspondences. So in that respect, we're not any better. We need exactly the same number of correspondences as that method that we pooh-poohed and said wasn't very good. OK, so one question is-- we've come across this before in the 2D world-- is there some way of generalizing the transformation so that it matches? Here, with three points, we get nine constraints. And of course, they're highly redundant because the distances between the points are the same. And so with scaling, we're up to seven. So that's already going in the right direction if we add scaling. We need to add two more so that we would fully use and need three correspondences. So one idea is, well, how about the general linear transformation? And we did this in 2D. And I guess, in 2D, we ended up with six degrees of freedom. What is it in 3D? So now we're no longer constrained to orthonormal matrices. And that actually is a great thing because the orthonormal matrices are the ones that produce these horrendous problems with the constraints that they are orthonormal. And oh, this looks just right. It's got nine numbers in it. So nine constraints from three correspondences matches that, so it looks like this is, perhaps, the generalization that we need. Well, unfortunately, not quite because we have translation as well. So the general linear transformation has 12 elements in 3D. We had six in 2D. And so there isn't a nice symmetric argument that, with three correspondences, it would work. We would need four correspondences. And that does work. And it's very elegant and very neat. The least squares comes out beautifully because there are none of these annoying constraints-- like, determinant of the matrix is one.
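A sketch of that unconstrained fit: with homogeneous coordinates, the 12-parameter transformation r = A l + t is one plain least-squares solve, no orthonormality constraints anywhere. It needs at least four correspondences (not all coplanar) in 3D. The names here are illustrative.

```python
import numpy as np

def fit_affine(left, right):
    """Least-squares fit of the general linear transformation plus
    translation, r = A l + t, from 3D correspondences (left[i] <-> right[i]).
    Appending a 1 to each left point folds t into one unconstrained lstsq."""
    n = left.shape[0]
    L = np.hstack([left, np.ones((n, 1))])   # homogeneous left points, n x 4
    X, *_ = np.linalg.lstsq(L, right, rcond=None)
    A, t = X[:3].T, X[3]
    return A, t
```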
But that's not the transformation that we're dealing with in the real world where we have two sensor systems looking at the world, and they're typically only dealing with rotation and translation, maybe scaling, but not that full-- this allows skewing of the axes and asymmetric scaling, that the x-axis is scaled differently from the y-axis, and that there's a slant, and so on. So OK, so back to our analysis of this method, we've said something about how many correspondences we need. So one way to fail is to not have enough correspondences. And if you have only two correspondences, that matrix N will be singular, which isn't enough to make the method not work. But it will have more than one eigenvalue that's zero, and that's a way to formalize our hand-waving argument. But it's kind of messy, so I didn't want to do that. But there is another case that we want to look at that's actually kind of interesting. So how do we find the eigenvalues? Well, we use MATLAB. Or, more seriously, we need to solve this characteristic equation, which is obtained from that matrix N. So that's the equation that says, determinant of N minus lambda I is 0. So that's the determinant of a four-by-four matrix that has lambdas subtracted along the diagonal. And when we take the determinant, we get up to fourth-order combinations of lambda. So it's going to look like this. Now, our matrix N is a particular matrix that was obtained this way. And so actually, we can say more about this. I already mentioned this last time. It's not easy to show, but C,3 is the trace of the matrix, which happens to be 0. Well, it's easy to show it's the trace of a matrix. It's not easy to show that it's 0. So anyway, it's 0, which makes this equation easier to solve. In fact, usually, the first step in solving a fourth-order equation is to eliminate the third-order term. Well, that's already happened. So we're ahead of the game. What about the others? So this is the trace of another matrix.
So these others aren't 0, so what is this matrix M? Well, the matrix M is what we actually started with. This is what you'd compute when you got the correspondences. What's that? That's a dyadic product. So we take the vector in the left-hand coordinate system and multiply it by the vector in the right-hand coordinate system. But rather than the transpose of the first times the second, which gives us a dot product, we do this. And you've seen that. So that's a three-by-three matrix. We just step through the data, correspondence by correspondence. And we compute this dyadic product, and add it into a total, and we get that matrix M, which is, of course, three by three. And it turns out that the four-by-four matrix N has terms that can be computed from this matrix. So that's actually the most efficient way of getting N. You just sift through all of your data and compute M. Then at the end, you compute N from it. OK, next thing-- suppose that the determinant of M is 0. Suppose that M is singular. Well, that's pretty useful because that means that C,1 is 0. And then we can solve this equation without needing to go to any special textbook or something because then, we've got lambda to the fourth plus C,2 lambda squared plus C,0 is 0. In that case, it reduces to that problem. And we use what equation to solve that? It seems to have lambda to the fourth in it, which means we should use Ferrari or Cardano. But of course, it's quadratic in lambda squared, right? So we just apply the quadratic formula. So it's a particularly easy case. And so you might wonder, well, who cares? That's a special case. Well, it turns out, it's a particular case that's interesting from a geometric point of view. It has to do with the distribution of the points. And so this matrix N can have some 0 eigenvalues, which are potentially problematic. And when does that occur? Well, one is when the points are coplanar. So far, we've talked about clouds of points, like the spikes on a sea urchin.
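Written out, the reduction just described: the cubic coefficient C,3 is the trace of N, which is always zero, and when the determinant of M is zero (coplanar points, as shown next), the linear coefficient C,1 vanishes too, so the quartic is biquadratic:

```latex
\lambda^4 + C_2\,\lambda^2 + C_0 = 0
\qquad\Longrightarrow\qquad
\lambda^2 = \frac{-C_2 \pm \sqrt{C_2^{\,2} - 4\,C_0}}{2},
```

and the quadratic formula, not Ferrari or Cardano, does the job.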
And we're matching two of these. So we're rotating one into alignment with the other. But this applies equally well if all of those points are in a plane. And so what happens then? Well, if there's a plane, then all of our points are in that plane, and we can draw a unit normal to the plane, and then the dot product has to be 0. So let's suppose all the right-hand points are in a plane. Now, without measurement error, that implies that the left-hand points also are in a plane, but we may not want to make that restriction. There might be measurement error, and the others might not be exactly in the plane. So what I've done here is I've drawn the centroid. So I've already moved everything to the centroid. And therefore, I can write the equation of the plane in this simple form. The dot product-- the component of any one of these vectors in this direction is 0. That's the definition of a plane. OK, well, then, that's the same as r,i transpose n. And then I can go to M n hat is 0. OK, so if the points are coplanar, then we do have the determinant of M is 0. And then that problem is particularly easy to solve. Now, I haven't shown it the other way round, which I should, which is that if the determinant of M is 0, the points are coplanar. But I think you can see how to do that without too much trouble. Now, in that case, actually, we can solve the problem more simply. Suppose that things are nice, and both are coplanar, both the left-hand cloud of points and the right-hand cloud of points. So I can draw these two planes. So here's one plane. So all of the points in the left-hand coordinate system are in one plane. All of the right-hand ones are in this other plane. Not a particularly good drawing. Well, two planes-- so all of the left point cloud, all of the right point cloud. So then, I can decompose the problem into two steps. The one step is rotate the one plane so it lies on top of the other plane, and then an in-plane rotation, and I'm done.
So the only two things-- I can decompose the full 3D rotation into two very simple rotations. Now, to do that, I'd need to find this angle. And I need to find this axis. And the axis is parallel to n,1 cross n,2. If we have two planes that have normals n,1 and n,2, they intersect in a line, which is parallel to n,1 cross n,2. And the angle, of course, I can get very easily also. I take the dot product. That gives me the cosine. I take the cross product, and the magnitude of that gives me the sine. And then I use atan2. OK, so I've got the axis. I've got the angle, so I can construct the Quaternion. And then I rotate all of the points in one of the coordinate systems. And now, the two planes are on top of each other. And now, I've just got to take the rotation in that plane so that these points align. So that's a least squares 2D problem, like you've solved in homework problems. Of course, it's in a plane that's somewhere in 3D-- but pretty straightforward anyway. So that's interesting that the special cases where things changed in terms of the difficulty of solution also correspond to some things that are physically relevant. So OK, so let's see. Where to go next? I don't want to go onto relative orientation too quickly. I want this to sink in because this is like a triumph. We've got a closed-form solution. And sadly, so far, there isn't one for relative orientation. But we use a lot of the same tools-- the representation for rotation and so on. But I want to look at some other aspects. One of them is least squares. Everything we've done in this course is least squares. And if you talk to your more sophisticated friends, they'll say, oh, that's useless because it's not robust. And what does that mean? Well, it means that if you have nice noise, least squares is the way to go. By nice noise, we mean something that has a reasonable distribution, like Gaussian.
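The first of the two rotations, axis from the cross product and angle from atan2, might look like this. A sketch: the helper names are mine, and the quaternion convention is scalar-first.

```python
import numpy as np

def plane_alignment_quaternion(n1, n2):
    """Quaternion (w, x, y, z) rotating unit normal n1 onto unit normal n2.
    Axis is along n1 x n2; angle comes from atan2(sine, cosine)."""
    axis = np.cross(n1, n2)
    s = np.linalg.norm(axis)        # magnitude of cross product: sine
    c = np.dot(n1, n2)              # dot product: cosine
    theta = np.arctan2(s, c)
    if s < 1e-12:                   # planes already parallel: identity
        return np.array([1.0, 0.0, 0.0, 0.0])
    axis = axis / s
    return np.concatenate(([np.cos(theta / 2)], np.sin(theta / 2) * axis))

def rotate(q, v):
    """Rotate vector v by unit quaternion q, i.e. q v q*."""
    w, u = q[0], q[1:]
    return v + 2.0 * np.cross(u, np.cross(u, v) + w * v)
```

After this rotation the two planes coincide, and what remains is the in-plane 2D least squares fit.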
And the problem we face in practice sometimes is that a lot of the measurements follow a Gaussian distribution. But every now and then, you get something that's completely wacko. And so what do you do with that? So how do we deal with outliers? Now, you can formulate a new optimization problem that uses something other than square of error-- for example, absolute value of error. But it no longer has a closed-form solution. And so there's a trade-off. Obviously, we don't have a closed-form solution. It might be more work to compute. That's often not an issue. But it also makes it very hard to say anything in general about it. Like, how does it vary with the parameters, and so on. If the only way you get the answer is to crunch the numbers, then you don't have that pleasant experience of a general formula that says, oh, it's going to vary linearly with frequency or something like that. So that's one disadvantage of departing from this least squares method. So suppose that we have a 2D problem, and we're just doing a line fit. So there's our least squares line. But now, suppose that there's some data point that's just completely wacko. Well, that has a huge square of error. And so it's going to pull your solution dramatically in that direction. So that's the intuition. And it'd be slightly better if it was absolute value of error because, then, it wouldn't pull quite as hard. But it would be even better if you said, oh, that point's just crazy. Forget it. And there are different approaches to this. So there's a question of robustly dealing with outliers. Now, it actually goes further than that, that you can show that there are certain problems where, if the noise is Gaussian, then least squares gives you the best possible answer. It's not just hand-waving, but maximum likelihood, all those wonderful things. So how do we deal with this problem? Well, there are various approaches.
The one I'm going to talk about is called RANSAC, Random Sample Consensus. And Fischler and Bolles invented it at SRI, Stanford Research Institute, which is connected to Stanford in some way, although because they do classified work, it's not as intimately connected as CSAIL. It's more like Lincoln Labs. Anyway, they had problems like satellite navigation and making measurements of, I don't know, star positions, gyroscopes. And occasionally, they'd have some massive error, which would completely mess up their result, even though, most of the time, the errors were well behaved. They followed some nice, smooth distribution with not too much spread. So what's the idea? Step one, random sample. So if I take a random sample of these points and there are not too many outliers, there's a good chance that I won't hit the outlier, and I will get a good result. And how do I know that I picked the random sample and how large? So there are several questions. How large a random sample? Well, their recommendation was the minimum needed to fix the transformation. So in our case of absolute orientation, there would be three. Now, we know if we get 1,000 correspondences, we're going to get much better results than with three. It goes as 1 over the square root of n. So that's limiting the accuracy, so some other people recommend that you use more. And then what do you do? So now, you take the random sample, and you do a best fit, typically least squares because it's efficient and easy to understand. So OK, and then check how much of the data fits. And so what you do is if that's your fit, then you add some sort of band. And you count how many of them fall inside. And notice that this involves some kind of threshold. So in our case here, let's say that all but one point fall in that band, and we say we're done. If not, we take another random sample, and we do this operation again. And there are all sorts of interesting issues.
One of them is you have to know what the ratio of inliers to outliers is. So you have to have some model of your measurement process because that's going to control how you set the threshold. So you need to know something about the noise to set that threshold. And then you need to know how many. So there are two thresholds. There's a threshold here of the band that determines whether these are inliers or not. And then there's the second threshold here that says-- oops, scared you with some equations. So we got arbitrary numbers in there, which assume that you know something about the data. And this works pretty well. I mean, I guess, you have to have some limit on not doing this an infinite number of times. So you probably have another exit clause which says, well, if you've tried 100 random samples, then let's stop; probably, your assumptions about the model are wrong, and the thresholds aren't right. So you need to give up. And there are variations of this. So that's a good point. I didn't make that point yet. But we're talking specifically about absolute orientation. But this trick is to be applied, if necessary, if there are outliers, to any of the other least squares methods. So the least squares approach is very appealing because you get a closed-form solution many times, and you get a measure of how large the error is-- standard deviation-- and all these good things. But it's always subject to the problem of bad data. And so this is definitely a method to consider in the presence of real data, which sometimes has outliers. Yes, OK. And there are different variations of this. In fact, the name of it, consensus, kind of suggests a different approach, more like a Hough transform. So before I read the paper, what I thought they were doing was the following. You take a random subset. You do a fit, and that gives you some answer, like slope and intercept of a straight line, say, in this case. And you do it again. Do it a certain number of times.
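A minimal sketch of the RANSAC loop for the 2D line-fit example. The sample size of 2 is the minimum needed to fix a line; the band width, trial limit, and acceptance fraction are the thresholds discussed above, with hypothetical values.

```python
import random

def ransac_line(points, n_trials=100, inlier_band=0.1, accept_frac=0.8):
    """RANSAC for fitting y = a*x + b in the presence of outliers.
    points: list of (x, y) pairs. Returns ((a, b), inlier_count)."""
    best, best_count = None, 0
    for _ in range(n_trials):
        # Step one: random sample, minimal size to fix the transformation.
        (x1, y1), (x2, y2) = random.sample(points, 2)
        if x1 == x2:
            continue                      # degenerate sample; try again
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        # Count how much of the data falls inside the band (first threshold).
        count = sum(1 for x, y in points if abs(y - (a * x + b)) <= inlier_band)
        if count > best_count:
            best, best_count = (a, b), count
        # Second threshold: enough of the data agrees, so we're done.
        if best_count >= accept_frac * len(points):
            break
    return best, best_count
```

In practice one would refit by least squares over the winning inlier set, and give up after the trial limit if no consensus is ever reached.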
And then you plot them in some parameter space. And some of them will cluster tightly, and some of them that included the outlier will be way off. And so you can imagine a Hough transform method where you have an accumulator array in this parameter space. And whenever you get an answer, you increment that cell in the parameter space. And after you've done it often enough, you look for the cell that has the most hits. And that's somehow hybridizing Fischler and Bolles' RANSAC method with a Hough transform method. And all of these are kind of ad hoc, but they are very useful in practice. OK, well, one of the things we're going to do a little later is to find representations for objects, particularly objects that are not simple polyhedra, but more complicated, and, particularly, objects that are near convex that might not have a lot of holes. And that representation is very useful for detecting things and finding their orientation. And we run into the same orientation problems as we have over here. And often, there's not a closed-form solution. So one approach is to sample the space of rotations. And how do you do this? So since we're talking about-- so let me first look at sampling a sphere in 3D. So one way we can do it is to use some discretization of latitude and longitude. You know, I just find there's some people who-- I forget what the terminology is. But where these lines cross, that's like a point of convergence. And if you go there, something magical happens to you, particularly if you take along crystals. And sometimes, these are hard-to-access places, like in the middle of a military reservation. Anyway, we sample them. And what's wrong with that? It gives us a way of sampling the whole space. But obviously, it's not a uniform sampling. Up here near the poles, we're sampling points that are very close together. 
And so that means it's inefficient, that if we had used some of those points near the equator, we could have done a better job, more even job of sampling. So there are lots of interesting questions that come out of this. One of them is, well, can we think of a better coordinate system so that, if you sample uniformly in those two coordinates, you will get a nice spacing on the sphere? Well, good luck with that. You can show that it's not possible. So what else can we do? Well, we could generate random theta and phi using some distribution. So let's see, latitude goes from minus pi over 2 to pi over 2 and longitude from minus pi to pi. So we could divide this up. We could have a random number generator that goes between minus pi and plus pi and one that goes between minus pi over 2 and plus pi over 2, uniform. And of course, we'd have exactly the same problem because we're going to sample the polar region more strongly than the other region. So that's not very good. So how to solve that? Well, the trouble is, we got this curved surface. If we had a Cartesian world, it would be easy to sample. If I have a cube, I can just have a uniform distribution in x, a uniform distribution in y, uniform distribution in z. And I call a random number generator three times. I have a point in that space. And if I do this a lot, I don't expect there to be aggregation, systematic clumping, as we have in the case of the sphere. So how to go from this to that? Well, one thing to do is to inscribe a sphere in the cube and then somehow map the cube onto the sphere. So one way to do that would be to imagine the origin at the very center of this thing. And for any point in the cube, we draw a line, and then we find where it intersects the sphere. We sample a point in the cube-- and that point could be inside the sphere. So if it's inside the sphere, we draw that line, and we project it out to the sphere in this case.
So to be clear, again, we just have uniform, like most languages give you the ability to generate uniform random numbers between 0 and 1. You just map one onto the interval in x, another one onto the interval in y, another one onto the interval in z. There's your point in 3D. And to reduce it to the surface of a sphere, well, you just normalize. So you have a three vector. You just take the unit vector version of that, and you're done. So does that give me a uniform distribution on the surface of the sphere? No, OK, so we have some head shaking. And that's because, well, the cube has these corners. And so there will be a higher density of points in the directions of the corners than there will be where the sphere is tangent to the cube. It's better, though. This one's only off by-- I forget-- a factor of 4 over pi, I think, or pi over 3-- no-- 4 over pi. So this isn't too bad. So how can you do better? How can you take this idea and fix it so that you don't have the corner problem? AUDIENCE: Tessellate the shapes. BERTHOLD HORN: Tessellate the shapes-- yeah, we could tessellate the surface of the sphere using regular solids, the magic things that Plato organized his society about. That's certainly a good approach. Is there some way of discarding some of these points that we are generating here? OK, so counteracting this aggregation in the corners, we could introduce some weight. And for example, if we could give the points when projected a weight, we could just make it inversely proportional to the radius. So if we're far out, then-- so that's a good idea. Another idea is to say, well, we could just throw away all the points that are outside the sphere. And then, we have a random distribution of points in a sphere. So we, again, we generate points in the cube. But then, we check whether their radius is bigger than 1. And if it's bigger than 1, we chuck it out.
And it has a disadvantage that you're not generating points at a fixed rate, that it costs you more, sometimes, because you have to go back and regenerate it, and maybe regenerate it again. So there's a disadvantage. But what happens then? Well, then we have a uniform sampling of the inside of the sphere. And that's almost good enough. Now, we just need to project all of those points onto the surface of the sphere. And if we want to avoid a nasty numerical problem, we might want to also throw away all points that are close to the origin because then, when normalizing the vector to make it a unit vector, you might be dividing by some small number, which is going to introduce some error and biases. So you generate points uniform in the cube. You throw away all points above some certain radius. And you throw away all points that are below a small radius-- I've drawn it a little big there-- and then normalize them. And so there's your point sampling of the surface of the sphere. So we may not care about the sphere. But I can't draw this in 4D. But it works exactly the same way, right? If we want to have a nice, uniform sampling of 4D space, one way-- a random sampling-- one way of doing it is exactly this. We do a four-dimensional equivalent of the cube. So we generate four random numbers for x, y, z, and, say, w. And then, we check whether our sum of squares is larger than 1. If so, try again. If not, then we turn them into a unit Quaternion. And that gives us a very simple way of getting a uniform sampling of that space. Now, there was mention of regular polyhedra. So that's an important thing to think about. So what are they? Well, there are tetrahedra, hexahedra, octahedra, dodecahedra, and icosahedra. So these are the so-called regular solids, regular polyhedra. That means that all of their faces are the same. And those aren't the only objects of interest, but let's start with those.
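The 4D rejection-sampling recipe can be written in a few lines. A sketch: the inner-radius cutoff that avoids dividing by a tiny number is a hypothetical value of my choosing.

```python
import math
import random

def random_unit_quaternion(rng, r_min=0.1):
    """Uniform random rotation by rejection sampling in the 4-cube.
    Draw four coordinates uniform in [-1, 1]; keep the point only if it
    lies inside the unit ball (and outside a small inner radius, so we
    never normalize by a tiny number); then project onto the unit sphere
    in 4D, giving a unit quaternion."""
    while True:
        q = [rng.uniform(-1.0, 1.0) for _ in range(4)]
        n = math.sqrt(sum(c * c for c in q))
        if r_min < n <= 1.0:
            return [c / n for c in q]

rng = random.Random()
q = random_unit_quaternion(rng)   # one uniformly distributed rotation
```

The same loop with three coordinates instead of four gives a uniform sampling of directions on the ordinary sphere.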
So if we want to sample the surface of the sphere, one other way is to project out-- well, these obviously aren't spheres. But you can imagine them placed at the origin and then projected out onto the sphere. And we get great circles wherever there's an edge on one of these objects. And so that's a way of dividing up the sphere in a perfectly regular way. And what's the problem? Well, the only thing is that there aren't too many facets. So tetrahedra, 4, 6, 8, 12, 20. So we're not dividing the sphere up very finely. And as Plato discovered, that's it. There aren't anymore. So that means we have to subdivide. And you might say, well, wait. What happened to the soccer ball? How many faces on a soccer ball? Ugh, how can people play games and not even know how many-- anyway, 32. So it's a mix of the dodecahedron and icosahedron. And it's not in this group because it's semi-regular. So I guess, Plato gets credit for these. And Archimedes gets credit for the semi-regular ones. And there's a whole bunch of, I don't know, 14-- or it depends on how you count. So these objects are all their own mirror images. But when you get to the semi-regular objects, there's one where the mirror image is actually different. In other words, you can't rotate it to bring it into alignment with its mirror image. And so if you count that, it's 14. If you don't count that, it's 13. And the soccer ball is somewhere in here, the truncated icosahedron. So what's the difference? Well, the semi-regular ones still have facets that are regular polygons-- so triangles, squares, pentagons, hexagons, et cetera. But you're now allowed to have more than one type. So over here, once you decided on triangles for the facets-- they all have to be triangles-- over here, you can have a mix. So the soccer ball is a mix of hexagons and five-sided figures. So that gives you a larger variety. But think about the fact that they all have to have the same edge length.
So for example, if you had an object where you're mixing triangles and squares, since they have matching edge lengths, the facet area is different. And there's obviously some formula for that, the area of a regular polygon in terms of the number of edges and the edge length. OK, so that means that over here, we had equal cells. And so it made it very easy to have a uniform sampling. Over there, it's a little bit more tricky because, now, we've got to deal with the fact that the facets are not equal in area. The good part is that there are more of them. So that's on the sphere in 3D. We're talking about a sphere in 4D. And so Plato and Archimedes didn't think too much about those. So we can't resort to them. But those figures come into play again because in 4D, we're talking about rotations. And we're looking for regular patterns of rotations in this space. And so one way we can think about that is in terms of rotations of these objects. So let's just do one and see how that goes. So the aim here is, again, to find some methods for uniformly sampling the space of rotations so that if, say, we do a search-- so we've got this object in the sensor, and we have a model in the library, and we're trying to figure out, is there some way of rotating this to bring it into alignment with that? And that could be part of recognition, that we have multiple objects in the library. Which of them can we match? But of course, we can't match them directly. We first have to get them rotated. And then, how do we pick the rotations? Well, to be efficient. We don't want to be sampling parts of the rotation space more densely than other parts. That'd be the equivalent of spending all the time searching around the poles. And so we're trying to find a uniform way of sampling this slightly hard to imagine space. But let's start simply with the hexahedron and think about its rotations. And well, first, there's the identity. So what do I mean by rotations?
I mean by rotations that you take the object, then you rotate it, and then the faces line up with the faces, and the edges line up with the edges, and the vertices line up with the vertices. OK, so the identity-- so in terms of Quaternions, that looks like that. It has a 0 vector part and a scalar part of 1. Then we can think about a rotation, let's say, about x through, let's say, pi. And so that's going to be, let's see, cosine pi over 2 is 0. Sine pi over 2 is 1. So that's that Quaternion. So that just turns it 180 degrees about the x-axis. And naturally, I can do the same thing about y. So I've already got four distinct rotations that line that object up with itself. Well, I don't have to rotate 180 degrees. Oh, well, how about minus 180 degrees? Well, of course, that's the same, right? So that doesn't add anything. But I could think about rotating pi over 2. So let's try that. So pi over 2, that's going to be cosine of pi over 4, which is 1 over square root of 2. So I get that. And now, here, I could talk about minus pi over 2 because these two aren't the same. So this one is not the negative of that because the real part hasn't changed sign, whereas with these guys, if I flip the sign of x, I get a different Quaternion, but remember, minus q is the same rotation as plus q. So I flip the sign of that to get a different Quaternion, but it's the same rotation. OK, so I can do this for x, y, and then I can do it for z. So I get six of those. I've got four there. So we have 10 rotations, 10 samples of the space that we're interested in. And so the question is, are there more? And there are two different ways to proceed. The one is geometrically. Look at the diagram. How many other ways can I rotate it? And the other one is to just think of this as an exercise in Quaternion multiplication. So let's just do one of those. So for example, I could pull out of that table. 
Now, I can try and generate the whole group just by multiplication, take what I've got, take pairs. Let's take-- because it's not very interesting taking the identity, but so let's take something else. OK, according to our rules for multiplication, I get 0 minus x dot y, which is 0. And then we get 0 times x plus 0 times y plus x cross y. And so that's 0 and z hat. Well, that's already in the table, right? So that doesn't add to the table. So let's take something else. Let's take some of these. 1 over the square root of 2 times (1, x hat), and 1 over the square root of 2 times (1, y hat). OK, that's going to be more interesting. 1 times 1 is 1, minus x dot y is 0. And the 1 over square root of 2 factors give us 1/2. And then the vector part is 1 times y plus 1 times x plus x cross y. So that's 1/2 times (1, x hat plus y hat plus z hat). So that's a new one. So the axis of rotation is 1, 1, 1. And if you want to make it a unit vector, it's 1 over the square root of 3. And that's not in this table. And the angle is, cosine of theta over 2 is 1/2, so let's see. That means theta over 2 is pi over 3. So theta is 2 pi over 3. So that's an interesting new rotation. If you look at the cube, that corresponds to one of its corners. So we're taking the origin and connecting it to a corner, so that's the axis. And we're rotating by 120 degrees. And you can see that that's a rotation of the cube that does bring us back into alignment with ourselves. If we rotate 120 degrees about that axis, everything falls back into place. And so we're just about out of time. But we can proceed two ways. The one is look at that example and some others. And we get a total of 24 rotations. Or if we're more mathematically inclined, we can go this way. Or it's fun to write a little program that just implements Quaternion multiplication and then builds this table by taking pair-wise products and seeing whether you get something new. And eventually, you'll run out, and you have 24 of them at that point.
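The little program suggested here, implement quaternion multiplication and take pairwise products until nothing new appears, might look like this. A sketch: the rounding tolerance and the sign canonicalization used to identify q with minus q are my own choices.

```python
import math

def qmul(p, q):
    """Quaternion product, scalar-first convention (w, x, y, z)."""
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return (pw*qw - px*qx - py*qy - pz*qz,
            pw*qx + px*qw + py*qz - pz*qy,
            pw*qy - px*qz + py*qw + pz*qx,
            pw*qz + px*qy - py*qx + pz*qw)

def canonical(q):
    """Dedup key: round, then fix the sign, since -q is the same rotation."""
    r = tuple(round(c, 6) + 0.0 for c in q)   # +0.0 turns -0.0 into 0.0
    for c in r:
        if c != 0.0:
            if c < 0.0:
                r = tuple(-x + 0.0 for x in r)
            break
    return r

def closure(generators):
    """Take pairwise products until no new rotation appears."""
    reps = {canonical(g): g for g in generators}
    reps[canonical((1.0, 0.0, 0.0, 0.0))] = (1.0, 0.0, 0.0, 0.0)
    while True:
        new = {}
        for a in reps.values():
            for b in reps.values():
                k = canonical(qmul(a, b))
                if k not in reps and k not in new:
                    new[k] = qmul(a, b)
        if not new:
            return reps
        reps.update(new)

# Generators: 90-degree rotations about the x- and y-axes.
s = math.sqrt(0.5)
cube_rotations = closure([(s, s, 0.0, 0.0), (s, 0.0, s, 0.0)])
```

Running out of new products leaves exactly the 24 rotations of the cube, matching the count in the lecture.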
OK, next time, we'll start talking about relative orientation, which is more relevant to binocular vision.
MIT_6801_Machine_Vision_Fall_2020
Lecture_15_Alignment_PatMax_Distance_Field_Filtering_and_SubSampling_US_7065262.txt
BERTHOLD HORN: So we've talked about edge detection. And we've talked about finding objects in two dimensional images. And, in particular, we discussed the PatQuick patent, which provides an efficient way of doing that by building a model based on some training image and then using probes to collect evidence about a match and building a score and then looking for extrema in that multidimensional score surface, where the multiple dimensions were the pose-- translation, rotation, scaling, et cetera. And it was a great improvement over what came before, which was blob analysis, binary template matching, normalized correlation, Hough transform. Well, now we're going to talk about another patent in that category of finding things in images. And also, here the emphasis is more on inspection. That is-- if you read the abstract, it's all about inspection. But, actually, the patent also deals with position and orientation. And what's different from the previous one is that the previous one quantized the pose space in order to do a complete search. And this one assumes you've already got a rough idea of where things are. And it's just going to improve them. So there's an incremental adjustment that will give you a very accurate position. And you'll be happy to know we're not going to go through this in as much detail, particularly because quite a bit of it is based on what we've already done. So there it is. This one's called PatMax. And Bill Silver says that was motivated by two ideas. The one was that we're maximizing some kind of energy. It's an iterative approach. It's a least squares approach in a way. And each step, which is called an attraction step in the patent, improves the fit. And then you stop when the fit is good enough. Or in the preferred embodiment, you stop after four steps. So that's the idea here. And the other motivation for using that Max was Maxwell. So he knew about electromagnetics.
And his intuition for this approach was based on forces between magnets-- well, electrostatic components and, in particular, dipoles. And, well, it turns out that intuition is completely wrong because those do not have the properties he wants in here. And, in fact, a much better analogy would be springs-- a mechanical analog. So the idea is if you're trying to fit two things together and you can identify corresponding parts, imagine connecting them with a spring, that they'll attract each other. And what the system will do after a while is minimize the energy. It'll rotate and translate and do whatever so that the springs are as least extended from their rest position. And we'll explore that a little bit further. But in the patent, it's all about fields and stuff like that. So the usual stuff-- application, dates, and prior patents, and the inventors, and the assignee, and references. And you'll see here the first reference is Hough. That's the one we talked about last time. And then there's a whole abstract, which is mostly about inspection. But, really, the patent is in large part about alignment. And here are more patents and other publications. And if you read through them, you'll see that some of them are from the old AI Lab. And here's figure one. So figure one looks a little bit like a figure we've seen before. There's a training image. And there's some training process, which generates a pattern. We called that a model before. So they purposefully changed the terminology to avoid confusion. So what we call probes before are now called dipoles. But they are the same thing. They're a position and a direction and a weight. So training in the previous patent meant that we created a model. And the model consists of probes. And the probes are created by edge detection. So that's training. And similarly, here, training here is going to be an edge detection process that produces edge dipoles and a field, a two-dimensional vector field. And that's training.
Then you have a runtime image. And at runtime, you're performing an attraction process that iteratively finds a good pose, assuming that you have a starting pose. So this one assumes that you already have done something to get an approximation. For example, you could run PatQuick. And why iterative? Well, because it's formulated as a least squares problem. And if we're lucky, the least squares problem has a closed-form solution. And in this course, we try and set things up that way. But in the real world, that's typically not the case. And then you solve it iteratively. And so you stop iterations when you think you have a good enough answer. So you have a starting pose and you have the final pose. There's also a thing called a client map. So little detail-- if you want to work in a Cartesian coordinate system, well, your pixel positions may not be on a square grid. So the client map basically maps whatever you've got onto a nice square grid. And that's important because a lot of cameras do not have equal spacing in x and y. And then there's an RMS error, which you can use to decide whether it's a good fit or not. And since this is about inspection, there are two more measures which you can use to decide whether or not the object is in good shape or not. And we get out evaluated things, which are used in inspection in this diagram. OK, let's go on to the next one. So when I first saw this figure I thought, oh, OK, they forgot to actually put anything into this figure. But that's to illustrate the client map. So suppose your camera has pixels that are rectangular. Then you need to map from that to a square pixel array. So just minor detail. OK, so this is the training. We have a training image. We do the feature detection, edge detection. There are a bunch of parameters that control that, such as the resolution you're working at, how far are you going to downsample? 
And it produces a field dipole list, which are basically the probes that we talked about in the other patent-- position, direction, and a weight, and maybe more. But more importantly, it produces a field. So the idea is going to be that when you put down the runtime image, it's going to have features that you would like to align with features of the training image. And they are going to be attracted by features of the training image. So you're going to have this vector field, which basically tells you how hard to pull to try and get alignment. And this is going to happen for all of the features. So there are going to be a lot of competing pulls, which hopefully overall produce some coherent result. If your image is over to the right, those field values will be negative and will pull things to the left-- some kind of translation. If your image is rotated, then there are going to be field values that have directions that are different in different quadrants. And overall, will have a torque that will improve the alignment. So that's the purpose of the field. And that's going to be something that's on the pixel grid. That is going to be-- that's new. That wasn't in the previous one. And that also brings up another point, which is that the pose in the previous patent was a mapping from the model to the image because the model had the probes in it and you plonk the probes down on the runtime image. Here, we're going to go the other way around. We're going to run feature detection on the runtime image and map it back to the field. Why? Well, because mapping a discrete number of things is much cheaper than transforming a whole image. So in the previous case, we didn't want to take the runtime image-- and you can rotate it and translate it and make a new image. But, of course, that's a very expensive process. So instead, we took the discrete thing, the model, and we mapped it on top of that. Here, we're going the other way. 
We're having discrete result edges from the runtime image. And we're going to map it on top of this field. Now we could, of course, rotate, translate, scale the field. But again, that would be very expensive. So the pose here is the other way around but the same idea-- mostly translation, rotation, and scaling, although they allow for other things as well. OK, the part that's new and interesting is how the field is generated. And we'll go into that in some detail. There are many steps. And, essentially, the idea is to produce something that will draw you towards the alignment where things in the runtime image match up with things in the training image. This should look very familiar. That's low level. That's our edge detection stuff with some small changes-- left out the [INAUDIBLE] thing here. Just assume that you have some Cartesian to polar conversion-- and then added parameters that control the process, because in different steps of the computation, you might want to change things, like how much you subsample the image, what kind of low-pass filter you use, what method you use for peak detection and interpolation. But other than that, that's just the same old story. So in the training image, we compute these dipoles-- field dipoles they're called. And as I said, they're just really edge fragments. And this is the data structure for them. And there are flags that tell you whether you're near a corner of the object. There are flags where the contrast is positive or negative. And there's a gradient direction. And, optionally, there's a link to the next field dipole. So this has to do with the initialization. So what you do is you have this array, which is going to contain the vector field. And the vectors don't cover the whole array. And the reason is that when you get very far away from an edge, it's very unlikely that it really has anything to do with-- so here you've got an edge in the runtime image. And there's a edge in the training image that's far away. 
Your belief that those two belong together is reduced when they're far away. So at some point, you just cut off and say, forget it. And so there's a part of the vector field which is going to contain these special markers saying terra incognita-- we don't know what's going on here. So don't have that contribute as evidence to the score. And so you start off, basically, by wiping out the whole array and setting that special value everywhere. So if you don't get to impose a value, that's because you were too far away from an edge. And a lot of this low level stuff is the same. This, you may remember, is all about linking them up into chains and then removing the weak chains-- short weak chains. And so here we have the starting values imposed on that field. So we've now taken the pixels in the field that directly correspond to edge points that we know about, field dipoles. And we've given them a value. And the value corresponds to the distance from the edge and the direction. So it's not just an array of numbers. There's a direction to them and a length. So each of them is a little two vector. And so each of these squares that's been filled in has associated with it one of these field values that tells you how far away you are from the edge. So that's initialization. And now you have to fill in the rest of it. So for squares that are anywhere near an edge, like within four pixels or whatever the threshold is that you pick, you have to somehow figure out how far away they are from the edge. This is very similar to something that's used often in machine vision called the distance map, which is a little simpler. So a distance map is just an array that tells you for every pixel how far away it is from an edge. This is different because you actually give not just the distance but the direction. But the way you fill it in is very similar. So it's an iterative process. You seed it. You put in the actual edge points. 
And then you go one pixel away-- all the pixels that touch a pixel that's already been filled in. And the question then is, well, what value do you put there? Now, if we were using Manhattan distance, where it's just the sum of x and y distances, then you could do this accurately. You can always accurately compute incrementally how far something is away from an edge. But since we're using Euclidean distance, which makes more sense here, that's actually not trivial. And since in addition, we're using direction, that makes it more complicated also. So here are examples that explain how you compute it. So up there, there's an edge, 906. And we've already filled in 901-- that square. We know the vector that goes over to 906. And now we're looking at the block 900 and saying, OK, what do we fill in here? And we should fill in something because we're touching a pixel that has been filled in-- namely 902. And so we can start off with the value of 902 and adjust it. And you can see there's a bit of geometry there. We can actually figure out what that distance, 912, is if we know what the distance 904 is and that angle and so on. I'll spare you the trigonometry. But it's pretty straightforward. Same over there-- we have that pixel now touching on a corner. But that's still considered connected. So one of them has been filled in, namely the pixel 932 has a vector, 934, that goes to the edge. And now we're filling in the pixel 930 by extending the distance that we have from the other pixel and so on. And so that's the process. And we iterate a number of times until we decide that now we're so far from the edge that it's probably not a good idea to consider these edges to match. So we're sort of building-- one way of visualizing it, if we forget about the fact that these are vectors, is a kind of groove along each edge. So it goes up as you go away from the edge. That's the distance map.
So the distance map is this terrain, which at any place tells you how far you are from the edge. And now when you plunk down an edge from the runtime image, you want it to, by gravity, get pushed down, to run down the slope. Now, with one edge, of course, it'll just run through the bottom. But with lots of edges, they're all going to have their own local influence of where they should go. And hopefully, it'll be somewhat connected so that you end up with the whole system settling into a lower energy state. And this is just slightly more sophisticated because we actually have a direction not just a height. And then it gets more interesting when we get to places where there's a collision, where we have information from more than one neighboring pixel. And if they are similar enough, then you can take some kind of weighted average, try and improve the result. It's always noisy. So if you get more than one answer, you can improve things by combining them. But when the angles get too large, then you have to say, well, that's not-- there's a problem. We're at a corner. So here's an example of the first step after seeding. So we've seeded it with the edges that are in there. And now we've extended it by going one pixel away from those seeds. And each of those vectors is the force. And it tells you how far you are from the edge and the direction to the edge. And in the process, we'll end up looking at points on the edge that are not the original points from edge detection, just as in the other case, we started off with points on a grid that were produced by that edge detection step. But then we interpolated our own points because we wanted a certain number of points per unit distance. They're not necessarily based on the pixel spacing. Similarly here, the tips of these arrows don't point at the original edge pixels. They point somewhere else. And we record that. Now, for some purposes, the vector field is the force, is all you use. 
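To make the propagation step concrete, here is a minimal sketch, in my own code rather than the patent's, of growing such a vector field outward from seeded edge points, one ring of pixels per pass. Each filled cell keeps the shortest vector found so far pointing back to the edge, and anything beyond the cutoff stays marked unknown (the "terra incognita" value). The function and variable names are assumptions for illustration.

```python
import math

def vector_field(shape, edge_points, max_dist=4):
    """Grow a vector field outward from seed edge points.

    Each filled cell holds the 2-vector pointing from that cell to the
    (approximately) nearest edge point; cells farther than max_dist stay
    None, the "terra incognita" marker.
    """
    h, w = shape
    field = [[None] * w for _ in range(h)]
    frontier = []
    for (ex, ey) in edge_points:
        field[ey][ex] = (0.0, 0.0)      # on the edge itself: zero pull
        frontier.append((ex, ey))
    for _ in range(max_dist):           # one ring of pixels per pass
        new_frontier = []
        for (x, y) in frontier:
            vx, vy = field[y][x]
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    nx, ny = x + dx, y + dy
                    if not (0 <= nx < w and 0 <= ny < h):
                        continue
                    # vector from (nx, ny) to the edge, going via this neighbor
                    cand = (vx - dx, vy - dy)
                    cur = field[ny][nx]
                    if cur is None:
                        field[ny][nx] = cand
                        new_frontier.append((nx, ny))
                    elif math.hypot(*cand) < math.hypot(*cur):
                        field[ny][nx] = cand   # shorter vector wins
        frontier = new_frontier
    return field
```

This is just the simple extend-from-a-neighbor rule; the patent's version does the extra geometry at corners and can average competing candidates, which is omitted here.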
But they also optionally allow you to record more information, such as what is the closest field dipole. So then you have options on matching. Now, here there are no conflicts yet. And now we've taken the next step. We've moved outwards-- again, one pixel from where we were. And we filled in another bunch of pixels now with longer vectors, larger force. Because we are further away from the destination, there should be a larger pull. It's a little bit like a spring. We're pulling more when the match is over a bigger gap. And the x marks the first place where there's a conflict, because we could have extended this vector here to fill in this pixel. Or we could have extended this one. And they obviously have very different ideas about where the edge is. And so we mark that as a corner position, which will then be not contributing to the process later. So you don't want to just blindly use this map. You want additional information. And that additional information could be a contrast direction. So that's the simplest one. Or you might actually look at the directions of the vectors. How closely does the runtime gradient match up with the direction of the vector in the field? Or how closely does the direction of the gradient in the runtime image match the nearest field dipole gradient, because when you get over here, that direction may be different from this one. So they leave open all of these options. And the process is going to be the same. You're trying to minimize the energy in the system. And you can see how this is better modeled with a mechanical system where you have springs. So suppose you know that there's a runtime dipole here that matches this model dipole. Then you can think of there as being a spring between the two with zero rest length. So that when you pull it out, it'll want to shorten. And so the minimum energy state is the solution. The one thing to keep in mind is something we've mentioned many, many times.
With an edge, often, we know accurately how things work in one direction and not the other. So over here when we're talking about a match with an edge, we can't really say that we are matching a particular point on the edge. We just know that it's matching that edge. And so the idea of the spring isn't completely accurate either. What we'd really want is a slightly more sophisticated model that has a little carriage that can run along the edge. So there's a spring, yes. But we don't really care where the other end of it goes. So we could have a little mechanical device that can slide along the edge. And so that mechanical analog is basically the actual algorithm, not dipoles and electrostatic fields and stuff like that, even though that's the language used in the patent. OK, so we talked about how to get the training image. Now we use it. We have the runtime image. We do the feature detection on the runtime image. That produces an image dipole list. We plunk it down on top of the field. And that creates many, many little forces-- one for every feature that we detected. And they are going to adjust the pose. Now, in the simplest case, if we only have translation, it's easy to visualize this. If the runtime image is off to the right of the model image, it's going to get pulled back to the left because all of the springs are extended to the right. So that's very easy to visualize. If it's rotated, it's a little bit harder to visualize. But again, you're stretching the springs out now tangent to a circle rather than radially. And they're all pulling in the same direction, either clockwise or anti-clockwise. And so the equilibrium position is when you've let that mechanical system go and adjust itself. We can think of size in a similar way, where if the size is wrong-- suppose that the runtime image is bigger than the model image. Well, all of the springs are going to be stretched outwards and by an amount that increases with radius.
And, again, that system will adjust itself by scaling the transformation until those springs are relaxed. And I guess in this patent, the emphasis is on what do you do then. And you compute clutter and coverage, which are measures of how good the runtime image matches the model and how good the model matches the runtime image. And there are various ways of evaluating the result. Ideally, the coverage will be 100% and the clutter will be 0. OK, this is more about how the image dipole gets attracted to the pattern boundary. So, overall, we're setting up a large least square system. So we're saying that each runtime dipole has a certain force exerted on it in a certain direction. And we are going to move all of them in a systematic way to try and reduce the tension, to reduce the energy in the system. And, typically, the movement allowed will be translation, rotation, and scaling in this case. So if you write it all out, it's a huge least square system. And there's no closed form solution. But there's a natural way of computing it which involves a bunch of accumulators. So let's see if we get there. This goes on and on for a long time. So this is the accumulator array, which is basically the upper triangle of a symmetric matrix. So with a symmetric matrix, we only need to keep the diagonal and everything to one side. And these are sums of-- so W is the weight. We already talked about assigning weights based on various properties that might indicate whether a match is particularly good or not. And then positions in the image-- and so these are all what we call moments, things like the integral of something times x to the i times y to the j. So you can see that we need quite a bunch of them. I forget. I think the matrix is 6 by 6. It depends on how many degrees of freedom you allow. If you allow translation, rotation, and scaling, it's, I guess, 4. But if you allow for all 6 of the general affine linear transformation, then it'll be 6. And so you run through the image. 
And, of course, you can actually do it in parallel, if you're so inclined, because they don't interact. You're just collecting evidence everywhere and adding it up. And it produces an overall force. And then you adjust. So, again, as I mentioned, I'm not going to go into as much detail here as in the other patent. So these are simple cases. So the top one here is translation only. And this one is translation and rotation. Don't be scared by tensor. As far as the patent is concerned, the tensor is just a multidimensional array, including an array with one dimension or two dimensions and so on. So it builds up to allow more and more degrees of freedom and more general results. And that's not the end of it, because it's assuming-- it's locally linearized about the current operating point. So you don't get the final solution to the least squares problem. But you get a large improvement. And in the preferred embodiment, after four steps, it's good enough. But of course, you can use any criteria you like for termination. Let's see if there are any others here that-- here's an interesting detail. There's a doubly linked list. So the field dipoles make it easy to move in either direction along the edge because they are represented in the doubly linked list. So here's multiscale. So for speed, it's good to work at low resolution and get an-- so you have a starting pose. And then you get a low resolution pose at high speed using a low resolution trained pattern. And then you use that as a starting pose for the high resolution stage. So they only show two stages of this. You could do more. And also in this case, you don't need as high-quality a starting pose as you do if you just have one stage of this process. Lots of flow charts. Let's see. Was there anything in here I wanted to point out? This is fun to read because you're thinking of an image of, I don't know, a printed circuit board or something. "Digital images are formed by many devices and used for many practical purposes.
Devices include TV cameras operating on visible or infrared light, line scan sensors, flying spot scanners, electron microscope, X-ray devices, including CT scanners, magnetic resonance images, and others are found in industrial automation, medical diagnosis, satellite imaging," blah, blah, blah. So that's just the lawyer trying to generalize this as much as possible. So if afterwards you say, oh, but I didn't use it for looking at a conveyor belt, they can say, well, it doesn't say you need to restrict it to that. And it explains the problems with previous methods and why this is the best thing since sliced bread. And then it gives all the details. And then finally you get to the claims. Let's just take a quick look of the claims. Oh, here we go. What is claimed is geometric pattern matching method for refining an estimate of a true pose of an object in a runtime image, the method comprising-- and then it has these parts-- generating a low-resolution model pattern and so on. It's interesting how this first claim doesn't actually say much at all about the method. So it doesn't give details. So it's very broad. And it's likely to be knocked down if there ever was a dispute because receiving a runtime image-- well, just about anything is going to receive a runtime image. Using a low-resolution model pattern, blah, blah, blah-- well, multiscale. Everyone does multiscale-- and so on. So this is very, very general, trying to cover everything. And the downside of it is that if someone ever challenges it, it's probably going to be covered by prior art. And that's why you have all the dependent claims. So claim one is very general. If you're lucky, it'll cover everything. But there's a good chance of being knocked down. So then you have claim two, which is dependent on claim one, and also includes producing a low-resolution error value, producing a low-resolution aggregate clutter value, and producing a low-resolution aggregate coverage value. 
So those are all the values used in inspection. And then three is dependent on one wherein the low-resolution error value is a low-resolution root-mean-square error value. So you could use sum of absolute values instead. But this one specifically says root-mean-square. And then four depends on two and so on. And this one has an inordinate number of claims. It goes all the way down to 55. But that's probably all we want to know about that, unlike PatQuick-- so PatQuick searched a complete pose space, admittedly at low resolution. So it didn't need to have a first guess. This one does require a first guess. And it has a capture range. That is, there's a region in pose space where, if you plunk down the initial value somewhere in that region, it will end up giving you the result you want. Then we mentioned that the pose here is the other way around. And that's for computational reasons. We don't want to transform a whole array of either gray values, edge values, or whatever, or fields. So it's the other way around from PatQuick. And if you read it, there's a repeated emphasis on how they're trying to avoid thresholding. They're trying to avoid quantization at the pixel level. So everything is subpixel. And I think part of that is to illustrate the quality of the result you can expect and partly to avoid prior art because the prior art was all pretty much pixel based and wasn't accurate to subpixel accuracy. Then there's a physical analog, which, as I mentioned, calling these things dipoles is misguided. I mean, do you know what the force law is between two dipoles? I mean, it's a mess because it depends on not just the separation between them but the angle and orientation in free space. Plus, forget dipoles. Think of just electrostatic charges. Well, if you bring them together, you have an infinite amount of energy. There's a singularity. And so, wait, we're bringing these things together, the runtime edges on top of the training edges?
So there's a singularity. So that doesn't work. But the mechanical analog works very well. It's exactly what they implemented. And it involves springs with that one little change that the springs have to be-- one end of the spring has to be able to slide on the edge. So let's see. Description not tied to the pixel grid. And I guess we've been over what exactly the field is and the optional additional items. OK, then how it uses starting evidence-- so we collect evidence, as before, about whether this is a good alignment or not. And they have different ways of assigning weights, and there's this kind of threshold. If you're too far away from the edge, then you don't know what's going on. And there's a kind of unpleasant discontinuity there, where you pull the spring out. And up to a certain separation, it gets stronger and stronger and stronger. And then suddenly, it breaks. So that is one reason that you can't find a closed form solution, because it's a nonlinear problem. It's not a simple linear least squares problem. And so they don't really address that. There's some argument about some kind of fuzzy logic thing, but it's not really pursued. And apparently, in practice, it's not a big issue. Now, this maximum distance that you allow for the separation should depend on how close you are to the answer. And so it should depend on what stage of the iteration you're at. So as you iterate, you get closer and closer to having the runtime image aligned with the model image. And so at some point, it just should be pretty well aligned. And so you can cut off at a lower value. So the field is a fixed thing. And it has a cut off. But then during the iteration, you can additionally throw out matches that have too large a distance between the runtime edge and the model edge. Torques lead to rotation. Forces lead to translation. So the matching, the inspection-- it's kind of a weird patent in that the abstract is all about inspection.
But then when you read the specification, it's all about figuring out what the transformation is. So the inspection part, sort of tagged on at the end, is based on two things. One is missing features-- things that are in the trained image that have no match in the runtime image. The other is extra features-- things that are in the runtime image that are not in the pattern. And depending on what you are doing, one of those two measures will help you decide whether the object passes the inspection or not. There's also the notion of clutter, which are image dipoles with low weight, presumably things due to the background texture and so on. Multiscale-- blah, blah, blah. We've seen that. OK, the only part we really haven't-- well, we have sort of gone through it in the diagrams, which is the generation of the field. Let's talk a little bit about that. And let me start off by talking about the simplified version, where we just have distance. So what does that look like? So suppose that I have a single point. And there's no directionality to it. Then the distance field, of course, is just a set of concentric circles. And then suppose that I have a circle. So I have some edge that goes around in a circle. Well, then, of course, the distance field is just a bunch of circles, except now we're going inwards as well. And then if I have an edge, it's going to look like that. And it gets interesting when I have a corner. So if I have a corner-- and then on the interior-- so in computing this, everything is nice and cozy, except right in the corner. Different things happen there. And so that's a potential problem with straightforward simple methods of filling this in. Now, we are not working in the continuous world. So we're working in a discrete world. And so we have to figure out how to propagate these values. And, as I mentioned, if we were working with Manhattan distance, it would be simple because we can add up the changes in x and the changes in y.
And we'd always have an accurate result. So if this one is one of the features, then this is one away. That's one away. That's two away. So Manhattan distance is very easy. Euclidean is not. And there's actually a whole bunch of papers on how to do Euclidean in an approximate way that's still good enough and is also very fast. I mean, you can always do it the slow way by, basically, for each pixel, looking at all of the edge pixels, seeing which one is the closest, and calculating the square root of the delta x squared plus delta y squared. But, of course, that's very expensive. What you want to do is incrementally, as you fill in this distance field, just add another layer of pixels to it. So we won't talk about those. But just know that that's one of those problems, like, what's a good way of sorting? There's this problem of what's a good way of computing the distance transform. And, as I mentioned, in our case, we actually have vectors. And the figures in the patent make it pretty clear what to do. I mean, the equations are messy. But the figures pretty clearly tell you what to do. OK. Then we have all of these little local forces. And we talked about adding them up. So one thing we can do is-- so the Fi's are forces at individual dipoles in the runtime image. And the Wi's are weights. And the weights can be all sorts of things according to the patent. They can be predefined. They can depend on the magnitude of the gradient. So you believe things that have a strong gradient more than things that don't and so on. They can depend on how well the runtime gradient lines up with the field vector. They can depend on how well the runtime gradient lines up with the nearest field dipole and so on. So lots of versions. But in general, you want to do something like this. And so that's pretty obvious. We're basically just taking a weighted sum of these forces. And this is going to provide the translation. So we move the alignment based on that.
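The weighted sum just described, together with the analogous torque accumulation, might be sketched like this. The function name and the normalization by total weight are my choices for readability, not taken from the patent:

```python
def net_force_and_torque(matches, center):
    """Accumulate weighted spring forces and the resulting torque.

    matches: list of (position, force, weight) per runtime dipole, where
    force is the field vector sampled at that dipole's position. Torque
    is taken about `center` (e.g. the image center), as the z-component
    of the 2D cross product r x F.
    """
    cx, cy = center
    fx_sum = fy_sum = torque = wsum = 0.0
    for (px, py), (fx, fy), w in matches:
        fx_sum += w * fx
        fy_sum += w * fy
        rx, ry = px - cx, py - cy          # lever arm from the center
        torque += w * (rx * fy - ry * fx)  # z-component of the cross product
        wsum += w
    return (fx_sum / wsum, fy_sum / wsum), torque / wsum
```

Two tangential pulls on opposite sides give zero net force but a nonzero torque (pure rotation); two parallel pulls give a net force and zero torque (pure translation).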
And then what about rotation? Well, very similar. We just take a torque around some center. So that's going to give us a rotation. And so I can just conveniently take the cross product of the vector from some center-- the rotation is going to be about some center. Most conveniently, it's the center of the image. So ri is measured from there. And this is the torque that's exerted by all of those springs. And that will tend to make the runtime image move. So this is a little bit funny because-- no, it shouldn't be-- it's not a vector. So torque here is a scalar-- it's one degree of freedom. But over here, we've got the cross product of two vectors. So what's going on there? Well, being a little sloppy with the notation, but the cross product of those two-- they're both in the plane. And so their cross product is going to be sticking out of the plane. So if I wanted to be pedantic, I could say that. Take the z component-- there's only a z component. But make it into a scalar, basically. And then we can have scaling, I suppose, although that's less common. So that's-- if you want to reference the patent, I guess that's figure 21 and 22. And may the force be with you. Sorry, I couldn't resist that. OK, nearest boundary point, alignment, energy minimization-- yeah. OK. So we'll have a couple of little sidebars here. One of them is distance to a line. And we already mentioned that. But since it comes up so much, I want to go over that again. It depends, of course, on your notation for a line. So, once again, let me propagandize my favorite. OK, so first of all, we can rotate into a coordinate system that's lined up with the line. So we get x prime is x cos theta plus y sine theta. And then y prime is minus x sine theta plus y cos theta. So that's step one. And then step two is let's move the origin to there. So we'll call this x double prime and y double prime. So x double prime is just x prime. And y double prime is y prime minus rho.
So y double prime is minus x sine theta plus y cos theta minus rho. And the line is the locus of y double prime equals 0. That's by construction. And so we can write the equation of the line as x sine theta minus y cos theta plus rho is 0. And that parameterization doesn't have any singularities. It doesn't blow up when the line is going straight up or straight across or anything. It's not redundant. It has two parameters, rho and theta, which is what you need because the family of lines in the plane is a two parameter family. And it's pretty handy. For example, suppose that we want to do some line fitting. So suppose in our image we have a bunch of edge points. And we're trying to find with high accuracy a line that fits them very well. So what we could do is minimize sigma of the distance squared. So our parameters are rho and theta. Because by using this notation, this is the distance. Well, or maybe negative. But we're taking the square. So who cares? And, of course, it's a calculus problem. So we take the derivative and set it equal to 0. And let's start with d rho. Then we're going to get 2 times-- so forget the 2. And now the sine theta we can bring outside the summation. Sine theta sigma xi minus cos theta sigma yi plus rho sigma 1. And if we have n points, we can divide through by n. And then we get-- right? Because the sigma of 1 is n. And so if you divide by n, we just get 1. And so what does this say? So x-bar is the average, 1/n times sigma xi. And, of course, y-bar correspondingly. And so this says that there's a relationship between-- so it doesn't give us rho yet. It doesn't give us theta. But there's a strong relationship between them. And this is that the line has to go through the centroid, because if I plug in-- if I plug in the coordinates of the centroid, x-bar, y-bar, I get 0. And that's the definition of points on the line-- so the centroid is on the line.
So at that point, I might do a sensible thing of moving all my coordinates to the centroid. So I know that's going to be an important point. So I can subtract that out. And then if I-- so I'm still trying to minimize this thing here. And if you plug in-- so turn this the other way around. OK, so now I go into this equation. And I plug in-- for xi and yi, I plug in these expressions. Then it turns out that the centroid cancels out because we have this property. We're going to get x-bar sine theta minus y-bar cosine theta plus rho. And we know that's 0. So we can get rid of that. OK, so now we're ready to do the final step, which is to take the derivative with respect to theta and set that equal to 0. And let's see. After we gather terms-- so that's-- what we're going to get is 2 times this expression times its derivative. And if you multiply all that out, you get this. And, well, that's going to-- so what's this? This is one half sine 2 theta. And this, of course, is cos 2 theta. So I have an expression that involves sine of twice the angle and cosine of twice the angle. So I basically can do an atan2-- atan2-- of-- OK, so that's the sine. So that's going to be this one. 2 times sigma xi yi comma sigma xi squared minus sigma yi squared. So we use atan2 so we don't have the ambiguity about which quadrant we're in. And we don't have things blowing up-- no, sorry. That looks kind of a mess. But, obviously, this is the sine part, if you like. And this is the cosine part of the tangent. So we just take those two quantities. And the least squares solution has the benefit that it's independent of coordinate system choice. What do I mean by that? It means if I have a different coordinate system, different position, different rotations, of course I'm going to get a different answer. But in that coordinate system, that will be the same line. Why do I make a fuss of that? Well, because that's not true if you fit y equals mx plus c.
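Putting the derivation together, here is a small sketch of this rho-theta line fit, assuming centroid-subtracted moments as above. Since the stationary angles from the atan2 come 90 degrees apart (one minimizes, one maximizes), the sketch checks both and keeps the minimizer rather than relying on a sign convention. The function name is mine:

```python
import math

def fit_line(points):
    """Fit the line x*sin(t) - y*cos(t) + rho = 0 to points by
    minimizing the sum of squared perpendicular distances."""
    n = len(points)
    xb = sum(x for x, _ in points) / n      # the centroid is on the line
    yb = sum(y for _, y in points) / n
    sxx = sum((x - xb) ** 2 for x, _ in points)
    syy = sum((y - yb) ** 2 for _, y in points)
    sxy = sum((x - xb) * (y - yb) for x, y in points)
    t = 0.5 * math.atan2(2.0 * sxy, sxx - syy)

    def resid(t):
        rho = yb * math.cos(t) - xb * math.sin(t)
        return sum((x * math.sin(t) - y * math.cos(t) + rho) ** 2
                   for x, y in points)

    # the two stationary angles are 90 degrees apart; keep the minimizer
    if resid(t + math.pi / 2) < resid(t):
        t += math.pi / 2
    rho = yb * math.cos(t) - xb * math.sin(t)
    return rho, t
```

Unlike fitting y = mx + c, this handles a vertical line without anything blowing up, and rotating the input points just rotates the fitted line.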
Now, there are circumstances where fitting y equals mx plus c makes sense if you happen to not have an error in x, if you assume that all the error in fitting is in y. So x is some quantity you have complete control over. Y is something you measure. Then fitting y equals mx plus c makes sense. But here we're talking about image coordinates. There's nothing different about x and y or different rotations and so on. So that's a method that could be used, for example, in some of the patent work we've talked about where we're dealing with edges and we're trying to combine short edge fragments into longer edge fragments. So let's look at-- why don't we look at another patent? So the last one in this series is, again, kind of a subsidiary deal. It has to do with the multiscale business, namely, how do we efficiently-- so we keep on talking about the multiple scales. But how do we efficiently do that? Because, potentially, the computation is very expensive. OK, so let's-- OK, right, because convolution is expensive. If we have a picture with n pixels and we have a kernel with m pixels, then computing the convolution is something on the order of n times m. So with 10 million pixels or whatever, if your kernel is reasonably large, it's going to take a very long time. So there's a lot of interest in finding shortcuts, finding tricks that make that job easier. And here's one. So, again, the last one we're going to look at is from Bill Silver at Cognex. And this is about, basically, efficiently computing filters for doing multiscale. And so the picture here is-- nth order piecewise polynomial kernel. So the trick here is we're going to approximate whatever kernel you like with something that's a spline-- piecewise polynomial. And it's an nth order spline. And we then take the n plus first difference. So if you think about it, each of those pieces is an nth order polynomial. And if you take the n plus first derivative, you get what?
Take a second-order polynomial, take the third derivative-- you get-- 0. OK. And why is that good? Well, it's easy to convolve with 0. So what's the trick? Well, the thing is that when you splice together the polynomials, you may have a discontinuity, not perhaps in the value but perhaps in the derivative, or maybe not in the value and the derivative but in the second derivative and so on. The result is that it's sparse. No matter what you do, maybe a little complicated at those transitions. But large parts of it are 0. So if you do the convolution, your kernel has very small support. And so it's very efficient. It's very cheap to compute. So let's see. What have we got? Filter parameters-- so you can pick different filters based on this. Then you get your signal. And this is the 1D version. You sample it. You convolve it with that now sparse function. But then you have to undo that. And so you do the n plus first sum, which is the inverse of the n plus first difference. And finally you normalize. And there's your filtered signal. So that's the basic idea. Now, there are some things hidden in there that are pretty important-- first of all, that the n plus first sum is the inverse of the n plus first difference. But that sort of makes sense because one-- if you keep on-- left to right-- adding things up, that's the exact inverse of stepping through and subtracting neighbors in either order. So it makes sense for one difference. And then you can convince yourself that, well, these operations commute. So if I have-- let's call them D and S. So let's suppose that this is the identity. And then what we want is to convince ourselves that this is also the identity-- D times D, S times S. So we take two differences. And then we take two sums. Well, because these operations commute, I can write this as DS DS. I switch these around. And, of course, I know that this is the identity. And that's the identity. And I'm done.
And that generalizes to n plus 1 differences and n plus 1 sums. So that's one idea in there that we can sparsify a convolution and make it cheaper and that we can undo that sparsification by taking sums. And if you go through the algebra of figuring out how many operations, you always come out-- well, pretty much always come out ahead. That's one idea. The other idea is that we're kind of mixing up convolutions and differentiation. And that's a slightly more subtle point. But we can think of integration and differentiation as convolutions. So let's think about integration. So what would you convolve with to get an integral? A step of ones, right? But all the way from minus infinity to plus infinity or-- so when we compute the integral-- so we have f of x. And we want the integral of f of x at some particular place-- let's say x prime. Then we're integrating from minus infinity up to x prime. And we're ignoring the rest of f of x. So that's a little bit like taking a step function, multiplying by a step function-- well, flipped, right? Because we're multiplying by this and then adding the result. So we'll talk about that a little bit more. So without that being mentioned in the patent, it's depending critically on the fact that derivative operators can be treated as convolutions-- not convolutions with real functions because we need impulses and stuff. But if we allow impulses and distributions instead of functions, then we can treat them that way. And once we do that, we have all of the power of convolution and all of the nice properties of convolution. So, for example, convolutions commute. Why is that? Well, because the Fourier transform of convolution, as you may remember-- 6.003-- is the product in the transform space, right? And products commute. So if convolution converts into a product and the product commutes, that must mean that convolution commutes.
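To make the D and S argument concrete, here is a quick numeric check in Python with NumPy (the names D and S and the prepended zero are my own bookkeeping, not from the patent) that a running sum undoes a first difference, and that two differences are undone by two sums:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(20)

def D(a):
    # first difference; prepending a zero keeps the length and makes D invertible
    return np.diff(a, prepend=0.0)

def S(a):
    # left-to-right running sum
    return np.cumsum(a)

sd = S(D(x))          # S undoes D
ds = D(S(x))          # and D undoes S
ssdd = S(S(D(D(x))))  # two differences, then two sums: still the identity
```

Because D and S commute, S S D D regroups as (S D)(S D), which is exactly why n plus 1 sums undo n plus 1 differences.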
Similarly, associativity-- if I've got A times B times C, I can think of that either as multiplying A times B first and then taking the result and multiplying by C. Or I can think of it as multiplying B by C first and then multiplying A by that. Well, since, again, the transform of convolution is multiplication, the same must apply to convolution. So we can switch things around, which is what I did over here. Well, no. Here, I used commutativity. But we are also going to need associativity. So we can associate in convolution, which is going to be very powerful, and allow us to switch things around between the signal and the filter and so on. OK, so let's continue with this. So that's the same picture. The patent examiner just liked that and put it on the first page. And let's look at what we have here. So the sort of smooth curve up here could be a kernel for some sort of smoothing filter, perhaps an attempt to roughly low-pass filter a signal or approximate a Gaussian or something. But it turns out that this is composed of segments that are each quadratic. And so now we take a difference-- or in the continuous world, a derivative. And what do we get? Are these straight lines? Why are they straight lines? Because this was second order. So if we take a derivative, it'll be first order. But it's not sparse. We're not done. So now I repeat that. We take another derivative. And because this is just a ramp, we get a constant value. And then this is a ramp going down. So we get a constant value on the second derivative. And that's a ramp-- again, a constant value on the second derivative. We're not done yet. Remember n plus first. So we have to do one more derivative. Well, what's the derivative of a constant? It's 0. So all along these long stretches, potentially-- I mean, this is-- in order to fit into the figure, they've made it relatively compact. But you could imagine we could do this over a longer extent. All of these areas, we get nothing. We get 0, which is what we want.
It makes it sparse. But at the transitions, at the places where these quadratic things were spliced together, we do get a result. And, amazingly, there are only two non-zero values. There's only that value and that value. So instead of having to convolve with something that has 20 non-zero values, we convolve with 2, which is really cheap. Plus, they're the same magnitude. So we don't even have to multiply. We just subtract. But then we don't get the answer we want. We get the third difference of the answer we want. So now we take the result. And we just left to right sum it up-- once, twice, three times. Just like first difference is such a simple operation, this is also a simple operation. We start with 0 at the left. And we just keep on adding the values and writing them into an array. And then we do that with that new array-- same thing. And then we do it with the results of that. And if we've done it three times, we're done. So there's a certain cost to that. But it's nothing compared to the cost of doing the convolution with the original function that had large support, had non-zero values for many points. So this is kind of a textbook example of this method because it's giving us a very high compression. The result after taking the third difference is very small. Let's see-- first, second, third. Yeah, third difference. OK, so that's the basic idea. And here's another one. Oh, wait, is that the same one? Yes, that's the same one. And so here's the circuit for doing that. So here's the one-dimensional function. Think of it as a row in the image. And there's a sampling operator, which picks out every fifth value-- four of them. And those are the results of that third difference of the convolution function. And then we convolve-- sorry, I think I screwed up here because there are non-zero values at the end as well. So there's this value here and that value. So there are actually four non-zero values in this example.
And two of them are magnitude three, and two of them are magnitude one. So that's why we're reading our four values. And the outer ones are multiplied by plus or minus 1. The inner ones are multiplied by plus or minus 3. And we add them up. And that produces a new one-dimensional stream of numbers. And then we push it through three accumulators. And out comes the result. Let's see. And so that's just a picture of an accumulator, where we keep the old value and we add in the new value. I'm not sure why we need a diagram for that. And that's the actual convolution operation. And then we can-- often we want to control the bandwidth. We want to control how much we reduce the resolution of the image. Well, here we can do that very simply by changing the spacing between those four non-zero values. So we can stretch out that spline horizontally or compress it just by changing S. And so here we're looking at the ith pixel and the i minus S plus 1 pixel and the i minus twice S plus 1 pixel and the i minus 3 times S plus 1 pixel. So we can change the scale very easily. And notice there's no change in the amount of computation, whereas if you were using the full original spline, or whatever it's an approximation of, like a Gaussian, then as you increase the size, of course the computation goes up. So here, since we're only doing any work where the parts of the spline are spliced together, that doesn't change. So we can easily control the filter parameter. Here's another one. Let's go through this one. So this one-- we have this curve up here that's, I think, a parabola upside down. We take the first difference-- so the first difference of second order is first order. And we get this line. And then we take the second difference, and because this has a constant slope, of course, we get a constant value for the second difference. We're still not done because it's still not sparse. We take the third difference. And, of course, we get 0 all the way in here.
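Those plus or minus 1 and plus or minus 3 taps fall out if you build the piecewise-quadratic bump as three boxes of width S convolved together; this is my own stand-in for the kernel in the patent figure, but it shows the whole pipeline: sparse third difference, sparse convolution, then three running sums.

```python
import numpy as np

rng = np.random.default_rng(1)
signal = rng.standard_normal(60)

def quad_spline(S):
    # three boxes of width S convolved together: a piecewise-quadratic bump
    box = np.ones(S)
    return np.convolve(np.convolve(box, box), box)

def third_difference(k):
    # zero-pad first, so the spikes at both ends of the kernel are kept
    return np.diff(np.concatenate(([0.0] * 3, k, [0.0] * 3)), n=3)

S = 5
k = quad_spline(S)
d3 = third_difference(k)
taps = d3[d3 != 0]   # the four surviving taps: +1, -3, +3, -1, spaced S apart

# sparse convolution followed by three running sums reproduces the direct result
out = np.convolve(signal, d3)
for _ in range(3):
    out = np.cumsum(out)
direct = np.convolve(signal, k)
```

Changing S stretches the kernel but leaves the same four taps, which is why the cost of this scheme does not grow with scale.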
Now, this time, the ends are more complicated because the discontinuity-- there's a discontinuity not just in a higher derivative but in the first derivative. It's continuous in the function itself at the ends. But the first derivative has a discontinuity. And that means that the second and third derivative are going to be more complicated. So we have these spikes at the end. And there are two non-zero values in each of these spikes at the two ends. So it's just another example of-- so if, for example, you decide that the Gaussian is a good filter for you to use because of its transform properties, then you can fit a spline to it. And instead of doing the expensive computation-- or suppose that you say, OK, I know that the ideal low-pass filter has a sinc function in the spatial domain. Well, the sinc function, for a start, goes on forever. So that's going to be a problem. But also, it doesn't have compact support. It's non-zero almost everywhere. So I can fit a spline to it. And we'll be doing that. And that's of particular importance for us because we want to work at multiple scales. So we have to subsample. Nyquist tells us-- as does Shannon and whoever invented it first. It wasn't Nyquist-- that we have to be careful, because the sampling produces aliasing artifacts unless we've cut out the high frequency content before we sampled. And so we have to be able to at least approximately low-pass filter our images. And so, how do you do that? Well, you can convolve with the sinc function. But that's very expensive to even approximate directly. If we can approximate the sinc function with one of these splines, then we can make that a reasonable proposition. I'm not sure there's-- OK, there are more examples. It doesn't have to be just a lump. So here we have a function that has a positive and a negative peak. Maybe that's an approximation to a first derivative operator. And again, we take the first derivative. We get the ramps, second derivative.
We get constant, third derivative. We get zeros, with something at each of the transitions where the splines are stuck together. And we reduce the computation dramatically because, in this case, we have one, two, three non-zero values. Oh, and there's-- OK, five. Five non-zero values. And this is just about clever details on how to do the sampling efficiently. And that's in the y direction. Now, we can do the same in the x direction. We can do the same in the y direction. And the idea would be that we run the convolution in the x direction. And then we produce a new image. And then we run the convolution in the y direction and produce the final result. And that's one way to proceed. What this diagram shows is that you can do better by combining those two operations. So you do still need some intermediate memory. But you don't need enough memory for a whole image. You only need a memory large enough for the convolution in the other direction. So here we go through the y convolution using the methods we just described. And we keep a few rows. And then we run the x convolution on that, each of these rows. And so that kind of brings up another issue, which is it seems very efficient to do 1D convolutions. But 2D convolutions are expensive. And so there's an open problem, which is can you use this clever idea of sparsifying the convolution in 2D. And I don't know-- might be a thesis topic in there. But what you can do, meantime, is say, in some cases, maybe we can approximate a 2D convolution by a cascade of a 1D convolution in x and a 1D convolution in y. And we look at that as well. And it goes into why this is important. It talks about the Canny edge operator and other edge operators and multiscale and blah, blah, blah. This is much shorter than the other patents. But it's pretty much just using what I describe. Just for fun, let's look at the claims.
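The cascade idea mentioned above is exact whenever the 2D kernel is separable, that is, an outer product of two 1D kernels. A small numeric check (the naive conv2d here is my own reference scaffolding, not from the patent):

```python
import numpy as np

rng = np.random.default_rng(2)
img = rng.standard_normal((6, 7))
kx = np.array([1.0, 2.0, 1.0])            # 1D kernel for the x direction
ky = np.array([1.0, 4.0, 6.0, 4.0, 1.0])  # 1D kernel for the y direction

def conv2d_full(a, k):
    # naive full 2D convolution, for reference only
    H, W = a.shape
    h, w = k.shape
    out = np.zeros((H + h - 1, W + w - 1))
    for i in range(H):
        for j in range(W):
            out[i:i + h, j:j + w] += a[i, j] * k
    return out

# direct 2D convolution with the outer-product kernel
direct = conv2d_full(img, np.outer(ky, kx))

# cascade: 1D convolution along each row, then along each column
rows = np.apply_along_axis(lambda r: np.convolve(r, kx), 1, img)
cascade = np.apply_along_axis(lambda c: np.convolve(c, ky), 0, rows)
```

For an m-by-m kernel this replaces roughly m-squared multiplies per pixel with 2m; a non-separable kernel can only be approximated this way.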
A method for digitally processing a one-dimensional signal, the method convolving the one-dimensional signal with the function that is the n plus first difference of an nth order discrete piecewise polynomial kernel so as to provide a second one dimensional digital signal, n being at least two and so on. And then perform discrete integration n plus 1 times to produce the final result. OK. So there's some sort of things we need to fill in there that we haven't talked about. You may be wondering why I brought this. So this is a calcite crystal-- please don't drop-- which is almost optical quality. And I got this in one of those crystal shops. You know how crystals work on your mind. You buy this and you become famous or rich or something. But this is used in optics. And the reason is that it's birefringent. If you look through it at some surface, you will see two copies of the picture. And why are the two copies? Well, as you know, materials have refractive index, which causes light rays to be deflected by Snell's law. Well, this material is unusual in that it has two refractive indices depending on the polarization. So since light in the room is sort of randomly polarized-- there are both types around. Both are deflected but by different degrees. So you get two copies of everything as you move the crystal. So why is that relevant? Well that's a trick that's used in cameras. So in cameras, we have a problem with aliasing because we're sampling and we can't guarantee that the image has been low-pass filtered. And so one of the tricks that's used is to have a very, very thin layer of this material, which causes two copies of the image to appear very close together, a few microns apart. And that's not a perfect low-pass filter. But it suppresses high frequency content. And together with other items, it overall makes the sampling better. 
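The two-copies trick can be sanity-checked in one dimension. Treating the birefringent plate as a kernel with two equal impulses one sample apart is a simplification (the real displacement is a few microns, not exactly one pixel), but it shows the behavior:

```python
import numpy as np

# two equal copies of the image, displaced by one sample
h = np.array([0.5, 0.5])

# frequency response of that kernel: H(f) = 0.5 + 0.5 exp(-2 pi i f),
# whose magnitude works out to |cos(pi f)|
f = np.linspace(0.0, 0.5, 6)   # 0.5 cycles per sample is the Nyquist frequency
H = h[0] + h[1] * np.exp(-2j * np.pi * f)
mag = np.abs(H)

# the response falls from 1 at DC to 0 at Nyquist: the content that would
# alias is attenuated most, which is all this crude low-pass filter needs to do
```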
Now, if you are willing to pay and take a chance, you can send your fancy schmancy digital SLR camera to a place that will remove this filter because-- geez, it's removing my high frequency content. I want higher resolution. And, yes, you do that. And your camera has, apparently, higher resolution. But then heaven forbid you look at something like your jersey pullover or something with a fine pattern on it. And all of a sudden, you'll see all this interference, moiré patterns. So, yes, it's cutting high frequency content. But there's a good reason for cutting the high frequency content. And, for example, when you're brought in front of a television camera, in the old days, they used to tell you not to wear a striped tie or a dress that has narrow, closely spaced stripes because you're going to end up not just with aliasing of the kind we usually see, but it's going to mess up the color as well. So you're going to get funny colored stripes and so on. And since then, they've improved the low-pass filtering before sampling so that this is not such a big issue anymore. But you can find a lot of videos of lectures at MIT where someone didn't pay attention to this. And instead of watching what's going on, you're watching the person's shirt change colors and shapes. OK, so anyway, we'll talk about that next time, about what has to be done in the camera to suppress these aliasing effects. And that's the material that we use there.
MIT 6.801 Machine Vision, Fall 2020
Lecture 2: Image Formation, Perspective Projection, Time Derivative, Motion Field
BERTHOLD HORN: We'll start off by talking a little bit about a review of things we've done last time, perspective projection, and a little bit about things that are relevant to the homework problem, particularly the last two questions. And we'll go from perspective projection to motion. So in perspective projection we have a relationship between points in the environment and points in the image. And we found that we had the simple relationship between the 3D world and the 2D world, as long as we pick a suitable coordinate system with the origin at the center of projection, and the z-axis along the optical axis, and so on. But then we're going to go from that to motion by differentiating that equation. And then we'll talk about motion of brightness patterns in the image itself. So we had that, and then we had a vector version, which is only slightly more compact. So it's not, in this case, a great help. But we'll use both of them. We'll switch back and forth and use whatever is suitable for the problem that we're dealing with. So let's start with motion. So that's a static setup. And we can easily relate points in the environment to points in the image. Now, what if there's motion? What if that point in the environment moves? Obviously, there will be some motion in the image. And if we can find out what that relationship is, then maybe in some circumstances, we can invert that so that we can measure the motion of brightness patterns in the image, as you do in the homework problem, and somehow use that to figure out what's happening out in the world. So we just differentiate. We take our perspective equation and we differentiate. So we get-- Now, on the right-hand side, we have a ratio. And so there are two parts to the derivative. And we just apply the rule for differentiation of a ratio. And what I'm going to do a lot is reduce the intimidation level of the equation by substituting more easily digested symbols.
So in this case, we've got all these derivatives. Well, what are they? They're really velocities. So there's going to be a velocity in the image in the x direction, which I'll call u. And there's a velocity in the real world in the x direction, which I'll call big U. And then we have a velocity in the real world in the z direction, which I'll call W. So that looks a little bit less scary. And of course, I can do the same in the y direction, so apply the same idea over here. So that's the forward direction. That is, if you tell me what the motion vector is in 3D, here's the formula to tell me what the motion vector is in 2D. And of course, we might want to invert that, but for the moment, let's go with that. Now, we can rewrite this in various forms. And since we'll use this a lot, let's do that. So one thing we can do is we can split up these terms into 1 over z, X over Z. And why is that useful? Well, because we know that X over Z is little x over f. So we can rewrite this using image coordinates. And we end up with-- And one thing this allows us to do is to ask the question, where in the image is there no motion? So if we're looking at the movement of brightness patterns in the image, there may be some places where there isn't any. And those are of particular interest, first of all because we can find them using image processing techniques, and then because they tell us something about the environment or the motion. So you can see, if we set this equal to 0, we find that's true at a point x0 such that x0 over f is big U over big W. And similarly for this one equal to 0. And so the point x0, y0 is that sought-after point where there is no image motion. And we call it the "focus of expansion." And in the case of simple situation, like I'm approaching the wall, the focus of expansion is the point towards which I'm heading. And so we can express it directly in terms of the 3D motion vector. 
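The component formulas can be exercised numerically. A sketch, with made-up velocities and f set to 1 for convenience: differentiating x = f X/Z gives u = f (U Z - X W) / Z squared, and similarly for v; the focus of expansion is where both vanish.

```python
import numpy as np

f = 1.0                     # focal length (arbitrary units)
U, V, W = 0.3, -0.1, 1.0    # 3D velocity components of a scene point

def image_motion(X, Y, Z):
    # differentiate x = f X / Z and y = f Y / Z with respect to time
    u = f * (U * Z - X * W) / Z ** 2
    v = f * (V * Z - Y * W) / Z ** 2
    return u, v

# focus of expansion: the scaled projection of the motion vector
x0, y0 = f * U / W, f * V / W

# a scene point that projects onto the FOE (X/Z = U/W, Y/Z = V/W): no image motion
u_foe, v_foe = image_motion(5.0 * U / W, 5.0 * V / W, 5.0)

# any other point flows radially: (u, v) is parallel to (x - x0, y - y0)
X, Y, Z = 2.0, 1.5, 4.0
u, v = image_motion(X, Y, Z)
x, y = f * X / Z, f * Y / Z
```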
And you'll notice that this is actually just a projection of the 3D motion vector into the image plane. So I take that same diagram over there, and now, instead of mapping the vector to the point P, I'm mapping the velocity. And if I do that, then that defines this point x0, y0 in the image where there is no motion. And that's called the focus of expansion. That's a useful thing, because if you can find the focus of expansion, all you need to do is connect that point, x0, y0 to the origin, and you have a vector that tells you the direction of motion-- so a nice example of inverting that process. Now, once we have the focus of expansion, we can rewrite those equations yet one more time. And so there are a couple of things there that are of interest. One of them is that the f's cancel, so we can do that. And then we actually can get an idea of what the image is going to look like. So let's suppose this is the image plane. Now we're looking at it straight on. And suppose that this is our focus of expansion. And we're moving with respect to-- the environment is moving with respect to us or we're moving with respect to the environment. Einstein said that it's all relative. It doesn't matter. The only thing we can determine is the difference between the two motions. Then, there will be no image motion here. And what about other places? Well, this tells you exactly what the pattern will be like. So if we go to the right such that x0 minus x is positive, while we keep this one 0, what happens is U will become larger. The further we go away, the larger. So we'll have a little vector here that tells us the image motion at that point, and another vector over here, which is perhaps larger. And then we can go to where this is negative, so the other side. Draw a vector here. So I'm starting to draw a little vector diagram of the motion field, if you like, how things are expanding from that point. And I can do the same in the y direction. 
So suppose I keep x0 equals x, but I change y0, so I'm going along this line. I'm going to get vectors that point this way. And let me be adventurous and go off at right angles where x0 minus x is the same as y0 minus y. Well, that means that U and V are going to be the same. That means I'm going to get a vector at 45 degrees, and opposite direction, like that. And I think you can see that we're going to fill in this whole diagram, this little vector diagram, with arrows that are pointing outwards from the focus of expansion. That's what this equation says-- that all these vectors are radiating outward, and that's why it's called the focus of expansion. So I'm moving towards that wall. And this is the place I'm actually going to hit. I may be going at an angle, but my whole image is looming. It's zooming outwards, which is a useful clue, a useful cue for measuring distance and velocity. As you saw in the homework problem, there's the scale factor ambiguity: because of perspective projection, we can't actually tell absolute distances. And we have a similar phenomenon here. But what we can tell is this ratio. And so this whole field depends critically on that ratio. So what is that? W over Z. And if I make that larger, then the vectors will be larger. That means I'm getting closer to the surface. I'm about to hit it. Conversely, if I make it smaller, that will be smaller. If I make it negative, what happens? Well, all of these vectors are reversed. So it's a focus of compression. There's no such term, but we could call it that. It's just the inverse of the focus of expansion. And that's what happens if I'm moving away from the surface. So I'm taking off from the lunar surface, and the image is compressing as I leave. So what is this? Well, let's turn it on its head. What is Z over W? And W, of course, is the rate of change of Z. So we've got Z over dZ dT. What are the units of that? So Z is in, say, meters. And dZ dT is in meters per second.
So the units of that ratio are-- STUDENT: Seconds. BERTHOLD HORN: Seconds. And so can you think of what that quantity is? STUDENT: [INAUDIBLE]. BERTHOLD HORN: Sorry? STUDENT: Time to impact. BERTHOLD HORN: Time to impact, great, so that's going to tell me how long it's going to take before I crash into this surface. And so that's a quantity that we're going to spend some time thinking about, both because it's important if you're, say, landing a NASA craft on Europa or if you're a fly landing on the ceiling. And also, it turns out to be relatively easy to compute compared to some of the other things that we'll be looking at. By the way, the landing on Europa is a JPL RFP. They wanted to have industry give them ideas of how they're going to do that, and how much it's going to cost. And we told them that this is the right way to do it. And we never heard from them again. So I don't know. So if that craft crashes on Europa, don't blame me. I told them what to do. So this is beginning to look kind of promising because we can actually see our way to inverting this imaging process and getting some useful information, just from the way the brightness pattern in the image changes. But I need to torture you a little bit more, and do this in vector form. And we'll see later why this is useful. Right now, it's not saving us a whole lot in terms of writing, going from the component form to the vector form, but we'll see that it does eventually. So instead of differentiating the component versions, we're going to now differentiate the vector form. And of course, it's very similar. So what's the time derivative of a vector? Well, it's just the vector where you've differentiated each of the components with respect to time. That's nothing very fancy. So we have that term again that corresponds to this first part here, but because it's a ratio, and because R can be changing, big R can be changing, we need to take account of the fact that it's a ratio.
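The time-to-impact ratio is measurable without knowing the distance, the speed, or the object's size: the image width w of an approaching object satisfies w over dw/dt equals Z over the closing speed. A sketch with invented numbers (f, L, Z0, and speed are "unknowns" the vision system never uses):

```python
# quantities hidden from the "vision system": focal length, true object
# width, starting distance, and constant approach speed
f, L = 1.0, 2.0
Z0, speed = 100.0, 4.0

def image_width(t):
    # perspective projection: an object of width L at distance Z images
    # with width f L / Z, and Z shrinks as we approach
    return f * L / (Z0 - speed * t)

t, dt = 3.0, 1e-4
w = image_width(t)
wdot = (image_width(t + dt) - image_width(t - dt)) / (2 * dt)

ttc_est = w / wdot                   # purely image-side measurement
ttc_true = (Z0 - speed * t) / speed  # the actual Z / |dZ/dt|
```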
And we get-- and so to make it look slightly less intimidating, we'll switch from Leibniz to Newtonian notation. Again, the underline always means that it's a vector. And I'm splitting this up a bit. So the dot here denotes differentiation with respect to time. And so we have two terms, one of which is obvious scaling-- that there's a certain motion in 3D, and it's going to be magnified or demagnified by the ratio of these two distances. So there's a big motion out here. It's a smaller motion in there by the ratio of those. But then there's this other term that corresponds to motion in depth that we have to take into account. And I rewrote it this way because again, we can make use of the fact that we have this perspective projection equation. And so we can introduce the image coordinates. So the focus of expansion, again, would be where r dot is 0. And so that corresponds to-- oh, let me put the 1 over f on the other side. That's easier. And so we basically get the same equation. I mean, if we just write out the components of that equation, we get what we had before. Then I just wanted to do one more little trick, which is in the appendix of the book, we have some useful results about all sorts of simple math that's needed, one of which has to do with vector equations and with manipulating cross products of cross products, that kind of thing, which I typically don't remember how to do. So I'll go to the appendix of the book. And the appendix is on the materials on the Stellar website.
Now, you can match up these two lines, and you can see where we're going with this, which is-- So finally there, we have the image motion expressed in terms of world motion using this somewhat odd-looking expression. So why is this of interest? Well, the first thing you can say is that the result is perpendicular to z. Because if you take the cross product of two vectors, the result is perpendicular to both of the vectors. So r dot z equals 0. So is that surprising-- that the image motion is perpendicular to the z-axis? So image motions in that plane-- z-axis is there. No, of course it's obvious. I mean, it couldn't be any other way. This is a good way to check the result. If we got anything else, there'd be a problem. So in the image, we only have 2D. We have x, y. And we have as velocities u and v. We don't have a velocity in the z direction. That would be popping out of the image. So that's interesting. And then something else we can look at is, what if the image motion is radially outward or inward-- that is, the motion is along the radius vector to the point in the scene? What do you expect will happen to the image motion in that case? So you have the baseball coming straight for you-- its image is doing what? STUDENT: [INAUDIBLE]. BERTHOLD HORN: It's expanding, but it's not moving. So this is the case where we would expect image motion to be 0. And now we check this formula. R dot and R are parallel. When we take their cross product, the cross product's length is proportional to the sine of the angle between the two vectors. Where they're the same vector, then that angle is 0, and so the sine is 0. And so this formula, while it took a little bit of work to get there, it's helpful for answering certain questions. And we can right away check things like this. If the thing is coming straight at you or you're moving directly towards it, there'll be no image motion. Well, that's our focus of expansion. 
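The appendix identity can also be checked numerically: with r-dot equal to f times z-hat cross (R-dot cross R), divided by (R dot z-hat) squared, the result stays in the image plane, matches the component formulas, and vanishes for radial motion. A sketch with arbitrary numbers:

```python
import numpy as np

f = 1.0
zhat = np.array([0.0, 0.0, 1.0])

def image_velocity(R, Rdot):
    # r-dot = f * zhat x (Rdot x R) / (R . zhat)^2
    return f * np.cross(zhat, np.cross(Rdot, R)) / np.dot(R, zhat) ** 2

R = np.array([2.0, -1.0, 5.0])
Rdot = np.array([0.4, 0.2, -1.0])
rdot = image_velocity(R, Rdot)

# the same thing from the component formulas u = f (U Z - X W) / Z^2, etc.
X, Y, Z = R
U, V, W = Rdot
u = f * (U * Z - X * W) / Z ** 2
v = f * (V * Z - Y * W) / Z ** 2

# radial motion (the ball coming straight at you): no image motion at all
rdot_radial = image_velocity(R, 3.0 * R)
```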
So if I'm moving towards the wall, that is the point where the motion vector lines up with the direction to that point. And so I get no image motion. While we're here, we can look at some flow fields. So we already saw a flow field up there for a general motion towards some point. And we could look at just a couple of other cases. So let's suppose that U, V and W could be anything, but let's make it-- so we're assuming now that two of those components are 0, so there's only motion in the world in the x direction. And then what do we expect in the image? Well, from our formulas, we see that the only component of the image motion is little u. So if the world is moving in x, or I'm moving in x relative to the world, then the vector field of how brightness patterns move in the image looks like this. One thing to note, though, is that the length of those vectors is not the same-- that the length depends on z, or if you like, the inverse of z, and so on. So if you look at all of these formulas, we got R dot Z, which is big Z. And look at those formulas up there. You've got 1 over Z. So yes, this is a very simple vector field, but the vectors aren't all the same. So that kind of is unpleasant. At the same time, when something is unpleasant, you can sometimes take advantage of that. So in this case, if we have vector field like this, one thing we could try and do is recover depth, because these things are all inversely proportional to depth. And so again, an illustration of how once we understand the forward process, we can turn that around to try and solve the inverse problem. U is 0, and let's try this one. Well, that's just turning everything 90 degrees. So that's not very interesting-- the same idea. Now, I guess to exploit this idea of recovering depth from these fields, we would need to know Z. We need to know the velocity. So there's two things affecting what these pictures look like. One is the object, the shape, the distance, and the other one is the motion. 
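Since each flow vector carries a 1 over Z factor, known lateral motion turns flow into depth. A sketch assuming the sideways speed U is known (say, a camera on a rail) and there is no motion in depth, W = 0:

```python
import numpy as np

f = 1.0
U = 0.5                              # known sideways (x) velocity; V = W = 0
Z_true = np.array([2.0, 4.0, 8.0])   # depths of three scene points

# forward model: pure x-translation gives u = f U / Z at every image point,
# so nearer points flow faster
u = f * U / Z_true

# inverse: with the motion known, each flow vector hands us a depth
Z_recovered = f * U / u
```

With the motion unknown as well, only the ratio of depth to speed is determined, which is the scale-factor ambiguity again.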
And if we know either one of the two, then we can calculate the other one. Of course, in practice, often we don't know either one, and we'd like to recover both. And then often, we end up with an ill-posed problem. How about U is 0, V is 0, W is not equal to 0? Well, we already drew that diagram up there, kind of. If we look at the equations, we're just going to get the FOE is-- so that's the simplest case of that picture, where the focus of expansion is right in the center of the image. Well, this is starting to hint at some of the things we're going to pursue. We develop the equations for some transformation like this, between something in the world and something in the image, and then we try to invert it. And the forward one is always simple. There's the equation up there. It's just plug in or whatever. The inverse isn't because often, there may be more than one solution, or there may be no solution, or there may be an infinite number of solutions, or it's ill-posed, in the sense that if you make some small measurement error, the answer is going to be perturbed by a large amount. So we'll have to be careful about that. But talking of motion of image brightness patterns, let's talk about that. That's in the homework problem. So let's talk a little bit about that. So we'll ignore for the moment where these images come from. What is an image? Well, an image is a 2D pattern of brightness values. So it's E as a function of x and y, if you like. And we already said something about that in terms of brightness being power per unit area. We'll make that more clear later on. So his question was, what about color? And yes, we will be talking about color at some point, which, in a way, is this repeated three times. And a couple of reasons not to bring it in at this point. One is, we'll do it more simply just using gray levels.
And then color is peculiarly human-centric, in that the only reason that we use three numbers to represent color is because our particular color sensing system has three senses. But we'll talk about all of that later on. For now, assume that we have taken RGB and turned it into brightness, which is, depending on who you read, some combination of the three components, mostly G. Because human vision is most sensitive to G, and judges absolute brightness based on some combination of R, G, and B. Now, there are a couple of things I'm going to do repeatedly. So I want to kind of preemptively talk about them. One of them is I'm going to be switching back and forth between continuous and discrete. So these days, with everything being digital, of course the images we get are discrete. They're quantized. They're quantized in space, and typically on a rectangular grid, which is kind of not the best grid to use, but that's what we use. And then in brightness, so we also don't get continuous values for brightness. We get them quantized, often to as few as eight bits. But it turns out that a lot of what we do is easier to understand in the continuous domain. So I've talked to you about E of x, y. That's my brightness pattern. For any x and y, it'll tell me what the brightness is, the power per unit area. But in practice, it's going to be more like I have an array of numbers with two indices, and dealing with E sub whatever. Correspondingly, in the continuous domain, we'll often be taking integrals. For example, where there's some measurement, that's local at a pixel, but we want to extend it over the whole image. And we take either a single integral or double integral. Well, of course in the discrete world, we'll be taking sums. But those are very straightforward transformations. But they are very useful, because on the one hand, we want to implement this as actual code, and on the other hand, we'd like to make it easier to develop the math. 
And it's almost inevitable that it's easier in the continuous world. Then we get to derivatives. So of course in the continuous world, I can look at the x and y derivative of brightness. And the combination of these two is called the "brightness gradient," which is going to be very important in many of the things we do. And in practice, I'm going to approximate those by making some sort of a first difference. And we'll see that that's one way of doing it. It's not a particularly good way of approximating the first derivative, but we'll talk about others. And just in terms of writing, this is easier to write than that. So I know that's not a big argument, but we'll find that some of these things have a closed form solution in the continuous domain that you can easily obtain. And it's harder to do the same in the discrete domain. So I'll be switching back and forth. With that in mind, let's look at our 1D image. So of course, images are typically 2D. There are 1D sensors, and they have some benefits. One of them is that you can build a 1D sensor with a much larger number of pixels in one direction than you can a 2D sensor, although these days, 2D sensor could be 2K by 4K, some huge number. But linear sensors have been around with many thousands of pixels. And the disadvantage of a linear sensor is to get a real image, you need to scan it. And this is done quite a lot in industry, where things are moving along on a conveyor belt. And you can use a relatively cheap, very high resolution, linear array sensor. And the conveyor belt motion provides the other dimension of the image. It's also used in satellite imaging, where very high-quality 1D sensors are used, and then the motion of the satellite provides the scanning to produce an image. Anyway, suppose we have a 1D image, and suppose things move. So this is at time t. And this is at time t plus delta t. And think of your optical mouse. You're holding it down on the table. 
It has a short focal length lens that's imaging the surface of the table. And if there's some texture to it, as there is on the wood over here, then that texture is mapped onto the image. And if you move the mouse, then the image of that surface will move. And your job is to accurately estimate how fast it's moving. And so here's the picture before, two time steps. And so we're trying to figure out, how do I determine what happens? So let's suppose we have a velocity of u. And so we've moved by that amount. So let me indicate that over here. So that's delta x. Now, we're making some assumptions here that we'll get back to in a second. One of them is that the brightness doesn't change. And so if your optical mouse is looking at the surface of the table, then presumably between one frame and the next, the illumination isn't changing. It's using an LED with a constant current, and it's taking 2,000 frames a second, so it's unlikely to change. The geometry is changing a little bit because you've moved, and now you're looking at the surface from a slightly different angle. But for most surfaces, the change in brightness due to that is very small. So right here, the way I've drawn these curves, I've assumed constant brightness. So let's get it right out front-- constant brightness assumption. And we'll see that there are circumstances where that doesn't apply. But there are lots of cases where that is the case. For example, as I'm walking around the room, your shirt still looks maroonish red, and it doesn't really change. And so we are used to most of the world being somewhat constant in color, constant in brightness, as well, even though we know that there are circumstances where that's not the case. For example, if I look at the reflection of the lights up there in my cell phone, they, of course, move relative to the cell phone as I move around. And we'll talk some more about that. So let's enlarge a little piece of that. So we're blowing up this area. 
And why am I doing that? Well, I'd like to use a linear approximation. So I'm assuming that this-- second assumption-- I'm assuming for the moment this curve is relatively smooth, such that if I look at a small enough part of it, I can approximate it as a line. And then there's a change in brightness, and there's some slope here. And the slope relates the motion to the change in brightness. You can see where this is going. I'd like to use the change in brightness to determine the motion, because how else are we going to measure the motion in the image? The pixels stay in the same place. All we've got is the brightness at a pixel is changing. And somehow, we have to invert that relationship. So in terms of E of x and t, what is this slope? So this is in the x direction, and that's the rate of change of E with x. And so it's dE dx slope. And I'm going to write it that way, as a partial derivative. And might as well use the correct notation right away. We're going to need partial derivatives because we have both x and y and t. And so we need to be clear about which type of differentiation we're talking about. So then that's the slope. So that means that delta E must be the slope times delta x. And so that's going to be-- and I'm going to write this as E sub x. So again, in order to simplify notation, I will often use subscripts to denote partial derivatives. And E sub x and E sub y-- we use so much that it would be a pain to have to write them out in full every time. As we indicated, those are the components of the brightness gradient, which is used for all sorts of things, including edge detection. So now, let me divide through by delta t. And this, I guess in the limit, as I take smaller and smaller time steps, is going to be that partial derivative. And if you like, I can rewrite one more time. So you can see here why using the switch to the continuous domain is helpful. Because I can take the limits here as the delta t approaches 0, and get the partial derivative. 
Of course in practice, I'm getting frames at a fixed rate, and they have a certain interval. I can't make that interval infinitely small. So I end up with a much messier expression. But the two are obviously related. So that's a result for 1D image, and that allows us to recover the motion. So one thing I probably glossed over is that as we go from t to t plus delta t, the brightness decreases. So this delta E over here is actually in-- for a positive slope, the delta E will be negative. So that may have gotten lost in here somewhere. So remarkably, from a single pixel I can get the velocity in this 1D case. Now importantly, that's not true in the 2D case. And after all, that's the case we want to solve. So we're going to have to work a little bit harder. And now a couple of things you'll see-- first of all, the faster things are moving, the larger E t will be. Well, that's no surprise. If the brightness is changing slowly, then you speed things up, then the brightness will change more rapidly. It's like taking your YouTube video and changing the playback rate. When you change the playback rate, you change all the velocities by the same factor. Then you will have more rapid changes of brightness-- intuitively obvious. Another thing you'll notice is that there's a problem if E x is 0. We can't do this if E x is 0. So if we're, for example, at-- suppose we are here. Well, then if the thing moves, the brightness doesn't change or changes only a tiny bit. And so from that, you can't really tell how far it moved. You pretty much have the same result if it moved that much or whatever. And just in terms of implementation, you're dividing by 0, so that's not going to be good. And it's not just 0. It's when it's very small. Why is that? Well, because when it's very small, it probably means that you don't know it accurately, as a result of the fact that you obtain it by subtracting two pixel values. So let's just again go over this. 
We're approximating this by saying it's 1 over delta x times E of x plus delta x, minus E of x. Well, that's fancy notation, but basically, there are two pixels, we take this value, and we subtract that value. So we read out the gray level there, read out the gray value there, subtract them. And that's how we estimate our brightness derivatives. And similarly-- so what do we do with E t? Well, E t is derived the same way, except that we have two frames. And we pick the same pixel out of the two frames, and we subtract their gray values. So it's just in a different direction that we're approximating the derivative. And why is that important? Well, if there is a small brightness gradient, E x is small, then those two values will be very similar. And since they are not known precisely, they have measurement error in them, we are now subtracting two quantities that are similar in size, so we get a smaller quantity. And it's mostly noise at some point. So that tells us that this is something to avoid. We don't want to be-- and it makes sense. I mean, if the image is uniform in brightness, E x is 0, you can move it and you can't tell that it moved. I mean, you'd have to have some reference point like some texture on it to tell that it moved. And if you had texture on it, E x wouldn't be 0. So I know I'm belaboring this over and over again, but it's very important, because it means that this type of measurement is A, very noisy, and B, not trustworthy unless certain image conditions are satisfied. And then the next thing you can do is say, well, we can estimate velocity from a single pixel. But we've got lots of pixels. And that's part of what we're going to be using heavily, that the single pixel result is not great, but we've got a million or 10 million. And so we can do things like least squares to improve the result dramatically.
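A minimal sketch (Python, not from the lecture itself) of the single-pixel 1D estimate just described: E x from two neighboring pixels, E t from the same pixel across two frames, then u = -E t / E x, with a guard for the small-E x failure mode. The function name and threshold are made up for illustration.

```python
import numpy as np

# Single-pixel 1D velocity estimate from two frames, using first differences.
def velocity_1d(frame0, frame1, i, dx=1.0, dt=1.0, eps=1e-6):
    Ex = (frame0[i + 1] - frame0[i]) / dx   # spatial first difference
    Et = (frame1[i] - frame0[i]) / dt       # temporal first difference
    if abs(Ex) < eps:                       # can't divide by a tiny gradient
        return None                         # no usable constraint at this pixel
    return -Et / Ex

# A brightness ramp E(x) = 2x that has shifted right by 0.5 between frames:
x = np.arange(10, dtype=float)
f0 = 2.0 * x
f1 = 2.0 * (x - 0.5)      # same pattern, moved by +0.5
print(velocity_1d(f0, f1, 3))  # 0.5
```

On a noiseless linear ramp the estimate is exact; with quantized, noisy pixels it is flaky, which is exactly why the lecture moves on to combining many pixels.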
So it may be that the result from a single pixel is really flaky, but that's OK because we divide the error by 1,000 if we think about the statistics of having a million of these pixels. So I keep on saying they're noisy, and at the same time I'm saying we can do this. And the reason we can do it is because we've got lots of them. So we talked about how to approximate how to switch back and forth between the continuous world and the discrete world. And now, I don't want to give away the solution to the homework problem. But the next step, I guess, would be to get a bunch of these and combine them. So we might do something like this. If we have n pixels, then we might try something like this. So now, instead of just using two pixels, we use those two pixels to do the calculation, then we use these two pixels to do the calculation, then we use those two pixels to do the calculation, and so on. And so we add them up. So we have a lot of noisy measurements, and we take the average. And magically, things improve. And so just without going into hairy statistics or assumptions about probability distributions, roughly speaking, when you average n values, you reduce the standard deviation by the square root of n. And when you have these political surveys and they say they have a 5% margin of error, what they mean is that they talked to 400 people. Because the square root of 400 is 20. 1/20 is 0.05-- so same principle. So it doesn't improve linearly with n, unfortunately. That would be even better. But it does improve with the square root of n. So if we have a million pixel image, it improves by a factor of 1,000, which is pretty significant. Now, if we just did this in the simple way, we'd still run into trouble. First of all, if E x is really 0, you'd be dividing by 0. So you can't do that. I guess you could leave those pixels out. And then if E x is small, you still have an answer which is relatively bad. 
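The square-root-of-n argument above is easy to check numerically. This is just a sketch with synthetic noise (the numbers are made up), not a claim about real image statistics:

```python
import numpy as np

# Average many noisy single-pixel velocity estimates: the standard
# deviation of the mean shrinks roughly as 1/sqrt(n).
rng = np.random.default_rng(0)
true_u = 0.5
n = 1_000_000
# each "pixel" gives the true velocity plus unit-variance noise
estimates = true_u + rng.normal(0.0, 1.0, size=n)
print(abs(estimates.mean() - true_u))  # on the order of 1/sqrt(n), i.e. ~0.001
```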
So there are places where the slope is large, where you get good information. And now, you're polluting it by adding in as an equal information from places where the slope is low. And that's why in the homework problem, we talk about weighting. So don't just take an average and treat each of them equally. But multiply by some weight factor. And then, of course, what happens to the 1 over n? Well, you have to compensate for the fact that you've now switched to using weights. As we mentioned, images are really 2D. So let's extend this analysis to 2D, and see what happens. Now, some of this may seem tedious and repetitive to some of you, but we're all on different wavelengths. So I'm going to get the next result in several different ways. And different approaches may appeal to different people. So let me do it this way. So let's start by talking about the image volume. So think about video, and we're stitching together the frames. So each of these cross-sections is a frame. And we end up with this three-dimensional thing which is brightness as a function of x, y, and t. And typically in practice, we slice it across this way. But we could slice the volume any way you like. And in some cases, there are advantages to slicing it in different ways. So that's part 1. And sometimes it's useful to visualize things this way. Then, we're going to be using partial derivatives. So what are they? Well, it's just the derivatives in the axis direction here. So we'll be dealing with dE dx, dE dy, and dE dt. And just as we've been discussing, we approximate them by taking differences-- first differences of neighboring pixels in either the x, the y, or the t direction from frame to frame. So that's all that is. So some people find partial derivatives a little bit more scary than ordinary derivatives, but that's all it is-- we're just taking the derivatives in all three directions. Whereas we could be doing something more complicated. So let's think about that. 
So suppose that this video is from me moving through an environment, and there's some object in the environment that I'm tracking. And at time t0, it's there in the image. And then in the next frame, it's over here, and so on. So it follows some path in this three-dimensional volume. And I could do this for other points, and develop this whole set of curves. But let's just focus on one. Now, one of the things I might want to do is see how this changes. So when I'm looking at different frames of an image, often what I want to do is not take the derivative at a pixel and see how it changes to the next time frame, but I want to follow an object and see how it changes. And in many cases, I'm going to make the constant brightness assumption, that it's not changing in brightness. And so I would like to express some constraint on the derivative along this curve. And so I could have a curve. How do I define that? Well, one way is to give x and y as a function of t. So this green curve-- for each time t, I can give an x and a y. That's how I define that curve. And then, what I'd like to do is look at, for example, the total derivative along that curve, keeping in mind that along that curve, not only is t changing, but x and y are changing in some defined way. So this is the total derivative. And in what we're going to be doing, we'll set that to 0 because we're assuming that this point maintains its brightness. And then I can use the chain rule to split this up. So I'm going to get dx/dt times the partial of E with respect to x, plus dy/dt times the partial of E with respect to y, plus the partial of E with respect to t. So I can take this total derivative and express it in terms of partial derivatives. And again, that looks kind of intimidating, but I can easily rewrite this as u times E x plus v times E y plus E t equals 0, where u is dx/dt and v is dy/dt. So that's equivalent to what we had for 1D motion. So this is showing a relationship between the brightness gradient and the image motion. And if I know the image motion, and I know the image, I can predict how it's going to change. And it's very simple. I just use this formula.
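The forward direction really is that simple. A sketch, assuming the constraint u E x + v E y + E t = 0 (the function name is made up):

```python
# Given the flow (u, v) and the spatial brightness gradient (Ex, Ey),
# the constancy constraint  u*Ex + v*Ey + Et = 0  predicts the
# temporal brightness change:  Et = -(u*Ex + v*Ey).
def predict_Et(Ex, Ey, u, v):
    return -(u * Ex + v * Ey)

Ex, Ey = 2.0, -1.0
u, v = 0.5, 0.25
print(predict_Et(Ex, Ey, u, v))  # -0.75
```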
And what we're more interested in doing is we've got images, image sequences, and we want to find u and v. So that's where we're going. Now over here, we could just solve for u, because we had a single equation and a single unknown. And that's another thing we're going to do a lot, is equation counting-- how many degrees of freedom? How many numbers are there that we don't know? And how many constraints do we have? Over here, we have one constraint, that equation. We have one unknown, u-- perfect match. So we know that we are likely to get a finite number of solutions. And because it's linear, we get one solution. But when you look at this, you'll see there's one equation, one constraint, but we've got two unknowns. We've got u and v. So if we're trying to recover the optical flow, which is that vector field that we're discussing, it's like we don't have enough information here to do that, which is dramatically different from the 1D case, where we had the match between the number of equations and the number of unknowns. So what do we know? So are we lost at sea? Is it hopeless? So let's look at just what that constraint provides us. So again, the overall objective here is we have a time-varying image, or in the discrete case, a sequence of image frames. And we're trying to recover the motion. And we find that assuming that things don't change in brightness, the images don't change as they move, we end up with that constraint equation. And what does it tell us? Well, one way to think about it is to plot it in velocity space. So we're used to plotting images with x and y as coordinate axes, but actually for some purposes, it's useful to have a different kind of representation. And this is one. And what it means is that any point in here is a particular velocity, that velocity. And for example, this is 0 velocity. It's not going anywhere. Velocity space was used at one time in physics much more than it is now. And it's actually kind of neat. 
For example, if you look at planetary orbits, they're ellipses with the Sun at one focus. And ellipses are complicated. If you plotted them in velocity space, they're circles, which was exploited in the early days before people could just feed equations into a computer and have them solved, where they had to reason about things geometrically. Anyway, velocity space is neat. And here we have a constraint in velocity space. So this is saying that before we opened our eyes and looked at the image, we didn't know anything. The velocity could be anywhere in this plane. But now we have one constraint on it. So what does that tell us? That must somehow limit the solution space. And yes, it's a linear equation in u and v, and so a linear equation in the 2D world corresponds to what? STUDENT: [INAUDIBLE] BERTHOLD HORN: Line, yes-- y equals mx plus C, for example. So it's going to be a line. And that's great, because it means that we haven't solved the problem, but we've come a long way. So before, it could be anywhere. Now, it's going to be on the line. What we really like is to pin it down to a point. So we've come partway. And so what line is it? Well, we can rewrite that equation like that. And rewrite it a bit more by normalizing this vector. So I'm turning this vector into a unit vector. So first of all, again, a reminder that this is the brightness gradient. And the brightness gradient is very important for all sorts of things. If it's 0, nothing much is happening because the brightness is constant. And more likely, it has a high value at a transition between-- if I'm looking at a transition between the wall and the blackboard, there's a big change in brightness. And therefore, the derivatives will be large. And therefore, the brightness gradient will be large. And not only that, but the brightness gradient as a vector will be pointing perpendicular to the transition. Because E x is very large here, E y is 0. So this brightness gradient is very important. 
And this is a unit vector, as you can easily tell by taking the sum of squares of the two components. And why am I doing this? Well, because I'm interested in the component of u and v in this direction, the direction specified by the unit vector. And it's this constant. And so what this is saying is that u and v are perpendicular to some line. And this is what it looks like. You can check that. I mean, basically, you need to find this point, and then calculate the distance from the origin. And it's going to be that. So not too interested right now in the details of that, other than that we've shown that it's a line, and that the line depends on the brightness gradient. So it tells us a lot of things already. One of them is that when you make this localized measurement and you are under constraint, you don't have enough equations, you do know something. For example, in this diagram, I don't know what u and v is, but suppose that I give you a different coordinate system. Locally, you're going to see linear patterns. You won't see that. If I keep on magnifying this, it's going to be more and more like a linear relationship. And so the argument is that if you look through an aperture, you cannot determine the motion, you can determine the motion in the direction of the brightness gradient. So that's what we did over here. So in the 1D case, we were done here. In the 2D case, we're not done because we have a mismatch of constraints and unknowns. And so we need more constraints. So we need to know something else. And sometimes, there's prior knowledge of the environment that helps you. Sometimes, you can look again at another time and get extra information. Now, let's suppose that we have the optical mouse problem, where the whole image is moving as one. You're over a flat surface at a constant distance from the lens. And to a very high degree of approximation, the image is just moving as one. So we don't just have one pixel. We can do the same thing at two pixels. 
That's at pixel 1. And that's at pixel 2. And our job is to recover u and v. Well, it's two linear equations. We can write them that way, and then solve. Now, because we've got two constraints now, we have enough constraint to solve for the two unknowns. And it's just a linear equation, so it's very simple. We can actually write the answer out explicitly. So it's very mechanical. I mean, we just invert this 2 by 2 matrix, and multiply the result by that vector, and we're done. And what is this? Well, this is the determinant of that matrix. So for 2 by 2 and 3 by 3, we can explicitly write it down. Otherwise, we'll just use Gaussian elimination to solve the set of linear equations. So in a way, we're done. We need two pixels. We can solve this, and now it's noisy, so we can improve the result by taking more than two pixels. But before we do that, it's always important to check the edge conditions. This can fail. And it fails when the determinant is 0. So when does that happen? Well, can that happen? Well, sure. Let's rewrite it one more time. So we have this nice linear method, that if we make these measurements at two pixels, we can solve for the motion. But it won't work if this is the case. So what is that? Well, this tells you something about the brightness gradient. This is the direction of the brightness gradient, well, the tangent of it. So what this is saying is that you have a problem if they're related. So this is where they don't have to be the same. I mean, obviously, if the brightness gradients are the same, you subtract those two quantities, you get 0, and things blow up. But they don't have to be the same. They just need to be proportional to each other. We're just saying that the ratio of E y to E x is the same. And that makes sense, again, because that means that the brightness gradient is the same in the two places. That means that the constraint we get out of the equation is in the same direction. It's not providing new information. 
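A sketch of that two-pixel solve, written out via the 2 by 2 determinant exactly as above, returning None when the two gradients are parallel-- the failure case being discussed. The names and the epsilon threshold are illustrative:

```python
# Solve the brightness change constraint at two pixels:
#   Ex1*u + Ey1*v = -Et1
#   Ex2*u + Ey2*v = -Et2
# Fails when the two brightness gradients are parallel (determinant ~ 0).
def flow_from_two_pixels(g1, g2, eps=1e-9):
    (Ex1, Ey1, Et1), (Ex2, Ey2, Et2) = g1, g2
    det = Ex1 * Ey2 - Ey1 * Ex2
    if abs(det) < eps:
        return None                 # aperture problem: no new information
    # Cramer's rule on the 2x2 system
    u = (-Et1 * Ey2 + Et2 * Ey1) / det
    v = (-Ex1 * Et2 + Ex2 * Et1) / det
    return u, v

# Gradients in two different directions pin down (u, v) = (1, 2):
g1 = (1.0, 0.0, -1.0)   # (Ex, Ey, Et) consistent with u*Ex + v*Ey + Et = 0
g2 = (0.0, 1.0, -2.0)
print(flow_from_two_pixels(g1, g2))  # (1.0, 2.0)
```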
Or go back to this diagram-- we now have two parallel lines. Well, if there's no measurement noise, they'll be on top of each other. And then you intersect them, and what do you get? Well, the same line, so that's not very interesting. Or if you have noise, it'll be even worse because they won't even intersect. So this tells you a number of things. One of them is this isn't going to work if the brightness gradient is the same everywhere. And that makes perfect sense. That's the aperture problem. It also tells you that maybe you need to somehow weight contributions from different image regions based on this. Maybe you don't want to have contributions from a lot of image regions with the same brightness gradient because they're not really providing good constraint. They're all doing the same thing. So that tells you that rather than take this result for two pixels, and then apply it to other pairs of pixels, and just take the average, you'll want to weight it in some way. And that's kind of kludgy. I mean, that's trying to fix a problem in the simple way. But we can do this much better. So we have this equation that's supposedly true everywhere in the image. This is our magic equation, which relates image motion to brightness gradients. And that's supposed to be 0. Now, if there's measurement noise, it won't be 0. Hopefully, it will be small. If you plug in the wrong values of u and v, it won't be 0. So maybe one way of solving this problem is to search for u and v that makes that 0. Or if you can't make it 0, make it as small as possible. And so that motivates this approach. So what's going on here? Well, the integrand is just this expression. And if everything was perfect and you have the correct values of u and v, the integrand would be 0, and you integrate it over the whole image, you get 0. And that's the smallest you can get because it's quadratic. You can't get less than 0, so that's it. If you plug in the wrong values of u and v, well, it will not be 0.
And so you can base a strategy for finding u and v on that. Now, there are cases where this is going to fail. One is if, I don't know, you're without a light in a coal mine. If E is 0, all hope is lost, obviously. If E x and E y are 0, all hope is lost because that means you're looking at a wall that's constant brightness. And you won't be able to see it moving because there's nothing on it to catch your attention and to track. If there was some texture on it, then E x and E y would not be 0 in some places. And so this is a sort of approach we're going to use a lot. So we have this constraint, which would apply in an ideal case where we know the answer u and v. And we're going to take contributions of that. And now, we can't just add them, because some of them might be positive and some might be negative. So it doesn't make sense to minimize the integral of them. We turn them into something that's always positive by squaring it, and so we're going to find these answers by minimizing that. Now in this case, this is pretty straightforward. We now take that, and this is a calculus problem. So this is a function of u and v, and we're going to set the derivatives with respect to u and v equal to 0. And in this case, we're lucky enough that the equations are linear. And we can solve it. And actually, the answer is very similar to this. We're going to end up with two equations and two unknowns, one equation from the derivative with respect to u of 0 and one with the derivative of v as 0. And so two equations, two unknowns-- cool, but that can fail because they might not actually be different equations. They might be linearly dependent. So we'd want to worry about that. And in this case, the failure mode is-- that turns out to be the determinant of that 2 by 2 matrix. So that corresponds to this thing over here. And in this case, that's when things fail. So when can that happen? Well, certainly if E is 0 or E is a constant because then E x and E y are 0. 
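A sketch of that least-squares recipe over all pixels, with a discrete sum standing in for the integral: accumulate the two normal equations (from setting the u- and v-derivatives to zero), check the 2 by 2 determinant, and solve. The function name and synthetic data are made up for illustration:

```python
import numpy as np

# Minimize sum over pixels of (u*Ex + v*Ey + Et)^2.
# Setting d/du = 0 and d/dv = 0 gives two linear normal equations:
#   u*sum(Ex^2)  + v*sum(Ex*Ey) = -sum(Ex*Et)
#   u*sum(Ex*Ey) + v*sum(Ey^2)  = -sum(Ey*Et)
def flow_least_squares(Ex, Ey, Et, eps=1e-9):
    A = np.array([[np.sum(Ex * Ex), np.sum(Ex * Ey)],
                  [np.sum(Ex * Ey), np.sum(Ey * Ey)]])
    b = -np.array([np.sum(Ex * Et), np.sum(Ey * Et)])
    if abs(np.linalg.det(A)) < eps:
        return None                 # all gradients parallel: under-constrained
    return np.linalg.solve(A, b)

# Synthetic gradients exactly consistent with one flow (u, v) = (0.5, -0.25):
rng = np.random.default_rng(1)
Ex = rng.normal(size=100)
Ey = rng.normal(size=100)
Et = -(0.5 * Ex - 0.25 * Ey)
print(flow_least_squares(Ex, Ey, Et))  # approximately [0.5, -0.25]
```

With noiseless, consistent data the solve recovers the flow exactly; with real data it averages the noise down, as discussed above.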
Or maybe if just E x is 0 everywhere because then this integral is 0 and that integral is 0. So let's see-- so if E x is 0 and E y is not 0, what is that? Well, that means that I have one of those pictures that only varies in one direction. E x is 0, meaning it's constant in the horizontal direction. E y is nonzero. It's varying in brightness in the vertical direction. And if I were to draw isophotes, they'd look like this. So here E x is 0. E y is not 0. And we know that there's a problem. If I move in this direction, the image doesn't change, so can't solve. So E x equals 0 is a problem. Similarly, E y equals 0 is a problem, just turned 90 degrees. So those are kind of obvious. I mean, we know those aren't going to work. Is there anything else? Well, how about E x equals E y? Well, if that's the case, then these two integrals are the same, and actually, that integral is the same, also. So we're going to get integral e x squared all squared minus this thing all squared-- 0. So that's bad. So what is that? Well, that's where isophotes are running at 45 degrees because the brightness gradient has the same x component as y component. The brightness gradient is perpendicular to the isophote. And so E x E y means it's a 45-degree angle for the gradient. So the isophote is 45 degrees down, so that's the picture for that. And of course, that is really the same thing as that, just rotated. And one of the things we're going to do is say that, well, something shouldn't depend on our choice of x and y and the image coordinate system. I mean, the answer would be different, but in the new coordinate system, it should mean the same thing. So in this case, I shouldn't get some result that's particular to the x- or y-axis. It should just give me the same result if I rotate. So that's a 45-degree rotation. I can get any rotation by doing something like that. Because then E y over E x is some constant, and that's the tangent of the angle of rotation or something like that. 
Plug that in here, and of course, you'll see that you get k squared times the integral over here. And the k squared appears over there. They cancel each other out. So this is actually the most general case. All of these others are just special cases of that one so that summarizes all of the cases we found. They're all with isophotes at some angle, parallel lines. And then the important question is, is that it? It turns out yes, that's it. And it's not easy to prove. You need to at least use the triangle inequality or something profound like that. So it's actually not that hard, but I think it's not really useful to go there. So when can we do it? Well, as soon as there's curvature in the isophotes. Because if the isophote looks like this, if I move it tangent to the isophote, there are still changes. So the only problem is where the isophotes are all straight lines, parallel straight lines. And in another way of thinking about it, if you have areas in the image where there's a "corner," something where the isophote makes a sharp turn-- so here are the brightness gradients. If there's some small area where the directions of the brightness gradients change a lot, that's good. Because what we're worried about is that the brightness gradients are all parallel to each other. That's the bad situation. And so for many purposes, like image alignment and recognition, we look for places that are "interesting." And one interesting way is to look at the isophote curvature or equivalently, the rapid turning of the brightness gradients. Because in other areas of the image, there's less constraint. If I have a part of the image that looks like this, on its own, it doesn't provide enough constraint. Now, if next to it is another area of the image where things look like this, if I have both of these, then that's fine. But if everything has the same gradient direction, that's not satisfactory. So homework problem theoretically due on Thursday. 
And I know we haven't done a whole lot on this. So if you have problems with it, send me email, and I may say something that doesn't give away the answer but helps you. And we'll go from there. What we're going to do next is talk a little bit about this concept of noise gain. So the idea is that all this time, in the back of our minds, we've been thinking about how we only have pixels with eight bits, and they're noisy, and our calculations are going to be flaky unless this determinant is large, and so on. So there's a lot of attention to not just a formula for computing the answer, but making sure that the answer is actually meaningful, as opposed to you've taken two noisy numbers and divided them by each other, and the result is not very predictable. So in noise gain, there's a very precise way of talking about it, which is saying if I make this much of a change in the image, what is the change in the result? So in this case, I'm trying to get the velocity of motion. How much is the velocity of motion going to change if I change the brightness gradient somewhere-- if I have a measurement that's slightly off? And of course, we want that to be not sensitive. We don't want it to be very sensitive to noise. And so we'll make that clear in terms of a one-dimensional transformation where we know the forward function, and we're trying to invert it, and we want to know when that inversion is sensible and when it's not. Oh, questions?
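For the one-dimensional case mentioned here, the noise gain of inverting y = f(x) is just the reciprocal of the slope: an error dy in the measurement produces an error of about dy / f'(x) in the recovered x. A small sketch (names and the example function are mine):

```python
def noise_gain(f, x, h=1e-6):
    """Sensitivity of inverting y = f(x): an error dy in the measured y
    produces an error of about dy / f'(x) in the recovered x. A large
    gain (small slope) means the inversion amplifies noise."""
    fprime = (f(x + h) - f(x - h)) / (2 * h)  # central-difference slope
    return 1.0 / fprime

# Example: inverting y = x**3 is well behaved away from the origin,
# where the slope is large, and badly conditioned near x = 0.
f = lambda x: x**3
assert abs(noise_gain(f, 2.0)) < 1.0      # f'(2) = 12, small gain
assert abs(noise_gain(f, 0.01)) > 100.0   # f'(0.01) ~ 3e-4, huge gain
```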
MIT 6.801 Machine Vision, Fall 2020
Lecture 7: Gradient Space, Reflectance Map, Image Irradiance Equation, Gnomonic Projection
BERTHOLD HORN: We're starting to talk about what determines brightness in an image and how we can exploit that. And we introduced the idea of a gradient space. Why? Well, because brightness is going to depend on the illumination, obviously, and it's going to depend on the geometry of the situation, including surface orientation. Obviously, the amount of light falling on the surface will, per unit area, depend on its orientation. And then different types of surfaces will reflect that light in different ways. In any case, we expect that the brightness we observe in an image is going to depend on the surface orientation of the corresponding patch on an object. And so we need to talk about the orientation of the patch. And we had unit normals, and then we had p and q, but these are just convenient shorthands for those slopes-- not slopes in the image, but slopes on the surface. So those are derivatives of height rather than derivatives of brightness. And then, since we were busy with photometric stereo, we just talked a bit about Lambertian surfaces, which have the property that their brightness depends on the cosine of the incident angle and does not depend on the viewing direction. So a surface of that type will appear pretty much equally bright from all viewing points that you might have, which is fairly common for material on our human scale. So what determines how surfaces reflect light? Well, we're getting into that, but largely it's microstructure. You know, photons get into the fibers in my paper, they bounce around, they come back out again in a different direction. And that's what determines how bright the surface will appear from a certain direction. And so a lot depends on the imaging situation. If I'm looking at the moon, what constitutes the microstructure is craters, not fibers of paper. So as we'll discuss later, Lambertian surfaces or near-Lambertian surfaces are fairly common in our world.
A lot of matte surfaces are pretty good approximations, snow and whatever. But they don't necessarily apply when we go to microscopic scale or to a cosmic scale. Anyway, so Lambertian is a handy approximation for some surfaces. And we can address it in different ways. And if we expand out that n dot s, you may remember we got something like this, which is linear in p and q. So that's the good part. But then unfortunately, we divide by this term here, which is not linear in p and q. And there's also this term, although we don't worry too much about that because that's a constant. If the light source is somewhere as defined by ps, qs, then that's a constant. And I'm going to get tired of writing that, so I'm just going to introduce a constant rs to represent it. And then we went through the business of what are the isophotes. Well, that's where this expression is constant. And we can get rid of the square root by squaring. And if we do that, we end up with an expression, which is second order in p and q. So it's got p's, p squared, q squared, pq, p and q, and constant. And plotting those gives us conic sections. And in particular, we end up with a diagram like this. And what is this? Well, this says if you tell me what the surface orientation is, I can tell you how bright it's supposed to look. So again, imagine a terrain built above the floor. And I don't know-- x-axis to the right and y-axis forward. Then I can take the derivative in the x direction. That's p. And I can take the derivative in the y direction. That's q. And those define the orientation locally of that surface. Of course, it might be a different p and q somewhere else. And that defines a point in this plane. And I can go to that point and say, what is the value of this function there? And that will be the brightness. And this is obviously handy in graphics because we can have surface models. We can determine not just the z position, the depth, but also these derivatives-- surface orientation.
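The Lambertian reflectance map described here, linear in p and q in the numerator and divided by the nonlinear square-root term and the constant rs, can be written out as a small sketch (the function name and the clamping at zero for self-shadowed orientations are my choices):

```python
import math

def R_lambertian(p, q, ps, qs):
    """Lambertian reflectance map R(p, q) = cos(incident angle):
    the dot product of the unit normal (-p, -q, 1)/sqrt(1 + p^2 + q^2)
    with the unit source direction (-ps, -qs, 1)/rs, where
    rs = sqrt(1 + ps^2 + qs^2). Clamped at zero for orientations
    facing away from the source."""
    rs = math.sqrt(1 + ps*ps + qs*qs)
    num = 1 + p*ps + q*qs
    return max(0.0, num / (math.sqrt(1 + p*p + q*q) * rs))

# Brightest where the surface faces the source directly: (p, q) == (ps, qs).
assert abs(R_lambertian(0.3, 0.4, 0.3, 0.4) - 1.0) < 1e-9
# Orientations tilted far away from the source are in shadow.
assert R_lambertian(-10.0, -10.0, 1.0, 1.0) == 0.0
```

Setting R to a constant and squaring gives the second-order expression in p and q mentioned above, whose level sets are the conic-section isophotes.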
And then we go to this diagram which could be perhaps a lookup table in the computer. And we just look up the appropriate gray level to paint at that point in the image. That's the forward problem. And we are dealing with the inverse problem. So our problem is, OK, I have measured the brightness of E equals E1. What can you tell me about the surface orientation? Well, it's confined to that curve. It's unfortunately not going to tell me uniquely what the surface orientation is. But it's restricted now to a great deal. And so I need additional information. And there are various ways of getting additional information. One is to say, well, most objects in the world aren't haphazard collections of blobs in three space, but they hang together. Objects are solid. They have surfaces. Neighboring points tend to often have similar properties. Different parts of this table have pretty much the same surface orientation until you get to the edge. But that's a hard constraint to implement. We'll get to that later. A much easier idea is, well, if I illuminate the surface differently, I'll get a different map. And I'll get a constraint on that map. So for example, suppose I move a light source from there to over here, and then I draw the same diagram. And now I measure the brightness in that second image under different illumination. And let's suppose it comes out to be, I don't know, this one, here. Well, then I know that the surface orientation is where those two curves intersect. So that's photometric stereo done in the graphical way. We've done it before in an algebraic way. Of course, that algebraic way only worked if we had Lambertian surfaces. This is going to work for any surface as long as you can draw this diagram, which is called a reflectance map. So the reflectance map basically is a diagram that shows you for every orientation how bright that surface will look for that orientation. 
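The graphical photometric stereo idea-- intersecting the two isophote curves in gradient space-- can be sketched with a brute-force search over the p-q plane. Everything here (function names, source directions, grid resolution) is illustrative; the Lambertian map is used only because it is easy to write down, while the method itself works for any reflectance map you can tabulate:

```python
import numpy as np

def R_lamb(p, q, ps, qs):
    """Lambertian reflectance map (vectorized), clamped at zero."""
    num = 1 + p*ps + q*qs
    return np.maximum(0.0, num / (np.sqrt(1 + p*p + q*q)
                                  * np.sqrt(1 + ps*ps + qs*qs)))

def intersect_isophotes(E1, E2, s1, s2, lim=2.0, n=401):
    """Find a gradient-space point where the isophote R1 = E1 meets the
    isophote R2 = E2, by exhaustive search over a grid. There can be
    zero, one, or two intersections; this returns one best grid point."""
    g = np.linspace(-lim, lim, n)
    P, Q = np.meshgrid(g, g)
    err = (R_lamb(P, Q, *s1) - E1)**2 + (R_lamb(P, Q, *s2) - E2)**2
    k = np.argmin(err)
    return P.flat[k], Q.flat[k]

# Simulate a patch and recover an orientation consistent with both
# measurements (it may be either of the two intersection points).
s1, s2 = (0.7, 0.3), (-0.2, 0.6)
E1, E2 = R_lamb(0.5, -0.25, *s1), R_lamb(0.5, -0.25, *s2)
p, q = intersect_isophotes(E1, E2, s1, s2)
assert abs(R_lamb(p, q, *s1) - E1) < 1e-3
assert abs(R_lamb(p, q, *s2) - E2) < 1e-3
```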
Now we've talked a lot about Lambertian surfaces, and that's mostly because they allow us to write down the equations and solve them. But real surfaces aren't perfectly Lambertian. And some are dramatically non-Lambertian. So what do we do in that case? Well, it's pretty easy to imagine we can create a diagram like this and use that. And we can perhaps think about building a lookup table. Now this is going the wrong way. Here, if you've got p and q, you can look up what the brightness should be. We need a table that goes in the opposite direction. So let's think about how to do that. So one way is to-- let's just say a normal inversion. One way is to take a surface element and look at it under different lighting conditions and record what you see and then repeat that for different orientations. And you can do that. It's going to get pretty tedious because you have to basically explore this whole space. And every cell in this space in your lookup table is going to require that you reorient that piece. Of course, you can automate that and have some robot do the calibration for you. But it's an alternative that's often easier is to use a calibration object of known shape. And what better than a sphere, partly because it's very easy to make a sphere? And what do we do? Well, we take an image. First, we'll have it lit up from all sides. And the sphere, of course, we'll image as a circle. And presumably the brightness inside the circle is going to be much larger than outside. So we should be able to distinguish the two and fit a circle to this. So we take this image, and then we fit a circle. And what does that mean? That means we find estimates for the center and the radius such that all points within that are bright, and all points out are not. And you can do some subtle, clever things to make that accurate to subpixels and so on, but we won't talk about that now. So what good is that? Well, for a sphere, we have a very convenient relationship. 
If we draw a vector from the center of the sphere to the surface, and then we, at that point on the surface, draw a unit normal, guess what? Those are parallel. That's a unique property of the sphere. So it makes it really easy to know what the surface orientation is because we just connect the center of the sphere to the point on the surface you're interested in. And of course, this doesn't quite apply to the Earth, because it's not a sphere. So we need to modify that slightly. And that's why there are several different definitions of latitude, depending on whether you're talking about the local surface normal or the vector from the center of the Earth. And usually we take the local surface normal as the definition for the angle that's latitude. So now what? Well, we could be a little bit more precise here. So we have-- for every point in this image now-- let's call this point x, y-- we can calculate what the surface orientation is. And how do we do this? Well, let's start with a cross-section. Here, we're looking down on the sphere. Now we're looking sideways across it. And I don't know. The camera is way up there. So the surface normal is parallel to this. It's just-- well, this is the point x0, y0, z0, and this is the point x, y, z. And so that's the surface normal. So all we really need is a formula for z minus z0. We're measuring x and y in the image plane, so we know those. We can't measure the depth, so we have to calculate that. And of course, it's just going to be the square root of r0 squared minus x minus x0 squared minus y minus y0 squared. Where's that come from? Well, it just comes from the formula for a sphere, which is that the radius vector has a fixed size. x minus x0 squared plus y minus y0 squared plus z minus z0 squared is r0 squared. So having that, we can then compute p and q. That comes from the formula we had last time. Last time, we showed how to compute n from p and q. And we also said you can go the other way. If you're given a p and q, you can compute n. And so that's where that comes from.
And you'll notice that-- oh, sorry. I think this is plus. Let's see. If we go to the right of the center, the vector tilts to the right, so it's plus. Sorry. So for every pixel, we can-- we know the orientation of the surface, p and q. And then we take a picture. And we get a map under different lighting conditions. So under one lighting condition, we get one-- E1. Under another one, we get E2. And let's suppose we take three. So we develop this numerical mapping from surface orientation to brightness in three pictures taken under different lighting conditions. So we're treating pixels completely independently. Just think of a single point that we're imaging and a single pixel that's imaging because we repeat that at all the others. So now that's going the wrong way. We really want to go the other way. We want to use this information so that when we then later take three images of an object under the three lighting conditions, we can go to some sort of table or some calculation that gives us p and q. So we want to go that way. And in terms of the implementation, we left Lambertian behind. Lambertian's nice and analytic. You can invert it and do all sorts of stuff. But once we've decided that we want to be completely general, then we can't depend on analytically inverting things. So let's use a numerical table. And so this is a three-dimensional array in the computer. And each little voxel here is one entry. And what is in that box? Well, each of these little boxes has a p and q. So what I would do is I would make the measurement in three image situations-- use them-- quantize them to the discrete intervals of this lookup table. Go to that place, and it tells me what the surface orientation is. So that's what I'd like. But how do I build that? So is that clear? I mean, you can see how computationally trivial this is. It costs you nothing just about to interpret these images. There's no complicated iteration or anything. It's just a table look-up. 
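The calibration-sphere geometry above can be sketched in a few lines. The sign convention here assumes z is depth measured away from the camera, matching the "plus" the lecture settles on; with z measured toward the camera, both signs flip. The function name is mine:

```python
import math

def sphere_pq(x, y, x0, y0, r0):
    """Surface gradient (p, q) at image point (x, y) on a calibration
    sphere with fitted outline center (x0, y0) and radius r0, using the
    fact that the surface normal on a sphere is parallel to the radius
    vector from the center. Assumes z is depth away from the camera."""
    dx, dy = x - x0, y - y0
    dz2 = r0*r0 - dx*dx - dy*dy
    if dz2 <= 0:
        raise ValueError("point is outside the sphere's outline")
    dz = math.sqrt(dz2)  # |z - z0| from the sphere equation
    return dx / dz, dy / dz

# At the center of the outline the surface faces the camera: p = q = 0.
assert sphere_pq(0, 0, 0, 0, 1.0) == (0.0, 0.0)
# Halfway out to the right, the surface tilts to the right (p > 0).
p, q = sphere_pq(0.5, 0.0, 0, 0, 1.0)
assert p > 0 and q == 0.0
```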
It's hard to imagine anything cheaper than that. So if I can build this table, I'll have a very efficient method for getting the shape. Well, not quite. We'll get local surface orientation. We still have to talk about how to patch that together into shape. But for the moment, we're just going to get surface orientation. Now what I'm doing then is I'm running over this calibration image. And at every pixel, I'm looking-- I'm computing p and q. And I'm measuring E1, E2, and E3. And I use that to put something into this table. So I can quantize my E1, E2, E3. That gives me an index into this 3D array. And I write the p and q there. And I do this for every pixel. And there might be some overlap because I can't represent the table with infinite precision. I'll have to make some sort of compromise where I say, OK, I'm going to quantize-- let's suppose I quantize to 100 different values. That means the lookup table will be what? A million entries. It's a lot in terms of cache size and so on. But it's a reasonable value, whereas, let's say, a petabyte lookup table probably wouldn't be very satisfactory. But in any case, that means we need to quantize. And we might need to quantize fairly coarsely, like 1 part in 100. And that means that some of these points may produce the same-- they happen to produce the same rounded-off values. And so they'll be writing on top of each other. So that's one issue, and, well, how do you deal with that? Well, one way is to average them, because presumably, they're slightly different p and q's. And by combining them in some weighted fashion, we can get higher accuracy. But a more serious problem is there could be cells here that never get touched, that never get filled in. And there's a couple of reasons for that. One is just the nature of these numerical quantization effects. But there's a more basic one, which is that p-q space-- gradient space-- is a two-dimensional space. And we're mapping it here into a three-dimensional space.
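Here is a sketch of building that calibration table, including the averaging of colliding entries just described. The bin count is kept small only so the example is cheap; all names and parameters are illustrative:

```python
import numpy as np

def build_lookup(pq_pairs, triples, nbins=32, emax=1.0):
    """Build the (E1, E2, E3) -> (p, q) calibration table, averaging
    samples that quantize to the same cell. Cells that are never
    touched stay NaN, which later flags shadows, interreflection, and
    other departures from the calibration surface."""
    table = np.full((nbins, nbins, nbins, 2), np.nan)
    counts = np.zeros((nbins, nbins, nbins))
    for (p, q), (e1, e2, e3) in zip(pq_pairs, triples):
        idx = tuple(min(int(e / emax * nbins), nbins - 1)
                    for e in (e1, e2, e3))
        if counts[idx] == 0:
            table[idx] = (p, q)
        else:  # running average of colliding (p, q) entries
            table[idx] = ((table[idx] * counts[idx] + np.array([p, q]))
                          / (counts[idx] + 1))
        counts[idx] += 1
    return table

# Two calibration samples landing in the same cell get averaged.
t = build_lookup([(0.1, 0.2), (0.3, 0.2)],
                 [(0.5, 0.5, 0.5), (0.5, 0.5, 0.5)])
assert np.allclose(t[16, 16, 16], [0.2, 0.2])
```

At lookup time, interpreting a triple of measured brightnesses is then just quantize-and-index, which is the table look-up whose cheapness the lecture emphasizes.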
So we're not actually filling that space at all. So what do we get? Well, we get a surface in that space. So if we ignore the quantization for the moment, we're not filling that space. We're getting some surface. And if you like, you can address points on that surface using p and q. So that's point number one. So that means that later on, if I find some combination of E1, E2, and E3, it's quite possible that it won't happen to be on that surface. Then that means the lookup table is not giving me an answer. So what's with that? Well, if you remember, when we went from two images to three, we introduced the albedo. So that was one case where making a problem more complicated made it easier. Before, we only had two unknowns, p and q. And we ended up with a quadratic, so we had two solutions. And we said, ambiguity. Let's add a third unknown. And then suddenly, we get linear equations, unique answer. Well, this would apply here as well. So our calibration object, of course, is made with one particular albedo. Perhaps we painted it white, and so its albedo is basically 1. But suppose we also want to deal with similar objects where not all the light is reflected? And so how do we do that? Well, it's very easy, because the albedo rho linearly scales E1, E2, E3. And so that means that anything on a ray out here is connected. And rho is 0 on this end and 1 on that end. So suppose we've painted our sphere nice and white, and we made these measurements. Then we've defined that shape, that surface in the space. And now that's for rho equal to 1. Because it's perfectly linear for other rhos, we can just generate those. So when we place an entry in the lookup table up here, we can just say, OK, now we scan along this line. And we fill in all of those cubes. And now instead of just writing a p and q, we write p, q, and rho in there. So it's a 3 to 3 lookup table now-- three dimensions to three dimensions.
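The scan-along-the-ray step can be sketched as follows: each full-albedo measurement defines a ray through the origin of (E1, E2, E3) space, and every cell the ray passes through gets (p, q, rho) written into it. The step count, bin count, and names are all arbitrary choices of mine:

```python
import numpy as np

def fill_ray(table, e_white, p, q, nbins=32, emax=1.0, nsteps=200):
    """Fill the 3D table along the ray rho * e_white for rho in (0, 1],
    storing (p, q, rho) at each cell, since the albedo rho scales
    E1, E2, E3 linearly. e_white is the measurement for albedo 1."""
    for rho in np.linspace(1.0 / nsteps, 1.0, nsteps):
        idx = tuple(min(int(rho * e / emax * nbins), nbins - 1)
                    for e in e_white)
        table[idx] = (p, q, rho)
    return table

t = np.full((32, 32, 32, 3), np.nan)  # NaN marks "no answer" cells
fill_ray(t, (0.9, 0.6, 0.3), p=0.1, q=0.2)
# The cell for the full-albedo measurement holds rho = 1.
assert np.allclose(t[28, 19, 9], [0.1, 0.2, 1.0])
```

Cells left NaN after all rays are filled are the ones that flag shadows, interreflection, or the wrong surface material, as discussed next.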
And in a real situation, maybe you know that the object is supposed to be a certain color. And so having a value, different value of rho, is not really acceptable. It indicates there's something wrong. And that can be pretty useful, like there's a smudge on the surface, or it's not actually the surface you were told it was. So it's like a check-- an error condition that you can check for. Or, for example, suppose that there's something blocking one of the three light sources, casting a shadow. Well, that means that one of these E1, E2, E3s is going to be relatively small. And so you'll be away from the surface. And if you're using this method, you can pick that up and say, oh, I don't know what p and q is here, but there's something weird going on. Or I don't know if you remember the slides, but one of the other things that can happen is that if highly reflective surfaces are close to each other, there will be interreflection. And we will have brightnesses that are abnormally high. And we'll actually be outside the surface. And again, we can say, OK, my method doesn't tell me what the surface orientation is. But there's something going on here. And so I'm not going to make that part of the surface. I'm going to use that to break up the image into parts that hang together and parts that don't. So in the case of those overlapping donuts, this is a way to segment it into-- you start off, the image is one thing. It's just e of x and y. But you know that it's an image of multiple objects. And so segmentation is a big problem. And if we use these methods, we can segment on cast shadows where one donut casts a shadow on another. And we can segment on areas of high interreflection which is where they touch. And so what looks like a drawback can actually be helpful. Now this still doesn't guarantee that we'll have filled in all of the voxels in this three-dimensional space. 
So actually, there's a customary way of solving this kind of problem, which is to go the other direction. So what we did was we went to the image. And we said, OK, for that pixel we get p and q. And we measure E1, E2, E3, then we put that in this table. The other way is to systematically step through the table rather than step through the pixels, and then for each of these voxels, find the corresponding thing over there. And that way, you can be assured that you filled in the table. This way, you're projecting in a nonlinear way from a lattice, a cubic lattice, onto this curvilinear space. And there's no guarantee that you won't have overlaps, and yet you will fill everything in. But that's fairly boring, so I won't talk about that. Now I was going to talk about Lambert's grease-spot paper and explain exactly why when you cannot see the fatty spot, the illumination from the left and the right is equal. But I'm hoping you'll believe me. It's just a page of fairly boring algebra. So maybe we'll turn it into a homework problem or something. Anyway, that was Lambert's instrument for discovering a lot of things about photometry. So let's get a little more serious about photometry. We use these terms like "brightness" and "intensity" rather loosely. And it's good to make them precise. So the first term is "irradiance." And it's the most trivial concept you can imagine. Here's a patch of the surface of area delta A. And here's some light source. And there's a bit of the power emitted by the light source, which is intercepted by the surface. And the irradiance is just the power per unit area. And in terms of-- it's watts per square meter, if you like. And just for reference, noonday sun in Washington, DC is supposedly 1 kilowatt per square meter. And what does that mean? Well, it's not a very precise value, because it will depend on the state of the atmosphere and the time of year. And what are you measuring? Are you measuring only visible light?
Are you measuring near infrared as well, and so on? But roughly speaking, that's a useful number to know, because from that, we can calculate things like-- suppose you have an image sensor that's 4 microns by 4 microns. What energy falls on that? And then you put that image sensor behind the lens, which attenuates it even more. How much comes out? So it's a very simple concept. And unfortunately, it's not terribly useful for us, because we have an imaging system. We're not exposing the sensor directly to the illumination. Well, if we did, we wouldn't learn anything about the environment. Now you might say, well, what we're interested in is how much light comes off. So we could have perhaps a complementary idea where we have-- give a name to this quantity, where we take the power that's emitted divided by the area. Trouble is, that could be going anywhere. And there is terminology for this quantity, but since it's useless to us, we're not going to bother with it. So, I mean, this is obviously useful if you're worrying about heat exchange, like how do I keep my satellite cool enough given that the sun is illuminating it on one side, and there's heat going off into black space in another direction, and so on? But it's not useful for us because we're not intercepting all of this. We're over here, intercepting a tiny part of that. And therefore, it matters that this radiation is not isotropic. It's not going in all directions equally. So we don't want that. So then many textbooks will use this term "intensity." And we often talk about-- well, I try not to talk about "image intensity," because it's really wrong. Intensity has a technical-- it's a technical term that has a meaning. And what is it? Well, it's useful for a point source to talk about how much radiation is going off into a certain direction. So here's a point source-- I don't know, a star, a light bulb. And we're measuring how much power is going in a certain direction. 
And, well, we need to normalize that, right? We need to take this cone of possible directions and somehow measure how big it is. So we have to define this measure, which is called the solid angle, and the units of which are steradians. And in case you haven't come across it before, it's very simple. So in the plane in 2D, the preferred way of talking about angles is in terms of radians. And it's what you get by cutting that circle with that angle and looking at the length of the arc of the circle and dividing by the radius. So we all know how to do that. Well, this is very similar in that we take this cone of possible directions in three space now instead of in two space, and we cut it with a sphere. And we imagine we're at the center of a sphere. We're at the center of the sphere. We're cutting it, and we get a certain area. And now to normalize it, we need to divide by r squared, because this area will grow with radius squared rather than radius. And so that's the definition of solid angle. So this allows us to talk about a set of directions. Doesn't have to be a right circular cone. Could be any shape. I could have something like this. All that matters is what this area is on the sphere. So radians-- we go from 0 to 2 pi. So what's the corresponding thing for steradians? So if I want to talk about all possible directions around me, how many steradians is that? What's the surface area of a sphere? Yeah? STUDENT: 4 pi r squared. BERTHOLD HORN: 4 pi r squared. Thank you. And so it's 4 pi, right? Here, we go 0 to 2 pi. And this goes up to 4 pi. So the set of all possible directions around me-- so if I'm radiating energy, it could go into a solid angle of 4 pi. If I'm, for example, only worrying about light coming from the sky, that's a hemisphere. So that's obviously 2 pi steradians.
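As a quick check of those steradian values, a right circular cone of half-angle theta cuts a spherical cap whose area over r squared works out to 2*pi*(1 - cos theta); that cap formula is the standard result, not derived in the lecture:

```python
import math

def cap_solid_angle(theta):
    """Solid angle in steradians of a right circular cone of half-angle
    theta: the area of the spherical cap it cuts, divided by r^2, which
    is 2*pi*(1 - cos(theta))."""
    return 2 * math.pi * (1 - math.cos(theta))

# Whole sphere (theta = pi): 4*pi steradians, as in the lecture.
assert abs(cap_solid_angle(math.pi) - 4 * math.pi) < 1e-12
# Hemisphere, like the sky (theta = pi/2): 2*pi steradians.
assert abs(cap_solid_angle(math.pi / 2) - 2 * math.pi) < 1e-12
```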
So there's just one more little subtle thing that's kind of handy, which is if that surface area is inclined relative to the direction to the center of the sphere, in that case, we have to include a factor of cosine theta-- so this is just another manifestation of that foreshortening phenomenon. And this is sometimes handy because we will have cases where there is an inclination. For example, now we're going to be talking about cameras. And the lens is going to be tilted relative to a subject that's off-center. And so we'll need to account for that. And that's obviously-- and why is that? Well, because this area at an angle is equivalent to an area that's at right angle to the axis. And the ratio of these two lines to each other is cosine theta. And that also is the ratio of the two areas. So now we know how to calculate solid angles-- very handy concept. We go back to the intensity. And intensity, I, is defined as the power per solid angle. And you can see that it's independent of distance. So if I go further out, it will cover a larger area. But we're assuming that there's no loss. So the power going into that cone is the same the further out I go. And so that's a useful quantity. And it's a way of describing how a distant point source might act. And usually in machine vision, that's not the case we're dealing with. We're dealing mostly with continuous surfaces. And we could imagine breaking them up into an array of point sources, but that isn't usually done, and it's not very helpful. So, intensity-- so if you use that word in your homework problems, you will flunk the course. No, just kidding. Since all the textbooks use intensities, unfortunately, we have to accept that that is an alternate term for something else, which we'll talk about next. So the true meaning of intensity is this. And it's never something that people in machine vision consider, unless they're doing things like trying to reconstruct the center of our galaxy and finding what that black hole looks like. So what is it that we want?
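Putting the foreshortened solid angle and the definition of intensity together gives the familiar inverse-square behavior; a small sketch (function names are mine):

```python
import math

def solid_angle(area, r, theta=0.0):
    """Solid angle subtended by a small patch of the given area at
    distance r, inclined by theta to the line of sight:
    omega = area * cos(theta) / r^2 (the foreshortening factor)."""
    return area * math.cos(theta) / (r * r)

def irradiance_from_point_source(I, r, theta=0.0):
    """Irradiance (W/m^2) on a patch at distance r from a point source
    of intensity I (W/sr): the power delivered is I times the solid
    angle the patch subtends, so per unit area it is I*cos(theta)/r^2.
    Intensity itself is independent of distance; irradiance is not."""
    return I * math.cos(theta) / (r * r)

# Doubling the distance quarters the irradiance but leaves I alone.
assert irradiance_from_point_source(10.0, 2.0) == \
       irradiance_from_point_source(10.0, 1.0) / 4
```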
Well, we kind of had it here. We don't want to know the whole power coming off the surface. We're only interested in what reaches the observer or the camera. And so we introduced this idea of radiance, which is power per unit area per unit solid angle. So we have a little part of the surface here with an area A. And we have a person or a camera over here. And this is a solid angle delta omega. And there's some power delta p going that way. And that's obviously much more what we want because that's what we're measuring. In the image, this power will be projected onto a certain area. And our sensors are measuring power, basically, which, by the way, is unfortunate. It would be great if they measured the electric field. But they're instead measuring the absolute value squared of the electric field. And so amongst other things, we know that quantity can never be negative. And so we can't do any phase imaging with our usual cameras. There would be some real advantages to that, but we can't do that. So that's radiance. And it's power per unit area per unit solid angle. Or in terms of units, it's watts per square meter per solid angle-- steradian. Now if you're a mechanical engineer, and you do dimensional analysis, that's kind of a nuisance, because steradians, like radians, doesn't have units, like meters per kilogram per second or something. They're just ratios. And so the clever trick mechanical engineers use to guess at the answer by just matching dimensions-- it doesn't quite work, because you have these quantities that are dimensionless. So what to do next? Well, what I want to do next is to relate brightness out there to brightness in the camera. And so I'm using this term "brightness" in a very loose way. And we'll justify in a little while why that's acceptable. So in a way, we can talk about a brightness of a surface in terms of radiance. This is how bright it's going to appear. Then in the image plane, we can talk about the brightness we measure as irradiance. 
So notice that before, I talked about light falling on an object in the world. But the same concept, of course, applies in the image plane. I've got these little areas that are light-sensitive. And what are they measuring? They're measuring irradiance. Well, they're actually measuring energy over a certain time. But of course, energy over a certain time is the same as power, so they are measuring power. So I want to relate those two. And what we're going to end up showing is that they're proportional to each other. And therefore, we can be sloppy. We can call both of them "brightness," as people actually usually do. And so once we've defined the-- being careful about the terminology-- we can justify being sloppy about the terminology. So for this to be meaningful, we need a finite aperture. So, so far, we've talked about the pinhole model. And the pinhole model gives us perspective projection, and it's very useful for that. But we know that in terms of image measurement it's problematic. And if we make the pinhole too large, the image is going to be blurred. And if we make the pinhole too small, we don't have enough photons to count. And also, there'll be diffraction. So if you make it small enough, it'll appear just like an isotropic source that's radiating into 2 pi steradians. So I don't know if anyone else thought about this. But in notation of infinitesimals, it is customary to do that, because you're dividing by two infinitesimals. So this should be, I don't know, two infinitesimals-- product of two infinitesimals. Now fortunately, we're not going to get too perturbed by that, but that's a good observation. And I guess in textbooks, that'll be the notation. And then you have someone else saying, oh, what do you mean? It's del squared p. That's the power going there. What is that? Why is it squared? So I didn't want to go there.
Lenses-- so what we're going to do is invent this device that has the property that it provides the same projection as the pinhole, but has the huge advantage that it actually gives you a finite number of photons. And there'll be a penalty, which is it only works at a certain focal length. It only works at a certain distance. So let's talk about lenses-- ideal thin lenses. So Gauss already showed that there's no such thing. You cannot make a perfect lens, but you can come incredibly close to making this ideal object. So let me-- there are three rules. So of course, you know that lenses are made of glass or transparent material. They have a refractive index that's different from air. And so light rays are deflected when they hit the surface. And simple lenses are made with spherical surfaces. They're particularly easy to grind. You can plunk epoxy-- 100 of them onto a sphere and grind them all at once, because they all have spherical surfaces. Fancier lenses are aspherical, and they're obviously much harder to make and more expensive. But the simple ones are-- but for us, what's important is the following. So rule number one is a central ray undeflected-- un. So what does that mean? It means that if you have a ray of light going through the center of the lens, it comes out traveling in the same direction on the other side, and not just that ray, but any ray that goes through the center, and since time is reversible, it works the other way around as well. So we'll just show rays going in one direction. And so that's a pretty remarkable property. And why is that important? Well, because that gives us perspective projection. That means that in terms of projection, this is acting just like our pinhole. Then number two is a ray from focal center emerges parallel to the optical axis. So what is that? Well, lenses have a focal length. And let's call that f0. So I can define a point here that's 1 focal length away from the lens.
And what this rule is saying-- that if I take any ray coming from that focal center, and I see what it does after it comes through the lens is going to be parallel to the optical axis-- and again, of course, not just that ray, but any ray. And then rule number 3 is-- well, it's really the same thing. Parallel ray goes through focal center. So if I have a parallel coming in from the right, it's going to go through-- that's starting to look a bit messy. So you can see that if you have a bundle of parallel rays coming in from the right, they're all going to go through that point. That's where they are in focus. And from that diagram, we can use similar triangles to get the lens formula. And I'm not going to go through-- it's kind of boring algebra, again, possibly suitable for a homework problem. But at least I'll draw the diagram. So we have these two special points, one of which is 1 focal length ahead. And one is 1 focal length behind. And we have a central ray. And let's see. So then that's parallel, and it's going to go through here. And notice how that didn't hit the lens. That's because this simple model doesn't have an idea of the diameter of the lens. It's what happens in this plane. It doesn't-- I can get a larger lens to do that if I want to. And so what this lens does is kind of remarkable. Namely, it takes all of the rays that come from this point up here. And it magically brings them back together again over here. So that's the good news. The bad news is that if I move this in depth, if I go over here, it will no longer be focused in this plane. It'll be focused in a different plane. We didn't have that with a pinhole. We didn't say anything about focus or length or whatever. So the upside is that we are now bringing a significant number of photons together in order to make a good measurement. The downside is we have this penalty that things can be in and out of focus. Then what you can do with this is draw similar triangles. And you get three equations. 
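The lens formula he mentions-- 1/z + 1/b = 1/f0 for object distance z and image distance b, focal length f0-- is easy to check numerically. This is a small sketch of that formula; the function name `image_distance` is mine, not from the lecture:

```python
def image_distance(f0, z):
    """Thin-lens formula: 1/z + 1/b = 1/f0, solved for b.

    f0 is the focal length, z the object's distance in front of the lens,
    and the returned b is where that object is in focus behind the lens.
    """
    if z <= f0:
        raise ValueError("no real image: object must be beyond one focal length")
    return 1.0 / (1.0 / f0 - 1.0 / z)

# An object at twice the focal length images at twice the focal length
# behind the lens; a very distant object images essentially at f0 itself.
```

Note that b is always a bit larger than f0 for objects at a finite distance, which is exactly why the lecture distinguishes the lens-to-image distance f from the fixed focal length f0.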
And if you put them together just the right way-- and again, it's a page of fairly boring algebra. I don't really want to do that. And also, you should know that from physics. What else? So as I think I mentioned before, we can think of the lens as this amazing analog computer, because rays come in from the side, going in every which direction. And it magically figures out where to send them to. I don't think we really appreciate how amazing this is because we all use lenses since we were very small. And we just accept that you can do things with them that depend on this property. Now this is a planar diagram, but this applies in 3D as well, which makes it even more amazing, because now the rays coming from this point occupy a whole solid angle. One of them is going to come out of the board, and it's going to hit the lens over here. And magically, that part of the lens will redirect it to be over there. And I mentioned that actually, you can't make a perfect ideal lens. And that's connected to this property-- that each part of the surface of the lens has to somehow deal with rays coming from a whole solid angle of possible directions. And so you can't optimize it for one particular direction. Nevertheless, by combining different lenses, you can get very good performance, very accurate performance. And there are trade-offs between different kinds of defects, one of which is radial distortion. So if you think of an image and particularly think of a fisheye lens image, where more than 90 degrees of the world is image-- perhaps as much as 180 degrees. Well, in order to bring that into the plane, you have to squash the outer parts. And so in terms of the image, what's happening is that the distance from the center of the image is distorted and is made smaller than it would be with perspective projection. And that's the only way you can get a large angle in a wide-angle lens or fisheye lens. And that type of distortion is actually inherent in lenses. 
And we're not very sensitive to it if it's relatively small. And so oftentimes, lenses are designed such that they will suppress certain defects at the cost of increasing radial distortion. And that's why when we talked about calibration, I mentioned that, unfortunately, with real lenses, we need to not just find the principal distance and the principal point, but we may also need to talk about radial distortion. So now we're ready to put it together to see what the irradiance in the image is, given a object radiance out in the world. So I'll draw a diagram of a simple imaging system that includes a lens. Now biological vision systems don't have flat image planes. But all of our cameras do. People have built cameras with curved retinas, but it's hard. And it doesn't seem to serve any particular purpose. So let's assume as usual that we have a planar image plane. So then there's a lens, and there's an object. Let's pick some. So I'm calling the distance from the lens to the image plane f as opposed to f0. Why is that? Well, I'm using f0 to denote the focal length, which is a fixed property of the lens. And because of this formula over here, we know that for things to be in focus, they have to actually be further away from the lens than f. And so-- than f0. And so this is going to be a little bit larger than f0, depending on the magnification. Well, the way I've drawn it, actually, it's going to be a lot larger than f0. So then we said that the central rays are undeflected. So the rays going into the lens here come out in the same direction. And therefore, a couple of things. One of them is that suppose there-- it's a very narrow cone of directions. I can assign an angle to this that it makes with respect to the optical axis. And that angle is going to be the same on the two sides. And the other thing I can say is that these cones of directions have the same solid angle, because they're the same rays, just turned around. 
There's a small patch in the image that's being illuminated. I'll call that delta I. And there's a small patch on the object that's being illuminated. And we're going to see how much power coming off that patch ends up in this patch. That's the kind of thing we'll be looking at. But first, let's equate those two solid angles. I'm trying to relate these two areas. Then in order to do that, I need to take into account the foreshortening. So there's a unit vector on the surface of the object. And, well, this also is not immune to that effect, because the light is not coming in perpendicular to the image sensor. The light is coming in at an angle, right? If I draw a surface normal to the image plane, then it's not the center of this cone of rays. And what is the angle between the surface normal to the image plane and the incoming rays here? So remember that diagram, where these angles are the same, and these angles are the same? So over here, this angle and that angle are the same so that alpha affects the incident light onto the image sensor as well. So now I can write down-- I can use that formula I had for a solid angle. So on this side, the area is delta I. And then there's a foreshortening effect, cosine alpha. And then I have to divide by the distance squared. So that's f squared or maybe not. It's bigger than f squared, right? So it's f secant alpha squared. So this part is f. This is alpha. And I'm measuring this length. So you can imagine, if I draw this further out, then that's going to become even more obvious. And now that, since we said that the central rays are undeflected, this cone of directions is the same as that cone of directions. So I can just equate that to-- and it's the same thing on the other side. And fortunately, those secants cancel out. So let's see. Do I want delta o over delta I, or do I want the other way around? So that's half of the story. We're making good progress here. 
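Written out, the equality of the two solid angles he is describing-- using δO and δI for the object and image patch areas, θ for the angle between the object's surface normal and the central ray, and α for the off-axis angle-- gives the area ratio once the secants cancel:

```latex
% The same cone of rays, seen from the lens center on either side:
\frac{\delta I \,\cos\alpha}{(f \sec\alpha)^2}
  \;=\;
\frac{\delta O \,\cos\theta}{(z \sec\alpha)^2}
% The secants cancel, leaving the ratio of patch areas:
\quad\Longrightarrow\quad
\frac{\delta O}{\delta I}
  \;=\;
\frac{\cos\alpha}{\cos\theta}\left(\frac{z}{f}\right)^{2}
```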
And that's important because the total energy coming off that patch depends on how big it is. And then that energy gets concentrated into that smaller patch in the image. And so the irradiance in the image is going to be whatever power ends up here divided by this area, delta I. And so we'll be able to relate the radiance over there to the irradiance over here. And this is what we measure. It's the irradiance. So now we have to think about how much of the light from that patch on the surface actually is going to be concentrated into that image. So the good news is that because the lens focuses the rays, all I need to know really is the solid angle that the lens occupies when viewed from the object. And so I need to know what the solid angle is because the rays that come out the other side all get concentrated into the corresponding patch. If things are in focus, the light that comes off this patch and goes through the lens is all concentrated into this area in the image. And conversely, light coming from anywhere else has no effect on this. It's imaged somewhere else. So there's a direct match between the power coming off here that makes it through the lens and the power that's delivered to that small area in the image. So what is this? Well, it's the area of the lens. Let's suppose the lens has a diameter d. So it's pi over 4 d squared. And then there's a foreshortening effect. There's an angle which we drew as alpha. And then we have to divide by the distance squared. And that is z secant alpha squared-- not f this time, but z, the distance to the object. So that's the solid angle. And typically, that's actually quite small. And so we typically only gather up a relatively small fraction of the light. And now this time, the cosines and secants don't cancel out, unfortunately. And we get pi over 4 times d over z squared times cosine cubed alpha. So we're almost done. So the power delivered to that small area in the image is the radiance of the surface times its area times the solid angle times cosine theta.
And that's going to be L delta O times pi over 4 times d over z squared times cosine cubed alpha times cosine theta. So that's the total power we're delivering. And it's concentrated into an area of delta I. So for the power per unit area, we just divide through by delta I. And using that ratio of delta O to delta I from before, the z squareds cancel, and we get E equals L times pi over 4 times d over f squared times cosine to the 4th of alpha. And that's what we actually measure. So that's the conclusion of that. So let's study that a little bit carefully. Let's look at this d over f. What is that? Or maybe f over d looks more familiar. I guess there are no amateur photographers here. That's the F stop. That tells you how open your aperture is-- f over d. And so typically, SLRs will have a maximum opening of maybe, I don't know, 1.8 for that ratio. And you can stop it down to, I don't know, 22, let's say. And so obviously, the square of that controls how much light you get. And that's one way of controlling the exposure. The other one is time. And so often, there are trade-offs between using the aperture opening versus using time. For example, if you want a lot of depth of field, then you can achieve that by making the aperture very small, approaching a pinhole. But the cost is you need a longer exposure. So if things are moving, that's not going to work. Conversely, if you want to have a great portrait, and you want to throw the background out of focus, then you go the other direction. You open the lens very wide so that it only has a narrow depth of field and then use a very short exposure. So anyway, that quantity is one that's well-known to people working with cameras. And it's intuitive that the image irradiance goes as the inverse square of the F stop. And that's why the F stops are usually done in steps of the square root of 2. So it goes 1, 1.4, 2, 2.8, 4, 5.6, 8, 11, 16, 22, 32. And those are steps of the square root of 2 in the size of the aperture, which give you steps of 2 in exposure. So that's that. So that's kind of intuitive and not particularly interesting. The pi over 4-- who cares? It's just a constant. We're not too excited about that.
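As a numerical sketch of that result-- E = L (π/4)(d/f)² cos⁴α for an ideal thin lens-- here is a small function; the name and parameterization are my own:

```python
import math

def image_irradiance(L, d, f, alpha):
    """Image irradiance from scene radiance for an ideal thin lens:
    E = L * (pi/4) * (d/f)**2 * cos(alpha)**4,
    with lens diameter d, lens-to-image distance f, off-axis angle alpha
    (radians). On axis (alpha = 0) the cos^4 term drops out.
    """
    return L * (math.pi / 4.0) * (d / f) ** 2 * math.cos(alpha) ** 4

# Stopping down by one full stop (multiplying the F-number f/d by sqrt(2))
# halves E -- which is exactly why F-stops run 1, 1.4, 2, 2.8, 4, ...
```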
What's really exciting, though, is that the thing we measure, E, our image irradiance, is proportional to L, the thing we're interested in out in the world-- the radiance. So that's why we can be sloppy about talking about brightness, because the brightness of the surface, radiance, is proportional to the brightness in the image, irradiance. So we're measuring brightness in the image. And that has a meaning beyond just power per unit area in the image plane. But it has a meaning out there in terms of how much that object is radiating. And the remaining part is this annoying thing here, cosine to the 4th of alpha. What does that do? Well, it means that the brightness is dropping off as we go off-axis. So if you have a part of the image that's way out in the corner here, it's going to receive less power per unit area than something in the middle of the image. And that means that you need to take that into account. Now fortunately, for a small alpha, cosine to the 4th alpha is as close to 1 as you can get. And so as long as alpha is small, you can completely ignore this. So when does alpha get big? Well, it only really gets big when you have a wide-angle lens-- when you're taking in a large, solid angle of the world. And then alpha may be 20, 30, 40 degrees. And then it starts to matter because cosine of 40 degrees is not any more close to 1. Fortunately, it's a fixed thing, so you can compensate for it. And in fact, if you buy a DSLR, part of the magic that happens that you're not allowed to see or understand is to compensate for this. Also, we're not very sensitive to this effect. So if you have an image that slowly gets darker towards the edge, you really have to focus on it to notice it. So when I was much younger, I was very keen to have a telephoto lens. But I couldn't afford a telephoto lens. And then I saw an advertisement for a Russian catadioptric telephoto lens. And having worked in telescopes, that rang a bell. And I could afford that one, so I got it. 
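Since the cos⁴α falloff is fixed for a given lens, compensating for it amounts to a per-pixel gain, which is the kind of correction a DSLR applies silently. A minimal sketch, assuming the principal point is at the image origin and f is the principal distance (function name mine):

```python
import math

def cos4_gain(x, y, f):
    """Multiplicative gain that undoes the cos^4(alpha) off-axis
    brightness falloff for a pixel at image coordinates (x, y);
    alpha is the pixel's angle off the optical axis."""
    alpha = math.atan(math.hypot(x, y) / f)
    return 1.0 / math.cos(alpha) ** 4

# On axis the gain is 1; at 45 degrees off-axis it is 4 -- the corner of
# a very wide-angle image receives only a quarter of the on-axis light.
```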
And it was quite nice. And then I looked at-- in those days, we used slides and projected them on the wall. So I'm looking at this slide of a predatory bird that I took looking up in the tree and the sky background behind it. And the sky behind the bird is white. And I'm like, OK, this is odd. And strangely, in the corners, the sky is blue. So what's going on there? Well, what's going on is that this lens was cheap in part because it had a rapid drop-off in brightness with the angle, perhaps even worse than this. And so the corners were not illuminated as well as the center. And in the center, there was enough light so that these-- the colors were oversaturated. So not only did I get the green and the red channels overexposed, but I even got the blue channel overexposed. So it just looked white. It had more than the maximum intensity that the film could handle in all three channels. Anyway, so this comes up a lot in other situations. In X-ray imaging, it's slightly different. It's cosine cubed. But it's something that people often forget about. And as I said, you can kind of forget about it, because it's something you can compensate for just in your image-processing chain. It doesn't change from image to image unless you change the lens and the-- yeah, OK. So this formula is central, not because we're going to play a lot with it, but just because it justifies this whole idea of talking about brightness and measuring it using gray levels in the image and thinking that that has something to do with what's out in the real world. That's our description of what the camera does in terms of brightness. So this is the counterpart to what it does in terms of position, which was the perspective projection equation, which, of course, was trivial in comparison. But those are the two key cornerstones of understanding of cameras. So now that we know what it is we're measuring-- well, we already knew it was the power per unit area we're measuring. 
But now we also know that that corresponds to radiance in the world. We need to try and understand what determines the radiance in the world. And we already mentioned it depends on the illumination. It depends on the material. It depends on orientation. So let's try and make that a little bit clearer and talk about the bidirectional reflectance distribution function. So let me tell you that until, I don't know, the '80s, this field was a complete mess. And there were dozens of different terms, all competing for attention, some of whom had famous people's names on them, and some didn't. And then the National Bureau of Standards stepped in. And a brilliant man named Nicodemus cleaned it up. And who knows? Maybe the last thing the National Bureau of Standards did that was very interesting. But it was a very powerful effect on imaging, not just optical, but X-ray as well. So what is this? Well, the idea is that we have light coming in, and we have light going out. And crudely speaking, when we talk about reflectance, the ratio of those two things is reflectance, right? So crudely speaking, something that's white reflects all of the light coming in. Something that's black reflects none of it. But where's that light going? So it's not as simple as just saying, oh, reflectance is 0.3. It's more nuanced. And that's what this is about. So this basically is a quantity that depends on the incident and the emitted direction. And it tells you how bright the surface will appear. So let's look at it. So we have a patch of the surface. I'll use a little diagram-- should have a rubber stamp for that. And we have light coming in, and we have light going out. And how bright the surface will appear, its radiance, will depend on those angles, but more, because this is in 3D. I've drawn it on a plane, but it's, of course, really in 3D. So I really needed to talk about the directions of these two rays in more detail than just saying they make an angle with the surface normal. 
And so how do I talk about directions? Well, we've already been through this-- unit vectors, points on a sphere, latitude, longitude. So in this field, it's customary to use these angles. So let's suppose that this line-- this curve down here is in the surface. And I've constructed the hemisphere above it. And this is called the polar angle, also called colatitude. Why? Because it's 90 degrees minus latitude. And this is called the azimuth-- azimuth angle. So to specify the direction of light coming in or light going out, I need two angles, polar angle and azimuth. And so here, I've only shown the polar angle. And to draw the azimuth, I'd have to project this down into the plane of the surface and then look at the direction of those lines. And then since the brightness is going to depend on all of those, I can write it as some function f of theta i, phi i, theta e, phi e-- i for incident and e for emitted. And the official terminology is that's the bidirectional reflectance distribution function. And as we said, reflectance should be power going out divided by power going in. And so that's what this is, clean and formalized. So what we've got is it's the radiance, how bright the object will appear when viewed from this position, divided by the irradiance, how much energy I'm putting in from the source direction. So this is finally a definition of reflectance that actually works, because the other definition of just saying, oh, it's 0.3 or something, doesn't. Unfortunately, this is much more complicated. But any other definition of reflectance can be based on this. Basically, it's an integral of this. Well, there are lots of questions that immediately come up. One of them is, how on Earth do you measure this thing? Well, one way you can do it is put the light source in a certain position. Put your camera in a certain position. Take a measurement. Then move your camera, blah, blah, blah. Move your light source. But you're exploring a four-dimensional space, so that's pretty expensive. Nevertheless, it's done.
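To make the definition concrete, the simplest BRDF is the Lambertian one: a constant albedo/π, independent of all four angles. This sketch (the function names are mine) shows how radiance follows from a BRDF and the incident illumination:

```python
import math

def lambertian_brdf(theta_i, phi_i, theta_e, phi_e, albedo=1.0):
    """f(theta_i, phi_i; theta_e, phi_e) for an ideal Lambertian surface:
    constant, independent of both the incident and emitted directions."""
    return albedo / math.pi

def radiance(brdf, E0, theta_i, phi_i, theta_e, phi_e):
    """Radiance toward (theta_e, phi_e) under collimated illumination of
    irradiance E0 arriving from (theta_i, phi_i):
    L = f * E0 * cos(theta_i), the cosine being the foreshortening
    of the incoming light on the surface."""
    return brdf(theta_i, phi_i, theta_e, phi_e) * E0 * math.cos(theta_i)

# For the Lambertian case the radiance does not depend on the viewing
# direction at all -- the surface looks equally bright from everywhere.
```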
And one method is using goniometers. That's just a fancy way of saying angle measurement devices. So what I'm trying to draw here is something that will rotate about this vertical axis and then has an arc along which the apparatus can move. So the first rotation corresponds to the azimuth angle, and the second movement corresponds to the polar angle. So the pole would be up here. And so we can mount the light sensor on one of these. Then we get a second one, and we mount the light source on it as well. And then we make measurements. Well, you'll get tired of it pretty soon, because you're exploring this four-dimensional space. And even if you sample it pretty coarsely, like 1 in 10, you're talking about 10,000 measurements. But of course, you can automate it. You can build a robotic device that mechanically moves this and go away for the weekend, and it'll do these measurements for you. If you're in a hurry, as people in movie-making are, you can have many light sources that you can turn on and off. So you could construct maybe a whole sphere or maybe a hemisphere. And you distribute light sources all over the surface that can be individually controlled. And then, if you want-- so that takes care of two of the angles. So by turning on one of these light sources, you've controlled one of the goniometers to position the light source. And then you can have light sensors or perhaps even cameras interspersed. And now you can do all of your measurements very quickly because you just flash one source at a time and take pictures and then process the images you get. And why would you want to do this? Well, suppose you want to realistically model how someone's skin reflects light. Well, you could try and build some mathematical model. But who knows how good that is? The better way is just to measure it. And so that's something people do. You don't want to approximate it with, I don't know, Lambertian or some other well-known model.
You want the real thing. And so you can do this. So that's part of the story of this four-dimensional space. The next part is that, well, in most cases, it's not four-dimensional. And I think if you look at this diagram, you can see that what really matters is the angle between these two azimuth lines. [INAUDIBLE] So this is a general formula that applies to any surface. But for many surfaces, what really only matters is the difference between these two angles. And what type of surfaces are those? Well, those are surfaces where if you rotate them in the plane, if you rotate them about their surface normal, they don't change brightness. So it's pretty much true of this thing. And it's pretty much true of lots of things like wood and this floor and so on. So that's a dramatic improvement because now you've got a 3D lookup table to fill in instead of 4D. So what kind of materials are there that do not satisfy this? What materials require that we take the full four dimensions into account? And think about it. Is there surface where, if you look at it, and you rotate it in the plane of the surface, it changes appearance? Can you think of something like that? STUDENT: [INAUDIBLE] iridescent? BERTHOLD HORN: Something that's iridescent. Why? Because an iridescent material has microstructure that's oriented, and like a hummingbird's neck, the feathers are lined up. It produces the color not by pigment but by interference. And so if the microstructure is my fingers, and then I rotate that microstructure, it'll diffract light differently. Our ruby hummingbirds, if you look at the neck of the male from the wrong angle, it looks black. Why? Because it's reflecting all of the light in one direction, and it's only the red. So that's an example of something where unfortunately, you need the full model. You can't reduce and use this. And the other examples, like the semi-precious stones called tiger eye or various other things-- they're basically very fancy forms of asbestos. 
And people don't call them that, because as soon as you say "asbestos," lawyers come and so on. But it's basically-- just as asbestos, it has a microstructure that's very linear and very tightly packed on the scale of the wavelength of light. And so it has a very different appearance as you rotate it which gives it its appeal as a piece of jewelry. Then some people have very straight black hair that's very parallel. And that will have the same effect, where because they're all parallel, they'll reflect light in a certain way. And so as the head rotates, the sheen on that hair will move. And so you need the full four-dimensional model for that, whereas for a lot of things, like paper and snow and strawberries, the three-dimensional model is-- so what else do we know about this bidirectional reflectance function? Well, there's an important property due to Helmholtz called the Helmholtz reciprocity. Now Helmholtz, of course, lived a long, long time ago, long before Nicodemus of the National Bureau of Standards. So how could he have come up with a property of the bidirectional reflectance distribution function? Well, he didn't call it that, but he had the basic idea, which was basically the second law of thermodynamics. So suppose we have two objects at different temperatures. And there's an object patch down here. And so there's radiation coming from one and going to the other. And reciprocally, there'll be radiation coming from the object at temperature T2 and arriving at T1. And it takes a little bit of handwaving, but basically what it's saying is if it's not a reciprocal, then there will be energy transfer from the colder object to the hotter object, which we know doesn't happen. So what this is saying-- if you interchange incident and emitted, you should get the same value for your bidirectional-- so there's a symmetry, which, in a way, helps you in the data collection, because there's half of the data you don't have to collect. 
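Helmholtz reciprocity is easy to state as a test on measured or modeled BRDF data: swapping the incident and emitted directions must not change the value. A sketch, with a hypothetical helper of my own naming:

```python
from itertools import product

def satisfies_reciprocity(brdf, dirs, tol=1e-9):
    """Check Helmholtz reciprocity, f(incident -> emitted) ==
    f(emitted -> incident), over all pairs of sample directions.
    dirs is a list of (theta, phi) tuples; brdf takes
    (theta_i, phi_i, theta_e, phi_e)."""
    return all(
        abs(brdf(ti, pi_, te, pe) - brdf(te, pe, ti, pi_)) <= tol
        for (ti, pi_), (te, pe) in product(dirs, repeat=2)
    )
```

In data collection, this symmetry is exactly the point made above: only half of the four-dimensional table needs to be measured.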
By the way, it reminds me of when I was a student here, there was a famous professor called Lettvin. And he, Jerome Lettvin, came up with a paper called, "What the Frog's Eye Tells the Frog's Brain," which was one of the early attempts to try and understand how neurons work and how image processing might work and so on. And everyone wanted to hear his talk. And he was talking about color. And I was way back in the room. And as a student, I was really intimidated by these people. But I just had to ask. And so I stuck up my hand, and I asked him some question. I at this point don't even remember what it was. And he stared at me for about five seconds. And then he said, well, if you had read the original book by Helmholtz-- the book by Helmholtz in the original German version-- you would know blah, blah, blah. And I'm like, oh, my god, I really killed my career here, because-- and then years later, I was thinking about that same problem. And I realized that he was just using a wonderful debating technique because he had no idea what the answer was. But he certainly put me in my place. And some of these people went to schools where they taught you how to debate. I didn't. So I was just flabbergasted-- anyway, Helmholtz. So that was Helmholtz there. So next time what we're going to do is apply this to look at different types of surface material models, of course, using Lambertian again, and also some new ones that apply to the moon and rocky planets in our solar system and that enable us to determine their surface shape. And I guess there was an extension, right? STUDENT: Yeah, till tomorrow. BERTHOLD HORN: So make sure you keep up to date on what's on Piazza because there was an extension on the homework problem. And other good stuff is happening there, so.
MIT 6.801 Machine Vision, Fall 2020
Lecture 16: Fast Convolution, Low-Pass Filter Approximations, Integral Images, US 6,457,032
[SQUEAKING] [RUSTLING] [CLICKING] BERTHOLD HORN: There'll be a new homework problem out after this class, and we'll try and cover as much as possible that is relevant to that. The last two questions, you will have to wait for next week's class. However, there's extra time. So it's not due in a week. It's due slightly later. So we should be able to get everything in place by Tuesday. OK, so the last patent we were talking about was one that allowed us to efficiently compute filtering outputs in images that have been filtered. And in particular, the interest was in suppressing higher frequency content so that we can sample them without getting aliasing artifacts. But also, we could use them for edge detection and any sort of process that's convolutional and get a speed-up by exploiting the sparse nature of a higher derivative of a spline. So rather than focus on the patent, let's sort of look at some of the relevant material and discuss it without specific reference to the patent. So a major component of this, of course, is Nyquist. So just not to beat a dead horse, as the saying goes, when we sample a waveform, it's kind of surprising that we expect to capture something about the waveform, because the waveform has infinite support. And we are just getting these discrete samples. So it's surprising that we can expect to capture that waveform that way. And the only reason that it can work is if there's something very restricted about the types of waveforms we're dealing with. And that's band limiting. So if their frequency content is limited, then there's this wonderful theorem that basically says that if we sample at a high enough frequency, then we actually capture everything there is. We can completely reconstruct it, and it's a constructive proof. It's not something that's simply a mathematical proof. But there's actually a method for reconstructing it, and it's convolution with the sinc function. So it's a very satisfying kind of theory.
And Nyquist and his predecessors showed that the criterion is that we sample fast enough so that the highest frequency component of the signal should have a frequency that is less than fs over 2. And we can sort of see why that is. Now, suppose we sample here and here and here. We seem to get the important part. We get alternating ups and downs. And also, it's clear that if we sample, say, once per period-- if we have a gap from here to there between samples-- that's not going to work, because all those samples are the same. So just if you forget what the formula is, you look at this diagram and say, well, if I alternately sample troughs and peaks, then I've captured the waveform. It's a little bit of a dangerous argument, because if I take that same waveform and I sample at the same frequency, I get this, which isn't satisfactory. And of course, I can get anything in between if I just shift the phase of the sampling. And that's why it's less than, not less than or equal to. So it has to be strictly less than. OK, then if we don't satisfy that requirement, we have aliasing. So why is it called aliasing? And those of you who are thoroughly familiar with this can go to sleep for a moment. We'll just look at this here. So here's a waveform. And let's, again, suppose that we're sampling, but now not exactly on the peaks but a little bit offset. So we have, in the spatial domain, a waveform with frequency f0. And suppose that we're sampling at frequency fs, so at some interval, 1 over fs. And so the samples we're going to get are cosine of 2 pi f0 k over fs. Right. So these are our samples, let's say. And now, if we add 2 pi to the argument, nothing changes, right? In fact, if I add any integer multiple of 2 pi to the argument-- say 2 pi k-- I get the same results. So for example, I could-- do I want to add or subtract? It's going to be easier if I subtract.
OK, those are still going to be the same samples, right, because I've just changed the phase by some multiple of 2 pi. OK, so that means I can write this as cosine of 2 pi times f0 minus fs times k over fs. And so what that means is that I started off with a wave of frequency f0. And now, I'm saying that it's the same as a wave of frequency f0 minus fs, or in fact, the same as-- cosine is an even function. So if I wanted to, I could say fs minus f0. And so let's suppose this is zero frequency. And here is my f0. And here is fs. Then I'm saying that something at this frequency fs minus f0 cannot be distinguished from something at f0 and also, by the way, fs plus f0, but we're not so worried about that. So now, imagine that f0 is higher. So suppose it's there. Well, then this one comes down closer, and so on. So clearly, I need to cut things off there. As long as I don't have any frequency content above fs over 2, I'm not going to have any confusion. And it's called aliasing, because those other frequencies-- and there's an infinite number of them-- look the same in the samples. Notice, by the way, that that means that you can't fix this after sampling. It has to be done before sampling. And so there are various kluges to try and do that, which suppress higher frequency content. But you've already committed the sin. It's too late. You're no longer able to distinguish frequencies that came from up here from frequencies down there. And this is something that I guess we should all know well. OK, then we're going to approximately low-pass filter, because truly low-pass filtering is very hard. Before sampling, we need to filter. And that's important when we're talking about multiple scales, because we can't just, like, take every fourth pixel and assume that we're going to get something sensible. That'd be a little bit like designing an Earth mapping satellite, which is really good. It's got incredibly good optics so that each pixel corresponds to a point on the ground.
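The aliasing argument can be verified directly: a cosine at f0 and one at fs − f0 produce identical samples. A minimal sketch:

```python
import math

def samples(f, fs, n):
    """First n samples of cos(2*pi*f*t), taken at sampling frequency fs
    (so at times t = k/fs for k = 0, 1, ..., n-1)."""
    return [math.cos(2.0 * math.pi * f * k / fs) for k in range(n)]

# With fs = 10, a tone at f0 = 3 and its alias at fs - f0 = 7 are
# indistinguishable: samples(3, 10, n) and samples(7, 10, n) agree
# sample for sample, which is why no post-sampling fix can separate them.
```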
That wouldn't be a very useful picture, because this pixel is hitting that gray stone there. And this pixel is hitting the table. And it'd be incoherent. And it would be a very dramatic example of aliasing. So this is a case, where actually, you want things to get blurred-- blurred just enough. OK, so we'll talk about different ways of blurring, of pre-sampling filtering. And so I want to start off by talking about something called the integral image. So one idea is, what if we convolve by what I'll call a boxcar, which is a filter like that? Basically, that's averaging, right? We're taking a block average. And to compute the average, we want to compute the sum. So what's an efficient way of computing the sum? Well, of course, the obvious implementation is we just go in, and for every pixel in the output, we find out which pixels in the input it corresponds to, get those values, add them up, and maybe divide by the total number of pixels. But that's obviously expensive, because then the number of computations is roughly the number of pixels times the width of that filter. Well, there's a very easy way of dramatically speeding this up, which is using an integral image. And we will first do it in 1D. So let's suppose that we have a g for gray level. We have a sequence, g of i. And we're going to transform it into a sequence, big G of i, where big G of i is-- so we just go left to right and add up the gray levels, which seems like a weird thing to do. But now, it means that if we want the sum from k equals i to k equals j of the original one, we can just take big G of j minus big G of i minus 1-- one subtraction, independent of the length of the block average. In the naive implementation, the amount of work goes with the width of the block average. Well, now, that's not the case anymore. It's just a single operation. So if we're going to do this a lot, then it's very efficient to precompute this so-called integral image. And this is actually used quite a bit.
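As a minimal Python sketch of this 1D running-sum idea (the sample values here are made up), the cumulative sums are computed once, and then any block total is a single subtraction, independent of the block width.

```python
def prefix_sums(g):
    """G[i] = g[0] + ... + g[i], computed left to right."""
    G, total = [], 0
    for v in g:
        total += v
        G.append(total)
    return G

def block_sum(G, i, j):
    """Sum of g[i..j] from the prefix sums: one subtraction."""
    return G[j] - (G[i - 1] if i > 0 else 0)

g = [2, 1, 3, 5, 7, 1, 0, 2]     # made-up sample values
G = prefix_sums(g)
assert block_sum(G, 2, 5) == 3 + 5 + 7 + 1
assert block_sum(G, 0, 7) == sum(g)
```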
Of course, with images, we're dealing with 2D. But you can imagine that we can perform the same operation in 2D, where, now, the value in the output here is the sum of all the pixels in this rectangle. By the way, of course, with pixels, maybe we only need 8 bits to represent the gray level. Once we start adding them up, we need many more bits. So there's some issues about dynamic range. Do you use floating point? You use 32-bit integers-- whatever? But obviously, we need to estimate what is the maximum we're going to get when we add up all the pixels and suppose that each of the pixel is at its maximum value, et cetera, et cetera. So that's all pretty obvious. OK, now, in the 2D case, what I want to do is typically compute a total over a block like this. So it's a 2D block averaging. And that could be for purposes of approximate low-pass filtering. Or it could be for other reasons. So by the way, just because I'm calling it an integral image, doesn't mean it has to be derived from an image. We could, for example, have integral gradient. So we compute the gradient everywhere, which is now a two-vector. And we perform the same operations. Then we get a matrix of two vectors. And now, we can do things with that. And in fact, in some high-speed recognition processing, this is the first step, where you estimate histograms of gradients, and then you compute using block averages of this type. And it's exactly this process, just made more complicated, because we're not dealing with scalars. And this maps nicely onto GPUs. So it's used for quite a bit of recognition and such. So let's see. Suppose I want the total over this rectangle. How can I compute it from the integral image? It's not completely obvious, because if I just take the value at that corner-- suppose I take this value. Well, that's the total of all of these. So that's not what I want. If I take the value at this corner, that's the total of all of those. So maybe I should subtract that. 
Then I might take the total at this corner. And that's the sum of those. Well, maybe I shouldn't subtract the blue one-- get one more color. But I need to get rid of that bottom part. So maybe I can subtract this one. Now, if I subtract the green and the orange from the red, I subtracted this one twice, the blue one. And so I need to add it back in again. So basically, we get the total over this rectangle by adding this and that and subtracting this and that. So and you can extend this to higher dimensions also, although I haven't seen much use for that. So that means that with just four memory accesses and three arithmetic operations, you can get the total for that block, independent of the size of the block. So it's no more expensive to average over a 4 by 4 than over an 8 by 8 or vice versa. OK, so and that really is the 0-th level example of what's in that patent. And we'll see why. OK, now, how good is this? So suppose that we block average like this. And then we sample. And we're trying to satisfy Nyquist. Are we really suppressing all those higher frequencies? Well, let's look at that. So, a little bit of Fourier analysis-- when I was a student, I was taught Fourier transforms in six different courses. And each of them used a different notation-- you know, frequency in cycles per second versus radians per second. Divide by 2 pi. Don't divide by 2 pi. And it was very confusing. So I won't hold you to getting the constant multipliers right at least. And then I, later in my life, started doing some mathematics. And the mathematicians had yet another way, which is very sensible. I don't know why engineers didn't adopt it, which is the unitary version, where you divide by the square root of 2 pi. And then the forward and the inverse are exactly symmetrical with only the sign change. But anyway, OK, so what have we got? So we have a block of, let's say, width delta that we're averaging over.
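Going back to the integral image for a second, the four-corner rule can be sketched in Python (the tiny 3 by 3 image is made up for the demo): take this corner, subtract those two, and add back the one that got subtracted twice.

```python
def integral_image(img):
    """I[r][c] = sum of img over the rectangle from (0, 0) to (r, c)."""
    h, w = len(img), len(img[0])
    I = [[0] * w for _ in range(h)]
    for r in range(h):
        row_total = 0
        for c in range(w):
            row_total += img[r][c]
            I[r][c] = row_total + (I[r - 1][c] if r > 0 else 0)
    return I

def rect_sum(I, r0, c0, r1, c1):
    """Total over img[r0..r1][c0..c1]: four corner accesses."""
    s = I[r1][c1]
    if r0 > 0:
        s -= I[r0 - 1][c1]          # strip above the rectangle
    if c0 > 0:
        s -= I[r1][c0 - 1]          # strip to the left
    if r0 > 0 and c0 > 0:
        s += I[r0 - 1][c0 - 1]      # subtracted twice, add it back
    return s

img = [[1, 2, 3],                   # made-up 3x3 "image"
       [4, 5, 6],
       [7, 8, 9]]
I = integral_image(img)
assert rect_sum(I, 1, 1, 2, 2) == 5 + 6 + 8 + 9
assert rect_sum(I, 0, 0, 2, 2) == 45
```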
And we can make the height of this thing 1 over delta just so that when we get the result, it's the same magnitude as the original value. And so we're really doing a moving average. And so what have we got? So that's going to be the integral from minus delta over 2 to plus delta over 2 of 1 over delta times e to the minus j omega x. I imagine you've seen this before. And so we get sine of omega delta over 2, divided by omega delta over 2-- a sinc function. And sadly, I can't write it as a sinc, because again, the mathematicians and the engineers can't agree on the definition. One has a pi in it, and the other one doesn't. And it's just going to confuse things. So I'll just leave it like that. So what does that mean? Well, that means that it is attenuating higher frequencies, because while the sine just keeps on wobbling away as we go off in omega, we have the division by omega. So it's decreasing inversely with frequency. That's the good part. But it doesn't become 0, except at special places. And so it's not a terribly good approximation of a low-pass filter. But what's even worse is that it doesn't get to the first 0 fast enough. So let's see what we got. So the first 0 is, of course, where the argument of the sine is pi. And so that's at omega 0 equals 2 pi over delta, which is twice the frequency we got for Nyquist. So if we plot it-- so here's what we want. Here's our ideal low-pass filter. And here's what we get. So this is Nyquist. And this is what we're getting. So the good part is we are attenuating higher frequencies. The bad part is we're not aggressive about it. And another bad part is that we're actually attenuating some of the frequencies we want a little bit. So and why is this? Well, this is particularly relevant to us in terms of cameras, because this is what the camera does. So this is our pixel. And here's another pixel. And so they are performing a filtering operation, because they're averaging the light coming over that whole area.
And so that's good, because we're performing some low-pass-- approximate low-pass filtering before sampling. It's not ideal, because is that sinc function a good approximation to that ideal low-pass blob? No, and its first zero is too far out. And that's why we need additional filtering so that we don't have the aliasing. OK, and so one thing we can do is find better filtering functions. Of course, they're likely to be computationally more expensive. And so let's start off with repeated block averaging. OK, since block averaging is so cheap to compute, we can perhaps perform it twice in the hope of getting something better. And so what we're going to do is take our function and convolve it with the filter. So this is the image. This is our filter, which is a b for block averaging filter. And now, let's take that and convolve it again. Now, we mentioned last time that this is associative. And so we can write it this way. So instead of thinking of it as convolution with boxcar, we can think of it as the convolution of the image with this thing, which is the boxcar convolved with a boxcar. So and that's this triangular waveform, which is kind of smoother. So that's what you-- you would think that would be beneficial, that we don't have as much high-frequency content. So every time there's a discontinuity, that means that there's high-frequency content. If it's a step function, then it's 1 over f. If it's only a discontinuity in the derivative, in the slope, then it's 1 over f squared. So this one here, we expect to drop off as 1 over f squared, because these drop-- these suppress 1 over f each of them. So OK, well, in the transform domain, of course, it's going to be F times B times B, which is the same as FB squared. So this is sinc squared. So this function, this filtering is somewhat better, because now, we're taking this thing squared. So when it gets close to 0, we'll be doing this. Oh, oops. No, square cannot be negative. So it's dropping off at the square of the rate.
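The boxcar-convolved-with-boxcar claim is easy to check numerically; this Python sketch uses a four-sample boxcar (the length is an arbitrary choice) and a naive convolution routine, and out comes the triangular waveform.

```python
def convolve(a, b):
    """Naive full convolution of two sequences."""
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

box = [1, 1, 1, 1]              # a boxcar; the length is arbitrary
triangle = convolve(box, box)   # boxcar convolved with a boxcar
assert triangle == [1, 2, 3, 4, 3, 2, 1]
```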
So instead of 1 over f, it's dropping off as 1 over f squared. And so it's better. But it's not wonderful. And one of the things we can do is do it again. We can do a sequence of block averaging. And since we have a fairly efficient way of doing one, it's not crazy to do several. And of course, we'll just get higher powers of the sinc function, which will mean that we're suppressing higher frequencies better. We're messing up some of the passband frequencies in the process. So we probably don't want to go too far with that. So I mentioned that the transform of a unit step is 1 over j omega-- which we can easily prove by plugging it into the formula for the Fourier transform. And so what does that say? Well, that says that any time you have a step, you're going to have something that drops off as 1 over frequency. And in images, of course, we're dealing with two dimensions. So this happens in both x and y. So if we have steps in the image, then they will produce some high-frequency content that doesn't drop off terribly fast. So there was a talk here many years ago by a famous mathematician from Harvard, who said that he'd taken a lot of images. And he'd taken the Fourier transform. And he showed that natural images have a power spectrum that falls off as 1 over f. So what was the problem with it? Well, he was very happy to do this on many images, because he was using the fast Fourier transform-- makes it pretty efficient. Well, the fast Fourier transform, of course, is just an implementation of the discrete Fourier transform. And the discrete Fourier transform assumes that your data is periodic. So if you have-- in the one-dimensional case, it says that if this is your waveform you're transforming, what you're really looking at is this, right? So what does this mean? Well, it means that unless you're careful, you're introducing a step edge discontinuity as it wraps around. And that will produce a frequency content that drops off as 1 over f.
So the grand result was not a feature of natural images but a feature of the discrete Fourier transform. So look out for that. So how do you-- this is a real problem. You see this again and again in papers. How do you deal with that? Well, what you can do is make things match up in various ways-- one is called apodizing, which is you multiply by a waveform that looks like this. And that means that at the ends, you've pulled the image down to 0. And so of course, it matches. And this is fairly common there. People have their names on these things. But basically, the simplest one is just an inverted cosine waveform. And I won't write down the formula. It's pretty obvious. You just do 1 minus cosine of some suitable multiple of frequency and divide by 2. And you get this filter. And that has, actually, very nice properties. So that's one approach to dealing with a DFT applied to images. Another approach is to say, well, I don't really know what's outside the frame. This is as much of the image as I got. And now, I'm being asked to guess what's outside there. Well, it could be like this. It could repeat periodically. But maybe it's a mirror image, right? Not very good-- maybe like that-- so and that's better, because now, there's no discontinuity in the brightness itself. So you don't get the 1 over j omega. There is potentially a discontinuity in the derivative, right? So that means we need to look at 1 over f squared. Where's that come from? Well, if we integrate the unit step function, we get the unit ramp, we can call it. And its transform is, of course, the square of this, because basically, we've convolved the step function with itself to get that. And so this is better-- a lot better, because now, it's dropping off as 1 over f squared. We still haven't solved the problem, which is that we're assuming that it's periodic. And so how can we avoid that? Well, we can take an infinitely wide picture. Then we don't have this problem.
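As a sketch of the apodizing idea, here is the inverted-cosine window in Python (the window length is an arbitrary choice; this particular form is often called a Hann window): it pulls both ends of the data down to zero so that the periodic extension matches up.

```python
import math

def hann(n):
    """Inverted-cosine window: w[k] = (1 - cos(2*pi*k/(n-1))) / 2."""
    return [(1 - math.cos(2 * math.pi * k / (n - 1))) / 2 for k in range(n)]

w = hann(9)                       # the length 9 is arbitrary
assert abs(w[0]) < 1e-12          # pulled down to zero at the left end
assert abs(w[-1]) < 1e-12         # and at the right end
assert abs(w[4] - 1.0) < 1e-12    # full weight in the middle
```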
But that's, of course, not really an option. OK, so we're going to look at some other examples of approximate low-pass filtering in 1D and then in 2D, and I just want to remind you of some basic properties that we need. One of them is the filtering property, the sifting property of the unit impulse. Right. So the unit impulse is kind of a weird thing, because it's not a function, right, because we can say, OK, delta of x is 0 for x not 0. OK, I mean, a function is something where, for every x, you can say what the value is. And for the unit impulse, we can do that for anything except x equals 0. And then what happens at x equals 0? Well, it's infinite. So you can see there's a problem there. And the correct way of dealing with that is in terms of distributions. So there are these generalizations. And we need them. We use the impulse all the time. So we need it for that. But we also need them for derivatives of the impulse, which are even weirder. And so someone once noted that aside from the Gaussian, there's just about nothing that has a Fourier transform, in the sense of a function that you can write down. Almost everything interesting includes some kind of problem, some kind of singularity. And so we might as well get used to that. So how do we deal with the peculiar nature of the unit impulse? Well, this is the way. This is how it's defined. This is its property. And it makes sense, because supposedly, this is 0 everywhere except where its argument is 0. And so the only place that matters is x equals x0. So the only value of f that matters is f of x0. And then the only thing that's left is the scaling. Well, the idea is that we've made the unit impulse so that its area is 1. And one way I find very helpful to think about these functions-- I mean, the impulse is an obvious thing we all know about. But it gets messier. And so it's good to have a way of thinking about them. So here's one way of thinking about it. Here's a boxcar with a unit area.
And now, if I make epsilon smaller, it becomes narrower. And it becomes higher. And you can kind of think of the unit impulse as the limit of this. But it's not mathematically correct to say that. But if you want to know what the effect is of, say, convolution with the unit impulse, you can take the convolution with this thing, which you can calculate. It's a perfectly valid thing. And then you take the limit as epsilon tends to 0. So one thing to keep in mind, though, is that's not the only way of defining it. I could equally well have taken a Gaussian, 1 over-- I probably have the scale factor wrong, something like that. And again, it becomes narrower and higher as epsilon gets smaller. And it defines the same thing in the limit. So don't take this picture too literally. It's helpful. But it's not the only answer. And it may be the easiest one to compute. But OK, so what do we do with this? Well, we can play some games with combinations of impulses. So for example, OK, oh, yeah. I need that. And so what are these? Well, these are just shifted impulses. And so one of them is at minus epsilon over 2. And the other one is at plus epsilon over 2. And their magnitude is 1 over epsilon, where the magnitude is the area under the impulse. It's what we get when we integrate it. So that's the unit impulse, and then we have impulses of different areas here. So that's what that corresponds to. And of course, that's just f of x plus epsilon over 2 minus f of x minus epsilon over 2, divided by epsilon. And in the limit, that's the derivative. So we can-- what's the point? Well, the point is we've connected convolution with derivatives, right. And so we can take the derivative by convolving with this. And that shouldn't be surprising, because this formula matches that, except this is flipped, right? You'd expect the right-hand one to be positive and the left one negative, because we're taking the value at plus epsilon over 2 and subtracting the one at minus epsilon over 2. Why is that?
Well, because in convolution, we flip. We flip before we shift and multiply and add, right? But the important point there is that there's a close connection between linear shift invariant operators and derivative operators. And we can treat derivatives basically as convolutions. And just to refresh our memory-- right? So one of the two gets flipped. So this one is going left to right just as f of x does. And this one goes in the reverse direction, with respect to this dummy variable. Notice, again, that we have to be careful with the x. We'd like to write the integral blah, blah, blah, dx. But we can't do that, because x is a parameter here. So we need a dummy variable. So introduce some Greek letter. And of course, because it's commutative, this should also work the other way around. But in any case, one of the two has to be flipped. And that's how it's different from correlation. So correlation is exactly the same thing as convolution, except that you don't flip things. And that's why, up there, it looks a little odd, because we kind of expect the positive impulse to be on the right. OK, back to something we mentioned yesterday, which is one way of dealing with the fact that the pixel averaging is not a very good approximation to a low-pass filter-- so in the camera, we're getting some advantage from the fact that we're not making point samples. We're taking an average over the area of the pixel. That helps. But as we saw, it cuts off at-- we saw where its first 0 is. The first 0 is twice the frequency where Nyquist says we should be. So we should really be adding some additional low-pass filtering. And no, you can't do it digitally after you've taken the image, because at that point, you've confounded these frequencies. All of these aliases have collapsed into one. And you don't know how much came from one contribution and how much came from another contribution-- it has to be done in the analog domain before we sample.
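Going back to the impulse-pair formula for a moment, in the limit it is just a central difference, which this Python sketch checks on a made-up test function and step size.

```python
def central_difference(f, x, eps=1e-5):
    """(f(x + eps/2) - f(x - eps/2)) / eps: the impulse pair in the limit."""
    return (f(x + eps / 2) - f(x - eps / 2)) / eps

# d/dx of x^2 is 2x, so at x = 3 we expect 6 (the test function is arbitrary)
assert abs(central_difference(lambda x: x * x, 3.0) - 6.0) < 1e-6
```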
OK, so it's kind of difficult to do this, because we got 10 million pixels. And you're trying to do some analog operation that cuts out some of the high-frequency content. And I mentioned that one way of doing it was using these birefringent materials. And you only need a very thin layer. So the crystal I brought in would deflect rays by a substantial amount, like quarter of an inch or something. And I picked the large ones just so you could actually see that effect. But here, we're talking about pixels that are maybe 5 microns wide. And so when we do any averaging and deflection, we're talking about something that deflects by-- I don't know-- 5 microns, 2 microns, in that range. So the filter, actually, of the special material can be very thin. You just have to make sure you cut it the right way, in the right crystallographic orientation. But that's their problem. Our problem is understanding what it does. So it's a little bit like our formula for the derivative, except there's no minus, right? So what is this? Well, what we're saying is that we're going to have two shifted images. And if we just had a single delta function, then that would be the identity operation, except for a shift. And so now, we've got two superimposed images that are just slightly shifted a tiny amount. And we can model that as convolution with two impulses. For symmetry, so that the answer comes out as a real number, we like to make it symmetric. And so one impulse is going to shift the image to the right a little bit. And the other one shifts it to the left a little bit. And that's what this special filter does so that when you look at the actual pixel array, it's no longer the original in-focus image. You've now got two slightly shifted versions. And so how good does that work? Well, we take the Fourier transform, right? So we take the transform of half an impulse at plus epsilon over 2 and half an impulse at minus epsilon over 2. And that's cosine of omega epsilon over 2-- so that doesn't drop off with frequency. Well, it does near the origin.
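A rough discrete analog of this two-copy filter in Python, with the shift epsilon taken to be one pixel (an assumption purely for the demo): superimposing the signal with a shifted copy of itself wipes out the highest frequency the grid can represent, the alternating plus-one minus-one pattern, while passing a constant untouched.

```python
def two_copy_filter(signal):
    """Superimpose the signal with a one-sample shifted copy, half weight each."""
    return [(a + b) / 2 for a, b in zip(signal, signal[1:])]

nyquist = [1, -1] * 8         # the highest frequency the grid can represent
constant = [1, 1, 1, 1] * 4   # zero frequency

assert all(v == 0 for v in two_copy_filter(nyquist))    # wiped out entirely
assert all(v == 1 for v in two_copy_filter(constant))   # passed unchanged
```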
So let's see what it looks like. With sinc, it dropped off with 1 over frequency. This just keeps on going. But it does drop off here. And what's the first 0? It's pi over epsilon. So if we want to-- how do we pick epsilon? Well, we have a lot of freedom. Basically, we can decide how thick to make this plate. We're not restricted to making epsilon equal to the pixel spacing or equal to 1/2 the pixel spacing. And Canon and Nikon have their own idea of what is the ideal value. Anyway, what this does is now it cuts high frequencies. Unfortunately, there's this. Then there's some other frequencies that are not cut. What's the story with that? Well, the idea is that this is working with the other low-pass filter, the block averaging filter so that, yes, unfortunately, this extremely simple-minded thing that just makes two copies of the image has this property. But that's OK, because when we get out here, the other filter's at a 0 so that the two together do a better job. I mean, I wouldn't say it's perfect. You can still get moiré effects when you have some high-frequency content in the image, but much less than you would without this filter. By the way, of course, you need two of them, because you got x and y. So you actually have two of these plates, one rotated 90 degrees with respect to the other. And now, you're seeing four images superimposed on each other. And but you can treat this totally separately in x and y. So this analysis doesn't have to be redone. It's just the same thing happens in the y-direction. And so we end up with an anti-aliasing filter that some people take out, because they don't like the way it suppresses high frequencies. OK, so trying to tie this back to the patent-- so does this all bring back 6.003 or wherever you learned Fourier transforms and linear systems? OK, so what was the idea of the patent? So the idea of the patent was a little bit like the integral image.
Is there some clever way of cutting down the computation, because to get good low-pass filtering, we tend to need large support? And if you implement things the obvious way, the amount of computation grows linearly with the size of the support, or in the case of images, quadratically, with the size of support. So it's good to have a small support. And that doesn't mean narrow. It just means that you have only a few non-zero values. And so let's-- so s is going to be our integral. And it's called s, because in the discrete version, we're going to have a sum. So what is it? Well, it's basically a step function-- looks like this. So we're interested in convolution with that. And let's see what that does. Right? So we just used the formula for convolution. We plug in this function. Or as we said last time, we can flip them. We can interchange them, because convolution is commutative. And why do we do that? Because now, we can change the limits of integration. Oh, the-- sorry. So this is going to restrict us to psi greater than 0. So the limits have changed. So we just end up with the integral. So well, there's a flip in sign, which corresponds to the convolution changing the order. So OK, apparently that was one too much. All right, anyway, the important thing is that we already said how we can represent the derivative in terms of convolutions. Well, here's the way we can represent integration. And by the way, the Fourier transform of the unit step is-- I forget whether it's 1 over j omega or 1 over minus j omega. OK, so we have that operator. And then we have the derivative operator, which we went through somewhere earlier. And it has a Fourier transform that's j omega. And not too surprisingly, those are inverses of one another, because if I were to convolve the two of them, what I'm doing is making an operator that first integrates and then differentiates. And of course, that's the identity.
And in the Fourier transform domain, convolution corresponds to product. So in the transform domain, we get the product of this and that, which, of course, is 1. So that's consistent. OK, so the key thing, really, is that these operators can be treated as convolutions. And I can go to the second derivative. I can look at the second derivative, and I can do it in a number of different ways. One is it's the first derivative convolved with itself. So I can take those two impulses and convolve them with another set of two impulses. And I get something that has three impulses, the standard pattern, 1, minus 2, 1-- something like that. So this would be D squared. And I can obviously take higher order derivatives that way. And in terms of the transform domain, I just-- convolution becomes multiplication. So if I do two of these, I'm just going to multiply this by itself. And I get minus omega squared. So the second derivative corresponds to minus omega squared. OK, so now, we're going to apply this to our problem of filtering. OK, so suppose that we have a filter that we call F. We're trying to take it apart and put it back together in a way that reduces computation. Now, we said that the derivative of the integral is the identity. So we can certainly write things this way. And then we can make use of various properties like commutativity. So let's start there. And then we can use associativity to write it this way. So what this says is that to convolve with the filter F, to apply the filter F, what we can do is instead apply this filter. Let's call it F prime. And we don't get the answer we want. But we can get the answer we want by integration, or summation, from left to right, right? And so why would this be interesting? Well, this would be great if this one was sparse. I mean, that doesn't always happen. But this is where we can apply this method: in the case where our function F is sparse-- well, not F itself, but its derivative. And when does that happen?
Well, it happens when it's a constant most of the time. So we're thinking particularly of splines. So we're thinking of filters that have some convenient waveform. But we're splitting it up into sections. For example, here's an approximation to a Gaussian, say. And in each of these, it's a polynomial. And so some higher order derivative will be 0. So if it's an n-th order polynomial, I have to repeat this process n times, n plus 1 times, because the n plus first derivative of an n-th order polynomial is 0. And then what happens is that I get 0 here, 0 here, 0 here. So oops. That's bad. Everything is 0. But you have something here at the transitions, right? So that's what I mean by sparse. So now, in the case of this spline of three sections, we have only four areas, where it's non-zero. And so the computation will be much more efficient. We don't get the answer we want. But we can get the answer we want by taking the result and summing it, integrating it. And of course, if we've taken n plus 1 derivatives forward, we need to do n plus 1 summation. But the summation is trivial. It's just, you have an accumulator. You run left to right. You start with 0. You just keep on adding in the gray levels. And now, you might need to worry about dynamic range. How many bits do you need to represent that? But we won't worry about that. OK, well, let's do a really simple example. So and again, typically, to do a good job, you might need fairly higher order derivatives. So suppose, for example, you were trying to approximate the sinc function. That would be very important to us, because we know that's the ideal one. If we could ever get the sinc function, we'd have a perfect low-pass filter. But so let's approximate it. And what's wrong with the sinc function the way it stands? Well, the main problem is it goes on forever. And so we need to somehow truncate it. And we need to represent it. And how close an approximation do we want? That's not very symmetrical. 
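The whole trick can be sketched in Python (the signal and the boxcar length here are made up): differentiate the filter so it becomes sparse, convolve with that, and then undo the derivative with a running sum. The result matches the direct dense convolution.

```python
def convolve(a, b):
    """Naive full convolution of two sequences."""
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

def running_sum(f):
    """Discrete integration: cumulative sum from left to right."""
    out, total = [], 0
    for v in f:
        total += v
        out.append(total)
    return out

f = [2, 1, 3, 5, 7]               # made-up signal
box = [1, 1, 1, 1]                # dense filter: four taps everywhere
sparse = convolve(box, [1, -1])   # its discrete derivative: only 2 nonzero taps
assert sparse == [1, 0, 0, 0, -1]

direct = convolve(f, box)                   # the expensive way
trick = running_sum(convolve(f, sparse))    # sparse convolution, then integrate
assert trick[:len(direct)] == direct
```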
So let's-- OK, so here's how-- it looks a little bit like the sinc function, not, of course, going on forever. But we can approximate it this way. And we probably want a cubic approximation. So in each of these four areas, the curve is described by a cubic. And there are different cubics for the different areas. And that's the whole idea of a spline. And now, if we-- it's cubic. So if we take the fourth derivative, it'll be 0 everywhere except at the junctions. So that means that if we do this, use this for filtering, we can do relatively little work, because instead of having-- say the pixel spacing is like that. Instead of having all of those non-zero values, we only have non-zero values at those four places. And this, by the way, is used heavily in all kinds of image manipulation software, like Photoshop would interpolate images using this function, except in 2D. So in 2D, it's called bicubic, because we're doing cubic in x and cubic in y. But they didn't consider a true 2D case, because it's more work. So OK, so say, for example, you take your picture. And you discover, oh, I had the camera rotated 2 degrees. And then you say, OK, Photoshop or whatever your favorite image manipulation software is, rotate it by minus 2 degrees. Well, of course, now, the pixels of the rotated image won't line up with the pixels in the new image. And so you have to interpolate. You have to figure out what values to use. And you could just use nearest neighbor. Well, nearest neighbor is pretty bad. You can see the pixelation. And in particular, if you repeat it-- say you then say, oh, actually, it was 2 and 1/2 degrees. Let me turn it another 1/2 degree, which is probably a bad idea. You want to go back and turn 2.5. But suppose you do it sequentially, then at each step, it's going to deteriorate a little bit. And if you use nearest neighbor, it'll deteriorate a lot. So the next best thing is linear interpolation, which is cheap to compute.
But it's not as good, as we'll see in a moment, as this method. And then you might say, well, why not go even further? Why not approximate the sinc function by something that looks like this-- have one more wiggle in it? And you can do that. But as far as I know, this has never caught on because of the computational cost. So if you think about doing this in 2D-- so we need to look at four places in 1D. In 2D, there's a grid of 4 by 4. So we need to look at 16 neighboring pixels to do a good job. Well, with this one, we need 1, 2, 3, 4, 5, 6, 7. Is that right? Anyway, more-- and by the time you square it, a lot more. So and there's not a huge payoff. You don't get images that suddenly look dramatically better than bicubic. And bicubic was invented by IBM. And it was used in early satellite image processing. OK, so suppose that we go back to the simple block averaging just so we can do the numbers more easily. No, I don't want to do this one-- do another one. So again, 1D case-- and now, suppose we want a block average of four. Well, let's start with-- there's one issue. We have to make some assumption about what's outside the range that's given to us. So let's assume it's 0. And at some point when we're here, we'll get 0. And then we shift 1 over. And we take that block average and get 2. And we shift 1 over. We get 3 and 6. And then we get-- better write it down from here before I make a mistake-- 11, 18. No, that's the running sum. Sorry. That's the wrong thing-- 16, 16, 13, 10, 6, 7. So that's the naive implementation of convolution. I just shift this block along and add up whatever's underneath the block. So then let me do it the other way, which in this case, doesn't save us a lot, because it's a very short example. So the other way is to add left to right-- 2, 3, 6, 11. And then adding the 7 gives 18, and then 19, 19, 21, 24, 26, dot, dot, dot. And now, I subtract. So I'm going to take this value and subtract the value four back. So I get 2. Subtract this value. I get 3.
Subtract this one, I get 6, 11. And then by 18, I get 18 minus 2 is 16. And 19 minus 3 is also 16. And 19 minus 6 is 13. And you can see how that works. And in this example, it didn't buy as much. But if we're block averaging over a large segment, that could be a huge saving. And if we do it in 2D on images, even more so. OK. AUDIENCE: How do we get [INAUDIBLE]?? BERTHOLD HORN: Because there's a-- we've got-- where were we? AUDIENCE: [INAUDIBLE] BERTHOLD HORN: 18. Oh, up here? AUDIENCE: [INAUDIBLE] BERTHOLD HORN: We're adding 0 at some point. These may not be lined up exactly-- oh, did I screw up? OK, 2, 3, 6, and-- so this one is 16. This sum is 16. And then the next sum is also 16. You see the-- we lose a 1 on the left then, but we add 1 on the right. So God for a minute there, I thought the whole method was flawed. And we'd have to give up and fire the person who wrote that patent. But no, fortunately, it works. So OK, I don't think I'll do another example. That's too boring. So linear interpolation-- so I brought up cubic interpolation and sang its praises. So I should say something about linear interpolation. And we don't think of linear interpolation as having anything to do with convolution. But of course, we can think of it that way. So again, suppose we have samples on the street grid. And then we create a function that connects those points. And the simplest way is just to draw straight lines. And that's linear interpolation. Over there, we draw qubits. OK, so that's the formula that gives us that straight line. So first of all, notice that it's linear in x. So it's a straight line. And at the important points, namely 0 and 1, it gives us the right value. So at 0, we get-- at x equals 0, we get f of 0. That's what we want. At x equals 1, we get f of 1. So that is that line. OK. Now, that corresponds to convolution with that function. And I'll leave you to figure that out. It should be clear after what we've done. 
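Returning to the block-averaging example for a moment, the two methods worked on the board fit in a few lines: a naive sliding window that re-adds the values under the block at every position, and the running-sum (prefix-sum) version that replaces those additions with a single subtraction per output. The input sequence below is inferred from the running sums quoted in the lecture (2, 3, 6, 11, 18, 19, 19, 21, 24, 26), so treat it as a reconstruction.

```python
def block_sum_naive(x, w):
    """Naive moving sum: re-add w values for every output position."""
    padded = [0] * (w - 1) + list(x)   # assume zeros to the left, as in the lecture
    return [sum(padded[i:i + w]) for i in range(len(x))]

def block_sum_running(x, w):
    """Running-sum trick: build prefix sums, then one subtraction per output."""
    prefix = [0]
    for v in x:
        prefix.append(prefix[-1] + v)  # 2, 3, 6, 11, 18, 19, 19, 21, 24, 26
    return [prefix[i + 1] - prefix[max(i + 1 - w, 0)] for i in range(len(x))]

# input inferred from the running sums on the board
signal = [2, 1, 3, 5, 7, 1, 0, 2, 3, 2]
```

The naive version costs on the order of n·w additions; the running-sum version costs about n additions plus n subtractions, which is the saving that matters for wide blocks and, even more so, for 2D images.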
And that, in turn-- this triangle, in turn, is the convolution of those two boxcars. That's also easy to see, because we take one of them. We flip it. That does nothing. And then we slide it underneath the other one. And initially, there's very little overlap. The overlap increases linearly until we're right on top of it. And then it linearly decreases until we drop off the right-hand end. So convolution of the two boxcars is that thing. So by the way, you can see that, in some sense, right, just by looking at this, it's an inferior interpolation method to the one that's using that approximation to the sinc function. And it has a number of nasty properties. One of them is that the quality is different close to the sample points than, say, in the middle between them. Why is that? Well, suppose there's noise in the image. Then if I'm here right on top of the sample point, I just inherit the noise of that image measurement. Let's say it's sigma. So there's some sort of variability sigma. Now, if I'm halfway in between them, I'm taking the average of these two values. And so there's noise in this. There's noise in that. The variances of independent noise add, but since I'm averaging, my variance is (sigma squared plus sigma squared) divided by 4, which is sigma squared over 2. So the standard deviation is sigma over the square root of 2-- so the noise in the interpolated result is different when I'm at different places in this transition. Now, that's also true of the cubic interpolation. But it's a much smaller effect. In the case of linear interpolation, you're averaging two independent noise values when you're in the middle. So you have a reduction in noise. But when you're sitting right on top of the original samples, you're getting the full effect of the image measurement noise. So it's an effect that's kind of annoying. OK, so in terms of transform, we know that this transform is sinc.
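The two facts just stated — the interpolation formula f(x) = (1 − x) f(0) + x f(1), and the triangle being the convolution of two boxcars — can both be checked directly; a minimal sketch:

```python
def lerp(f0, f1, x):
    """Linear interpolation on [0, 1]: exact at the two endpoints."""
    return (1 - x) * f0 + x * f1

def convolve(a, b):
    """Direct (naive) discrete convolution of two sequences."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, u in enumerate(a):
        for j, v in enumerate(b):
            out[i + j] += u * v
    return out

# Boxcar convolved with boxcar gives the triangle (the linear-interpolation kernel):
triangle = convolve([1, 1, 1, 1], [1, 1, 1, 1])
```

The triangle ramps up linearly to a peak and back down, exactly as the sliding-overlap argument predicts.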
And this transform is sinc in the frequency domain. And so we get sinc squared. So and we've talked about sinc squared. So this is not a terribly good low-pass filter. It's better than a single stage. It's better than-- oh, so I mentioned that the crudest method is nearest neighbor. So an interesting question is, what convolution does nearest neighbor correspond to? So let's think about that. So we've got the cubic one. And we've got the linear interpolation. And what about nearest neighbor? Well, with nearest neighbor, what my picture would look like is-- let's see. Suppose the next value is up here. So with nearest neighbor, I'm going to get these steps. So it's a piecewise constant. Over here, we have something that's piecewise linear. And in the case of the cubic interpolation, we have something that's piecewise cubic. And you may ask why there is no piecewise quadratic. But that's a long story. So we don't do that. So that's the result we get from-- and you can see that this is obviously inferior to even linear interpolation. And then so nearest neighbor corresponds to convolution with the boxcar. If you think about it, that makes sense. You think of each of the samples as an impulse. And now, we're convolving those with a boxcar. And that will create a box like this and a box like that and a box like this. So each of these impulses creates a box. And you put them together. You have the linear interp-- the nearest neighbor interpolated result. And so in some sense, the linear interpolation is doing that twice. It's twice as good, in some sense. It's the quadratic in the transform domain. OK, so as long as we live in image coordinates, x and y, we can treat them independently as we have, for example, with interpolation. We said, well, we'll use a cubic spline in x and a cubic spline in y and so on. But we keep on saying that really, things should be-- rotation is symmetric. There's no preferred direction in natural images. 
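To see the quality ordering claimed here — nearest neighbor worse than linear interpolation — one can reconstruct a smooth signal from its samples both ways and compare the worst-case error. A small sketch; the test signal and grid spacing are chosen arbitrarily for illustration:

```python
import math

def nearest(samples, x):
    """Nearest-neighbor reconstruction: piecewise constant."""
    return samples[round(x)]

def linear(samples, x):
    """Linear interpolation between the two bracketing samples."""
    i = min(int(x), len(samples) - 2)
    t = x - i
    return (1 - t) * samples[i] + t * samples[i + 1]

f = lambda x: math.sin(x / 3.0)              # an arbitrary smooth test signal
samples = [f(i) for i in range(20)]          # unit-spaced samples
xs = [i * 0.1 for i in range(190)]           # dense evaluation grid

err_nn = max(abs(nearest(samples, x) - f(x)) for x in xs)
err_lin = max(abs(linear(samples, x) - f(x)) for x in xs)
# linear interpolation beats nearest neighbor: err_lin < err_nn
```

The piecewise-constant reconstruction is off by roughly the local slope times half the sample spacing, while the piecewise-linear one is off only by a second-derivative term, which is why its error is an order of magnitude smaller here.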
And that's not entirely true, because we often line things up with a horizon. And then we have man-made structures, which are rectangular often. But in some sense, there shouldn't really be some special bias. Why did we pick a square pixel or a rectangular pixel coordinate system? And so what if, instead, we're thinking about a rotationally symmetric situation? And one of the questions that comes up is, what's the equivalent of a low-pass filter? So we can easily imagine a low-pass filter like this. So u and v are the frequency components corresponding to x and y. And here, we've got a low-pass filter in the x-direction and a low-pass filter in the y-direction. And that's a legitimate way of thinking about limiting the frequency range, but it's not rotationally symmetric. So the more interesting question is-- do I want to go-- yeah. What if we have a low-pass filter that's rotationally symmetric? What is the real two-dimensional equivalent to what we've been talking about in 1D? And so this function-- I'll call it a pillbox. I guess nobody knows what pillboxes are anymore, although old people these days often have some sort of pillbox. So they remember whether they've taken their medications. But those boxes are rectangular. That's not what I'm talking-- I'm talking about a round thing. Viewed from this-- perspective view would be like this. So its value here is 1. And its value out here is 0, right? It's passing all those low frequencies. And it's not passing any high frequencies. And you can find its transform. But I'll just write it down, because it's somewhat tedious. So actually, inverse transform, right, because we're going from frequency domain to the other domain, except the transform's pretty much symmetrical. And we have a rotationally symmetric object. So in that case, it doesn't matter at all. OK, so what's rho? Well, that's in a polar coordinate system in the spatial domain.
And you'd imagine that the inverse transform of this should be rotationally symmetric, as well. And it is. And if you go along any radius, it will vary as this ratio, where J1 is the first-order Bessel function. And this is a famous function in optics. For example, when Airy first talked about the resolution of a microscope, he found out that its point spread function, its response to an impulse, is that. And then his famous equation for resolution is based on the first zero of this function. So this is the equivalent, in some sense, of the sinc function. And if we plot it-- now, if I plot it on the blackboard, it'll look exactly like the sinc function, because it has a central peak. And it goes to 0 and then comes out the other side and dies away. And you'd have to actually look at the values to see that it's different. So there are two ways in which it's different. The one is that the first zero is at 3.8317, dot, dot, dot rather than at pi. And the zeros are not multiples of that first one, although they tend to be closer and closer to periodic. And the other thing is how it drops off-- the sinc function drops off as 1 over f, right, because we've got the sine at the top that's always between plus 1 and minus 1. And we're dividing by the frequency. So the sinc function drops off as 1 over distance. This drops off as 1 over something to the 3/2, right? So it drops off actually a little bit faster than the sinc would have. So that's the ideal low-pass filter for an image. And that's in 2D. And we've only shown the cross-section of it. But it's rotationally symmetric with this cross-section. And so when you look through a microscope, for example, at a single point of light and you were to plot the electric field in the image plane, it would look like that. Now, negative values, brightness-- well, the thing is we don't sense the electric field. We sense power. We sense the square of it.
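Before moving on: that first zero can be reproduced without a special-function library, using the integral representation J1(x) = (1/π) ∫₀^π cos(θ − x sin θ) dθ. A pure-Python sketch — trapezoidal quadrature for J1, then bisection for its first positive zero:

```python
import math

def bessel_j1(x, n=2000):
    """J1 via its integral representation, trapezoidal rule on [0, pi]."""
    h = math.pi / n
    total = 0.5 * (math.cos(0.0) + math.cos(math.pi - x * math.sin(math.pi)))
    for k in range(1, n):
        t = k * h
        total += math.cos(t - x * math.sin(t))
    return total * h / math.pi

# Bisect for the first positive zero, bracketed between 3 and 4.5.
lo, hi = 3.0, 4.5
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if bessel_j1(lo) * bessel_j1(mid) <= 0.0:
        hi = mid
    else:
        lo = mid
first_zero = 0.5 * (lo + hi)   # about 3.8317, versus pi for the sinc function
```

The second zero, computed the same way near 7.0156, is not an integer multiple of the first, confirming the "not quite periodic" remark.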
So in terms of image brightness, we actually have the square of this. We get that and so on. And Airy's resolution criteria is based on this first 0. And it says, if you've got two objects that are this far apart, you can tell that they are two objects sort of, because the second object-- its blob would be something like that. And it's kind of obviously arbitrary. But it's a useful number to have in mind. OK, so that's sort of neat. And it leads to something else, which is the Fourier transform is symmetrical, pretty much, forward and back. So here, we've taken the-- let's see. We've taken the inverse Fourier transform of something in the frequency domain. And we've got this Bessel function thing. Now, when we're talking about defocus, we were talking about pillboxes basically. So let's do that. And now, that's in the spatial domain. We said that if your image is out of focus, you're convolving with a pillbox in the spatial domain, right? Remember, we had the lens. And the lens brought light to a focus. And unfortunately, our image plane was in the wrong place. So we cut that cone of light. And we then get a pattern like this. And the radius depends on how badly we are out of focus. Let's call it R. Now, this time, the value inside here is not 1. It's 1 over pi R squared. Why? Well, I want the integral to be 1, because basically, all the energy that used to come to this point is now spread out over this area. And if I go more out of focus, it's spread out over a larger area. So I should really normalize to that area. Now, because of the symmetry of the Fourier transform, we can pretty much just write down the answer, because if the pillbox in the frequency domain turns into that Bessel functions thing in the spatial domain, then a pillbox in the spatial domain should turn into that Bessel functions thing in the frequency domain. So that's very important. This is what defocusing does. OK, and you can see that we have a decrease of high-frequency content. 
But also, we actually kill some frequencies completely, right, because of these zeros. So if I were to plot the energy in the frequency domain, I would expect to see an area, where it's actually 0, and another area, where it's actually 0 and so on. And this works. So we had a Euro student take a DSLR and take pictures of-- I don't know if-- the memorial to the fallen in the center of building 10. And then he commanded-- now, this is a Canon that allows you to control the lens. So he commanded the lens to take a step, took another picture, commanded the lens to take a step, took another picture. And so he got a whole stack of pictures that are in different degrees of defocus. And one of them is pretty much in focus. Now, unfortunately, Canon doesn't tell you how big the step is. So part of his exercise was, can you figure out what the step size is? Well, you can if you can figure out how big these pillboxes are. And how did he do that? Well, he looked at the frequency domain. And in the frequency domain, once you know where this ring of zeros is, you're done. You can calculate the radius. You can calculate R. And so he was able to determine how R changed as he incremented the lens position. And then knowing the aperture of the lens, he could calculate the step size. And OK, well, it's not quite that simple, because there's noise. So it's not like you can actually expect this to be exactly 0. But if you just plot this as a picture, you look at it as a picture, there's a dark, dark ring. And then there's another dark ring. And then you can eyeball it. There's another effect, which is that in an ideal lens, this is exactly what we would expect to get. But in practice, as we mentioned, lenses always have some compromise between different problems-- chromatic aberrations, spherical aberration, astigmatism, and so on. So the actual point spread function isn't a pillbox like I've drawn it over there. It's a little different. And that was part of the exercise. 
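The ring-to-radius part of that calculation can be sketched as follows. With a Fourier transform convention in which the transform of a unit-volume pillbox of radius R is 2 J1(2πRρ)/(2πRρ) for radial frequency ρ in cycles per unit length, the first dark ring sits where 2πRρ₀ equals the first zero of J1, so R follows from the measured ring radius. The convention and the variable names here are assumptions; the lecture does not fix a normalization.

```python
import math

J1_FIRST_ZERO = 3.8317  # first positive zero of the Bessel function J1

def blur_radius(ring_freq):
    """Pillbox (defocus) radius R from the radial frequency of the first
    dark ring in the image spectrum: solves 2*pi*R*rho0 = 3.8317."""
    return J1_FIRST_ZERO / (2 * math.pi * ring_freq)

# e.g. a first dark ring at 0.1 cycles/pixel implies R of about 6.1 pixels
```

The inverse relationship also matches intuition: the more defocused the image (larger R), the tighter the ring of killed frequencies pulls in toward zero.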
Find out how that changes. And interestingly enough, it's different below focus than above correct focus. So in our ideal model, of course, you would expect exactly the same defocus effect when you're epsilon below perfect focus as when you're epsilon above perfect focus. And in practice, that's not quite the case. If you look at the cross-section of the pillbox instead of-- this is the ideal pillbox cross-section, of course. And if you look at it on one side of defocus, it's more concentrated near the edges. I mean, for a start, it's not a perfect step edge. And then on the other side of defocus, it looked more like that. So you could imagine that that led to some difficulty in actually implementing this idea. But it's a first approximation. So here's an interesting idea. Suppose that you get a picture. And you want to know whether it's the original or whether someone printed it out and then took a photograph of that. And we'll get back to that in another way. I mean, one of the things that happens is that we get perspective projection. So we print it out. Now, we take the camera. We get perspective projection again. Is the effect of two perspective projections in sequence the same as a single perspective projection? And it turns out it isn't. So you can tell, as we'll see, that there's been some cheating going on, that someone modified the picture and took another picture of it. But here's another way. Suppose that it's slightly out of focus. And then you take a slightly out of focus picture of that slightly out of focus picture. Is that equivalent to taking a picture that is more out of focus, like the sum of the two out of focus values? And so what are we doing? We're convolving with the point spread function, which is this pillbox once. Take the result. And then we convolve again with the pillbox.
And that's equivalent to multiplying by that and then multiplying by that, right, because convolution in the spatial domain corresponds to multiplication in the frequency domain. And this is not-- this product is not equivalent to J over something for any value of R. Or if you just look at the frequency domain, if you pass it through both of these operations, you will lose all of these frequencies. But you will also lose all of the frequencies of the other operation, which could be here. So it might not be trivial to use this method. But conceptually, at least, there is a difference between a defocused picture of a defocused picture and any single defocus step. And you should be able to tell that something fishy is going on. So oh, OK. So just a note-- so I didn't say a lot about the patent, because I thought it'd be more important to understand the linear systems theory behind it. And when we're done with that what we're going to do next is photogrammetry. Photogrammetry is about using images to take measurements. And it was an early application of photography. You send someone up in the hot air balloon. And he takes a picture. And maybe he takes a picture from a different position. And then you can create a map. You can actually get three-dimensional information. So we'll be talking about using images to create, particularly, 3D information. So it's kind of going up. We've been working most of this time at what some might consider very low level. We're dealing with pixels, gray levels, radiance, irradiance, and so on. So now, we're going up. So we will assume that we know how to get certain features, like edges. And then we're going to try and match them up between images in order to get information about where the camera was and information about the objects. So OK.
Principles_of_Microeconomics
Chapter_18_The_Market_for_Factors_of_Production_Principles_of_Economics.txt
in this chapter we're going to talk about the markets for factors of production or the market for inputs so one of the things that we're going to be thinking about in this chapter kind of the main thing that we're going to be thinking about is what determines the income that a person gets paid for a job so for example medical doctors earn more than High School teachers and we want to understand what it is that helps us understand why that's the case so when we think about factors of production you may have heard that term before I use it interchangeably with inputs so if we're talking about the inputs we're thinking about the things that are used to make output goods and services so the inputs are going to be things like land labor and capital those are typically the three inputs that we think about land being physical space labor obviously being um a person doing physical labor and then Capital would be buildings and equipment so if we think about the market for those inputs we would analyze the market for an input the same way that we would analyze the market for an output and what we know about output markets is that price is determined by demand and supply so really everything that we're going to do in this chapter boils down to demand and Supply except we're thinking about the demand and supply for an input now if we think about the demand for an input if we think about say the demand for labor we would call that a derived demand because a firm's demand for labor is derived from its desire to produce the good or the service so demand for inputs is derived from the production of the output if the firms were not producing any output then they would have no need to purchase any of the inputs so let's start by thinking about the demand for labor we're gonna think about how the labor market works and then here in just a little bit we'll talk about the market for land and the market
for Capital and we'll see that it works exactly the same so let's start with labor labor markets function like other markets that we've talked about in other videos so if we think about what those markets look like we're going to have a demand for labor a supply of labor the price of Labor we'll call the wage here so that's just a price and then down here I'm going to put l for labor we can think about that as the number of units of Labor maybe worker hours if we wanted to we're not too worried about that so let's make sure we write labor here one of the things that we want to do is we want to start by thinking about the firm that is behind the demand for labor as being a perfectly competitive firm so we'll think about the firm as participating in a perfectly competitive output market so let's suppose this firm is selling its output in a perfectly competitive market and let's suppose that the input Market is also perfectly competitive now out there in the real world um most markets are not perfectly competitive but we need to start there and then we can think about what happens when one or both of these markets is not perfectly competitive and let's suppose that this firm produces something like say meals a boxed lunch it doesn't really matter what the quantity is but I came up with a numerical example here in just a second and um so we need something to talk about so let's suppose that the firm is producing these boxed lunches I'm just going to call them meals um and let's suppose that the firm remember that if you're a perfectly competitive firm then that means you're a price taker so the firm is a price taker in the market where it sells these meals and it's a price taker in the market where it buys labor from it doesn't have any control the firm is small compared to the size of both of the markets that it participates in the output market and the input Market the firm is also going to desire to maximize profit so if you're in the
principles of micro class let's say you've probably already talked about firms that are maximizing profit so hopefully that's familiar to you if not I've got videos on that so um the firm is going to be making its revenue from selling these meals that it's making and it's going to be spending money on the inputs and and labor here is one of those inputs that it's going to be purchasing now The Profit maximizing choice of number of meals to produce is then going to determine the profit maximizing amount of Labor that it needs to purchase okay and so that's why we call this a derived demand it's derived from the firm's decisions in regards to how many meals it's going to produce so now clearly the productivity of workers is going to be a key part of this decision okay so what we need to do is we need to think about the production function okay so I'm going to put you've probably seen a production function before I'm going to put my production function right over here I'm going to start with a column here for labor and let's just think about the number of units of Labor starting at zero and going up to five so we can think about these as the number of people employed to make these meals and let's think about the number of meals that we can make in a day I'm going to call that Q well that's just going to be output okay so this is the output number of meals the firm produces if they have no workers then they produce no meals let's suppose that one worker can produce 50 meals in a day and then if they bring in let's suppose they want to produce more than 50 so they bring another worker in to produce meals that suppose they can produce the two of them can now produce 90 three can produce a hundred and twenty four can produce 140 and 5 can produce 150. 
if you've watched my videos in particular my cost of production videos you've seen numbers like this before with there we talked about a production function and we used it to come up with a total cost curve and actually I used these exact numbers in a different example um so now let's think about what that production function this information the number of workers and the amount of output that we can produce let's think about what that production function looks like I'm going to add some more columns to this so as you're taking notes don't um put anything just to the right of that but let's draw this production function now any production function we typically put the amount of the input that we're talking about down here on the horizontal axis and in this case we're going to put labor and then we put the amount of output up here on the vertical axis so labor goes up to five so one two and then our amount of output goes up to 150 but it's not going up by the same amount each time so I'm going to put 50 here and 100 here and 150 here and then we're just going to graph these combinations of points if we have zero workers we get zero units of output one worker we can produce 50 meals in a day two workers we can produce 90 so that's going to be somewhere right in there three workers we can produce 120 so that's going to be somewhere maybe right around there four workers we can produce 140 so that's going to be something like that that one probably should be up there and then five workers we're going to produce 150 so that's something like that and if we connect those we get what's referred to as the production function and you can see that this production function is not linear it starts to curve off and flatten out let's talk about um why that thing is not linear why is it that as you bring in more workers you're not why why is it that is when we bring in that second worker why are we not able to get a hundred meals in a day why is it only 90 and then we bring in that 
third worker and now we only get 120 instead of 150 um in order to understand that we need to define something else that we call the marginal product you've probably seen it before I'm going to put marginal product right here but let's define it right over here so our marginal product is just the change in total product or the change in total output any time you have marginal anything the word marginal you can think of as change in so marginal product is going to be change in total product let's write it here marginal product which I'm going to abbreviate MP is going to be how much output changes as we change the number of workers that we've got so I'm going to use that triangle to stand for the word change in if you've watched my videos you've seen me do that before so we've got change in output when we change the amount of Labor that we're using and we're always going to be at least in this example thinking about a change in labor of being one unit so we're going to start here where we have no units of Labor and now we're going to use the first unit of Labor and we see that output goes from 0 to 50. so the marginal product or the change in output is 50 when we use that first worker and then when we go from one worker to two our output goes from 50 to 90 so it went up by 40. and then when we go from two workers to three output goes from 90 to 120 so it went up by 30. you can see that marginal product is going down by 10 each time so it's down to 20 and then 10. so there's marginal product marginal product turns out to be the slope of the total product or the production function so the slope of this little segment right here would be 50 the slope of this little segment right here is 40 slope here's 30 20 and 10. 
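The marginal product arithmetic just described can be sketched in a few lines, using the numbers from the table above:

```python
# production function from the lecture: index = number of workers, value = meals/day
production = [0, 50, 90, 120, 140, 150]

# marginal product: change in output from adding one more worker
mp = [production[L] - production[L - 1] for L in range(1, len(production))]
# mp is [50, 40, 30, 20, 10] -- each extra worker adds less than the last
```

The strictly decreasing sequence is the law of diminishing marginal product, and each value is also the slope of the corresponding segment of the production function.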
so let's talk about why marginal product is not constant if this thing was linear then marginal product would be constant right it's not because of something that we call the law of diminishing marginal product and if you've seen if you've talked about costs of production you've probably already discussed that but the idea behind the law of diminishing marginal product is that anytime you've got some fixed input so for example let's suppose you've got a fixed amount of space for your workers to work in or a fixed amount of equipment for them to use and you want to increase output as you bring more workers in they start to have to share that equipment and share space they get in each other's way and so everybody's productivity goes down and and somebody maybe your mom or your grandma might have taught you a lesson about that by saying that there's too many cooks in the kitchen what that means is there's enough Cooks in here that productivity is going down and if we were to take Cooks out of the kitchen productivity would go up so that's the law of diminishing marginal product so now what we want to do we're going to add still yet another couple of columns out there we want to talk about something that we call the value of marginal product value of marginal product so when you bring another worker in that results in output going up that's what this is when you bring that second worker in output goes up by 40 and we want to know okay well what's the value of that 40 units what's the value of the marginal product that another worker is producing um when I learned this we called it marginal revenue product and so some textbooks still call it marginal revenue product in fact a lot of times if I'm not really careful I'll just accidentally call it marginal revenue products so I'm going to abbreviate this VMP but if you hear me say marginal revenue product when I'm referring to this thing that's what's going on they're they mean the same thing and the value of marginal 
product is really simple all you have to do is take the price of the good and multiply it by marginal product so value marginal product is just marginal product the number of additional units that you can produce when you bring in another worker times how much each of those units sells for that tells you the value of the marginal product so if you hire another worker and and that results in output going up by 10 or 10 units and each of those units sells for five dollars and that's fifty dollars worth of um product that that bringing that worker in has gotten you so we can think about calculating our value of marginal product let's suppose that a meal sells for ten dollars okay so the price let's say the price of a meal is equal to ten dollars now that would be determined in the market for these meals um so let's calculate our value marginal product here oops value marginal product we're obviously not going to calculate it we've we're always when we're talking about these marginal things we're going to be thinking about starting at zero and going to one so we're not calculating anything for the the zero there so when we bring in that first work or marginal product goes up by 50 at 50 meals and each one of those meals is worth ten dollars so the value of marginal product is going to be five hundred dollars now when we bring in that second worker our marginal product is 40. 
that's an additional 40 meals valued at ten dollars a meal that's 400 dollars worth of meals that bringing in that second worker got you 30 marginal product of 30 times 10 is 300 you can see that this thing is going down by a hundred dollars each time two hundred dollars one hundred so now we know hear what each worker gets us in terms of output but what we're really interested in is it what's that worth in terms of dollars what's that worth in the market so now the firm is going to be making a decision about how many workers that it wants to produce or that it wants to hire it knows the value at the margin of each of these workers what we need to know now is what's the cost at the margin of each of these workers um so let's suppose that the wage is say a hundred and fifty dollars per day okay so suppose the wage is equal to a hundred and fifty dollars per day so if you bring another and bring in another worker it's going to cost you a hundred and fifty dollars per day so I'm going to put another column over here let's just put wage and this is we've everything here is on a per day basis um so 150 dollars 150 dollars bringing in another worker costs you a hundred and fifty dollars at the margin so now the question is how many workers is this firm going to want to hire well if we think about this first worker let's think about the way that we make decisions one of the basic principles of economics that we know is that a decision maker takes an action if and only if the benefit at the margin is bigger than the cost at the margin well here's the benefit at the margin that's what the value of an additional worker is here's the cost of an additional worker so if we bring in that first worker that's going to result in 500 additional dollars coming into the business and it's going to cost us 150 clearly that's a good idea so we do want to hire the first worker if we think about the second worker here the second worker is going to bring in less than the first worker 
they're going to bring in 400 but still only going to cost us 150 that's a good idea too all right third worker brings in 300 costs us 150 that's a good idea fourth worker brings in 200 costs us 150 so we want to hire all of these workers up to the fourth including the fourth the fifth brings in a hundred and costs us 150 we don't want to hire that fifth worker so hiring four workers would be the profit maximizing decision for this firm let's think about what this value marginal product curve gets us or the marginal revenue product curve if you're used to hearing that term if we were to graph that thing here's what it looks like so if we put up here the value marginal product and that's just in terms of dollars so that's what we've got on the vertical axis and then down here we've got the number of units of labor if we were to graph the quantity of labor and the value marginal product it would be some points at 500 400 300 200 100 it would look like this and what that tells us is we can look at the wage let's suppose we had a wage out here suppose it was right there that wage is determined in the market if we were to go over from that wage to our value marginal product curve that would tell us how many units of labor the firm is going to want to purchase so if that was the wage the firm is going to purchase that many units of labor it's going to buy all these units of labor up to that point so what this value marginal product curve is is really the demand for labor whatever the wage is we know that the firm is going to want to hire all of these units of labor up to that point and not want to hire any of those units of labor now the reason that they want to hire say this unit of labor is if you go up to the value marginal product curve you see that the value of that unit of labor is right up here and here's how much that unit of labor costs at the margin the value
is greater at the margin than the cost at the margin so the firm certainly wants to hire that unit if we look at that unit right there there's the value at the margin there's the cost at the margin they want to hire that unit they want to hire every unit up to that point but if we were to look at units of labor out here I'm going to extend the wage out here if we were to look at units of labor out here then here's the value at the margin and right up here is the cost at the margin that's a bad idea they don't want to buy any of those units of labor so whatever the wage is the wage can be up here the wage can be down here if we go over from that wage to the value marginal product curve that tells us the number of units that the firm wants to purchase so that is the firm's demand curve for labor okay so the value marginal product curve is the firm's demand curve for labor so now that we know what the firm's demand curve for labor looks like let's think about what would cause that demand curve to shift so let's call this what causes the labor demand curve to shift so let's talk about some things that will shift that labor demand curve the key to understanding this is in making sure that you understand that there's the value marginal product curve we can see that it is the price of output multiplied by the marginal product of labor so anything that changes the price of output will cause this curve to shift around and anything that changes the marginal product of labor will cause this curve to shift around so let's start by thinking about output price I'm just going to say here what we're talking about is p so because the value of marginal product is the output price multiplied by marginal product anything that causes output price to go up will increase the demand for labor and anything that causes output price to go down will decrease the demand for labor so an
increase in p will cause an increase in value marginal product and an increase in demand for labor and the reason is that when that output price goes up output is more valuable okay and so these values would rise and consequently the firm's derived demand for labor is going to increase because the good that is being produced with the labor is now more valuable than before so a change in output price will cause a change in the demand curve for labor we could also think about technology if we have a technology change the way we would describe a change in technology is that it would change the marginal product of labor so any technology that increases marginal product will increase the demand for labor okay so an increase in marginal product will result in an increase in the value of marginal product because there's more marginal product there and that increase in value of marginal product will shift that demand curve for labor to the right now it is possible for a change in technology to result in us using less labor so for example we could talk about what's referred to as labor-saving technology improvements so the development of a kiosk that you can put at the front of a fast food restaurant that will allow customers to order from it rather than ordering from a person that replaces a human worker so that's certainly a possibility but most of the time when we have technology improvements those are what we call labor augmenting technology improvements meaning that it makes the workers that you've got more productive and consequently it results in an increase in demand for labor okay so we've got output price we've got technology I'm going to put here that technology is going to change marginal product we can also think about a third thing and that would be the supply of other factors I'm going to say other
inputs so if the supply of other inputs like say capital goes up that's typically going to increase the marginal product of the other inputs that are used along with it okay so an increase in the supply of capital would increase the marginal product of labor and vice versa so we could think about say more physical equipment in our kitchen would allow our cooks to be more productive than before so an increase in the supply of capital would cause an increase in marginal product or a decrease in the supply of capital could decrease marginal product okay so that's an important thing that can shift the demand curve for labor and those are really the three that we're going to think about so in terms of how to remember that just remember anything that causes a change in the output price output price goes up or down that's going to change demand for labor and then these two things that can change marginal product technology and the supply of other inputs let's talk now about the supply of labor we've got half of what we need we've got the demand for labor once we add to that the supply of labor then we'll be able to see that equilibrium where our equilibrium wage gets determined so let's think about the choice of how much to work that's an important decision that we all make we can work a little we can work none if we want no hours we could work a lot and so we think about how we make that decision of how much work to engage in versus how much time we spend not working and I'm going to think about our time spent not working as leisure so we've got two uses for time labor and leisure I've got another video on utility and how consumers make choices and this is one of the examples in that video that labor leisure decision it's an important decision that we all end up making now if we think about the price of labor or what the reward to working would be that's the wage so let's
suppose your wage is twenty dollars that's determined in the market you don't have any control over that but let's suppose the wage is twenty dollars that means that if you choose not to work if you choose to engage in an hour of leisure then you're giving up twenty dollars that you could have gotten for working for that hour so the opportunity cost of leisure is twenty dollars okay so this wage is going to be the opportunity cost of leisure of not working if we were to think about what happens if the wage goes up let's suppose the wage increases to 25 dollars if the wage increases to 25 well now the opportunity cost of leisure has gone up and if the cost of leisure goes up all other things equal we're going to want less leisure so as the wage rises we tend to work more and engage in less leisure okay so all this is telling us is that the labor supply curve is upward sloping so if we were to put wage up here and the amount of labor that workers are supplying that's what it's going to look like now it's actually a little bit more complicated than that so if you were to go and watch the video that I've got on consumer choice you'll see that it's not quite that simple because if the labor supply curve always just sloped upward like that then what that means is that if we just keep increasing the wage you just supply all of the hours that you've got for working you don't engage in anything except work well we all know that that's not really how we behave it turns out that leisure is actually a normal good and so there's another thing happening behind the scenes as wage goes up and that is that your income is going up you don't have to work as many hours to earn the same income as before and so what tends to happen is that at some level of wage people start to say you know what I don't need to work as hard now to earn the same amount of money so at this higher wage I'm going to work a little bit less and engage in more
Leisure and if my wage were to go up even more you know what I'm just going to engage in even more Leisure than before so there's actually very good evidence to indicate that labor Supply curves actually Bend back like that there's actually a downward sloping portion on the labor supply curve so if we increase wage at low levels of wage people work more and engage in less Leisure but at some point as we continue to increase wage they start to say you know what I'm working enough I'm going to start engaging in more Leisure because at these higher wages I don't have to work as hard to make the same amount of money okay we're going to leave it like this we're going to keep it nice and simple so now let's talk about what causes the labor supply curve to shift so I'm going to clear this off and then we'll talk about those things okay so let's think about what causes labor Supply to shift and the first thing that we'll think about would be changes in tastes changes in taste so your decision about how much you work is going to be determined by your preferences um it could be that there are other things other uses of your time leisure activities that you might all of a sudden become very interested in so it may be that let's suppose you're you're not too many years from retiring and um you think you know what I'm still going to work for another who knows maybe five or six years and then all of a sudden your preferences change and you get interested in something and you decide to retire early or it could be that maybe you're young and you have very little interest in working right now but then all of a sudden you decide that you know what I I I'd like to have some more money but to to do some other things and in order to get that money I need to work so it could be that you decide to start working more than before if we look at say labor force participation rates one of the things that we've seen is that for young males 16 to 24 the labor force participation rate has been 
declining for decades a smaller and smaller percentage of of males 16 to 24 are actually working and there's been a lot of research on what's going on what is it that that is causing young males to not want to work like they did 50 years ago and nobody's exactly sure it's probably different for every person but there are some things that have kind of come out in the research that that might be the culprits gaming is one of those things substance abuse is one of those things a larger percentage of young males are unfortunately get involved with with substances that are addictive and that can result in them not wanting to work or if they spend a lot of time playing games video games then the that takes up time that would come under a change in preferences we could also think about changes in alternative opportunities I'm just going to say alternative opportunities and what we mean here is that the supply of labor in one market is affected by changes in other markets so if wages rise in one market then some workers are going to leave other markets to come to the market where labor is more valuable and so you can see changes in the supply of labor in in other markets being affected by what's happening in the market that we're thinking about right now another thing is the number of workers we could say uh I'll say number of workers but an important aspect of this would be say immigration workers move from from one region to another it can be across National boundaries it can be from state to state can be within a state and so what's happening is that if workers are moving regionally then supply of labor is going down where they're leaving and going up where they're going to so now that we understand why the demand and Supply curves look the way they do for labor and we understand what causes the supply and demand curves to shift now we can start to think about equilibrium in the labor market and it looks just like equilibrium looks in any other Market that we've talked 
about so we've got the demand for labor we've got the supply of labor we've got the wage up here number of units of labor down here what's behind the demand for labor is the businesses and what's behind the supply of labor is the workers if we were thinking about the market for gasoline back when we first started talking about demand and supply usually what we're talking about is that the supply curve represents the producers the firms and the demand curve represents consumers but now when we're talking about the labor market it's the demand curve that represents the firms because they're the ones buying the inputs and so our equilibrium of course is right here our equilibrium wage we could call w star and our equilibrium amount of labor we could call l star so there's equilibrium in the market anything that changes demand or supply will change that intersection and consequently will change the equilibrium wage and the equilibrium amount of labor so let's just do an example or two before we do that let's go back to something we saw earlier in this video and that is that the wage is equal to price times marginal product which we can write as value marginal product anything that changes price or marginal product is going to change the value of marginal product so let's do an example let's start out at equilibrium point a we've got our wage up here amount of labor here I'm going to call our initial wage w1 our initial amount of labor l1 so our story is going to be we start at a the wage is equal to w1 and the initial amount of labor that's employed is going to be equal to l1 that's the amount of labor demanded and the amount of labor supplied and then let's suppose that we have a change in preferences let's suppose that we have a decrease in labor force participation that's one of the things that we talked about as changing the supply of labor that's right up there changes in
tastes so I'm going to call my initial supply of labor S1 that's going to decrease our supply of labor to S2 we get a new equilibrium right up here at point B and we see that that's going to drive up the wage to W2 it's going to drive down the amount of Labor bought and sold to L2 and you can see this is what we're doing here is exactly how you would analyze a change in demand and Supply demand or Supply in an output Market it's just that we're talking about an input here so not hard at all let's do one where we have a change in labor demand let's suppose that um suppose the price of output goes up so if we were to think about another labor market Let's uh consider the impact on wages of say welders if the price of the output that they're producing goes up so let's Suppose there are some some welders that are involved in producing metal trailers okay so let's start out same thing we've got our demand that one's going to shift so I'm going to call it D1 we've got Supply here's wage amount of Labor our initial equilibrium at a initial wage at W1 an initial amount of labor at L1 so start at a wage is equal to W1 labor is equal to L1 and let's suppose that uh the price of these metal trailers goes up increase in the price of metal trailers so remember here our value marginal product our demand for labor looks like this W is equal to Output price times marginal product and so what we're increasing here is we're increasing that we're increasing the price of metal trailers don't be confused and think that somehow we're increasing that price this is the price of Labor up here so if that price goes up output price goes up that means that at the margin what these workers are producing is now worth more that's going to increase demand for labor from D1 to D2 we get a new equilibrium up here at point B and we get a higher wage and a larger number of or amount of Labor that's employed in the the market so we could have also changed labor Demand by changing technology that would 
have changed marginal product or we could have changed the supply of some other input that's used in the production process either the amount of space that they've got land or maybe the amount of equipment that they're working with um so here's what we're seeing we're seeing that one of the basic principles of economics is that your standard of living is determined by your ability to produce something that somebody else wants to buy and that other thing that people want to buy the more valuable that is in the market the higher your standard of living is going to be so your standard of living depends upon the value of the marginal product that you can produce of course that also is dependent upon the amount of of the input that there is how many people are there that can do what you do now we've assumed here all of this stuff that we've done has been assuming that the markets that we're thinking about here are competitive and it turns out that out there in the real world most markets are not competitive so we could think about let's talk about one labor market that is not competitive and we could think about a type of Market that we call monopsony monopsony sounds a lot like Monopoly you've probably talked about Monopoly and Monopoly is where there is one seller of a good monopsony is when there is one buyer so we could have an input Market where there's only one buyer of inputs so let's suppose that maybe we have a small town with one big business in town maybe it's one big Factory that employs a whole bunch of people in that town and so maybe that factory is really the only real employer in that town well that means that they're the one buyer of Labor and being the one buyer is going to give that monopsony market power in the same way that being the only seller being a monopoly gives the monopolist market power and it allows them to drive the price up of what they're selling well for a monopsony that allows the monopsonist to drive wages down they're able to use 
their market power to pay the inputs that they're purchasing labor less than if this was a competitive market and if you want to see examples of monopsony a good place to look is sports economics if you're let's say a baseball player with skills to play at the major league level that's a monopsony there's one employer of your labor and that is major league baseball and consequently the owners of the teams are able to use their market power to drive wages down let's think about other inputs so we've talked about labor we've spent all of our time here talking about labor so far let's talk about land and capital so we're typically talking about rental prices so we could think about the rental price of land usually we think about households as owning the land and renting that land to the businesses so the price at the margin that a business pays for land is going to be the rental rate we can think about that as working the same way for capital the owners of capital we would think about as being the households and then the households rent that capital to the businesses now a more realistic view of the world would be that a lot of capital is actually owned by the businesses they're not renting it from the households but that's fine it doesn't really complicate things too much if we just think about the households as renting it to the businesses so the rental price for land and the rental price for capital is determined exactly the same way as the wage in the labor market if we were to think about the land market let's draw a small picture here for land and a small picture here for capital let's start with capital so here's the capital rental market so I'm going to put rental price so we've got demand and we've got supply over here and the intersection of the demand and supply determines the rate at which capital is rented and it works the same if we're talking about land
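The equilibrium logic used throughout this section (for wages here, and for rental prices of land and capital) can be sketched numerically; the linear demand and supply coefficients below are invented for illustration, not taken from the lecture:

```python
# Linear demand Ld = a - b*w and supply Ls = c + d*w for an input market;
# equilibrium is where Ld = Ls, so solving a - b*w = c + d*w gives
# w = (a - c) / (b + d). A leftward supply shift (lower c) raises the price.
def equilibrium(a, b, c, d):
    w = (a - c) / (b + d)     # equilibrium wage (or rental price)
    return w, a - b * w       # (price, quantity of the input traded)

w1, l1 = equilibrium(a=100, b=2, c=10, d=1)   # initial market
w2, l2 = equilibrium(a=100, b=2, c=-5, d=1)   # supply decreases
print(w1, l1)  # 30.0 40.0
print(w2, l2)  # 35.0 30.0 -- price up, quantity down, as in the wage example
```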
the only difference here is we've got some demand for land I'm going to draw my supply of land more inelastic I'm going to draw that supply curve for land as being not completely vertical because there's some land that's not being used for say production purposes that's very uh almost unusable but if the rental price was high enough people would clear that land and make it available so the the supply of land is more inelastic than the supply of capital but it's still just the intersection there that determines the rental price of land and I didn't intend to draw those so that the rental price of land and capital is equal that's just a coincidence that they're pretty close to each other in my pictures so land labor and capital all three of those inputs are are each going to earn the value of its marginal contribution to the production process so what determines the amount that an input is paid is the value of its contribution to production it's the value of what it can produce at the margin um if we're thinking about the rental price we could think about how the rental price here compares to say the purchase price and for an asset that's used in say production the value of any asset is going to be equal to the present value of the stream of Revenue that it's going to generate in the future so the purchase price would be equal to the the present value the discounted value of all of the future streams of Revenue we don't need to worry about that that's that doesn't change anything that we've done right here that just helps you understand what we're talking about in terms of rental price versus just the purchase price if we think about um markets one of the things that we know is that markets are linked a lot of times in principles classes we talk about a single Market at a time um and really there's a term for that you you probably do not need to remember this term but that term is called partial equilibrium analysis we're looking at one market at a time but out there 
in the real world what happens in one market causes changes in other markets that will ripple through the economy so if the price of gas goes up let's suppose spring rolls around and demand for gas increases well that's going to drive the price of gas up but then gas is used as an input into the production of lots of other stuff in the form of transportation costs and so when input prices go up the supply curves for lots of goods shift and that's going to drive the prices of goods and services typically up so markets are linked when we start thinking about those linkages then what we're thinking about is more of what we call a general equilibrium analysis so partial equilibrium is one market at a time general equilibrium would be thinking about that linkage and thinking about how what happens in one market will affect another market which may affect other markets which may affect other markets okay so if we think about an example of this we could think about maybe a storm and that storm destroys some trucks let's say so now the supply of trucks goes down trucks are used to transport goods let's say so the supply of trucks goes down that drives up the price of trucks an increase in the price of trucks the fact that there's fewer trucks will also affect the productivity of truck drivers right a truck driver without a truck is not very productive so this is going to affect productivity and the fact that the productivity of truck drivers changes is going to end up affecting the wage that they earn because productivity of truck drivers has gone down demand for truck drivers goes down so we're going to say a decrease in demand for truck drivers and consequently we know that that will drive down the wage of truck drivers so this storm can have impacts that ripple through various markets and this is only one little chunk of all of the possible things that can happen so now let's get back to our original question and that is
well why is it that medical doctors make more than High School teachers and and the answer now that we can see is that it boils down to demand and supply what medical doctors produce is worth more in the market than what high school teachers produce it also depends a lot on how many people are in each market how many medical doctors are there compared to the number of of high school teachers and and the fact of the matter is that um the training that you need to be a medical doctor is much much more rigorous than the training you need to be a high school teacher so consequently medical doctors are going to earn more in the market it's not it's not a reflection of somehow our priorities being out of place that that's not what we're seeing here what we're seeing is that it depends on demand and supply and the things that change demand and the things that change Supply and that demand for labor is a function of the the value of the good in the market and the productivity of the people that are that are producing it so hopefully that gives you an idea of how labor markets work um you can go into a lot more depth in terms of Labor markets and what happens when you've got Market power in those labor markets but this is a good place to start so I'll see in another video
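One quantitative claim from earlier in the video, that an asset's purchase price equals the present value of the stream of revenue it will generate, can be sketched as follows (the $100 rents and 5 percent discount rate are invented for illustration, not from the lecture):

```python
# Present value of a stream of future rents: PV = sum over t of R_t / (1+r)^t,
# where R_t is the rent received in year t and r is the discount rate.
def present_value(rents, r):
    return sum(rent / (1 + r) ** t for t, rent in enumerate(rents, start=1))

# An asset expected to earn $100 of rent in each of the next three years,
# discounted at 5 percent per year:
pv = present_value([100, 100, 100], r=0.05)
print(round(pv, 2))  # 272.32
```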
Principles_of_Microeconomics
Chapter_2_Thinking_Like_an_Economist.txt
in this video we're going to talk about thinking like an economist so we're going to think about kind of how economists approach problems how we we think about understanding human behavior the most basic thing that we're going to do is use the scientific method so we're going to be thinking about something that we want to explain we're going to be thinking about some some potential explanations for behavior that we observe we're going to think about testing whether or not our hypothesis explains the behavior or whether it doesn't and if it doesn't explain the behavior then we're going to think about changing our hypothesis so we're going to use that scientific method very similar to what you would doing in a hard science one of the things that we need to talk about is that we're going to be making assumptions so in terms of how we approach understanding human behavior the way that we need to start is we need to make assumptions to simplify the problem so economists a lot of times are criticized for making assumptions but assumptions are made in all different disciplines so if let's suppose that we were talking about a physics problem and we were trying to calculate the amount of time that it would take for this pin if I drop it to hit the floor then what we would do if we were going to calculate that using physics is we would make the simplifying assumption that it's dropped in a vacuum if we make that assumption then it becomes a much more easy problem to figure out if we don't make that assumption then we have to figure out the effect that wind resistance is going to have on this pin and factor that in and that is a very complicated process it depends on the shape of the pin it depends on things like the air temperature it depends on the humidity it depends on the exact way that I drop this pin and so if we assume that it's in a vacuum we can ignore a whole bunch of that stuff and it becomes simpler so that's what we're going to do is we're going to think about 
making simplifying assumptions so let's talk about the use of economic models so you have used models before in your early education and you'll continue to use them throughout your education an example of a model that you've used before might be let's suppose that at some point probably in grade school somebody has shown you a picture of the sun shining down on grass and then a cow eating the grass and that's designed to get you to understand how energy gets from the sun into that cow and it's a simplification of reality right it's something that we can show to a small kid and they're able to kind of grasp what's going on at least in a general sense but that simple model ignores a very complex set of processes that are going on there if we think about the sun shining down on the grass and the grass growing then we're ignoring photosynthesis and we're ignoring a whole bunch of things about the sun and how the sun's energy gets through the atmosphere and then the cow eats the grass and we're ignoring digestion and all of the complex processes that are going on in the cow to convert that food into energy but it's a good place to start so in terms of what we're going to do we're going to be thinking about lots of different models we're going to start with very simple models and it's at first going to feel to you like this is too simple I will have students in a face-to-face class raise their hand and say well but wait things aren't that simple things are more complex and what I always tell students is to enjoy the simplicity things will get more complex as we move through this but at the beginning let it be simple enjoy the fact that it's simple even though your brain is going to want to say but wait it's not that simple it'll get a little bit more complex okay so let's start by thinking about a very simple model that we use in economics and that's going to be what we call the circular flow diagram so when we look at the
circular flow diagram we're going to represent households over here with a little box and then over here we're going to represent businesses or what we're going to call firms and don't infer anything about the fact that I drew that box a little bit bigger that's an accident so we've got all the households represented over here we've got all the firms represented over here and we're going to have a circle up here that we are going to use to represent the market for goods and services so this is where you go if you want to buy a pizza or you want to buy a new pair of shoes or you want to buy a car okay and the firms are the ones that are making all of those goods and services the households are buying them down here let's put the market for inputs some textbooks call this the market for the factors of production the inputs are just the things like land labor capital all of that stuff that the firms use to produce the goods and services so the households let's just think about this as labor that's the easiest one to think about it's the households that own the labor and sell it to the firms so let's start with our market for goods and services so the goods and services go from the firms to this market so these are the goods and services and they go to the households through that market so again these are the goods and services the household pays for those goods and services with dollars so here are our dollars when households are buying goods and services we call that expenditure the number of dollars they spend we call expenditure those dollars go into the firms so when the dollars are going into the firms we call that revenue so households buy goods and services from the firms so the goods and services are going this direction and the dollars are going this direction if we think about this market for inputs then the inputs come from the households into the firms so these are things like labor
and land and capital those things are owned by the households and they go into the firms these are the inputs these are the factors of production that's land labor capital so I'm going to say labor etc and the firms buy those so the firms are buying the inputs and they're paying wages rent if they're renting land profits go to the households and so the dollars are coming these are dollars right here the dollars are coming into the households that's income so in this market for inputs the inputs are coming from the households into the firms and the firms are paying for them with dollars and those dollars are the income of the households so this circular flow diagram is a good way of getting kind of a grip on how a basic economy works it helps us understand that dollars are flowing circularly the households take their income and spend it on goods and services which becomes revenue for the firms and the firms are using that revenue to pay wages and pay rent and pay profits for the inputs and that goes into the households our goods and services are going in that direction our inputs are going in this direction so it's a nice simple way to start understanding how an economy works it ignores several things so we don't have in here any government so the number of dollars that households spend on goods and services not all of those dollars go as revenue into the firms some of them go as taxes into the government so if we wanted to make this more realistic we could include a government we don't have any other countries so if you're a household you don't have to just buy from firms in your country you can buy from firms in other countries and we don't have that in here so we can make it more complex but we don't need to this is a good place to start to understand how dollars flow and how goods and services and inputs flow in an economy so that's an example of a simple model we can also talk about another model that we're going to be thinking about and that's
going to be what we call a production possibilities frontier so a production possibilities frontier I'm going to abbreviate PPF production possibilities frontier what a production possibilities frontier shows us is all the possible combinations of output that an economy can produce given its inputs and given its technology so given what the country has to work with inputs and technology the PPF shows what it's capable of producing so if we were to draw a picture of a PPF let's make life very simple and let's pretend like there are two goods that are going to be produced let's suppose we've got an economy in which there are only cars and computers being produced those are the only two goods so we need to decide how many cars to produce and how many computers to produce don't think about what's being used to produce computers or cars don't think to yourself well but it can't be that simple because if you've got workers then they have to eat and so we would have to have food being produced don't make it complex enjoy the simplicity pretend as if we don't have to worry about tools or food or anything like that let's pretend that if we use all of the resources in this economy to produce computers let's pretend that we can produce 3000 and let's suppose that if we produce only cars we can produce 1000 don't worry about the fact that my scale here is not to scale on the horizontal axis we're not worried about that let's draw a production possibilities frontier typically our production possibilities frontier is going to look like this it's going to be bowed out it's going to be curved like that and this thing is what we would call a PPF this production possibilities frontier shows us all of the possible combinations of computers and cars that we can produce given the inputs that we've got in this economy and given the technology
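As a rough sketch of the idea so far: the lecture only fixes the two intercepts (3000 computers if we make no cars, 1000 cars if we make no computers) and says the frontier is bowed out, so the quarter-ellipse shape below is purely a hypothetical choice of curve that has those properties, not anything the lecture specifies.

```python
import math

# A hypothetical bowed-out PPF with the intercepts from the example:
# 3000 computers when zero cars are produced, 1000 cars when zero
# computers are produced. The quarter-ellipse shape is an assumption
# made only to get a curve that is bowed out like the one drawn.

def ppf_computers(cars):
    """Maximum number of computers producible when `cars` cars are made."""
    if not 0 <= cars <= 1000:
        raise ValueError("at most 1000 cars can be produced")
    return 3000 * math.sqrt(1 - (cars / 1000) ** 2)

all_computers = ppf_computers(0)    # intercept: 3000.0 computers
all_cars_left = ppf_computers(1000) # intercept: 0.0 computers

# The frontier slopes downward: producing more cars always means
# fewer computers can be produced.
assert ppf_computers(300) > ppf_computers(600) > ppf_computers(900)
```

Any combination on or below this curve is a possible production point given the economy's inputs and technology; combinations above it are beyond what the economy can produce.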
so if we look at a point like point a point a would be a possible production point let's suppose the point a would correspond to maybe I don't know 2200 computers and let's say 600 cars and so that's a possible production point we would describe point a as being an efficient point if we were to think about a point down in here like point B point B would be what we would call an inefficient point it's a possible production point we've got enough inputs and resources to produce point B but we could produce more cars and computers than just point B so we don't want to end up at a point inside the PPF we could think about a point like Point C Point C would be what we would call an unattainable point it's outside our production possibilities frontier it's a point we don't have enough inputs or enough technology to actually produce so let's think about starting at Point a if let's suppose that we want to produce a hundred more cars than we're currently producing at Point a that means that we're going to move in this direction a hundred cars but notice that that would put us at a point outside of the production possibilities frontier so if we're going to have a hundred more cars that means we have to have fewer computers and so if we think about another point let's call that point D that's point D where we've got now a hundred more cars let's suppose that's down here at seven hundred but we've got fewer computers let's suppose that that means we can only produce two thousand computers so let's think about first the fact that the downward slope of this production possibilities frontier illustrates the fact that there's a trade off society here faces a trade off if we want to produce more cars that leaves us with fewer resources that we can use to produce computers so the downward slope shows us that we've got a trade off we can also do more than that we can actually see what the opportunity cost of that one hundred cars is so what I want to do is I'm gonna clear off that 
side of the board so that I can kind of do some work over there and then we'll kind of think about what the opportunity cost looks like on this picture so what we've got here is a movement from point A to point D and we're getting a hundred more cars let's extend that down so the cost of 100 cars what we're giving up to get that 100 cars the cost is 200 computers we can change that to see how much each car cost us so if we said the cost of one car if we want to turn that 100 into a 1 we've got to divide it by a hundred but that means we also have to divide this by a hundred so that says that the cost of each car is two computers now let's go back here and look at what's going on in this picture if we were to draw a little line between point a and point D and we were to calculate the slope of that line remember the slope is the rise over the run well the rise if we go from here down to there that's 200 computers and if we go over horizontally that's 100 cars so the slope between a and D is 200 the rise over 100 which is equal to 2 notice that the slope of the PPF right there is exactly equal to the opportunity cost of a car so we get an important result here and that is that the opportunity cost of the good on the horizontal axis is given by the slope of the production possibilities frontier okay so the fact that the opportunity cost and the slope are the same is not a coincidence the downward slope of this production possibilities frontier is showing us the opportunity cost of producing this good on the horizontal axis let's talk about the fact that the opportunity cost is not constant so clearly we've got a production possibilities frontier that looks like this notice that the slope of the PPF if I look at the slope up here versus the slope down here versus the slope down here it's not constant what we see is that as we move in this direction the slope is getting steeper and steeper and steeper in absolute value the slope is getting bigger I know the
slope is negative but in absolute value it's getting bigger and bigger and bigger and what that means is the opportunity cost of a car is not constant it actually increases as we produce more and more cars so one important thing that we'll be thinking about especially in the next chapter is what can cause the opportunity cost to increase this is what we call an increasing cost production possibilities frontier we could have what we call a constant cost production possibilities frontier that's a linear production possibilities frontier it turns out that what causes a PPF to be curved versus being linear is that the inputs are specialized notice we've got cars and computers the inputs that would be used to produce cars are going to be different than the inputs that are used to produce computers and the result of that is that the production possibilities frontier is bowed out or curved like that if the inputs that are used to produce this good and the inputs used to produce that good are the same if they are not specialized then our production possibilities frontier is going to be linear you can see that the slope is constant so the opportunity cost of the good on the horizontal axis does not change so if we had something like tables and chairs that would be an example of two goods that use the same inputs and the same technology and so in that case the inputs are not specialized and the production possibilities frontier is linear let's finish up this section by noticing that the production possibilities frontier can shift if we had a change in technology then our PPF could shift out say in that direction or let's suppose our technology changed so that we could produce more computers than before it could shift out in that direction or it could shift out in both directions so the production possibilities frontier can change depending on if our inputs change or if our technology changes if you look at the rest of the chapter what
you'll see is that there's a discussion of the difference between microeconomics and macroeconomics I talked about that in the first video there's a little bit of a discussion about what economists do lots of economists work in Washington and the book kind of describes some of the things that that they work on the book also talks a little bit about the difference between a positive statement a positive economic statement and a normative statement I think that's something that's really easy to understand if you're looking at the book let's talk just briefly about it a positive economic statement is a statement about how the world is a normative economic statement is a statement about how the world should be so if I were to say this statement the minimum wage creates unemployment that is a positive statement it's a statement about how the world works doesn't have to be true turns out that is a true statement but it's testable that's the way you judge whether or not a statement is a positive statement or a normative statement if it's testable it's a positive statement we can test whether or not raising the minimum wage or or adding a minimum wage where one didn't exist before we can test to see whether or not that creates unemployment if I were to say we should have a minimum wage well that's a normative statement that's a statement about how I believe the world should be okay so your book talks about that typically normative statements have to do with your opinions about what you think should happen and positive statements are about how things actually work and again the positive statement could be wrong it doesn't have to be a true statement it just has to be testable so I think that's something that you can look at in the book so what we're going to do in our next video is we're going to talk about more about production possibilities frontier and we're going to talk about that principle that that trade can lead to gains for people that trade is not a zero-sum game 
so I will see you in the next video
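The opportunity-cost arithmetic from the A-to-D example in this video can be reproduced directly. The helper function below is hypothetical, but the first pair of points uses the lecture's numbers (giving up 200 computers to gain 100 cars); the second pair is invented in the same spirit to illustrate the increasing-cost idea.

```python
# Opportunity cost as the slope of the PPF, using the lecture's numbers:
# moving from point A (600 cars, 2200 computers) to point D (700 cars,
# 2000 computers).

def opportunity_cost_per_car(point_a, point_d):
    """Computers given up per extra car between two points on a PPF."""
    cars_a, computers_a = point_a
    cars_d, computers_d = point_d
    rise = computers_a - computers_d   # computers given up (200)
    run = cars_d - cars_a              # cars gained (100)
    return rise / run

cost_near_a = opportunity_cost_per_car((600, 2200), (700, 2000))  # 2.0

# On an increasing-cost (bowed-out) PPF this number grows as more cars
# are produced, e.g. a later move might trade 400 computers for 100
# cars (hypothetical numbers):
cost_later = opportunity_cost_per_car((800, 1200), (900, 800))    # 4.0
assert cost_later > cost_near_a
```

The first result matches the rise-over-run calculation on the board: 200 computers over 100 cars, so each extra car costs 2 computers near point A.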
Principles of Microeconomics, Chapter 16: Monopolistic Competition
market that falls in between perfect competition and monopoly we're going to think about monopolistic competition so if you look at the name here monopolistic competition you'll recognize that it is kind of a combination of monopoly and perfect competition and so what we're going to see is that it is indeed kind of a little bit monopoly and a little bit perfect competition and so we already kind of know a little bit about what this is going to look like let's think about where this fits in we've got perfect competition and we've got monopolies that we've talked about now we're thinking about a type of market monopolistic competition that falls in down here monopolistic competition now I'm not going to put like a little line there because this isn't a spectrum where you hit some point and all of a sudden it becomes monopolistic competition this is kind of a continuum we can easily identify the end points but for the things that fall in between we just know that monopolistic competition is not perfect competition okay actually what we're going to see is that monopolistic competition is very similar it's basically perfect competition but the firms are going to sell differentiated goods the goods are not going to be identical so let's write out here the characteristics so here are the characteristics of a monopolistically competitive market the first one is there are lots and lots of buyers and sellers lots of buyers and sellers now that's the same as the first characteristic of perfect competition lots and lots of buyers and sellers the second characteristic of monopolistic competition is that the goods are not identical we're going to say that there are differentiated goods differentiated goods so each firm is selling a good for which there are not perfect substitutes there may be substitutes for the good but they're not perfect substitutes okay and then the last characteristic is that there's free entry no barriers
to entry so you can see that this is a lot like perfect competition these two characteristics are the same but the firm's gonna have some market power because it sells a somewhat unique good it sells differentiated goods so there is not a perfect substitute for what the firm is selling let's talk about some examples of this so if we think about examples of monopolistically competitive markets so if I were in a face-to-face class and I were to ask my students give me examples of things that you buy most of the examples that I would get in that class would be examples of goods that are bought and sold in monopolistically competitive markets it is by far the most common type of market so if we think about examples of this examples are things like restaurants so if you think about say a pizza there are lots of different places you can go buy a pizza and basically a pizza is usually round dough with usually tomato sauce and maybe some meats and vegetables some cheese on it so that's basically what a pizza is but if you think about it different pizza restaurants each have a somewhat unique pizza so a Domino's pizza is not the same as a Papa John's pizza it's not the same as a Pizza Hut pizza or a Little Caesars or if you think about any of the kind of smaller non-chain pizza restaurants they're also selling a somewhat unique good now it's not a lot different from what the other firms are selling but it is different enough that each firm is going to be able to raise its price a little bit and not lose all of its customers so if you think about all the different pizza options that are out there if you like pizza then you probably have a favorite well what that means is you might be willing to pay a little bit more for your favorite than you would be for other pizzas okay and so what we'll see is that each firm is going to face a downward sloping demand curve they're able to raise their price a little bit and not lose all of their customers with a perfectly
competitive firm because the goods were identical if a perfectly competitive firm raised its price any it lost its customers because there were perfect substitutes for what it was selling so our examples restaurants consumer electronics there are lots of different options for you to buy in terms of televisions if you wanted a television there are tons of different brands of television but they're all basically going to do the same thing for you but they're all somewhat unique there's movies clothing let's say video games books there's all kinds of them furniture and in all of these cases there are lots of buyers and sellers there's free entry if you want to start a restaurant you can there's nothing that prevents you from doing that if you wanted to make a video game you can I mean if you want to start a restaurant you've got to get a business license you've got to abide by the laws and pay your taxes and all that stuff but there's nothing that prohibits you from doing it so these are examples of monopolistically competitive firms now let's go back to this idea that the differentiated goods give each firm a little bit of market power so each firm let's say it this way each firm has a small amount of market power what that means is each firm faces a downward sloping demand curve for its product let's think about what that means remember that the elasticity of demand depends on the availability of close substitutes and the more substitutes that are available the more elastic demand is so if we were to think about the demand curve that any particular firm faces remember there are lots and lots of buyers and sellers so if we were to think about the demand curve that a particular monopolistically competitive firm faces because there are substitutes for its product remember it doesn't face the market demand curve because it doesn't sell to all of the buyers in the market it's going to sell to just a small sliver
of the buyers and because it has substitutes for its good it's going to face a very elastic demand curve but the demand curve that it faces is going to be downward sloping and that means it can raise its price a little bit and not lose all of its customers if it raises its price very much it could lose all of its customers but it faces a downward sloping demand curve and we've got a name for the demand curve that it faces we call it a residual demand curve each firm faces what we call a residual demand curve let's think about what that term means a residual demand curve differentiates it from the market demand curve residual here we're using the word residual to mean kind of leftover so if you're one firm in the market then you've got some customers that buy from you but the customers that buy from you are the customers that didn't buy from somebody else so you get kind of the leftover demand that's not satisfied by the other sellers in the market so each firm faces a residual demand curve and it's downward sloping it is not the market demand curve not the market demand curve that's important each firm faces a residual demand curve not the market demand curve here's the nice thing because each firm faces a residual demand curve and the demand curve that it faces is linear we can use that rule that we know about drawing the marginal revenue curve so we know that all firms maximize profit by producing the quantity where marginal revenue equals marginal cost because the monopolistically competitive firm and let's label this monopolistically competitive firm this is just one of the firms in the market the demand curve that they face is downward sloping our marginal revenue curve lies below it and has twice the slope we know how to draw the marginal revenue curve already because of what we learned in our monopoly chapter so each of these firms has a very small amount of market power it depends on how differentiated its good is
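The twice-the-slope rule and the marginal-revenue-equals-marginal-cost condition can be sketched with a hypothetical linear residual demand curve P = a - bQ (all the numbers below are invented for illustration, and constant marginal cost is assumed to keep the algebra short): total revenue is aQ - bQ^2, so MR = a - 2bQ has the same intercept a and twice the slope, and setting MR equal to a constant marginal cost c gives the profit-maximizing quantity.

```python
# Hypothetical linear residual demand P = a - b*Q with a constant
# marginal cost c. MR keeps the intercept a and doubles the slope to
# 2b, so MR = MC gives Q* = (a - c) / (2*b); the price then comes off
# the demand curve, exactly as in the monopoly picture.

def residual_demand_price(a, b, q):
    return a - b * q

def marginal_revenue(a, b, q):
    return a - 2 * b * q   # same intercept, twice the slope

def profit_max(a, b, c):
    q = (a - c) / (2 * b)               # solve MR = MC for quantity
    p = residual_demand_price(a, b, q)  # highest price buyers will pay
    return q, p

a, b, c = 20, 0.5, 4
q_star, p_star = profit_max(a, b, c)

# Same vertical intercept for demand and MR, and MR = MC at Q*:
assert marginal_revenue(a, b, 0) == residual_demand_price(a, b, 0)
assert marginal_revenue(a, b, q_star) == c
```

With these invented numbers the firm produces Q* = 16 and charges P* = 12; profit is then the (P* minus average total cost) times Q* rectangle described later in the lecture.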
the more differentiated its good the more market power it's going to have okay none of these firms are going to have near as much market power as what the monopolies had because the monopoly faces the market demand curve there are no substitutes for its good now the firm still as I just said maximizes profit by producing the quantity where marginal revenue equals marginal cost I said it in the last video and I said it probably in the video before that and I'll say it again here this is important if you're not sure how to solve a problem and it's a firm problem and you're trying to figure out what quantity this firm is going to produce you need to be looking for where marginal revenue equals marginal cost so we know exactly how to solve this problem if we were to draw the cost curves for this monopolistically competitive firm so again let's label this monopolistically competitive firm let's put our marginal cost curve up here let's go ahead and put the average total cost curve in here now we've got our cost information let's add our revenue information what we're really interested in is the marginal revenue curve but let's put in the demand curve that this firm faces now remember it's pretty elastic because there are substitutes for this firm's good but because it's downward sloping we know that the marginal revenue curve has the same intercept and twice the slope so I'm going to draw it there's my marginal revenue curve the firm is going to look where marginal revenue equals marginal cost that happens right there so this is the quantity that the monopolistically competitive firm is going to produce I'll call it Q MC for monopolistic competition once the firm knows what quantity it's going to produce it's going to use the demand curve to figure out what price it's going to charge it wants to charge the highest price that consumers are willing to pay for that quantity so we go up to the demand curve we get the price there's the price the monopolistically competitive firm is
going to charge and hopefully you see that this is the exact same picture that we talked about with monopoly everything is the same the only difference is that because this is monopolistic competition we know that this is not the market demand curve that's a residual demand curve okay if this was a monopoly picture that would be the market demand curve so this is just one of several firms that are in this market if this was a monopoly we would have a picture of the whole market because there's only one firm because this is a monopolistically competitive market we've got a picture of one of the firms but there are other firms but the picture itself and how you find the right quantity and the right price is the exact same process as what we did with monopoly you look where marginal revenue equals marginal cost that gives you the quantity then you go up to the demand curve that gives you the price so at least how you solve the problem is exactly what we did with monopoly how you figure out profit is exactly the same as what we did with monopoly so we've got the quantity we've got the price I'm not going to draw it in here you can go back to that previous video if you want to see how to figure out profit but all you need is the average total cost right there would be the average total cost of that quantity we draw a little dashed line over here and that area would represent the profit that this monopolistically competitive firm earns before we think about the difference between the short run and the long run let's also talk about the fact that the monopolistically competitive firm has no supply curve for the same reason that the monopoly has no supply curve the firm still makes a supply decision but we can't label any of these curves S and call it the supply curve because we need the marginal revenue curve we need the marginal cost curve and we need the demand curve to figure
out quantity and price so the monopolistically competitive firm has no supply curve for the same reason that the monopoly has no supply curve what we need to do now is think about the difference between the short run and the long run so let's go back to monopoly for a second with monopoly we didn't worry about the short run versus the long run the reason is that there's no entry in the long run with monopoly there are barriers to entry with this type of market there is free entry there are no barriers to entry so what's going to happen is if we've got a monopolistically competitive firm that's earning a positive profit then in the long run other firms are going to enter that market and what we'll see is that that profit gets driven to zero so what we need to do is clear this off and then we'll take a look at what that looks like all right let's think about what's going to happen in the long run if the firms in the market are making a positive economic profit the way we want to do this is we want to think about what would happen to the demand curve that we face let's suppose we owned a firm let's suppose we owned a pizza restaurant let's think about what impact we are going to experience if another firm enters the market so if we think about what's going to happen to demand for our product if another firm enters the market if you think about it long enough you'll probably realize that demand for our product is probably going to fall and it might be easier to think about it the other way around let's suppose that we are one of several pizza restaurants in town and one of the other pizza restaurants leaves the market then that's probably going to mean more customers for us right so if another firm leaves the market our residual demand curve is going to shift to the right so if we draw a picture of the residual demand curve that we face as a firm I'm gonna draw it fairly elastic since we're talking about monopolistic
competition I'm going to draw it a little bit flatter than I would if we were talking about monopoly so if this is the residual demand curve that we face then if another firm enters the market the residual demand curve is going to shift to the left from say d1 to d2 this would be the effect of entry into the market or if another firm left the market then that is going to increase the residual demand curve that we face so exit will cause our residual demand curve that we face to move to the right to say d3 now let's stop and think about that for just a little bit because if I didn't talk about this everybody would probably say okay yeah that makes perfect sense and nobody would stop to think about the possibility that that shouldn't make sense to you because if you go back and you think about probably the very beginning of your very first introduction to the demand and supply model we would have talked about the determinants of demand the things that shift demand curves hopefully you remember them realistically you might not so let's review what the determinants of demand are so if we think about the determinants of demand these are the things that shift demand curves determinants of demand these are your income when your income changes your demand for goods and services will change and it depends upon whether or not for you the good is a normal good or an inferior good so income a very important determinant of demand prices of related goods that's another very important determinant of demand prices of related goods another one is your preferences another one's your expectations about the future buyers expectations and then there's a fifth one and the fifth one is the number of buyers in the market number of buyers now let's think about this when I taught you the determinants of demand these were the five determinants that we talked about that is not consistent with what we're doing in this picture what we're changing here is the number of sellers in the market now 
the number of sellers is a determinant of something but it's a determinant of supply so if I didn't talk about this and you went home and you were studying this like I know all of you do every evening you go home and you get out your economics textbook and you read through it and you review the stuff that we've already done and if I hadn't explained this when you're reviewing tonight and you go back and you review the determinants of demand you'd say hmm wait a second that's the number of buyers not the number of sellers why is it that today when we were shifting a demand curve we had the number of sellers changing a demand curve that's a determinant of supply well here's what it boils down to this is a residual demand curve this is not a market demand curve these are the determinants of demand but there what we're thinking about are the things that shift a market demand curve the things that shift a residual demand curve include all of these things but the number of sellers in the market also shifts the residual demand curve so if we think about our residual demand curve for pizza the demand that we face from our consumers then if consumer incomes go up and pizza is a normal good that will shift the residual demand curve to the right so this determinant of demand also shifts a residual demand curve as a matter of fact all of these do if consumers like pizza more than before for whatever reason their preferences for pizza change so that they want to buy more pizzas at every price than before then that's going to shift our residual demand curve to the right or if the number of buyers in the market changes if a lot more people move to town then that will shift our residual demand curve to the right so all of these things still shift residual demand curves in the expected way it's just that now we've got something new that shifts a residual demand curve and that's the number of other firms that are in the market okay so it's important not
to get confused here and not to think that all of a sudden the number of sellers is a determinant of demand it's not it never has been it never will be but it is a determinant of the residual demand okay so now let's think about what happens if the residual demand curve shifts let's suppose that we've got a decrease in residual demand let's draw the residual demand curve here we'll call it d1 price quantity and let's draw the marginal revenue curve that goes with that residual demand curve let's call it mr1 well if the residual demand curve shifts to the left because there's entry into the market so let's shift that residual demand curve to the left to d2 then what's going to happen is that our marginal revenue curve is also going to shift to the left because remember your marginal revenue curve always has the same intercept as the demand curve so that marginal revenue curve is going to shift too it's going to shift to mr2 so when our residual demand curve shifts the marginal revenue curve goes with it now once we understand this that entry will shift a residual demand curve to the left and exit will shift a residual demand curve to the right now we can see what's going to happen to a monopolistically competitive firm if it's earning a positive economic profit so let's draw a picture of what's going to happen here let's start with a picture of a monopolistically competitive firm earning a positive profit okay so I'm going to draw this picture relatively large because we're gonna be cramming a lot of stuff into one picture so when you're drawing your notes it would be useful to draw a big picture that way you don't get a small picture where you're trying to find intersections and tangencies and all of that let's go ahead and put our marginal cost curve up here let's put our average total cost curve and I don't want to put it too high I don't want to put it too low I'm gonna put it somewhere right kind of in the
middle like this there's my average total cost curve now I want this firm to be earning a positive profit this is a monopolistically competitive firm and I want their profit to be positive so I'm gonna put the residual demand curve up above their average total cost curve so I'm gonna put residual demand somewhere like that now remember it's it's fairly elastic so you don't want to make it flat you don't want to make it perfectly elastic but you don't want to give it a lot of slope either now our marginal revenue curve has the same intercept and twice the slope so I'm gonna make it come down like that I'm going to call it demand curve one marginal revenue curve one let's make sure we label our axes this is price this is quantity so we've got a monopolistically competitive firm let's identify their quantity first we're going to look for marginal revenue equals marginal cost that happens right here there's we're gonna call it q1 let's identify their initial price so we go up from that quantity to the demand curve we hit it right there so there's p1 let's go ahead and identify at least the rectangle that would give us profit so we've got remember profit is equal to price minus average total cost multiplied by Q we've got price and quantity we need the average total cost so we go up from that quantity to the average total cost curve we hit it right there there's average total cost 1 now profit is going to be the area of this rectangle I could shade it in but I'm not going to because we're going to to be doing some more stuff on this picture but the area of this rectangle would give us profit the vertical distance is price minus average total cost that term and the horizontal distance is quantity that term so if you take a vertical distance times a horizontal distance you're getting an area and that's the area of that rectangle right there this is a firm earning a positive profit I'm gonna label let's label our initial point right here point a right and let's put our 
story here we're gonna start at 1 or excuse me start at a price is equal to p1 quantity for the monopolistically competitive firm is equal to q1 I meant to write Q equals q1 but I've got q1 equals q1 profit is positive now if this was a monopoly that's the end of the story because there's no entry in the long run with the monopoly but it's not a monopoly this is a monopolistically competitive firm and there's free entry so in the long run because profit is positive other firms will enter in the long run because profit is positive other firms enter now as other firms enter this firms residual demand curve that it faces is going to start to shift to the left and as it's residual demand curve shifts to the left it's marginal revenue curve comes with it so let's draw with a different color a marginal excuse me a residual demand curve that has been shifted to the left now I'm not going to draw the end of the story here I'm going to draw you a residual demand curve that's been shifted to the left but the firm is still going to be making positive profit I'll show you the end of the story here in a second but let's suppose that exit or excuse me entry is taking place and so the residual demand curve has been shifting to the left and let's suppose it shifts this much so we get this leftward shift of the residual demand curve to d2 now as that residual demand curve shifts to the left the marginal revenue curve is coming with it in our marginal revenue curve shifting to the left is going to end up at marginal revenue curve - now let's think about what's happening here now our intersection between the marginal revenue and the marginal cost curves is right here so the firm as the residual demand curve is shifting to the left the firm is going to be reducing the quantity that it produces down here to q2 let's think about what's going to happen to price so demand for this firm's product is decreasing so we would expect them to be able to charge less than before which is exactly 
what happens the firm is going to charge the highest price for q2 that it can get so we go up to the new demand curve and that happens right here the price is going to fall to p2 now let's think about what happens to average total cost now remember this average total cost curve is u-shaped so as we move in this direction average total cost is rising now in my picture I didn't move very much so average total cost doesn't go up very much it's going to be right there which is a little bit higher than where it was I'm just going to extend it over here put a little line there and that's going to be average total cost too but now notice what's going on the area that would represent profit I'm going to shade it this time it's this area right in here that rectangle would represent the profit that the firm earns and you can see that the profit that they earned is smaller than what they earned before when we were at Point a we could call this now point B so let's add to our story here as other firms enter residual demand decreases I'm going to abbreviate that residual demand decreases quantity Falls to q2 price falls to p2 and profit Falls now profit is still positive price is still greater than average total cost so profit is still positive so the entry would continue and the residual demand curve is going to continue shifting to the left quantity will continue decreasing price will continue falling average total cost will continue rising and that will continue until price and average total cost come together at which point profit is equal to zero so this continues let's just say right here here's the last step this continues until profit is equal to zero now let's look at how that's going to happen and this is kind of challenging with this picture because it's already complicated but this residual demand curve is going to be shifting to the left and what's gonna happen is it's going to shift enough that it ends up being just tangent to this average total cost curve now 
that's hard to see in this picture because it's complicated so what we need to do is I'm going to clear this off and then we'll draw a picture of what the end point looks like and then it'll be easier to see how price and average total cost have come together let's draw another picture here and let's think about what the end of this story is going to look like so we've got our marginal cost I'm going to draw my average total cost curve and I'm going to give it I don't want it to be big and flat and sweeping like that I'm gonna give it a nice ball shape like this it's average total cost and I'm doing that just because it'll be easier to to make the picture look like it's supposed to look so in if our residual demand curve is up here then the firm's going to earn a positive profit and entry is going to be taking place and that residual demand curves going to be shifting to the left and it's going to shift to the left until it ends up being just tangent to that average total cost curve now let's finish I'm going to go ahead and extend this up to my vertical axis there now I need to draw my marginal revenue curve that goes with it but in order to make this picture look the way it needs to look we need to be careful about where I put my marginal revenue curves so this point is going to be where my price is this is going to be my quantity now for this to be the quantity my marginal revenue and my marginal cost curve this is the demand curve my marginal cost curve and my marginal revenue curve need to intersect right there so if I go from here down through that point and I make that my marginal revenue curve now I've got an accurate picture of what the end of this process is going to look like so the residual demand curve is going to shift to the left until it ends up being just tangent to that average total cost curve and at that point the price and the average total cost will have come together because notice that if I go up from this quantity right here to the average 
total cost curve I hit it right there average total cost is equal to price right there this is a situation where we've got a monopolistically competitive firm earning a profit equal to zero this is also what we would call long-run equilibrium in a monopolistically competitive market long-run equilibrium where our firms are earning zero profit when profit's equal to zero there's no incentive to enter the market there's no incentive to exit the market you would never exit the market unless profit became negative okay so this is kind of the end of that process where we had the complicated picture here and that residual demand curve was shifting to the left it's going to continue shifting and our price is going to continue falling and average total cost is going to continue rising until they come together and that happens when the residual demand curve has shifted enough that it's just tangent to that average total cost curve and at that point the firm is earning zero profit and there's no incentive for other firms to continue entering no incentive to exit so we've got a long-run equilibrium what we need to do is we need to ask this question is profit equal to zero inevitable in the long run think about what we've just learned so in the perfect competition chapter we talked about the fact that profit gets driven to zero in the long run we have to remember that that's economic profit so that means that everybody's next best employment alternative is covered okay same thing here this is economic profit so we're not saying that people can't put food on their tables we're saying that profit is driven to the point where revenue is just big enough to cover your costs no more no less and those costs include all of your implicit costs in addition to your explicit costs okay so let's put that in here is economic profit equal to zero inevitable in the long run and the answer is yes the key to earning a positive economic profit is that
your price needs to be bigger than average total cost and unfortunately for firms in this type of market there's free entry so if there's ever a difference between price and average total cost if price is ever higher then that's going to create an incentive for other firms to enter these firms can't stop those other firms from entering and the profit gets driven to zero so let's talk about two things the firm might be able to do two things the firm can do so let's suppose that your dream and I tell my face-to-face classes this all the time let's suppose your dream is to start a restaurant or to start a bar and here I come and I trample all over your dream because I've just told you that if you're gonna enter the restaurant business you need to be prepared to earn a profit equal to zero in the long run now that's economic profit there are way worse things than economic profit being equal to zero what I always tell my students is that I'm not trying to destroy anybody's dream I'm not trying to tell you that you shouldn't pursue something that you've always wanted to do but you need to be prepared for the fact that your profit will be driven to zero in the long run now there are two things that are at play here two things that you could do to try to change this one of them is that if you could get your residual demand curve the demand curve that you face to shift back to the right then once again your residual demand curve would be above the average total cost curve and your profit would be positive so the two things the firm can do one is to try to increase the demand curve they face let's say it this way try to increase the residual demand curve they face what that means is you could try to further differentiate your product that's really what we're talking about here further differentiate your product so that's the first thing you could do the second thing you could do is you could try to shift your cost curves down if you
could get these cost curves to shift down then it doesn't matter if the residual demand curve stays right where it is if you could shift the cost curves down then your price would once again be above average total cost and you would make a positive profit so the second thing you could do is decrease costs let's say decrease cost of production both of those things would result in a positive profit here's the problem even if you're capable of doing these two things if your profit becomes positive in the long run other firms will enter that market and your profit's gonna go to zero again so you would have to try to continually do these things let's start with this one decreasing your costs that's hard to do and there's a lower bound on it you can't decrease your cost to zero to run a business involves incurring some costs and so you can do it but other firms can also do that and so that's challenging at least you're eventually going to reach a point at which you're as efficient as you can get the other thing that you can do is to try to rebrand yourself as a matter of fact if you've ever seen the TV show Bar Rescue where Jon Taffer comes in and basically what he does is he visits bars that are in trouble bars where they're earning a negative economic profit they're losing money and if you think about what Jon Taffer does it's this right the first thing he does is he comes in and usually some people are gonna get yelled at and maybe somebody's going to get fired and once you get past that there's some crying and some people are mad there's almost always somebody that's way too drunk but basically once you get past that first day then they get down to business and here's what Jon does he starts looking at inefficiencies he looks at ways to reduce their cost and there's always ways that you can do that usually what's happened is somebody's hired their drunk uncle and their drunk uncle is just drinking up profits and
they don't have a POS system and so he installs that and he looks at ways to make the behind the bar area much more efficient maybe it involves having two or three stations where people can make drinks and essentially that allows them to be more efficient so he attacks this decreasing costs and then the other thing he does is he attacks this right here he rebrands the place you can't take a failing business and just try to keep doing the same stuff over and over you got to try to turn it into something new something that people are curious about something with a new idea and so he does exactly these two things and what ends up happening is at the end of each episode there's always some information there at the end about how much better they're doing and that's great because they've attacked the two things that you need to attack in those types of markets what I'd love to see though is what happens in three years what happens in four years because staying efficient is challenging you've got to have a really good manager or multiple good managers to stay efficient and I'm betting that most of these bars get lax and they start going back to their old ways and their cost of production starts to sneak back up and then you know you can come up with a great new idea but other people if you've come up with a good idea other people come up with their good ideas also so if you come up with a new drink or you come up with a new game or you come up with new decor well in two years everybody's used to the decor and other bars have come up with their own games and now your drink is not new anymore so I've heard Jon Taffer say that you probably need to rebrand every couple of years maybe every three years you need to redo this come up with something new it's right out of this playbook he's a genius because he knows exactly what needs to be attacked and if your plan is to run a business
these are the things you need to pay attention to right so I don't want to step on anybody's dream and say that man your profit's gonna be driven to zero because there's nothing you can do this is what it's going to look like well if you're good at this and you've probably been in places that have been very successful for long periods of time and sometimes it can be really hard to put your finger on exactly what it is that makes people like a place it's possible there are bars and restaurants and other types of monopolistically competitive firms that are really good at staying efficient and kind of coming up with new ideas or maybe they've hit on an idea that's really hard to duplicate in other places a lot of times it's the owner right it's the owner that's friendly and establishes a rapport with customers and you'll see that in a lot of those Bar Rescue episodes it's the owner that is the thing that's keeping them from being successful and turning that around you can do that it's easy to do that I'd say in a week but staying turned around that's a challenge and so if you want to see this in action watch a few episodes of Bar Rescue I would love to have Jon Taffer come be a guest lecturer in my classes because he can explain this stuff every bit as well as I can let's talk for a second about comparing monopolistic competition to perfect competition and we can do this with a picture that's very similar to what we did with monopoly let's draw a picture right here of a perfectly competitive firm and then right here let's draw a picture of a monopolistically competitive firm quantity price so what we've seen here is in both types of markets profit is driven to zero so let's draw a picture of long-run equilibrium in both of these markets now the cost curves are not different in both of these markets we've
got our marginal cost and average total cost and those don't depend on the type of market that the firm's in the difference lies in the marginal revenue curve that the firm faces and that comes from a difference in the residual demand curves that the firm faces both of these firms face a residual demand curve we didn't talk about that with a competitive firm but they do face a residual demand curve and it's perfectly elastic at the market price and what we know is that in the long run that market price is driven to this point and our marginal revenue curve ends up coming along here it's horizontal and we end up being right there the firm produces this quantity here's the market price and what we see is that a perfectly competitive firm ends up producing at the efficient scale now we see something different with a monopolistically competitive firm the residual demand curve that it faces is downward sloping and so we end up with a tangency right there between the residual demand curve and the average total cost curve and this ends up being our quantity this ends up being our price right here's where the marginal revenue curve comes down and intersects the marginal cost curve so we end up with that picture with a monopolistically competitive firm now we can compare both of these we can use this picture to see what the monopolistically competitive firm is going to produce compared to perfect competition and what price they're going to charge compared to perfect competition what we see is the monopolistically competitive firm uses what little market power it's got to restrict quantity and drive price up but it's not able to drive price up very much because it doesn't have very much market power and you can see that the more elastic this demand curve is the closer our tangency is going to get to that point and if it becomes perfectly elastic then we're right there okay so the flatter this demand curve
the less the firm is able to restrict quantity and drive price up now we've got a name for this the quantity that the monopolistically competitive firm produces is right there q MC this quantity I'm not going to label it but that would be the perfectly competitive quantity and this difference right here between the perfectly competitive quantity and the quantity the monopolistically competitive firm produces we call excess capacity we say the firm is producing at excess capacity what that means is they are not producing at the efficient scale they are producing a quantity that is smaller than the efficient scale in other words they're producing in a plant that's a little bit too big for them now they're not doing anything wrong they are still maximizing their profit but the way the incentives work it's encouraging them to produce a quantity that is a little bit too small for the plant size that they've got and so their average total cost is going to be a little bit higher than it could be that's going to create some deadweight loss we also see that the price that the firm charges a monopolistically competitive firm is a little bit higher than the price that would be charged by a competitive firm and that creates some inefficiency there we've got price greater than marginal cost we call that the markup over marginal cost we saw the same thing with monopoly markup over marginal cost so these monopolistically competitive firms are using their market power to restrict quantity and drive price up but they're not able to do it very much okay so in terms of production we've got some inefficiency the firms are producing at excess capacity that creates deadweight loss in terms of consumption price is driven up above marginal cost and the quantity is restricted below what would be produced in a perfectly competitive market and so we get some deadweight loss from that so just like we saw with a monopoly the presence of monopolistic
competition creates some deadweight loss but let's think about that it's not all bad though we've got some deadweight loss but think about what's going on here we're thinking about perfect competition in terms of deadweight loss being zero right deadweight loss is zero but notice that we probably wouldn't want to live in an economy where every market was perfectly competitive because all the goods are the same one of the nice things about monopolistic competition is not all pizzas are the same right you could go out in town and buy 20 different types of pizzas you could go to a bigger city like Kansas City and buy 200 different varieties of pizzas all somewhat different there's a pizza for everybody and that's a good thing product variety makes life interesting and so even though some deadweight loss is created in this type of market there's a lot of product variety and that's something that we get well-being from so it's not the type of thing that we look for government to step in and try to fix this deadweight loss let's also think about what's going on in this picture in this picture what we're seeing is the profit's equal to zero monopolistically competitive firms in the long run are going to earn a profit equal to zero if the government were to step in and force the monopolistically competitive firm to do something different to try to fix the deadweight loss then their profit will turn negative right if the monopolistically competitive firm given the freedom to choose its own quantity and its own price still earns zero profit then if the government forces them to choose any other quantity or any other price their profit has to be negative and they'll end up going out of the market in the long run exiting the market so even though these firms have a little bit of market power they are not able to use that market power to turn it into positive economic profit in the long run and the thing that works against them is that
free entry when there's free entry your profit's going to get driven to zero because if it's ever positive other firms are going to enter now in terms of trying to shift your residual demand curve to the right what we're seeing here is that in this type of market there's going to be a very strong incentive for firms to engage in marketing that's something we have not talked about in our previous videos so we talked about perfect competition we've talked about monopolies with perfect competition you don't see advertising you don't see an individual farmer running ads on TV to buy their corn the reason is they know that that would be ineffective consumers treat the good as if it's identical so any dollars they spend on an ad trying to convince you that farmer Bob's corn is better than everybody else's is wasted money in monopoly we don't see advertising or at least we don't see very much we do tend to see a little bit of advertising but it's kind of unusual advertising I don't think De Beers is running them anymore but De Beers used to run some commercials that said nothing more than a diamond is forever and every once in a while back when those commercials were running a student would ask me they would say well that just seems like a waste of money because they're not telling you who to buy your diamond from they're just saying diamonds are forever well they don't need to tell you who to buy your diamond from because if you buy a diamond you're buying a De Beers diamond so all they need to do is tell you to buy more diamonds if they can convince you to buy more diamonds because they've got a monopoly or a near monopoly on the market you're gonna be buying one of their diamonds so in monopoly you see a little bit of advertising but it's kind of unusual in this type of market restaurants movies books clothing electronics we see tons of advertising so the question is this all of that advertising that gets done is it
productive or is it not and that's a legitimate question to ask and there's not really a nice clean answer for it I would say most economists would probably say that it's not all or nothing there's some advertising that is beneficial and there's some that's not if we've got firms that are trying to tell us that their good is better than other firms' goods when in fact it's not I would argue that that's probably misplaced resources we could use society's resources better than that on the other hand if there's a better product out there if there's a better restaurant if there's a better television then I'd like to know about it because I like to use my money in the best way possible and so advertising tells us when there's a better product a safe bet is that if a firm's going to spend the money to advertise there's a decent chance that they've got a reason to do that if their product's better they're gonna want to spend the money to tell you that it's better that's not always true but sometimes it is so I would say that some advertising is good some marketing is good some is a waste of resources so like a lot of things it's not all or nothing there's also value in having a brand name firms have an incentive to protect their brand name and you protect your brand name by protecting the quality and the integrity of your product so there are some people that would argue that having brand names that the business can protect legally is wasteful and maybe some of it is but I would argue that if you're in an unknown town let's say you're traveling and you come into a town and you don't know what the local restaurants are like but you see a McDonald's well McDonald's may not be the greatest cuisine there is but you know what you're gonna get and so that has some value you may not choose it you may be thrilled by the idea of driving into a new town and trying a brand new restaurant and sometimes that may pay
off sometimes you're gonna get restaurants that are just terrible but if you want to avoid that and you want consistency and you want to know what you're gonna get for your money having that brand name has some value so it's similar to marketing some brand name protection maybe is wasteful in terms of society's resources but much of it is probably not so this gives you an idea hopefully of how the most common type of market structure works monopolistic competition firms have some market power a little bit but they're not able to turn it into a positive economic profit what we've got to do now is we've got to talk about the last type of market that we're going to think about and that's an oligopoly and an oligopoly is where there are just a few firms because there are just a few firms what we're going to see is the strategic interaction between those firms is really important so we need to talk about game theory in that discussion so I'll see you in that video
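The long-run story in this lecture (MR = MC picks the quantity, free entry shifts residual demand until it is tangent to ATC and economic profit is zero, leaving a markup over marginal cost and excess capacity) can be sketched numerically. All of the numbers and functional forms below are hypothetical, chosen so the tangency works out exactly; they are not from the lecture itself.

```python
import math

# Hypothetical residual demand P = a - b*Q and cost C(Q) = F + c*Q + d*Q^2
# (the quadratic term gives the U-shaped average total cost from the lecture).
a, b = 10.0, 1.0
F, c, d = 8.0, 2.0, 1.0

def price(q):  return a - b * q
def mr(q):     return a - 2 * b * q          # same intercept, twice the slope
def cost(q):   return F + c * q + d * q ** 2
def mc(q):     return c + 2 * d * q
def atc(q):    return cost(q) / q

# Profit maximization: set MR = MC  ->  a - 2bQ = c + 2dQ
q_star = (a - c) / (2 * b + 2 * d)           # 2.0
p_star = price(q_star)                       # 8.0

# With these numbers the demand curve is exactly tangent to ATC:
# P = ATC, so economic profit is zero (the long-run equilibrium).
profit = p_star * q_star - cost(q_star)      # 0.0

# Markup over marginal cost: the firm's little bit of market power.
markup = p_star - mc(q_star)                 # 2.0

# Efficient scale minimizes ATC; the shortfall is excess capacity.
q_eff = math.sqrt(F / d)                     # about 2.83
excess_capacity = q_eff - q_star             # about 0.83

print(q_star, p_star, profit, markup, excess_capacity)
```

Checking the tangency by hand: the demand slope is -b = -1, and ATC'(Q) = -F/Q² + d = -8/4 + 1 = -1 at Q = 2, so the two curves touch with equal slopes, exactly the zero-profit picture drawn in the lecture.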
Principles_of_Microeconomics
Chapter_3_The_Gains_From_Trade.txt
In this video we're going to talk about the gains from trade. One of the challenges in thinking about why people trade with each other is that it's very common to think of the world as a zero-sum game. A zero-sum game is a game where, if I'm made better off, you have to be made worse off. Poker is an example: if we both came into the room with, say, $100, and I walked out with $150, then you have to walk out with only $50. There's a zero sum in terms of the number of dollars in the room. It turns out a lot of things in the world work that way, but a lot of them don't, and trade is one of the things that doesn't. So in this video we want to think about why people trade with each other, or with businesses, or why countries trade with each other.

Let's start with a hypothetical scenario. Suppose you and I are roommates and we're both responsible for two tasks: mowing the grass and doing homework. Pretend for now that it doesn't matter whether I do your homework or you do mine; it's just busy work that has to get done. Suppose I'm really good at mowing the grass and you're really good at doing the homework: for whatever reason I can mow faster than you, and you're better at the homework because you go to class and I don't. For most people it seems reasonable that if I focus on mowing and you focus on the homework, both of us can be made better off; there are gains to be had from each of us specializing in what we're best at. Now change it a little: suppose you're better at doing the homework AND better at mowing the grass than I am. If I asked you whether there are gains to be had from you and me trading with each other, you'd probably say, well, it would benefit me if I could get you to take over some of the things I'm responsible for, since I'm the one who's not very good at either thing, but it probably won't benefit you. It turns out that it will. It's very easy to show that the trade will benefit me and that the trade will benefit you, but because people are so used to thinking of the world as a zero-sum game, that's counterintuitive, and a lot of people have a hard time wrapping their heads around it. So we want to talk about this basic principle of economics: that people can gain from trade. That's not to say that all trade produces gains; we'll talk about which trades are good for both people and which are not.

The way we'll approach this is to think about two people trading with each other. Suppose we have a farmer and a rancher, and they produce two goods: meat and potatoes. Let me give you some information about how much meat and how many potatoes they can produce. Earlier in this class we talked about the production possibilities frontier, and about when it is linear and when it is bowed out. A production possibilities frontier is bowed out if the inputs used to produce each good are specialized; we call that an increasing-cost PPF. If the inputs are not specialized, we have a constant-cost PPF, a linear production possibilities frontier. We'll assume here that there is only one input, time, and that it's not specialized, so our production possibilities frontiers will be linear, which makes life easy. Don't worry about how meat or potatoes actually get produced; think of it as if the farmer and the rancher simply spend time on production, and at the end of a certain amount of time there's some meat or some potatoes.

So let's think about the number of minutes needed to make one ounce, and put a table together:

  Minutes needed for one ounce:   meat | potatoes
  Farmer                            60 |   15
  Rancher                           20 |   10

Suppose it takes the farmer 60 minutes to produce an ounce of meat, as if the farmer thinks about production for an hour and at the end of that hour there's an ounce of meat, and 15 minutes to produce an ounce of potatoes. For the rancher, it takes 20 minutes to produce an ounce of meat and 10 minutes to produce an ounce of potatoes. So this is like the roommate example I set up: the rancher can produce both meat and potatoes faster than the farmer. What we want to think about is whether there are gains to be had from the farmer and the rancher trading. Again, most people would say, sure, the farmer, who is less productive, is made better off by trading with the rancher, but the question is whether the rancher can be made better off by trading with somebody who is less productive than them at everything. (We're pretending meat and potatoes are the only two goods in our world here.)

Now let's change the table a little. Suppose both the farmer and the rancher work for eight hours. The choice of eight hours is completely arbitrary; we could make it nine hours or 24 hours, but let's treat it as a typical workday, so the table becomes the amount produced in eight hours. If it takes the farmer an hour to produce an ounce of meat and they spend all eight hours producing meat, at the end of the day they have eight ounces of meat. It takes the rancher 20 minutes per ounce of meat, which is three ounces an hour, so in eight hours the rancher can produce 24 ounces of meat. These two numbers tell the same story as before: it takes the rancher a third of the time to produce an ounce of meat, so working the same amount of time the rancher produces three times as much. It takes the farmer 15 minutes per ounce of potatoes, so four ounces an hour and 32 ounces in eight hours. It takes the rancher 10 minutes per ounce of potatoes, so six ounces an hour and 48 ounces in eight hours. A very similar story: the rancher produces potatoes faster than the farmer, so in the same amount of time the rancher can produce more potatoes. We're going to use these numbers to think about what happens if they trade, but we need to start with the situation each of them is in if they don't trade. In the no-trade situation, neither of them trades with the other, and these numbers give us the end points of each person's production possibilities frontier.
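The conversion the lecture does here, from minutes-per-ounce to output per eight-hour day, is simple division, and can be sketched in a few lines. (Python is my choice, not the lecture's; the numbers are the ones from the table above.)

```python
# Minutes needed to produce one ounce of each good (the lecture's first table).
minutes_per_ounce = {
    "farmer":  {"meat": 60, "potatoes": 15},
    "rancher": {"meat": 20, "potatoes": 10},
}

WORKDAY_MINUTES = 8 * 60  # an eight-hour workday, as in the lecture

def output_in_workday(person):
    """Ounces of each good produced if the whole day is spent on that good."""
    return {good: WORKDAY_MINUTES // mins
            for good, mins in minutes_per_ounce[person].items()}

print(output_in_workday("farmer"))   # {'meat': 8, 'potatoes': 32}
print(output_in_workday("rancher"))  # {'meat': 24, 'potatoes': 48}
```

Fewer minutes per ounce always means more ounces per day, which is why the two tables tell exactly the same story.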
So let's draw the farmer's production possibilities frontier, and then the rancher's. We'll put potatoes on the horizontal axis and meat on the vertical axis; you could do it the other way around, and as long as you do everything consistently for both people you'll get the same answers. The end points come from the eight-hour numbers: if the farmer produced only meat, the farmer could produce 8 ounces; if only potatoes, 32 ounces. Those are two points on the farmer's production possibilities frontier, and because the inputs are not specialized, the frontier is linear. Now pick a place for the farmer to start: suppose the farmer spends half the day making meat and half making potatoes. Then at the end of the day there are four ounces of meat and 16 ounces of potatoes. That point is right in the middle of the frontier; call it point A: four ounces of meat, 16 ounces of potatoes. This production possibilities frontier represents all the possible production points the farmer could choose, and we've just picked one. If there's no trade, it also represents the possible combinations of potatoes and meat the farmer can consume, so we could also call it a consumption possibilities frontier.

With the farmer producing at point A, the farmer's day looks like this: wake up, spend four hours making meat and four hours making potatoes, in any order. It could be alternating hours, or the first four hours on potatoes and the last four on meat; as long as four hours go to meat and four to potatoes, that's the combination the farmer has at the end of the day. The farmer consumes the meat and potatoes, goes to sleep, wakes up, and does it again the next day. What we know about a production possibilities frontier is, first, that it's impossible for the farmer to consume at a point beyond it: all points beyond the frontier are unattainable. Second, if the farmer woke up one day and wanted more potatoes than yesterday, the farmer can eat more than 16 ounces of potatoes, but that comes at a cost: less meat. The downward slope represents the opportunity cost of consuming more of one good; you have to have less of the other.

Now the rancher's production possibilities frontier. If the rancher spends all the time on meat, the rancher can produce 24 ounces; if all eight hours go to potatoes, 48 ounces. Connecting those gives the rancher's frontier. Do the same thing for the rancher: pretend the rancher spends half the day making meat and half making potatoes. That puts the rancher at a point we'll also call point A, their initial production point: twelve ounces of meat and 24 ounces of potatoes, which is also their consumption point. We can see a couple of things. First, the rancher eats better than the farmer; there's more meat and more potatoes for the rancher, and that's a result of the rancher being more productive. The end points of the rancher's frontier are farther out than the farmer's, which is exactly what the table showed: the rancher is better at producing both meat and potatoes.

Let's summarize the amounts each person has with no trade. At the end of the day, the farmer has four ounces of meat and 16 ounces of potatoes; the rancher has 12 ounces of meat and 24 ounces of potatoes. Now draw a line under that, because next we're going to allow the farmer and the rancher to trade with each other. Day after day, the farmer and the rancher wake up, spend half their time making each good, consume the combination of meat and potatoes identified as point A in each picture, and go to sleep; neither of them is able to reach a point outside their production possibilities frontier.
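The half-day split that defines each point A is just a mix of the two end points of a linear PPF. A small sketch of that idea (the function and names are mine, not the lecture's):

```python
# Full-day outputs: ounces produced if the whole 8-hour day goes to one good.
full_day = {
    "farmer":  {"meat": 8,  "potatoes": 32},
    "rancher": {"meat": 24, "potatoes": 48},
}

def production_point(person, share_on_meat):
    """A point on a linear PPF, given the fraction of the day spent on meat."""
    out = full_day[person]
    return {"meat": share_on_meat * out["meat"],
            "potatoes": (1 - share_on_meat) * out["potatoes"]}

# Each spends half the day on each good (point A in the lecture).
print(production_point("farmer", 0.5))   # {'meat': 4.0, 'potatoes': 16.0}
print(production_point("rancher", 0.5))  # {'meat': 12.0, 'potatoes': 24.0}
```

Any `share_on_meat` between 0 and 1 gives an attainable point; nothing beyond the frontier is reachable.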
Then one day, suppose the rancher knocks on the farmer's door and says: hey, farmer, I've figured out a way we can change what we produce and then do some trading with each other. It's very simple: we change what we produce, do a little trading, and both of us are made better off. The farmer, as we all would be, is probably a little skeptical, but suppose the farmer sits down and listens to the rancher's plan. The plan has two parts.

First, they change what they produce. The farmer produces only potatoes, moving to a new production point at the end point of the frontier; call it point B. So the farmer specializes in potatoes. The rancher specializes in meat, but not completely (we'll talk about why in a bit): the rancher moves to a point, also call it point B, producing 18 ounces of meat and 12 ounces of potatoes. That means the rancher spends three-fourths of the time making meat and one-fourth making potatoes.

Second, they trade: five ounces of meat for 15 ounces of potatoes. The trade looks exactly opposite depending on whose perspective you take. The farmer trades away 15 ounces of potatoes in return for five ounces of meat: essentially the farmer is buying five ounces of meat and paying 15 ounces of potatoes. From the rancher's perspective, the rancher is buying 15 ounces of potatoes and paying five ounces of meat. It's easy to get your head tied in a knot with this.

Now let's see what the plan looks like in the table. Start with production: with trade, the farmer produces zero ounces of meat and 32 ounces of potatoes, specializing completely in potatoes; the rancher produces 18 ounces of meat and 12 ounces of potatoes, specializing (not completely) in meat. Next the trade: from the farmer's perspective, the farmer produces no meat but gets five ounces in the trade, so plus five; the farmer produces 32 ounces of potatoes but trades 15 away, so minus 15. The rancher gives up five ounces of meat in return for 15 ounces of potatoes; that line is just the mirror opposite of the farmer's, depending on whose perspective you take.

Now consumption. Once we know how much each produces and how much they trade, we can figure out what each consumes. At the end of the day the farmer produced no meat but got five ounces in the trade, so the farmer has five ounces of meat; the farmer produced 32 ounces of potatoes and traded 15 away, leaving 17 ounces of potatoes. Identify that consumption point on the farmer's frontier: five ounces of meat and 17 ounces of potatoes, call it point C. That is a consumption point outside the farmer's production possibilities frontier, a point the farmer could not have reached alone. That probably doesn't surprise us, since the farmer is the least productive person here; trading with somebody far more productive plausibly gets the farmer beyond the frontier. Now the rancher: the rancher produces 18 ounces of meat and trades five away, leaving 13 ounces; produces 12 ounces of potatoes and gets another 15 in the trade, leaving 27 ounces. Plot 13 ounces of meat and 27 ounces of potatoes, and not only does the rancher consume at a point outside their frontier, they consume at a point even farther outside their frontier than the farmer did. Both of them reach points outside their production possibilities frontiers.

Let's identify the gains from trade. The farmer gets one extra ounce of meat and one extra ounce of potatoes; the rancher gets one extra ounce of meat and three extra ounces of potatoes. By changing what they produce and then trading, both consume a bundle they could not have produced, or consumed, before. Not only that: there are now two more ounces of meat in the world and four more ounces of potatoes. The economic pie has gotten bigger. That's what we mean when we say the world, in terms of trade between people and between countries, is not a zero-sum game: the pie gets bigger, so everybody can have a bigger slice.

Now we need to think about where these gains from trade come from. The usual answer is that trade allows each person to specialize in what they do best. In a face-to-face class I'll write that on the board and ask how the class feels about it, and it's the same every time: everybody says yeah, that makes sense, sounds good. But then I ask: so in this problem, what does the farmer do best? And then it dawns on them: well, nothing. The farmer does nothing best in this problem; that is exactly how I set it up. The farmer is slower at producing meat and slower at producing potatoes; the rancher is better at both. If you just accept that trade has allowed each of them to specialize in what they do best, you're forgetting that we set this problem up in a very particular way to illustrate a point. What we need to do is think about what it means to do something "best."
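The whole plan, production, trade, consumption, and gains, can be checked mechanically. A sketch with the lecture's numbers (the variable names are mine):

```python
# No-trade consumption: each spends half the day on each good (point A).
no_trade = {"farmer":  {"meat": 4,  "potatoes": 16},
            "rancher": {"meat": 12, "potatoes": 24}}

# The rancher's plan: new production points, then a trade of
# 5 oz of meat (rancher -> farmer) for 15 oz of potatoes (farmer -> rancher).
production = {"farmer":  {"meat": 0,  "potatoes": 32},
              "rancher": {"meat": 18, "potatoes": 12}}
trade = {"farmer":  {"meat": +5, "potatoes": -15},
         "rancher": {"meat": -5, "potatoes": +15}}

for person in ("farmer", "rancher"):
    consumption = {g: production[person][g] + trade[person][g]
                   for g in ("meat", "potatoes")}
    gains = {g: consumption[g] - no_trade[person][g] for g in consumption}
    print(person, consumption, gains)
# farmer {'meat': 5, 'potatoes': 17} {'meat': 1, 'potatoes': 1}
# rancher {'meat': 13, 'potatoes': 27} {'meat': 1, 'potatoes': 3}
```

Note that the trade entries sum to zero across the two people (the trade itself creates nothing); the extra meat and potatoes come entirely from the change in production.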
It is true that each person is specializing in what they do best, and it is true that the farmer is better at producing potatoes than the rancher is, but we need to think very carefully about what it means to do something best. Understanding where the gains from trade come from in this problem relies on that.

Think back to the discussion we had early in the class about opportunity cost. One example was the opportunity cost of going to class. If I ask an introductory class what the opportunity cost of coming to class today is, most people say it's 50 minutes: I have to spend 50 minutes in the room. That starts to get at the answer, but the answer really isn't the number of minutes; it's the other thing you could have been doing in those 50 minutes if you weren't in class. The number of minutes is just an intermediate step. So think about what we've done here: the table at the beginning showed the number of minutes needed to make an ounce of each good, and by focusing on those minutes we're focusing on something we shouldn't. That's not really the cost. The cost is the thing you can't be doing while you're producing this good; it doesn't matter how many minutes are involved. That's hard for students to wrap their heads around, because we're used to competing with each other on time. If you had a brother or a sister, you probably competed over who could run faster or eat more, and time was always the thing used to compare yourself with the other person. It's not until you get into an economics class that somebody like me tells you the number of minutes doesn't mean what you think it means, at least not in what we're talking about here. What we want to know is who actually produces meat and potatoes more efficiently, and the key is that if you're going to produce more potatoes, that's time you can't spend producing meat. What matters is not the number of minutes in between, but how much less meat you have to produce if you're going to produce another ounce of potatoes.

The key lies in the difference between what economists call absolute advantage and comparative advantage. Absolute advantage has to do with how fast you can produce something: a producer has an absolute advantage over another producer if they need less of the input to produce a unit of the good. Equivalently, a producer has the absolute advantage if, in the same amount of time, they can produce more of the good. Here, the rancher has the absolute advantage in producing potatoes and in producing meat. You can see that from the table, because the number of minutes needed per ounce was smaller for the rancher for both goods, and you can see it from the production possibilities frontiers, because the end points of the rancher's frontier are farther out than the farmer's. But that's not what we need to focus on, because absolute advantage deals only with the number of minutes. We need what economists call comparative advantage: a producer has the comparative advantage over another producer if they have the lower cost of production.

So we need the actual cost of production. If the farmer produces one more ounce of potatoes, how many fewer ounces of meat can they produce? Same question for the rancher. We'll use the intermediate step of how much time gets used up producing potatoes, but only to figure out how much less meat they can produce. Let's start from the rancher's perspective and go through the steps of logic; once we see how it works, we'll find a faster way. In terms of solving problems, what I'm about to do is only to teach you why it works the way it does; once you've got your head wrapped around it, there's a quicker route.

It takes the rancher 10 minutes to produce an ounce of potatoes. (I erased that table, but it's in the notes you're taking, and you should be taking notes.) So, step one: one ounce of potatoes is 10 minutes that can't be used to produce meat. Now the actual cost: how much meat could the rancher have produced in those 10 minutes? It took the rancher 20 minutes to produce an ounce of meat, so in 10 minutes the rancher could have produced half an ounce of meat. And now we have what we need: the cost to the rancher of producing that ounce of potatoes is half an ounce of meat. The 10 minutes is just an intermediate step; it's those two quantities that matter. For the rancher, the opportunity cost of one ounce of potatoes is half an ounce of meat.

Now think back to when we first talked about production possibilities frontiers: the slope of the frontier tells you the opportunity cost of the good on the horizontal axis. The reason I started with potatoes is that potatoes are on the horizontal axis. Look at the rancher's frontier: the slope is constant because it's a line, and the easiest way to compute it is the rise over the run given by the end points. The slope is 24 over 48, which is one-half. That's not a coincidence; it's exactly what we said in the earlier video: the slope tells you the opportunity cost of the good on the horizontal axis. Once you understand that point, you don't have to go through all those steps; if you want the opportunity cost of an ounce of potatoes, just look at the slope. It's half an ounce of the other good, which is meat. For the farmer's frontier, the slope is 8 over 32, which is one-quarter. So the opportunity cost of an ounce of potatoes is smaller for the farmer than for the rancher: in terms of cost of production, the farmer is actually better at producing potatoes than the rancher is.
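The shortcut the lecture lands on here, reading the opportunity cost of the horizontal-axis good off the slope of the PPF, is one division. A sketch using exact fractions (structure is mine):

```python
from fractions import Fraction

# End points of each PPF: ounces if the whole day goes to one good.
full_day = {"farmer":  {"meat": 8,  "potatoes": 32},
            "rancher": {"meat": 24, "potatoes": 48}}

def opp_cost_of_potatoes(person):
    """Slope of the PPF: ounces of meat given up per extra ounce of potatoes."""
    p = full_day[person]
    return Fraction(p["meat"], p["potatoes"])

print(opp_cost_of_potatoes("farmer"))   # 1/4
print(opp_cost_of_potatoes("rancher"))  # 1/2
```

Using `Fraction` keeps the costs exact, which matters later when comparing them to an implied trade price.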
One thing I need to point out, a common mistake I sometimes see in my principles class: when you compute the slope of this production possibilities frontier, you're dividing the vertical end point by the horizontal end point. It is just a coincidence that for the rancher's point A, 12 divided by 24 is also one-half; that's only because we chose that particular point. If you tried it with point B, which was 18 ounces of meat and 12 ounces of potatoes, 18 divided by 12 would not give you one-half. So don't just pick a point and divide its vertical coordinate by its horizontal coordinate; that's not how you calculate the slope. You've got to look at the rise and its associated run. Same thing for the farmer: 8 divided by 32 is one-quarter, and 4 divided by 16 is also one-quarter, but only because that point is right in the middle of the frontier. As long as you divide this end point by that end point, you'll always have the right answer.

So now we can see that the farmer has the lower opportunity cost for producing potatoes, and that's what it means to have the comparative advantage: a producer has the comparative advantage over another producer if they have the lower opportunity cost. In this case the farmer has it for potatoes. What we want to do now is put a table together. When you solve these types of problems, it's very important to be methodical, and one thing you need to do is build a table of opportunity costs. Start with potatoes, since that's the good on the horizontal axis: the opportunity cost of an ounce of potatoes for the farmer is 1/4 ounce of meat, and for the rancher it's 1/2 ounce of meat. A lot of times I'll circle the smaller one so that when I look back it's obvious what I'm looking for: 1/4 is less than 1/2, so the farmer has the comparative advantage in producing potatoes.

Now, who has the comparative advantage in producing meat? It turns out that the opportunity cost of the good on the vertical axis is given by the reciprocal of the slope, so if you want the opportunity cost of meat, just flip those numbers over: for the farmer, the opportunity cost of an ounce of meat is 4 ounces of potatoes; for the rancher, 2 ounces of potatoes. Two is less than four, so the opportunity cost of producing meat is lower for the rancher; that's why the rancher specialized in meat. And because one column is the reciprocal of the other, the person with the smaller opportunity cost in one column has the bigger opportunity cost in the other. What you get is that it's not possible for one person to have the comparative advantage in producing both goods. That's important: one person can have the absolute advantage in both goods, as the rancher does in this problem, but in terms of comparative advantage it's not possible to have it in everything. The other side of that coin is that it's impossible for you not to have the comparative advantage in something. So once you've identified that the farmer has the comparative advantage in producing potatoes, you immediately know that the rancher has the comparative advantage in producing meat.
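Building the opportunity-cost table and "circling the smaller one" is then mechanical; in this sketch the `min` over costs is my framing of that step:

```python
from fractions import Fraction

# Opportunity cost of one ounce of potatoes (in ounces of meat), from the slopes.
opp_cost_potatoes = {"farmer": Fraction(1, 4), "rancher": Fraction(1, 2)}
# Opportunity cost of meat is the reciprocal.
opp_cost_meat = {p: 1 / c for p, c in opp_cost_potatoes.items()}

comparative_advantage = {
    "potatoes": min(opp_cost_potatoes, key=opp_cost_potatoes.get),
    "meat":     min(opp_cost_meat,     key=opp_cost_meat.get),
}
print(comparative_advantage)  # {'potatoes': 'farmer', 'meat': 'rancher'}
```

Because the meat column is the reciprocal of the potatoes column, whoever has the smaller cost in one column necessarily has the larger cost in the other; no one can hold the comparative advantage in both goods.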
rancher has the comparative advantage in production of meat but again I would make sure that you always put this table together because this is how you solve these types of problems so the gains from trade come from each person specializing in the good for which they have the comparative advantage so I'm going to put here compare it saying that somebody has the comparative advantage that is equal to saying that they have the lower opportunity cost if you have the comparative advantage you have the lower opportunity cost what's going on here is that we call it comparative advantage because you are comparing their opportunity costs okay what we need to do now is talk about the range of so in in this problem I gave you the trade five ounces of meat for 15 ounces of potatoes what we want to talk about is where did that come from because a natural question you might be asking is well so how do I figure out the trade I mean if if I were to ask you give me a trade that would work that would make both of these better off we need to figure out how you would approach that problem so I'm going to clear a bunch of this stuff off and then we'll take a look at how to figure out which trades will work all right let's think about the range of prices at which gains from trade exist so I've got the opportunity cost table up here we can identify that the farmer has the lower opportunity cost for production potatoes rancher has the lower opportunity cost for production of meat but let's think about which trades would work once we know that that tells us who's going to specialize in which good but it doesn't tell us which trades will work so let's take a look at the what I'm going to call the the price implied by the trade and it didn't it depends on whose perspective we're taking price implied by the trade let's think about the farmers perspective first so if we start with the farmer the farmer got five ounces of meat for 15 ounces of potatoes so let's write that out got five ounces 
of meat for fifteen ounces of potatoes let's think about how much each ounce of meat cost for the farmer okay so that means that each ounce of meat cost three ounces of potatoes so we could rewrite this let's say got each let's say got let's not say each you'll say one ounce of me for 3 ounces of potatoes that's how much each ounce cost the way I did this was to use that kind of golden rule of algebra if you're going to do something to this you've got to do it to that I want to turn five into one I need to divide five by five that 5 over 5 is 1 but if I'm going to divide that by 5 I need to divide that by 5 also 15 divided by 5 is 3 so the farmer got each ounce of meat for 3 ounces of potatoes and let's think about why that makes the farmer better off the reason that makes the farmer better off is because that price that the farmer paid each ounce of nice meat cost the farmer 3 ounces of potatoes that falls right in between the range of the opportunity costs for the farmer in the rancher so remember the farmer here is buying meat if the farmer produced meat themselves it would cost 4 ounces of potatoes for each ounce of meat but the farmer is able to buy it for 3 ounces of potatoes buying meat is cheaper for the farmer than producing it themselves so the farmer is happy about that trait in terms of the rancher is is the rancher happy well of course because the rancher is selling the each ounce of meat for 3 ounces of potatoes and it only costs the rancher two ounces of productive potatoes to produce each ounce of meat the ranchers selling it for one more ounce than it cost them to produce it so the farmers buying it for one less ounce then it would cost them to produce and the ranchers selling it for one more ounce than it costs them to produce both of them like this price and so any price that fell in between that range of two and four would create gains from trade in a face-to-face class I always we'll ask the class how many numbers are there between two and four 
and there are always a handful of people that say there's only one number between two and four no there's not there's one integer between two and four but there's an infinite number of numbers between two and four numbers like two point five three point one two point seven nine three so this one's nice and simple three falls right in the middle there but there's an infinite number of prices that would create gains from trade between the two of them there's also an infinite number of numbers that lie outside of this range so in terms of what's going on there you just need to pick one in the range there's an infinite number of them let's look at this from the rancher's perspective so the rancher got 15 ounces of potatoes for 5 ounces of meat so if we want to know how much each ounce of potatoes cost the rancher we need to divide this by 15 to turn it into one but then we've got to divide that by 15 too 5 over 15 is one-third so this is the same as saying got one ounce of potatoes for 1/3 ounce of meat this trade is just the mirror image of that one that's why this number and that number are reciprocals of each other but if we look at it from the rancher's perspective the rancher is getting each ounce of potatoes for 1/3 ounce of meat and that creates gains from trade because 1/3 falls right in between 1/4 and 1/2 okay so the same thing is happening here the farmer is selling potatoes and is able to produce each ounce of potatoes for 1/4 ounce of meat and then sell it for 1/3 ounce of meat the farmer's made better off the rancher likes to get potatoes this way because if the rancher produces an ounce of potatoes it costs them a half ounce of meat they can get it in the trade for only 1/3 ounce of meat so both of them like it and you should be able to see that any trade that implies a price that falls in that range is also going to imply a price that falls in this range because this range is the reciprocal of that one so now
you understand hopefully why this trade creates gains from trade because it implies a price that falls right in between the two opportunity costs what we need to do now is clear this off and then I want to work through one problem from beginning to end it takes a while to kind of go through all of these steps and talk about logically why one thing leads to another but when you actually solve a two-person trade problem it's actually fairly quick so let me clear this off and then we'll take a look at a problem let's take a look at what a problem is going to look like one of the problems with the farmer rancher example is that sometimes the students think that the farmer specializes in potatoes because I've called them the farmer and that's not the case I could call them just person a and person b and the rancher specializes in meat because they're a rancher so what I need to do is give you a problem where the name of the person or the name of the country doesn't tell you anything about what they might specialize in as a matter of fact I could go back and change that problem to where the farmer specialized in meat and the rancher specialized in potatoes that would be very easy to do but that creates some confusion for students so let's take a look at a different problem and this time let's use two countries so let's suppose that we have the US and Mexico and let's suppose that our two countries can each produce two goods let's make them food and computers and let me give you the information for how much food and computers the US and Mexico can produce if that's all they produced the way that I can give this information to you we could do it in terms of how we did the farmer rancher problem where I can tell you how many minutes it takes to produce a unit of food and how many minutes to produce a unit of computers and then tell you how much time they're going to work and we could go through
that but let me just give it to you kind of in sentence form I'll write a sentence here and then I'll kind of quote from it so let's suppose if the US produces only food it can produce let's say 400 units if the US produces only computers I'm going to abbreviate that there it can produce let's say 100 computers if Mexico produces only food let's suppose Mexico can produce 300 units and then if Mexico produces only computers let's suppose they can produce say 20 so we're not going to worry about how long it takes to produce food and computers and how much they work I'm just going to give you the endpoints of the production possibilities frontier there so the information in a problem could be provided to you a number of different ways another way I could provide it is just to show you the production possibilities frontier so there's multiple ways let's go ahead and draw the production possibilities frontiers for the US and for Mexico so let's put the US here let's put food on our horizontal axis and we'll put computers up on our vertical axis for both the US and Mexico here's computers here's food here's Mexico and so these numbers that I've given you are just the endpoints of the production possibilities frontier we're going to be assuming like we did with the farmer rancher problem that the inputs used to produce these two goods are not specialized now I realize that with food and computers in the real world they would be specialized but when we're working problems here I want it to be as simple as it can be for you so let's draw the production possibilities frontier for the US if the US produces only food it can produce 400 units so the endpoint of its PPF is going to be right there in the food direction if the US produces only computers they can produce 100 so there's the other end point inputs are not
specialized so we get a linear production possibilities frontier if Mexico produces only food they can produce 300 so there's the end point for Mexico in the food direction if Mexico produces only computers they can produce 20 so that's going to be somewhere down here also their production possibilities frontier is linear looks a little curved but pretend that's a line so now we've got the PPF for the US and Mexico and I could ask a few questions at this point I could ask about absolute advantage so I could say which country has the absolute advantage in production of food well there's a couple ways you can answer that you could look at whose end point is out farther the US's end point is out farther in the food direction you could also just look at the table if they produce only food then the US can produce 400 units Mexico can only produce 300 so you can get the answer from either the table or from the picture it's also the case that the US has the absolute advantage in production of computers they can produce 100 compared to Mexico's 20 so with just that information about all I can ask you is who has the absolute advantage let's talk about the opportunity cost and for the opportunity cost we're gonna put together this table so let's talk about the opportunity cost of let's start with what we've got on the horizontal axis 1 unit of food and then we're going to do it for the US and we're gonna do it for Mexico and then once we've got this column we're going to do one unit of computers when I put the table together my first column is always the good on the horizontal axis because it's just the slope that's the easiest so for the u.s.
the opportunity cost of a unit of food is equal to the slope of this production possibilities frontier the slope of this thing is 100 over 400 so the slope of that thing is 1/4 it's 1/4 of a computer if we look at the opportunity cost of a unit of food for Mexico you have to be careful with this one because when I ask students in a face-to-face class to calculate the slope sometimes they accidentally say that it's 2/3 they accidentally think 200 over 300 but it's 20 over 300 we can knock off one of those zeros that's equal to one fifteenth so the opportunity cost of a unit of food in Mexico is one fifteenth of a computer so if we're thinking about who has the lower opportunity cost for production of food it's clearly Mexico they only have to give up a fifteenth of a computer whereas the US would have to give up 1/4 of a computer Mexico is better at producing food if we want this column all we have to do is invert these numbers so for the US the opportunity cost of a unit of computers is 4 units of food and for Mexico the opportunity cost of 1 unit of computers would be 15 units of food and so clearly the US has the lower opportunity cost for production of computers so now that we've got that table put together you can answer a variety of questions about who has the comparative advantage who's going to specialize in which good who's going to export which good to the other country so let's identify a few ways that I could think about asking you questions here one is we could just talk about who has the comparative advantage I could just directly ask you who has the comparative advantage in food and who has the comparative advantage in computers well in terms of food production you just look at the food column and you can see that Mexico has the comparative advantage in production of food in terms of computers it is the u.s.
that has the comparative advantage in production of computers what that means is if we were thinking about specialization who's going to specialize in which good with specialization you're going to specialize in whichever good you have the comparative advantage in so well let's start with food Mexico has the comparative advantage in food so Mexico will specialize in food production in any problem that I give you we're going to think about them specializing completely in the farmer rancher problem that we did the rancher did not specialize completely in meat and you might ask well why did the rancher move up towards the meat end point but not all the way well I'm not a hundred percent sure that problem comes from the textbook by Mankiw and I'm not a hundred percent sure why Mankiw didn't have them specialize completely in production of meat because the gains from trade would have been even bigger I think it's probably because the numbers work out a little bit nicer but in any problem that I give you I'm gonna have the person who has the comparative advantage in say food specialize completely in food so in terms of computers the US will specialize completely in production of computers so if I were going to identify the production points up here Mexico is going to specialize completely in food production there will be their production point and the u.s.
is going to specialize completely in production of computers so there will be their production point this corresponds to point B in our other example the farmer rancher example those will be the production points that each country is going to produce at another thing that I could ask is I could say who's gonna export food who's going to export computers well clearly Mexico is going to export food because the US isn't producing any food only Mexico's producing food and the US will export computers because they're not producing anything except computers so who's going to export I could ask that question in terms of who's going to import but again once you know who's going to specialize in which good that's an easy question to answer I could ask about total production let's call it total world production of let's start with food and then do computers so if I were going to ask you suppose these two countries trade what's going to be total world production of food well Mexico is going to be the only country producing food and they're going to produce 300 units so there will be 300 units of food produced and the u.s. is going to specialize in computer production they can produce a hundred computers so total world production of food is going to be 300 total world production of computers is going to be 100 this is usually the point at which students get betrayed by their brain and they start to say but wait a second total world production of food is 300 units but if the u.s.
produced food it could produce 400 units so maybe the US should produce food because they can produce more but what's happening there is your brain is going back to thinking about absolute advantage and what we've seen here is that absolute advantage does not create gains from trade it's comparative advantage it's each country or each person specializing in the good for which they have the comparative advantage so when you get to this and you see that there's a number in that table bigger than some of these numbers don't get sidetracked by that another type of problem that I could ask about this is I could say what's the range of prices so let's suppose I ask you what's the range of prices let's do it for computers the range of prices for computers at which both of the countries would agree to trade well all you have to do is pick any trade that implies a price between 4 and 15 units of food per computer so the range of prices for computers would be anything between 4 and 15 units of food so for example one computer for 10 units of food that would be a trade that would work because one computer for any number in between there one computer for say 9.32 units of food that would be a trade that would imply gains if I ask you for the range of prices for food then you would think about any trade that implied a price between that range and you might ask well in the real world how does the price get figured out well if you just had two people it would be up to those two people or these two countries to negotiate with each other in our next chapter we'll talk about demand and supply and we'll see how prices get determined when there are lots of buyers and sellers but here if you've just got two it would just be up to the US and Mexico to negotiate some price so whichever country was better at negotiating they might be able to push the
price more in their favor than the other country but any trade that implies a price between those two numbers is going to create gains from trade so let me show you one other thing we could use this to write let's say one computer for anything between 4 and 15 units of food there's a homework problem that I will have you do and sometimes I put a problem like this on the test let me put a line here to kind of differentiate what's going on here there's a homework problem where you have to do something like this because it's going to ask you what about a trade of 7 computers or 20 computers what would be the range here well the key to answering any problem like that if I don't ask you about one computer what if I ask you about what's the range of prices for 20 computers well what you would do is write this sentence one computer for anything between 4 and 15 units of food and I'm getting every one of those numbers right here one unit of computers for anything between 4 and 15 units of food or if I ask you about food it would be one unit of food for anything between one fifteenth and one-fourth of a computer if you then want to turn that into something other than one computer like let's say 20 computers you just multiply these numbers by 20 so it would be anything between 80 and 300 20 computers for anything between 80 and 300 units of food that would create gains from trade so if you want to put it in terms of something other than one unit of food or one unit of computers just write that sentence and then multiply everything by the number so that gives you hopefully an idea of what problems would look like working through this problem because I showed you all of the different things I could ask takes some time but creating that table that's really the key to this type of problem working through it doesn't
take a lot of time as long as you're comfortable interpreting the numbers in that table as long as you do a good job of being careful about drawing these production possibilities frontiers these answers are pretty easy to get so that gives you hopefully an idea of where the gains from trade come from it's not absolute advantage it's comparative advantage it's comparing the actual opportunity costs the opportunity cost represents the actual cost of production so I will see you in the next video
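As an aside from the editor, the arithmetic worked through in this US-Mexico example can be checked with a short script. This is only an illustrative sketch of the calculations above; the variable names and the dictionary layout are not from the lecture, just one way to organize the PPF endpoints given (US: 400 food or 100 computers, Mexico: 300 food or 20 computers).

```python
# Sketch of the US-Mexico comparative advantage problem from the lecture.
# PPF endpoints: (max food, max computers) if a country produces only that good.
endpoints = {"US": (400, 100), "Mexico": (300, 20)}

# Opportunity cost of 1 unit of food = computers given up per unit of food
# (the slope of the linear PPF); invert it for the cost of 1 computer.
opp_cost_food = {c: comp / food for c, (food, comp) in endpoints.items()}
opp_cost_computers = {c: food / comp for c, (food, comp) in endpoints.items()}
# opp_cost_food: US is 1/4 of a computer, Mexico is 1/15 of a computer
# opp_cost_computers: US is 4 units of food, Mexico is 15 units of food

# Comparative advantage goes to the country with the lower opportunity cost,
# and each country specializes completely in that good.
food_producer = min(opp_cost_food, key=opp_cost_food.get)            # Mexico
computer_producer = min(opp_cost_computers, key=opp_cost_computers.get)  # US

# Total world production under complete specialization.
total_food = endpoints[food_producer][0]           # 300 units of food
total_computers = endpoints[computer_producer][1]  # 100 computers

# Any price between the two opportunity costs creates gains from trade;
# scale the range for a trade of 20 computers by multiplying through by 20.
low, high = sorted(opp_cost_computers.values())
print(f"1 computer for between {low} and {high} units of food")
print(f"20 computers for between {20 * low} and {20 * high} units of food")
```

The `min(..., key=...)` calls just pick the country with the smaller opportunity cost in each column of the table, which is exactly the comparison the lecture makes by eye.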
Principles of Microeconomics, Chapter 5: Elasticity, Part 2
what I want to talk about now is what this elasticity measure is really telling us about the shape of the demand curve so up to this point you've learned how to calculate the elasticity and you've learned how to interpret the number that you've got that was the previous video so what we want to do now is think about what this means graphically what the elasticity is telling us really is something about the steepness of the demand curve and I'll give you a picture that will help you understand that here in a second but let's start with a couple of extreme demand curves I want to give you some pictures of special cases of demand curves suppose there was a good for which your demand the number of units you wanted to buy didn't change if price changed so maybe a kind of an extreme example of this is let's suppose you have a medical condition and that medical condition requires you to take one pill per day to keep your heart running and if you don't take that pill then your heart stops and you die and if you take two pills that doesn't help you you just need one pill per day and I know that's an extreme example but let's think about what your demand curve would look like in that case your demand curve there is going to be perfectly vertical it's gonna be vertical at one pill per day what that means is it doesn't matter if the price of that pill is really low or the price of that pill is high you want one pill per day the price of that pill could get so high that you can't afford to buy it and that's a whole different issue but if we think about how much you want to buy at reasonable prices it's not going to be affected by the price in other words you're not going to respond at all to a change in price and we would say that your demand is perfectly inelastic so here demand is perfectly inelastic you don't respond at all when the price changes if we were to calculate your price
elasticity of demand it would be zero a one percent change in price causes a zero percent change in your quantity demanded here so that's the first kind of extreme demand curve let me draw you another one and we'll talk about this type of demand curve a lot later on in the class when we start thinking about perfect competition but let's suppose there's a good for which if the price goes up any at all you switch to a substitute good in that case the demand curve would be perfectly horizontal and if the price was anywhere above this point you would buy none of the good ok this would be what your demand curve would look like if there was a perfect substitute available so if two goods were perfect substitutes and you're consuming this one and then its price goes up you would switch to consuming this one because it's a perfect substitute so here we would say that your demand is perfectly elastic if you calculated the price elasticity of demand it would be infinite you'd need to use some limit mathematics to calculate that we're not going to calculate that but this gives you an idea of kind of the two extreme situations for a demand curve it's either going to be perfectly up-and-down perfectly inelastic or perfectly horizontal perfectly elastic or something in between there most of the time it's going to be in between these are extreme cases so what we see is that elasticity tells us something about the steepness of the demand curve let me give you another picture most demand curves that we're going to be thinking about in this class are not going to be perfectly inelastic or perfectly elastic they're going to be downward sloping but what we're seeing is that the flatter the demand curve the more elastic demand is here you respond infinitely much here you don't respond at all the more steep the demand curve is the more inelastic demand is so let's think about a couple of demand
curves let's draw a demand curve that's relatively steep here let's call this d1 and let's pick a price let's suppose we have a price like p1 and we look at the quantity demanded at that price q1 so down here's Q up here's price just like normal so at that price of p1 we can look at the quantity that people want to buy if we were to decrease price we see that people will want to buy more but what I want to do is draw another demand curve through that point call it d2 so demand curve d1 is steeper than demand curve d2 we know that means demand curve d1 is more inelastic than d2 another way to say that is d2 is more elastic than d1 let's decrease price now to p2 so we're going to decrease price and compare the response with each of these demand curves with demand curve d1 we see that when we decrease price from p1 to p2 consumers buy more of the good but not a lot more and the reason is because that demand curve is relatively steep demand is relatively inelastic if we look at demand curve d2 if we start at the same point the same price and we decrease price by the same amount with demand curve d2 we get a much bigger response consumers respond a lot more when price falls quantity demanded rises by a lot more with demand curve d2 than it did with demand curve d1 so with demand curve d2 we would get the result that consumers respond a lot to a change in price compared to d1 this demand curve is more elastic so elasticity is a measure of the steepness of the demand curve what that means is we've got to think now when we draw a demand curve and we'll learn that this is true with a supply curve also we need to think about how steep we make that demand curve and how steep we make that supply curve because the steepness that we give it is going to tell us something about the elasticity so if we were going to draw the demand curve for gasoline we know people don't respond very much to the
change in the price of gasoline so d1 would be a much better demand curve for gasoline than d2 would be okay so we have to think about that and it depends on the type of good that we're considering you might ask well why don't we just use the slope I can remember when I first learned this that's the first thing that popped into my mind if what we're interested in is a measure of the steepness of a demand curve why don't we just use the slope because we already know the slope it's easy to calculate it's just the rise over the run the problem with using the slope is that it has units of measure so if we were looking at the slope of the demand curve and let's say this was the demand curve for apples then it would be price up here and the quantity of apples down here and so we could see if we decrease price by $2 how many more apples does the consumer want to buy and so it would be dollars per apple but then that would create a problem if we wanted to compare that to the demand curve for gasoline because with the demand curve for gasoline it would be dollars per gallon of gas and so we've got apples and gallons of gas and we've got a unit of measure problem there if we instead think about these changes in percentage terms the percent change in price and the percent change in quantity that eliminates our units of measure and we can compare the elasticity of the demand curve for apples with the elasticity of the demand curve for gasoline and so if we were just to use the slope we wouldn't be able to compare different demand curves but by using the elasticity we are able to do that so that's why we don't use the slope let's talk about why elasticity matters at this point you may be thinking well this just seems like a way to make a principles of microeconomics class more complicated why do we need to come up with some new mathematical way of measuring the steepness of these things and the answer is that the elasticity of the demand curve is going to
be very important in terms of understanding how firms are helped or hurt by changes that take place in a market so let's start by looking at the relationship between the revenue that a firm earns total revenue and the elasticity of the demand for what it's selling let's start by thinking about where total revenue shows up if I were to draw a demand curve here so here's a demand curve let's think about the price of the good let's suppose the price of the good is right there and let's think about the quantity at that price if we go over here to the demand curve there would be the quantity at that price let's call it p1 q1 I'm leaving off one important thing here for that to be the price in this market my supply curve would be going right through that point but I want to leave that off I don't want this to be any more complicated than it has to be the total revenue that a firm earns is going to be equal to the price of the good multiplied by the quantity of the good so if you're selling a hundred units and you're charging two dollars per unit you're bringing in revenue of $200 so this vertical distance right here is P and this horizontal distance right there is Q total revenue is equal to price times quantity so what this is telling us is this total revenue can be found by taking this vertical distance P and multiplying it by this horizontal distance Q if you take a vertical distance and multiply it by a horizontal distance you're getting an area and this is telling us that the area of this rectangle right here is equal to total revenue that area tells us the revenue that firms in this market are earning and so what we want to do is think about how this area how total revenue changes depending upon the steepness of the demand curve that the firms in this particular market are facing so I need to clear this off and then we'll draw a picture here that compares two different demand curves and we'll see it's very
easy to understand why the steepness of this demand curve matters when we're thinking about revenue let's do something similar to the picture that we had up here just a little bit ago let's consider two different demand curves we'll separate this out so that we can get a better idea of what's happening in these two different markets so let's think about a demand curve over in this picture that's relatively steep okay so here's a demand curve I want to start with a particular price let's suppose we start with a price equal to two dollars and let's go over to this demand curve and think about a quantity let's suppose at that price the quantity demanded in this market is 50 so again I'm gonna leave off my supply curve right there but the supply curve would be going through let's call that point a now what I want to do in my right-hand picture is start at the same place in that market but I'm gonna make my demand curve relatively elastic I'm gonna make it flatter so what I want to do over here is use the same price of two go over about the same distance and have a quantity of 50 and I'm going to make my demand curve in that picture relatively flat through that point not all the way flat obviously but relatively flat relative to this one and let's also call that point a this is going to be two different markets in this market demand for the good is relatively inelastic in this market demand for the good is relatively elastic okay but let's think about total revenue at a so total revenue at a is going to be the price times the quantity it's going to be $100 if we do it in this market total revenue at a is going to be the same it's going to be price times quantity which is $100 so firms in each of these markets are going to be earning the same amount of revenue okay now what I want to do is change the price let's suppose that price doubles let's make my price go up to four so right up here's four now if I go
over at that price to that demand curve on the right hand side because it's relatively flat I hit it really quickly right there's the quantity demanded at that price let's suppose it's ten units and let's call that point B let's do the same thing over in this market we double the price up here to four because this demand curve is relatively steep when we go over and we hit the demand curve what we see is that we hit it right there let's suppose that's 40 we'll call that point B so the same thing has happened in each of these markets we started at the same place and we raised price by the same amount and we get two completely different outcomes in terms of revenue so let's calculate total revenue at B over here total revenue at B is 4 times 40 that's a hundred and sixty dollars if we think about what happens over here total revenue at B is 4 times 10 that's $40 that doubling of price helps firms over here but hurts firms right there now let's think about what that means we have to wrap our heads around that because in a face-to-face class before I do this if I ask my students if you're running a firm what do you hope that the market does to the price that you're selling your good for almost all of my students will say ok if I'm running a firm sellers like high prices so I should hope that the market drives the price up because then I'm gonna make more dollars per unit and so I would want the market to drive the price up well what we're seeing right here is that's not always true at all it depends on the type of demand curve that you're facing it depends on what you're selling and the consumers' price elasticity of demand for that product which you have no control over you don't control consumers' price elasticity of demand if you're in this type of market if you're selling a good for which there are close substitutes and the market drives your price up then that's going to end up hurting you consumers are going to
substitute away from your good towards other goods and your revenue is actually going to go down on the other hand if you're selling a good for which there are not close substitutes then if the market drives the price up that's going to help you because consumers are going to buy less but not very much less so notice here that elasticity helps us understand why in some cases firms are going to be made better off by price going up and in other cases firms are going to be made better off by price going down if you think about this firms in this type of market suppose price started at four then they would start earning $40 of revenue if the market drove their price down if the supply curve increased and it drove their price down by half they're going to be better off they're gonna make more revenue and hopefully you understand what's going on here there are two things changing right if price goes up that's good for you if you're a seller you're gonna sell every unit for more dollars the problem is when price goes up consumers are going to want to buy less so there's a good effect and there's a bad effect price goes up but quantity goes down and whether or not you're better off at the end of that process depends on whether or not price went up in percentage terms by more or less than quantity went down in percentage terms on the other hand if the market drives your price down that by itself is bad for you you have to sell your good for fewer dollars than before but there's a silver lining and the silver lining is that consumers are going to want to buy more of the good because price went down so you have this bad effect price going down but you have this good effect you're going to sell more units and it depends on which effect is bigger is price going to go down in percentage terms by more or less than quantity is going to go up and of course the measure that we've got elasticity tells us whether or not the percent change in quantity is
bigger than or less than the percent change in price so hopefully that helps you understand why some firms are gonna want their price to be driven down by the market and some firms would want their price to be driven up by the market here's the general rule when demand is inelastic price and total revenue move in the same direction in other words if price goes up total revenue goes up that's this example but if price goes down total revenue goes down if we start here at point B total revenue would go from 160 down to a hundred so when demand is inelastic price and total revenue move in the same direction when demand is elastic price and total revenue move in opposite directions that's the example on the right in this example when price went up total revenue went down if price were to go down total revenue would go up so in order to understand how a change in the market which we're talking about changes in supply here how that impacts the firm we need to know something about the steepness of the demand curve that the firm faces let's talk about another way that we can think about the terminology elastic and inelastic because it turns out that with a linear demand curve the elasticity of demand changes along that linear demand curve if I were to draw a demand curve where I've got price and quantity so here's a demand curve I've extended it from my vertical axis down to my horizontal axis there's a demand curve D it turns out that if we were to look at the price elasticity of demand right there in the middle of that demand curve a couple of prices right there infinitesimally close to each other and a couple of quantities we calculated price elasticity of demand right there it would turn out to be equal to negative one demand would be unit elastic if we were to calculate the price elasticity of demand using two prices and quantities up here
on this end of the demand curve if you calculated that you would get a price elasticity of demand greater than one in absolute value and we would conclude that demand is elastic up here if you were to calculate the price elasticity of demand down here you would get a price elasticity of demand less than one we would say that demand is inelastic what that means is that it depends on the portion of the demand curve that we're in as to whether or not demand is elastic or inelastic remember the elasticity is not the same as the slope if it was the slope then the elasticity would never change because the slope of a line is the same here and there and here and there but we're not using the slope we're using the percent change in price and the percent change in quantity and so the elasticity of demand varies along a linear demand curve so I could use the terms elastic and inelastic to refer to two different demand curves this demand curve is more elastic than this demand curve or I could use the terms elastic and inelastic to describe one end of a linear demand curve or the other end of a linear demand curve when I use those terms from now on I'm going to be using them to compare the steepness of two demand curves okay if I ever need to in the future talk about one end of the demand curve or the other then I'll make sure that I tell you that I'm talking about that so if I say think about demand being elastic you need to envision in your mind a relatively flat demand curve okay but keep in mind that elasticity does change what we want to do now is think about some other demand elasticities so let's clear this off and then we'll take a look at that there are a couple of other demand elasticities that we need to calculate the nice thing about them is you calculate them exactly the same as what we did with price elasticity of demand it's
just going to be the ratio of two percent changes so the first one we're going to talk about is what we call cross price elasticity of demand I'm going to denote it CPED cross price elasticity of demand is going to tell us something about how your demand for one good changes when the price of another good changes so we could think about how your demand for hot dogs changes when the price of buns changes or maybe how your demand for Pepsi changes when the price of Coke changes so we're gonna be thinking about the relationship between your demand for one good and a related price remember one of the determinants of demand is the price of related goods so that's what our cross price elasticity of demand has to do with and it's easy to calculate it's just the percent change in quantity of good one I'm going to put here a little arrow to indicate that this is going to be the percent change in the quantity of one of the goods divided by the percent change in the price of good two a related good so you can see that the formula looks the same as what we did for price elasticity of demand but there we were looking at the quantity demanded of that good and the price of that good now we're looking at the quantity demanded of one good and the price of a related good okay but you calculate these percent changes exactly the same as what you did with the price elasticity of demand so I could tell you that suppose when the price of hot dog buns is this amount people buy this number of hot dogs and when the price of buns changes to this amount the number of hot dogs purchased changes to that amount so I give you two prices and two quantities and you calculate the cross price elasticity of demand exactly the same as what we did with the price elasticity of demand the
interpretation is a little bit different let's look at that so remember that two goods could be substitutes for each other or they could be complements for each other if two goods are not related then the cross price elasticity of demand would be zero but if they are related then they're either going to be substitutes or complements let's start with substitutes any time we're thinking about the cross price elasticity of demand we have to pay attention to the sign with substitutes when you calculate the cross price elasticity of demand it's going to end up being a positive number with complements it's going to end up being a negative number let's think about why that's going to be true for thinking about substitutes let's think about Coke and Pepsi an increase in the price of Pepsi so this is going to be positive an increase in the price of Pepsi would cause people to buy less Pepsi and more Coke so an increase in the price of Pepsi would increase the demand for Coke a positive over a positive is a positive so for substitutes your cross price elasticity of demand would be positive we could think about a decrease in price suppose the price of Pepsi goes down that's a negative if the price of Pepsi goes down people are going to want to buy more Pepsi quantity demanded goes up and because they're buying more Pepsi they're going to buy less Coke so the quantity of Coke that people buy is going to go down you're gonna have a negative over a negative which together is a positive so anytime you've got substitutes your cross price elasticity of demand is going to be a positive number if we're thinking about complements it's going to be a negative number let's use this ratio to figure out complements let's think about hot dogs and hot dog buns let's suppose you have an increase in the price of buns people will want to buy less buns
and because they're buying less buns they're going to want to buy less hot dogs so an increase in the price of buns will lead to a decrease in demand for hot dogs you're gonna have a negative over a positive which is a negative so for complements it's gonna be negative or we could think about a decrease in the price of hot dog buns the price of hot dog buns goes down people are going to want to buy more buns and because they're buying more buns they're going to want to buy more hot dogs so this one would be positive you're going to have a positive over a negative the ratio then would be negative so you can see that with complements the cross price elasticity of demand is a negative number so if I were to ask you say on a test or a homework if I gave you the cross price elasticity of demand and didn't tell you whether it was complements or substitutes say I tell you that the cross price elasticity of demand is equal to negative three-fourths then you would be able to say okay well that's a negative number that means these two goods are complements and a 1% increase in the price of one good causes a 3/4 percent decrease in demand for the other good so you interpret it the same it's just the sign that matters here not whether or not it's bigger than or less than one we're not going to worry about that right now so that's cross price elasticity of demand there's also income elasticity of demand remember one of the determinants of demand is your income your demand for goods and services will change if your income changes your income elasticity of demand I'm going to abbreviate this way and it's easy to calculate it's just the percent change in quantity divided by the percent change in your income remember that we can think about a good being either a normal good or an inferior good let's start with a normal good for normal goods your income
elasticity of demand if income goes up then your demand is going to go up if income goes down your demand for that good will go down so for a normal good your income elasticity of demand is going to be greater than zero it's going to be positive and for an inferior good it's going to be negative for inferior goods income elasticity of demand is negative let's just think about how that works let's start with a normal good a normal good is a good for which an increase in your income is going to cause you to want to buy more of it so your demand would go up a positive over a positive is a positive or a normal good is a good for which a decrease in your income will cause you to buy less of the good a negative over a negative is a positive so for normal goods the income elasticity of demand is a positive number an inferior good is a good for which when your income goes up you buy less of it your demand goes down a negative over a positive is a negative number so for an inferior good your income elasticity of demand is negative or for an inferior good if your income goes down you buy more of the good so we've got a positive over a negative and again that ratio would be negative so for inferior goods your income elasticity of demand is negative for an income elasticity of demand problem I would say suppose when Bill's income is $100 per week he buys this many pizzas and when his income goes up to $200 a week he buys this many pizzas and so you would have two quantities and two income levels you calculate the percent changes keep track of if one's going up or one's going down keep track of the sign but calculation of the income elasticity of demand is not hard at all as long as you think your way through it let's talk now about the price elasticity of supply so price elasticity of demand tells us about the steepness of the demand curve price elasticity of supply
tells us about the steepness of the supply curve so we're going to think about price elasticity of supply I'm going to abbreviate it PES and as you can imagine it's very easy to calculate price elasticity of supply is just the percent change in quantity supplied divided by the percent change in price calculate your percent changes using the midpoint method it's not hard we use the exact same terminology so the price elasticity of supply tells us how much sellers respond to a change in price so if the price goes up remember sellers like high prices if the price goes up do sellers sell a lot more or a little bit more if they sell a lot more we would say that supply is elastic if they sell just a little bit more we would say supply is inelastic and if we wanted to understand what determines how elastic supply is the elasticity of supply really depends upon the technology that's used to produce the good so if we think about my elasticity of supply for teaching if the university came to me and said hey we've got some money we'd like to pay you to teach another class I'm able to do that as long as I've got the time to do it I can teach another class if they wanted me to on the other hand suppose I was a seller of lakeside homes and let's suppose yesterday I just sold my last lakeside home and then today all of a sudden the price of lakeside homes skyrockets and I think to myself boy I wish I had another one to sell today it's gonna require some time for me to have another lakeside home built I can start building them but it's going to be six months or a year before I can have them built and ready to sell so it really depends upon the production technology we can think about the time horizon so supply tends to be more elastic over longer time horizons which really is kind of the story I just told you with lakeside homes so it depends a lot on the technology let's think about some special cases
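the midpoint-method calculation just described is simple enough to sketch in a few lines of Python the $4 and $6 prices and the 100 and 140 quantities below are made-up numbers for illustration not figures from the lecture

```python
def midpoint_pct_change(a, b):
    """Percent change using the midpoint method: (b - a) / ((a + b) / 2)."""
    return (b - a) / ((a + b) / 2)

def price_elasticity_of_supply(q1, q2, p1, p2):
    """PES = %ΔQs / %ΔP, with both percent changes computed by the midpoint method."""
    return midpoint_pct_change(q1, q2) / midpoint_pct_change(p1, p2)

# Hypothetical example: price rises from $4 to $6 and quantity supplied
# rises from 100 to 140 units.
pes = price_elasticity_of_supply(100, 140, 4, 6)
print(round(pes, 2))  # 0.83 -> less than 1, so supply is inelastic here
```

because quantity supplied and price move in the same direction the result is always positive so just like in the lecture we only compare it to 1 to label supply elastic inelastic or unit elastic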
and we can think about two extreme cases of supply curves if we thought about a good for which the amount of it was fixed we could think about some antique guitar the number of this particular guitar that was made by the National guitar company in 1929 well there's going to be a certain amount of those and if we thought about what the supply curve looked like for something that was available in fixed supply this is a supply curve now that supply curve would be vertical we would say that supply is perfectly inelastic and if we were thinking about a supply curve that is horizontal that should be quantity down here our supply curve could be horizontal we would say that supply is perfectly elastic okay so here the quantity doesn't respond to a change in price it doesn't matter whether those guitars sell for a little bit or a lot there's a certain number of them here we would say that supply is perfectly elastic we'll talk about an example of this later on when we get to where we're discussing perfect competition and how those types of markets work interpretation of price elasticity of supply is exactly like price elasticity of demand remember the quantity supplied and price move in the same direction so our price elasticity of supply is always a positive number we just don't even think about the sign because it's always positive but if you calculate the price elasticity of supply and it's greater than one we would say that supply is elastic if you calculate the price elasticity of supply and it's less than 1 we would say that supply is inelastic and if we calculate the price elasticity of supply and it's exactly equal to 1 we would say that supply is unit elastic just exactly like what we did with price elasticity of demand so interpretation of price elasticity of supply is exactly the same as interpretation of price elasticity of
demand it's just that we're talking about the steepness of a supply curve typically we're not going to be thinking about these extremes we're going to be thinking about maybe a very elastic supply curve that's relatively flat or we could think about a supply curve that's relatively steep we would say that this supply curve is more inelastic than this supply curve or the other way to say that is this supply curve is more elastic than this supply curve so the elasticity of supply is just a measure of the steepness of the supply curve what we want to do is clear this off and then we'll finish up by thinking about an example here that hopefully will tie together all of what we've just done let's think about how to use this concept of elasticity to understand something that happens in a market so let's think about a real-world example of when understanding elasticity might help you have a better understanding of something that you see out there in the world one of the classic examples of how a misunderstanding of elasticity or not knowing what elasticity is might lead you to a distorted view of the world is what happens when we have a technology improvement that affects say the farming industry and this is something that I witnessed firsthand I went to graduate school up in Iowa at Iowa State University and I had come from the Ozarks down in southern Missouri and down there what it meant to be a farmer was that you had dairy cattle or you grew hay on your land down in the Ozarks there's not a lot of row crops but when you get up into north Missouri and more up into Iowa it's soybeans and corn and things like that and farming is big news up there because that's a very important industry and so when I moved up there I started hearing more about farming and it wasn't very long before there was some news story that had come out and there was a new corn
hybrid that had been developed and this new corn hybrid I don't remember exactly what it was about the new strain of corn but it was either more drought tolerant or it was resistant to some pests but what it was going to do was to allow farmers to grow more corn per acre of land that they had in production and it turned out that farmers opposed this and there were a lot of people that were not farmers that kind of looked at this and said boy you know this just doesn't make sense to have farmers oppose something that's going to allow them to produce more and so there were a lot of people that kind of laughed at the reaction farmers had to this technology improvement so let's think about what would happen if we had a technology improvement that allows farmers to grow more corn per acre let's think about this in terms of elasticity and in terms of the revenue that farmers are going to earn so let's draw a picture of the market for corn the whole point of this chapter is to make you realize that now you can't just draw any demand and supply curves that you want up here it's going to be important for you to think about the steepness of the demand curve and possibly the steepness of the supply curve so let's start by thinking about the steepness of the demand curve for corn it turns out in this case that a technology improvement is going to shift the supply curve so we're not going to have to worry too much at all about the steepness of the supply curve but the steepness of the demand curve is important so let's think about the market for corn okay so this is the corn market and let's think about whether or not demand for corn is elastic or inelastic the way to think about this is to think about how you would respond to a change in the price of corn suppose that you want to grill out tonight and you're gonna throw some corn on
the grill and so you go out to the grocery store and you're gonna get some corn you probably don't pay very much attention at all to the price of corn for most people if you want to buy corn it's such a small share of your budget that you just go out and you buy it you don't know if an ear of corn costs 20 cents or a dollar it's not really important to you so if we think about the demand for corn it's gonna be relatively inelastic it's going to be pretty steep when the price of corn goes down you don't run out and stock up on corn okay now let's put a supply curve up here let's suppose that we have a supply curve that looks like that s1 let's label this initial intersection point a and let's think about a price and a quantity let's suppose the price up here is maybe $3 per bushel of corn it doesn't matter if that's really in the neighborhood of what a bushel of corn costs we're not really worrying too much about that and let's suppose our quantity down here is a hundred if this is the market for corn that could be maybe millions of bushels we don't really worry about that right now so let's think about total revenue that farmers are going to earn at point a total revenue at point a is going to be 3 times 100 it's going to be 300 if this was millions of bushels that would be three hundred million dollars so now let's think about what happens when this new corn hybrid is developed you know that technology is a determinant of supply and if we have a technology improvement it's going to shift that supply curve to the right it's going to allow farmers to grow more corn so let's shift it to say s2 we have an increase in technology that's a supply curve s2 we get a new equilibrium right here at point B we see that that increase in supply not surprisingly is going to drive price down let's suppose it drives it down to two dollars per bushel it's going to drive quantity up but because that demand curve is relatively steep
it's not going to drive quantity up by very much let's suppose it goes up to 110 and if we think about total revenue at point B total revenue at B is equal to 2 times 110 that's going to be 220 what we see is that this increase in supply brought about by this technology improvement is going to cause the revenue of farmers to go down so looking back on that situation those people who were laughing at farmers and saying gosh these farmers can't even see this technology improvement as something that's going to benefit them well in terms of how many dollars are going to come in in terms of revenue it's going to hurt them the farmers knew exactly what was going on it was the people that weren't farmers that misunderstood why farmers were opposing this so it's important to think about the steepness of the demand curve in this case because our supply curve was shifting we didn't need to worry about whether or not it was relatively flat or relatively steep because it was going to move us to a new point on the demand curve it's only the steepness of the demand curve that matters in this particular situation ok so that should give you an idea of why it's important to understand something about elasticity when we start thinking about changes that happen in a market let's finish up by briefly summarizing the elasticities that we've thought about we started with price elasticity of demand and there we're thinking about the price of the good and the quantity demanded of the good and the way you interpret that is whether or not it's greater than equal to or less than one that tells you if demand is elastic or inelastic or unit elastic then we talked about the cross price elasticity of demand the sign of that one is important the sign of the price elasticity of demand we know is always negative so we don't worry about that sign we think about the cross price elasticity in terms of whether or not it's greater than equal to or less than zero is it positive or negative that tells us
whether or not the two goods are complements or substitutes then we talked about the income elasticity of demand and that one we also interpreted in terms of whether or not it was greater than equal to or less than zero the sign of that one matters it tells us whether or not the good is a normal good or an inferior good and then we finished up by thinking about the price elasticity of supply and we interpret that the same as what we did with the price elasticity of demand whether or not it's greater than equal to or less than one all of them are just a ratio of two percent changes and all of your percent changes are calculated by using the midpoint method the numerator of every one of these things is going to be a percent change in quantity and then the name of it tells you what's in the denominator in the price elasticity of demand it's the price of the good in the denominator what you're dividing by with cross price elasticity of demand it's the price of a related good with income elasticity of demand in the denominator it's the percent change in income and then with price elasticity of supply in the denominator it's the price okay the numerator is always a percent change in quantity so hopefully once you get your head wrapped around that you realize that it seems like a lot of information at first but these things are so consistent that it should be easy to understand and hopefully it's easy for you to work through these as you do homework or test questions so that wraps up elasticity I will see you in a future video
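as a quick check on the corn example above here's a short Python sketch using the lecture's numbers point A at $3 and 100 bushels and point B at $2 and 110 bushels to compute the price elasticity of demand with the midpoint method and the total revenue at each point

```python
def midpoint_pct_change(a, b):
    """Percent change using the midpoint method: (b - a) / ((a + b) / 2)."""
    return (b - a) / ((a + b) / 2)

# Point A and point B from the corn-market example in the lecture
p_a, q_a = 3, 100   # before the supply shift
p_b, q_b = 2, 110   # after the supply shift

# Price elasticity of demand = %ΔQd / %ΔP (midpoint method)
ped = midpoint_pct_change(q_a, q_b) / midpoint_pct_change(p_a, p_b)

# Total revenue = price * quantity at each point
tr_a, tr_b = p_a * q_a, p_b * q_b

print(round(abs(ped), 2))  # 0.24 -> less than 1, so demand for corn is inelastic
print(tr_a, tr_b)          # 300 220 -> price fell and total revenue fell with it
```

this matches the general rule from earlier in the lecture when demand is inelastic price and total revenue move in the same direction so the supply increase that pushed price down also pushed farm revenue down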
Principles of Microeconomics, Chapter 5: Elasticity, Part 1
in this video we want to talk about a concept known as elasticity and this is a way of taking that model of demand and supply that we've talked about and making it more useful so up until this point when you've thought about demand and supply we've put a demand and supply curve up there and we've looked at the intersection and we know that that intersection tells us the price that's going to end up existing in that market and the quantity of the good that's bought and sold but we haven't paid any attention to really how we draw the demand curve and the supply curve and so in this chapter we're going to be thinking about that and it's going to be something that we'll use throughout the rest of the principles of microeconomics class it's a microeconomic concept that gets used very widely and we'll talk about how it gets used but first we need to think about what we're trying to measure and then how to measure it so we're going to be thinking about a measure that we're going to call the price elasticity of demand and I'm going to abbreviate that PED before we get into that let me give you kind of an intuitive explanation for what an elasticity is an elasticity is not necessarily something that economists came up with it's just a mathematical way of measuring the strength of a relationship between two things that are related to each other so we can think about elasticities in a scientific context or a physics context or an engineering context and we can also think about it of course in an economics context but let me give you a couple of examples so let's suppose that you've got a rubber ball and you take that ball and you throw it against the wall and then you measure how far the ball bounces and rolls back once it hits the wall now the force that you use to throw that ball against the wall is going to be related to how far it rolls back all other
things equal the harder you throw that ball the more it's going to roll back towards you it's also going to depend upon whatever the ball is made out of so if you think about a baseball if you throw a baseball against the wall it's going to bounce off and it's gonna roll back but it's going to roll back less than if you had one of those little super balls and you take that and you throw it against the wall and you measure how far it rolls back for an equal amount of force the super ball is going to roll back farther and so the relationship between force and distance is going to be different depending on the material that the ball is made out of we could also think about two things that are related let's suppose that you're driving a car and you push on the brake pedal and then you think about how quickly that car starts to slow down if you've driven different cars you know that a lot of times the brakes will have a different amount of responsiveness there are some cars where if you press the brake even a little bit it starts to brake really quickly and then other vehicles where you have to press on the pedal really hard to get it to start to slow down it depends on a lot of things it depends on the type of braking system it depends on the age of the brake pads that are on the car it depends on a lot of different things but there are two things related there the amount of pressure that you use to push down on that pedal and then how quickly the car starts to stop now what we're going to be interested in here is the relationship between the price of a good and how much of it somebody wants to buy so if we think about a demand curve we started thinking about a demand curve by looking at a demand schedule and a demand schedule is just a list of prices and a list of the number of units that the person wants to buy at each price and so what we know is that as the price goes up quantity demanded goes down but what
we're interested in is how much does it go down if price goes up does quantity demanded fall off a lot or does it fall off just a little bit so if you think about a couple of goods that you purchase we could think about maybe your demand for gasoline let's suppose that when you get done watching this video you remember that you need to buy some gas and so you get in your car and you drive down to the gas station and much to your surprise you see that the price of gas is twice what you expected okay so you get down there and something's happened and at every gas station the price of gas has doubled you're probably going to be disappointed about that because now you've got to pay twice as much for a gallon of gas and you're probably going to think about how you can use a little bit less gas you might think to yourself you know what I need to take fewer trips to the store maybe when I go to the store I'm going to buy more groceries so I don't have to drive back to the store in two days but the fact of the matter is you still need to get to class and you still need to get to work and you still need to drive places and so even though you might use a little bit less gas you're not going to use a lot less gas so there you can have a big increase in the price and your quantity demanded is not going to respond very much we could think about a different good we could think about your demand for Pepsi let's suppose the price of Pepsi doubles just the price of Pepsi then think about how you're going to respond to that for a lot of people if the price of Pepsi doubles they're just simply going to switch to Coke or switch to some other soft drink and so there you can have a relatively small increase in price and people will change their behavior a lot they will respond a lot to that small increase in price so the price of gas can change a lot and people won't respond very much the price of Pepsi can change a little and people will respond a
lot there's a different amount of responsiveness for those two goods and we'll talk here in just a second about what determines how responsive you are but you probably can already see that the difference between the two examples gas and Pepsi is that with the Pepsi example there is something good to switch to with the gas example there's not anything good to switch to we'll talk about that here in a second but let me give you a definition for the price elasticity of demand and we'll talk about some terminology that we use so the price elasticity of demand measures how quantity demanded I'm gonna abbreviate it that way responds to a change in price so let's think about some terminology that economists use to describe it we're going to use the term elastic and we're going to use the term inelastic so I'm going to use the term elastic when quantity demanded responds a lot to a change in price and I'm going to use the term inelastic when quantity demanded responds a little bit to a change in price and I'll define for you here in just a second what I mean by a lot or a little so you can think about this maybe as a piece of elastic if it stretches a lot we would describe it as being elastic if it doesn't stretch very much if it just stretches a little we would describe it as being inelastic and so hopefully that terminology makes sense to you but essentially it measures how willing consumers are to move away from a good when its price goes up so if the price of gas goes up how willing are you to move away from consuming that good if the price of Pepsi goes up how willing are you to move away from consuming Pepsi we can think about that in terms of a price decrease the price elasticity of demand measures how willing consumers are to move towards a good consume more of a good as its price goes down so that's the terminology that I'm going to use let's think about the things that determine how elastic demand is and there are
going to be a handful of these things things that determine how elastic demand is let's talk about the first one and I already alluded to that it's the availability of close substitutes availability of close substitutes so if you think about the gas example there are substitutes for using gas if you're going to consume no gas then there are substitutes for that walking is a substitute for driving skateboard horse there are a whole bunch of substitutes there's just not good substitutes if you need to go any distance at all then a car is really really convenient okay so if you're not going to be driving that car you're not going to be buying gas and if you're thinking to yourself well I'll just take public transportation well the price of the gas that you're using is just in the price of that ticket that you're going to buy and so there you're still using gas if you're going to use no gas at all and move away from it there aren't very many good substitutes no close substitutes so we would describe your demand for gas as being inelastic as a matter of fact let's just put here gasoline as kind of the classic example of the good that has an inelastic demand on the other hand if we're thinking about Pepsi there's a very close substitute for Pepsi Coke is a very close substitute for a lot of people it's almost a perfect substitute so in that case if the price of one of them goes up and the other one doesn't people are very willing to move to that other good so for demand being elastic we could say Pepsi or Coke those are good examples of goods for which there are close substitutes very close substitutes so that's the first thing that determines how elastic demand is the second one is the passage of time so if we're thinking about how you would respond to say a change in the price of gas let's suppose you go out today and you find that the price of gas is doubled there are some changes that you could make right now maybe you can carpool maybe you
can just take fewer trips maybe if you're the kind of person that just likes to drive around and look at the leaves in the fall maybe you do less of that but you still need to get to school and you still need to get to work so you're not going to make huge changes right now but let's suppose the price of gas stayed high and month after month it was just high you might start thinking about making other changes so for example if you've got a vehicle that doesn't get great gas mileage you might think about getting rid of that vehicle and buying something that gets better gas mileage or you might think about getting rid of your car and buying a motorcycle and being able to consume less gas that way so over longer time horizons your demand tends to be more elastic you tend to respond more when you have more time to respond over short time horizons you tend to not be able to respond very much at all so that's something that determines how elastic your demand is another thing has to do with whether or not the good is a necessity versus a luxury so a necessary good I would describe as things like food and water you could describe other goods as being necessary and for those they're necessary because there aren't really good substitutes for them so this one's very similar to the first one for luxury goods if we're thinking about an example of a luxury good that might be something like going to see a movie so if you think about having some extra income and maybe this weekend comes up and you think about things that you could do one of them might be to go see a movie but if all of a sudden the price of going to see a movie went up there are lots of other things that you can spend your weekend doing so you and your friends might decide not to go to a movie maybe you go to a concert or maybe you do one of a hundred other things that you could be doing on the weekend so for luxury goods
those tend to be goods for which there are lots of substitutes and so demand tends to be more elastic for luxuries less elastic for necessity goods so that's something that determines how elastic your demand is we can think about the definition of the market so if we were thinking about how we define a market we could define a market say as the market for vegetables and if you think about what substitutes there are for vegetables there are some there are meats and grains and other things that you could be eating or we could think about a much more narrow definition of the market the market for carrots if we think about how elastic your demand is going to be for that broad definition for vegetables versus your demand for a particular type of vegetable like carrots there are many more substitutes for carrots than there are for that broad category of vegetables for the broad category everything that's not a vegetable is going to be a substitute for vegetables if you're trying to decide what to eat if you're thinking about a narrow definition like carrots there's going to be everything that's not a vegetable plus every vegetable that's not a carrot so there are more substitutes for narrow definitions of the market than there are for broad definitions of the market so for broad definitions because there are fewer substitutes your demand would tend to be less elastic for narrow definitions your demand would tend to be more elastic so how you define the market that determines how elastic your demand is and then the last one that we're going to talk about is going to be the share of the good in the consumer's budget share of the good in the consumer's budget so if you think about all of the different goods that you consume there are some goods that consume a relatively large share of your budget and some goods that consume a relatively small share of your budget I was in a principles of microeconomics class early on in my
career and we were talking about kind of the demand supply model and how you react to a price and we were talking about just why the demand curve slopes downward and what I had told the class was that there's this inverse relationship between price and quantity demanded all other things equal if the price goes up you want to buy less and if the price goes down you want to buy more and this student raised his hand in the back of the class and he said I don't think that's true and I said well give me an example of you know something where you don't think you'd pay attention to the price and what he said was well if I were going to go to the grocery store and say buy some gum I don't pay any attention to the price if the price goes up or down I don't pay any attention to that if I want some gum I just buy some gum and he's got a good point but the point is he's still paying attention to the price it's just that the price is so low he doesn't really care if it goes up or down so I asked him I said well what would happen if you were to take that pack of gum and go to the register and all of a sudden it said $10 and he said I wouldn't buy it for $10 and that demonstrates that you're paying attention to price but you have pretty good information that it's not going to be $10 it's such a small share of your budget that you don't really pay that much attention to it so the price can go up some or it could go down some and you're not going to buy a lot more or a lot less on the other hand if we're thinking about goods that have a price that would result in a larger share of your budget then you're gonna pay more attention so gasoline tends to consume a much larger share of our budget and we tend to pay more attention to that so all of these are things that help us understand why some goods have an elastic demand and some goods have an inelastic demand and again we'll talk about how to
know whether or not demand is elastic or inelastic what we need to do is clear this off and then we're going to talk about how to calculate the price elasticity of demand let's talk about how to calculate price elasticity of demand it's gonna be a very simple mathematical calculation if I'm in a face-to-face class and I tell the class that this is going to be a simple mathematical calculation a lot of people will start to get a little bit nervous and say oh no I'm not really that good at math and I don't know this just scares me and let me just say this is going to be very simple if you can average two numbers together that's really in my opinion the hardest part of this you'll have to remember a couple of things that are very easy to remember but calculating an elasticity is not hard at all let me give you the definition price elasticity of demand it's going to be a number that you're going to calculate and in the numerator up here we're going to have the percent change in quantity demanded percent change in quantity demanded in the bottom in the denominator we're going to have the percent change in the price now I'm going to write this a little bit more concisely I'm going to use some notation here price elasticity of demand I'm just gonna write percent I'm gonna use this Greek Delta to indicate change in I'm gonna have the percent change in quantity demanded divided by the percent change in P and that's really it you're just going to be taking one number and you're going to be dividing it by another number let's think about a very simple example of how to do this so here's an example let's suppose that price increases by say 10 percent and quantity demanded falls by 20 percent price increases by 10 percent and quantity demanded falls by 20 percent we know there's this inverse relationship so if price goes up quantity demanded is going to go down but let's calculate the price elasticity of demand so it's just as simple as plugging those two numbers in there
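the definition just given — percent change in quantity demanded divided by percent change in price — can be sketched in a couple of lines of Python (an illustrative sketch, not part of the lecture; the function name is my own)

```python
def price_elasticity(pct_change_qd, pct_change_p):
    # price elasticity of demand: percent change in quantity demanded
    # divided by percent change in price
    return pct_change_qd / pct_change_p

# the lecture's example: price increases by 10 percent,
# quantity demanded falls by 20 percent
e = price_elasticity(-20, 10)
print(e)       # -2.0
print(abs(e))  # 2.0, the value economists report, ignoring the sign
```

as the lecture notes, the sign is always negative because of the inverse relationship between price and quantity demanded, so only the absolute value is reported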
our percent change in quantity is 20 percent you can put that in as point two or you can put it in as 20 it doesn't matter our percent change in price is 10 percent to be technically correct since quantity is falling by 20 percent that should be a negative 20 so let's go ahead and do that and then you would reduce that negative 20 over 10 is just negative 2 that's your price elasticity of demand so if you've already got the percent change in quantity and the percent change in price then it's very simple to calculate these things it's just one number divided by another number and we'll talk here in just a second about what exactly that number means how to interpret that number but first let's talk about how to calculate this if you're not given these percent changes if you have to yourself calculate the percent change let's think about doing that before that let's talk about this negative sign so I said to be technically correct since quantity demanded is falling by 20% we need to make that a negative 20 I could have given you this problem price decreases by 10% and quantity increases by 20% in that case the price decrease would be 10% and our denominator would be negative in either case the number that you get out is going to be negative so what economists typically do is we just kind of ignore the negative sign we know that there's this inverse relationship between price and quantity demanded price goes up quantity demanded goes down price goes down quantity demanded goes up so one or the other is always going to be negative we know that price elasticity of demand is always negative and so we just ignore it if you were to tell me that in this example the price elasticity of demand was just two that would be a right answer because I would know that we're ignoring that negative sign let's think about how to calculate percent change and so if you think about calculating a percent change let me give you a couple of examples let's suppose that on
you take two tests and on test one let's suppose you get an 80 and on test two you get let's say a hundred and suppose I ask you to calculate the percent change in your test score let's review how to calculate a percent change so the percent change if you're calculating the percent change in anything you're going to take one number and divide it by another so we're going to take how much it changed the change in test score let's write this out first we're going to take in the numerator we're going to have the change in test score and we're going to divide that by where it started the first test score and then you would multiply that by a hundred to turn it into a percent so let's go ahead and do that I'm gonna ignore the hundred you'll see why actually let's just go ahead and do it so our percent change in test score here the test score changed by 20 our first test score was 80 and then we're going to multiply that by a hundred that gives us a 25% increase in your test score so your test score changed by 20 and 20 is one quarter of 80 that's a 25% increase let's change this now let's keep the two test scores but let's switch the order of them let's suppose on test 1 you've got a 100 and on test 2 you got an 80 and then let's calculate our percent change so the percent change this time your test score went down so it's going to be a percent decrease it's going to be negative but let's not worry about that let's just think about the percent change it changed by 20 now your initial test score was 100 we're going to multiply by 100 that gives us a 20% decrease in your test score so this creates a challenge for us it creates a challenge because we're using the same two test scores but it depends on whether we start with the 80 or we start with the 100 if we start with the 80 it's a 25 percent change in your test score if we start with the 100 it's a 20
percent change in your test score we want to eliminate that problem and there's a nice simple way to eliminate it and that is that we're going to use what we're going to call the midpoint method instead of dividing by the first test score we can divide by the average of the two test scores okay so if we think about calculating this percent change with the midpoint method then here's what we're going to get if we use the first two test scores we're going to get our percent change using the midpoint method it changed by 20 we're going to divide by the average of the two so we're going to divide by the midpoint of these two the midpoint and the average are the same thing and the midpoint is 90 and then we would multiply by a hundred so if we use these two and we use the midpoint method we're going to get this if we use these two our percent change is going to be it changed by twenty the average of the two of these is again 90 we multiply by a hundred notice that now we get the exact same number and we've eliminated this problem of having two different percent changes depending upon the starting point so this is the way that we're going to calculate our percent changes we're going to use this midpoint method so you'll see in the textbook the midpoint method of calculating price elasticity of demand this is what it refers to we're going to calculate our percent change in price and our percent change in quantity using the midpoint method that way it doesn't matter which price we start with and that'll make sense I think once you see me calculate a couple of these things so I want to clear this off and then we'll do some examples of this so let's do a couple of examples of calculating the price elasticity of demand using this midpoint method so let me put my definition back up here price elasticity of demand is equal to the percent change in quantity demanded divided by the percent change in price using that midpoint method so
here's what the percent change in quantity is going to look like it's going to have in the numerator the change in quantity so I'm just going to put change in quantity up there and then in the denominator it's going to have the average of the two quantities so I'm just going to put Q average right here so that's how you calculate your percent change in quantity you're going to have two quantities you're going to look at how much the quantity changed and then you're going to divide by the average of the two to calculate your percent change in price we're going to have the change in price up here in the numerator and then in the denominator we're going to have the average of the two prices and that's it you're going to calculate your percent change in quantity that's going to give you a number you're going to calculate your percent change in price that's going to give you a number and then you're going to divide one number by the other to get the price elasticity of demand so let me give you an example let's think about a guy named Bill let's suppose when the price of pizzas is equal to $10 Bill eats 50 per year and let's suppose if the price falls to $8 per pizza Bill eats 70 and let's calculate our price elasticity of demand so we've got two prices and we've got two quantities here are the two prices ten and eight here are the two quantities we don't care which is the first one because we're going to use that midpoint method so let's calculate first our percent change in quantity the percent change in quantity quantity here changes from 50 to 70 or from 70 to 50 it doesn't matter remember we're not worried about the sign of this thing so you don't even have to worry about that quantity changes by 20 and the average of these two fifty plus seventy is 120 divided by two is 60 or you can just think about the number right in the middle the
midpoint it's going to be 60 I'm going to reduce that to one-third there's our percent change in quantity let's do the percent change in price so our price changed by two and the average price is nine if you add those together you get 18 divided by two you get nine we can't reduce that anymore and then let's calculate the price elasticity of demand so the price elasticity of demand is the percent change in quantity which is 1/3 divided by the percent change in price which is two ninths now you have to remember that rule of how to reduce a fraction divided by a fraction a ratio of two ratios remember that the rule is you just simply write the top one one-third and you multiply by the reciprocal of the bottom one which is 9 over two and we can kind of do some reducing here that gives us a price elasticity of demand of three halves and we'll talk about exactly what that number means remember we calculated our price elasticity of demand earlier and it was two this one is one and a half three halves so you can see that calculation of the elasticity is not hard the hardest parts I think are a couple of things that might be something you haven't done in a little while you need to remember how to average two numbers that's easy and then you need to remember this trick of how to reduce a fraction that's got a fraction in the top and a fraction in the bottom but once you remember how to do that trick it's very simple let's do another example just to illustrate so here's another example when price is equal to $2 I'm not even going to give you the story with Bill it doesn't matter we just need the two prices and two quantities when price is equal to two let's suppose quantity I'm going to call it q1 is equal to 13 if price rises to 10 so if price is 10 then our quantity is going to be lower let's suppose quantity is equal to 3 oops I wrote q1 is equal to 3 that shouldn't be q1 that should be
q2 there we go so if price is equal to 2 quantity demanded is equal to 13 and if price is equal to 10 quantity demanded is equal to 3 and I've given these things subscripts 1 & 2 but you don't have to it doesn't matter which one you treat as the first price remember we're using the midpoint method so all you need is a price and its quantity you need two of those pairs of price and quantity so let's start by thinking about our percent change in quantity the percent change in quantity quantity here changes by 10 and the average of those two is going to be 8 and I can reduce that fraction by taking a 2 out of the top and a 2 out of the bottom that gives me 5 over 4 there's my percent change in quantity now I'm going to do my percent change in price price here changed by 8 and my average there is 6 I can again take a 2 out of the numerator and the denominator that gives me 4 over 3 so now I've got my percent change in quantity right here I've got my percent change in price right there I can calculate the price elasticity of demand price elasticity of demand is equal to my percent change in quantity 5 over 4 divided by my percent change in price which is four over three so I'm going to write the numerator 5/4 multiplied by the reciprocal 3/4 that's 15 over 16 so remember when you're multiplying two fractions you don't need any kind of common denominator or anything you just multiply the two tops and multiply the two bottoms so now my price elasticity of demand is 15 over 16 a little less than 1 so you can see that the calculation of the price elasticity of demand is not hard you just need to first calculate your percent change in quantity calculate your percent change in price each one of those is just how much it changed divided by the average how much it changed divided by the average and then divide one by the other any problem that I give
you say on a test or any quiz that you might take I would give you numbers where you just leave them as fractions and it's real simple to do this part of it I will say though that in my online and face-to-face classes we use an online homework program and most of the problems on the online homework program I don't create those problems I choose from a set of problems and I can tell you that on the online homework they don't give you numbers that are nice and simple like these a lot of times these will be decimals so they might have this as point three three three and that's fine you need a calculator there but any problem that I'm gonna give you I'm not gonna require that you need a calculator I would say you know if you're comfortable converting these to a decimal that's fine but I don't recommend that you do that and in fact in my face-to-face classes I require that you just do the fraction stuff here because it's nice and simple but calculation of the elasticity is easy one of the things that you're gonna see as you read the textbook is that you're gonna see a formula the textbook gives you a formula for calculation of elasticity and what I've found when I teach this in my face-to-face classes is if I get to this point and I ask the class does this seem simple to you everybody will say yes it seems simple you have to remember a couple of things you've got to remember this definition you've got to remember how to calculate your percent change in quantity and your percent change in price but once you remember those the actual calculations are not hard when I used to teach this actually when I learned it it wasn't taught to me quite like this it was taught to me a little bit differently and then when I started teaching I taught it that way to students and my experience was that a significant number of my students would have real problems calculating elasticity I would be grading the tests and it would just be
one student after another about half the class would miss parts of this stuff or maybe miss the whole thing and I always wondered it always bothered me because I thought to myself you know this is not hard these calculations are not difficult I don't know why it is that the students have so much trouble doing this and then one semester I thought you know what maybe it's the way I'm teaching it maybe what I need to do is just kind of change what I do and so I thought you know I'm gonna try this semester I'm gonna try it this way and just see what happens and so I did it and I was a little bit nervous about the outcome I was grading the test where the students were doing the elasticity problem and all the students got most of the elasticity questions right I was used to counting all kinds of questions wrong and taking points off and I was grading test after test and everybody was getting the elasticity questions right and I remember thinking oh gosh I think somebody's cheated here I think somebody must have broken into my office and gotten the test I've never had most of the students get it right I think all but maybe three or four students got the problems right and then I thought well no I mean that is a possibility but I think that's a very slim possibility maybe it is that the way I did it this time just made a big difference so I went back to the class when I gave the test back the next day and I said hey how did you guys do on the elasticity and they said well what do you mean it's not even hard and I said well yeah that's right but why wasn't it hard and they said well I don't know it just wasn't so I think it has to do with the way that you teach this that can make it easier or hard let me show you how it was taught to me so the way I used to teach it I would go into a classroom I would talk about what an elasticity is we'd talk about the rubber ball thing and I would
talk about you know the terminology elastic and inelastic and then we'd talk about what determines whether or not demand is elastic or inelastic availability of close substitutes and then we'd say okay let's think about how to calculate it and then I would put a formula up on the board and here's what that formula would look like here's the formula for calculating the price elasticity of demand price elasticity of demand looks like this it's Q1 minus Q2 divided by the quantity Q1 plus Q2 over 2 all of that multiplied by 100 and then divided by P1 minus P2 divided by the quantity P1 plus P2 over 2 multiplied by a hundred there's the formula for elasticity and now I'd give an example like this and then we'd start plugging numbers in and we'd get the price elasticity of demand and students would miss it they would struggle with it and I think what it is is that when you start with this formula it makes it feel more complicated than it needs to be when you start with a formula like this I think it instantly creates a block in students' minds and there are a lot of students that are convinced they're not good at math and those students are going to say I knew it I knew that this was going to be something that was hard and students used to ask me if they were gonna get to have a formula sheet and I would always think to myself why would you need a formula sheet for this because I had done it so long that it made sense to me and I would say no you're not gonna get a formula sheet this is easy but students didn't believe it if you think about what's going on here there are lots of ways to make this simple what I found is that we don't need to worry about the hundreds there's one in the numerator and one in the denominator they're gonna cancel out we just don't even need to think about them this term right up here this is just the change in quantity right if you write it out q1 minus q2 it somehow feels more complicated than if you just call it the
change in quantity and this term right here that's just the average of the two quantities writing it out like that makes it again feel more intimidating for some students but it's just the average quantity let's just call it that you already know how to calculate an average so we don't need to complicate it with the formula for an average this is just the change in price and this is just the average price so what I started doing was not writing it out like this at least not until I had already convinced students that it was easy to do and then just having them remember this stuff if you just remember that you're gonna calculate your percent change in quantity by looking at how much it changes and then dividing by the average that's easier than trying to remember this if you're trying to plug things into this formula that's just hard it's just too many steps and it's too easy to make a mistake if you'll just do it like this it's way easier so in the textbook you're gonna see this and if you were to look at videos on YouTube and find other people demonstrating how to calculate elasticity they're going to show you this formula and there's nothing wrong with that if you're the kind of person that likes formulas this is great but if you'll just remember it like that for a lot of students that makes it a lot easier so calculation of the elasticity is not hard it just boils down to remembering this as the most basic thing and then just calculating the percent change in quantity and the percent change in price using the midpoint method which has to do with the average in the denominator so I need to clear this off and then we'll talk about how to interpret these numbers what does it mean if price elasticity of demand is three halves or fifteen sixteenths or two so let's clear this off and we'll talk about that so let's think about how to interpret a price elasticity of demand it's actually very straightforward if you calculate the price elasticity of
demand and when you calculate it the number that you get is greater than one so maybe it's three halves or if you're working with decimals it could be 2 or 2.5 but if it's greater than 1 then we say that demand is elastic demand for the good is elastic if you calculate the price elasticity of demand and the number that you get is less than 1 and we're going to be thinking about these in terms of absolute values remember it's going to be negative but we're not worried about the negative sign here if you calculate it and that elasticity of demand is less than 1 we would say that demand is inelastic and if you calculate the price elasticity of demand and it's exactly equal to 1 we would say that demand is unit elastic so let's think about what the number itself means let's start by thinking about how to understand these and why these definitions make sense let's just take this first one if the price elasticity of demand is greater than one remember that the price elasticity of demand is the percent change in quantity divided by the percent change in price that's the definition of the price elasticity of demand so if that thing is greater than one that's telling you that the percent change in quantity is greater than the percent change in price so that's telling us that quantity is changing in percentage terms by more than the price is changing in percentage terms so people are responding a lot to a change in that price so we would say that demand is elastic if the price elasticity of demand when you calculate it is less than one then that's telling you that the percent change in quantity divided by the percent change in price is less than one which tells you that the percent change in quantity is less than the percent change in price so here people aren't responding very much demand is inelastic price could change
by say 10 percent the quantity is only changing by less than 10 percent maybe 5 percent or 4 percent or 8 percent and then obviously if price elasticity of demand is equal to 1 that's telling us that the percent change in quantity is exactly equal to the percent change in price so using the definition of this we can tell that what this number is telling us is whether or not the numerator is bigger than or less than the denominator whether the percent change in quantity is bigger than less than or equal to the percent change in price so let's think about how to interpret the actual number let's suppose that you calculate the price elasticity of demand and it's equal to fifteen sixteenths I think that was one of them that we just calculated let's think about how you would describe that if you went home and you showed your mom these notes on price elasticity of demand and how to calculate it and she was impressed and she said to you okay but what does the number mean I understand that the size of the number whether or not it's bigger than or less than one tells me whether demand is elastic or inelastic but what does that actually mean here's how you would reply you would say that a one percent change in price causes a fifteen sixteenths percent change in quantity demanded that's exactly how you would describe it we always put it in terms of a one percent change in price just by convention because it's easier to understand so if we had a price elasticity of demand you calculate it and it turns out to be two the way you would describe that is a 1% change in price causes a two percent change in quantity demanded it's easy to understand in this case because your price elasticity of demand is two that's two over one here's the percent change in price and up here's the percent change in quantity so a one percent change in price causes a two percent change in quantity
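the classification rule described here — compare the absolute value of the elasticity to one — can be sketched as a small Python helper (an illustration of the lecture's definitions; the function name `classify_demand` is my own)

```python
def classify_demand(elasticity):
    # classify demand from the absolute value of the price elasticity,
    # since the sign is ignored by convention
    e = abs(elasticity)
    if e > 1:
        return "elastic"       # quantity changes by a larger percent than price
    elif e < 1:
        return "inelastic"     # quantity changes by a smaller percent than price
    else:
        return "unit elastic"  # quantity and price change by the same percent

print(classify_demand(-2))       # the gum-style example: elastic
print(classify_demand(15 / 16))  # the second worked example: inelastic
print(classify_demand(1.0))      # unit elastic
```

note that passing the raw negative value works too, because the helper takes the absolute value first, mirroring the convention of ignoring the negative sign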
here we could say it differently: we could say that a 16 percent change in price causes a 15 percent change in quantity, but that's hard to understand, so what we've chosen to do is always put it in terms of a one percent change in price. so let's do one more: if we had a price elasticity of demand equal to four-thirds, the way that you would say that in terms of a sentence is, a 1% change in price causes a 4/3 percent change in quantity demanded. so that's how we interpret the actual number. what we want to do now is start thinking about what this means in terms of the shape of the demand curve, and then we're going to think about some other elasticities: the price elasticity of supply and some other demand elasticities
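The classification rules above are mechanical enough to check in code. Here's a minimal Python sketch (the function names are my own, not from the lecture) that computes the elasticity from the two percent changes, takes the absolute value as described, and labels demand elastic, inelastic, or unit elastic:

```python
def price_elasticity_of_demand(pct_change_quantity, pct_change_price):
    """Percent change in quantity divided by percent change in price,
    taken in absolute value (we ignore the negative sign)."""
    return abs(pct_change_quantity / pct_change_price)

def classify_demand(elasticity):
    """Elastic if > 1, inelastic if < 1, unit elastic if exactly 1."""
    if elasticity > 1:
        return "elastic"
    if elasticity < 1:
        return "inelastic"
    return "unit elastic"

# A 16% price increase paired with a 15% drop in quantity
# gives the fifteen-sixteenths from the example:
e = price_elasticity_of_demand(-15, 16)
print(e, classify_demand(e))   # 0.9375 inelastic
```

The elasticities of 2 and 4/3 mentioned above both come out "elastic" under the same rule.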
Principles of Microeconomics, Chapter 9: International Trade
in this video we're going to talk about international trade now international trade is a huge topic I teach an upper level international trade class every other semester and even in that class devoting an entire class to the topic there are lots and lots of things I don't have a chance to cover so we're just going to kind of scratch the surface of of what there is to know about international trade in this video but this will give you a good idea of kind of how basic trade models work we'll use it to think about some trade policies we'll think about the effect of a tariff and we'll think about the effect of a quota there are other trade policies so let's start by just kind of thinking back to if you've watched my videos or if you're in a economics class you probably already know that one of the um basic principles of economics is that trade can make everyone better off that doesn't say that trade always makes everyone better off what we're going to see is that there there's always going to be some people who gain and some people who lose but if we're taking a an overall view of the economy and we're thinking about the economy of a whole as a whole what we see is that the economic Pie gets bigger so trade makes people better off there and we call that gains from trade I've got a video called gains from trade if you're not sure about that watch that video and there we just talk about two people or two countries trading with each other we don't talk about any kind of trade restrictions or anything like that so let's think about a situation where we've got a market and there's no trade taking place and we've got a special name for that we call that autarkey let's just say I'll write it down here autarkey autarky is the situation where no trade is taking place a country that's not trading with other countries so let's think about a market we've got our demand curve that represents um the buyers in this particular Market let's say this is the market for uh say clothing 
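Autarky, the no-trade benchmark just defined, can be pinned down with a quick numeric sketch. The linear demand and supply curves below are made-up parameters of my own, not the lecture's; the function finds the market-clearing price and quantity and the two surplus triangles (consumer surplus above the price, producer surplus below it):

```python
def autarky_equilibrium(a, b, c, d):
    """No-trade benchmark for a linear market: demand P = a - b*Q,
    supply P = c + d*Q.  Returns (price, quantity, consumer surplus,
    producer surplus)."""
    q_star = (a - c) / (b + d)          # where quantity demanded = supplied
    p_star = a - b * q_star
    cs = 0.5 * (a - p_star) * q_star    # triangle under demand, above price
    ps = 0.5 * (p_star - c) * q_star    # triangle under price, above supply
    return p_star, q_star, cs, ps

# e.g. demand P = 100 - Q and supply P = 20 + Q:
print(autarky_equilibrium(100, 1, 20, 1))   # (60.0, 40.0, 800.0, 800.0)
```

With these numbers the price gets driven to 60, quantity to 40, and each surplus triangle is 800, which is the P star / Q star picture described next.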
we'll say the domestic market for clothing our country's market for clothing um we know that our price is going to get driven to P star our quantity in the market is going to get driven to Q Star right up here is going to be consumer surplus right down here is going to be producer Surplus so that's what a market looks like you've seen that before now let's suppose that textiles or clothing is available on the World Market as well now at this point we're not trading with the rest of the world a lot of times the kind of the standard in an international economics class is to call the Market that you're thinking about home and the other markets foreign so I may say home I'll try not to use that terminology because the the book that I'm basing these discussions on doesn't use that terminology but if you hear me say home then I'm just talking about the domestic Market um so let's suppose that clothing is available on the world market well that means there's going to be a price of of textiles of clothing on the World Market and that price out there could be higher than the price here in our country or it could be lower than the price here in our country now if the price is higher than it is in our country let's start with the price being lower let's suppose the price is lower out there in the world than it is in our country the world price is below P star then we would be buying we're going to want to import those goods into our country because they're available cheaper Elsewhere on the other hand if the price is above P star then we're going to want to export clothing because we can produce it at a price that's cheaper than what it can sell for out there in the World Market will be exporting it okay and this really is just an indication of comparative advantage right it's it's the idea that if you can do something more efficiently then somebody else then you have a comparative advantage in that your opportunity cost is lower and you tend to specialize in that so that's 
all we're talking about: if the price is lower in the rest of the world, then the rest of the world has the comparative advantage in it. let's start by supposing that the price is higher than our autarky price, so I'm going to call this our autarky price. I will say this: the textbook I'm basing the discussion on does not use the word autarky. that's fine, but you need to know what it is; autarky means no trade, so when I say autarky price I'm talking about the price that would exist in the absence of trade. so let's suppose that the world price is higher than P star; then we're going to want to export. let's draw a picture right here and we'll think about what that's going to look like. so put my demand curve, supply. I'm going to put a dotted line over here to indicate what our autarky price would be, but let's suppose that the world price is right up here, PW; put a dotted line right over here. what's going to happen is, when we open up trade, when we open up this economy to the rest of the world to engage in trade, then the price in our economy is going to rise to PW, and the reason is that once we open up trade, no seller in our economy is going to sell it for less than what they could sell it for on the World Market. so price is going to go to PW. now let's think about what's going to happen at PW, that world price right up here. if we go over to the demand curve, we will see how much buyers in the domestic Market want to buy, and it's going to be this amount; I'm going to write QD up there, that's quantity demanded domestically. but now domestic sellers are going to want to produce this quantity, that's quantity supplied. now if we didn't have trade, if this was just a market in autarky, then if the price was PW, what would happen is there's a surplus, and that Surplus would create downward pressure on price, and price would fall until we get to right there, right? anytime there's a surplus and sellers can't sell all they want to sell because
buyers don't want to buy as much as sellers want to sell there's going to be downward pressure on price but that's not what's happened because now there's another player in the game and the other player in the game is the rest of the world the other countries that are out there so sellers are going to be able to sell this excess Supply on the World Market and so this amount the amount that domestic buyers do not buy that's going to be exports that quantity of clothing will be exported to the rest of the world so now let's I'm going to put some areas up here I'm going to call this a let's call this B it's called that D and that c right down there so now let's uh think about what consumer and producer Surplus would be um before trade if we had autarky the autarky situation so before trade or in autarky let's talk about what consumer and producer Surplus will look like if you are not familiar with what consumer and producer Surplus look like you should go look at my videos on consumer and producer Surplus so before trade we ignore this stuff before trade The Price is Right Here at our autarky price and consumer surplus is the area under the demand curve and above the price it would be a plus b so consumer surplus equals a plus b producer Surplus in autarky would be the area under the price and above the supply curve it would be right there in this picture it would be area C so producer Surplus is equal to C total Surplus would be all of the area under the demand curve and above the supply curve it would be a plus b plus c now let's think about with trade so if we open up trade with trade now the price Rises to the world price we have excess supply of the good in our country and that excess Supply gets exported to the rest of the world and so there's our exports let's think about what consumer surplus is now so consumer surplus now when our price rises from here to the world price our consumer surplus here domestically Falls to just area a so consumers notice that when 
we open up trade consumers lose B I'm going to say consumers lose area B consumers are in this situation are not going to be happy about this they're worse off let's think about producer Surplus so producer Surplus now that price has risen producer Surplus is the area under the price and above the supply curve and so notice now it's an area a much bigger area all of this area under this dotted line and above the supply curve is going to be producer Surplus it's going to be B plus C plus d all right B plus C plus d and notice that this area B that used to be part of consumer surplus it ends up being part of producer Surplus so the effect of trade opening up trade in this country is to transfer some well-being away from the consumers to the producers in this country The Producers clearly are going to be better off because price goes up so B is a transfer and that transfer is from consumers to producers and then total Surplus total Surplus ends up now being a plus b plus C plus d right so total Surplus a plus b plus C plus d and you can get that by looking at the picture or you can get that just by adding up consumer surplus and producer Surplus it's going to be a and b and c and d so notice that area d in terms of total Surplus total Surplus before trade was just this triangle right here after trade there's the addition of this area D so the gain in total Surplus gain of d so what we get is a situation where producers when when we open up trade and the world price is higher than our autarky price then we're going to be an exporter and the impact of that is that it's going to help sellers producers will be made better off consumers in this country will be made worse off clothing prices are going to rise but in terms of the well-being of of the economy of the well-being of everybody total Surplus goes up gains from trade right let's think about what would happen if the world price is lower than our autarkee price so let's draw another picture here and let's put our 
world price same thing I'm going to put a little dotted line right here that's where our our autarkee price would be but let's suppose when we open up trade the world price is down here PW world price um so let's think about what's going to happen when we open up trade and the price Falls to down here because once we open up trade no buyer in this economy is going to buy at that price if they can buy on the World Market at this price so sellers are not going to be able to do anything about this price falling okay so what we see is that at this world price here's the quantity of units that sellers want to sell that's quantity supplied but at this world price here's the number of units that buyers want to buy there's quantity demanded and so we have excess demand in this economy now if we weren't trading if we were back in the autarkee situation and the price was PW then this excess demand would simply cause price to rise here in this country right buyers want to buy more than sellers want to sell there's a shortage and that shortage would create upward pressure on price until there was no longer a shortage and that would mean that price would go up here to the autarkee price but now there's another player in the game and that other player is the rest of the world and so this excess demand even though domestic sellers are only going to want to sell this amount domestic buyers want to buy this amount and they can buy this excess part on the World Market so this will end up being Imports so here's the quantity that's produced in the economy domestically and then this amount is imported and so total quantity demanded ends up being that distance right there okay so let's do the same thing I'm going to call this a and b let's call that c and then that d so now let's talk about before trade so the autarkee situation let's start by thinking about consumer surplus so consumer surplus with no trade would be the area under the demand curve and above the price there's our 
autarky price, so it would be area a. producer Surplus would be the area under the autarky price and above the supply curve, so it would be B plus c, and total Surplus would be all of the area under the demand curve and above the supply curve, so it would be a plus b plus c. once we open up trade, now we get excess demand for the good, clothing, textiles, and that excess demand is the amount that gets imported into the economy. so with trade, let's start by thinking about consumer surplus. consumer surplus is going to be the area under the demand curve and above the price, but now our price is PW, so consumer surplus is going to be a plus b plus d. and you can see that consumers in this country will be happy about that: it drives the price of the good down, so the buyers are going to get this gain of B plus d, that's additional consumer surplus. so consumers gain B plus d. in this situation the consumers are going to like this, trade will benefit the consumers. producers are not going to like this. producer Surplus, when the price falls down here to PW, is the area under the price and above the supply curve, so it falls to area C. so producers lose B. we can see that opening up trade transfers some producer Surplus to Consumers. and then total Surplus is going to be the sum of consumer surplus plus producer Surplus; it's going to be a plus b plus C plus d. just like we saw over there, there's this gain of d. so once trade is opened up, our country is either going to become an exporter or an importer, depending upon whether our autarky price is higher than the world price or lower than the world price, but regardless of whether we become an exporter or an importer, there's going to be this gain from trade: we're going to gain d. but we can see there are winners
and losers in this case our producers win consumers lose in this case our consumers win and our producers lose so trade can make everyone better off there's this gain what we see is that the gain to the to the winners is bigger than the loss to the losers so what we could do is we could take some of this gain we would be able to compensate the people who are made worse off and still have some economic pie that we could share so the gains outweigh the losses it's definitely from society's Viewpoint it is a good thing but now it should be easy to start to understand why trade policy can be so contentious the politics around trade policy because there are clearly going to be some people that win and some people that lose and the losers are going to complain and the losers are going to hire lobbyists and those lobbyists are going to go to politicians and they're going to argue against either opening up trade or they may argue for some trade restrictions we'll talk about those here in a second so this is why there is a strong debate about International Trade even though we can say look International trade's a good thing it causes the economic pie to get bigger the problem is it causes some people's slice to get smaller and other people's slices to get bigger we could move things around to compensate those who are made worse off so we can still say from from an overall Viewpoint it's good we should have it we don't want to prevent free trade if we can get away with it but that's why some people argue against it what I need to do do now is I'm going to clear this off and then we're going to talk about the effects of a tariff all right let's think about the effects of a tariff so a tariff I will say before we talk about a tariff that I have a terrible time trying to remember whether whether the word tariff has one r or two one F or two I've been teaching International economics for a long long time and I can't tell you the number of times I've misspelled the word tariff so 
if I do it here, oh well, just be prepared for it. so a tariff is simply a tax on Imports. now this is different from what we see in other videos where I've talked about the effect of a tax; this is not a tax on every unit of the good. in the country that imposes the Tariff, the tax only applies to the portion of the good, the units of the good, that are imported into the country. so keep that in mind: it is a tax on Imports. so let's suppose that we have an importing country. obviously, if the country is exporting the good and we impose a tariff, the Tariff would have no impact, because there are no Imports of the good into the country, so we have to have an importing country before a tariff would have any impact. so let's draw a picture here of what this is going to look like. we've got our demand curve, our supply curve. here would be our autarky price; I'm not going to worry about identifying that, because it's not going to apply if we've got International Trade going on. let's suppose we have the world price right here, PW. if we didn't have a tariff, then if we go over from that world price we would hit our supply curve here, that's the amount of the good that's produced domestically, and then quantity demanded: this full distance out to the demand curve would represent the total amount that buyers want to buy, quantity demanded. this amount is supplied by domestic Sellers, and then this amount would be the Imports of the good into the country. okay, now what happens is, if this country imposes a tariff, then it's going to drive the domestic price up above the world price, so every unit that's imported has this tax. I'm going to call this PW plus the Tariff t. so now let's think about what happens in terms of producers. producers are going to like this: it drives the price up, it makes units of the good more expensive, and so quantity supplied goes up, domestic sellers are going to produce and sell more. it makes quantity demanded go
down so we can go out here now here's quantity demanded and so we can see that here is the amount that's imported at the world price with no tariff once we impose the Tariff we get a smaller quantity of imports um so I'm going to put some dotted lines down here that would be the amount of imports with no tariff once the price gets once the Tariff is imposed and it drives the price up our Imports are going to fall to that amount so this distance right there now let's put some letters on here to in indicate uh some areas so I'm going to call that a let's call That Little Triangle right there b let's call this C d e f and then let's call this last thing down here G as you can see in terms of consumer and producer Surplus this gets a little more complicated than some of the other things that we've seen so now let's do the thing that we always do let's start by thinking about what happens if there is no tariff and let's start with consumer surplus so consumer surplus with no tariff there's no tariff here's the world price consumer surplus is the area under the demand curve and above that price it's going to be a and b and c and d and e and f a plus b plus C plus d plus e plus f it's just this big triangle under the demand curve and above the world price producer Surplus would be just area G it's the area under the world price and above the supply curve so producer Surplus is G total Surplus is going to be producer Surplus and consumer surplus added up so it's going to be a b c d e f and g a plus b plus there's total Surplus now let's impose the Tariff so with the Tariff it drives price up now to this price the World plot Price Plus the Tariff and so consumer surplus is going to be a and b Now consumer surplus equals a plus b so notice that this tariff is going to hurt consumers consumers are going to lose C D E and F so consumers lose C plus d plus e plus f producers are going to like this right it drives the price up domestically sellers like high prices so they're 
going to end up now getting producer surplus of C and G. so producer Surplus equals G plus c: it started out at G, and now there's this addition of C. you can see that consumers lost C and producers gained it, so this tariff transfers economic well-being from consumers to producers. so you can hopefully now start to understand why a lot of domestic producers will argue for trade restrictions, for a tariff, because that tariff increases their economic well-being at the expense of consumers. now they're not doing it because it's at the expense of consumers, but that's what ends up happening. so producers gain C. and then total Surplus, let's talk about what total Surplus looks like. let me identify what it is and then we'll talk about what's going on. total Surplus is going to be a, b, C, e and G, so it's going to be a plus b plus c plus e plus G. so now let's talk about why it looks like that, why is e going to be included as part of total Surplus, and that's because that ends up being the Revenue that the government is going to create from the Tariff. remember, the tax revenue ends up being generated because every time a unit is imported, T dollars goes to the domestic government. so the loss here, the loss of total Surplus, is that little triangle there, d, and that little triangle there, f. so government revenue is e. let's talk about why: with the Tariff there's Imports, so that horizontal distance is Imports, and this vertical distance right here is the amount of the Tariff, that's T. so if we take that vertical distance and multiply it by that horizontal distance, we get the Tariff Revenue, which is area e. I'm going to say it's also equal to T times the quantity of imports; I'm just going to write quantity and then you put Imports as a subscript, and that quantity would be that distance. so now let's talk about D and F. D and F are the dead weight loss: dead weight loss equals D plus F
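The tariff bookkeeping above (price rising to PW plus T, imports shrinking, revenue of T times the quantity imported, and the two deadweight-loss triangles d and f) can be verified with a small linear example. This is an illustrative sketch with hypothetical parameters of my own, not the lecture's numbers:

```python
def tariff_effects(a, b, c, d, world_price, t):
    """Importing country with linear demand P = a - b*Q and supply
    P = c + d*Q.  A tariff t raises the domestic price from
    world_price to world_price + t."""
    p1 = world_price + t
    qs0 = (world_price - c) / d     # domestic quantity supplied, free trade
    qs1 = (p1 - c) / d              # domestic quantity supplied with tariff
    qd0 = (a - world_price) / b     # domestic quantity demanded, free trade
    qd1 = (a - p1) / b              # domestic quantity demanded with tariff
    imports = qd1 - qs1
    revenue = t * imports           # area e = T x quantity imported
    dwl = 0.5 * t * (qs1 - qs0) + 0.5 * t * (qd0 - qd1)  # triangles d + f
    return imports, revenue, dwl

# Demand P = 100 - Q, supply P = 20 + Q, world price 40, tariff 10:
# imports fall from 40 to 20, revenue is 200, deadweight loss is 100.
print(tariff_effects(100, 1, 20, 1, 40, 10))   # (20.0, 200.0, 100.0)
```

Setting the tariff to zero in the same example recovers the free-trade import volume with no revenue and no deadweight loss.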
that's the loss right there. let's talk about what each one of these represents. d is going to be what we would call a production distortion. here's what the Tariff does: the Tariff raises price in the country, and when price goes up, quantity supplied increases. it creates an incentive for sellers in this country to increase the amount that they sell, and consequently it creates dead weight loss. that's inefficient, right? our domestic sellers have an incentive to produce, and that incentive means that these sellers right in here, with that cost of production, are producing, and that's inefficient, because the good is available at this cost from the World Market. we're creating an incentive that causes inefficient sellers to enter the market and sell the good when the good is available at a lower cost elsewhere. and then f is what we would call a consumption distortion. what happens is the Tariff raises price domestically, and these buyers that are represented along that portion of the demand curve end up leaving the market, they choose not to buy the good. now the value they place on the good is right there, the height of that demand curve represents the value they place on the good, and on the World Market the good is available at this price, and yet because the government in this economy has imposed a tariff, a tax, those consumers choose not to consume the good, and it creates an inefficiency, a dead weight loss: there are mutually beneficial transactions that could take place that do not, because of this tariff policy. so a tariff is one way that the government can generate Revenue. if you look at the early history of the United States, there was a point when all of the revenue of the U.S government was generated through tariffs. that is clearly not the case now; it's generated now typically through the income tax, but tariffs have been used. I won't say very widely, but it is
probably the most common trade restriction that's out there. there are lots of other trade restrictions; let's talk about another one that we'll call a quota, an import quota. so a tariff is a restriction on price; an import quota is a restriction on quantity. it's a limit, a quantity limit, on Imports. it's a policy that says okay, no more than this many units can be imported into the country. a tariff adds a charge per unit on Imports, but it doesn't in any way place a cap on the number of units that can come in; it certainly reduces the quantity that comes in, but that's how people respond to incentives. this is a quantity limit, and the way this is typically enforced is that the government issues licenses to import: you can't import the good unless you have a license, and then they restrict those licenses to the quantity that they want imported. so now let's talk about when the quota would be binding and when it wouldn't be. let's draw a quick picture here. we've got our demand curve, supply curve; let's put our world price up here, there's world price. now what that tells us is, at that world price, there would be the number of units produced in the country, there's quantity demanded, and this would be Imports. but now let's suppose that the government places a quota on Imports that is bigger than this amount right here. let's suppose that amount there is a thousand units, and then the government says okay, we're going to issue licenses and no more than 5,000 units can be imported. well then it would have absolutely no impact, because it's a non-binding quota. so for the quota to be binding, that limit has to be set at an amount that is less than this amount right here. so let's suppose it is less. what would happen is, let's suppose that the quota is half this amount, so we figure out what half that amount is, let's suppose it's right there. well the way
you'd analyze it is, you just find the place where that horizontal distance is the quota amount. well, that means you've got everything you need: right there is going to be quantity supplied domestically, there is going to be quantity demanded, and this distance would be the amount of imports, which is equal to the quota. and it's going to drive price up, just like a tariff, in our country that imposes the quota. so the price of the good would end up being right here, not because a charge has been imposed but because we've restricted quantity to a certain amount; restricting quantity can drive price up. now here's the interesting thing: the effect of a quota is exactly the same, it's completely equivalent to a tariff that would result in Imports of this amount. so if this amount, that distance right there, was equal to that distance right there, then this price right here would end up being equal to that price right there. quotas and tariffs are equivalent to each other if the quota is set to the amount of imports that would exist under the Tariff. what that means is, all of the stuff that we just did right here, the consumer and producer Surplus impacts, would all apply to a quota. so you can take what we did right there, apply it to a quota, and everything's going to work out the same: it's going to create a production Distortion, a dead weight loss there, and a consumption Distortion, a dead weight loss there. now there's going to be one difference, and that difference is that a quota doesn't generate revenue for the government unless the government auctions off or sells the rights to be an importer; if they do that, then it does generate revenue, and the revenue would be that rectangle right there. so we can see that both tariffs and quotas create deadweight loss, they cause total Surplus to be lower in the economy, it diminishes economic well-being. so this is an argument for free trade; this
is a strong argument for free trade, because free trade avoids these distortions; it doesn't create these production and consumption distortions that diminish the size of the economic pie. let's talk about some other benefits of free trade. if you ask economists, do you support free trade or do you not support free trade, it's going to be nearly unanimous, not completely, but a vast majority of economists support free trade. this is part of the reason; here are some other benefits. one of the other benefits is increased variety of goods. opening up trade with the rest of the world opens up your economy to Goods that are produced all around the world, and there's a wide variety of things; there are things that are available in other parts of the world that are not available here, and if we didn't have free trade we wouldn't have access to those things. now that's a benefit that can be hard to quantify. there is a field of Economics that I actually studied, I've done research in this field, it was one of my areas when I got my PhD, and that's non-market valuation. these are things that can be hard to quantify, but trust me, we can quantify them. so what's it worth to have access to that increased variety of goods? I mean, think about it: you can go to just about any decent sized town, or certainly any City in the U.S, and you can get food from all over the world, you can buy goods that have come from every place, and that's a result of free trade; that wouldn't happen without free trade. and having the option to buy this increased variety of goods makes us all better off. if we didn't have access to those things, you would be worse off, and let me say this: you are better off even if you don't consume all of those other Goods, because having the option to consume them makes you better off. so that increased variety of goods, that is another benefit of free trade. let's say also lower
cost of production lower costs of production through what we're going to call economies of scale now there's a decent chance you haven't heard that term economies of scale yet if you're in a principles of micro class you may not yet have done the cost of production chapter that's where we would talk about economies of scale but here's what economies of scale are economies of scale are benefits to getting bigger in a lot of production processes there are benefits to getting bigger in the form of efficiencies that come about because you can let workers specialize in tasks when we open up an economy to free trade that opens up our domestic producers to be able to satisfy demand all around the world which means that typically they're going to increase the scale of production produce more output and a lot of times there are efficiencies that are created because of that so let's leave that as it is another benefit would be say increased competition an increased amount of competition between firms when trade is opened up that means firms have to compete with not just firms here but firms in other countries and and maybe firms here aren't going to like that but I can tell you that competition drives firms to be more efficient it drives firms to be Innovative it drives firms to try to produce goods for consumers that give them a good value for the money that they're spending competition is good for the economy and so um that's another benefit of free trade we could think about increase productivity so what ends up happening is that once you open up trade um and there's more competition then that means that firms that are inefficient are going to get driven out of the market and that's not good if you're that firm don't be that firm so the most productive firms the firms with the comparative advantage are going to be the ones that survive the rigors of the competitive market and uh so this one is very similar to The increased competition it it just gives firms incentives to 
be more productive we can also say that it enhances the flow of ideas so opening up trade brings products into your country it brings technologies into countries that maybe don't have the resources to develop those technologies on their own and that's good that makes those countries more productive so those flows of ideas technology and better ways of doing things enhance well-being another one that we could add here would be free trade reduces what I'm going to call rent seeking behavior it reduces rent seeking behavior from firms here's what that means you probably haven't heard the term rent seeking before rent seeking just means seeking special favors so once the government starts to protect certain industries with tariffs or quotas then people in other industries firms in other industries start to want that for themselves they start to say hey you did that for the clothing industry you need to do that for the sugar industry you need to do that for the steel industry it's kind of like in my class my policy has always been there's no extra credit you get points through homeworks you get points through tests I've never offered extra credit never will and the reason is if I start to offer extra credit or if let's say I gave one student some extra credit for something then everybody starts asking for it the best policy in my opinion for me is just to not do any of that that way I avoid the whole issue I don't have students coming to me at the end of the semester saying look I know I'm getting this bad grade but I'll write you a 10 page paper I don't need that so it reduces rent seeking behavior if nobody gets protection for their industry then you don't have people spending billions of dollars on lobbying trying to get that to happen for their industry so this for a vast majority of economists is a very strong argument for why we would have free
trade what I need to do now actually let's just keep going here so let's talk about why you might be able to make a I'm going to be generous and call it a somewhat shaky argument against free trade so let's think about some arguments against free trade I will be perfectly honest and tell you that most of these arguments I do not find very persuasive a couple of them one of them in particular maybe so let's talk about the fact that we do have restrictions in trade actually I would say that over the last probably 10 years at least in the U.S we've moved towards having more tariffs more trade restrictions than we had had in prior decades so let's think about some of the arguments that people make against free trade one of them would be say the jobs argument the argument is this opening up trade destroys some domestic jobs if you open up free trade yeah probably domestically some jobs are going to vanish trade drives price down if we're an importing nation and we saw that for an importing nation that hurts producers they decrease the quantity that they supply and when they decrease quantity supplied they need less labor some jobs are going to vanish most economists would point out well okay yeah some jobs are going to vanish but jobs are going to open up in other parts of the economy when we open up trade the rest of the world buys stuff from us so even though some jobs in some sectors are going to go down there are going to be other jobs in other sectors so if what we care about is overall well-being then we're not that concerned about some jobs vanishing while other jobs open up now if you lose your job you don't like that that's not a pleasant situation so I'm not saying we don't care you know too bad you lost your job there needs to be some concern for that and maybe we have some training and unemployment benefits but this is very similar if you've had a principles of
macro class this is very similar to say frictional unemployment things change the economy changes we don't want to necessarily prevent that that's a sign of a healthy economy so I don't find that to be a persuasive argument at all yeah some jobs are going to vanish but other jobs are going to open up we could also make say a national security argument so let's suppose that we have free trade and the steel industry starts to vanish out of the economy we start to import more and more steel from other countries that have a comparative advantage in that and so our steel manufacturing industry just declines until it's practically gone then you might be able to make an argument look let's suppose we need to go to war if we need to go to war then we need to have a reliable source of steel so that we can make tanks and planes and all the stuff that we need and so there might be a national security argument I guess I would say this is the argument that I find to be somewhat persuasive yeah there are probably some industries where for national security reasons we need to make sure that we have at least some magnitude of a healthy sector I would also argue that we have to be really careful about this because a lot of times the national security argument is made by the industry itself rather than somebody who knows a lot about national security so if a national security person is making the argument I might find that persuasive if it's the clothing industry that says look there's going to be a time when we're not going to have access to international clothing markets and we need to protect our clothing market I'm probably gonna say nah that just sounds like you want a special favor so I find this one somewhat persuasive but you have to pay attention to who's making the argument is it the clothing industry or is it somebody who actually knows something about national
security there's also an argument that is called the infant industry argument so the argument is this let's suppose we have a new industry the argument is okay we need to kind of protect this industry while it's young let it kind of grow up and get its feet under it and then once it gets kind of established then we can let it compete against the world but while it's young we need to protect it I don't find that persuasive at all if an industry can't develop and compete on the world market when it's young why would we expect it to be able to compete later on it's just a way of trying to get special favors so I would say most economists are pretty skeptical of that argument as well we could think about another one called the unfair competition argument so the argument there is firms will often argue that they shouldn't have to compete against firms in other countries that are getting special favors so maybe in some other country the firms are getting a subsidy from their government and so firms in our country would go to our government and say look you need to either subsidize us or have a quota or a tariff so that you can protect us because it's unfair that we have to compete against those other firms that are getting special favors or special treatment I would say I don't find that to be persuasive the arguments for free trade are still the same so I'm just going to leave that one as it is we could say another one might be protection as a bargaining chip so some people would argue we need to either protect our industries or threaten to protect our industries to try to get what we want politically on the world stage the problem with that is if you threaten protection and somebody calls your bluff then you have to protect and that hurts you so I guess what I would say is that's a dangerous game to be playing other countries can
retaliate with the same thing and the last thing we want is everybody to protect so these are probably the best arguments that can be made for protections like tariffs and quotas but most economists are still pretty skeptical of those most economists would still say look free trade is better we need more free trade rather than less a lot of times in my classes once we get to the chapter where we talk about oligopoly oligopoly is a market structure where there's a small number of firms when there's a small number of firms we talk about game theory game theory is one of the most interesting sub-disciplines within the field of economics and so when you get to the chapter on oligopoly what you'll see is that protection tends to be what we call a dominant strategy let's say this there are strong incentives for countries to engage in trade protection tariffs and quotas and other restrictions when you study game theory what you realize is that it would be better if all countries kind of cooperated with each other and we engaged in free trade when one country starts to protect industries there are going to be incentives for other countries to retaliate and that's the last thing we want the economic pie just starts to shrink we call this problem the prisoner's dilemma so if you want to know more about that go to my video on oligopoly or just Google prisoner's dilemma you'll find all kinds of information about it one way that you can overcome the prisoner's dilemma is if the people involved have a relationship with each other or if the people involved can punish each other for breaking an agreement then that increases the likelihood of cooperation now with the prisoner's dilemma we don't want cooperation it's good for criminals not to be able to cooperate in this case cooperation would be better it would be better if
countries could cooperate and everybody engage in free trade well so one of the things that can encourage free trade or create an environment where free trade can thrive is if we have agreements between countries relationships between countries or maybe trade organizations that can impose penalties if a country breaks the agreement and so what we've seen is free trade agreements like NAFTA the North American Free Trade Agreement or GATT so NAFTA is a free trade agreement among North American countries and GATT stands for the General Agreement on Tariffs and Trade these are trade organizations groups of countries that have gotten together to cooperate the reason we have these is that they increase the probability of countries being able to engage in free trade and sticking with the agreement hopefully that gives you an idea of how some simple models of imports and exports work the welfare implications of free trade and of some trade restrictions and then some arguments for and against free trade I'll see you in another video
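The trade-protection game described in this lecture can be sketched as a tiny payoff matrix. This is a hypothetical illustration: the payoff numbers, the two-strategy setup, and the function names are assumptions of mine, not values from the lecture.

```python
# Hypothetical payoffs for a two-country trade game (illustrative numbers).
# payoffs[(a, b)] gives (country 1 payoff, country 2 payoff) when
# country 1 plays a and country 2 plays b.
payoffs = {
    ("free_trade", "free_trade"): (100, 100),  # both gain from open trade
    ("free_trade", "protect"):    (60, 110),   # protector grabs a short-run edge
    ("protect",    "free_trade"): (110, 60),
    ("protect",    "protect"):    (70, 70),    # trade war: both worse off
}

def best_response(other_choice):
    """Country 1's best reply to the other country's choice."""
    return max(["free_trade", "protect"],
               key=lambda mine: payoffs[(mine, other_choice)][0])

# "protect" is a dominant strategy here: it is the best reply no matter
# what the other country does, even though mutual free trade pays more.
print(best_response("free_trade"))
print(best_response("protect"))
```

With these payoffs, "protect" is each country's best reply to anything the other does, yet mutual protection (70, 70) pays less than mutual free trade (100, 100), which is exactly the prisoner's dilemma structure the lecture describes.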
Principles of Microeconomics
Chapter 14: Perfect Competition, Part 2
in the previous video we talked about what the supply curve looks like for a perfectly competitive firm in both the short run and the long run and what we saw was that it was portions of the marginal cost curve so in general you can think about the marginal cost curve as representing the supply curve for a competitive firm but then it's different portions of the marginal cost curve in the short run it goes all the way down to the bottom of the average variable cost curve in the long run it only goes down to the bottom of the average total cost curve and the key there is that in the long run all costs are variable okay so now we need to think about what the market supply curve is going to look like because remember it's the intersection of the market supply curve and the market demand curve that's going to determine the price so each individual firm is just one small part of the market so we need to think about how the whole market is going to function so we're going to start with the easy case and we're going to think about what the market supply curve looks like in the short run in a competitive market so let's start by thinking about the short run and keep in mind here we're after the market supply so let's just put market supply up here that's what we're after the nice thing about the short run is that we know that the number of firms is fixed in the short run so if you were to ask me how many pizza restaurants will there be in Warrensburg tomorrow I would be able to tell you because I know that it's going to be the same number there is today nobody can throw up a pizza restaurant within 24 hours and if a pizza restaurant let's say is leasing its building then its lease is not going to run out in the middle of the month and so tomorrow there will be the same number of restaurants as there are today now if we start moving out into the future remember we don't define the short run and the long run in terms of a set amount
of time we define it in terms of the period of time during which the firm has a fixed cost or a fixed input so if you were to ask me what's the number of pizza restaurants going to look like in six months or a year well then it could be different we could have a new restaurateur move in we could have one go out so for now we're going to think about the short run where the number of firms is fixed so let's put that down number of firms is fixed that's important now that makes our life easy because we know that each individual firm's supply curve is its marginal cost curve and so think back to when we were looking at what the market supply curve is let's put a supply curve over here let's suppose we have two firms here's a firm here's a firm and then if we were going to think about the market supply curve what we did is we picked a price like say p1 we went over and we looked at how much this firm wanted to supply at that price that gave us a quantity here and at that same price we looked at how much this firm would want to supply and then that told us that over here at that same price of p1 the total quantity supplied in the market if these were the only two firms is this quantity plus that quantity it's going to end up being something out here and so that would give us a point on what would end up being the market supply curve so we already know that if you know how many firms there are then the market supply curve is just the horizontal summation of the individual firms' supply curves so it's easy to get the short run market supply curve let me show you an example where we use a few numbers here let's suppose that we had say a thousand identical firms remember we're talking about perfect competition and there are lots and lots of firms so we wouldn't just have two firms we would have lots and lots of firms and so if I were to ask you to come up with the market supply curve for a competitive market I'm not going to be asking you to horizontally sum over a
thousand different firms or 500 different firms even if you did it for 20 it would be really challenging and time-consuming if we assume that the firms are identical then it becomes very easy because we can just draw one of the supply curves and then we know what every other firm's supply curve looks like so let's make that assumption let's suppose we have a thousand identical firms let's write that out here one thousand identical firms now that's clearly a simplifying assumption firms aren't identical out there in the real world but that's fine we're gonna start simple if we think about what the supply curve for one of those firms looks like let's suppose it looks like this we've got price quantity here's the firm's marginal cost curve right which we now know is the firm's supply curve let's suppose that at a price of $1 this firm would want to supply a quantity of 100 units if the price was $2 let's suppose the firm would want to supply a quantity of 200 units and let's label this marginal cost because we know that that firm's marginal cost curve is its supply curve now if we want to get what the market supply curve looks like let's draw a picture out here and you're gonna have to bear with me here because the scale of my horizontal axis right here is not the same as the scale of my horizontal axis in this second picture as a matter of fact what I'll start doing here in just a little bit is I'll start labeling this one q to represent the amount that a single firm produces and so I'll call that little q and then I'll call this one big Q that helps you kind of remember that the scale here is different than the scale here so if we think about what's going on here if we were to do this picture we would have a thousand of these things all set up out here we don't need to do that because we know that if every firm's identical that means that at a price of $1 every
firm's going to be supplying a hundred units there are a thousand firms so the total quantity supplied would be 100,000 so if we go out here this would be a quantity of 100,000 a thousand firms each supplying a hundred units we know that at a price of $2 if each firm is supplying 200 units and there are a thousand firms then the total quantity supplied would be 200,000 that gives us another point out here on our market supply curve and if we were to connect those we would get the market supply curve so this is our single firm let's make sure we label that that's our single firm and right here's our market so you can see that if we're talking about the short run then we already know how to get the market supply curve it's very easy it's just the horizontal summation of the individual firms' supply curves what we want to do now is we want to think about what the market supply curve looks like in the long run and that's a more challenging situation because in the long run the number of firms is not fixed so in the long run we don't know this number all right it's easy when we know that number but if we don't know that number then we don't know how many firms we're going to be summing across so we have to think about the incentives that each particular firm faces in the market so let's talk about the market supply curve in the long run and the key here is that the difference between the long run and the short run is that there is entry let's say or exit in the long run we would never have a situation where there's entry and exit happening at the same time we know that if price is above average total cost that means profit is positive if profit's positive firms are going to be entering that market if price falls below average total cost profit will be negative and if profit's negative firms will start to exit that market so you're either going to have profit positive and there being entry in the long run or profit's going to be negative and there's going to be
exit in the long run but these two things would not happen at the same time so now let's let's make this as simple as we can we're going to make an assumption that's kind of similar to this identical firm assumption but we're going to relax it a little bit so let's suppose that all firms have access to the same technology and they're going to have access to the same markets for inputs now here's what that means this means that the cost curves for the firm's are all going to be identical if they have access to the same technology in the same markets for inputs then this means that no firm has a cheaper source of inputs than any of the other firms ok so they're all going to be paying the same prices for inputs they all have the same technology that means nobody has a cheaper way of producing the good ok so this means that they're all going to have the exact same cost curves so this is kind of like assuming that they're identical but we're not really assuming that they're identical we're just making sure that they have the same cost curves now let's talk about this entry and exit decision so the decision of whether or not to enter or exit a market depends on whether or not firms are making profit in the market or whether or not firms are losing money in the market whether their profit is negative so if you're an entrepreneur and you're looking around for an opportunity a market to enter a business to start then what you're going to be looking for is you're going to be looking for markets where the firm's that are already in the market are making a positive economic profit you're not going to be looking for situations where the firm's are losing money especially not if you've got access to the same technology in the same market for inputs you may be thinking yourself well but wait what if I what if I come up with a better way of producing the good a more efficient way of producing the good well then you would have better technology than everybody else and so in that 
case you might enter a market where all the firms are maybe losing some money but you've got a better way of producing it so maybe you could make money whereas they would lose we're assuming that away for right now okay in the real world that kind of stuff does happen but for right now we're not going to let it we're going to start with the simplest situation so now let's talk about what will happen if first profit is positive and so I'm going to write out here kind of the order of events and I want you to think carefully about what's going to happen if we start with a situation where firms are profitable so if firms in the market if firms are profitable then what we're saying here let's write out what this means we're saying that this means that economic profit is positive profit is greater than zero and this happens this is the same thing as saying that price is greater than average total cost right remember we talked about what profit looks like and let's put it up here remember the profit looks like this profit is equal to price minus average total cost multiplied by Q profits also equal to total revenue minus total cost these are equivalent statements we can do a little bit of algebra and turn this into that we did it earlier so if price is greater than average total cost then this term will be positive we'll multiply it by a quantity which is obviously positive and our profits positive so we're going to start here firms in the market are profitable then that means in the long run firms are going to be entering the market so number of firms is increasing and I'm just going to put here that entry will be happening that's our entry condition right enter a market if firms are profitable well if the number of firms is going up think back to what happens in in the market if the number of firms is increasing remember that the number of firms is a determinant of the market supply curve so if the number of firms is going up this means market supply is increasing well 
if market supply is increasing then this means that the market price is going to be falling if you think about the supply curve and the demand curve and what happens if that supply curve shifts to the right then that means the price is going to be falling so price Falls well if price is falling then profits going to be going down so if this number is falling then we know that profit Falls and let's think about when this is going to stop so as long as profit is positive firms will be entering the market market supply is going to be increasing market price is going to be falling profits going to be falling this will continue as long as profit is positive so this continues until profit is driven to zero continues until profit equals zero so we know that if profits positive in the market then that's going to create a series of events that will drive that profit to 0 if profits positive firms will enter the market to get in on that positive profit they're not entering to drive profit to 0 they don't want that to happen but the act of entering shifts the market supply curve to the right it drives price down and this all continues until profit goes to zero let's think about what now happens if profit is negative so if firms in the market are making losses so let's if firms are making losses well if firms are making losses then what we're saying there is the profit is less than zero profit is negative that's what a loss is is a negative profit and of course that happens when price is less than average total cost so you can see that if price is less than average total costs then this term is bigger than that term and this whole thing is negative so our profit will be negative quantity is always positive and so or zero if the firm's choosing not to produce but quantities never going to be negative negative quantities don't make economic sense so if this terms negative then profits negative well if profits if price is below average total cost that's our exit condition that 
means that the number of firms is going to be decreasing because firms will be exiting the market so the number of firms is decreasing if the number of firms is decreasing then our market supply curve is going to be decreasing right since that's a determinant of supply so market supply decreases now if you think about what effect that has if the market supply curve is shifting to the left that's going to be driving price up so market supply decreases and that causes our price to rise market price rises if the market price is going up then profit rises now remember that profit is negative so for profit to rise that means the losses are getting smaller so I'm going to say here profit rises but let's put here in parentheses that the losses get smaller and let's think about how long that's going to continue well as long as profit is negative firms will continue to exit as long as price is below average total cost firms will continue to exit so this exit will continue until price is driven up enough that profit goes to zero so let's just say this continues until profit equals zero so now let's think about what we've just come up with here we've just seen that if firms in the market are profitable if profits are positive then in the long run other firms are going to enter the market and that profit is going to get driven to zero the entry would continue until profit falls to zero we've also seen that if firms are losing money if price is less than average total cost which makes profit negative then in the long run firms will be exiting the market that exit is going to be driving price up these firms are not exiting to help the other firms that stay in the market they're exiting because their profit is negative but when they exit that causes market supply to decrease it causes price to rise and that will continue until profit goes to zero so what we've just come up with is that in a free market let's put it
over here and this is an important conclusion okay so in a free market in a competitive market profit equals zero in the long run entry or exit will drive profit to equal zero in the long run well what this means is that if price is ever greater than average total cost it will be driven down until it's equal to average total cost if price is ever less than average total cost it will be driven back up until price is equal to average total cost so in the long run price is equal to average total cost what we need to do is clear this off and then we'll take a look at kind of what that looks like graphically and it's something that we've actually already seen let's kind of summarize what we've just figured out here and that is we know that if price is ever greater than average total cost then profit is going to be positive and firms will enter and as the firms enter that market that's going to drive price back to equal average total cost and profit will be driven to zero if price is ever less than average total cost we know that that means that profit is negative and firms will be exiting in the long run so the act of exiting will be driving price up profit starts negative because price is lower than average total cost and our losses will be getting smaller until they are driven to zero so we've just figured out that in the long run profit equals zero which means that price is equal to average total cost now let's think graphically about what that means actually let's think about a couple of other things that we know about a competitive firm and then we'll look at the picture we know the profit maximizing strategy for a perfectly competitive firm is that they're going to produce where price equals marginal cost remember all firms produce to maximize profit they produce the quantity where marginal revenue equals marginal cost for a competitive
firm price and marginal revenue are the same thing they're equal to each other and so for a competitive firm they're going to produce the quantity where price equals marginal cost what that means is we know that price equals marginal cost and in the long run price is going to end up being equal to average total cost because profit is equal to zero we know that price will be driven to where it equals average total cost in the long run in a perfectly competitive market now let's think about what that means what that means is that each of the firms is going to be producing at its efficient scale because this point here the bottom of the average total cost curve that is the efficient scale of the firm so let's draw now the picture that we've got let's put our firm picture right here so we've got the cost curves of the firm let's put our marginal cost up here and let's put our average total cost we saw earlier that if price is ever up so let's put this point on there if price is ever up here profit's going to be positive if price is ever down here profit's going to be negative right there's the price that would make profit equal to zero because if this is the price then the firm would look where price equals marginal cost which would happen right there the firm would produce this quantity we saw in a previous discussion when we were talking about the cost of production that that quantity is the efficient scale of the firm so what we see is that entry and exit is going to drive price equal to the minimum average total cost in the long run it can go up here in the short run but entry is going to drive it back down it can go down here in the short run but exit will drive it back up so in the long run price is going to be driven to where it equals the bottom of that average total cost curve now let's think about what that means in terms of the market supply curve so what this means is that in the long run price is always going to be driven back to this
price right there where it's equal to the minimum average total cost for the firm it doesn't matter if there are a few firms in the market or a lot of firms in the market if there are a few firms then the total market quantity would be relatively small but price would be equal to this price if there were a lot of firms in the market then quantity would be relatively large but price would still be equal to this so if we're thinking about what the market supply curve looks like it's going to be horizontal the long run market supply curve given the assumptions we've made in this type of market in a competitive market is going to be perfectly elastic so here's our firm and right over here is our market and this is the long run market supply curve in a perfectly competitive market price is going to be driven to equal the minimum average total cost so now let's step back for a second let's think about something usually what I'll do in my face-to-face classes is I'll ask the class how many of you would go to work every day to earn a profit equal to zero because what I've just told you is that in a competitive market profit is driven to zero in the long run and if we think about markets that are perfectly competitive there aren't a lot of them but there are a lot of markets that act like they're perfectly competitive if we think about examples of perfectly competitive markets commodity markets like corn and soybeans and we've seen that the gasoline market is very close to a perfectly competitive market so there are several that we can describe as perfectly competitive and what I've just told you is that profit gets driven to zero in those markets in the long run and most students don't really even stop to think about what that means so if I were to ask you would you get up every day and go work a job where your profit is equal to zero raise your hand if you would very few students in my class will raise their hand there will be a few that will raise their hands
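Before resolving that zero-profit puzzle, the entry-and-exit mechanics described above can be made concrete with a toy simulation. Everything here is an illustrative assumption of mine (the cost function, the demand curve, the starting firm count), not numbers from the lecture; the point is only that entry when profit is positive and exit when profit is negative drives price to the minimum of average total cost.

```python
import math

# Toy long-run entry/exit in a competitive market (illustrative assumptions).
# Each identical firm has cost C(q) = 50 + q^2, so MC = 2q and
# ATC = 50/q + q, whose minimum is 2*sqrt(50) (about 14.14) at q = sqrt(50).
# Market demand is Q = 1000 - 20P.

def firm_quantity(p):
    # Profit maximization for a price taker: set P = MC = 2q.
    return p / 2

def firm_profit(p):
    q = firm_quantity(p)
    return p * q - (50 + q * q)

def market_price(n):
    # Market clearing with n firms: n * (p/2) = 1000 - 20p.
    return 1000 / (n / 2 + 20)

n = 10  # start with a few highly profitable firms
for _ in range(1000):
    profit = firm_profit(market_price(n))
    if profit > 0.5:
        n += 1      # positive profit attracts entry, price falls
    elif profit < -0.5:
        n -= 1      # losses cause exit, price rises
    else:
        break       # profit is roughly zero: long-run equilibrium

min_atc = 2 * math.sqrt(50)
print(n, round(market_price(n), 2), round(min_atc, 2))
```

Running this, the firm count grows until per-firm profit is squeezed to roughly zero, and the equilibrium price lands at (essentially) the minimum average total cost, matching the horizontal long-run supply curve described above.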
some of them won't know why they're raising their hand they will feel like I'm getting ready to pull the rug out from under you which I am right but most students will keep their hand down which means if you would keep your hand down you've forgotten something that I knew you would forget I told you back in a previous chapter that you would forget this so let me tell you a different type of story and then we'll understand more about what this means let's suppose I get tired of teaching and I decide to quit my teaching job and I decide that my new plan in life is I'm going to sit at the end of my driveway and sell lemonade and so I finish out the semester and then you don't see me anymore and then let's say maybe a year goes by and you see me out at the store and you say hey doctor how are you doing and I say hey I'm doing great and you say hey how's that lemonade thing going for you how are you doing there and I say to you I'm making zero profit and then I walk away what would you think about me I hope that you would remember that I'm an economist and when I talk about profit I'm talking about economic profit and we talked about the difference between economic profit and accounting profit and so I hope you would remember that I am NOT talking about accounting profit if accounting profit is equal to zero then I'm making just enough money to cover my explicit costs the costs of sugar and water and cups and lemons that I need to buy to make my lemonade but no more I hope you would remember that I'm an economist and when an economist says profit is equal to zero that means I'm making enough revenue to cover all of my explicit costs and all of my implicit costs and if you think about what my biggest implicit cost is going to be if I quit this job and sell lemonade my biggest implicit cost of selling lemonade is the opportunity cost of this job that I'm giving up it's what I
make in this job so I've just told you when I told you that my profit is equal to zero then I'm making enough money selling lemonade at the end of my driveway to compensate me for this job that I've given up that's a whole different story right I can't make enough money selling lemonade at the end of my driveway to earn what I could earn teaching so when we say that profit is driven to zero in a competitive market we are not saying that people can't put food on the table what we're saying is that people are being compensated for the value of their time whatever their next best employment alternative is they're making that amount so do not think about this in terms of accounting profit or it won't make sense nobody would get up and work a job every day if their accounting profit is equal to zero but lots of people get up and work a job where their economic profit is equal to zero because they are being compensated for whatever their time is worth based on their next best alternative okay so it's very important that you understand that what we want to do now is we want to think about what happens if the market demand curve shifts we want to think about the effect that has in the short run and in the long run so let's just start by going back to what we would do before we ever studied this perfect competition chapter if we were just talking about a market we're talking about market demand and market supply we would draw a picture like that here's our equilibrium price we'll call it P star our equilibrium quantity we'll call it Q star actually let's call it Q 1 and P 1 let's get rid of that there let's call that demand curve D 1 if we were to think about what happens if demand were to increase then our demand curve would shift to the right to D 2 our equilibrium would move from point A up here to point B and we would see that that ends up driving our price up from P 1 to P 2 and our quantity up from Q 1 to Q 2 so an increase in demand would
drive our price up and it would drive our quantity up and that's where we stopped in our previous demand and supply chapter and that's fine what this is is a short-run picture this shows us that in the short-run an increase in demand is going to drive price up and it's going to drive quantity up what we want to do now now that we understand what's going to happen in the long run and that's this stuff now we can add to this we can think about what's going to happen first in the short-run if demand were to increase in the short-run it's going to drive price up but we now want to think about what's going to happen in the long-run and so we'll clear this off and then we'll draw a picture and we'll see kind of the rest of this story let's start by thinking about what long-run equilibrium looks like in a perfectly competitive market so remember an equilibrium is a condition of rest so if we're in equilibrium that means nothing's going to be happening well we just talked about what happens if profit is positive then something's going to be happening if profit is positive then other firms are going to be entering the market if profit is negative something is going to be happening if profit is negative then firms will be exiting the market so an equilibrium is going to be where profit is equal to zero if profit is equal to zero there's no incentive to enter and there's no incentive to exit right you wouldn't exit because all of your costs are covered all of your explicit and all of your implicit costs are covered and if profit is equal to zero well nobody's looking to enter that market so let's take a look at what our long-run equilibrium looks like let's put our market over here and then let's put a picture of our firm over here because remember the firm responds to what happens in the market the firm is part of the market but the firm is reacting to the price that is created in the market the firm is a price taker so we'll call this our market and this is going to be one
of the firms in the market now remember we're assuming that our firms all have the same cost curves so if I draw you a picture of one of the firms you know what all the firms look like so we've got our price I'm going to label this big Q over here and I'm going to label this little Q over here because the quantity that one firm produces is just a small portion of the total quantity bought and sold in the market so now let's draw our demand curve there's our market demand curve that has to do with all the consumers in the market let's go ahead and put our marginal cost curve up here and our average total cost curve for the firms now we know that the entry and exit is going to drive price always to equal the price where the firms will earn zero profit which happens right here so the firms will be producing this quantity this will be our price that exists in long-run equilibrium now we've also seen that the long run market supply curve is horizontal at that price so there's our price this is our long-run market supply curve it's perfectly elastic we also saw that our short run market supply curve is upward sloping and for us to be in a long-run equilibrium which this is the picture that we're drawing here we're going to be calling this long-run equilibrium our short run market supply curve needs to intersect right here so if our short run market supply curve is right there we'll call that short-run supply now we're going to be in a situation where nothing's going to be happening as long as the demand curve doesn't shift then our price is going to be this price the firms are going to react to that price and at that price they produce this quantity this is the price where the firms are all going to earn zero profit so there's not going to be any entry and there's not going to be any exit ok so this is a picture of long-run equilibrium if you ignore this long-run supply curve in this picture right here then this is just a plain
demand supply curve this would just be the horizontal summation of all of the individual marginal cost curves ok now what we want to do is we want to talk about what happens if demand goes up or if demand goes down remember demand is not under the control of any of the firm's this has to do with the consumers in the market so let's think about what happens if market demand goes up and let's what I'm going to do is I'm going to draw this picture again right over here and then we're gonna do a whole bunch of stuff over here but I want to keep that picture because at the end we're gonna kind of compare where we started and where we ended so let's draw this picture again and this probably goes without saying you should be drawing these along with me because this is going to look a little complicated at the end the story is actually very simple and the picture is very simple if you understand where you start and where you begin or where you start and where you end if all you do is you just sit there and passively watch me do this then when I ask you questions on a test it's going to be more challenging if you'll work through it with me then it'll be much more easy I think much less difficult so let's draw this picture again so we've got over here our market we've got here our firm I believe that it's easier to start with the firm picture because we need to know where this average total cost curve is in order to figure out the price that the firm is going to earn zero profit at so go ahead and put your marginal cost curve in here and your average total cost once we've got the average total cost curve we can figure out the point at which the price at which this firm would earn zero profit and it's right there I'm going to call it p1 now over here in our market picture we know that the long-run supply curve in the market is going to be horizontal at that price where the firms are in zero profit so there's p1 we've got our market demand curve we'll call it D one it's going 
to shift and we've got a short-run supply curve we'll call it short run supply curve one because it's going to shift our market quantity is going to be found by looking at the intersection of the market demand and market supply curve so that happens right here we'll call that q1 big q1 here is little q1 remember that this quantity in that quantity are not the same quantity this is just one of a lot of firms each firms going to be producing this quantity and if we add it up across all the firms we get that quantity okay now I'm going to call this point let's start right here this is going to be my initial equilibrium I'm going to start at a and over here in this picture I'll just call that little a okay so let's put together our story down here our story is we're going to start at a and little a price is equal to p1 market quantity is equal to q1 and each firm the firm quantity is equal to little q1 that's where we're going to be starting and notice that each of these firms is going to be earning a profit equal to 0 so we are starting in long run equilibrium all firms are earning a profit of 0 now let's think about what happens market demand increases so let's suppose that one of the determinants of demand changes maybe this is a normal good and consumer incomes go up okay or maybe it's an inferior good and consumer incomes go down or maybe the price of some related good changes in such a way that it increases the demand for this good let's not worry about which particular determinant it is let's just suppose that our market demand shifts to the right to d2 now what we want to do is we want to think about what happens in the short run and then what's going to happen in the long run so let's start with the short run so let's say market demand increases let's say to d2 we end up in the short run so let's start here with the short run effect so in the short run we get a new equilibrium right up here at point B so the way you find the short-run equilibrium is you look 
at the intersection of your new demand curve and the short-run supply curve and so what happens in the short run is exactly what you already know happens from an increase in demand it drives price up and it drives quantity up so our equilibrium price in the market is going to rise to p2 so we get an increase in the market price and we can see that our market quantity increases to q2 so let's go ahead and put that down here our market price rises to p2 let's say quantity rises to q2 now let's think about what happens for our firm the firms are all price takers they don't have any control over the price but we've seen what happens when price rises we've seen how a firm responds to that the firm is going to look for price equals marginal cost so we go over here to our marginal cost curve and we hit it right there we already know that a firm will respond to an increase in price by increasing the quantity that they produce so each firm will produce a higher quantity than before so firm quantity little q rises to q2 now the firms all made a zero profit at a price of p1 so when price goes up their profit is going to go up now let's look at what profit is going to be in this picture we know how to figure out where profit is we've got our price we've got our quantity we need to know the average total cost of producing that quantity q2 so if we go up here to the average total cost curve if we hit it right there there's the average total cost of producing that quantity q2 if we bring that across here then the area of that rectangle right here will be the profit that each firm earns now notice that that little second line right there isn't the same as p1 because when the quantity changes average total cost has to go up average total cost is at a minimum at q1 so if the quantity rises average total cost has to go up okay so there's the profit that each firm is going to earn so profit is now positive so that's what happens in the short-run an increase
in demand drives price up firms respond to that increasing price by increasing quantity and making a positive profit now let's think about what happens in the long run so let's draw a little line here and now let's put our long run over here so in the long run now we're back to that story we had over here on this side of the board in the long run if firms in a market are earning positive profit other firms will enter so because profit is positive other firms enter in the long run now we know that if other firms are entering that's a determinant of supply the number of firms is going to be increasing and so what's going to be happening is as other firms enter the supply curves are going to start to shift to the right so the supply curve here this short run supply curve let me use the edge of this paper the short-run supply curve is going to start to shift to the right and as it does it's going to be bringing price down the more it shifts to the right the more it brings price down so firms enter market supply increases now notice that there's two supply curves here I didn't label it but this one's our long-run supply curve but it's horizontal and if a horizontal line shifts to the right it still just looks the same so we don't need to worry about that one it's the short-run supply curve that shifts so if we think about it shifting it's not going to just shift that much because if it shifted just this much profit would still be positive because price would still be above P one so that entry would still continue so the entry is going to continue until the short run supply curve has shifted enough to bring price back down to P one and that would happen once the supply curve has shifted over to this point right here so our short-run supply curve is going to shift to the right let's say it increases to short-run supply 2 we get a new long-run equilibrium right here at Point
C so a new equilibrium at C let's label here we had our short-run equilibrium right here at point B that would correspond to this point over here I'll call that little B okay now in the long run other firms enter that shifts our supply curves to the right and as the supply curves shift to the right it's going to be bringing price down and as price falls profit falls if we look in this picture as price falls then our marginal revenue curve for this firm would be shifting down and if price falls back to p1 then the firm is going to be right back there to point a producing quantity q1 so price falls back to p1 let's talk now about our firm quantity our firm quantity q is back to little q1 each firm is right back to point a producing a quantity of q1 let's talk about our market quantity here's our new long-run equilibrium in the market our quantity is out here at q3 so market quantity Q increases to q3 and of course because we're back here at p1 profit for each firm is back to 0 so we're at a new long-run equilibrium where each firm is making a profit equal to 0 now let's think about how is it that if our market quantity is q3 which is higher than where it started how is it that each firm can be back to producing q1 and the answer is that there's entry there are now more firms in the market each firm is making zero profit producing q1 but because there are more firms the market has expanded quantity is out here at q3 so let's just briefly run through what happened here we started at a price of p1 all firms were making zero profit producing little q1 in the short run we had an increase in market demand and in the short run that drove our price up to p2 drove our price up to here firms responded to that higher price by increasing the quantity they produce they ended up moving up here to point B and making a positive profit that's the short run impact in the long run because profit is positive other
firms start to enter the market and as they enter the supply curves shift to the right and that's bringing our price back down so as this entry is taking place our price is falling back down to p1 and as price falls back down to p1 these firms decrease their quantity back to q1 and because p1 was the price where they earned zero profit they're back to earning zero profit but now there are more firms in the market each of which is earning a zero profit so these firms enter not because they want to drive profit to zero that's the last thing they want to do they want to get in on the profit but the act of entering drives it back down to zero so we're back in a new long-run equilibrium and all firms are earning zero profit and this probably looks very complicated when you just look at the picture and you look at the story it appears to be complicated but the story is actually very straightforward and it's relatively simple what we want to do now is think about what would happen if we had a decrease in demand so an increase in demand causes profit to be positive in the short-run but profit gets driven back to zero in the long run we'll clear this off and then we'll talk about a decrease in demand we'll see that profit becomes negative in the short-run but it's driven back to zero in the long-run so we'll do that here after we clear this off all right let's do a decrease in demand now and let's do the same thing we're going to start with a picture of long-run equilibrium so we've got our market here we've got our firm here let's start with marginal cost and average total cost when you're drawing the picture it's always best to start by drawing your cost curves because you need to know where this point is so right here is the price at which the firms are going to earn a zero profit and that's where our long-run market supply curve is going to be horizontal so here's our long-run supply curve horizontal at the price at which firms
are in zero profit so now over in this picture we can put our market demand curve call that d1 we're going to shift it let's put our short run supply curve in it comes right up through here that's also going to shift our initial equilibrium in the market we're calling point a our initial equilibrium for our firm we're calling little a and so right here's the total market quantity q1 and here's the quantity each firm is producing little q1 each firm is earning a zero profit and remember that this quantity is not the same as that quantity our scales are different on our horizontal axis so let's put our story up here we're going to start at a where price is equal to p1 market quantity is equal to q1 firm quantity is equal to little q1 each of the firms is producing little q1 because remember each firm has the same cost curves all of our firms have the same cost curves so all we need is one picture and each of those firms is earning a profit equal to zero so we're in a long run equilibrium now let's start with a decrease in demand suppose market demand decreases market demand falls let's say to D2 so we get a decrease in market demand to D2 in the short run that takes us to a new equilibrium right here at point B let's put our B right there so we get a new equilibrium at B and we know that a decrease in demand is going to result in a decrease in the market price or price falls to p2 and the market quantity falls to q2 alright so price falls to p2 quantity falls to q2 for our firm because our market price falls to p2 and each of our firms is a price taker so there's the price p2 if we go over and see where price would equal marginal cost it happens right there so each firm responds to that decreasing price by decreasing the quantity that it's going to produce and now we can identify the loss that each of the firms is going to earn they earn zero profit at p1 so if price falls down here to p2 they're gonna earn a
negative profit we've got our price and our quantity we need the average total cost of this quantity if we go up from that quantity to the average total cost curve we hit it right there and notice that that point has to be higher than where we were at Point a because if you move either to the right or the left of Point a average total cost rises because we know that average total cost is u-shaped so right up here would be the average total cost of producing this quantity q2 and this rectangular area right there would be the loss that each firm earns negative profit so we get a firm equilibrium let's call that little B profit is negative profit is less than zero and the firm quantity falls to little q2 so this is our short-run decrease in demand drives price down firms lose money now let's think about our long-run because profit is negative each firm is earning a negative profit so firms will start to exit in the long run and this sentence is very important because profit is less than zero firms start to exit as they exit remember the number of firms is a determinant of market supply so the supply curves start to shift to the left and as we shift this short-run supply curve to the left we see that price starts to rise and that supply curve will continue shifting the exit will continue until the supply curve has shifted enough to bring our price back up to p1 which would happen right here so we get a decrease in supply short-run supply shifts left and it will shift until it has brought our price back up to p1 so short-run supply shifts left to short-run supply curve 2 we get a new long-run equilibrium right up here at Point C so we get a new long-run equilibrium at C notice that our price is back to p1 now let's think about each of these firms as the supply curve shifts to the left and it
brings price back up as price increases firms are going to be increasing their quantity back up to Q 1 so when price goes back to p1 each of the firm's is right back to point a producing quantity Q 1 so firm quantity is back to little q1 each firm is back to earning zero profit profit is back to zero now our market quantity is down here at Q 3 so let's put quantity Falls to q3 each firm is back to producing q1 but the total quantity in the market has fallen to q3 and the reason is that there are fewer firms in the market because firms have exited the market so the size of the market shrinks but we're back to a new long-run equilibrium at Point C where each firm is earning zero profit our firm started at Point a went to point B and went back to point a remember they're their price takers so they don't have any control over any of this that's happening hmm so if demand increases if market demand increases in the short run profit will be positive because it's going to drive price up but entry will drive that price back down to where the firms are earning zero profit if our market demand curve decreases in the short run profits going to turn negative because that market decrease in the market demand curve drives price down in the long run though firms will start to exit and as they exit the supply curve shifts to the left and it brings price back up the exit will continue until profit goes to zero until price gets back to p1 and we're in a new long-run equilibrium where all of the firms are earning zero profit again so you can see here that this again this looks relatively complicated but as long as you think about it just in terms of the story that we looked at earlier verbally where firms in the market are making profit then in the long run other firms are going to enter and as other firms enter the number of supply or the number of firms is going up and supply is increasing as long as you remember that story you can use that story to build the long-run part of this 
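The entry-and-exit story above can be sketched as a tiny simulation. All numbers here are hypothetical, chosen only to illustrate the mechanics: market demand is P = 62 − Q, every identical firm has TC(q) = 50 + 2q + 0.5q², so MC(q) = 2 + q and each firm supplies q = P − 2 where price equals marginal cost.

```python
# Toy entry/exit simulation (hypothetical curves, not from the lecture).
# Demand: P = 62 - Q.  Each firm: TC(q) = 50 + 2q + 0.5q^2, MC(q) = 2 + q,
# so each price-taking firm supplies q = P - 2.

def equilibrium_price(n):
    # market clearing with n firms: 62 - Q = P where Q = n * (P - 2)
    return (62 + 2 * n) / (1 + n)

def firm_profit(p):
    q = p - 2
    total_cost = 50 + 2 * q + 0.5 * q ** 2
    return p * q - total_cost

n = 1  # start with one firm earning a positive profit
history = []
while True:
    p = equilibrium_price(n)
    profit = firm_profit(p)
    history.append((n, p, round(profit, 2)))
    if profit > 1e-9:        # positive profit attracts entry
        n += 1
    elif profit < -1e-9:     # losses cause exit
        n -= 1
    else:
        break                # zero profit: long-run equilibrium

for row in history:
    print(row)
```

Each entry drives the price down and shrinks per-firm profit, and the process stops once price reaches the zero-profit level (minimum ATC = 12 in this example, with 5 firms), which is exactly the long-run story in the lecture.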
it's actually very easy and we already know the short-run part we learned that at the beginning when you first start learning demand and supply you learn the short-run part of it what we want to do now is just kind of think about why the long run supply curve might slope upward so we've got a long run supply curve here that's perfectly elastic and the reason is entry and exit is always driving price back to p1 but there are some circumstances under which we could have an upward sloping long-run supply curve so let's ask when might the long-run supply curve slope upward what we've got here is what we call a constant cost industry the picture we did prior to this one was an increase in demand let's think about what happens there what we saw in the example right before this one is that when market demand increases that drives price up and firms respond to that higher price by increasing the quantity that they produce as they increase the quantity that they produce that means the demand for inputs which comes from firms is going to be increasing because they're producing more output well an increase in the demand for inputs typically is going to drive the price of those inputs up we didn't change the price of inputs in this picture we held the price of inputs constant if we were to allow the price of inputs to go up then these cost curves would start to shift up if the cost of production goes up then the cost curves are going to move up and that means that the price at which the firms earn a zero profit is higher than before well if the price at which firms earn a zero profit is higher than where it started then that means your long-run supply curve is going to slope upward so we could have an increasing cost industry what that means is that as the industry expands as market quantity increases input costs increase the result of that is you're going to
have a long-run supply curve that is going to be upward sloping which is more typical of what we think about a supply curve looking like ok so instead of this long run supply curve being horizontal it would be upward sloping it's also the case that you could have some firms with better technology than other firms okay so if we relaxed the assumption that all firms have the same technology or if we were to relax the assumption that all firms have access to the same inputs if some firms have access to cheaper inputs then those firms will have cost curves that are lower than the other firms if we relaxed either of those assumptions the technology assumption or the assumption that they face the same input prices then we can get long-run supply curves that slope upwards so if some firms have better technology then that can result in our long run supply curve being upward sloping here's the general conclusion though the long-run supply curve is always more elastic than the short-run supply curve so we can have a situation like this where we have our short-run supply curve and we have our long-run supply curve but notice that the long run supply curve can be upward sloping but it's always going to be more elastic than the short-run supply curve and if you think back to our last elasticity chapter we talked about how both demand and supply tend to be more elastic over longer time horizons that's what we're seeing right here in the extreme case the long run supply curve can be perfectly elastic but this is very rare typically most industries are going to be increasing cost industries where as they expand that increased demand for inputs drives the price of those inputs up and so as an industry expands the price at which the firms earn zero profit is rising which causes our long-run supply curve to be upward sloping so let's finish this chapter by kind of summarizing what we've learned I'm gonna sneak it in
right over here let's put a little box right here this is going to be kind of our what we've learned in this chapter so at the most basic level we learned the theory of the supply curve for a competitive firm and what we saw is that firms maximize profit by producing the quantity where price equals marginal cost if you think about what that means that means that in a perfectly competitive market you can be assured that the price you're charged is equal to their cost of production from a consumer's perspective it doesn't get any better than that what we're seeing is that these firms because they have no market power they're price takers they have no control over the price they end up earning a zero economic profit and so price ends up being driven to equal the average total cost and not just any average total cost it ends up being driven to equal the minimum average total cost each firm is producing at its efficient scale so what we've got here is we've got efficiency in terms of consumption the price that consumers pay is equal to the cost of production and we have efficiency in terms of production the firms are producing at their efficient scale so total surplus is going to be maximized there's going to be no deadweight loss with a perfectly competitive market so as a consumer each firm is charging you a price that's equal to their cost of production what we'll see is that in other types of markets where the firms have some control over the price that's not going to be true those types of firms will not produce at their efficient scale the price that they're going to charge you is going to be higher than their cost of production in some cases they're going to be able to earn a positive economic profit but in a perfectly competitive market they're going to end up earning zero profit let's just put that here all firms are going to earn zero profit in the long run at least given the simplifying assumptions that we made that
they have the same technology in the same market for inputs if you were to relax that and and let like we said some firms have better technology or cheaper inputs then you can have a situation where some of the firm's will earn a positive profit but those marginal firms are going to earn a zero profit and the marginal firm would be the first firm to exit if price fell okay or the last firm to have entered the market but before profit went to zero so what we're going to start talking about in future videos are going to be firms where or markets where firms have some some control over the price where firms have some market power but this is a good place to start because we're going to compare everything to this situation so I'll see you in another video
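The efficiency claim in the summary above (total surplus is maximized and there is no deadweight loss at the competitive quantity) can be checked numerically. This is a hedged sketch with hypothetical linear curves: demand P = 62 − Q and supply P = 2 + Q/5, which intersect at Q* = 50, P* = 12.

```python
# Hedged numeric check (hypothetical linear curves, not from the lecture):
#   demand:  P = 62 - Q        supply: P = 2 + Q/5
# Total surplus at a quantity Q is the area between the demand curve and
# the supply curve from 0 to Q.

def demand(q):
    return 62 - q

def supply(q):
    return 2 + q / 5

def total_surplus(q_total, steps=100_000):
    # numerically integrate (demand - supply) from 0 to q_total (midpoint rule)
    dq = q_total / steps
    return sum((demand((i + 0.5) * dq) - supply((i + 0.5) * dq)) * dq
               for i in range(steps))

# the curves cross where 62 - Q = 2 + Q/5, i.e. Q* = 50 (and P* = 12)
ts_market = total_surplus(50)   # surplus at the competitive quantity
ts_low = total_surplus(40)      # producing too little leaves surplus unrealized
ts_high = total_surplus(60)     # producing too much destroys surplus as well

print(round(ts_market), round(ts_low), round(ts_high))
```

The competitive quantity yields the largest total surplus (1500 here, versus 1440 at either distorted quantity), which is the no-deadweight-loss result the summary describes.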
Principles_of_Microeconomics
Chapter_7_Consumer_Surplus_Producer_Surplus_and_the_Efficiency_of_Markets_Part_2.txt
what we want to do in this video is we want to take a look at the efficiency of competitive markets we want to use consumer surplus and producer surplus to really think about whether or not a free market is a good thing compared to other ways of organizing economic activity so we've got consumer and producer surplus let's think about what our objective is going to be here our goal is going to be to maximize the economic well-being of buyers and sellers okay and so first we need to make sure that we're clear that what we're talking about here is economic well-being there may be other ways of defining well-being that are worth talking about and there are but what we want to do right now is talk about economic well-being okay essentially we're going to think about the well-being of buyers and well-being of sellers we're going to add that together and we're going to define what we call total surplus so total surplus is just going to be consumer surplus plus producer surplus okay we know what each of these things look like it's the area under the demand curve and above the price this is the area under the price and above the supply curve sometimes this is known as economic surplus so you may hear me I try in this class to just call it total surplus but sometimes I'll slip up and call it economic surplus they mean the same thing so now let's think about what total surplus looks like I'm usually going to abbreviate total surplus TS so let's draw a picture of our demand curve and our supply curve here's our demand curve here's our supply curve total surplus I'm not going to put the price on here but if we have the market determined price it would be right there this area up here would be consumer surplus and this area down here would be producer surplus we're not going to worry about which is which this is going to be total surplus so total surplus is the area under the demand curve and above the supply curve it's got to be both of
those at the same time there's a lot more area that's under the demand curve but it needs to be both under the demand curve and above the supply curve okay so you can see right now that the biggest that total surplus can be is this triangle right here just from a geometric point of view there's other area that's above the supply curve out here but that's not under the demand curve okay so there's our definition of total surplus our goal is going to be to think about how to maximize total surplus in the market we're not interested in putting at this point consumers on a different level than producers if I were to ask most students what do you identify with being a consumer or a producer most people identify with being consumers most people consider themselves a consumer but you have to think about the fact that you're also a producer anytime you're working at a job you are a producer you're selling your labor so all of us are both consumers and producers okay so it's not the case that this just represents the firms this represents anybody selling anything like their labor okay so we're gonna just add up the well-being of buyers and the well-being of sellers because all of us are both of them and we're gonna think about how can we maximize the well-being of all of the people on both sides of this situation here so our goal is to maximize total surplus we're going to talk about some allocations different quantities that can be produced in a market and we will say that any particular allocation is efficient if it maximizes total surplus so when I say an allocation is efficient then what I mean is that it maximizes total surplus okay any allocation any outcome that does not maximize total surplus is not efficient now let's think about the market equilibrium okay let's think about how a market works let's talk about where everybody is in a market if we put up here our demand curve and our supply curve
let's think about who's buying and who's selling now we know that the price in a market is determined not by the buyers or by the sellers we're talking about a competitive market here neither the buyers nor the sellers have any control over price price emerges as a result of the interaction of the buyers and the sellers consumer surplus is up here producer surplus is down there but right now let's talk about who's buying and who's not buying who's selling and who's not selling now we're going to be thinking in this section here about what's the right allocation what's the right quantity to have produced and let's start this conversation by thinking just conceptually about somebody that we're going to call a benevolent social planner and the benevolent social planner is going to be some being that knows everything can control everything and wants everybody to be as well-off as possible and so the conceptual question we're going to ask is what would the benevolent social planner do what quantity down here out of all of these quantities that could be produced is the quantity the benevolent social planner would choose to have produced and consumed and who would the benevolent social planner choose to have consuming and not consuming who would the benevolent social planner choose to produce it and not produce it let's think our way through that by thinking about what would happen if let's suppose we're in a classroom and all of us take some trip on a plane and we crash on some deserted island and there we are stuck and we've got to decide who's gonna engage in different jobs right and so since I'm leading the field trip let's suppose I'm in charge we've got to decide who's going to go out and find water who's going to go out and get some shelter you know build something that can give us shelter and who's
going to go out and find food for us to eat well the way we've talked about that type of decision in a principles of macro class you know that you tend to specialize the gains from trade are biggest when people specialize in what they have the comparative advantage in well if it's my job as the leader of the group to somehow decide who's going to do those jobs I don't know what everybody has a comparative advantage in and so it would be hard for me to figure out who should go do which job so we could just let everybody go out and do whatever they want but our goal is to maximize total surplus and so the size of the economic pie is not going to be as big as possible if everybody goes out and ends up not concentrating on the good or the job for which they have the comparative advantage on the other hand let's think about a different situation where maybe we have to allocate who gets a good and who doesn't so let's suppose we're all in a room and there's 40 of us and I've got a big pizza with 40 slices and we try to decide who's going to get a slice and maybe who's not one way that we could allocate the slices of pizza is to give everybody a slice but let's suppose that our goal is to make everybody as well-off as possible well if we give everybody a slice then that probably means that I'm gonna give a slice to somebody who doesn't really want a slice so suppose there's somebody who doesn't like pizza or somebody who's already eaten and they're not that interested and if I do that then I've misallocated the good I've given a slice to somebody who didn't want it and I missed out on an opportunity to give it to somebody else who did want it let's make the problem even harder let's suppose there's 40 people in the room and there's only 20 slices of pizza and now you have to decide who gets a slice and who doesn't you got to look somebody in the eye and say nope you don't get one well ideally the way we would
allocate it is to make sure that the people who value it the most get it and the people who don't value it at all we don't allocate any to them both of those decisions the decision of who's going to produce a good and who's not who's going to consume a good and who's not the benevolent social planner would have no problem making those decisions in terms of who did the jobs and who consumed the good because the benevolent social planner would be able to look at the people and know how much they value a slice of pizza and would allocate the pizza to the people who value it the most the benevolent social planner would be able to look at a person and know what their comparative advantage is compared to the other people and have them do that job so let's suppose we've got a whole bunch of people here little tiny people and we've got to organize economic activity we've got to make a decision about should we organize economic activity with markets allow people to buy and sell what they want within the bounds of the law or should we go with a socialist outcome where the government makes the decisions about how much of each good is produced and who gets what or a communist outcome where same thing the government plans who's gonna get it who's not who's gonna produce it how much is being produced that's really what we're trying to figure out which of those outcomes would be better which would the benevolent social planner choose let's think about how a free market works so actually let's summarize what we just said benevolent social planner I'm going to abbreviate bsp benevolent social planner in order to maximize well-being we just talked about the idea that the benevolent social planner would allocate the good to the people who value it the most and lest you think that that means that only the rich get it that's not what we're talking about here the value you place on something is related to income but it is not
completely determined by income just go back to the guitar example where we had the table of willingness to pay your willingness to pay for something does not tell us your income okay so the benevolent social planner would allocate the good to the people who value it the most and would allocate production to the most efficient producers so now let's think about what the free market does let's think about who consumes and who doesn't consume who produces and who doesn't produce so here's the price in terms of buyers the buyers are represented along this portion of the demand curve these people are the buyers they choose to buy not because the government told them they could or couldn't they choose to buy because for these people their willingness to pay is greater than the price here's the price here's their willingness to pay they choose to buy the good these people down here choose not to buy the good these are what we're going to call our non buyers they choose not to buy again not because the government told them they couldn't they choose not to buy because their willingness to pay is less than the price right here's the price here's how much they value it our producers are represented along this portion of the supply curve these are the producers let's call them sellers and they choose to sell the good because they can sell it at a price that's higher than their cost of production so they choose to sell because the market determined price is higher than their cost of production these people out here are the non sellers they choose not to sell not because the government told them they couldn't but because the price is less than their cost of production here's how much they can sell it for here's what it would cost them to produce the good so notice that the buyers and the sellers are represented on this portion of the picture and the non buyers and non sellers are represented on this portion
of the picture in a free market that price signal causes the buyers and non buyers the sellers and non sellers to sort themselves voluntarily into two groups those who buy and those who don't those who sell and those who don't notice that if the buyers are the people with the highest willingness to pay the non buyers have a low willingness to pay the sellers are the most efficient sellers in a free market and the non sellers are the relatively inefficient people so notice that the free market does the exact same thing as what the benevolent social planner would do that's not enough yet to say that the free market maximizes total surplus what we have to do now is think about another picture let's think about the demand curve and the supply curve remember that the demand curve represents value and the supply curve represents cost of production here's the free market quantity we want to think about what would happen if we chose a different quantity so what if we're government planners and we just choose remember we don't get to see this picture what if we just choose this quantity q1 instead of Q what's the problem with q1 if we just stopped right there well if we stop right there we can go up to the supply curve and up to the demand curve to see at the margin how much people value it and how much the cost of production is if we go up to the supply curve we see that right there's the cost of production I'm going to call it cost of production one at the margin for that quantity there's how much it costs to produce that particular unit if we go up to the demand curve we can see how much consumers value it there's willingness to pay at the margin we'll call it willingness to pay one we see that at this quantity of q1 willingness to pay is greater than the cost of production the marginal benefit is bigger than the marginal cost we shouldn't stop there in terms of total surplus you can see that if we stop there we would get this amount of total surplus but we would
miss out on this triangle right there any missed total surplus we call deadweight loss so if we stopped at q1 we would experience some deadweight loss or we could think about what happens if the government just decided to produce that quantity what's wrong with q2 well if we go up to the demand curve we can see the value at the margin there's willingness to pay 2 and here's the cost of production at the margin CoP 2 you can see this quantity is not a good quantity because at the margin it costs more than what people value it right so the problem with this quantity is that willingness to pay at the margin is less than cost of production the benevolent social planner would never stop here because we would miss out on this amount of total surplus the benevolent social planner would never move production out to this point because we would also create deadweight loss out here what quantity would the benevolent social planner choose well that quantity from a geometric point of view that's the quantity that maximizes the area that's under the demand curve and above the supply curve it maximizes well-being from an economics point of view in terms of the equal marginal principle which we'll talk about later on what we see is that total surplus is maximized when marginal benefit is equal to marginal cost at that quantity at the margin willingness to pay is equal to the cost of production this is the quantity that maximizes total surplus so what we can say is the free-market outcome ends up being the same outcome as what the benevolent social planner would choose the same people are consuming and producing the good those who value it the most and the most efficient producers and in terms of the quantity that gets produced the benevolent social planner would choose the quantity that maximizes well-being maximizes total surplus and creates no deadweight loss any other quantity creates deadweight loss so let's think about kind of a summary of what we're seeing here
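To make the deadweight-loss comparison concrete, here is a small numeric sketch with a hypothetical linear market (demand P = 100 − Q and supply P = Q are my own illustrative curves, not the lecture's). Total surplus peaks at the equilibrium quantity of 50, and both stopping short (the q1 case) and overshooting (the q2 case) leave surplus on the table.

```python
# Hypothetical linear market (illustrative numbers, not the lecture's):
#   demand: P = 100 - Q   (willingness to pay at the margin)
#   supply: P = Q         (cost of production at the margin)
# Equilibrium: Q* = 50, P* = 50.

def total_surplus(q):
    # Area between the demand and supply curves from 0 to q:
    # the integral of (100 - x) - x = 100 - 2x, which is 100q - q^2.
    # Units past Q* = 50 have cost above value, so they subtract
    # from surplus, exactly like the q2 overproduction case.
    return 100 * q - q ** 2

print(total_surplus(50))  # 2500 -> maximum total surplus, no deadweight loss
print(total_surplus(30))  # 2100 -> stopping at q1 = 30 creates a DWL of 400
print(total_surplus(70))  # 2100 -> pushing out to q2 = 70 also loses 400
```

Sweeping over every quantity confirms that no choice other than the equilibrium quantity does better, which is the benevolent social planner's answer.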
here's how you summarize kind of the market outcome and whether or not it's good if we were planners let's suppose we were government planners and we are deciding which quantity to produce here's the information we've got right there remember as planners we don't get to see the demand curve for every good and service and the supply curve for every good and service so as a central planner whether we're talking about communism or socialism we just have to pick a quantity maybe we pick the same quantity that was produced last year let's pick that quantity and we've got to pick the quantity in this market we've got to pick the quantity in lots of other markets and the problem is we don't know what the demand curve and the supply curve look like so it could be that if we could see the demand curve and the supply curve we would see that that's what they look like and now we have chosen a quantity that results in not as much well-being as if we had chosen the free market quantity or allowed the free market to operate and result in that quantity so the general conclusion that we can get is that the best a planned economy could hope for is to tie the free market it can't beat it the free market results in a maximization of total surplus there's no other way of organizing economic activity that is better than that now we have to talk about here at the end the fact that that does not mean free markets are perfect there are times when the outcome of a free market is not the best possible outcome those are times when there's an externality present whether it's a positive externality or negative externality or times when we're talking about a public good and we don't have time to talk about those right now we'll talk about those possibly later but let me just leave you with the conclusion that we've just demonstrated that free markets are the best way to organize economic activity they are not perfect but that's the place we need to start central planning the planners regardless
of how good their intentions are do not have the information necessary to even get close to the amount of well-being that would be created with a free market I'll see you in another video
Principles_of_Microeconomics
Chapter_21_Theory_of_Consumer_Choice_Utility_Maximization.txt
in this video we're going to talk about the theory of consumer choice so this chapter is really based upon a model that economists call the model of utility maximization so we're going to be thinking about utility and utility the way that economists use the word is really kind of in place of human well-being sometimes you hear people talk about utility in terms of happiness but it's not necessarily happiness it's just that you consume goods and services that provide you with utility certainly some of them can provide you with happiness but we think of it as well-being okay so basically we're going to think about how to use that model to understand the decisions that a consumer makes so in this chapter we're going to be thinking about where demand curves come from and we're going to be thinking about some of those things that you would have learned about when we first introduced demand and supply to you things like the income and substitution effect in this chapter we're going to actually be able to see those things in action so what we want to do is we want to think about the decisions that we make in terms of buying goods and services so we're going to reduce the consumer's problem down to something that's as simple as we can possibly reduce it to and so if you think about your goal as a person your goal is going to be to take the income that you've got the resources that you've got and use that income to purchase the best combination of goods and services that you possibly can so we're going to be thinking first about what we're going to call a budget constraint so you are constrained by the income that you've got by whatever your budget is so the goods and services that you can purchase are the set of goods and services that you can afford now we're going to suppose that you can't borrow money and there's no saving that takes place we're going to think about this as just a
single time you've got a certain amount of money and you want to spend that amount of money to buy the best combination of the goods that you can and there's no reason to save any money for tomorrow and you can't spend more money than you've got so we're going to keep everything really simple we're going to start by thinking about just two goods that's as simple as we can get we're going to think about pizzas and we're going to think about Cokes we're going to consider a consumer who has to take a certain amount of money let's say that the income that they've got or their budget is five hundred dollars okay so you've got five hundred dollars maybe this is for a month and you've got to decide well how should I spend that on pizzas and cokes okay those are the only two goods that there are don't worry about anything else rent or fuel or anything your life in this problem consists of deciding how to spend your 500 on pizzas and cokes so now let's think about some prices let's suppose the price of a pizza I'm going to call that P with a little p subscript price of pizza let's suppose ten dollars per pizza and let's suppose that the price of a Coke is say two dollars so now we want to think about the combinations that you could purchase and there's an infinite number of combinations one possibility would be to take all of your money and spend it all on say pizzas if you took all 500 of your dollars and spent that money completely on pizzas at ten dollars apiece you could purchase 50 of them you could also spend all of that money on just Cokes if you wanted and if you had five hundred dollars and you spent it all on Cokes at two dollars each you could buy 250 Cokes so what I want to do is create a table those are just two possibilities and probably no reasonable person would want to spend all their money on one good or all their money on another good typically we like to have some of multiple goods so I want to put together a table
here that just includes some possibilities I'm going to say here pizzas this will be the quantity of pizzas that you could consume and then the quantity of Cokes that you could consume let's say spending on pizzas spending on Cokes and then let's put our total over here that's the total amount of money that we've spent now remember that total is going to add up to 500 because we can't spend more than the 500 that we've got so let's start where we spend all of our money on pizzas if we purchase 50 pizzas and zero Cokes then we will have spent 500 on pizzas and zero dollars on Cokes that's our five hundred dollars total so that's one possibility but chances are we're not going to want to spend all of our money on pizzas another possibility and there are an infinite number of possibilities if we assume that Cokes and pizzas are infinitely divisible don't worry about that let's suppose we decided to spend money on 40 pizzas well at ten dollars each that's going to be four hundred dollars right so we've spent four hundred dollars on pizzas that leaves us a hundred dollars that we can now spend on Cokes and at two dollars each we can purchase 50 Cokes so that's a hundred dollars spent on Cokes that's our 500 so that's another possibility we could purchase 40 pizzas and 50 Cokes or we could purchase 30 pizzas that would be a total amount of three hundred dollars spent on pizzas that would leave us two hundred dollars that we could spend on Cokes and at two dollars each that means we could buy a hundred Cokes so there's our 200 and those two add up to our 500 and you can see where we're going with this we're just going to be thinking about buying fewer pizzas and more Cokes and these two numbers are always going to add up to 500.
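The whole table can be generated in a few lines of code. This sketch just automates the arithmetic being done here, using the lecture's numbers of $500 income, $10 pizzas, and $2 Cokes:

```python
# Budget of $500, pizzas at $10, Cokes at $2 (the lecture's numbers).
income, p_pizza, p_coke = 500, 10, 2

rows = []
for pizzas in (50, 40, 30, 20, 10, 0):
    spent_on_pizzas = p_pizza * pizzas
    cokes = (income - spent_on_pizzas) // p_coke  # spend the rest on Cokes
    rows.append((pizzas, cokes, spent_on_pizzas, p_coke * cokes))

for pizzas, cokes, sp, sc in rows:
    print(f"{pizzas:>3} pizzas  {cokes:>3} Cokes  ${sp:>3} + ${sc:>3} = $500")
```

Each printed row matches a row of the table, with spending on the two goods always exhausting the $500 budget.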
so another possibility would be to purchase 20 pizzas that's two hundred dollars that would leave us three hundred dollars that we could spend on Cokes at two dollars each that's a hundred and fifty so there's our 300 and again those add up to 500 the next one would be 10 pizzas 200 Cokes that would be a hundred dollars spent on pizzas four hundred dollars spent on Cokes and then we get to the other end where we're buying no pizzas we're instead purchasing 250 Cokes so we've spent nothing on pizza all five hundred dollars on Cokes and that total that we've spent is 500 so those are some possibilities now our question is well which one of these is the consumer going to consume and remember there's a whole bunch that we don't have on this table you could buy 31 Cokes or 31 pizzas or you know 15 pizzas these are just some representative points what I'm going to do though is I want to graph these and let's look at what this looks like visually what I'm going to do is I'm going to put the quantity of pizzas on the horizontal axis and the quantity of cokes on the vertical axis and we're just going to think about graphing the possibilities here now in an economics class students are used to typically seeing a demand supply type picture so you're probably used to seeing pictures where we've got a quantity on the horizontal axis and we've got a price on the vertical axis or vice versa you have to be careful because this is not that picture this is a picture where we've got a quantity on this axis and a quantity on that axis and we're thinking about the combinations of these two goods Cokes and pizzas that we can consume so one possibility if we're thinking about Cokes is that we could spend all of our money on Cokes and we could buy 250 of them so that point right there is a point that is going to be on this budget constraint and then
there's a whole bunch of points in between there and the other end point is where we purchase zero Cokes but 50 pizzas that's another point and then if we connect those two points we get the budget constraint now we could go through and we could graph each one of these points but each one of those would lie on that budget constraint once you know the end points then you just draw a straight line in between them so that represents our budget constraint sometimes you might hear people talk about it as maybe a budget line I've even heard it called the consumption possibilities frontier occasionally I will use that terminology because it makes sense these are all of the possible bundles that you can consume okay let's talk about what the slope of this thing is so the slope here if we were to think about it the slope of this is going to be the rise over the run it's going to be 250 divided by 50. the slope of this is going to be 5. I'm just going to write 5 over 1. you go down 5 over 1 down 5 over 1. by the time you've gone down 250 you've gone over 50.
okay now I'm also going to show you how to graph that in general it's actually very easy so here's what this budget constraint looks like it's actually very simple we're going to spend all of our income and so our income is going to be equal to the amount that we've spent on say Cokes the price of Cokes times the quantity of Cokes we've consumed or purchased this is the amount that we've spent on Cokes plus the price of pizzas times the quantity of pizzas that we've purchased so this is the amount spent on Cokes the amount spent on pizzas if we add up the amount we've spent on each it has to add up to our income okay so a lot of times we might call that just I now here's what I'm going to do I want to graph this thing now if I want to graph it I need to solve for the thing that I've got on the vertical axis right up there I need to solve for the quantity of Cokes so I'm going to move this thing to the other side I'm going to have income minus the price of pizzas multiplied by the quantity of pizzas is equal to the price of Cokes multiplied by the quantity of Cokes all I've done is moved that to the other side and of course I called that I now just for convenience whatever I'm solving for I always like to have on the left hand side so I'm just going to reverse everything I'm going to rewrite it this way price of Cokes times the quantity of Cokes is equal to income minus the price of pizzas times the quantity of pizzas I haven't done anything I just reversed it in order to get that quantity of Cokes by itself I simply need to divide by the price of Cokes right so if I divide by the price of Cokes it's going to cancel out over here that's going to leave me just the quantity of Cokes which is what I want that's what I'm trying to solve for so on my left hand side I'm going to have the quantity of Cokes we've got two terms each of which is divided by the price of Cokes so I'm going to break those into two
terms I'm going to have income divided by the price of Cokes minus the price of pizzas divided by the price of Cokes multiplied by the quantity of pizzas that right there is the functional form for that if we were to plug in our income which is equal to 500 and plug in the price of Cokes 2 and the price of pizzas 10 then what we would get is our income 500 divided by the price of Cokes 500 divided by 2 would be 250. this is your vertical intercept 250 that's there this is the slope the price of pizzas divided by the price of Cokes 10 divided by 2 is 5 that's where that slope comes from okay so you don't have to write it out like this you can work through it just by thinking about the end points these endpoints are how many Cokes you could buy if you spent all your money on Cokes well if you had five hundred dollars and a Coke costs two dollars you can buy 250. this endpoint is how many pizzas you can buy if you spent all of your money on pizzas well if you've got five hundred dollars and a pizza costs ten dollars you can buy 50.
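The rearranged budget constraint just derived, Qc = I/Pc − (Pp/Pc)·Qp, is easy to check in code using the same numbers:

```python
# Budget line solved for the good on the vertical axis:
#   Qc = I / Pc - (Pp / Pc) * Qp
income, p_pizza, p_coke = 500, 10, 2

def cokes_affordable(q_pizza):
    # how many Cokes remain affordable after buying q_pizza pizzas
    return income / p_coke - (p_pizza / p_coke) * q_pizza

print(cokes_affordable(0))   # 250.0 -> vertical intercept (all money on Cokes)
print(cokes_affordable(50))  # 0.0   -> horizontal endpoint (all money on pizzas)
print(p_pizza / p_coke)      # 5.0   -> slope: give up 5 Cokes per extra pizza
```

The two intercepts are exactly the endpoints described above, and the price ratio of 10 to 2 reproduces the slope of 5.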
so you can easily graph this budget constraint with just this information you don't have to do that but that may give you a better idea of where it's coming from let's talk about what would cause that budget constraint to shift actually let's first think about what it shows us so the budget constraint shows us all of the bundles that we can consume all the bundles that we can afford so we can afford this bundle we can afford that bundle we can afford any bundle on this budget constraint we can of course afford any bundle in here right for example if we wanted we could buy one pizza and one Coke that would cost us twelve dollars that's some point way down in here right but we want as much well-being as we can possibly get and there's no reason to save our money for tomorrow because there's no tomorrow this picture represents everything okay we could generalize this model to a model that's much more complicated and has multiple time periods we don't want to do that right now so we could buy that bundle we can buy any bundle in here but we can't buy that bundle can't buy that bundle we can't buy any bundle outside of this budget constraint now let's think about what happens if we start to change any of these numbers if we change our income or if we change the price of pizzas or the price of Cokes let's suppose that now income is say a thousand dollars instead of 500 let's suppose we have twice as much income so let's suppose income is now equal to a thousand now rather than doing this on that picture I don't think I've got enough space up there to do it so what I'm going to do is I'm going to draw quickly another version of this and I'm going to put it right down here 250 here's my budget constraint goes down here to 50.
That's the quantity of pizzas, and this is the quantity of Cokes, so there's my budget constraint. Let's think about what happens now that we've got a thousand dollars instead of five hundred. Well, that means if we were to spend all of our money on Cokes, instead of just being able to buy 250, we'd be able to buy twice as many, because we have twice as much income: we'd be able to buy 500 instead of 250. Or if we were to spend all of our money on pizzas, and we've got a thousand dollars and a pizza costs ten dollars, then we can buy a hundred of them. So instead of being 50, our endpoint would be a hundred, and if we draw that, we get a budget constraint that is parallel to the old one. It shows us that now, with twice as much income, we can consume a whole lot of bundles that weren't affordable before with only five hundred dollars. This was the set of bundles, or combinations of pizza and Coke, that we could consume: anything in this set or on this constraint, and we're going to want to be on the budget constraint. But now, with a thousand dollars, we can consume any of these bundles, and again we're going to want to be on the budget constraint, so we're going to choose a bundle somewhere on there. So if we increase income, it moves the budget constraint out parallel to itself; if we decrease income, it moves the budget constraint inward, parallel to itself. It's important that you recognize that the slope of those two constraints is exactly the same: if we calculate the slope of this one, it would be 500 divided by 100, which is 5, just like the slope of that one. So changing income shifts the budget constraint out or in, parallel to itself. Now let's talk about a change in one of the prices; let's think about what happens if we change either the price of pizzas or the price of Cokes. Clearly, if we increase either of the prices, it makes us worse off: increasing the price of pizzas decreases the
number of bundles that we can consume, and increasing the price of Cokes also makes us worse off; it means we can afford fewer bundles than before. We'll look at a picture here in a second. Decreasing either price is going to let us afford more bundles. So let's suppose the price of pizzas falls to five dollars: it started out at 10 and falls to 5. I'm going to do the same thing I did up here and draw this budget constraint again. The budget constraint starts out like this, 250 up here, 50 down here; down here we've got the quantity of pizzas, and up here we've got the quantity of Cokes. Now the price of pizzas is going to fall to five; it's going to get cut in half. Think about what that means. I increased our income to a thousand up there, so let's take our income back to 500 and start again with those original numbers; that's what this budget constraint is based on. Now, if the price of pizzas goes from 10 to 5, that means if we spent all of our money on pizzas, with five hundred dollars and each of them costing five dollars, we'd be able to buy a hundred of them instead of just 50.
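The effect of that price cut can also be checked with numbers. A minimal sketch, assuming the same setup as above (the function name and the dictionary layout are mine): the Coke intercept stays put, the pizza intercept doubles, and the slope flattens, which is the pivot the lecture draws next.

```python
# Geometry of the budget constraint before and after the pizza price
# falls from $10 to $5, holding income at 500 and the Coke price at 2.

def budget_geometry(income, p_pizza, p_coke):
    return {"coke_int": income / p_coke,    # endpoint if all income on Cokes
            "pizza_int": income / p_pizza,  # endpoint if all income on pizzas
            "slope": p_pizza / p_coke}      # steepness of the constraint

before = budget_geometry(500, 10, 2)   # slope 5.0, pizza intercept 50.0
after = budget_geometry(500, 5, 2)     # slope 2.5, pizza intercept 100.0
print(before)
print(after)
```

The unchanged `coke_int` is why the constraint pivots around the Coke endpoint rather than shifting in parallel.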
So if the price of pizzas falls from ten to five, we're going to be able to buy a hundred pizzas. It's not going to affect the number of Cokes we can purchase, because the price of Cokes hasn't changed, but notice that this budget constraint is now going to pivot out, just like that. This thing pivots, and it again opens up a whole bunch of bundles that we can consume that we couldn't afford before. And again we're going to want to be on the budget constraint, so it allows us to consume any of these bundles that we want; we couldn't have afforded those when the price of pizzas was ten dollars. So if one of the prices changes, notice that the slope of the budget constraint changes. The slope of this budget constraint is 250 over 50; the slope of that thing is 5. The slope of this new budget constraint is going to be 250 over 100, which gives us 5 over 2. So the budget constraint is not as steep as before; it's half as steep in this case. Now, what we did there was decrease the price of pizzas. We could instead increase the price of pizzas, or we could change the price of Cokes; we could make the price of Cokes go up or down. So there are four possibilities: there are two prices, and either of them could go up or down. Let me quickly draw those four possibilities for you. I'm not going to label everything; I always get on to my students when they don't label stuff, but I'm going to do that for the sake of brevity here. I'm going to draw a budget constraint in each of these, just like the one we're starting out with. On the axes of each of these pictures would be the quantity of pizzas and the quantity of Cokes. Let me label one of them: there's quantity of pizzas and quantity of Cokes. So now, in a different color here, I'm going to
talk about what happens if each of these prices changes. Let's do this one first. What we did there was a decrease in the price of pizzas, and what we saw was that it pivoted this thing out like that; it allowed us to purchase more pizzas than before, so it pivoted out in this direction. In this picture, let's think about what would happen if we increased the price of pizzas. In that case, it's going to pivot our budget constraint in, just like that; it's going to cause us to be able to afford fewer bundles than before. (I'm using the word bundle here to refer to some combination of pizzas and Cokes, so don't be confused by that.) Now let's do a decrease in the price of Cokes. Cokes are up here on the vertical axis; if the price of Cokes goes down, we can now buy more Cokes than before. It won't change this endpoint, because that's how many pizzas we can purchase if we purchase only pizzas, but decreasing the price of Cokes is going to pivot this thing out, just like that. If we increased the price of Cokes, it would pivot in, just like this. So you can see that all of those changes would result in a change in the slope of the budget constraint. OK, so now we understand something about the set of bundles our consumer can choose from: any bundle on this budget constraint, or any bundle in here. But we've already talked about the fact that we don't want to be inside here; we want to purchase as much as we can, we want to use all of our income, so we're going to choose one of the bundles on the budget constraint. The question is: which one? Which one would make our consumer better off? Well, what we need to do is combine this information about what we can afford with some information about what we like. We need to represent the preferences of our consumer, and the way we do that is with what's referred to as an indifference curve. So I want
to think for a second about utility. I'm going to draw a little bit bigger picture, the same thing: the quantity of pizzas down here, the quantity of Cokes up here, and we're going to think about the preferences of the consumer. I've said already that utility represents the well-being of the consumer, so the goal of our consumer is to choose whichever bundle gives them the most utility, the most well-being. If you haven't looked into utility before, if this is the first you've heard of it, or if you've heard of it before but nobody's ever really told you much about it: utility is an interesting concept, and it forms the basis of how economists think about the behavior of consumers. The consumer utility-maximization model is really one of the foundational models of how economists model behavior. We don't really believe that you have this utility calculation going on in your head, but it turns out that we can model your behavior as if you do. One of the first people to really spend a lot of time thinking about utility was an economist named Jeremy Bentham. If you've never looked into Jeremy Bentham, he's a fascinating person. He really thought, in terms of utility, about how we can take the things around us and get the most human well-being out of them. He even went so far as to believe that we shouldn't waste a person's body when they die. Jeremy Bentham believed that the human body, even once a person is dead, could be of some benefit, could provide some utility, to other people. In fact, in his will, Bentham outlined a thing he wanted built called the auto-icon, and the auto-icon is basically a big oak cabinet. It's a cabinet inside a cabinet, kind of like a display cabinet with an outer cabinet that the inner cabinet slides back into, but inside this inner cabinet is Jeremy Bentham's skeleton, and his head, which has been preserved. He
believed that we shouldn't waste a person's body when a person dies; we should put it to use and make it available for other people to study, maybe for medical purposes, or maybe so that if you were missing Jeremy, you could go visit this auto-icon, and there he is, right there. So it's not a painting; it's actually Jeremy Bentham. It turns out, though, that in the preservation process the head went bad, and it is pretty grotesque looking. The skeleton has Jeremy's clothes on it. It's fascinating; just Google "auto icon" and you will see it. Jeremy Bentham is one of the founders of the utility model we use now. We use it differently than he conceptualized it, since we've had advancements since then, but look up the auto-icon; it's pretty fascinating. So here's what we're going to do: we're going to think about how to represent the preferences, the utility, of the consumer. Let's suppose we think about a bundle up here. That bundle would correspond to (I'm just going to put some little dotted lines to indicate this) a whole lot of Cokes and not much pizza. We're going to call that bundle A. So there's A: the consumer at that point has a whole lot of Cokes and not much pizza. And let's not pay attention to these numbers; let me give you some different numbers, so ignore those for right now. Let's suppose this is just a bundle for one day, a bundle that maybe corresponds to six Cokes and a half slice of pizza. So you're sitting there with six Cokes and a half slice of pizza. Now, here's what I want to do: I want to take some of those Cokes away from you, so I'm going to move down that much, and I'm going to give you some pizza to compensate you for losing some of those Cokes. What I would do is say, OK, suppose I take, say, two Cokes away from you: how much pizza would I need to
give you to where you're indifferent between the new combination and the old one? The old one was where you had six Cokes and a half slice of pizza. I'm going to take two Cokes away, so now you're only going to have four Cokes. How much pizza do I need to give you until you don't care whether you're at this point or at the new point, which we're going to call B? At B you've got fewer Cokes, but you've got more pizza, and probably, if you thought about it a little bit, you'd say, OK, if you take two Cokes away from me, give me, I don't know, maybe one slice of pizza. So now you've got four Cokes and one and a half slices of pizza; the numbers aren't that important. If you had four Cokes and one and a half slices of pizza, you'd be indifferent: you don't care whether you're at A or B. Now let's do that again. I'm going to take another two Cokes away from you, so I want this distance to be about the same as that; it looks like I went a little bit too far. So I'm going to take some more Cokes away from you now, and I'm going to give you some more pizza to compensate. Think about how much pizza you need if I take the same amount of Cokes away from here as I did there. Probably, from here, you're going to need to be compensated with more pizza, because as I take more and more Coke away from you, you've got less and less Coke. When you've got a bunch of Coke and not very much pizza, I could take a lot of Coke away from you, give you a little bit of pizza, and you're fine. But now we're getting to a point where you don't have much Coke, so I'm going to need to compensate you with a larger amount of pizza than I did from A to B. So when we go down here to point C, and I take more Coke away from you, I'm going to need to give you more pizza to compensate, because now you don't have very much Coke. What if I do this again? If I were to take more Coke away, you're going to say, hold it, wait, I'm going to have hardly any Coke now, and I'm going to say, OK, I realize
that, so I'm going to need to give you a lot of pizza to compensate you. Let's suppose I give you this much pizza; there's point D. Now, exactly where those points would be depends on your preferences, on how much you like Coke and pizza. And if, through this whole thing, you've been saying, "I hate Coke, this makes no sense," hang in there: pretend like you like it, or pick two goods that you do like. If pizza makes you sick to your stomach, then choose something else. So what we've done here is find four points that you're indifferent between. These are just four of an infinite number of points you would be indifferent between; we could change the amount of Coke we take away and find intermediate points. But if I were to connect all of these, what we would get is what we call an indifference curve, and I'm going to label it with a U with a bar over it, because these are points that all provide you with the same amount of utility. You're indifferent between them: you don't care whether you're at point D or point A; both of those points give you the exact same amount of utility. So we call that an indifference curve. An indifference curve represents the combinations of pizza and Coke, or whatever two goods you're thinking about, that leave you with the exact same level of utility. Now let me draw another picture without all of those points; I'm just going to put an indifference curve in here and call it U1 with a bar over it, and let's put quantity of pizzas and quantity of Cokes on the axes. So there's an indifference curve: you don't care which of these points you have; they all give you the same amount of utility. And let's think about what would happen if we were to start at a point like that one right there, and then we give you more pizza and more Coke, so we move in that
direction to get to a point like that. Well, if you have more of both goods (before, we were giving you more of one but taking some of the other away; here I'm giving you more of both), that means you would have to prefer that point over that one. So the set of points on the same indifference curve as that higher point would form a different indifference curve; I'm going to call it U2. You get more utility from all of the combinations on this indifference curve than you do on that one. So the goal of the consumer, in terms of utility, is to reach the highest indifference curve, because higher indifference curves correspond to higher levels of utility. The way we describe that is that the consumer seeks to maximize utility; they seek to maximize their well-being. Now, before we put these two things together, let's talk about the properties of an indifference curve. I've got two indifference curves here, but I don't want to go on before we think about what's important in this picture. We already talked about the fact that higher indifference curves correspond to higher levels of utility: the consumer would rather be on this indifference curve than that one, or if we thought about an indifference curve out here, call it U3, it would represent combinations of pizza and Coke that provide higher levels of utility than either of those two indifference curves. So the consumer wants to be on the indifference curve that's farthest from the origin. Indifference curves always slope downward; they have to. We could never have an indifference curve that slopes upward, because if an indifference curve sloped upward (this is getting really cluttered here, but suppose we had an indifference curve that looked like that), that would mean that this bundle A and this bundle B provide the same level of utility. But that wouldn't make
any sense, because B has strictly more of both goods than A does. So indifference curves never look like that: they're never upward sloping, always downward sloping. Indifference curves can also never cross; it's the same kind of argument, and I'm going to sneak it in here. If we had two indifference curves that look like this, and then another indifference curve came along like that and crossed, that wouldn't make sense, because if we had a bundle here, A, a bundle like B, and another bundle like C, here's what this says: because A and B are on the same indifference curve, the consumer is indifferent between A and B, and because A and C are on the same indifference curve, the consumer would have to be indifferent between A and C. So if they're indifferent between all three of these bundles, we've got a problem right here, because C has strictly more of both goods than B, and so the consumer would have to like C more than they like B. So indifference curves never cross; they nest together, just like the contour lines on a topographic map. In fact, that's exactly what this is. If you want a little more in-depth view of what's going on in this picture: this is actually a two-dimensional representation of a three-dimensional idea. If you go to my intermediate microeconomics videos, to the one on consumer preferences, you'll find a more in-depth discussion of what's happening here. We don't need to get that in-depth, but they can never cross. And indifference curves are always curved in this direction, bowed toward the origin; they're never curved the other way, like this. That would not obey the way consumers behave: a curve like that would say that when I've got very little of something, you don't need to give me very much to take even more of it away from me, and that disobeys one of the basic laws of how we know people behave. It relates to something called the law of diminishing marginal utility. It's not something we need to get into for
this video, but again, if you want to see more on that, go to my intermediate micro videos. So now we're going to talk a little bit more about what this tells us about your preferences, and then we're going to put these things together; but clearly I need to erase the stuff I've got on the board first. Let's think about the shape of these indifference curves. I'm going to give you a couple of unusual, let's say extreme, cases of indifference curves. Let's think about what would happen if we had two goods that were perfect substitutes. For a lot of people, Coke and Pepsi would be pretty close to perfect substitutes, or we could think about something like two nickels and one dime; those are perfect substitutes for each other. But let's go with our Coke and Pepsi example. Let's put the quantity of Coke down here and the quantity of Pepsi up here, and let's put some numbers on here: one, two, three, four, and one, two, three, four. Let's think about having, say, three Pepsis and one Coke: that bundle right there, three Pepsis and one Coke. Well, if these things are perfect substitutes, then we could take one Pepsi away, give you one Coke, and that would take us to that bundle, and you'd be indifferent between that point and this point. Three Pepsis and one Coke gives you the same utility as two Pepsis and two Cokes, or as one Pepsi and three Cokes, that point. So if we connected those, we would get indifference curves that are linear. There's an indifference curve; it's not curved like the others we saw a little bit ago, and an indifference curve that would provide you with more utility would also be linear. So those would be what our indifference curves look like: in the case of perfect substitutes, your indifference curves are going to be linear. They don't have to be linear at
a 45-degree angle; it depends on the trade-off between the two. Here I've got one can against one can, but we could have cans and two-liters, and then it would be a different slope. So that's the special case of perfect substitutes. We could also have perfect complements. For perfect complements, let's say we had left shoes and right shoes. Let's put some numbers up here: one, two, three, four, and one, two, three, four. Suppose you have two left shoes and two right shoes, so you've got two pairs. Well, if you've got two left shoes and two right shoes, then this bundle right out here, where you've got two left shoes and three right shoes, that third right shoe doesn't do you any good: you don't have anything to go with it. So you would be indifferent between these two bundles. Or if you've got two right shoes and three left shoes, that bundle: you're indifferent between all three of those points. An extra shoe, unless you get another one of the other kind to go with it, doesn't do any good; neither does two left shoes and four right shoes, or two right shoes and four left shoes. So in this case our indifference curves look like an L. (These are what are usually called Leontief preferences; you don't have to worry about that term.) If you've got three right shoes, you need a third left shoe to go with it, so this combination would be on a higher indifference curve, one that looks like that. So your indifference curves are right angles in the case of perfect complements. These are the two extremes, linear or right angles, but typically indifference curves are going to be curved, lying in between those two extremes. So now let's talk about what the consumer wants to do. The consumer wants to use the income they've got to get the most utility. Let's continue with our quantity of pizzas, quantity of Cokes example up here. We've got a budget constraint that looks like this, and let's just
think about a particular bundle. Here's what we're going to do: let's consider bundle A. The consumer can clearly afford that bundle, but we need to know something about the preferences of the consumer, so we're going to draw the indifference curve through point A, the curve that represents all the other bundles besides this one that the consumer is indifferent between. Let's suppose that when we draw the consumer's indifference curve, it looks something like this; we'll call it U1. Well, here's what we know: there are some bundles out here that are affordable. The consumer can clearly afford any of these bundles, for example bundle B, and bundle B would be preferred to bundle A, because it would be on a higher indifference curve. If we think about this bundle right down here, the consumer is indifferent between A and that bundle, but the consumer would have to like bundle B more than that one, because bundle B has strictly more of both goods. So B is better; B would be on a higher indifference curve. Let's draw the indifference curve through that point, and suppose it looks like this: we draw the indifference curve through bundle B, and it comes down here and looks something like that; I'll call it indifference curve U2. So we've increased utility: the consumer is better off at B than they were at A, and we know that because they're on a higher indifference curve, one farther from the origin. But we're not done yet, because there are still some bundles right out here that the consumer can afford that would be preferred to either A or B. You can probably see where we're going to end up: we're going to choose the bundle that corresponds to the indifference curve (I'll draw my indifference curve coming right down here) just
touching that budget constraint in one place and then heading off like that; I'm going to call it U3, right there at bundle C. Once we've gotten to bundle C, we're on the highest possible indifference curve, the one farthest from the origin. There is no other bundle in that picture that is affordable and on a higher indifference curve than U3. So this bundle right there, that amount of pizza and this amount of Coke, is the bundle that maximizes well-being for this consumer. When the consumer is maximizing their utility, or well-being, what we end up with is a place where an indifference curve is just tangent to the budget constraint. Let me draw a simpler picture of the same thing: quantity of pizzas, quantity of Cokes, our budget constraint, and now we know that the consumer is going to choose the bundle where an indifference curve is just tangent to the budget constraint; that's the bundle that maximizes utility. Notice that when this indifference curve and the budget constraint are tangent to each other, that means that at the point of tangency the slope of each of them has to be equal. We call the slope of an indifference curve the marginal rate of substitution, abbreviated MRS. We saw the slope of the budget constraint earlier: it's the price of this good divided by the price of that good, the price of pizzas divided by the price of Cokes. Now that we know the consumer will end up choosing the bundle where an indifference curve is just tangent to the budget constraint, we can use that to start to understand how a consumer will respond if the budget constraint changes; in other words, if we change income or prices, that's going to change the budget constraint, and it's therefore going to change the optimal point the consumer chooses. So let's start by thinking about how a change in income is going to affect the consumer's
choice. So let's start with a budget constraint and an indifference curve just tangent to it. I'm going to call the curve U with a bar over it; I think our textbook actually calls it I. It doesn't matter: some textbooks call it U, some call it I; it's just an indifference curve. I kind of like this notation better, because it tells you that utility is not changing along that indifference curve. Let's call our initial point A, and continue with our quantity of pizzas, quantity of Cokes example. So we're going to start at A; let's identify the amount of pizza and Coke we're consuming. At point A, the consumer is purchasing this many pizzas, which I'm going to call QPA, and this many Cokes, QCA. Now let's suppose income goes up. If income goes up, we know this budget constraint is going to move out parallel to itself; it shifts out something like this. So we have an increase in income, and our budget constraint moves out. Bundle A is no longer going to be the bundle the consumer wants to consume, and we know that because there are a whole bunch of bundles right in here that are now affordable and that would be on higher indifference curves. The consumer now has more money to spend, so let's suppose they end up at a new point of tangency on a higher indifference curve, at point B. Let's look at how much pizza and how much Coke they're going to consume there: QPB and QCB. Notice that the quantity of pizza went up and the quantity of Coke went up. Let's think about what this means; this is something you've seen before, or at least heard about. What this is telling us is that both pizza and Coke are normal goods, because we had an increase in income, and that increase led to an increase in the quantity of pizza and an increase in the quantity of Coke. Well, that's the
definition of a normal good: if income goes up and you buy more of the good, then it's normal, and that's happening for both pizza and Coke. Think about where we would have needed to put point B to make one of these goods an inferior good. An inferior good is a good for which, when your income goes up, you want less of the good. So if Coke were an inferior good, we would need the quantity of Coke to go down, which means we would have needed our point of tangency to be somewhere down here, where pizza went up but Coke went down. Let's draw that picture, a picture for an inferior good. We've got our pizza down here, the quantity of Coke here, our initial point of tangency, say, right here at point A, and we've got an increase in income, so the budget constraint moves out; I'm just going to draw dotted lines down here to indicate the amount of each good. Suppose that Coke is an inferior good: that means we need this to go down and this to go up. This one doesn't strictly have to go up for Coke to be inferior, but you can't have both goods be inferior. If you think you could, try to draw a picture where you increase income, have your new indifference curve tangent to the new budget constraint, and both quantities go down: if both quantities go down, you have to move down in this direction, inside the old budget set. So you can only have one inferior good here; the other good has got to be normal. If this one's going to be inferior, that one's got to be normal, which means our point of tangency needs to be somewhere down here, so we would need to have our indifference curve come down here and be tangent, something like that, at point B. That would be a situation where income went up, but the consumer bought less Coke and more pizza: pizza is a normal good, and Coke is an inferior good. So now we understand how changes in income affect the amount of pizza and Coke our consumer is consuming. Let's think about how
changes in prices are going to affect this. Let's suppose the price of one good falls. We'll start with a picture of the initial equilibrium: I'll call this indifference curve U1, and again we've got the quantity of pizzas and the quantity of Cokes, with the quantity of Coke at point A and the quantity of pizzas at point A marked. We're going to focus on what happens if we change the price of pizza. So let's start at A and decrease the price of pizza. If we decrease the price of pizza, that means the endpoint of this budget constraint is going to move out in this direction. Now, when you're drawing pictures of this stuff, I always encourage my students to make the changes in the budget constraint large; how much it shifts depends on how much the price changes, but if you're just drawing pictures like this, it's easier to draw everything if things are more spread out. So I'm going to have a large decrease in the price of pizzas, something like that: it shifts out a lot. Now the consumer would no longer choose bundle A; they're going to choose a bundle on this new budget constraint. Let's suppose they choose a bundle down here on indifference curve U2, that bundle right there; we'll call it bundle B. Notice that what they're doing is buying more pizza than before; that's the quantity of pizza at point B. So what we did is lower the price of pizza, and we saw that they bought more pizza. These two pizza prices and these two quantities would represent two points on the demand curve for pizza: this is where demand curves come from. We can also see that the quantity of Coke changed when the price of pizza changed, and we know that a change in the price of a related good will shift the demand curve for the other good; that's what we're seeing there. But I want to focus right now on the pizza side. Actually, what I want to do is clear the board
off and draw another picture and show you exactly where the demand curve comes from all right now let's take a look at where a demand curve comes from we've got all of the ingredients in what we just did over here but let's put that picture or a similar picture together with another picture and I think it'll be really easy to see exactly where the demand curve comes from so we've got the quantity of pizza here we've got the quantity of Cokes here let's put our budget constraint our initial budget constraint an initial point of tangency indifference curve U1 and an initial point a now down here I'm going to draw another picture and I'm going to put the quantity of pizzas on the horizontal axis right there but on my vertical axis of this picture I'm going to put the price of pizzas so what this picture is going to represent is this is going to be a demand picture right because the demand curve has the price on the vertical axis and the quantity on the horizontal axis up here remember this is not a demand picture this has got quantity on both axes two different goods but this is allowing us to look at the choice between two goods down here we're going to focus just on pizzas so what I'm going to do in this picture is I'm going to be changing the price of pizzas I'm going to be decreasing it now this budget constraint is drawn based upon a price of pizzas and let's pretend that it's right there I'm going to call that PA that's the initial price of pizza we don't need to worry about what the number is in my original example it was ten dollars but let's just call it PA the initial price okay so right here at that initial price here's the quantity of pizzas the consumer wants to buy I'm just going to drop this straight down because this quantity and this quantity are the same so right there I'm going to call that Qa okay so that's where we're starting
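The construction here, maximizing utility at each price and reading off the quantity of pizza chosen, can be sketched numerically. The sketch below is my own, not the lecture's: it assumes a hypothetical Cobb-Douglas utility U = pizza^0.5 × coke^0.5 with made-up numbers (income 100, Coke price 1), for which the tangency condition implies half of income is spent on pizza.

```python
# Tracing out demand-curve points from utility maximization.
# Assumed (hypothetical) setup: U = pizza**0.5 * coke**0.5, income 100,
# Coke price 1. For Cobb-Douglas preferences the utility-maximizing
# bundle spends a fixed share of income on each good.

def pizza_demand(income, p_pizza, share=0.5):
    """Utility-maximizing pizza quantity for Cobb-Douglas preferences."""
    return share * income / p_pizza

income = 100.0
for p in [10.0, 6.0, 4.0]:        # falling pizza prices, like PA, PB, PC
    q = pizza_demand(income, p)
    print(f"price {p:5.2f} -> quantity {q:6.2f}")
# Each (price, quantity) pair is one point on the lower demand picture:
# as the price falls the chosen quantity rises, tracing the demand curve.
```

The inverse relationship between price and quantity demanded falls straight out of the maximization, which is the point the lecture is building toward.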
we're starting with a price of pizzas and a number of pizzas the consumer wants to buy now here's what we're going to do we're going to decrease the price of pizzas okay when we decrease the price of pizzas this budget constraint pivots out it's going to pivot out something like this so we've decreased the price of pizzas let's suppose here's the new lower price of pizzas that I'm choosing I'm going to call it PB maybe instead of ten dollars it's now six dollars the numbers don't matter I'm just decreasing the price of pizzas my budget constraint shifts out like this and now the consumer is going to move to an indifference curve farther from the origin let's suppose it's an indifference curve that's tangent right out here let's call it U2 and suppose our point of tangency is right there at point B now notice what's happened we decreased the price of pizzas and the consumer is buying more pizzas right there's QB and so now at this new lower price there's the new quantity of pizzas the consumer wants to buy there's Qb we've got two points on the demand curve now let's do it again let's decrease price again down here to this price PC when we decrease price again this budget constraint is going to pivot out again it's going to pivot out something like this and the consumer is no longer going to choose point B they're going to choose a point out here where the indifference curve would be tangent something like point C on indifference curve U3 and now notice what we've got we've got a lower price of pizzas and a higher quantity demanded there's QC we can drop that on down and so we get another point down here on the budget constraint or excuse me on the demand curve and so if we connect these points we're tracing out the demand curve for pizza so this is where demand curves come from at the beginning of a principles class we never would go through that we just say okay here's what a demand curve looks like we would talk about the income and
substitution effects and we're going to see those here in a second but what we would say is there's an inverse relationship between the price and quantity demanded as the price goes down consumers want to buy a higher quantity quantity demanded goes up well this is what's behind that it comes from this model of utility maximization let's talk now about the income and substitution effects because we can actually use this model to identify what the income and substitution effects look like so when I teach a principles class principles of micro or principles of macro and I introduce a demand curve for the first time usually we go through a demand schedule a list of prices and the quantity demanded at those prices and I point out that there's an inverse relationship and then we draw it and that inverse relationship shows up as a downward sloping demand curve and I always tell my students okay that happens demand curves are downward sloping because of two reasons the income and substitution effects but you can never show that at the beginning of a principles class because you would need to understand this in order to be able to really wrap your head around what they look like so that's what we're going to do now let's think about the income and substitution effects and how to actually see those now before we do this let me say you can't see these out in the real world you can only see it when we're working with this model of utility maximization what we see out in the real world is we see that when price changes consumers want to buy a different quantity but we can't separate out that change into the part that's due to income change and the part that's due to the substitution effect but on a board like this we can so let's do this I'm going to draw a relatively big picture here so we can get a good idea of what's going on let's continue with our pizza Coke example I'm going to put my initial budget constraint right here let's have our
initial indifference curve tangent there's indifference curve U1 there's point a I'm going to focus on pizza right now I'm going to call this the quantity of pizza at Point a so we're going to start right there I'm not going to worry about the quantity of Cokes we don't need to focus on that but if we start at a let's decrease the price of pizza just like we did over there and again when you draw your picture you should be drawing these along with me if you're not you need to be pausing the video and drawing these you will only really start to understand it when you start to draw these things as I do it so when you draw it I want you to pivot this thing quite a bit it'll make it easier to do what we need to do here so we have a decrease in the price of pizza it allows us to consume a whole lot more bundles than what we could have before price of pizza went down we know that the consumer is going to move to a higher indifference curve let's suppose that they move to an indifference curve that's tangent right there you may be asking as you draw these well where should I put my point of tangency should I make it up here down there or down here it doesn't really matter try to do it like I do it let's call that b so here's what we've seen I'm going to call this the quantity of pizza at point B so we're seeing the same thing as what we saw right there we decreased the price of pizza our budget constraint pivoted out it allowed our consumer to move to an indifference curve farther from the origin and now they buy more pizza than before the movement from right there to right there is what we would call the total effect that's all we see in the real world we can observe that when the price changes we can see how much more or less the consumer buys what we want to do is we want to break this up into the part that is due to the fact that that decrease in price causes the consumer to substitute away from Coke and towards pizza and
the part that would be the substitution effect and the part that is due to the fact that when the price of pizza goes down it's like the consumer gets a raise in their income that's the income effect so here's how we can do it if we were to take this new budget constraint right here and pull it backwards parallel to itself then that would have the effect of reducing income for the consumer so if we were to take this new budget constraint I'm going to do this in a different color if we're to take this new budget constraint and pull it back parallel to itself until we pulled it back enough that it became just tangent to the the original budget constraint and so that would happen somewhere I'm going to draw this somewhere right in here something like that and what I'm trying to do and it's kind of challenging is I want to draw this so that this budget constraint and that one are parallel to each other so here's what we've done we have pulled it taken income away from the consumer and we've taken enough of it away that we have found a place right here I'm going to call it a prime now notice the consumer would be indifferent between where they started a and this point a prime they would be indifferent they're no better off at a than they are at a prime so when we pull this budget constraint back it's it's like we're reversing the income effect and we're reducing them back to their original level of well-being their original level of utility and that's what allows us to see visually the part of this change that's due to the substitution effect and then the part of the change that's due to the income effect so this point right here when we hypothetically take income away what we're doing is we're finding two points on our original indifference curve that are due to Simply a change in the prices the consumer's well-being hasn't changed and and notice it's the income effect that makes you better off the substitution effect simply causes you to substitute away from the good 
whose price hasn't changed towards the good whose price went down so this chunk that part right there is what we call the substitution effect we're moving around the indifference curve because the prices have changed this part from there out to there that is your income effect it's the part that is due to the fact that this decrease in price has given you more purchasing power you can buy more bundles than before now I can remember the first time I saw this thinking wow it's going to take me a few times to go through that and wrap my head around it and back then there wasn't any YouTube there weren't any videos that you could go watch you just went to a textbook and you read through it or you went and visited a professor and said hey can you kind of run through this again but what I would do is I would go back and watch this watch me go through this again if it's not clear the first time it may take a time or two so that allows you to see the income and substitution effect and again we can't do this in the real world this is something that is only possible right here on the board but that's what the income and substitution effects look like next let's talk about something that economists know as the Giffen good so if you were to take one of my economics classes or if you've been watching my videos you know that there are many many times that I talk about the steepness of the demand curve and so if you've made it to this part in the principles of micro class you've probably already talked about elasticity you know that elasticity is a measure of the steepness of the demand curve and so we can have demand curves that are perfectly inelastic straight up and down or we can have demand curves that are inelastic or perfectly elastic but typically they're going to be downward sloping we don't have demand curves that are upward sloping in fact my opinion is it doesn't exist now there are other economists that believe that maybe there's some place out there where it has existed in the past I don't believe so I have never heard any evidence for a Giffen good that I've found persuasive but who knows it could be out there but let me show you theoretically why it can exist there's no reason theoretically that a demand curve can't be upward sloping so here's what I'm going to do an upward sloping demand curve would be referred to or the name we've chosen to give it would be a Giffen good so let's look at what this Giffen good would have to look like I'm just going to call this quantity X and quantity Y I don't want to go with my pizza and Coke example because neither of these goods is a Giffen good so I don't want anybody watching this video and somehow thinking that I'm trying to demonstrate that pizza is a Giffen good because it's not so we're not going to call these anything it's just X and Y so I'm going to put my budget constraint my initial budget constraint right here I'm going to have an initial point of tangency say right here at Point a here's my indifference curve U1 and now I'm going to decrease the price of good X so let's start out here here's the initial quantity of good X I'm going to call it Qxa and I'm going to decrease the price of good X something like this there's my new budget constraint it pivoted out as I decreased the price of this good now here's what I need when that price falls for just a regular downward sloping demand curve what you need when the price falls is you need the consumer to buy more of it but notice on this picture when this price falls if I were to put my point of tangency right up here so for example remember indifference curves can't cross but I don't need them to cross I can make them very close to each other right here and then cause this one to be tangent right there and then head on down just like that I have not violated any property of indifference curves by drawing it like that my new point of tangency is right here at point B now notice here's the quantity of good X at point B we had a decrease in the price of good X and in my picture that led to a decrease in the quantity demanded of good X that's an upward sloping demand curve so notice A and B normally B is to the right of a that's how things work out there in the real world but there's no reason I can't draw a picture with B right up there to the left of point a and still not violate anything about this utility maximization model that I'm using but I've got a Giffen good I've got an upward sloping demand curve now again I'm just going to say that I do not believe these exist out there in the real world but I am always prepared to be wrong so let's go ahead and look at what happens if we were to separate this out into the income and substitution effects if we took this second budget constraint and pulled it back it's going to be tangent somewhere right in here so if I were to take that thing let me do this in a different color if I were to take this and pull it back I want that and that to be parallel it's going to end up being tangent somewhere right there there's a prime so notice that my substitution effect here is quantity Xa prime I didn't label it over here but there's quantity Xa prime in that picture this movement is my substitution effect I'm just going to call it s and then this movement from right here to right there that's my income effect this is a very inferior good there's my income effect so notice that if a Giffen good were to exist it would have to be an inferior good and it would have to be a good that is so inferior that the income effect outweighs the substitution effect there are plenty of inferior goods out there but the income effect is always small relative to the size of the substitution effect that's just
how it works in the real world but again there is an example of a picture where the income effect is bigger than the substitution effect and so you get an upward sloping demand curve let's do a quick application of this let's talk about labor supply I'm going to try to finish up right down here if we talk about labor supply let's put a picture right down here of a person's decision of how much to work so let's keep this nice and simple let's suppose that a person has maybe a hundred waking hours during a week I don't know how many hours there are during a week there would be 24 times 7 160 something let's suppose you spend a hundred hours a week now of those hours that you spend a week you have to decide how many of them to work and how many of them not to work let's say your choice is between work and leisure okay so on my horizontal axis I'm going to put leisure how many hours do you engage in leisure and let's suppose right out here is a hundred hours that's the most hours that you could engage in leisure let's suppose on the vertical axis we put income this is money that you can spend on consumption so you have to decide how much time should I spend engaged in leisure and how much consumption do I want how much income do I want to have to buy the goods and services that I would like to have let's suppose that the wage starts out at ten dollars wage is equal to ten dollars well if you spend all 100 of your waking hours engaged in leisure you now have no income you're not working the other option would be to spend all 100 hours working at ten dollars an hour and then you would have let's say a thousand dollars of income those are two points on what we're going to connect as your budget constraint right there would be your budget constraint in this situation and so let's suppose that you have an indifference curve that's just tangent say right there call
it point a so you choose this amount of leisure you work this amount and by working that amount it gives you some income that you can spend on stuff not a thousand dollars because you're still engaged in some leisure but you're earning some income now let's think about what would happen if your wage goes up let's suppose your wage now goes to 20 dollars if your wage is equal to twenty dollars then instead of being able to earn a thousand you can now earn two thousand and your budget constraint would pivot up it would pivot something like that and so let's suppose now when your budget constraint pivots up let's suppose your point of tangency is something like this right up here at point B so notice what happens now when the wage was ten dollars you chose this much leisure but now that the wage is twenty dollars you choose only this much leisure in other words when the wage goes up you work more not surprisingly right so a higher wage you sell more of your hours for labor but here's the interesting thing here's why we're talking about this let's suppose that we consider a large increase in your wage let's suppose the wage equals forty dollars then this budget constraint is going to pivot up quite a bit actually I don't have enough space 4 000 would be somewhere up here and so your budget constraint would be something like this now let's suppose when that happens let's suppose you're going to move to a higher indifference curve this is going to make you better off but let's suppose that your new indifference curve is tangent right out here say at Point C we haven't violated any of the properties of indifference curves or anything there but notice what happens when that wage goes up to 40 notice now compared to B you work less so from A to B you're working more you're engaged in less leisure but then when the wage goes up to 40 you now choose to work less and engage in more leisure if we were to draw your labor supply curve at a wage of 10
a wage of 20 and a wage of 40. so here's our wage here's labor supply this is the amount of work that we do what we see is that when the wage went from 10 to 20 you engaged in less leisure you worked more so your labor supply curve would look something like that but then when the wage went up to 40 all of a sudden you worked less and so what would happen is this labor supply curve would start to bend back on itself and actually there's a lot of evidence out there in the real world for labor supply curves looking like this we call it a backwards bending labor supply curve what's happening here is that when the wage gets high enough because leisure is a normal good that income effect causes you to want to have more leisure you don't have to work as hard to make the same amount of income and so out in the real world what happens is when the wage is relatively low as it increases you sell more labor but at some point you start to sell less labor and so there's evidence for that out there it's a nice application of this model to the decision of how much to work right and we can apply it to other things I think the textbook has an example of how a change in the interest rate affects the amount you save it can be applied to all kinds of things hopefully this gives you an idea of kind of the way people make decisions it helps you understand a demand curve in a way that you wouldn't have understood it when you just look at a demand curve like that so I'll see you in another video
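The income and substitution decomposition described in this lecture can be reproduced numerically. Below is a minimal sketch of the Hicks version discussed above (pull the new budget line back until it is just tangent to the original indifference curve), assuming a hypothetical Cobb-Douglas utility and made-up numbers that are mine, not the lecture's: income 100, Coke price 1, pizza price falling from 10 to 6.

```python
# Hicks decomposition of a price change into substitution and income effects.
# Assumed (hypothetical) setup: U = pizza**0.5 * coke**0.5, income 100,
# Coke price 1, pizza price falling from 10 to 6.

def marshallian(income, px, py, a=0.5):
    """Utility-maximizing bundle: spend share a on x, share 1-a on y."""
    return a * income / px, (1 - a) * income / py

def utility(x, y, a=0.5):
    return x**a * y**(1 - a)

def expenditure(u, px, py, a=0.5):
    """Minimum spending needed to reach utility u at prices (px, py)."""
    return u * (px / a)**a * (py / (1 - a))**(1 - a)

income, py = 100.0, 1.0
px_old, px_new = 10.0, 6.0

x_a, y_a = marshallian(income, px_old, py)   # bundle a
u_a = utility(x_a, y_a)                      # original indifference curve
x_b, _ = marshallian(income, px_new, py)     # bundle b, the total effect

# bundle a': cheapest bundle that stays on the ORIGINAL indifference
# curve at the NEW prices, i.e. the pulled-back budget constraint
e = expenditure(u_a, px_new, py)
x_a_prime, _ = marshallian(e, px_new, py)

substitution = x_a_prime - x_a   # a -> a': same utility, new prices
income_eff = x_b - x_a_prime     # a' -> b: extra purchasing power
print(f"substitution effect: {substitution:.3f}")              # ~1.455
print(f"income effect:       {income_eff:.3f}")                # ~1.878
print(f"total effect:        {substitution + income_eff:.3f}")  # ~3.333
```

The two pieces add up to the total change in pizza bought, which is all that can actually be observed in the real world, as the lecture points out.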
Principles of Microeconomics
Chapter 8: The Costs of Taxation
in this video I'm going to talk about the costs of taxation so we're going to think specifically about taxes how they affect prices and then consequently how taxes affect consumer surplus and producer surplus and total surplus so if you haven't yet learned about how taxes should be analyzed what you need to do is go back and look at my video I think it's two videos called Supply demand and government intervention and in those videos I talk about price floors and ceilings and then taxes and trust me if you do not know how to analyze a tax you need to watch that first because your gut reaction on how it works is everybody's gut reaction is that you would just add the tax to the price that's not how taxes are analyzed that's not how they work in the real world so go watch that first and then come back and watch this so let's start by just briefly reviewing the three ways to analyze a tax so let's draw a quick picture of a market here got our demand curve supply curve in the absence of a tax the price is going to be P1 and the quantity transacted in the market is going to be Q1 and what that means is there's one price in the absence of a tax the price that the buyers pay is they give that money to the sellers the sellers get to keep it so the price the buyer pays is the same as what the seller gets but then when we impose a tax what's going to happen is it drives a wedge between the price the buyer pays and the price the seller gets to put in their pocket and we can analyze that three different ways two ways we would analyze by shifting a curve so if the tax was imposed on the buyers then what we saw in that previous video is that a tax of say t dollars reduces buyers willingness to pay by that amount their willingness to pay to the seller not their overall willingness to pay so they end up paying the seller less it drives the seller's price down but then on top of that price the buyers end up having to pay the tax okay so we would shift that demand
curve down and then analyze we would be able to see where the prices end up the second way is if we were to think about the tax being imposed on the sellers well a tax on a seller is just like an increase in a cost of production so we could take that supply curve remember that represents the cost of production and we could shift that supply curve up by the amount of the tax and the interesting implication of that is that it doesn't matter which side of the market ends up getting taxed the prices are going to end up being the same regardless so the big conclusion is that the government is powerless to determine which side of the market actually bears the biggest share of the tax burden that's going to end up being determined by the elasticity of demand and elasticity of supply once you recognize that there's a third way to analyze a tax and that is simply to not shift a curve just move back until you find the distance between the demand curve and the supply curve that is the same as the amount of the tax so for example if we were to have a tax of this magnitude if that vertical distance was t and it doesn't quite look vertical but it should be a vertical distance in your picture then that's the same as shifting a curve right if we had shifted the demand curve down it would be going through that point or if we had shifted the supply curve up it would be going through that point so you don't have to worry about shifting a curve once you find that vertical distance equal to the amount of the tax then that upper price is going to be the buyer's price that they pay the price down here that in my other video I call PS that's the seller's price and then of course our quantity is going to decline down here to Q2 so that's the third way to analyze a tax that's the way we're going to do it when we go through this video so essentially what a tax does is a tax drives a wedge between the price the buyers pay and the price the seller gets
to keep and the size of that wedge is the amount of the tax t what we want to do now is we want to think about the welfare implications of the tax so for this what you need to do is make sure that you've watched my video on consumer and producer surplus so that you understand what consumer surplus is what producer surplus is and then how to identify them graphically in the picture so let's go ahead and use this picture I'm going to give these areas a letter I'm going to call this area a b that little triangle c I'm going to call this d call that little triangle e so ignore for what we're going to do right now that little t there that's the vertical distance and then down here I'm going to call this f so now let's think about what consumer and producer surplus look like if there's no tax so let's start with the situation where there's no tax being implemented in this market and let's start by identifying what consumer surplus would look like so remember consumer surplus is a measure of the well-being of consumers that are participating in this market and consumer surplus graphically is the area under the demand curve and above the price well if there's no tax then the price is P1 and the area under the demand curve and above the price would be a plus b plus c so that's our consumer surplus with no tax essentially a free market okay producer surplus would be the area under the price and above the supply curve so that would be area d plus e plus f now if you were going to calculate these things you wouldn't calculate it by dividing up this big triangle into these small areas a b and c I'm just using those areas to demonstrate what we need to do for this problem but if you were going to calculate these you just calculate the area of that triangle right there right remember the area of the triangle is the height times the width it's a triangle so you've got to divide it by two and then deadweight loss would
be equal to zero there is no deadweight loss in other words total Surplus is maximized we saw that in the consumer and producer Surplus video that a free market results in a maximization of total Surplus which is consumer surplus plus producer Surplus so now let's analyze what happens with the tax and remember it doesn't matter if the tax is imposed on the buyers or the sellers or if part of the tax was imposed on the buyers and part on the sellers it doesn't matter the buyers are going to end up paying that price the sellers are going to end up getting to keep that price regardless of who the tax is imposed on so now let's think about let's start with consumer surplus so consumer surplus is the area under the demand curve and above the price but now there's two prices right there's the price the buyers pay and there's the price the sellers get to keep so if we're talking about consumer surplus we're clearly interested in the buyers so we're thinking about the area under the demand curve and above the price that buyers end up paying so we see that consumer surplus now Falls to just a so some of that well-being remember with no tax consumer surplus was a plus b plus C so consumers are losing B and C right so let's say loss of consumer surplus is B plus c now let's talk about what each one of those things represents so B ends up becoming part of government revenue okay actually we'll identify it here in a second but this triangle B and D that ends up being government revenue so B used to be part of consumer surplus it gets transferred to the government um c ends up becoming part of deadweight loss so when the tax is imposed our quantity moves from the free market quantity to less than that and so we end up losing some total Surplus and the total amount of surplus that we lose the deadweight loss is going to be C plus e this chunk C used to be part of consumer surplus it ends up just Vanishing out of the economy we'll talk about that here in a second um let's talk 
about producer Surplus so producer Surplus is the area under the price and above the supply curve but now we've got to pay attention to the price the sellers get to keep its PS so producer Surplus ends up falling to just area f which means the loss of producer Surplus is D plus e now d ends up becoming part of government revenue it gets transferred from producers to the government e ends up becoming part of deadweight loss so let's uh identify our deadweight loss deadweight loss is C plus e let's talk specifically about what's going on there so here's what happens it drives the buyer's price up and it drives the seller's price down when it drives the buyer's price up then these buyers that are represented along this portion of the demand curve right there essentially leave the market they go from being a buyer at P1 to not being a buyer at PB these sellers represented along this portion of the supply curve leave the market they were sellers when the price was P1 because their cost of production was lower than what they could sell it for they could make some money but then when the price that they can sell it for gets driven down to PS Now their cost of production is higher than what they can sell it for they choose not to sell it so it drives some buyers and some sellers out of the market these are buyers and sellers for whom their willingness to pay is greater than the cost of production there are mutually beneficial transactions that could take place from society's Viewpoint those transactions should take place remember that um you take an action if the marginal benefit is bigger than the marginal cost well here's the marginal benefit and here's the marginal cost and yet those transactions don't take place so that's where this dead weight loss comes from and then let's identify our government revenue so our government government revenue ends up being equal to B plus d so graphically that's what it looks like we could calculate the government revenue without 
looking at the picture because the government revenue is also the tax t multiplied by the quantity that gets transacted in the market Q2 so remember this is a per unit tax so each time somebody buys a unit of this good and this is the total amount of units that are bought and sold they have to pay the tax and so it's t times Q if the tax is two dollars and a hundred units are bought and sold then the government revenue will be two hundred dollars so t is this vertical distance Q2 is this horizontal distance so we're getting the area of that rectangle right there which is why it's equal to both of those okay they're the same thing now let's talk about this deadweight loss a little bit more let me draw a quick picture just to kind of reinforce why we have this deadweight loss or how to identify it so what's going on there is let's draw two pictures actually let's think about what total surplus would be in this market pretend like those are the same demand and supply curves in this market suppose we have no tax well if there's no tax then our total surplus is going to be all of this area under the demand curve and above the supply curve represents the well-being of people that are participating in this market economic well-being but what ends up happening with the tax is that it reduces quantity so instead of this quantity being transacted we get a situation where a lower quantity is being transacted and so the total amount of surplus that we get is right there and you can see that we're missing out on that little triangle of total surplus okay so that's what c and e are representing and that deadweight loss is just a result of resources being allocated now inefficiently there are mutually beneficial transactions that could take place but the tax creates incentives such that those transactions don't take place let's talk about how much deadweight loss is created and the answer to this has to do with if
you go back to the video where I first talk about how to analyze a tax then one of the things we discuss in that video is the incidence of the tax how much does it drive the buyer's price up and how much does it drive the seller's price down when I draw these pictures I usually put my demand curve and supply curve at like a 45 degree angle like I have there and there and there it's just natural for me to do that and so consequently it makes it look like this tax is kind of divided evenly between the buyers and the sellers but that's not how things work all the time it's going to depend on the elasticity of demand and the elasticity of supply so typically one side is going to bear a bigger share of the tax burden or a bigger share of the tax incidence than the other side is going to if we think about how much deadweight loss is created then a similar thing happens it depends on the elasticity of demand so let me draw a couple of pictures here and let's take a look at that so in this first set of pictures let's do a demand curve that's about the same in each picture and then in this picture let's have a very inelastic supply curve and then I want my intersection to be about the same place in this picture I'm going to have a very elastic supply curve and then let's impose the same tax so I'm going to impose a tax not that size a little bit smaller let's go over here and figure out where that vertical distance is let's suppose right there is the amount of the tax and then I'm going to impose that same tax that distance right over here and it's going to end up being something like that my lines aren't very straight today and you can see that in this case the deadweight loss is going to be smaller than it's going to be in this case so when we have a situation where that supply curve is more inelastic for a given demand curve when the supply curve is more inelastic it makes the dead weight loss smaller than if the supply curve is
elastic we can do the same thing if we were to think about the elasticity of demand so let's draw a supply curve that's about the same in each picture and then in this picture I'm going to draw a very inelastic demand curve there's my intersection I'll make my intersection right over there in this picture I'm going to draw a very elastic demand curve and then I'm going to do the same thing I'm going to find the place where that vertical distance is the amount of the tax suppose it's right there and so there's our deadweight loss for that tax if we were to impose that exact same tax over here then now my vertical distance is right about there and we get a much larger deadweight loss so for any given supply curve the more inelastic demand is the smaller the dead weight loss and the more elastic demand is the larger the dead weight loss so you can see that and it should make sense because what's happening here is the tax is driving people to change their behavior and that creates inefficiencies well the more people change their behavior the more inefficiency is created so the more elastic the demand curve is and or the more elastic the supply curve is the bigger the inefficiency over here people aren't reacting as much demand and Supply are less elastic than they are over there and so because people don't react as much the deadweight loss ends up being smaller and this is an important point because we have to have taxes occasionally a student will hear this discussion about dead weight loss and before I get a chance to talk about what I'm about to discuss they'll kind of start to think well gosh we just shouldn't have any taxes because it's going to create deadweight loss it makes buyers and sellers worse off and the implication of this is not that there should be no taxes there have to be taxes there has
to be a government even at a bare minimum to protect property rights and ensure that we have a court system that can can protect your property rights and then there are other things that the government provides that we can't provide ourselves if you're not sure what's going what I'm talking about just go gosh find my video on on public goods and and there I talk about things like National Defense that if we left National Defense up to the free market no government well we get an inefficiently low level of National Defense so we need a government clearly so there have to be some taxes so what that means is we need to have we need to think carefully about what we choose to tax one of the kind of mental exercises I'll go through with a Class A lot of times is to say okay let's pretend like we could wipe the Slate clean and so we have no taxes and now we need to raise money so we're going to have to tax some stuff so let's start thinking about things that we should tax and um one of the first things I'm going to say is look you know anytime you place a tax on something it's going to decrease quantity right reduces quantity from q1 to Q2 so what I would argue is let's tax things that we would like to see less of to begin with and there wouldn't be Universal agreement on that you might say okay well if we want to see less of it let's tax uh cigarettes right because I'd like to see less smoking well it for somebody who's a smoker they might not agree with that so there would still be some discussion about what we would like to tax and what we wouldn't like to but um one of the things that if you look at the largest tax in the economy it's it's an income tax and an income tax is a tax on work and so we tend to be taxing or actually we tax heavily um a productive behavior that for a lot of economists creates a problem because I would rather tax something that I want to see less of not tax something that I want to see more of and I would like to see people be productive and 
out there working because it helps them and it helps the economy and yet we've placed this tax I always joke that it's like if you went to a classroom and you decided to tax something and you decided that let's tax the people who are getting the A's and B's and I would say gosh I don't want to see less of that right I want to see more of that I'd rather tax the people who are getting D's and F's so we have to be careful when we start thinking about what we're going to tax we have to think about the amount of Revenue it's going to raise we have to think about the amount of deadweight loss that it's going to create and we have to be aware of the fact that there's going to be less of it you can tax something completely out of existence at least out of existence in formal markets you can tax something so it moves completely into the underground economy let's think about the relationship between the amount of the tax and the amount of Revenue that's created so I'm going to put three pictures here each picture I'm going to have a demand and supply curve and I'm going to try to make my demand and Supply curves the same in all three pictures so suppose we have a situation there keep in mind I always get onto my students if they draw me pictures especially on a test where they don't label everything and clearly I haven't labeled anything but I'm assuming that you can glance over if you've suddenly forgotten what that curve is just glance over there and you can remember so let's think about imposing a very small tax in this market so I'm going to find the place where the vertical distance is the amount of the tax and let's suppose it's right there this very small tax I don't even know if you can see that hopefully you can and so it's going to drive the buyer's price up and it's going to drive the seller's price down it's going to create some dead weight loss I'll kind of color my dead weight loss in that triangle right there is my dead
weight loss okay and this rectangle right there would be the amount of Revenue that the tax creates and then over here let's impose a bigger tax let's impose a tax that's um say that big and so my buyer's price ends up being right up here and my seller's price ends up being right down there and so this area ends up being my deadweight loss and that rectangle ends up being the revenue that's created and then let's impose a really big tax over in our third picture so big that it decreases quantity back to right about there and so buyers price is going to end up being right up there seller's price that they get to keep is going to end up being right there my dead weight loss is going to be really big and the amount of Revenue that we get is going to be relatively small it's getting to be that small sliver there that's kind of hard to see in the picture so now let's just think about the size of the tax and how revenue is changing and then we'll think about how deadweight loss is changing I'm going to draw a picture over here we can put on our vertical axis the amount of Revenue that the tax generates and then on my horizontal axis I'm going to put the magnitude of the tax I'll just write tax there now you can see in this picture well let's start if we had no tax if we had zero tax then it's going to raise no Revenue so that point where we have zero tax well clearly no Revenue gets raised if we were to increase the tax enough we could make it to where nobody transacts the goods so at some point we can increase the tax enough to where there's no Revenue because nobody's buying in the good anymore and then think about what happens in between with a very small tax revenue starts out at zero and then it starts getting bigger and then this area gets to at some point it will reach a maximum and then if we keep increasing the tax past that point Revenue starts to go back down and it goes to zero so what ends up happening is we get a relationship between the amount of the tax 
and the amount of Revenue that looks like that and we call that the Laffer Curve it's named after an economist named Arthur Laffer we'll talk a little bit more about that here in a second so that's what the relationship between the amount of the tax and the amount of Revenue collected by the tax looks like if we were to think about now the relationship between the amount of the tax and the amount of deadweight loss that's created we see a different type of relationship we see that with a very small tax it creates a relatively small amount of deadweight loss as we increase the tax that deadweight loss is getting bigger and as we increase the tax back here it's getting still bigger once we increase the tax to where none of the good is transacted the deadweight loss would be all of this area under the demand curve and above the supply curve and so we get a relationship that looks something like that and at some point there would be a maximum amount of deadweight loss because at some point you've just made all of the total Surplus evaporate out of the economy so at that point there's no consumer surplus or producer Surplus if none of the good is being transacted so now let's go back to our Laffer Curve so the story is that Arthur Laffer was having dinner with some policy makers in Washington DC and I don't know maybe some people from the press and sketched out on a napkin this Laffer Curve to illustrate for people the relationship between the size of the tax and the amount of Revenue let's think about what this means what this means is that if you start out with a small tax and you increase that tax it's going to raise revenue but think about what would happen if we were out here suppose we had a tax rate that was right here that at that tax rate it's raising that amount of Revenue and then we wanted to raise more Revenue well if we're out there on that end of the Laffer Curve then the way that you raise revenue is that you need to
decrease taxes and so if you look at the early 80s President Reagan used this as a justification for decreasing taxes because what he pointed out was we're out here on this end of the Laffer Curve I will say that not everybody agreed with him and there's still not agreement as to whether or not he was right or wrong you can find people who will take both sides of that but the argument was we're out here on this end of the Laffer Curve we have tax rates marginal tax rates that are high enough that what we need to do is we need to decrease those marginal tax rates and that will increase the amount of Revenue this became known as supply side economics and really the reason for that terminology is that the argument was that we've got tax rates that are so high that it's actually giving people a disincentive to work and President Reagan was an actor and what he pointed out was that his marginal tax rate after he made four pictures was 90 percent 90 percent meaning that of every dollar that he made from that point on the government was going to take 90 cents and leave him a dime at that point it's not worth it to make a movie at that point you just don't do anything for the rest of the year then you wait until the next year and you start over so for President Reagan when he was an actor if they would have decreased that marginal tax rate then he would have made more movies so for an individual person this could be completely true whether it was true economy-wide well I guess that's an empirical question that again we don't have complete agreement on so this will hopefully give you an idea of kind of the welfare implications of a tax how revenue and deadweight loss are related to the magnitude of the tax so I'll see you in another video
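The revenue rectangle, the deadweight-loss triangle, and the Laffer Curve shape can all be checked with a few lines of code. Here is a minimal Python sketch using a made-up linear market (demand P = 10 − Q, supply P = Q); the specific curves and numbers are assumptions for illustration only, not from the video, but the mechanics match it: revenue is T times Q, deadweight loss is the triangle between the taxed quantity and the no-tax quantity, and revenue rises and then falls as the tax grows while deadweight loss only grows.

```python
# Hypothetical linear market: demand P = 10 - Q, supply P = Q (no-tax equilibrium Q* = 5)
def market_with_tax(tax):
    # With a per-unit tax, buyers pay PB and sellers keep PS = PB - tax.
    # Equilibrium condition: 10 - Q = Q + tax  ->  Q = (10 - tax) / 2
    q = max((10.0 - tax) / 2.0, 0.0)
    pb = 10.0 - q              # price buyers pay
    ps = pb - tax              # price sellers keep
    revenue = tax * q          # government revenue = T x Q (the rectangle)
    dwl = 0.5 * tax * (5.0 - q)  # deadweight loss (the triangle back to Q* = 5)
    return q, pb, ps, revenue, dwl

for t in [0, 2, 5, 8, 10]:
    q, pb, ps, rev, dwl = market_with_tax(t)
    print(f"tax={t:2}  Q={q:4.1f}  revenue={rev:5.1f}  DWL={dwl:5.2f}")
```

Running the loop shows the Laffer Curve in miniature: revenue starts at zero, peaks at a middle-sized tax, and falls back to zero once the tax is big enough that nobody transacts, while deadweight loss keeps getting bigger the whole way.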
Principles_of_Microeconomics
Chapter_13_The_Cost_of_Production.txt
we want to talk now about the costs of production so what we're going to do for the next several chapters is we're going to think about how firms make decisions the first decision is how much output to produce and then we're going to think about situations where the firm has to decide on the price that they're going to charge now we know with a perfectly competitive market the firm has no control over the price so firms in that type of market don't get to pick the price but other firms will have some control over the price so we'll talk about those in later chapters when we talk about monopolistic competition and monopoly and oligopoly but first we need to talk about how costs behave for firms and the nice thing about this chapter is the costs of the firm don't necessarily depend on the type of market that it's in so the cost curves for a monopolistically competitive firm are going to be the same as the cost curves for a competitive firm or for an oligopoly so this stuff that we're going to think about in this chapter will apply to the next four chapters that we do after this so let's start by thinking about the objective of a business and the objective that we're going to use is that firms seek to maximize the profit that they earn now sometimes economists get criticized for that and people will say you know what you're saying is that firms will do anything to make another dollar and it all boils down to just maximizing the pile of profit that they've got and that's not what we're saying we're saying that firms can have other objectives firms can have as an objective to be a responsible member of society or to donate money to worthy causes or things like that but they need to have as their primary objective to maximize profit if they don't they're typically going to be driven out of the market so let's think about what profit looks like if we think about profit we're
going to use the Greek letter pi to stand for the word profit profit is going to be total revenue minus total cost in this chapter we're going to be focusing on the total costs of the firm now once we move to the next few chapters we're going to think about the total revenue of firms and the total revenue of a firm will depend upon the type of market that it's in so the way revenue behaves is going to be different for a competitive firm than for an oligopoly etc so this chapter we're going to think about how costs behave and we want to understand the relationship between the costs for the firm and how much output they produce okay so that's going to be our goal let's start by thinking about the difference between explicit and implicit costs so if I were to ask you to consider a business and let's suppose that we think about a business where we're making pizzas consider a pizza business if I said think about the costs that you're going to have to pay in running this business then most of the ones that you would list at least initially would be what we would call explicit costs so if I said think of the costs you would probably come up with things like raw materials so to run a pizza business you're gonna have to buy things to make the dough and you're gonna have to buy sauce the tomato sauce to put on it and vegetables and meat and cheese and all of that stuff and so those would be costs that you'd have to pay those would be raw materials you'd probably also have to pay some wages to workers that you hire there'd be other things you'd have to pay for electricity and things like that those would all be what we would call explicit costs so those are costs that require you to pay money it's money that's in your account and you pay for it okay some textbooks define them as costs that require a cash outlay that doesn't mean that you're paying cash for them that simply means that it's money in your account and then you pay for
it but if you think about the costs of running a pizza business you'd also realize that there are some other costs that you're going to incur and these will be different in nature from these explicit costs so for example let's suppose you're going to run the business if you're running a business that means you can't be working some other job in the time that you're running the business and whatever you could have earned in that other job your next best employment alternative you're giving that up to run the pizza business that's a cost cost also it doesn't require a cash outlay so let's suppose that your next best employment alternative is to earn I don't know let's let's say $20 an hour doing something else well if you're running the pizza business that means you're giving up $20 an hour that you could have earned it's your next best employment alternative and that's a cost - but it's different right it's dollars that never made it into your account because of this decision that you've made so we call that an implicit cost it doesn't require you to pay money that's already in your account but the outcome is the same the outcome is that you don't have the money it just never ever made it into your account okay so it's an implicit cost so this would be your next best employment alternative that would be an example next best employment alternative now when we think about how firms make decisions we want to think about the explicit cost and the implicit cost we don't want to ignore any of those because we're interested in understanding why people make the decisions that they do why firms make the decisions that they do and so we don't want to ignore some cost now if we were thinking about the way that you would treat costs if you were an accountant if you're an accountant you need to ignore these implicit cost so cost total cost for accounting purposes would be just the explicit cost because those are dollars that are coming out of an account implicit costs would 
never be included so for an accountant to do their job correctly they need to ignore these costs right here the implicit ones for what we're doing because we're trying to understand behavior we can't ignore any cost okay so let's think about and we'll come back to that here in a second we'll define what we mean by economic profit versus accounting profit and it will have to do with this implicit versus explicit distinction let's before we do that think about this investment there's an important thing that you need to keep in mind and that is that investments are not going to be costs so let's think about an example let's suppose that in the course of running your pizza business you take a thousand dollars out of your savings account and you buy a pizza oven with that thousand dollars and so right over here we've got our pizza oven and we're going to use that in our pizza business let's suppose that had you left that thousand dollars in your savings account it would have earned some interest let's make it convenient and suppose that you're getting 10% on your savings that means that had you left that thousand dollars in the account over the course of the next year you would have earned a hundred dollars in interest income but because you didn't leave it in there you're giving that up so that $100 a forgone interest that you're not going to earn that's going to be a cost associated with this buying of the pizza oven and it's an implicit cost right it's not explicit because you never had that hundred dollars because of the decision you made you're not going to get it in the end the outcome is the same you don't have a hundred dollars that you could have had but it was never in your possession so it's an implicit cost let's think about the thousand dollars you spent on the oven itself we would count that as an investment but not a cost of production because what you've done is you've taken that thousand dollars and you've put that into the form of an oven here and and 
that's fundamentally different from dollars that you spend on things like this so let's suppose that you you buy some raw materials and and so let's suppose we we put our raw materials together in the form of a pizza and then we put it in our oven and we bake it and we take the pizza out and our pizzas sitting right here it's done and let's suppose I think to myself you know what that was a bad decision I'd like to undo that well unfortunately I can't undo that I can't turn that baked pizza back into its raw materials right I can try to sell the pizza and get whatever I can out of it but I can't undo the making of the pizza if we think about this oven let's suppose at the end of a year I've used the oven I've baked with it and and I want to undo that well I can it's fundamentally different from these raw materials when I bake a pizza in my oven a little piece of my oven doesn't somehow vanish right I've still got the oven sitting there and so let's suppose at the end of the year I want to undo that I always ask my classes face-to-face classes well if I want to sell that oven at the end of the year what am I going to be able to sell it for am I going to be able to sell it for $1,000 or more than $1,000 or less than $1,000 and without fail fail people will say less than $1,000 and then I will say well why why how do you know I'm going to be able to sell it for less than $1,000 and they will say because it's going to depreciate well let's think about that for a second the only reason that we depreciate anything on the books is because it provides us with a tax advantage we can typically deduct depreciation from our taxes and so that's not really the question I'm asking I'm asking what's this oven going to be worth if I sell it at the end of the year depreciation doesn't have anything to do with that that's just an arbitrary rule determined by the tax code what I want to know is can I undo this pizza oven can I turn it back into purchasing power and the answer is yes 
now what's going to determine how much I can sell it for at the end of the year well my hope is that someday my economics students will realize it doesn't have anything to do with depreciation it has everything to do with demand and supply it's demand and supply that's going to determine the price of that oven if I want to sell it it could be that I haven't taken good care of the oven and so there's really low demand for that oven at the end of the year and so I'm not able to get much out of it it could also be that that oven was the greatest oven ever made and that I've taken good care of it and people really like it but at the end of the year demand is high and I get more than a thousand it's demand and supply that's going to determine the price but the point is that I can undo it I can turn it back into purchasing power so those are what we would call investments that oven is an investment we don't count that as a cost and you might ask well okay so why is this important well it's important because on homework questions or maybe test questions I'm gonna have you calculate the costs of production and I might give you some things in there that are investments and you want to make sure that if it's an investment you don't add that up as you're figuring out what your costs are going to be so now since we know a little bit about costs we can think about what profit looks like let's think about what economic profit is going to look like the way we're going to define economic profit it's going to be total revenue minus total cost but now this cost is going to be made up of explicit and implicit cost so I'm going to put explicit I'm running out of room here and implicit costs draw a little line there to distinguish between those so economic profit is total revenue minus your explicit costs and also minus your implicit cost if we were thinking about accounting profit well accounting profit is going to be your total revenue minus just your explicit cost
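The two profit definitions can be written out as a tiny Python sketch; all the dollar amounts are hypothetical, chosen only so you can see accounting profit come out positive while economic profit comes out negative.

```python
# Hypothetical numbers for the pizza business (not from the video)
total_revenue  = 100_000  # dollars of pizza sold over the year
explicit_costs =  70_000  # raw materials, wages, electricity, ...
implicit_costs =  45_000  # forgone salary, forgone interest, ...

# Accounting profit ignores implicit costs; economic profit does not.
accounting_profit = total_revenue - explicit_costs
economic_profit = total_revenue - explicit_costs - implicit_costs

print(accounting_profit)  # 30000: the accountant sees a profit
print(economic_profit)    # -15000: the economist sees a loss
```

The two measures differ exactly by the implicit costs, so whenever implicit costs are positive economic profit is always smaller than accounting profit.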
accountants need to ignore implicit costs so in an economics class we define the word profit differently from the way you think about it if you were in an accounting class now here's the problem with this your brain is going to want to think about profit as an accountant would when you hear me talk about profit this is going to be the thing that pops into your head but this will be what I mean and there will be some points as we go through especially the next chapter where I'm going to give you some conclusions about the way a market works and when you first hear those you're gonna say oh gosh that can't be right and what will be happening is that you'll be thinking of profit like this and I'll be talking about profit like this the point is that accounting profit can be positive while economic profit is negative they will be equal if implicit costs are zero but implicit costs are almost never zero if implicit costs are positive then economic profit will always be less than accounting profit okay so we'll come back to that for what we want to do now let's think about what we're going to call a production function so a production function is going to show us the relationship between the amount of an input that we use and the amount of output that we can produce so let's continue with our pizza example and we'll think about a very simple situation where we change the amount of pizza that we can produce by changing the amount of workers that we use okay so let's start by thinking about a pizza business in the short-run the only way that we're going to be able to change output is to change the amount of labor that we use okay if we were talking about a long time horizon then we could think about let's suppose we have a pizza business and somebody comes to us today and says look tomorrow I need you to make 500 pizzas for me and let's suppose 500 is more than we typically make well
what that means is I can't snap my fingers right now and double the size of my pizza kitchen I can't all of a sudden just make there be twice as many ovens in my pizza kitchen I'm stuck with what I've got in terms of the capital the size of my kitchen and the number of ovens that I've got and the number of preparation tables that I've got there I can't change that right now I can a month from now later on I can but not right now but the one thing I can do is I can bring in more workers I can buy more labor from from workers so in the short run the only way I can change output is to change the amount of workers that I've got I'm stuck with the capital that I've got so let's suppose that we think about the number of workers that we're going to use number of workers let's make a column here also that gives me the number of ovens now we're going to hold that fixed let's suppose we've only got one oven that's going to be a 1 it's going to be a 1 regardless of whether we use 0 workers or 5 workers or 20 workers we're stuck with one oven let's think about the amount of output so I'm going to call this Q let's just write here output this is a number of pizzas that I can produce and let's think about this as if this is pizzas per hour ok clearly let's start by thinking about having no workers if I have 0 workers then I'm going to be able to produce 0 output ok let's think about having one worker suppose we now have one worker we still only got one oven but now let's suppose that if we've got one worker that worker can produce let's say 50 pizzas per hour but we want to make more than 50 pizzas per hour we can't change the number of ovens we've got so the only thing we can do is bring another worker in ok so let's suppose we bring another worker in so now we've got 2 workers still stuck with one pizza oven and let's suppose with those two workers we can produce let's say 90 pizzas per hour and you might wonder well so why didn't that go up to a hundred and we'll talk about 
that you can probably already tell that it's going to have to do with the fact that we're stuck with one that's our bottleneck let's suppose we need more than 90 pizzas so we bring in a third worker we're still stuck with one oven and let's suppose that with a third worker we can make a hundred and twenty pizzas per hour we want more than that so we bring in a fourth again we're stuck with one oven and let's suppose we can make a hundred and forty pizzas per hour and with five workers and one oven let's suppose we can make a hundred and fifty pizzas per hour so now we see the relationship between the number of workers that we've got and the amount of output that we can produce and that's going to be what our production function tells us it tells us the relationship between an input workers and an output the thing that we're producing pizzas so let's draw this I want to draw the relationship between the input and the output I'm going to put the number of workers here on my horizontal axis so this is workers and I want to put output up here on my vertical axis the number of workers goes up to five so we'll put that down here the number of units of output goes up to 150 but it doesn't go up by the same increment each time it goes up first by fifty and then it goes up by a little less that's 90 and then 120 and then 140 and then 150 so you don't want to space those out equally because clearly they shouldn't be and if we graph the combination of points here between the number of workers and the amount of output we get the production function there's a point with zero workers we have zero units of output with one worker we have fifty units of output there's a point two workers we have 90 units of output three workers we have a hundred and twenty four workers we have a hundred and forty and with five workers we have a hundred and fifty and if you connect these you get something that looks like that it's not linear you can see that because these distances are getting
smaller this thing starts to bend over this is our production function so that is a production function right there let's define a couple of terms and I want to put a couple more columns right here that will help us understand something about the relationship between the number of workers and the amount of output let's define first what we call average product we're going to call it AP average product is just going to be the total product the total amount of output I'm going to call it total product divided by the number of workers that we're using so average product is total product divided by the number of workers let's put in another column right here where we've got average product so we're going to be taking our quantity of output and dividing it by the number of workers we can't divide by zero so we're not going to calculate anything right there if we have 50 units of output and we're using one worker then 50 divided by one would be 50 on average that worker's producing 50 pizzas per hour if we're producing 90 pizzas per hour with two workers then our average would be 45 right on average each of those workers is producing 45 in an hour 120 divided by 3 would be 40 and you can see that this thing's going down by 5 each time this one would be 35 and this one would be 30 so there's what our average product looks like and what we can see is that as we increase the number of workers on average each of them is producing fewer pizzas per hour now we also want to talk about something that we're going to call marginal product and it probably will not surprise you that we need a marginal measure so marginal product we're going to abbreviate MP marginal product is just the change in the amount of output that you can produce when you hire one more worker okay so marginal product is going to be the change in output I'm going to
just write it this way change in output when you change workers and we're going to be thinking about changing the number of workers by one unit each time okay so going from zero to one workers or one to two or two to three okay so it's a change in output when you change workers and again if you've if you've stuck with me to this point in the class you've heard marginal enough that you know what that is you've heard it in the context of marginal cost and and marginal benefit and those things so let's calculate our marginal product where we need to go from zero to one so I'm not going to calculate it for zero but once we go from zero to one we see that our total output goes from zero to fifty so the marginal product of that first worker is fifty if we look at going from one worker to two we see that output goes from fifty to ninety so the marginal product of that second worker forty if we go from two to three we see that output goes from 90 to 120 so the marginal product would be thirty that's how much output changes by 120 to 140 it's going to be twenty and then ten there's what marginal product looks like and you can see that what happens is as we hire another worker at the margin the output that we get out of the workers is going down okay now let's think about why marginal product declines first let's talk about how marginal product shows up on this picture so if we look at this little line segment right here the rise is 50 and the run is 1 so the slope of that little line segment right there would be 50 if we look at this little line segment the rise is 40 and the run is 1 the slope of this little line segment right there would be 40 the slope of that little line segment would be 30 and the slope of this one would be 20 and the slope of that one is going to be 10 so what you see is that the marginal product shows up as the slope of the production function essentially the production function shows us total product it shows us the total amount of output we can 
produce with 1, 2, 3, up to 5 workers, but the slope of that total product curve represents the marginal product. So the question we want to think about is: why is this thing not linear? Why is it the case that when you hire a second worker, output only goes up by 40 units, and when you hire a third, output goes up by even less, by 30? Why is it that the marginal products here are declining as you increase the amount of output that you produce? The reason that happens is something that we call the law of diminishing marginal product. Let's write it right up here: the reason this thing is not linear is the law of diminishing marginal product. That's why marginal product is declining here; it's diminishing as we increase the amount of output that we're producing, and we can see that visually as this production function starting to curve over. Eventually it would curve over and go back down toward the horizontal axis.

Let's talk about why, and this is something that at some point somebody has probably tried to teach you a lesson about. This is something that your mom probably understood. If your mom ever said something like, hey, we've got too many cooks in the kitchen, well, if your mom was an economist, she probably would have said, hey, the law of diminishing marginal product is setting in here, you need to get out. But the idea here is this. Let's suppose that we've got our preparation table here, we're going to make pizzas right here, and then right over here is our oven. We prepare the pizzas right here and then we put them in the oven. And let's suppose that I can make 50 pizzas in an hour; suppose me working by myself is capable of producing 50 pizzas in an hour. But let's suppose we want more than 50 pizzas per hour, so we bring in another person, and let's suppose that other person by themselves would also be able to make 50 pizzas per hour. Well, when we bring them in, now we've got a problem. By ourselves, working with whatever equipment we've got, we can each make 50 pizzas per hour, but if we've got two of us in here, now we've got to share space, we've got to share the equipment that we've got, we've got to share that oven. So let's suppose that I've got my pizzas baking in the oven, and the other person is making their pizzas, but because mine are in the oven, they've got theirs made and they have to wait. They have to wait until mine get out of the oven. And so that bottleneck, the fact that we've got a limited amount of capital, is going to cause productivity to decline, not just for the second worker that comes in but also for me, because once I get my pizzas out of the oven and they put theirs in, I can put mine together, but now I've got to wait on theirs. So when you get multiple workers sharing the same amount of capital, everybody's productivity goes down.

And this is not some economic theory; this is just a fact of the world that we live in. If the law of diminishing marginal product wasn't a real thing, then you'd be able to grow the world's supply of tomatoes in one tomato pot. Let's suppose you've got a plant that's capable of producing, say, 10 pounds of tomatoes every summer, and you think, you know what, that plant's really productive, I'm going to plant another one in the same pot, or maybe I'll plant two more in the same pot, and by planting two more I'm going to be able to get 30 pounds of tomatoes in the summer. What you'll find is you won't. You may get more than ten, but now all of a sudden those three plants in one pot are going to be competing with each other for water and for nutrients and for sun, and each of those plants is going to be less productive than if they were in that pot by themselves. Again, that's not an economic theory thing; that's just the world that we live in.

Now, at very low levels of output you could actually, for a little bit, have increasing marginal product. If we were to think about a production process like framing a house: one person trying to put up the frame for a house is probably going to be less productive than having another person in there to help them. So if you bring that second worker in, then maybe those two together would both be more productive than each of them just working by themselves. But then eventually, probably very quickly, the law of diminishing marginal product will set in. If you were to bring in a third worker or a fourth worker or a fifth worker, and you've only got one hammer, well, the second worker is able to help hold up the ends of boards to nail them in, and things like that, and that's probably helpful, but a third and a fourth and a fifth? You don't need a third person to hold up the other end of the board. So in any production process, the law of diminishing marginal product is eventually going to set in, and once it sets in, it's going to cause this production function to get flatter and flatter. You can have so many cooks in the kitchen that productivity goes down; you can have so many cooks in the kitchen that you don't get anything made.

OK, so what we need to do is clear this off, and then we're going to talk about another example where we link this up with some costs of production. Let's take a look now at how we go from this type of information to linking it up with costs. This is some of the information from the table I had right here: we've got the number of workers, output, and then I went ahead and put marginal product. What we want to do is think about how the number of workers is going to translate into costs. We're really going to be interested in the link between output and costs, but let's start here. Let's think about the cost of the factory; we've got everything per
hour. Let's suppose you can think of the cost of the factory as something like the rent that you have to pay. Now, the rent that you have to pay doesn't depend on the amount of output that you're producing, so it's not going to depend on whether we have zero workers producing zero output. Let's suppose it's $30 per hour; it's going to be the same if we've got five workers making a hundred and fifty units of output, so it's going to be thirty on every row here. Let's think for a second: the story I just told you is that you're renting the building. Let's think about what it would look like if you owned it. Well, keep in mind that owning something doesn't make it free. If you own the building and you're not renting it out to somebody else, then that means you're giving up $30 of rent per hour. So the column here would not look any different if you owned the building than it would if you didn't own it and were renting it from somebody else. If you're renting it from somebody else, it would be an explicit cost; if you own it and you're choosing to use it instead of renting it to somebody else, it would be an implicit cost. But it would be $30 in either case. So that's what our cost of factory looks like.

Let's think about our cost of workers, and let's suppose, just to make it convenient, that the wage is $10 an hour. If we've got zero workers, then clearly the cost of workers is zero. If we've got one worker working for an hour to make 50 pizzas, then our cost of workers is going to be $10. If we've got two workers working for an hour, our cost is going to be 20, and this thing goes up by 10 each time, so it's 30, 40, 50. Let's pretend, just to keep it simple, that those are the only two costs that we've got. Now we can think about our total cost of producing these pizzas. If we think about producing zero pizzas with zero workers, then we have zero cost of workers, but we still have that rent that we have to pay, and so our total cost would be 30 even if we produce no pizzas. If we produce 50 pizzas using one worker, then we've got our $30 cost of factory plus the $10 cost of workers, so this would be 40. 30 plus 20 is 50. You can see that this is going up by 10 each time, so it's going to be 60, 70, and 80. So there's our total cost of producing this many pizzas right there.

Now, what we're really interested in is the relationship between this column and this column. We want to understand how our costs of production change as we change the amount of output that we're producing. So what I'm going to do is graph the relationship between these two. Let's put it right over here: we're going to put the amount of output, Q, on our horizontal axis, and we're going to put total cost, abbreviated TC, up here on the vertical axis. Our total cost goes up to 80 by tens, so we can space these out every 10 units: 10, 20, up to 80, and here's 30. With our quantity, though, we have to be careful as we space it out, because the distances between each of these are not the same: we've got 50, 90, 120, 140, and that one is 150. So you have to be careful with the spacing of that horizontal axis, just like we did on our production function. Now we can graph the combinations of points. If we are producing zero units of output, it costs us 30 dollars; there's a point on what's going to end up being our total cost curve. If we're producing 50 units of output, our total cost is 40, so that's going to be right here. Let me fill these in: 40, 50, 60, 70. If we produce 90 units of output, it's 50; that's going to be somewhere right here. At 120 it's 60, somewhere in here; at 140 it's going to be $70; and at 150 it's going to be 80. And if we connect these, you can see that this thing gets steeper and steeper. There's our total cost curve.

Let's talk about why it gets steeper and steeper, and it has to do with something that we just talked about: the law of diminishing marginal product. If we want to produce more output, if we want to move out in this direction, that means we have to use more workers. Now, when we bring in another worker, we don't get a break on how much we pay that worker; every time we bring in another worker, we've got to pay them $10 an hour. The problem for us is that they're going to be less productive, and not just them but the workers that we've already got; everybody's going to be less productive. So we bring in more workers, they have to share space and equipment, and our costs aren't going down, but the amount of output that we're getting out of our workers is going down, and that's causing our total cost to get steeper and steeper. Now, this is important. If costs increased linearly, then every next unit would cost the same number of dollars to make, but what we see is that as we produce more and more units, it gets more and more expensive at the margin to make another unit. You can probably already see that in a little bit we're going to talk about marginal cost, and the marginal cost will be the slope of the total cost curve, and our marginal cost is rising; this thing is getting steeper and steeper. So we need to think about some other measures of cost, but first I want to change my table here. To think about marginal cost, we need to think about how total cost changes when we change output by one unit. The problem with this table is that output is not changing by one unit; it's jumping from zero to 50 and from 50 to 90. So I want to create another table where my output changes by one unit. Remember, the point of this first table was to look at the impact that the number of workers has, so we had workers changing by one unit; now we want to shift and focus on output. So I'm going to clear this off and put
another table up here, and then we'll take a look at that. Let's create another table. I'm going to put quantity, our output, here, and this time we don't want our output to jump by big chunks; we want it to change by one unit each time. So I'm going to start output at 0 and go up to 10, and we're going to fill in kind of a big table here as we think about some other measures of cost. The next column I want to put up here is going to be total cost, and I'm just going to give you some total cost numbers; then we'll work backwards from what we did in the previous table. In the previous table we started with output, then we had a couple of categories of cost, and then we calculated total cost. Let's do it the other way. Let's suppose that if we produce zero units of output, our total cost is three dollars, and then let me fill in the rest of these: three dollars and thirty cents, three dollars and eighty cents, four fifty, five forty, six fifty, seven eighty, nine thirty, eleven dollars, twelve ninety, and fifteen. I would encourage you not to just watch me do this. If you really want to understand how costs behave, write these down with me and draw these pictures with me, because it's going to be important that you understand why the cost curves look the way they do. You might at this point be thinking, well, gosh, where did you get those numbers? Once we develop this, you'll start to see where those numbers come from.

So we need to talk about some definitions that will be useful to us. Let's first just think about what this cost curve looks like. This is a total cost curve; let me just sketch a small picture of what that thing looks like. If we put total cost on the vertical axis and quantity on the horizontal axis, at zero units of output our total cost is three dollars, and at 10 units of output, up here, it's 15 dollars, so it ends up right up there. And if you were to graph it, and I'm not going to plot all those points, you'd see that our total cost curve looks something like that, just like the total cost curve that we had from our previous table.

OK, now let's think about types of costs. Total cost can be broken up into two different types of cost. First, we're going to be thinking about fixed costs: these are costs that do not depend on output. That is the definition: they don't depend on Q. It doesn't matter how many units of output you're producing. In the previous table, our fixed cost was the cost of our factory; it was $30 per hour whether we were producing zero pizzas or 150 pizzas. So if we want to figure out from this table what our fixed costs are, it's pretty easy, because your fixed cost is whatever your total cost is when output is zero; that's how it looked in our previous table. So our fixed cost here is three dollars, and it's going to be three dollars whether we're producing one unit of output or 10 units of output. I'm just going to put a little ditto mark on each of these rows to remind you that it's three dollars at every level of output.

The second type of cost is what we call variable cost. These do depend on Q; they depend on the amount of output that you're producing. If you're producing zero units of output, your variable cost is zero. In our previous table, our variable cost was the cost of workers, and if we were producing zero units of output, we needed zero workers, so we didn't have any variable cost. So let's put up here what our variable cost is. Variable cost is zero when output is zero. Here's the relationship: total cost is equal to fixed cost plus variable cost. Any cost is either going to be a fixed cost or a variable cost, so the sum of the two has to add up to total cost. If we look at our total cost of 3 dollars and 30 cents, and 3 dollars of that is our fixed cost, then the rest of it has to be the variable cost, so this one's going to be 30 cents. And if we look here, of the 3 dollars and 80 cents of total cost, three dollars is fixed cost, so 80 cents is going to be our variable cost. Here's what variable cost looks like for the rest of them: a dollar fifty, and then two forty, three fifty, four eighty, six thirty, eight dollars, nine dollars and ninety cents, and then twelve dollars. And you can verify that this plus that adds up to this one: three dollars plus twelve dollars adds up to fifteen.

So now we understand the difference between a fixed and a variable cost. What we want to do now is think about some other definitions of cost that are going to be useful to us, and we're going to define something that we call average fixed cost, abbreviated AFC. Average fixed cost is simply fixed cost divided by Q; it tells us on average how much our fixed cost is per unit. Let's calculate that. We can't divide by zero, so we aren't going to calculate it for zero units of output, but if we take our fixed cost of $3 and divide it by 1 unit of output, average fixed cost is three dollars. Three dollars divided by two units of output is a dollar fifty; three dollars divided by three units of output is a dollar. Here's what the rest of them look like: seventy-five cents, sixty cents, 50 cents, forty-three cents, thirty-eight cents, thirty-three, and then thirty. That's what average fixed cost looks like. The next definition we're going to think about is what we call average variable cost, which we will abbreviate AVC. It is variable cost divided by quantity. So let's calculate that average variable cost: we're going to take our variable cost column, which is this one, and divide it by our quantity column.
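The table arithmetic described so far can be checked with a few lines of Python. This is a sketch I've added, not part of the lecture; the dollar amounts are the lecture's, stored as cents so the arithmetic stays exact, and the variable names are mine:

```python
# Total cost for Q = 0..10 units, in cents (the lecture's numbers).
tc = [300, 330, 380, 450, 540, 650, 780, 930, 1100, 1290, 1500]

fc = tc[0]                 # fixed cost: whatever total cost is when Q = 0
vc = [c - fc for c in tc]  # variable cost: TC minus FC, so VC = 0 at Q = 0

# Average measures are only defined for Q >= 1 (we can't divide by zero).
afc = [fc / q for q in range(1, 11)]      # average fixed cost = FC / Q
avc = [vc[q] // q for q in range(1, 11)]  # average variable cost = VC / Q (exact here)

print(avc)  # [30, 40, 50, 60, 70, 80, 90, 100, 110, 120]: rises by a dime each unit
```

Running this reproduces the columns filled in above: AFC falls from $3.00 toward $0.30, while AVC climbs by ten cents per unit, which is exactly the behavior the graphs below rely on.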
We can't divide by zero, so we start at one. Our variable cost here is 30 cents; divided by one unit of output, that's 30 cents. Here, with two units of output, our variable cost is 80 cents; divided by two, that's 40 cents. A dollar fifty divided by three is fifty cents. You can see that this is going up by a dime each time, so it's easy to fill in; it goes up to a dollar twenty. Then we can calculate what we're going to call average total cost, which we abbreviate ATC; it's equal to total cost divided by Q. Let's calculate that. Again, we can't divide by zero. We take total cost and divide it by quantity: three dollars and thirty cents divided by one is three dollars and thirty cents; three dollars and 80 cents divided by two is a dollar ninety. Here's what the rest look like: a dollar fifty, a dollar thirty-five, a dollar thirty, a dollar thirty again, a dollar thirty-three, a dollar thirty-eight, one dollar and forty-three cents, and then a dollar fifty. So this one starts at three thirty, goes down for a little bit, and then starts to come back up; we'll talk about why in a little bit.

The last thing we want to calculate is what we call marginal cost. You've heard marginal cost before: marginal cost is just how much your total cost changes when you change output by one unit. So marginal cost is equal to the change in total cost when you change output, and since our changes in output are always one unit, the denominator, how much Q changes by, is always going to be one. So our marginal cost is just the change in total cost. I'm not going to calculate it at zero, because we need to go from zero to one; we need to change output. Total cost goes from three dollars to three dollars and thirty cents, so the marginal cost of that first unit is thirty cents. If we think about going from one unit to two units, our total cost goes from three dollars and thirty cents to three dollars and eighty cents; that's a change of 50 cents. You can see that this is going up by 20 cents each time, so it's easy to fill in; it goes up to two dollars and ten cents. The marginal cost is telling us, if we produce one more unit of output, how much that adds to our total cost.

OK, that's a lot of numbers there, but for what we're going to do, we're really only going to use these last few columns. Let me also identify what's going on here: notice that your average total cost is equal to average fixed cost plus average variable cost. So we can write it this way: ATC equals AFC plus AVC. Now, you need to make sure that you remember all of the definitions that I've just given you right here. All of them are going to be very important, along with the little trick that we did here; it's not really a trick, it's just recognizing that when output is zero, variable cost is zero. Those things are important to remember, but they're fairly easy to remember, because the average measures always just have Q in the denominator. Any time you see average, you're going to be putting Q down there, at least when we're talking about costs. So these shouldn't be hard to remember. What I need to do is clear off this part, and then we're going to graph what these last few columns look like and see what the cost curves look like for a firm.

Let's graph some of these. We're not going to plot them point by point; I just want to give you a rough idea of what these things look like. We're going to put quantity on our horizontal axis and our costs up here on the vertical axis. Our quantity goes up to ten, so I'm going to put some tick marks out here; there's ten, and that's really going to be the most important one, so we'll put one on here. But like I said, we're not going to graph every point; we're going to graph the beginning and the end and then sketch in what it looks like in the middle. On this side, if you look at the numbers, the highest dollar figure is three dollars and thirty cents, so I'm going to put one dollar here, two dollars here, three dollars here, and then $3.30 would be right up here. Precision is not key; we just want a rough idea of what these things look like.

Let's start with average fixed cost; that one's easy. At one unit of output, average fixed cost is right up here at three dollars. By the time we get to ten units of output, average fixed cost is down here at 30 cents, somewhere right down here. And if you look at what it's doing, it falls very rapidly: by the second unit it's cut in half, so it's going to be a dollar fifty at two units, and then it continues to fall, but it falls slower and slower. So this thing falls quickly at first and then starts to slow down. There's what average fixed cost looks like: it falls rapidly at first and then it starts slowing down.

Let's do average variable cost next. Average variable cost starts at 30 cents, down here, and ends up at a dollar twenty at 10 units, so it ends up somewhere right in here. And it's linear, right? Average variable cost is going up by 10 cents each time, so this thing looks something like this. There's average variable cost. Average total cost starts out at three dollars and 30 cents, right up here, and ends up at a dollar fifty, right over here. But notice what it does: average total cost falls at first. It falls until it reaches a dollar thirty somewhere in the middle, so it's going to fall until it reaches somewhere right around here in the middle, and then it starts to go back up and ends up at a dollar fifty. So it's going to come down here, stop falling, and then start going back up. There's average total cost.

And then let's do marginal cost. Marginal cost starts at thirty cents, right down here, and ends up at ten units at two dollars and ten cents, somewhere right up in here. It's linear, going up by twenty cents each time, so it's twice as steep as the average variable cost curve. Now, when you draw this, here's what I want you to do, and your point may not match up exactly: the marginal cost is going to come up right through the bottom of this average total cost curve. We'll talk about why in a little bit, but this thing comes up linearly, goes right through the bottom of that average total cost curve, and goes right up to there. There's marginal cost. If you didn't hit right on the bottom of your average total cost curve, it's not that critical; from this point on, when we draw them, you want to try to do that, but it's hard when you're just sketching these things out.

So these curves represent what's going on in this table. Let's talk about the features of these cost curves. The first one is rising marginal cost: remember that marginal cost rises, it gets more and more expensive at the margin to produce additional units, because of the law of diminishing marginal product. We can also look at the average total cost curve and see that it's u-shaped. The reason it's u-shaped is this: average total cost is equal to average fixed cost plus average variable cost, so the shape of the average total cost curve is determined by what those two curves are doing, the sum of those two curves. At low levels of output, average variable cost is small and average fixed cost is big, so at low levels of output it's going to
be the average fixed cost curve that really determines what's happening with average total cost, and since average fixed cost is falling rapidly, average total cost falls rapidly. But eventually, towards the middle, average fixed cost and average variable cost are about the same, and from that point on, average variable cost is bigger than average fixed cost, so average variable cost determines what happens to average total cost. And since average variable cost is rising, it starts to push that average total cost curve back up. So it's u-shaped for two reasons: the shapes of those two curves.

Now let's talk for just a second about the average total cost curve and this bottom point. We've actually got a special name for the quantity where average total cost is at a minimum. If we were to just draw the average total cost curve, it's u-shaped and looks something like that, with Q here and costs, in dollars, here. This point right here, where average total cost is at a minimum, we call the efficient scale. This will come back in a later chapter: the efficient scale is the quantity of output that minimizes the firm's total cost on average.

Let's also talk for a second about why the marginal cost curve has to intersect the average total cost curve at its minimum. That's true because of the mathematical nature of averages; it's just a mathematical fact that any marginal curve will intersect the average curve at its minimum. Let's draw another picture where we just look at the average total cost curve and the marginal cost curve. Here's marginal cost, here's average total cost. You can see that any time marginal cost is below average total cost, average total cost is falling, and any time marginal cost is above average total cost, average total cost is rising. Right there is the efficient scale of the firm, because that's the bottom of that average total cost curve. To see what's going on, think about what's happening this way. Let's suppose you've got people in a room, and we figure out the average age of the people in the room; let's suppose the average age is 20. Then all of a sudden I walk into the room, and I'm older than 20. Well, that's going to raise the average. I would be the marginal person that enters the room, and any time the marginal is greater than the average, it pulls the average up. Now suppose the average age is 20 and somebody who's 5 years old walks into the room; that's going to pull the average down a little bit. Any time the marginal is below the average, it pulls the average down. So that's why, any time you draw an average curve and a marginal curve, the marginal curve has to intersect at the bottom point of that average curve.

What we need to do now is clear all of this off and draw just a picture of the typical cost curves that we're going to be thinking about, because what we've done with all of this is really just give you an idea of why cost curves look the way they do. Once we start using these curves, we're not going to go through all of this; we're just going to draw a picture of the cost curves, but you need to understand why the picture looks the way it does. So let's clear this off, and then we'll take a look at the typical cost curves.

Let's draw a picture of the cost curves where we just focus on the main things we're going to be interested in. If we again put quantity on the horizontal axis, we're going to put costs, just dollars, up here. Let's start with our average total cost. The main thing we get from that previous picture is that our average total cost is u-shaped, and the way that I'm going to typically draw it is like that. I'm not too worried that our previous one was real high up here, came down real steep, and then kind of ended up over here; it's just u-shaped, so we're not going to worry about anything more than it being u-shaped. Now I'm going to go ahead and draw the marginal cost curve. Our marginal cost curve in the previous picture was linear; they're typically not linear, they usually have some curvature to them, and remember that it's going to come up and go right through the bottom of that average total cost curve. So if it helps, think about where that bottom is, and then draw your marginal cost curve so that it comes right up through the bottom there. There's what marginal cost looks like. We're also going to be interested in average variable cost. In our previous example average variable cost was linear and just came up like that; typically, average variable cost is u-shaped. And it's an average curve, so the marginal cost comes up through the bottom of the average variable cost curve too. Here's how I want you to do this: put a point over here, quite a ways below your average total cost curve; then your average variable cost is going to come down, hit a bottom somewhere right in here, and then go back up and get kind of close to average total cost. I'll explain why, but it should look something like this: come down, hit a bottom, and then start heading back up. There's average variable cost.

So those are the typical cost curves that we're going to think about, and we're going to use that picture a lot as we go through the next several chapters. All of the important things that we talked about previously are in there: we've got rising marginal cost, we've got u-shaped average total cost, and our marginal cost curve is coming up through the bottom of those average curves. You might ask, well, in the previous picture we had average fixed cost; why isn't that on there? Remember that average total cost is equal to average fixed cost plus average variable cost. What that means is average fixed cost is equal to average total cost
minus average variable cost I just subtracted average variable cost from both sides well what this means is that your average fixed cost is the difference between these two curves it's the difference between average total cost and average variable cost so this vertical distance right there represents average fixed cost that's why I made these wide apart here and then as quantity increases they get closer and closer to each other because your average fixed cost starts out big and ends up small okay so those are the typical cost curves that we're going to use this is something you need to get used to drawing you shouldn't have to stop and think whether the average total cost curve is on the top or the bottom this should be automatic okay let's talk about the difference between costs in the short run and the long run so I've mentioned the short run we talked about our pizza business and in the short run the only way you can produce more pizza is to hire more workers but at that point we didn't talk about what we meant by the short run versus the long run and so here's how we define it there are no fixed costs in the long run no fixed costs in the long run that's how we define the long run the short run is defined as the period of time during which there is some fixed cost so if you think about the fixed costs that we've talked about we talked about your cost of your factory your rent well that's a fixed cost because presumably you've signed a lease that goes to a particular date and so during that period of time there's nothing you can do about it you're contractually obligated to pay $30 an hour for your building but once that lease is up then you can make a decision about what you want your rent to be you can move into a different building a bigger building a smaller building but during the time that that lease is in effect you have a fixed cost and so fixed costs come from fixed inputs so as long as
there's some input that you can't change then the cost associated with that input is fixed for you so if we think about kind of what that looks like let's think about what fixed costs would look like in the long run I'm going to put quantity on the horizontal axis I'm going to stretch this out a little bit I'm gonna put costs up here on my vertical axis let's think about what your cost would look like let's suppose we're gonna build a pizza restaurant okay and we haven't built it yet once we build it we're stuck with the building that we've built at least for a little while that would be a fixed input and we would have fixed costs that are associated with it but we haven't built it yet so we can think about building a pizza restaurant where we would produce small quantities of output and we can think about the average total cost curve for a small plant let's suppose it looks something like this this would be the average total cost for a small plant I'm just gonna put small it's u-shaped I'm not gonna put the marginal cost curve or average variable cost we're just going to think about average total cost then let's suppose we wanted to build a bigger restaurant than that that would allow us to produce more pizzas per day we could have a bigger building size so let's think about an average total cost curve for say a medium-sized one I'm gonna put it a little bit lower and we'll talk about why here in a second this would be average total cost for a medium restaurant size but let's suppose we wanted a building where we could produce even more pizzas than that we want to be able to produce a lot of pizzas we could build a bigger building and it would have its own average total cost curve let's suppose we put it up here and again I'll explain why I made this one the lowest one but let's make this the average total cost curve for a large plant a large pizza restaurant now these are just three of the infinite
number of building sizes that we could build we could build one in between these two and it would look something like this and we could build one in between these two and it would look something like that and you can see that if we were to fill in all of the possible plant sizes this thing would start to just fill in and eventually if we filled them all in it would start to just turn into a big blue bowl shaped thing well if we were to draw a line that kind of represented the lower frontier and I don't know if you can tell I'm trying to draw this in a different color but on the video it may not show up as a different color that red frontier is what we would call the long-run average total cost curve it's kind of what we would call an envelope curve it envelops all of the short-run average total cost curves so all of the short-run average total cost curves have to lie above or they're going to touch in one spot each one is going to touch that long-run average total cost curve but no part of any short-run average total cost curve is going to lie below the long-run average total cost curve so our long-run average total cost curve is much flatter than any particular short-run average total cost curve and all of the short-run average total cost curves lie on or above the long-run average total cost curve now let's think about how we're going to use this long-run average total cost curve if I were to draw another picture here again I'm going to stretch my horizontal axis out here's costs let's draw a long-run average total cost curve that looks something like that long-run average total cost here's Q down here ignore the cost that goes with that picture so let's suppose that this is our long-run average total cost curve but we have built a plant size and it has a short-run average total cost curve that goes with it suppose it looks like this there's our short-run average total cost curve for the plant that we have built let's think about how
that we use this long-run average total cost curve let's suppose we're producing this quantity well if we're producing that quantity let's suppose that's 10,000 if we are producing 10,000 units we can go up to the average total cost curve and we can see what our costs are on average let's suppose they are nine dollars per unit on average each of those 10,000 units cost us nine dollars to produce but let's suppose we want to produce more units than that we want to move production up to 11,000 suppose that's right here here's what this tells us in the short run we're stuck with the plant that we've got built and if we want to produce 11,000 units in the plant we've got our costs on average are going to go up let's suppose they go up to $13 well if we were just going to increase production to 11,000 for a short amount of time and then move it back down to 10,000 well then we're just kind of stuck paying that higher cost on average for a little while but if we were going to continue producing 11,000 units from now on then what this picture tells us is that we can do better than this what it tells us is that we should build a bigger plant size and if we build a bigger plant size we could build a plant with the appropriate average total cost to reduce our average total cost back down to nine this would be short-run average total cost for a bigger plant than the one we've got so what this allows us to do is see what our costs are going to be with the investment that we've got versus what they could be if we were to build a bigger building this works whether we're talking about building a bigger building or if we were to reduce production below 10,000 if we were to reduce production to say 9,000 our cost would also go up the short-run average total cost is u-shaped if you start at the bottom and go in either direction your costs will go up so there may be a time if we wanted to decrease production to nine thousand that we build a
smaller plant sometimes people have the mistaken belief that having too much space is not a problem that it's not nearly the same problem as having not enough space yeah it is if you have too much space that you're not using you're bearing the cost of that that's not free let's talk here at the end of this about what causes the long-run average total cost curve to look the way it does we've actually got some terminology and let's put one final picture in over here we've got quantity we've got costs up here let's draw a long-run average total cost curve I'm going to put kind of a long flat portion on it so here's our long-run average total cost and let's suppose that we think about kind of dividing this up into some sections here so if we look at it it's decreasing up until right about here and at that point it gets flat and then starting somewhere right in here it starts to go back up again okay so we've got some names for different portions of this long-run average total cost curve over this range of output what we see is that as the firm increases output its costs on average are declining we say that the firm is experiencing what we call economies of scale I'm just gonna write economies from this point on as the firm increases output nothing is happening to their cost we would say that we have constant returns to scale constant returns and then from this point on as the firm increases output its costs start to go back up and we would say that the firm is experiencing diseconomies of scale diseconomies what causes economies and diseconomies of scale and constant returns to scale so if we think about what we've done with the short-run curves we talked about why these curves look the way they do and in a lot of cases the answer was the law of diminishing marginal product okay that's why the marginal cost curve looks the way it does that's why it goes up that's part of why the average total cost curve starts to go
back up it's the reason that the average variable cost curve eventually starts to go back up so if you're ever asked why a cost curve looks the way it does and you don't have any idea a really good guess is the law of diminishing marginal product more times than not you'll be right turns out though that the law of diminishing marginal product can't explain any of this and the reason is we're talking about the long run and remember that in the long run there are no fixed costs right so fixed inputs are what caused the law of diminishing marginal product so the answer for what's going on here is something different if you think about what's happening here it's scale what's happening is that as we increase the scale of our production process we get gains in the form of decreasing average costs well what's happening there is that as you increase the scale of your production process it allows your workers to specialize in tasks rather than doing multiple tasks workers can specialize in a particular task and so that allows them to get better if you're the only one producing something like a car let's take something simpler if you're the only one producing something like a shoe then you've got to do a lot of different tasks if on the other hand you're going to increase the scale of your production process so you're producing lots of shoes with lots of workers then you can have each worker specialize in a particular part of the production process and they'll get really good at it and as they get good that increase in productivity causes your costs to go down let's talk about diseconomies well what's happening here is you've got a big production process and you're making it even bigger what that means is that big production processes are difficult to manage so if you've got a small production process and you can manage it yourself well that's relatively easy but as you get bigger and bigger then you have to start
hiring managers and if you think about what a manager does a manager comes in and they're not producing output they're just managing people and so that's adding costs of production without giving you more output and so big production processes require lots more layers of management and that's expensive that drives your costs up on average constant returns it's just the absence of economies or diseconomies okay so hopefully this gives you an idea of how costs behave in the short run and what costs look like in the long run we're going to use this picture a lot what we're going to do now is combine this picture remember that profit is equal to total revenue minus total cost right we've just talked about how costs behave what we need to do in our next chapters is talk about how revenue behaves so we're going to combine this stuff with some new stuff in upcoming chapters and then we'll be able to see how profit behaves for a firm and how that firm is going to maximize profit so I'll see you in a future video
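None of this appears as code in the lecture, but the bookkeeping behind the curves is easy to check numerically. Here is a minimal sketch with made-up numbers (the fixed cost, quantities, and variable costs are all assumptions chosen for illustration, not figures from the lecture); it verifies that average fixed cost is exactly the gap between average total cost and average variable cost, that the gap shrinks as quantity grows, and the marginal-versus-average logic from the ages example:

```python
# Hypothetical short-run cost numbers (assumed for illustration) to check
# the bookkeeping described above: ATC = AFC + AVC, AFC shrinks as quantity
# grows, and the average falls exactly when the marginal is below it.
FIXED_COST = 100                                # assumed fixed cost of the building
quantities = [1, 2, 3, 4, 5, 6]
variable_costs = [30, 50, 65, 85, 115, 160]     # assumed, rising at the margin

total_costs = [FIXED_COST + vc for vc in variable_costs]

for q, vc, tc in zip(quantities, variable_costs, total_costs):
    atc = tc / q                  # average total cost
    avc = vc / q                  # average variable cost
    afc = FIXED_COST / q          # average fixed cost
    # AFC is exactly the vertical gap between the ATC and AVC curves
    assert abs(afc - (atc - avc)) < 1e-9
    print(f"q={q}  ATC={atc:6.2f}  AVC={avc:6.2f}  AFC={afc:6.2f}")

# Same logic as the ages example: when the marginal cost of one more unit
# is below the current average, the average falls; when above, it rises.
for i in range(1, len(quantities)):
    mc = total_costs[i] - total_costs[i - 1]
    atc_prev = total_costs[i - 1] / quantities[i - 1]
    atc_now = total_costs[i] / quantities[i]
    assert (atc_now < atc_prev) == (mc < atc_prev)
```

The second loop is the weighted-average argument from the lecture in miniature: each new unit's marginal cost pulls the average toward itself, which is why the marginal cost curve has to cross the average curves at their bottom points.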
Principles of Microeconomics: Chapter 4, Supply and Demand (Part 1)
in this video we're going to talk about the most basic model in all of economics the model of demand and supply and if you study economics you'll find that this is really at the root of a good chunk of what we do a lot of the explanations for why things work the way they do out there in the world really boil down to understanding demand for something and supply for something and how those two things interact so usually in a face-to-face class the way I would start this chapter is I would ask a hypothetical question of the class I would ask them to consider the answer to this question and the question is who determines the price of the gas that you buy when you go to the gas station and you buy some gas there's going to be a price up there on a sign and I want you to think about who determines that I used to have my students fill out answers to that and give them to me and the answers were always things like well it's the gas station owner they get to pick the price or it's big oil companies or it's Congress it's the president or it's a variety of different people or institutions it turns out that the real answer to the question who determines the price of the gas that you buy the real answer there is nobody nobody's picking that and that leaves people with this kind of weird feeling that that just can't be true but if you really understand how demand and supply work you start to realize real quickly that that has to be the right answer that's not always the case there are lots of different types of markets in which one side or the other the buyers or the sellers do have some control over the price but in the gasoline market and the types of markets we're going to talk about in this chapter it turns out that nobody has any control over the price so we have to understand what's going on there because if you go out and you drive through a town what you're going to see is that the price of gas for different gas stations is all going to be
very close to each other it may be exactly the same in lots of different gas stations and what that leads people to believe is that there's something behind that that there's maybe collusion the firms are getting together and colluding and picking that price so the fact that the price is the same or at least very similar across different gas stations creates the illusion that somebody's doing it and so when I tell a class no nobody's doing that that leaves them with kind of this weird feeling so we need to talk about what's going on with the demand and supply model and let's start by thinking about this word market so I'm going to use that word a lot we need to be very clear about what I mean when I use that word so if we're thinking about a market the definition that we're going to be using is that a market is just a group of buyers and sellers of a particular good or service there are different markets that function differently so it can depend upon the number of buyers and the number of sellers in the market but at the most basic level anytime you've got buyers and sellers you've got a market it could be something very formal like a store where buyers and a seller come together and things are bought in a very structured environment or it could be something very unstructured like somebody standing on the street corner selling drugs as long as you've got a seller and you've got buyers you've got a market we're going to think about in this chapter what we call perfectly competitive markets so we're gonna think about perfectly competitive markets sometimes I will just call that competitive markets usually when I'm teaching this to a class I try to initially use this phrase perfect competition or perfectly competitive markets until they get comfortable understanding that I mean something very particular with that and then usually I'll kind of switch to just calling them competitive markets now if I were to ask a class before we got into what I mean by
this if I were to say what do you think I mean when I say a competitive market most people would say well this is just a market where the firms are competing against each other where there is competition well I mean something more particular than just that so there are going to be some characteristics of a perfectly competitive market the first one is that there are lots and lots of buyers and sellers lots of buyers and sellers each of them is going to be small compared to the size of the market so let's write that and then we'll talk about what that means each is I'm gonna put small in quotes here small compared to the size of the market so let's talk about what that means you are small in a particular market if the amount that you buy or the amount that you sell is small compared to the total amount bought and sold so all of us are small in the gasoline market we can go out and buy gasoline let's say you've got 10 cars at home you can take all 10 of those cars to the gas station one day fill all of them up and that's still going to be a little teeny tiny fraction of the total amount of gas bought and sold in the gasoline market so you aren't going to have any impact on the price your behavior is not going to drive the price up or down and that's true for the sellers also there are going to be a lot of sellers in this type of market and each of them is going to be selling a small share of the total amount of gasoline sold in the market so with a perfectly competitive market there are going to be lots and lots of buyers and sellers each of them small compared to the size of the market that's the first characteristic the second one is that the goods are identical the good or the service offered for sale is identical goods are identical now let's talk about that for a second what matters here is not whether or not the goods are literally identical what matters is consumer perception of the goods so if you think about what's going on here the
the two examples that are typically used by economists as examples here are aspirin versus gasoline let's start with aspirin so if we think about aspirin the if you were to go to say Walmart or go to some store where they sell pain relievers and you go to the aisle where they have that stuff what you're going to see is you're going to see that there is a whole variety of name-brand aspirin and then there's going to be a a brand of aspirin and the name-brand aspirin is going to sell for two or three times as much as the generic aspirin turns out that what's in the two bottles is going to be identical now I realized that there might be aspirin with caffeine or aspirin mixed with ibuprofen or things like that but let's just think about just aspirin turns out the generic aspirin and the name-brand aspirin are identical the the two goods are there's no difference between them one is going to have the same effect on your body as the other and yet consumers do not believe that or at least many consumers don't believe that many consumers believe that the name-brand aspirin is just somehow better than the generic and that results in there being a price difference between the two so that's a case where the goods are identical but consumers do not perceive that they are that wouldn't be a perfectly competitive market then we could think about say gasoline well let's suppose you're driving down the road and you need gasoline you need to buy some gas think about how you make that decision turns out that the gasoline sold by different gas stations different companies is is literally different there may be different additives but consumers perceive it as if it's identical a consumer that needs to buy gas pays attention to the price of the gas they don't care so much where they get it if there's a gas station there and they need gas they're gonna buy the gas and they're going to buy it at whatever gas station typically has the cheapest price so that's an example where the good is 
is literally not identical but consumers treat it as if it is so what matters here is consumer perception about whether it's identical or not not literally whether it is okay so these two characteristics we'll deal with these right now these two characteristics are what I mean when I say perfect competition or a competitive market so in this chapter that's what we're going to be thinking about markets that fit into this category most markets don't fit into this category so if we think about some examples of this type of market we could think about commodity markets so we could think about say the corn market or maybe the soybean market any commodity you're gonna have lots and lots of buyers and sellers and the goods are going to be identical consumers treat corn as if they don't care where the corn came from if you go to Walmart and you're thinking about buying some ears of corn to put on the grill you don't go up to some Walmart employee and say hey I need to know whether this is north Missouri corn or is this southern Iowa corn you don't do that you just see the corn there and if you want to buy corn you buy the corn okay so consumers perceive that the goods are identical there are lots and lots of growers of corn lots and lots of buyers of corn so that's a good example of a perfectly competitive market and again lots of commodities work that way like the soybeans let's think of another example the New York Stock Exchange or any exchange where shares of stock in corporations would be bought and sold that would be a good example of a perfectly competitive market there are lots and lots of buyers of a stock lots and lots of sellers of a stock and a share of stock is just like any other share of stock if we're thinking about say a share of Walmart stock the shares are all the same it doesn't matter when you bought the shares or anything like that the goods are identical and so that's a good example of a perfectly competitive market another good example would be the
gasoline market we've got to be a little careful with that this would be a good example of a competitive market if we're talking about a fairly big town or any city where you've got lots of gas stations clearly there are lots and lots of buyers of gas but you need to have lots of sellers if we were talking about a small town where there's maybe only one or two gas stations then we would see something very different in that type of situation than what we're going to see in this chapter so gas is a good example but we have to be careful with that one because you need to have lots of gas stations before that fits with what we're gonna see in this chapter so you can see that perfectly competitive markets are not really that common so you might wonder well why are we talking about that well it's a good place to start and a lot of markets behave very close to what we're going to see in this perfectly competitive chapter so we need to talk about this before you can think about how things work in other types of markets what we want to do is start with the demand side of the market so let's think about what we mean when we talk about demand so here we're gonna be thinking about the buyers when we talk about supply we're gonna be thinking about the sellers so when we think about demand we need to define something first we're going to think about what we mean when we talk about quantity demanded which I'm going to abbreviate QD sometimes I use it as a superscript sometimes I use it as a subscript but I think that's what I'm gonna probably stick with here so quantity demanded what we mean when we talk about quantity demanded this is going to be the number of units that buyers are willing and able to buy okay and the number of units that buyers are willing and able to buy is going to be dependent upon the price that consumers have to pay for it and there's going to be an inverse relationship between the number of units that buyers want to buy and the price
they have to pay so what we know is that an increase in price is going to result in a decrease in the number of units you want to buy as something gets more expensive all other things equal you want to buy fewer units you don't need an economics class to know that I can ask a principles class on day one if we hold everything else constant and something gets more expensive do you want to buy more of it or less and they'll all say they want to buy less here in a second we'll talk about why there's a very good reason for why that happens the opposite is true if we have a decrease in price there's going to be an increase in quantity demanded right so as things become cheaper you want to buy more of them let's talk about why that happens first let's talk about what we call this we call this the law of demand there are a couple of ways to think about what the law of demand means the law of demand simply says all other things equal when the price of some good or service goes up quantity demanded for that good or service goes down and vice versa so verbally that's what we mean when we talk about the law of demand here in a little bit we'll talk about what demand curves look like and we'll see that demand curves are downward sloping and so another way to describe the law of demand is to say that the law of demand tells us that demand curves are downward sloping more specifically demand curves never slope upward so that's the law of demand let's talk about why why does that happen well there are two things at work here and we're going to talk about them in this class but we're not going to go into them in depth if you were to go on in economics you would talk about what these two things look like graphically and it's actually very interesting to look at here we're just going to kind of describe the two effects that happen let's talk about first what we're going to call the income effect and then we're going to talk about what we call
the substitution effect but first let's go with the income effect and let's talk about it in terms of what happens if the price of something goes up so if the price of something you buy goes up that has a negative impact on your purchasing power and because your purchasing power has gone down you tend to buy less so let's think about it in terms of the price of gas if you were to walk out today and find that the price of gas had doubled then you're gonna have fewer dollars in your pocket to spend on things you're gonna have to spend more on gas you'll probably buy a little bit less gas we'll talk about the substitution effect here in just a second but you're gonna have fewer dollars and so consequently you will buy a little bit less gas because of that okay so that's the income effect let's think about it in the opposite direction when the price of something you buy goes down that leaves more dollars in your pocket to buy other stuff and so you're gonna buy a little bit more of whatever price went down you'll buy a little bit more of that because of the income effect it's like you got a little raise you may know at this point if you've had any background in economics that there are two different types of goods what we call normal goods and inferior goods and so you may be thinking as I said that hold it that's not always true and you'd be right but we'll talk about that later if you don't know what I'm talking about now don't worry about it we'll discuss it here in a little bit so there's an income effect there's also what we call a substitution effect and the substitution effect says that when the price of something you buy goes up other things become cheaper in comparison and you tend to substitute away from the good whose price went up towards the goods that have become relatively cheaper on the other hand if the price of something you buy goes down other things are becoming relatively more expensive and you tend to substitute towards the thing whose price
has gone down so you can see that for most goods both of these effects are going to reinforce this when the price of something you buy goes up you want to buy less of it because it has a negative impact on your income and because you substitute away from the thing that's becoming more expensive so the income and substitution effects are really what's behind this law of demand what we want to do is think about what a demand curve looks like and the way that we're going to approach that is we're going to start with what we call a demand schedule and a demand schedule really just shows us the relationship between the price of the good and how much somebody wants to buy okay so we're gonna think about different levels of price and how much they want to buy so let's think about somebody we'll call them Bill let's think about Bill's demand schedule and let's suppose that we think about some different prices we'll think about the price of pizzas let's suppose we consider a price of zero four eight and twelve and then let's think about how many pizzas Bill buys in a month so let's think about quantity demanded this is going to be pizzas per month let's suppose that if the price of pizzas is zero Bill wants to buy a pizza every day of the month let's suppose he buys thirty pizzas in a month if the price of pizzas goes up to four let's suppose he buys twenty if the price goes up to eight he buys ten and if the price goes up to twelve he buys zero so what we see is that for Bill the relationship between the price and quantity demanded is exactly like we've identified here we could start at a low price and then increase price and we would see that the amount that Bill wants to buy goes down or we could start at a high price and decrease it and we would see that the quantity that Bill wants to buy goes up what we want to do is graph the relationship between price and quantity demanded for Bill and if we do that we're going to put price on the
vertical axis we're going to put quantity on the horizontal axis our prices are 0 4 8 and 12 our quantities go up by 10 let's just put 10 20 30 I'm not worried about whether or not the scale of my vertical axis matches up with the scale of my horizontal axis that's not really worth worrying about what we need to do is just simply graph the combination of points that we've got here let's start with a price of 12 if the price of pizzas is 12 Bill wants zero so there's a point on what's going to end up being Bill's demand curve if the price is 8 Bill wants to buy 10 so there's another point if the price is 4 Bill wants to buy 20 and if the price is zero Bill wants to buy 30 and if we connect those we get Bill's demand curve for pizzas the downward slope of the demand curve represents the inverse relationship between the price of the good and the number of units that Bill wants to buy so this law of demand shows up verbally as this inverse relationship and it shows up graphically as the negative slope of Bill's demand curve anytime one thing goes up and another thing responds by going down we describe that as a negative relationship or an inverse relationship and so graphically it's going to show up as a negatively sloped line here now we're going to be interested when we talk about how markets work we're going to be interested in a market demand curve this is the demand for a particular person a market demand curve is going to be a demand curve that represents lots of different people so what we want to do is add another person here and so we're gonna add somebody else we're gonna pretend as if there are only two people in the market let's add somebody that we'll call Mary so what I want to do is I'm gonna clear off part of this board and then we'll put Mary's demand schedule and graph her demand curve and then we'll figure out what the market demand curve looks like so let me clear this off let's think now about Mary's demand schedule
Mary's demand schedule it's going to look like bills we're gonna use the same crisis let's just call it P here zero four eight and twelve only Mary's gonna want to buy different amounts of pizzas let's put quantity demanded here this will also be pizzas per month let's suppose that Mary if the price of pizzas is zero she wants to buy twenty if the price of pizzas goes up to four she wants fifteen if it's eight she wants ten and if it's twelve she wants five so she has different preferences for pizza than bill does that's fine let's draw Mary's demand curve same prices four eight twelve quantities I'm going to put my tick marks a little bit differently let's go ahead and put ten and twenty but we're gonna be using 15 and five in between there so if the price of pizzas is twelve Mary wants to buy five so there's a point on Mary's demand curve if the price of pizzas is eight she wants to buy ten so there's a point if the price is four she wants fifteen so there's a point if the price is zero she wants 20 so we can connect those and we get Mary's demand curve that Mary's demand curve looks different from bills we have to be careful with the demand curve because if I were to ask you does Mary like pizza more than bill if I ask that in the face-to-face class there is always somebody that says yes and there's always somebody that says no usually there are multiple people but you have to be careful because what you see is that it depends on the price right if the price is low it appears as if bill likes pizza more than Mary but if the price is high it appears that Mary likes pizza more than bill so we would never make any general statement unless at every price somebody wanted to buy more I guess then we could say that that one person liked it more but we're not going to worry too much about comparing these to each other that's not really what we're interested in what we're really interested in is going to be the market demand curve and so let's think about how we're going 
to come up with the market demand curve we want a single demand curve that represents both Bill and Mary so what I tell my students to do is as you're learning how to do the market demand curve and this will also be true with the market supply curve I think the best way to start is to create the market demand schedule once you understand what's going on you don't have to do that every time but if we think about the market demand schedule then it usually makes a little bit more sense let's think about again the same prices 0 4 8 and 12 but now let's think about the total quantity demanded in the market if the price of pizzas is 0 then Bill wants to buy 30 and Mary wants to buy 20 so the total quantity demanded in the market is going to be 50 if the price is 4 Bill wants to buy 20 and Mary wants to buy 15 so the total quantity demanded is 35 if the price is 8 they each want to buy 10 so the total quantity demanded would be 20 and if the price is 12 Bill wants none Mary wants 5 so the total quantity demanded is 5 so now we've got the market demand schedule all we need to do now is just graph that thing so I'm going to kind of stretch my horizontal axis out just a little bit because my quantities are going to be bigger over there than they are in either of these my prices of course are the exact same I've got 0 4 8 and 12 so let's put our quantity down here 10 20 30 40 50 and then we're just going to graph those combinations of points so let's start up here with the price of 12 if the price is 12 the quantity demanded in the market total quantity demanded is 5 so that point is right in there if the price is 8 the quantity demanded is 20 so that's going to be something right about there if the price is 4 the quantity demanded is going to be 35 so that's going to be somewhere right in there and if the price is zero the total quantity demanded is 50 and if we were to connect those we get the market demand curve now let's think about what that thing is that's
going to be a single demand curve that represents multiple people we're pretending right now as if we only have two people in the market and I realize that in this type of market a perfectly competitive market one of the characteristics of this market is that there are lots and lots of buyers and sellers so in this market there would be more than just two buyers but for ease of explaining this we need to just keep it simple so this is a single demand curve that represents the preferences of two different people Bill and Mary let's talk about a different way of looking at what we're doing right here once you understand what a market demand curve is you start to realize that you don't need to go through all of this process we can naturally do it a little bit more easily by just comparing two different demand curves so I need to clear this off and then we'll take a look at what's really going on when we came up with that market demand curve all right let's take a look at another way of thinking about this market demand curve let's put up here a couple of people actually let's put one person over here person one we don't need to worry about what we call them person two we've got price on our vertical axis quantity demanded on our horizontal axis we've got some demand curve I'm going to label it D for person one we've got another demand curve I'll label it D for person two the demand curves don't have to look the same that's fine and then over here we're going to draw a picture of the market demand curve so here we've got price and quantity demanded in the market so let's think about what we did when we were drawing that demand curve that we had previously what we were doing is we were picking a price actually let's call this person A and person B I think that is going to be easier to make the notation work so that's person A and that's gonna be person B so let's call this price p1 so what we did was we picked a
price and we did it with the demand schedule first but we didn't have to put that demand schedule together what we did was we picked a price we looked at the first person and we saw how much they wanted to buy at that price let's call it q1 a and then at that same price of p1 we looked at how much person B wanted to buy we'll call that q1 b and then that told us that over here in the market at that same price of p1 the total quantity demanded was the sum of the amount that this person wanted to buy plus the amount this person wanted to buy it was something out here it was q1 a plus q1 b and that ended up being a point on what was our market demand curve so that's what we were doing a market demand curve is the horizontal summation of the individual demand curves okay so we looked at the prices we looked at a price of 12 a price of 8 a price of 4 a price of zero and then we just horizontally summed the quantities across Bill and across Mary and that told us the total quantity demanded in the market so the market demand curve is the horizontal summation of the individual demand curves and in a competitive market there would be lots and lots of individuals now I would never give you a problem where you had to create a market demand curve with say a thousand individuals that would be too time-consuming and of no value but you'll definitely have to do it with a couple people so that's how you would put that together so now we've talked about why demand curves look the way they do why they're downward sloping what we need to talk about now are the things that cause a demand curve to shift around and I typically call these the determinants of demand these are the things that shift demand curves I'm going to put in parentheses demand shifters they're going to be five of them that we're going to talk about the first one that we're going to think about is income your demand for different goods and services
can change when your income changes even if nothing about the good or service changes so if usually in a face-to-face class what the example I would use would be maybe to think about the number of times you go and see a movie in a month and then think about how that number might change if all of a sudden you won the lottery and you had a big income probably for most people they would go and see more movies or you could think about how many times in a month you go out and eat at a restaurant the other way to think about it would be think about the number of times you go out and eat at a restaurant let's say maybe it's five times a month and then think about what would happen if your income went down you probably go out and eat at a restaurant fewer times in a month now how your demand changes in response to a change in income depends on something I just briefly mentioned earlier in this video and that is whether or not the good is what we call a normal good or an inferior good so let's start with a normal good terms turns out that most goods are normal goods and a normal good is a good for which an increase in your income causes you to buy more of it an increase in your income causes an increase in demand so an increase in income causes an increase in demand and again most goods are normal goods the goods that I gave you going out to a movie or or going out to eat at a restaurant for most Goods if people's income goes up they tend to do those things more often the other side of that coin we need to put up here a normal good is a good for which a decrease in your income causes a decrease in demand so let's write it this way a decrease in income causes a decrease in demand so that's the first type of good that we're gonna think about the second type of good it's what we call it inferior good say inferior goods an inferior good is a good for which an increase in your income causes a decrease in demand as you get as your income goes up you tend to buy less of these 
things the other side of that coin is as your income goes down you tend to buy more of them let's write that definition and then we'll think about some examples of that so an inferior good is a good for which an increase in income causes a decrease in demand the other side of that coin is a decrease in your income causes an increase in demand so if I'm in a face-to-face class and I ask the class somebody give me some examples of an inferior good if I were to ask you to do that right now I would be willing to bet that the first thing that pops into your head I think with the exception of one semester every semester that I have ever been teaching principles of economics micro or macro if I ask a class what somebody give me an example of an inferior good the first thing that somebody volunteers is almost always ramen noodles ramen noodles are kind of the classic example of a good for which when your purchasing power goes up you tend to buy less of it and when your purchasing power goes down you tend to buy more of it but ramen noodles are certainly not the only example of an inferior good there are lots of other examples public transportation is an example typically of an inferior good although it depends on that that will we may come back to that a little bit later other examples might be used clothing for a lot of people used clothing is an inferior good though not for everybody there are some people that collect vintage clothing and for them it's a normal good as they get more money they buy more vintage clothing but for a lot of people used clothing would be an inferior good or say generic brands the story I always tell in my principles class is that when I was an undergraduate student I tended to eat a lot of banquet pot pies I thought they were the greatest thing ever and they were relatively cheap back when I was in college and you could go out and for less than a dollar you could buy a couple of them and I considered it a victory if I could eat a meal for less 
than a dollar and then I went to graduate school at Iowa State University and at graduate school if you've got an assistantship you're gonna earn a little bit of money there you know it's going to be enough to pay for your graduate school and rent an apartment and have a little bit of spending money but you're not going to have a lot of money and so even in graduate school I ate a lot of pot pies and then when I graduated with my PhD and I got my first real job and I went from making not very much to making pretty decent money I remember thinking if I were to ask a class right now what do you think happened to the number of pot pies that I ate everybody says it went down but the reality of it is I think those things are great I still think they're great and when I started making more money I ate even more of them so for me pot pies are not an inferior good and the whole point of that story it really doesn't have anything to do with pot pies the point of the story is that what might be an inferior good for you is not necessarily an inferior good for everybody else it's not that simple for some people a good can be normal and for other people it could be an inferior good and for you a good might be inferior sometimes at some point in your life and a normal good at other points in your life so being a normal good or an inferior good is not a characteristic of the good itself it's a characteristic of your preferences okay so income is the first determinant of demand probably one of the most important let's call that number one the second determinant of demand that we need to talk about is prices of related goods well this is number two prices of related goods so if you think about any two goods pairs of goods for most pairs of goods there probably is not going to be any relationship between them but if you do have two goods that are related to each other the nature of that
relationship is going to be either they're going to be substitutes for each other where you consume one or the other or they're going to be complements where you tend to consume them together so things like peanut butter and jelly those goods tend to be complements things like Coke and Pepsi tend to be substitutes so for example with Coke and Pepsi you don't go out and buy a Coke let's say a two liter of Coke and a two liter of Pepsi and then have a drink where you pour half of one and half of the other that's not what you do you buy Coke or Pepsi they substitute for each other okay so let's think about how these two things relate let's start with substitutes and let's consider a hypothetical scenario where let's suppose every day before class you go and you buy a Pepsi let's suppose there's a Pepsi machine and a Coke machine right next to each other let's suppose that you like Pepsi a little bit more than Coke and so every day before class you go to the machines and you buy a Pepsi and let's suppose that the price of each of them is $1 and then one day you walk up to the Pepsi machine with your dollar in your hand ready to buy that Pepsi and you see that the price of the Pepsi is now three dollars think about what most people would do in that scenario if the price of the Coke is still a dollar most people are going to switch from the Pepsi to the Coke now nothing changed about the Coke right it was a dollar yesterday it was a dollar today nothing about the Coke changed but what did change is the price of that Pepsi and that increase in the price of Pepsi caused you to not buy the Pepsi and switch to the Coke your demand for the Coke increased so with substitutes that's a situation where an increase in the price of one good increases demand for the other running out of space here the other side of that coin is also true with substitutes a decrease in the price of one will
cause you to buy more of that and less of the other so a decrease in the price of one good decreases demand for the other I'm just gonna put a couple quotes there so that's how it works with substitutes let's put here Coke and Pepsi we could also think about two goods that complement each other and the example that I typically use in a face-to-face class would be say hot dogs and hot dog buns so complements with an e they're complements let's suppose that you're walking through the grocery store and you decide that you're gonna grill out tonight and so you pick up a pack of hot dogs and you walk over to buy the buns and when you get over to the buns you see that much to your surprise a pack of hot dog buns cost twenty dollars I don't know about you but I'm not buying a twenty dollar bag of hot dog buns so that increase in the price of the hot dog buns is going to cause me not to buy those buns and I'm probably going to put the hot dogs back so there an increase in the price of the hot dog buns reduced my demand for the hot dogs it's different from this one right so with complements an increase in the price of one good decreases demand for the other good again you need to make sure that you always think about these definitions first in terms of an increase and also in terms of a decrease with complements a decrease in the price of one good increases demand for the other because if one good gets cheaper you're going to buy more of that good and you need more of the other good to go with it so here a decrease in price of one good increases demand for the other got myself hemmed in over here so that's the first two determinants of demand let me clear this off and then we'll talk about the rest of them
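The substitutes and complements stories can be sketched numerically. These demand functions and their coefficients are entirely hypothetical, made up just to show the sign of the cross-price effect: for substitutes a higher price of the other good raises demand, for complements it lowers it.

```python
# Hypothetical cross-price sketches (not numbers from the lecture).

def coke_demand(p_coke, p_pepsi):
    # substitutes: a higher Pepsi price raises demand for Coke
    return 40 - 10 * p_coke + 5 * p_pepsi

def hot_dog_demand(p_dogs, p_buns):
    # complements: a higher bun price lowers demand for hot dogs
    return max(0, 30 - 4 * p_dogs - 3 * p_buns)

print(coke_demand(1, 1), coke_demand(1, 3))        # Pepsi price rises: 35 -> 45
print(hot_dog_demand(2, 2), hot_dog_demand(2, 20))  # bun price spikes: 16 -> 0
```

Only the signs on the cross-price terms matter here: positive for a substitute, negative for a complement, matching the two definitions above.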
Principles_of_Microeconomics
Chapter_4_Supply_and_Demand_Part_2.txt
so the third determinant of demand is your tastes or your preferences and we're not going to spend a lot of time on this one because that we economists typically don't delve too deeply into what determines your tastes or your preferences let's just say that this is concerned with what you like and what you don't like and this can be influenced by advertising it can be influenced by friends it can be influenced by say the results of a medical study let's suppose that maybe there was a medical study where the Journal of the American Medical Association reported that high hotdog consumption was associated with high test scores probably if people believed that that was legitimate more people would go out and eat more hot dogs because they want to do better on tests so it doesn't matter whether or not the research is legitimate or not it has to do with whether or not people believe it so your taste can be changed by advertising friends things like that the fourth one is your expectations about the future so this one's pretty straightforward let's suppose there's some product that you buy and you expect it to get more you expect the price to go up in the future then you might choose to stock up now so an expectation of a higher price in the future could cause you to increase your demand for it right now or vice versa or it could be an expectation of higher income maybe you expect to graduate and you believe that when you graduate you're gonna get a good job because you've worked hard in college and so you decide to buy a car now to replace your old worn-out car then your increase in demand right now would be driven by an increase in a higher expected income in the future now it could turn out that you get to the future and realized that your income didn't go up you didn't get that good job we would never say that somehow that was the irrational behavior we would just say maybe that you're not very good at forming expectations but if we want to understand your behavior 
right now your expectations about the future whether they're correct or not whether they turn out to be correct or not that is an important determinant of demand and then the last one is going to be population we could say the number of buyers in the market number of buyers so this one works just like you would think it would an increase in the number of buyers in the market will shift the market demand curve to the right and vice versa some textbooks just described this as demographics a decrease in the number of buyers would shift the market demand curve to the left if you think about the five determinants that we've talked about the first four are fundamentally different from this one so we've talked about income we've talked about the prices of related Goods we've talked about tastes we've talked about expectations let's start with those four those four will shift an individual persons demand curve to the right or to the left and when and when the individual demand curves shift to the right or to the left then that's going to shift the market demand curve so if you think about what's going on there let's suppose we have a person here with their demand curve and another person with their demand curve and then let me squeeze in here the market it's gonna be kind of a tight fit and let's suppose we've got the market demand curve so here's here's my market there's person a person B I'm not gonna label my price and quantity but you know that price would be up here quantity would be down here here's a demand curve well if this person's demand curve shifts to the right and this person's demand curve shifts to the right then that means our market demand curve is going to shift to the right so the first four determinants of demand shift that market demand curve by shifting the individual demand curves but the number of buyers is fundamentally different it doesn't work like that the number of buyers in the market doesn't shift an individual persons demand curve a lot of 
times in a face-to-face class I will ask a class think about your demand for pizza do changes in the number of pizza eaters in town affect your demand and some people will say yes a lot of people aren't sure but clearly the number of pizza eaters in town doesn't affect your individual demand because if it did I would be able to ask you what is the number of pizza eaters and you'd be able to tell me but clearly you don't care how many other people are eating pizza so the number of buyers doesn't affect your individual demand let's look at how a change in that is going to affect the market demand curve so I'm going to draw person A here let's draw a person B here I'm going to leave a space there because I'm going to add another person let's draw the market demand curve over here so this is the market demand curve that's based upon person A and B okay so we've got two people there's our market demand curve let's add a third person now so suppose another person enters the market if this is the market demand for pizza then this is person C and that's their demand curve for pizza and so now when we look at the market demand curve it's going to be shifted to the right because we've got an additional person this is going to be the market demand curve based upon person A and person B and person C so what happens when another person enters the market is that the market demand curve shifts not because the individual demand curves shift but because we're adding an extra person across multiple people or if this person left the market then the market demand curve would shift to the left so the first four determinants are different from the fifth one all of them shift the market demand curve the first four shift it by shifting individual demand curves and the fifth one shifts it because we're simply adding across a different number of
people let's clear the board and then we'll talk about a very important distinction and that distinction is the difference between a change in demand which we're talking about here and a change in quantity demanded let's take a look at what a change in quantity demanded looks like first so if we look at a demand curve we've got price up here we've got quantity down here there's our demand curve we've talked about why that demand curve is downward-sloping why is there this inverse relationship between price and the number of units that a person wants to buy and what we know is that that inverse relationship is caused by the income and the substitution effects so when price goes up quantity demanded falls and if we were to think about what's going on here if we pick a price like p1 and go over to that demand curve we can look at the quantity demanded at that price so at that price of p1 this person is going to buy q1 units if price were to go up to p2 then the law of demand tells us that quantity demanded is going to fall at a price of p2 they're going to want to buy q2 units so that increase in the price is going to cause a decrease in quantity demanded and we call that a change in quantity demanded okay so this is a change in quantity demanded if we talk about a change in demand then we're talking about something that's different from what's going on there if we're talking about a change in demand then we're talking about that whole demand curve shifting so we start with a demand curve let's put quantity demanded down here if we start with a demand curve let's call it d1 because it's going to shift if we have a change in one of the determinants that we've just talked about income prices of related goods that stuff if one of those things changes then this entire demand curve is going to shift if we were thinking about an increase in demand then this demand curve would shift from d1 to d2 that's a change in demand so if we
think about what causes first a change in quantity demanded versus what causes a change in demand this is caused by a change in the price and what's going on here is that we've got price on the vertical axis if one of the things on the one of these axes changes then we're going to move along this function along this demand curve so if this was a picture of where we had x and y and some function and you change x you simply plug that into the function and you get a new y but the whole function doesn't shift so with our demand curve a change in price is going to cause us to move to a new point on that demand curve we started at that point we moved to a new point on the demand curve that's a change in quantity demanded if something changes that isn't on one of these axes right and remember the price of the good itself is not a determinant of demand we talked about income we talked about prices of related goods goods that are related to whatever this good would be we talked about your preferences we've talked about your expectations if this was a market demand curve we could talk about the number of buyers but if one of those things changes then that's going to cause this whole curve to shift this whole demand curve okay so keep that in mind there you need to make sure there will be times when I use one or the other of these terms and you need to make sure you understand what I'm saying there let's talk now about supply we're going to a lot of what we talk about with supply is going to be real comparable to what we did with demand let's start by thinking about quantity supplied so quantity supplied this is going to be I'm going to use Q s to identify quantity supplied this is going to be the number of units that sellers are willing and able to sell so with demand we were talking about the buyers in the market with supply now we're going to be talking about the sellers what we're interested in is the relationship for a seller between the number of units that they want to 
sell and the price that they can sell them at and it probably would be reasonable if you think about it you would realize that if you're selling something you want to sell it at as high a price as you can sellers like higher prices as buyers we like low prices but if we're selling something we'd rather sell it for more so there's going to be a positive relationship between the price an increase in price will cause an increase in quantity supplied and vice versa when price goes down sellers want to sell a smaller number of units a decrease in price leads to a decrease in quantity supplied and we call that the law of supply now to understand really why that happens you'd need to go on and talk about some principles of microeconomics stuff but at its most basic level at higher prices it's more profitable to sell the good and so firms want to sell more so at this point if you'll just think about it that way I mean you'll be fine we could go through and put together an individual person's supply schedule we could draw the supply curve from that but I'm just going to jump to the supply curve when we looked at the demand schedule as price went up quantity demanded fell with a supply schedule as price goes up quantity supplied is going to rise and what you're going to get is going to be an upward sloping supply curve so we have a supply curve that looks like this this tells us that if price is p1 we can go over to that supply curve and we can see the quantity supplied be q1 if price were to go up to p2 we see that this firm wants to supply a higher quantity of the good ok whatever it is that they're selling here in terms of the market supply curve the market supply curve is derived exactly like the market demand curve if we had a firm with its supply curve that looked like this there's firm 1 and if we had another firm with another supply curve and I drew these supply
curves very similar to each other they don't have to be the same then if we wanted the market supply curve we could think about a price say p1 we could look at the quantity that this firm wants to sell and we could look at that same price of p1 at the quantity that this firm wants to sell we can add this quantity to that quantity that tells us that at a price of p1 over here in our market this would be the sum it doesn't look like my distance is quite right but over here is my market so this would be the sum of these two quantities and that would end up being a point on our market supply curve ok so that would be our market supply curve just the exact same ideas what we did with our market demand curve so the market supply curve is simply the horizontal summation of the individual firm supply curves so now we've talked about why the supply curve slopes upward sellers like high prices what we need to do is talk about what causes the market supply curve or the individual firm supply curves to increase or decrease what causes the the supply curve to shift so these are going to be the determinants of supply so let me clear the board off here and we'll look at the things that shift supply curves alright let's talk about the determinants of supply these are the shifters of the supply curve so these are going to be things that are of interest to the firm if you go back to the determinants of demand that we just talked about those are things that are of interest to buyers consumers income prices of other goods you're the buyers expectations buyers preferences number of buyers in the market these are going to be the things that are of interest to the firm's okay so the first one probably one of the most important one is going to be input prices the inputs are the things that the firm has to purchase to make the output so if it's more costly to buy the inputs then that means it's left less profitable to make the good and we get a decrease in supply so let's just say in an 
increase in input prices will lead let's say reduces supply increase in input prices reduces supply the other side of that coin also holds a decrease in input prices increases supply so that's the first one the second one is going to be technology and the way this one works is that a technology improvement increases supply a deterioration of technology decreases supply if you think about what it means for technology to improve a lot of times I'll ask that in a face-to-face class if I were to ask you what causes technology or what how you would describe what it means for technology to improve a lot of times students will say well it's the the firm is more efficient and that's true but let's think about what that means if I force you to say okay well so what does it mean to be more efficient eventually most students come around to the idea that that it's cheaper for the firm to produce the good and you can think about that one or two ways you could think about that as well if they have the same amount of inputs they can now produce more output than before or if we think about another way it uses they need less input to make a unit of the output a unit of the good both of those are basically saying the same thing from a different point of view essentially improvement in technology is exactly like a decrease in input prices so these are basically the same thing we break them out into two different ones because it can be useful to kind of have a thorough understanding of what's going on here but a technology improvement reduces input prices and shifts that supply curve to the right so that's the first two determinants of supply the third one is going to be what we're going to call the price of substitutes in production let's say prices of substitutes and production and what this one means prices of substitutes in production let's think about a a auto manufacturer car manufacturer and let's suppose that the auto manufacturer can produce let's say they produce either a 
model of car or a model of truck and let's suppose that it becomes more profitable to produce the trucks prices of trucks goes up probably what the firm is going to do is shift resources away from car production into truck production so they're going to decrease their supply of cars so that they can produce more trucks because the price of trucks has gone up so this one works like the prices of related goods did when we were talking about our determinants of demand only now we're talking about other things that the firm can sell it's it's rare for a firm to produce only one good typically firms produce a variety of things and they will increase or decrease the supply of those things based upon how the prices of what they can sell changes so prices of substitutes in production that's an important determinant of supply the fourth one is going to be expectations and now this is expectations of the sellers I'm gonna I'm gonna write that expectations of sellers if you go back to the notes that you're taking as you watch the video in our determinants of demand I probably just wrote expectations but it would be more complete to write the expectations of the buyers in that list if you just have expectations then it appears as if the same thing as a pair is on both lists but this going to be expectations of sellers and this one works just like it did with the expectations of buyers of course we're now talking about the sellers if the firm expects that their output price is going to going to go up in the future they may reduce supply now so that they can sell more in the future or if the firm expects input prices to go down in the future they may decrease supply now so that they can sell more in the future so expectations of sellers could be about the future output price at could be about future input prices but that's going to be a determinant of supply again the expectations can be turn out to be wrong but if we want to understand their behavior right now their 
expectations right or wrong is an important determinant of supply and then finally the number of sellers in the market an increase in the number of gas stations in a town would shift the supply the market supply of gas to the right increase the supply of gas the same discussion that we just had about the first four determinants here versus the fifth determined it also applies these first four determinants of supply will shift the market supply by shifting the individual firms supply curves but this one shifts the market supply simply because you're adding across a different number of firms okay so if the number of firms goes up the market supply curve is going to shift to the right not because the individual firms supply curves shift but simply because there's a different number of sellers what we need to do now is talk about equilibrium so we need to talk about how a market works we've talked about both sides of the market we've talked about the buyers buyers like low prices we've talked about the sellers sellers like high prices the beginning we talked about the fact that neither side has any control over the price in this type of market all right there's so many buyers and sellers that everybody is a price taker they have no control the goods are identical here so what we need to do is figure out well how does price get determined and let's start with a picture of what equilibrium looks like a picture you've probably seen before if we take our demand curve which is downward sloping this is going to be our market so this is the market demand curve let's add here the market supply curve it's upward sloping because sellers like high prices at a higher price they want to sell more let's go ahead and identify the equilibrium and then we'll talk about why that's the equilibrium so your equilibrium is going to be found right here at the intersection of the demand curve and the supply curve if we go over here and look at that the coordinates of that point we get this 
price which I'm going to call p-star and we get this quantity which I'm going to call Q star so P star we would describe as the equilibrium price Q star is our equilibrium quantity let's talk briefly about what an equilibrium is if we think about the word equilibrium it simply means a situation of rest it means that if we're sitting at this equilibrium nothing's going to change our price isn't going to change and our equilibrium quantity isn't going to change if something else shifts in here if we had a change in the supply curve or a change in the demand curve then this intersection is going to move and we would move to a new equilibrium we'll talk about that here in a second but an equilibrium is simply a condition of rest okay where nothing is going to change so let's talk about the characteristics of this equilibrium price we call this the equilibrium price P star is the price I'll say it's the only price where quantity demanded is equal to quantity supplied that's going to be really important if we were to pick some other price like say this price up here P 1 if I go over to the demand curve there's my quantity demanded and then over here's quantity supplied if I were to pick P 2 and go over to the supply curve we hit the supply curve first there would be quantity supplied and there would be quantity demanded you can see that P star is the only price up there where quantity demanded is equal to quantity supplied now let's think about what that means that means that at that price of P star the number of units that buyers are willing and able to buy is exactly equal to the number of units that sellers are willing and able to sell so in terms of how many units buyers want to buy and how many units sellers want to sell that price of P star brings those two things together what we need to do is talk about how the market gets to P star okay so let me clear this off and then we'll talk about how we actually get to that price
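The lecture's claim that P star is the only price where quantity demanded equals quantity supplied, and that a shortage or surplus pushes price toward it, can be sketched numerically. A minimal sketch, assuming made-up linear curves and a simple price-adjustment rule; none of these numbers are from the lecture:

```python
# Equilibrium: the one price where quantity demanded equals quantity supplied.
# Hypothetical linear curves: Qd = a - b*p (buyers), Qs = c + d*p (sellers).

def equilibrium(a, b, c, d):
    p_star = (a - c) / (b + d)   # solve a - b*p = c + d*p for p
    q_star = a - b * p_star      # plug P* back into either curve
    return p_star, q_star

def qd(p): return 100 - 2 * p    # demand slopes down
def qs(p): return -20 + 4 * p    # supply slopes up

p_star, q_star = equilibrium(100, 2, -20, 4)
print(p_star, q_star)            # 20.0 60.0
print(qd(30), qs(30))            # above P*: Qs > Qd, a surplus
print(qd(10), qs(10))            # below P*: Qd > Qs, a shortage

def adjust(p, step=0.05, rounds=200):
    # a shortage (Qd > Qs) pushes price up, a surplus pushes it down,
    # until the gap vanishes at P* -- the upward/downward pressure story
    for _ in range(rounds):
        p += step * (qd(p) - qs(p))
    return p

print(round(adjust(5.0), 4))     # starts in a shortage, climbs to 20.0
print(round(adjust(40.0), 4))    # starts in a surplus, falls to 20.0
```

Note that no seller in the loop ever "sees" the whole picture: each step reacts only to the local shortage or surplus, which is exactly the point the lecture makes about how markets reach equilibrium.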
okay let's talk about how a market is going to get to that equilibrium so let's start with a picture of a demand curve and a supply curve and let's think about what's going to happen if the price is not at the equilibrium price let's start down here let's suppose we have a price like p1 so let's think about what the buyers want versus what the sellers want in terms of the number of units they want to buy versus sell if we go over from that price we're going to hit the supply curve right here this is quantity supplied if we go over at that price to the demand curve we hit it right over here this is quantity demanded so this is going to be a situation where the number of units that buyers want to buy is bigger than the number of units that sellers want to sell and of course we call that a shortage some textbooks refer to it as excess demand I'm typically going to go with this term shortage now let's think about what a shortage looks like in the real world so a shortage is a situation where let's suppose you're a seller and we want to sell some units today and so we make those units in the morning and we put them out on our counter during the day and we open up the store in the morning and we've got a price on them and people come in and start buying them and let's say by lunchtime we've sold out we don't have any more to sell and people continue to come into our store and say hey I wanted to buy one of those do you have any and you have to say no I don't have any I ran out about 11:30 so for the rest of the day people come in and tell us over and over I wanted to buy one but you don't have any in other words this is the number of units that our sellers are going to want to sell in the market but buyers want to buy a larger number of units you don't have to have had an economics class to understand that that's going to give you an incentive you're going to realize real quickly you can sell
these units for more it's going to put upward pressure on price so a shortage creates an incentive let's say this a shortage is going to create an incentive for sellers to increase price shortage there's going to be an incentive for sellers to increase price this incentive is going to come not just from the sellers it's going to come from the buyers because if you're a buyer if you want to buy something and then you can't get your hands on one because it's always sold out there's a way that you can buy one if you offer to pay more so if you think about the Super Bowl it's always sold out well what that means is if you want to pay face value for a ticket yeah you're not going to be able to get in but you could go to any Super Bowl you want to go to all you have to do is just be willing to pay extra for the ticket so this creates an incentive for sellers and buyers to raise price actually I'm just going to use this term I'm going to say that there's upward pressure on price if that upward pressure is going to come from the buyers that's going to come from the sellers so let's think about what happens let's suppose sellers start to increase price let's suppose they increase it up here to p2 well that's going to create a change in quantity supplied and quantity demanded now there's quantity supplied here's quantity demanded they're still going to be a shortage that shortage has gotten smaller but there's still a shortage that means there's still an incentive for sellers and buyers to increase price they're still going to be upward pressure on price so this will continue until the shortage goes away so this continues till there's no shortage and that happens when price has gone up right there to P star at that point what that looks like in the real world is you have you want to sell a certain number of units you put them out on your counter and throughout the day people come in and buy them and by the time you close you've sold the last one nobody else comes in and 
wants to buy one you've sold the number of units you wanted to sell there's no incentive at that point to increase or decrease your price so anytime price is below the equilibrium there's going to be upward pressure on price and that's going to continue until price goes to P star let's think about what happens if the price starts out higher than the equilibrium so let's do the same thing have demand and supply let's start let's put p1 now up here if we go over to the curves we hit the demand curve first there's quantity demanded that's the number of units buyers want to buy if we go over to the supply curve we hit it right here there's quantity supplied and we call this a surplus we see that at p1 quantity supplied is higher than quantity demanded the number of units that sellers want to sell is higher than the number of units that buyers want to buy there's a surplus and so if we think about what that looks like in the real world or let's suppose that you're a seller and you want to sell some units let's suppose we want to sell ten of these pins and so we've made ten pins this morning we put them out on our counter and we put our price out there let's suppose that price is $4 per pin and then we open our doors and we want to sell all of those pins and let's say somebody comes in in the morning and buys one and then we don't have anybody that comes in maybe they come in people come in and look but they don't buy anything and maybe we sell one or two during the afternoon but at the end of the day when it's time to close down we've sold three pins and we wanted to sell ten so there's still seven sitting there and having those seven pins sitting on a counter doesn't put dollars in your pocket because we wanted to sell those ten pins today so we can sell another ten tomorrow so the fact that we didn't sell them is going to create downward pressure on price it's going to give us an incentive to decrease our price and you don't need to have had an economics class to 
understand that if you're not selling as much as you want to sell there's a very easy way to sell more and that is to decrease price so a surplus does the opposite a surplus is going to create an incentive for sellers to lower price we're going to just describe that I typically describe it as it creates downward pressure on price so we can decrease price if we decrease price a little bit here notice that the surplus gets smaller but the surplus doesn't go away until price falls right down here to that price of P star where quantity demanded and quantity supplied are exactly equal so this continues until there's no surplus so anytime price is below the equilibrium there's going to be upward pressure on price that will drive it up to that equilibrium anytime price is above equilibrium there will be downward pressure on price that drives it down to the equilibrium let's talk about this for a second one time several years ago I was talking about this in a face-to-face class and I got to this point where I was talking about there being an incentive for sellers to lower price and one guy raises his hand in the class and I called on him and he said no I wouldn't do that it's not how I would behave and I said well what do you mean he said well I'm not gonna let the market push me around that's why I'm going to school I'm gonna learn how to be a better business person than to have to do this and so we talked about that it turns out that in this type of market you can't be a better business person because let's think about what happens let's suppose you say to yourself okay I only sold three pins today but I'm not gonna let the market push me around I'm gonna keep my price up there where it was and I'm gonna sell more pins tomorrow here's the problem other sellers are going to start lowering their price if other sellers aren't selling however many they want to
they're going to start lower lowering their price and so now if you're the one person sitting there with seven or $5 pins and everybody else has lowered their price now think about how much you're gonna sell now on the second day when you've got the highest price you're not gonna sell any because your pins are identical to everybody else's and there are lots and lots of sellers buyers aren't stuck buying these from you they can buy from any of the other sellers you don't have a better product your product is identical to everybody else and buyers know that so what happens is if you keep your price high while the other sellers lower theirs you sell zero in other types of markets which you would talk about in a principles of micro class there might be some ability to try to resist it not in this type of market so we call this the law of demand and supply the law of demand and supply says that the price of any good or service will adjust if it's allowed to the price of any good or service will adjust until quantity demanded is equal to quantity supplied in other words if there's ever a surplus or a shortage in a market the price will adjust to eliminate that surplus or shortage there was another time that different class there was a young lady sitting on the front row and I got to this point in the class and she raised her hand and and she said here's what happens at the gas station I work at she said that what happens is that she ran the register and she said that the periodically throughout the day the owner of the gas station would call her up and the owner would ask what's happening at the pumps and she would say one of three things she would say oh well we're selling a lot of gas today or she would say I word we haven't been selling very much or she would say that's about an average day she said that if she said that there was a lot of action at the pumps they were selling a lot of gas that the owner told her okay raise the price a few cents and I'll call back 
here in a little while that is exactly this exactly this right if if you're selling more than you want to sell this gives you an incentive to raise your price or if he called up and she said I we haven't been selling much gas this morning he would say okay well lower the price a few cents and I'll call back exactly that or if he called up and she said we're we're some kind of an average day he would say okay well just leave the price of there I'll call back a little bit later that's how this works that's how if things work in a competitive market like this notice that he didn't ask what's the gas station across the road selling it for or what's gas being sold for on the interstate or what are they selling gas for what's the price of gas in st. Louis or the price of gas in Chicago right none of that matters what matters is what's happening at your pumps because that's the only thing that puts dollars in your pocket in this type of market it can be tempting what you tend to observe we'll talk about this a little bit later you tend to observe consistency across gas stations in terms of their price but that's just showing us what's happening in this picture the price is going to be driven to this across all of the firms that does not mean that those firms are somehow getting together behind the scenes and deciding what price to charge that's not what's going on not in this type of market so as long as the price is allowed to adjust the quantity demanded and quantity supplied are going to come together now keep in mind that what's going on for the buyers and the sellers is that nobody sees this picture right no gas station owner is looking at a picture of what the market demand curve in the market supply curve looks like in town they don't need to it's that the shortage or the surplus creates an incentive for them to push price in the direction at needs to go to get to the equilibrium and when that shortage vanishes there's no longer an incentive to push price up they 
don't need to see that somehow on some picture we've reached that intersection that's not how it works and if there's a surplus it creates downward pressure on price until that price Falls to P star and then that downward pressure vanishes nobody needs to see this picture right that's the beauty of how a market works let's talk now about now that we understand let's draw a final picture of a market so what we've got here is a demand market demand curve a market supply curve a market in equilibrium is going to have this price P star and this will be the quantity Q star the number of units that is bought and the number of units that is sold in this market all of the sellers are going to sell for this price of P star what we need to do now is think about what happens if either the demand curve or the supply curve changes we've talked about five things that will shift the demand curve and five things that will shift the supply curve if one of those determinants of demand or supply or both changes then that's going to move our equilibrium so we need to talk about how to analyze a change in equilibrium so I'll clear the board and then we'll take a look at that when you start thinking about analyzing a change in equilibrium at least at the beginning it's useful to keep in mind that there are three steps to doing this the first step that you need to think about is you have to think about which curve shifts and that's why it's important to make sure that you know the determinants of demand and the determinants of supply that that's one of those things that I always tell my classes you need to have those lists memorized within each of those determinants so remember when we talked about the way that income affects your demand it depends on the nature of the good it depends on the nature of your preferences towards the good it depends on whether or not it's a normal good or an inferior good so just knowing that income is a determinant of demand isn't quite enough you need to 
know the difference between a normal and fear you're good you need to know in terms of the prices of related goods you need to know if the two goods are complements or substitutes so there can be a lot that goes into understanding this but the first step is just figure out which curve shifts and that's easy if a determinant of demand changes the demand curves get a shift if a determined supply changes the supply curve will shift next step is to figure out which direction so is the demand curve get a shift to the right or the left or if it's the supply curve that's shifting which direction is it going to shift and then the third step is simply to draw the picture let's just say draw it and figure out what's going to happen to the equilibrium price and quantity and the process is actually very easy it's not hard to understand or even hard to work through as long as you know the determinants of demand and supply let's talk for just a second about how you should be shifting these curves or how you should be thinking about the curve shifting let's start with demand so if we were to think about a demand curve like d1 here's a demand curve if you're thinking about an increase in demand I want you to think about that as a rightward shift of the demand curve okay so this would be an increase that's an increase in demand you want to shift it to the right or if you're thinking about a decrease in demand you want to think about that as a leftward shift to say d3 this is a decrease at this point I don't want you to think about shifting up or down a lot of times it's intuitive for students to think about an increase in demand as being an upward shift and a decrease in demand as being a downward shift but I don't want you to think about it that way you might look at it and say well gosh if I had been if I had drawn my arrow straight up like that that's okay my demand curve would still be in the right place or if I said draw a decrease in demand and you were thinking about this as 
a downward shift you might be thinking well I'd have my demand curve in the right place and that's true for demand but if we're talking about supply here's a supply curve if you have two let's call that s one if you're going to draw an increase in supply here's what an increase in supply looks like it's right here oops s two that is an increase in supply so if you're thinking about these as going up and down then when you thought about your increase in supply you would draw your increase in supply above the old one which would give you the wrong answer so this is an increase in supply a decrease in supply is a leftward shift there's a decrease call that s three see as long as you think about these increases shifting to the right and decrease is shifting to the left you won't miss them but if you're thinking about up and down then you better think about it backwards when you get to supply because you'll miss every question and I cease unfortunately I see some students that miss every one of the supply questions because they're shifting they're drawing their picture the wrong way so my advice is to always think about increases of shifting to the right think about that real number line where you've got zero in the middle and then as you move to the right on that number line the numbers increase and as you move to the left of the numbers decrease that's that's the easiest way to think about it so now let's do an example let's start with a picture of a market in equilibrium so we've got a demand curve we've got a supply curve let's suppose let's go ahead and identify our initial equilibrium I'm going to call it point a here's our price our initial price of p1 and our initial price of q1 so this is a market in equilibrium we're gonna start at a price is equal to p1 quantity is equal to q1 and I always will any time I'm writing out the story of what's going on I always write it out the same way every time it's useful to be very methodical about how you write this out and 
so I if I were you I would get in the habit of kind of doing it the same way so we're starting today let's suppose that this is let's say this is the let's do the market for hot dogs this is the demand and supply for hot dogs and let's suppose that the price of hot dog buns goes down so we have a decrease in price of hot dog buns now you have to think about okay is that on the determinants of demand or determinants of supply we've got to start by thinking about which curves going to shift so you have to think about how hot dog buns are related to this market and clearly hot dog buns and hot dogs are going to be complements so if we have a decrease in the price of hot dog buns consumers are going to buy more hot dog buns and that's going to increase the demand for hot dogs okay so our demand curve let's label our initial demand curve d1 our demand curves going to shift to the right to d2 we get an increase in demand now notice that our equilibrium so let's say increase in demand our equilibrium is no longer at Point a now our equilibrium is going to be right up here at point B so we've gone through the first steps we figured out that it's the demand curve that's going to shift we figured out which direction that demand curve is going to shift and then the final step is to figure out the coordinates of our new equilibrium that tells us what's going to happen to price price goes up from p1 to p2 and our quantity of hotdogs bought and sold goes up from q1 to q2 so we have it let's new equilibrium at b and b new equilibrium at b we have an increase in price and an increase in quantity q1 to q2 from p1 to p2 so a decrease in the price of hot dogs buns is going to drive up the price of hot dogs it's going to increase the quantity of hot dogs bought and sold let's clear this off and then do another example or two let's start again with a picture of a market in equilibrium get demand supply label our initial equilibrium at Point a initial price of p1 and an initial quantity 
of q1 so we're going to start at a price equals p1 quantity equals q1 let's suppose the number of sellers in the market goes up so let's suppose that maybe this is the market for gasoline and a few new gas stations get built in town okay so the first step is to figure out which curve is going to shift the number of sellers is a determinant of supply so the supply curve is going to be shifting if we think about which direction it shifts if the number of sellers goes up that increases supply so we're going to shift this supply curve to the right from s 1 to s 2 there's an increase in supply so let's write out here increase in the number of gas stations number of sellers that's going to lead to an increase in supply from s 1 to s 2 let's identify our new equilibrium our new equilibrium now is down here at point B and if we identify the coordinates of that point that gives us the new price which is going to be right down here at P 2 that increase in the number of gas stations is going to drive the price of gas down our quantity is going to go from Q 1 to Q 2 so it's going to increase the quantity so we get a decrease in price and we get an increase in quantity of the good which is gasoline more gasoline will be bought and sold and that will take place at a lower price so you can see these are very straightforward there are 5 determinants of demand I could change and 5 determinants of supply most of the time I will simply change one determinant and so it's just a matter of figuring out which list is it on is it a demand thing or a supply thing which direction is it going to shift the curve and then just draw the picture let's talk about what happens if both of them change let's suppose a determinant of demand changes and a determinant of supply changes well in that case let's start with an equilibrium so we'll start with a demand curve and a supply curve just like normal an initial equilibrium at Point a an initial price at p1 and an
initial quantity at q1 now let's not I'm not going to worry about giving you a determinant of demand and a determinate of supply and figuring out which list it's on and all of that let's just kind of cut to the chase and say suppose that we have an increase in demand increase in demand and a decrease in supply so there's an increase in demand and a decrease in supply let's draw that let's draw our increase in demand from d1 to d2 there's our increase in demand let's draw our decrease in supply so if our supply curve shifts to the left from s1 to s2 there's a decrease in supply let's label our new equilibrium our new equilibrium is going to be at the intersection of our new demand curve and our new supply curve which is right up here at point B if I were to ask you now what what happens to price let's start there you can see that price is going to be going up here to p2 so this is definitely going to drive up price the increase in demand causes price to go up a decrease in supply causes price to go up both of these things will push price up suppose I were to ask you what happens to quantity well if you're looking at my picture it looks like quantity if I were to draw a dotted line down there it looks like my quantity is going to end up right where it started if you're drawing this picture along with me it could be the case that your intersection is somewhere over here or maybe your intersection is somewhere back over here so if you're drawing the picture you could have quantity that's higher q2 or a quantity that's lower so let's think about what's going on there notice that where this intersection ends up depends on exactly how much I would I shifted the demand curve to the right compared to how much I shifted the supply curve to the left if I hadn't shifted my supply curve very far let's suppose I only shift it a little bit so that it was right there then there would be my intersection and my new quantity would be back here or if I had shifted my supply curve a 
lot suppose I shifted it back here then my intersection would be back here in my new quantity would be less than where it started or we could think about how much I shifted the demand curve I could have shifted a lot or not shifted it very much and you can see that that's going to determine exactly where that new intersection is so the effect on quantity is ambiguous I'm going to say just put a question mark here for quantity it depends on how much the supply curve and the demand curves shift relative to each other so if to determine its change if a demands determinate changes and a supply determinate changes you'll be able to tell what happens to one but not both of these okay it depends on the combination of increases and decreases sometimes you'll be able to tell exactly what happens to price but not quantity other times you'll be able to tell exactly what happens to quantity but not price so that's what happens if we have two determinants shifting let's talk for just a second about something that tends to come up especially with gasoline there tends to be some conversation about price gouging and price gouging is the idea that somehow firms are colluding against the buyers in the market and using some power that they have over price to drive price up and in this type of market remember we started this chapter by talking about the fact that neither the buyers nor the sellers have any control over price so the idea that there's any type of but gas stations or somehow colluding with each other if you've got a a town or a city where there are lots of gas stations that's not going to be going on if you have a small town where there's just one or two gas stations then those gas stations are going to have a little bit of control over price and if you were to go on and talk about some more microeconomics stuff if this is a micro class and you're going to be studying microeconomics you'll talk about that later if this is a macro economics class then you're probably not 
going to be thinking about that but at this point in this type of market there's not going to be collusion taking place there's too many buyers and too many sellers in the market and the goods are all identical so nobody's going to have any control over price but remember the fact that the price is driven to one price tends to create this idea that there is collusion going on when you drive into a town and all the gas stations have a price that's very very close to each other that creates this impression that somehow they've gotten together and decided on that price that's not what's going on in this type of market let's finish up by talking about inferring what happened to price and quantity based upon what you observe in a market suppose I said that you observe equilibrium price rising and quantity falling and I ask you what's the most simple explanation for that the way that you would solve a problem like that is to start with a picture so we've got a demand curve a supply curve here's price here's quantity if we were to start with an equilibrium price of p1 and an equilibrium quantity of q1 and then observe that the price goes up and quantity goes down then if we have a higher price but a smaller quantity that's going to put us somewhere back in here and so the simplest explanation for what would cause an increase in price and a decrease in quantity would be that the supply curve shifted to the left we had a decrease in supply that would move our equilibrium from point A somewhere up here we're at a higher price and a lower quantity so essentially this is kind of working backwards from what we're doing over here here we're starting with a determinant changing we're figuring out which curve shifts and then we're figuring out what happens to price and quantity here we're starting with what happens to price and quantity and figuring
out the simplest explanation that is consistent with what we're observing so that will give you an idea of kind of the basic probably the most useful model in all of economics this demand and supply model what you're going to find is that a whole lot of the explanations for what's going on out there in the real world is going to boil down to some version of demand and supply we'll talk later on in other classes if you go on in economics we would talk about what happens if there's a small number of buyers or a small number of sellers and you'll see that things are a little bit different than this but in this type of market a competitive market neither the buyer nor the seller has any control over price and any price changes any quantity changes that we observe are going to be coming from shifts in that demand and supply curve so I will see you in the next video
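The inference just described can be sketched as a small lookup table. This is purely an illustration of the reasoning pattern; the function name and the 'up'/'down' encoding are my own, not from the lecture:

```python
# Map an observed change in equilibrium price and quantity to the single
# curve shift that is the simplest explanation for it.
def simplest_explanation(price_change, quantity_change):
    """price_change and quantity_change are each 'up' or 'down'."""
    shifts = {
        ('up', 'up'): 'demand increased (shifted right)',
        ('down', 'down'): 'demand decreased (shifted left)',
        ('down', 'up'): 'supply increased (shifted right)',
        ('up', 'down'): 'supply decreased (shifted left)',
    }
    return shifts[(price_change, quantity_change)]

# The lecture's example: price rises and quantity falls,
# so the supply curve must have shifted to the left.
print(simplest_explanation('up', 'down'))
```

This is the "working backwards" direction: from the observed price and quantity movement to the curve shift, rather than from a determinant to the movement.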
Principles of Microeconomics, Chapter 15: Monopoly
in this video we're going to take a look at how a monopoly maximizes profit now a monopoly is a situation where you only have one seller so if we think about how this fits into what we've already talked about we've talked about perfect competition where there are lots and lots of buyers and sellers and the goods are all identical as a matter of fact we could kind of think about this range of competition I'll call it we've talked about perfect competition let's put that down here on this end perfect competition lots and lots of buyers and sellers the goods are all identical the complete other end of the spectrum is what we're going to talk about now we're going to think about monopoly once we get done with monopoly in another video we're going to talk about oligopoly and oligopoly fits right in here and then we're going to think about monopolistic competition that fits in somewhere down here so we're gonna think about all four of these types of markets so we're thinking about the two ends of the spectrum first a lot of what we've done with perfect competition we'll be able to use that to help us understand what happens in these other markets so let's start with our monopoly discussion and the key here is that monopolies have no competition as a matter of fact let's list the characteristics of this type of market so characteristics we did this with perfect competition the characteristics were that there were lots and lots of buyers and sellers the goods were identical and there was free entry and exit in terms of monopoly there's going to be one firm one seller that has no competition there's going to be one good clearly if there's only one firm there's one good and that good will have no close substitutes now that's going to give this firm what we're going to call market power this firm will have some control over the price in a way that a perfectly competitive firm did not and then the last characteristic of a monopoly is
that there is going to be no entry I'm going to say that there are strict barriers to entry strict barriers to entry these characteristics the fact that there's one firm there are strict barriers to entry we will say that the monopoly is a price maker monopoly is a price maker they get to choose their price they have market power so saying that they are a price maker is I'm going to also describe that by saying that they have market power so that phrase market power essentially means some control over the price the monopoly is going to have as much market power as it is possible to have now let's think about what that means for a second the monopoly is going to be the only seller of a particular good and sometimes people believe that that that means the mark that the monopoly can charge whatever price they want and that's certainly not true the monopoly is going to have as much power over price as it's possible to have but they still can't reach into your pocket and take dollars out you still have to make the decision to buy the good or not and so what that means is the monopoly is going to be restricted by the consumers willingness to pay so they're not going to be able to charge a crazy price if consumers are not willing to pay that price so let's think about before we get into how a monopoly going to make its decisions on on what quantity to produce and what price to charge let's think about some sources of barriers to entry okay so let's call this sources of barriers to entry the first one that we're going to talk about is that the government can block entry so let's just say there are government sources of barriers and those could come in the form of say a patent or a copyright so sometimes the government grants a patent if you've come up with a new idea or a new pharmaceutical then you can get a patent for that and what that patent guarantees is that you'll be the only person that can profit from that particular invention at least for a period of time and it 
might be 18 years it depends on on whether or not or not we're talking about a mechanical patent or a pharmaceutical patent or something like that so a patent or a copyright so if you were to let's say write a song and and record that song and it becomes very popular then you're the only person that can profit for that from that for a period of time if anybody else were to try to take your song and make money off of it or even just use it even if they're not making money off of it if they were to use it they have violated your copyright and you would be able to to pursue that in a court of law and you would typically be able to win if you can prove that it was yours so sometimes the government creates barriers to entry now here's what we're gonna see in this chapter we're gonna see that having a monopoly in a market creates deadweight loss so later on we're gonna be talking about some things the government might want to do to prevent monopolies and so at that point you might think back and say well hold it sometimes the government creates monopolies why would they create it and then try to fight it well what's happening is that the government needs to create some type of incentive for people to be productive and for people to innovate and create new products to to go out and find cures for ailments and so what the government does is they create this this patent or a copyright that is going to be the reward for the firm that comes up with a cure for cancer or somebody who comes up with a very entertaining movie or a very popular book there's that reward if if that reward didn't exist firms would never spend the millions and millions of dollars that it takes to develop some new pharmaceutical drug so there's this fine line we want to create an incentive for firms to innovate and to find new things like a some cure for something the problem is once they've found it they have market power and they're able to use that market power and so that's kind of a tricky 
situation but that's why this exists the government creates these these situations because they want there to be a reward to being innovative another situation where there's a barrier to entry would be if a single firm owns all of a key input so single firm owns all of a key input not really that common but there are some great examples of this so if you were to go back and look at a company called Alcoa there was a period of time during which Alcoa had a monopoly in aluminum production and the reason is that Alcoa had control of all the bauxite and you need bauxite to make aluminum so when they had control of all the bauxite they were the only business that could make aluminum so they had a monopoly another kind of textbook example is De Beers diamonds De Beers owns a vast majority of the most productive diamond mines in the world and so De Beers for all practical purposes has a monopoly in the sale of diamonds so those are our first two sources of barriers to entry third we can have what's referred to as a natural monopoly a natural monopoly is a situation where there are economies of scale over the relevant range of production levels so economies of scale exist let's think back to what economies of scale mean we talked about that when we were thinking about costs of production we talked about if we had costs up here and we had quantity down here and our long-run average total cost curve was always declining as quantity increased then we called that we said the firm is experiencing economies of scale what this means is it's cheaper to produce larger quantities than it is to break that up into smaller quantities and produce it maybe in individual factories so if we were thinking about this being the total quantity in the market if we had one firm producing that quantity then the average total cost would be right up here remember this is the long-run average total cost curve so this would be the average total cost of producing that quantity in one
business one plant we could think about what would happen if we had two plants each making half of that amount well half of this amount would be this here's Q over two so if each firm was making half that amount we could think about what would happen to their costs on average and their long-run average total cost would be up here so you can see that there are economies of scale there is a benefit to getting bigger if we did have two companies there would be an incentive for those two companies to merge and produce all of that quantity in one production facility so that they can reduce their costs on average we tend to see the and things like utilities so if we're talking about delivery of water to households the delivery of water to households through a set of pipes well typically you have one water company in an area and you might say well let's suppose that you had I don't know in your backyard you had a giant lake of clean fresh water and you wanted to sell that piece sell that to people and you wanted to be able to deliver that through pipes to their house well you could go to the other water utility and say hey would you let me borrow your pipes so that I can pump my water through your pipes to people's houses and they're gonna say no clearly so what's gonna happen is you're gonna have to lay another set of pipes to everybody's house well laying two sets of pipes just drives the cost up on average so in a situation like water utilities clearly it's a natural monopoly it's cheaper to have one firm than it is to have multiple firms and and that's really the nature or a result of the shape of the long-run average total cost curve so you can see that these sources of barriers to entry are these aren't things that you go to get your MBA to learn right you you don't businesses don't have lots of market power businesses don't become a monopoly by making good decisions this is really something that's kind of a you kind of look into it or the government you you create 
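The one-plant-versus-two-plants comparison can be made concrete with a hypothetical long-run average total cost curve: a large fixed cost (think of laying the water pipes) spread over output, plus a constant per-unit cost. The numbers here are illustrative, not from the lecture:

```python
# Hypothetical LRATC with economies of scale: fixed cost spread over output.
def lratc(q, fixed=100.0, per_unit=1.0):
    return fixed / q + per_unit

Q = 50
one_plant = lratc(Q)        # one firm produces the whole market quantity Q
two_plants = lratc(Q / 2)   # two firms each produce Q/2

print(one_plant, two_plants)  # 3.0 5.0 -> one plant is cheaper on average
```

Because `lratc` is always declining, producing everything in one plant beats splitting production, which is exactly why a single water utility is a natural monopoly.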
something that is very useful that lots of people want to buy and you have a patent on it that's going to create a lot of market power for you so let's talk now about what profit maximization looks like for a monopoly so here's the key this is really for a monopoly what it boils down to the monopoly faces the market demand curve the monopoly faces the market demand curve we talked about perfect competition perfect competition or competitive markets that's a situation where there are lots and lots of buyers and sellers so no individual seller faces the market demand curve actually in perfect competition each seller faces a perfectly elastic demand curve for their product and the reason is each firm is selling a good for which there are perfect substitutes so if one particular firm tries to raise its price consumers will just go to the other firms that are selling the exact same good in this situation we've got a firm that has no competition consumers can't go to another business to buy the good they either buy it or they don't that's the decision they have to make so the monopoly faces the market demand curve the monopolies we already said is a price maker they get to choose their price competitive firms had no control over the price it went up or down depending upon what happened to market demand there's nothing they could do or what happened to market supply they have no control over but a monopoly does they are limited by consumer willingness to pay let's think about what this means about the marginal revenue curve because what we're going to do is the same thing we did with perfect competition we're going to look where marginal revenue equals marginal cost but now what we're going to see is that this firm faces a different type of marginal revenue curve then a competitive firm did so let's do the same thing we did with perfect competition let's create a little table here that allows us to look at total revenue will calculate marginal revenue we'll calculate 
average revenue and we'll just see what ever or what marginal revenue looks like so let's put up here the demand curve that the firm faces let's start with quantities and let's go from 0 up to 10 and then let's think about price now when we were doing a perfectly competitive firm remember the perfectly competitive firm was small compared to the size of the market so it didn't matter the quantity that the perfectly competitive firm produced they had no impact on the price so that our column our price column for the competitive firm was all the same it was six dollars all the way down now we're putting up here the market demand curve so what we know is demand curves are downward sloping and so if the monopoly wants to sell more a higher quantity they have to lower the price on every unit they sell so let's put here let's start our price here at six dollars and let's just go down by $0.50 each time five fifty five dollars so you can fill the rest of this in by going down by fifty cents each time and we'll draw this demand curve so it goes down to $1 we can draw this demand curve it's very simple the choke price the highest price that consumers are willing to pay is six dollars we call that the choke price because that's the price at which quantity demanded Falls to zero at a price like seven nobody wants to buy any of it so if we draw the rest of that demand curve it's going down it's got a slope of 50 cents it goes down 50 cents every time it goes over one so by the time it gets out here to ten it's down here at $1 so there's what that demand curve looks like it's just a downward sloping linear demand curve like we've worked with before let's figure out what total revenue looks like total revenue is just price times quantity so if the firm sells zero units of course they make zero revenue if they sell one unit at five dollars and 50 cents they make five dollars and fifty cents in total if they sell two units at five dollars each they get a total revenue of ten 
dollars remember these are not profit these are total revenue so here's what the rest of those look like it's going to be thirteen fifty sixteen dollars seventeen fifty eighteen dollars seventeen fifty again sixteen dollars thirteen fifty and ten dollars so look at what total revenue is doing total revenues going up and then it reaches a maximum and then it starts to go down again this is not profit this is just total revenue and and so what's happening here is you have to remember that the way that to interpret this table is not that the firm sells the first unit for 550 and the second second unit for five and the third unit for 450 that's not what's happening this tells us the price they can charge if they want to sell four units if they want to sell four units they have to sell all four of those units for four dollars if instead they want to sell eight units they have to lower the price to two dollars per unit okay now that we've got total revenue we can let's figure out average revenue now remember average revenue is always equal to price and here's why we know that total revenue is equal to price times quantity average revenue is equal to total revenue divided by Q so if we take price times quantity divided by Q the quantities cancel average revenue is just equal to price that's always true so our average revenue here we're not going to calculate we can't divide by zero but if we take our our total revenue and divide it by quantity 550 divided by quantity of 1 gives us 550 which is the price $10 divided by 2 is 5 which is the price so this is just the price all the way down 454 goes down by 50 cents each time so it's the same as this column so there's what average revenue looks like let's figure out what marginal revenue looks like cuz that's what we're really interested in marginal revenue remember marginal revenue is just the change in total revenue when we change quantity okay sometimes a lot of times I write it this way change in total revenue when you 
change quantity but marginal revenue is just the slope of the total revenue curve so what we need to look at we're going to not do anything for zero we need to go from zero to one we see that as if we produce if we produce that first unit our total revenue goes from zero to 550 so our marginal revenue for that first unit is five-fifty if we produced the second unit our total revenue goes from 550 to ten so it goes up by four dollars and fifty cents if we produce the third unit our total revenue goes from ten to 1350 so our marginal revenue is three dollars and fifty cents you can see that our marginal revenue is falling by a dollar each time so it goes down here to 250 and then 150 50 cents and then it goes negative if we take 50 Cent's minus a dollar that's negative 50 cents and then minus a dollar 50 minus 250 minus 350 so there's what our marginal revenue looks like now let's think about what's going on here notice that marginal revenue at all of these production levels down here we see that marginal revenue is less than price let's think back to what happened with the perfectly competitive firm so with the perfectly competitive firm we saw that price and marginal revenue were always equal every time the units or the firm sold another unit they made $6 sell another unit you make $6 sell another unit you make six dollars every time you sell a unit you make six dollars which means your marginal revenue it's always six dollars here this isn't happening for this firm if it wants to sell another unit it's got to lower the price for every unit it sells so it's marginal revenue of the next unit is going to go down and so what we're seeing here is that marginal revenue is less than price we see that for a monopoly marginal revenue is less than price that is important what we want to do is we want to graph the marginal revenue curve but in order to do that I need to clear off this side I'm gonna leave the table and then we'll take a look at what the marginal revenue 
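The revenue table just worked through can be rebuilt directly from the demand curve P = 6 - 0.5Q over Q = 0..10, which also verifies the claim that marginal revenue falls below price:

```python
# Rebuild the lecture's table: demand P = 6 - 0.5*Q for Q = 0..10.
prices = [6 - 0.5 * q for q in range(11)]        # 6.0, 5.5, ..., 1.0
tr = [p * q for q, p in enumerate(prices)]       # total revenue = P * Q
mr = [tr[q] - tr[q - 1] for q in range(1, 11)]   # change in TR per extra unit
ar = [tr[q] / q for q in range(1, 11)]           # average revenue = TR / Q

assert max(tr) == 18.0       # total revenue rises to a peak of 18 at Q = 6
assert ar == prices[1:]      # average revenue is always equal to price
# past the first unit, marginal revenue sits below the price
assert all(m < p for m, p in zip(mr[1:], prices[2:]))
print(mr)  # falls by a dollar each step, from 5.5 down to -3.5
```

The assertions mirror the three observations in the lecture: TR rises then falls, AR equals price, and MR is less than price for a downward-sloping demand curve.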
curve looks like let's draw the demand curve that we've got here again and then let's draw also the marginal revenue curve and see what they look like compared to each other so here's this is going to be the the demand curve that the firm faces so our demand curve starts up here at six dollars and it's linear and by the time we get out here to a quantity of ten it's down at one dollar but we're really interested in this marginal revenue curve so notice that the marginal revenue starts out here at the first unit at 550 it starts out somewhere like this and then by the time it gets out here to a quantity of six or seven it's going to cross the horizontal axis and become negative so the marginal revenue curve actually looks like this here's the demand curve the firm faces there's the marginal revenue curve marginal revenue is below price in other words if we graph this the marginal revenue curves linear also but notice it has twice the slope as the demand curve the slope of the demand curve here is $0.50 when we go down 50 cents over one unit down 50 cents over one unit our marginal revenue would go down a dollar over a unit down a dollar so this marginal revenue curve has twice the slope as the demand curve as a matter of fact in terms of graphing the marginal revenue curve here's the general rule and this is something you need to remember because there will be times when you may be given the demand curve and you have to figure out what the marginal revenue curve that goes with it looks like well here's the rule for doing it and this rule works for a linear demand curve so I'm going to say for a linear demand curve if it's if the demand curve is nonlinear then this isn't going to work but in this class we would be using a linear demand curve so this will work for everything that we're going to do so for a linear demand curve the marginal revenue curve has the same vertical intercept it's the same vertical intercept and twice the slope as a demand curve same vertical 
intercept and twice the slope as the demand curve so drawing a marginal revenue curve if you're given the demand curve drawing the marginal revenue curve is not hard at all so if I were to give you a demand curve that looks like this then you just start your marginal revenue curve up here where the demand curve starts and you give it twice the slope there's the marginal revenue curve that would go with that demand curve or if I were to give you a functional form for a demand curve suppose I said that the demand curve is equal to 10 minus 2q there's a demand curve it has a vertical intercept of 10 and a slope of negative 2 that's just the slope-intercept form of a line well our marginal revenue curve would have the same intercept and twice the slope there's the marginal revenue curve it also has an intercept of 10 but its slope is negative 4 instead of negative 2 so you can see that this general rule is very useful let's talk for just a second about whether or not that rule worked for a perfectly competitive firm so for a perfectly competitive firm that perfectly competitive firm faced a perfectly elastic demand curve for its product the demand curve that the perfectly or that the competitive firm face look like that and what we saw was that the marginal revenue curve was right on top of the demand curve so let's think about whether or not the rule that we've got here works in this case well so the marginal revenue curve and the demand curve had the same vertical intercept it's right here and then the slope of the demand curve if we took the slope of the demand curve is zero two times zero is zero so the marginal revenue curve does indeed have twice the slope of the demand curve because they're both equal to zero so for the perfect perfectly competitive firm the rule still follows the key here is that if the demand curve is perfectly elastic then the marginal revenue curve will lie right on top of it but as soon as this demand curve has any downward slope the 
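The "same vertical intercept, twice the slope" rule can be checked numerically for the demand curve used in the lecture's functional-form example, P = 10 - 2Q, where the rule predicts MR = 10 - 4Q:

```python
# Demand P = 10 - 2Q gives TR = 10Q - 2Q^2; the rule predicts MR = 10 - 4Q.
a, b = 10.0, 2.0
tr = lambda q: (a - b * q) * q        # total revenue
mr_rule = lambda q: a - 2 * b * q     # same intercept, twice the slope

h = 1e-6
for q in [0.5, 1.0, 2.0]:
    # slope of the TR curve at q, estimated by a central difference
    mr_numeric = (tr(q + h) - tr(q - h)) / (2 * h)
    assert abs(mr_numeric - mr_rule(q)) < 1e-4

print("MR matches 10 - 4Q")
```

Setting b = 0 recovers the perfectly competitive case mentioned above: twice a zero slope is still zero, so MR lies right on top of the demand curve.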
marginal revenue curve is going to fall below it okay whatever downward slope the demand curve has the marginal revenue curve will have twice that downward slope okay so now we've got what we need we know what the marginal revenue curve looks like for a monopoly all we need to do is put that together with our cost information and we can look where marginal revenue and marginal cost are equal and that's going to tell us what the firm is going to do so I'll clear this off and then we'll take a look at that so the way a monopoly maximizes profit is the exact same way that every firm maximizes profit we saw in our perfect competition chapter that all firms maximize profit by producing the quantity where marginal revenue equals marginal cost so that's the thing that we're looking for anytime we're thinking about a firm maximizing profit if you're stumped on how to solve a problem and it's a problem where a firm is maximizing profit that's the first thing you need to be looking for figure out where marginal revenue equals marginal cost okay that may involve taking a demand curve and drawing the marginal revenue curve it may involve taking a table of numbers and figuring out total revenue and then figuring out marginal revenue but the goal is always going to be to look where marginal revenue equals marginal cost so let's think about what this looks like for a monopoly so let's draw a picture of the monopoly and I'm going to put up here the cost curves for the monopoly first so let's put up here the marginal cost let's go ahead and put our average total cost curve there's average total cost so there's a picture of our monopoly let's make sure we label it monopoly now notice that if I were drawing a perfectly competitive firm I would have drawn that picture also the cost curves of the firm are not what's different between the different types of markets it's the revenue for the firm that's different between a perfectly competitive firm versus a monopoly versus a monopolistically competitive firm versus an oligopoly so this stuff anytime we draw a picture of the firm most of the time we're going to be drawing this picture so now we want to put the revenue information in there so I'm going to put in the demand curve that the firm faces I'm going to just draw it out here it doesn't really matter don't really worry too much about where it intersects everything now we're going to be looking where marginal revenue equals marginal cost so we need the marginal revenue that goes with that firm so the marginal revenue curve has the same intercept and twice the slope so it's going to come down here something like that there's our marginal revenue curve and again don't worry exactly where it intersects everything there's really only one intersection that we're interested in and that's the one where marginal revenue equals marginal cost so the firm is going to produce the quantity where marginal revenue equals marginal cost that happens right here and in your picture your intersection right there might be above the average total cost curve it might be a long ways from it it doesn't matter that's the only intersection that's important at this point so this is the quantity that the firm is going to produce I'm going to call it QM for the monopoly quantity now let's think about the price that the monopoly is going to charge for this so there's the quantity that they want to produce if this was a competitive firm that's the end of the story because the competitive firm has no control over the price but the monopoly does and what the monopoly is going to want to do is charge the highest price that they can get for that particular quantity and fortunately for the monopoly they know what that is because the demand curve the height of it represents consumer willingness to pay so they know how much consumers are willing to pay for that particular quantity so we simply go up to the demand curve for
that particular quantity and we can see that consumers are willing to pay that amount that is the monopoly price that's the price the monopoly will charge so they will produce this quantity they will charge that price so that's a picture of the profit maximizing decision for the monopoly now let's take a look at another example here what I want to do is focus on the relationship for the monopoly between price and marginal cost so let's draw a smaller picture here I'm gonna put my marginal cost for this picture I'm not going to put my average total cost because I want to focus on marginal cost let's go ahead and put the demand curve that the firm faces it's downward sloping let's put the marginal revenue curve that the firm faces it's right there the firm's going to produce the quantity where marginal revenue equals marginal cost and that happens right there there's the quantity the monopoly produces they're going to charge the price found by looking at the demand curve so we go up to the demand curve they're going to charge this price now let's look at the relationship between price and marginal cost so let's identify the marginal cost of producing this quantity well that's easy all we have to do is go up from this quantity to the marginal cost curve and we hit it right there there's the marginal cost of producing the quantity that the monopoly is producing and what we see is that for the monopoly price is greater than marginal cost okay so for a monopoly price ends up being greater than marginal cost which is not surprising because we also saw that for a monopoly marginal revenue is less than price and the monopoly is equating marginal revenue and marginal cost so this shouldn't come as a big surprise but here's the practical interpretation of what's going on here remember that for a perfectly competitive firm price was equal to marginal cost that meant that when you buy a product from a firm that's perfectly competitive you can be assured the price they charge you
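The whole decision, find Q where MR = MC, then read the price off the demand curve, can be computed for hypothetical linear curves. The numbers here are mine, not the lecture's: demand P = 10 - Q (so MR = 10 - 2Q) and marginal cost MC = 2 + Q:

```python
# Hypothetical monopoly: demand P = a - b*Q and marginal cost MC = c + d*Q.
a, b = 10.0, 1.0    # demand: P = 10 - Q, so MR = 10 - 2Q
c, d = 2.0, 1.0     # marginal cost: MC = 2 + Q

# Solve MR = MC:  a - 2bQ = c + dQ  =>  Q = (a - c) / (2b + d)
q_m = (a - c) / (2 * b + d)
p_m = a - b * q_m        # the price comes off the demand curve, not MC
mc_m = c + d * q_m       # marginal cost at the chosen quantity

print(round(q_m, 2), round(p_m, 2))  # 2.67 7.33
assert p_m > mc_m                    # monopoly price exceeds marginal cost
```

The final assertion is the point of this passage: because MR lies below the demand curve, the price charged is strictly greater than the marginal cost of the last unit.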
is equal to their cost of production well here's what this means for a monopoly they charge you a price that's greater than marginal cost that means when you buy a good from a monopoly you can be assured that the price they charge you is greater than their cost of production this will end up creating some deadweight loss and we'll see that here in just a little bit first let's talk about how we identify profit so let's draw a couple of pictures here remember that profit is equal to the difference between price and average total cost multiplied by Q so let's draw two pictures one of them we're going to have a firm earning a positive profit and then the other one we're gonna have a monopoly earning a negative profit so let's start this one with our marginal cost let's put our average total cost down here kind of low so here's average total cost still u-shaped and marginal cost still intersects average total cost at the bottom of the average total cost curve now let's put our demand curve in here there's a demand curve the firm faces here's the marginal revenue curve the firm is going to produce the quantity where marginal revenue equals marginal cost that happens right here here's the quantity the firm is going to produce there's QM they're going to use the demand curve to figure out the highest price they can charge for it so we go up to the demand curve there's the monopoly price now we've got price and we've got quantity we need the average total cost so if we go up from this quantity to the average total cost curve we hit it right there there's the average total cost of producing that quantity this area would represent the profit that the firm is earning so this is a firm earning a profit that is positive being a monopoly does not guarantee you a positive profit though you could earn a loss if the demand for this product was relatively low so just because you're the only seller of something does not mean that people want to buy it so let's draw a
picture where let's start with our marginal cost I'm gonna put my average total cost kind of high this time I'm gonna put it somewhere right up in here here's average total cost now I'm gonna move my demand curve back I'm gonna move my demand curve down here where it's under the average total cost curve so suppose there's my demand curve I'm gonna draw the marginal revenue curve that goes with that demand curve there's marginal revenue the firm's going to look where marginal revenue and marginal cost are equal and that happens right here so this will be the quantity that the firm will produce there's QM we go up to the demand curve to find the price there's the price they're going to charge that's PM now we need the average total cost of this quantity so we go up to the average total cost curve now we hit it up here there's average total cost and now we see that average total cost is bigger than price so the area of this rectangle is going to be the loss that this monopoly earns this is a monopoly earning a negative profit a loss so again being a monopoly doesn't guarantee you a positive profit it depends on where that market demand curve is relative to the cost curves let's talk about the supply curve for a monopoly and it turns out that this discussion is relatively easy to have because the monopoly has no supply curve the monopoly has no supply curve let's talk about what that means so the monopoly clearly makes a supply decision the monopolist is going to decide how much to sell in all of these pictures the monopoly is choosing a quantity to sell but we have to be careful about labeling anything a supply curve actually we can't label anything a supply curve and let's talk about why so if we were talking about a competitive firm so we draw a picture of a competitive firm here's our competitive firm we've got the marginal cost curve I'm not going to draw any average total cost curve or anything like that the way the competitive firm made its decision is it
looked at where the market price was wherever that market price was all it had to do was go over to the marginal cost curve and that gives us the quantity so if the price is p1 quantity will be q1 if price falls down here to p2 quantity is going to be q2 where if price were to go up here to p3 quantity is going to be q3 so what happens is that that marginal cost curve tells us everything we need to know that's why we can just label that thing a supply curve with a monopoly the marginal cost curve does not tell us everything that that we need to know the marginal cost curve is important for figuring out what the quantity is going to be but notice then we have to use a third curve we've got to use this demand curve to figure out the price so here the marginal cost curve is is helpful for figuring out the quantity but the marginal cost curve doesn't help us figure out the price we've got to use the demand curve so in our monopoly pictures notice we're using three curves we've got to use the marginal cost curve and the marginal revenue curve to first figure out quantity and then we've got to use a third curve the demand curve to figure out the price there are three curves involved in figuring everything out whereas with our perfectly competitive firm there was only one curve involved and so we could just label it a supply curve so what we're saying when we say that the monopoly has no supply curve is we're saying that we we can't label anything a supply curve they still make a decision of what quantity to sell okay it's just that we can't label one of one of these curves a supply curve let's talk about the difference between the short run in the long run so when we discuss perfect competition we spent a long time talking about the firm's supply curves in the short run and what the market supply curve looked like in the short run in the long run and so we spent time thinking about the difference between the short run in the long run we spent time thinking about the 
effect that a change in market demand has first in the short run and then in the long run we don't have to do any of that in the case of the monopolies because there's no entry that takes place in a monopoly that makes things nice and simple because if a monopoly is earning positive profit in the short run like this firm is they can continue to earn that positive profit in the long run because there's no entry there are strict barriers to entry so there's nothing that's going to drive that profit to zero so we don't need to worry about the difference between the short run and the long run because there's no entry there's nothing that's going to end up causing anything to change in that picture I want to clear this off and then we'll talk about a couple of things before we kind of finish this up let's talk about the effect that having a monopoly in a market has on total surplus so let's think about the effect of a monopoly on the efficiency of markets which is something we spent a video talking about earlier what I want to do is draw two pictures here let's draw a picture of a competitive market and a monopoly market so this is going to be a monopoly and this one's going to be a perfectly competitive market not a perfectly competitive firm a competitive market competitive market so let's start by putting in the market demand curve okay now what I want to do is I'm going to draw the same market demand curve in each picture because the market demand curve has to do with the consumers okay so that's not a difference necessarily between a competitive market and a monopoly market now I'm going to draw the marginal cost curve and I'm going to draw the marginal cost curve here like we would have drawn it back when we first learned the demand and supply model I'm going to draw it like this and I'm gonna label it a supply curve so in a competitive market we know now that that market supply curve at least in the short run is the horizontal summation of all of the individual 
firm marginal cost curves this is just a marginal cost curve itself now over in our monopoly picture I can put the marginal cost curve also and it's going to look like this marginal cost curve but I'm not going to label it a supply curve because over there we know we can't label any one curve as a supply curve so I'm just gonna call it a marginal cost curve it still represents the marginal cost of production now the way that a competitive market works is we know that the market price is driven to the intersection of this market demand curve and the market supply curve so we get P star and we get Q star and we talked about consumer and producer surplus we know that our consumer surplus would be this area up here our producer surplus is this area down here all of the area under the demand curve and above the supply curve represents total surplus and we know with a competitive market total surplus is maximized let's talk about what happens though with a monopoly so with a monopoly when the monopoly maximizes its profit it doesn't care where the marginal cost curve and the demand curve intersect it's going to be looking where the marginal cost curve and the marginal revenue curve intersect so we've got to put our marginal revenue curve up there this will be the quantity that the monopoly produces this is QM and they're going to charge a price found on the demand curve so we go up to the demand curve and there's the price the monopoly will charge so now we can use these two pictures to compare what would happen in a competitive market to what happens with a monopoly and the first thing that we can see is that the quantity with the monopoly is going to be smaller than the quantity with a competitive market with a competitive market this would be the free market quantity that gets produced but what happens is the monopolist uses its market power to restrict quantity and drive price up so we see that with a monopoly quantity is
lower than it would be with competition and we see with a monopoly price ends up being higher than it would be with a perfectly competitive market so the monopoly is using its control over price it's using its market power to restrict quantity and drive price up we'll draw a picture here in a second but you can see that consumer surplus is going to be smaller with the monopoly than it would be with the competitive market you can also see it in this picture we'll draw another picture but because the monopoly reduces quantity it's going to create some deadweight loss which is going to be this area right in here these triangles right there you can see the deadweight loss as that area right in there same as what we saw when we were thinking about a price floor or a price ceiling or a tax it's going to create deadweight loss because it pushes us away from that free market quantity let's draw a different picture with the monopoly now and let's just identify some different areas so we can see the effect on consumer and producer surplus so I'm going to put my marginal cost up here let's put the demand curve that the firm faces the marginal revenue curve here's the quantity the firm produces QM here's the price they charge we'll call it p.m.
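The comparison just described can be checked with concrete curves. Below is a minimal sketch assuming a hypothetical linear demand curve P = 100 - 1.5Q and an upward-sloping marginal cost curve MC = 20 + Q (which doubles as the supply curve in the competitive case); all numbers are invented for illustration, not taken from the lecture.

```python
# Illustrative linear curves: demand P = 100 - 1.5*Q and marginal cost MC = 20 + Q
# (in a competitive market the MC curve is also the supply curve).
a, b = 100.0, 1.5   # demand intercept and slope (made-up numbers)
c, d = 20.0, 1.0    # marginal cost intercept and slope (made-up numbers)

def cs(q, p):
    # consumer surplus: triangle under the demand curve and above the price
    return 0.5 * (a - p) * q

def ps(q, p):
    # producer surplus: receipts at price p minus the area under the MC curve
    return p * q - (c * q + 0.5 * d * q * q)

# Competitive market: price driven to where demand meets the MC/supply curve.
q_comp = (a - c) / (b + d)
p_comp = a - b * q_comp

# Monopoly: MR = a - 2b*Q set equal to MC, then price read off the demand curve.
q_mono = (a - c) / (2 * b + d)
p_mono = a - b * q_mono

# Deadweight loss: total surplus lost by restricting quantity below q_comp.
deadweight_loss = (cs(q_comp, p_comp) + ps(q_comp, p_comp)) - (
    cs(q_mono, p_mono) + ps(q_mono, p_mono))
```

With these numbers the monopoly sells less at a higher price, consumer surplus shrinks, and the missing total surplus is exactly the deadweight-loss triangle described in the lecture.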
let's identify our perfectly competitive price which would be right there I don't need to draw it in and here would be the quantity produced in a perfectly competitive market the intersection of the demand and supply curves would be right there so we can label these areas I'm going to label this bigger triangle right up here let's label it big A so A here I realize there's a line cutting it in half but A represents that B represents the area of this rectangle right here here's C D is going to be all of this kind of trapezoid looking area and let's call that E so with a competitive market perfect competition consumer surplus would be A plus B plus C and with perfect competition producer surplus would be D plus E but let's think about what happens with the monopoly let's start with consumer surplus consumer surplus with the monopoly is just area A so the loss of consumer surplus is B plus C compared to perfect competition producer surplus with the monopoly ends up being all of the area under the price and above this marginal cost curve which is B plus D and then finally let's talk about the deadweight loss so this will be the deadweight loss of monopoly this is total surplus that we miss out on because the monopoly is using its market power to restrict the market quantity the deadweight loss would be C plus E now let's think about this for a second having the monopoly in the market creates deadweight loss but the monopoly is doing nothing wrong the monopoly is simply maximizing its profit so when we think about this deadweight loss we're looking at that from society's viewpoint we're saying that the economic pie is not as big as it can possibly be but again that's not because the monopoly is doing anything wrong the monopoly is making as much profit as it can make so the goal of the monopoly is not to maximize total surplus the goal of the monopoly is to maximize its profit that should be the goal of the monopoly it's just that it ends up having this negative side
effect on society that it creates deadweight loss because of that deadweight loss the government may have some desire to step in and prevent there being a monopoly in a market or fix it if there is a monopoly so let's talk about some government policy towards monopoly and let's start by thinking about some antitrust legislation so antitrust legislation you can think about this as the anti market power legislation or the anti-monopoly legislation and really the first piece of antitrust legislation was the Sherman Antitrust law of 1890 and there were lots and lots of other antitrust laws that came along after that but essentially that Sherman Antitrust law gave the government the power to do a few different things to try to prevent this deadweight loss of monopolies try to prevent a firm from having too much market power one of the things that the Sherman antitrust legislation and other pieces of legislation have given the Justice Department the ability to do is to prevent mergers so if there were two really big companies like Coke and Pepsi and they wanted to merge into one big soft drink company they would have to run that by the Justice Department and the Justice Department would most likely say no we're not going to allow that so the Justice Department has the ability to prevent mergers I believe they have to argue it in front of a federal judge so it's not that the Justice Department has a final say I think the companies might be able to appeal it that's beyond what we need to worry about right now so the government can prevent mergers they can break companies up the government has the ability again if they argue before a federal judge and the federal judge agrees with them they can break companies up this is how Bell Telephone was broken up decades ago into what became known as the baby Bell companies and so essentially the Justice Department determined that Bell Telephone had a monopoly in the market for telephone service and it
broke it up into some smaller companies each with a smaller amount of market share and the idea there is it creates competition between those companies and you know if there's just one company they're gonna use their market power but if it's multiple companies competing against each other then it's a different type of market it's an oligopoly we'll talk about that a little bit later so the government can break companies up another thing that the government can do is regulate the monopoly so instead of breaking it up the government could go in and say hey you've got to charge a price equal to this or you need to produce this quantity this is common in the case of a natural monopoly common with natural monopolies let's think about how that can be kind of a challenge though turns out that if we think about a natural monopoly the cost curves are going to be a little bit different than kind of what we've drawn right here we can think about a natural monopoly as being a situation where the average total cost is constantly declining so what the cost curves tend to look like for a natural monopoly is this we tend to have a constant marginal cost and then our average total cost is declining now when your cost curves look like this this means that there is some fixed cost if fixed cost was zero then the average total cost curve and the marginal cost curve would be the same curve but if there's a fixed cost then our average total cost is always going to be declining so we have a picture that looks like this this is the cost curves for a natural monopoly let's put the demand curve that the firm faces let's put the market demand curve now if this monopoly was able to make its own decisions then we would look at the marginal revenue curve which would have the same intercept the intercept would be up here but my marginal revenue curve would come down like
this there's marginal revenue the firm would look where marginal revenue equals marginal cost which would happen right here they'd produce this quantity and charge this price there's our monopoly quantity there's our monopoly price so now let's think about what would happen if say the government came in and regulated the monopoly to charge a price equal to marginal cost say suppose the government said you know what we're not going to let you charge a price that's higher than your cost of production well if they force them to charge a price that's equal to their marginal cost then the price they charge would be right down here the problem is that here would be their average total cost of production if we go up from this quantity here's the average total cost of production well if they're forced to charge a price equal to marginal cost their price would be below average total cost they make a negative profit they exit the market in the long run so the government has to be careful in terms of the regulation that it imposes on a natural monopoly because some types of regulation could drive the natural monopolies out of the market the government could also say you know what instead of producing this quantity you have to produce the competitive market quantity this quantity where the demand curve intersects what would be the supply curve in this market if it were perfectly competitive so they could force them to produce this quantity and charge a price equal to marginal cost but then again their average total cost is going to be higher than the price they can charge they would exit the market in the long run so it's not as simple as you might think for the government to regulate especially in natural monopolies the final thing that we'll think about is let me sneak it right in here the last thing that we could think about the government doing in terms of monopolies is to do nothing one of
the challenges with the government stepping in to fix the deadweight loss of a monopoly is that there's this other thing that tends to happen and we call it government failure there's a lack of accountability a lot of times when the government steps in so if we think about the problem with doing some of these things it's that there's a lack of accountability and so sometimes when the government steps in to try to fix a problem they create a bigger deadweight loss than the one they're trying to fix and so the last thing we would want is for the government to step in try to fix something and then create a bigger problem than the one they're originally trying to fix one of the problems with government it's a problem if we're thinking about fixing things it's not a problem in terms of how the government functions is that there's relatively long lags in terms of the government's ability to take action and those lags are built into the Constitution for a reason I can give you lots of examples where when the government acts very quickly people are not helped by that and so we have to be very careful about the government taking very swift actions sometimes there are times when that needs to happen sometimes you wish that they could take action quickly and they can't but for most cases we don't want the government acting very rapidly and those lags in government action make it hard to solve problems like this the lack of accountability makes it really challenging to get a government situation where you know the government is going to be able to do much of anything that's very effective one thing let's put in here now that I think about it I kind of skipped over one of the important ones that we need to think about sometimes the government just takes the monopoly over so let's put here after breaking companies up the government can take the monopoly over that's the case of the US Postal Service it's a
situation where the government took control of delivery of mail and so you can see why I mean if frankly I'm I worked at the post office for many years and I can tell you there are a lot of things there are a lot of hard-working people that work at the post office and they they do an amazing job of delivering the mail that they deliver after working there for many years it's it's kind of surprising to me that as many things get delivered as as actually do get delivered but the problem is that they're terribly inefficient and part of that is that there's a lack of accountability it's a government institution and so the government is not held to the types of accountability that a private firm would be held to so you have to be really careful with this taking something over there may be a time when that needs to happen but most of the time at least my personal opinion would be that's probably not the first thing we need to take a look at so these give you some some ideas about what the government can do a lot of times they do nothing and and you know maybe maybe dealing with that deadweight loss of monopolies is is fine we didn't talk about it but there really aren't that many cases of a good monopoly I gave you in when we were talking about the sources of barriers to entry we talked about DeBeers and we talked about Alcoa but there really aren't that many cases where there's what we would consider a pure monopoly I honestly I can't come up with what I would consider a perfect example of a pure monopoly so it's not that common but where it does exist it's going to create some deadweight loss what we want to do now is kind of clear this off and finish up by talking about some different pricing strategies we need to talk about what we're going to call price discriminate and how a monopoly might be able to make even more profit than it would if it charged only a single price so we'll clear this off and take a look at that let's talk about price discrimination now so 
price discrimination is the act of charging different consumers different prices for the exact same good and we'll talk about the conditions under which a firm can do this this is not illegal the firm can't price discriminate based upon a protected category like gender or race or anything like that but in terms of geographic location firms do not have to sell the same good on the west coast for the same price that they sell that good here around the Midwest so firms can price discriminate based upon age so there can be senior citizens discounts there can be student discounts there can be military discounts those are situations where the firm is selling the exact same product to different groups of consumers for different prices so let's think about why the firm would want to do this if the firm can price discriminate they can increase their profit so the idea here is that kind of the simplest way to think about different groups is let's think about two groups with different willingness to pay so we may have one group with a high willingness to pay and one group with a low willingness to pay so if we think about what's happening there if the firm is able to prevent the two groups from selling or buying the good from each other if the firm can sell to this group and sell to this group and be able to identify who's in each group then they can make more profit than if they just chose one price to sell to everybody so if we've got a monopoly that cannot price discriminate we would call that a single price monopoly but if we've got a monopoly that is able to price discriminate then we would call that a multiple price monopoly or a price discriminating monopolist so let's think of an example let's suppose that I draw two pictures here I'm going to have one group where we have a high willingness to pay and one group where we have a low willingness to pay for these pictures I'm going to simplify the
marginal cost and the average total cost the picture I erased over here we used a constant marginal cost I'm going to do that over here and I'm going to to also make it even more simple than that I'm going to assume that our fixed cost is zero if we assume that then our marginal cost and our average total cost will be equal so it's just a simplifying assumption we could do this with upward sloping marginal cost and and everything would be fine but this this allows us to focus on on the effect of the price discrimination a little bit better so these are going to be let's let me draw them a marginal cost in each so here's marginal cost equals average total cost in each picture price quantity now over in this picture let's let's put our low willingness to pay group no willingness to pay and then in this picture we'll put the high willingness to pay now remember that willingness to pay is represented by the height of the demand curve so let's draw the high willingness to pay group first that means that at every quantity their willingness to pay is going to be higher than this group so I'm gonna put my demand curve that the firm faces up relatively high let's put in our left picture let's put a relatively low willingness to pay remember the firm is not in control of the willingness to pay so this group has a demand curve it's below that groups demand curve now the firm remember isn't interested so here's the demand curve the firm's not interested in where the demand curve intersects the marginal revenue curve they're interested in where the marginal revenue curve intersects the marginal cost curve and so if we put our marginal revenue on here for each picture now we can figure out the quantity that they're going to produce and the price that they're going to charge so the firm will charge this or produce in for this low willingness to pay group this quantity and charge this price for the high willingness to pay group the firm will produce this quantity this is where 
marginal revenue equals marginal cost in this picture and charge this price so you can see that the monopoly would charge not surprisingly the high willingness to pay group a higher price and the low willingness to pay group they'll charge a lower price let's think about the situation that this firm would be in if they could only charge one price to everybody so if they can only charge one price to everybody they would look at the market demand curve that they face we could sum these two market demand curves together to get the market demand curve but let's make it more simple than that suppose they could only pick one of these two prices well if they picked the high price and they tried to charge that to everybody then none of these consumers would buy the good right it looks like there might be a couple people up there that might buy the good so if they choose this price they're not going to sell to most of these people down here these people just wouldn't buy it or they could choose this lower price and all of these people represented along this portion of the demand curve would buy the good and then these people over here many of them would want to buy it but this price would be too low over here they would be selling the good to these consumers for an inefficiently low price so if they choose one or the other prices they're just not going to make as much profit as if they were able to choose the right price for this group and the right price for that group but now here's the problem the firm has to be able to prevent what we call arbitrage so let's suppose what would happen if somebody in this group bought the good at this price and then was able to go over to people in this group and say hey I'll sell you the good for a price that's less than what the monopoly would charge you but more than what the monopoly charged me notice that if I were to extend this over here
there's a range of prices here that would work for these people and these people there's a price somewhere in the middle like that where it would make these people better off they're able to buy it at this price and sell it for that and it would make these people better off because they're able to buy it at this price and it would have cost them that if they bought it from the monopoly we call that arbitrage if you're able to buy at a discount and sell it to somebody else you're making some money off that you're arbitraging the difference in prices so for price discrimination to work the firm has to be able to prevent arbitrage they have to be able to prevent people in one group that are able to buy it at the discount from selling it to people in the other group that don't get the discount so let's think about some categories that the firm can price discriminate on so I mentioned a couple of them they can discriminate based on geography they can discriminate based on age so they can give senior citizens discounts they can give student discounts they can price discriminate based on income universities do this universities give different amounts of financial aid to different students that is classic price discrimination they're charging different students different prices to sit in the exact same classroom as everybody else and they're deciding on that price based upon the income of the student more typically the income of the student's parents but that's an example of price discrimination if we think about where we tend to see price discrimination take place it happens in markets where the arbitrage potential is low so if we think about it it's things like movie tickets let's say airline tickets it's a situation where consumers cannot I can't buy a ticket and sell it to you you wouldn't be able to get on the flight discount coupons we could think about financial aid I mentioned that we could also think about quantity discounts that's a little bit different
than what we're talking about here but a quantity discount something like Sam's Club or Costco that's a situation where if you're willing to buy in larger quantities they'll give it to you for a different price than they would charge other consumers who want to buy in smaller quantities it's a form of price discrimination so you can see that it happens it's not that common but you've probably gotten a student discount if you're a student you probably haven't gotten a senior citizens discount maybe a military discount things like that let's talk also about something that we call perfect price discrimination this is a very rare form of price discrimination but it results in something that's kind of unusual and interesting so let's talk about perfect price discrimination this is a situation where the monopolist charges each consumer their maximum willingness to pay so each consumer is charged their maximum willingness to pay so the monopoly would have to know your maximum willingness to pay so you can see that this is very rare it would be rare for the firm to know what your maximum willingness to pay is and I can tell you as a general rule it's never in your best interest to disclose your willingness to pay if you were to walk on to say a used-car lot the first question that they're gonna ask you if they're good at their job is what are you looking to spend today if you answer that question honestly you probably deserve to get taken advantage of don't tell them what you're willing to pay I mean if it were me and they asked me what's my maximum willingness to pay I would turn it around and say hey you know what I'm interested in what's your minimum amount you'll take so don't ever disclose your willingness to pay that just takes all of your bargaining ability away so it's rare for the firm to know what your maximum willingness to pay is but let's suppose they do and I can tell you
that you know to get financial aid from a university you have to disclose pretty much your entire or your parents entire financial condition and so that's a situation where the firm the university has a pretty good idea or at least they have data that they can use to estimate your willingness to pay they get a pretty good idea of what your willingness to pay is with that so let's suppose the firm does know your maximum willingness to pay so let's look at the situation that this firm is going to be in so let's put up here let's let's also use this constant marginal cost thing just to make it easy marginal cost equals average total cost so we're assuming fixed cost equals zero again not important that you understand even why that results in this not not in this class let's put the market demand curve up here here's the demand curve now normally what we would do we draw the marginal revenue curve but let's think about what's happening here if the monopoly is able to charge each consumer their maximum willingness to pay and we recognize the fact that all of our consumers are represented along this demand curve and those with the highest willingness to pay are up here those with the low willingness to pay are down here think about this consumer with the highest willingness to pay the firm is going to charge them that price and then if we think about the next consumer with a little bit lower willingness to pay the firm is going to charge them that price and then the firm's going to charge this consumer that price and this consumer that price so there's not just one price there's not two prices there's a different price for every consumer what that means is the price is always found along the demand curve which means our demand curve and our marginal revenue curve once again are the same curve because the additional revenue that the firm gets from an additional consumer is found on the demand curve because it represents the price that consumers going to pay so now if we 
look now that we know that the demand curve and the marginal revenue curve are once again the same curve marginal revenue equals marginal cost right there there is the quantity that the perfectly price-discriminating monopolist will end up producing they will not charge one price we can't identify any one price because every consumer is charged their maximum willingness to pay so if we think about what's happening here notice that if this was a perfectly competitive market that would be what we would call Q star that is the quantity that would be produced with perfect competition so with perfect price discrimination with a monopoly that can do this we end up with the same outcome in terms of quantity that we get with perfect competition what that means is there's no deadweight loss in this market so with a perfectly price-discriminating monopolist deadweight loss is equal to zero if all we're interested in is total surplus this is as good as perfect competition but now if you're a consumer in this type of market think about what consumer surplus is here if each consumer is charged their maximum willingness to pay consumer surplus is equal to zero all of this area under the demand curve and above the supply curve which if this was a competitive market that marginal cost curve would be the supply curve all of this area is total surplus so total surplus is maximized but it's also all producer surplus because consumer surplus is going to equal zero so if we go back to our argument where we talked about the efficiency of markets and the benevolent social planner the benevolent social planner who would be taking the point of view of society as a whole would not have any problem with this the benevolent social planner would say you know what in terms of total surplus that's as good as perfect competition the problem is it would be bad to be a consumer in that type of market because consumer surplus is equal to zero consumers still get the
good it's just that they get no consumer surplus it would be great to be the monopoly in that type of situation because producer surplus is huge it all goes to the monopoly and clearly the monopoly knows everything about your willingness to pay so it's not surprising that you end up getting the short end of the stick as the consumer there are other types of price discrimination and if you went on in a higher level economics class we would talk about different types of price discrimination but this gives you a good idea of kind of the two main forms of price discrimination perfect price discrimination and then the type of price discrimination where they can break consumers up into a high willingness to pay group and a low willingness to pay group but this should give you an idea of what happens with this far end of the competitive spectrum where there's no competition what we're going to do in the next couple of videos is we're going to talk about what comes in between those extremes monopolistic competition and oligopoly so I'll see you in those videos
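The arithmetic behind perfect price discrimination with a linear demand curve can be sketched in a few lines of Python (the specific demand intercept and slope here are illustrative assumptions, not figures from the lecture):

```python
# Illustrative sketch: linear demand P = a - b*Q with constant marginal cost c.
# The numbers are assumed for demonstration.
a, b, c = 120.0, 1.0, 0.0

# Competitive quantity: price equals marginal cost, so a - b*Q = c.
q_competitive = (a - c) / b

# Total surplus is the triangle under demand and above marginal cost up to Q*.
total_surplus = 0.5 * (a - c) * q_competitive

# Under perfect price discrimination each buyer pays their willingness to pay,
# so the monopolist produces the competitive quantity and captures the entire
# surplus as producer surplus.
producer_surplus = total_surplus
consumer_surplus = 0.0
deadweight_loss = total_surplus - (producer_surplus + consumer_surplus)

print(q_competitive, total_surplus, consumer_surplus, deadweight_loss)
# → 120.0 7200.0 0.0 0.0
```

This matches the lecture's conclusion: the quantity equals the perfectly competitive quantity, deadweight loss is zero, but all of the surplus goes to the producer and none to consumers.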
Principles_of_Microeconomics
Chapter_17_Oligopoly.txt
in this video we want to talk about the fourth type of market that we're going to discuss and that's oligopoly and oligopoly means a few firms so anything more than one firm (one firm would be a monopoly) would qualify as an oligopoly let's just so that we know kind of where this all fits in we've taken a look at this several times but let's think about perfect competition down here that's the first one we studied perfect competition we talked about monopoly a situation where there's only one firm so those are the two ends of the extreme and then we've talked about monopolistic competition that is basically perfect competition with differentiated goods now what we're going to do is we're going to talk about a market structure oligopoly that's closer to this end of the spectrum so as soon as you're not a monopoly as soon as there's more than one firm it becomes an oligopoly and then at some point it's not really a smooth continuum necessarily but down here to this end we've got monopolistic competition and perfect competition for now let's take a look at some of the things that we've seen in those different types of markets and let's just kind of compare what we saw we've seen that with perfect competition price ends up being equal to marginal cost perfectly competitive firms charge you a price that's equal to their cost of production we also saw that price gets driven to equal the minimum average total cost and we talked about the fact that there is productive and consumption efficiency here deadweight loss is equal to zero with perfect competition that's really the type of thing that we were thinking about back when we were talking about the efficiency of markets and the fact that a free market maximizes total surplus we talked then about monopoly and with monopoly we saw that price is greater than marginal cost so a monopoly charges you a price that's greater than their cost of production we saw that price was not driven to equal the
minimum average total cost now notice that what this means is that profit ends up being equal to zero for perfectly competitive firms for a monopoly as long as there's a strong demand for their product the profit that the monopoly earns can be positive in the long run and we saw that there is deadweight loss deadweight loss is greater than zero then we talked about monopolistic competition with monopolistic competition we saw that price ends up being greater than marginal cost so that's the same as what we saw with monopoly we saw that price is driven to equal average total cost but it's not the minimum average total cost we saw that a monopolistically competitive firm actually produces with excess capacity they do not produce at their efficient scale but we saw that free entry drives profit to equal zero so price is greater than marginal cost with a monopolistically competitive market which is like monopoly but profit gets driven to zero in a monopolistically competitive market which is like competition and then there's deadweight loss with this type of market deadweight loss is positive but remember that since profit gets driven to zero it's really challenging to try to think about any type of regulation on these types of firms any regulation that forces them to choose something other than their profit maximizing price and quantity would cause their profit to be less than zero they'd exit the market in the long run so there's deadweight loss but it's deadweight loss that we're just willing to live with and the silver lining is that in this type of market there's lots and lots of product variety and there are continually new advances in products and quality oftentimes is increasing and so there are some really good things that happen here that cause us to not worry too much about that deadweight loss now we're going to talk about oligopoly and let me just tell you right off the bat that this discussion is going to be a little different than the
discussions we've already had in the sense that it's really hard to draw a picture of what an oligopoly looks like because every different oligopoly is going to look a little different and so we're not going to focus so much on pictures of cost curves and profit maximization although those firms will be doing that we're going to focus more on some game theory stuff and we're going to think about strategic interaction between firms but we can say that with oligopoly price is going to end up being greater than marginal cost oftentimes it kind of depends on the type of oligopoly that we're talking about but with oligopoly there will be barriers to entry and what we'll see is that there are times when oligopolies can make a positive profit in the long run and there's going to be deadweight loss that's greater than zero with an oligopoly so you can see that in terms of the outcome for consumers and the outcome for society an oligopoly looks very similar to what we see with a monopoly all these firms maximize profit by producing the quantity where marginal revenue equals marginal cost it's just that with oligopoly we take a little bit different approach when we think about kind of how to analyze the market so let's think about the characteristics of oligopoly so we've got oligopoly the characteristics the first one is that there are a few firms so we're not going to define what we mean by few it could be two it could be five more than one and not nearly as many as there would be with perfect competition or monopolistic competition there are a few firms and when you have a few firms each firm is going to have a relatively big impact on the market so in those other types of markets we would describe the firms as being small compared to the size of the market with an oligopoly we would describe the firms as being big compared to the size of the market each firm is going to have a big impact on the market the second characteristic and this one we can choose a lot
of different ways to go with this one but we're going to say little product differentiation little product differentiation as a matter of fact we're going to think about the firms as selling identical products we can have an oligopoly where the firms sell identical products we could have an oligopoly where the firms sell differentiated products and each different type of oligopoly is going to act a little bit different it's going to behave a little bit different so for what we're doing let's just kind of assume that the goods are identical if they're not identical they're really close there's very little product differentiation so let's just say here almost identical and then the final characteristic is that there are barriers to entry they will not be strict barriers to entry there can be oligopolies where there can be some entry but there's going to be a barrier so for example a barrier to entry that would not be a strict barrier to entry would be say the market that I'm in so to be a professor you've got to have a terminal degree you need to have a PhD and it doesn't matter if any of my students could teach economics better than me I don't have to compete against them not at this moment because they don't have the degree they could go get it it would take a few years and some effort but you could get the degree and then you could come and compete against me for my job but there's a barrier to entry in that particular job the medical industry any type of licensing those all would be examples of barriers to entry but they're not strict barriers to entry so let's think about some examples of oligopolies so there are lots of them soft drink industry characterized by basically two big companies Coke and Pepsi there are some other smaller companies but the vast majority of the market belongs to two companies we could think about airlines we can think about cellphone service or we could think about something like satellite radio or satellite
television we could think about the car industry automotives we could think about the crude oil industry that's an oligopoly the gasoline industry isn't that's close to a perfectly competitive market but the crude oil industry is an oligopoly so you can see there are several examples I would say that oligopoly is probably the second most common type of market monopolistically competitive markets most common very common oligopoly there's a good amount of that perfect competition not very much monopolies not very much I would say monopolies are probably the rarest form of market that we're thinking about it kind of depends on how we define the market if we were thinking about let's say the market for gasoline in a big city the market for gasoline is perfectly competitive but if we were talking about a smaller town that only had two or three gas stations at least the gas market within that town would act like an oligopoly and if we had a small town with only one gas station well it would act a little bit like a monopoly but people can drive to other towns to buy gas so it kind of depends on how we define the market so those are some examples of oligopoly the key thing about oligopoly that's different from any of the other types of markets that we're going to think about is we're going to think about strategic interaction we're going to think a lot about strategic interaction between the firms that is not something that we thought about with any of the other types of markets so with perfect competition the firms don't care what the other firms are doing right if you're a gas station owner and you're trying to decide how to figure out what price to charge or what quantity to produce you don't have any control over the price you respond based upon what the market does to the price now the incentives that you face are going to cause you to push your price in a particular direction or the other depending upon what the market is doing but you don't care what
the other gas stations are doing because the only thing that puts dollars in your pocket is what happens at your pumps okay so there's no strategic interaction that's happening with perfect competition the rule for a perfectly competitive firm is produce the quantity where price equals marginal cost there's no strategic interaction for a monopoly and it's trivial because there's no other firms with a monopoly so the monopolist doesn't have to worry about other firms with monopolistic competition there's no strategic interaction the firms are all small compared to the size of the market they may have an incentive to try to market themselves to try to increase the residual demand curve that they face but that's not what we're talking about here okay it's not until we get to oligopoly where the firms have to think about what the other firms are doing and the reason they have to think about what the other firms are doing is that the decision of any one other firm is going to have a big impact on the market these firms are large compared to the size of the market and so they have to think really carefully about what the other firms are doing that makes their decision much more challenging so here's the thing that makes running an oligopoly difficult the thing that makes it difficult is you don't know what demand curve you face all those other firms do know what the demand curve that they face looks like but an oligopolist doesn't it depends on what the other firms do so they have to try to decide what the other firms are going to do before they can figure out what they're going to do okay so the decisions of any one firm have a big impact on the market let's just say that in an oligopoly the firms are large compared to the size of the market and that's very different from any of the other three types of markets that we've thought about what that results in is they face uncertainty about the demand curve they face they are uncertain
about the demand curve they face that makes it really challenging now that may not make perfect sense to you but in order to understand it we need to work through a few examples and once you start to see what's going on here I think you'll understand why they're not sure what kind of demand curve they face when we have a situation like this where we need to study the strategic interaction between two people or between two firms or between people and a firm we use what's referred to as game theory so game theory is just kind of a sub-discipline within the field of economics and game theory is just the study of strategic interaction within game theory the way we're going to analyze things we're going to have a game so the strategic interaction we will refer to as a game although it doesn't have to be a fun game so let's suppose you and I are standing facing each other and you've got a gun to my forehead and I've got a gun to your forehead well that we would describe as a game not a fun game the decision I make depends on what I think you're going to do right so think about that situation if I knew beyond a shadow of a doubt you were going to pull the trigger then it's in my best interest to pull the trigger first on the other hand if I knew beyond a shadow of a doubt that you weren't going to pull the trigger then it's in my best interest to not pull the trigger because if I know you're not going to and I do that's murder and you face the same set of incentives now neither of us knows exactly what the other person is going to do so that's a strategic interaction and we're going to have to in a split second make a decision about what we think the other person is going to do and we could be right we could be wrong we would describe that as a game clearly not a fun situation to be in the people who are involved in the game we would call the players the options that the players have we would call the strategies and
then there's going to be some outcome of the strategy and we would call those the payoffs so the payoff could be a good thing it could be a bad thing okay so in game theory it's the study of strategic interaction using games players the players are going to have strategies each player is going to choose their strategy and then we get to observe the outcome and each player is going to get whatever payoff corresponds to that outcome so now let's kind of start to illustrate the strategic interaction between firms with a kind of a simple market type example where we've got a couple of firms and let's suppose that these two firms are selling water okay so suppose we have what we're going to call a duopoly a duopoly is two firms each of these firms is going to be selling water and just to make things simple we're going to assume that their marginal cost of production is zero we don't have to do that but that makes this example simple and we want it to be a simple example because we want to focus on the strategic interaction not so much the cost stuff so we'll talk here in a little bit about what happens if we make the marginal cost something other than zero so quite literally the way this is going to work is that we have two firms in the morning each firm has to decide how much water they want to pump out of the ground and take to town and the pumping of the water and the taking it to town costs them nothing okay once they get to town they're going to sell their water and they can't bring more water once they've gotten to town now here's the problem for the firm the price of water when they get into town depends upon how much water I bring as a firm and let's suppose you're the other firm the price of water is going to depend on how much water each of us brings so if I know you're going to bring a lot of water then I know the supply of water is going to be high that day and the price is going to be low and I might not
want to bring very much water if I know you're going to bring a lot on the other hand if I know you're not going to bring very much I might want to bring a lot because I know by you not bringing very much all other things equal that's going to cause the price to be higher so how much water I bring depends on how much water I think you're going to bring and you face the same dilemma you don't know how much I'm going to bring once we get there we'll find out how much we've both brought but by then it's too late to change what we did okay so that's the situation every day the two firms bring water to town and they sell it to the people in town let's pretend as if the people in town don't have any other alternative source of water they have to buy it from one of the two firms now let's think about what the town's demand schedule looks like so let's put over here quantity and let's make our quantity go from zero up to a hundred and twenty by tens so we're gonna go zero 10 20 up to 120 so there's our quantity now let's think about the price and what we're doing here is we're just tracing out a downward sloping demand curve I want this to be a very simple downward sloping demand curve so I'm going to start it at 120 I'm going to go down by 10 each time down to zero so if we do that it's going to look like this so there's a downward sloping demand curve if we were to draw that let me draw it right down in here if we were to draw that demand curve and price up here quantity down here at a price of a hundred and twenty quantity zero so right up here is a hundred and twenty and then it goes down by ten each time and over by ten each time so the slope is negative one so down here by the time we get to a price of zero quantity is a hundred and twenty so here's what that demand curve looks like right it's just a basic demand curve it's downward sloping like any other demand curve we've worked with so there's the town's demand schedule let's figure out what total revenue looks 
like total revenue now let's think about this assumption that we've made by making it costless to produce water we are making total revenue equal to profit okay we don't have to do that but it just is more complicated if we don't do this so that's why we're making this assumption so all we have to do is take our price times our quantity and that gives us our total revenue which will be profit well if they don't sell any water because the price is 120 then they make zero total revenue the rest of these look like this there's what profit looks like so now let's think about what would happen under a couple of different circumstances we've set this up as if it's two firms we're both going to decide how much water we bring and then once we get to the market we'll figure out what the price is and then we'll know what our profit is so for example let's suppose you bring 30 gallons and I bring 30 gallons well if you bring 30 and I bring 30 then the total quantity that day is going to be 60 it's going to result in a price of 60 and the total profit is going to be thirty six hundred each of us is going to get half of that profit which is 1800 and the way that we're figuring that out is if we both bring thirty then the total quantity is going to be sixty the supply curve that day is perfectly inelastic because the quantity of water is whatever we brought that day so there's the supply curve if we look at a quantity of 60 in that supply curve it tells us the price is 60 so that's all we're doing if on the other hand let's suppose that you brought 40 and I brought 40 well if you bring 40 and I bring 40 the total quantity is going to be 80 the supply curve would be shifted to the right in that case the total quantity is 80 and so it's going to drive price down to 40 so that's how we figure out what the price is going to be and what our profit is going to be if we both bring 40 the total quantity is eighty and it drives the price down to forty then
we're going to split thirty two hundred dollars that's sixteen hundred dollars each right so that's how we've set this up now let's suppose that we think about a couple of extremes we want to know what the likely outcome is for these two firms the way we're going to figure this out is we're going to think about what the outcome would be if there was just one firm and then we're going to think about what the outcome would be if this was a perfectly competitive market so let's start by thinking about the outcome if this was a monopoly well if it's a monopoly then there's no uncertainty if it's a monopoly we know exactly what the demand curve looks like and we would maximize profit by producing a quantity of 60 the price will end up being 60 and we would end up with $3,600 now let's look at how we know that okay so our demand curve looks like this it's 120 and 120 there's what the demand curve looks like let's think about what the marginal revenue curve looks like the marginal revenue curve has the same intercept as the demand curve and twice the slope so it's going to come down here like that if it has twice the slope then it's going to hit right here at 60 now we've got our demand curve and we've got our marginal revenue curve in order to figure out the profit maximizing decision for a monopoly we need the marginal cost curve but the marginal cost curve is zero marginal cost lies right on the horizontal axis right down there there's our marginal cost curve so we look where marginal cost equals marginal revenue which happens right there the firm will produce 60 gallons it gets its price from the demand curve and if we look at a quantity of 60 gallons the price is 60 there's the outcome for a monopoly it would earn a profit of $3,600 and it gets to keep all of that because this is the only firm in town there's the monopoly outcome and of course it's going to make as much profit as it's possible to make it's good to be the monopolist now let's think about what
would happen if this was perfect competition so if it's perfect competition then price equals marginal cost in a perfectly competitive market we've already seen that if it's perfect competition then price is equal to zero if price is equal to zero the quantity would be 120 so price equals zero quantity equals 120 and profit would be equal to zero just like we would expect to see in a perfectly competitive market and that shouldn't be surprising because the way a perfectly competitive market works is that we would simply look at the intersection between the demand curve and the supply curve and the supply curve which is the marginal cost curve lies right on the horizontal axis right there and so we're looking at that intersection and that happens at a quantity of 120 and a price equal to zero there's no profit to be made in a perfectly competitive market and you might ask well gosh that doesn't make any realistic sense because you're making zero dollars here it doesn't make sense for the pumping of water to be costless I get that we could make marginal costs positive we could make marginal costs ten dollars and then the price would end up being ten dollars but remember profit being equal to zero means that all costs are covered okay explicit and implicit so this is the perfect competition outcome so we know that if we had a monopoly it would be this outcome and if we had a perfectly competitive market it would be this outcome we know that oligopoly lies in between monopoly and perfect competition so we would expect our oligopoly outcome to be somewhere in this range and probably closer to the monopoly outcome than the perfect competition outcome so without knowing exactly what's going to happen we would expect our oligopoly outcome to be somewhere probably up in this area okay now you may already be thinking that one thing that could happen is that if you own a firm and I own a firm we could in secret get together and agree to
both bring 30 and then end up getting this monopoly outcome we could collude with each other if two firms are colluding we call that a cartel so one possible outcome is for the firms to collude with each other and form a cartel it turns out that forming a cartel is unlikely there's going to be an incentive to do that but there's also going to be an incentive for the firms to break any agreement that they formed and one of the reasons that it's going to be challenging is they would have to decide first how to split output and they'd have to decide how to split the profits so let's say for example you own a company one of the water companies and I own the other one and we're sitting down in secret to talk about colluding with each other we can't do it out in the open because it's illegal to collude so we're sitting down in secret and I say you know what look I'm an economics professor I know more about this than you do so I should get a bigger share of the profit and you might say well you don't have to know anything about economics to know that this is the best possible outcome so maybe we should split it equally and if we start to bicker about how to split the profits then the collusive agreement might break down there's another reason why we would expect not to see a collusive agreement and that is that even if we do form an agreement that agreement itself contains the seeds of its own self-destruction and I'll explain what I mean by that let's actually what I want to do is clear off one half of this and we'll work the rest of this out over here let's suppose that you and I form a collusive agreement so in secret we get together and we say okay look here's what we need to do we need to each bring thirty gallons and if we each bring thirty gallons the total quantity is going to be sixty the price will be sixty we're going to make a total of thirty six hundred dollars of profit and we'll split that
down the middle so your profit is going to be the thirty gallons you bring times the price of $60 which is eighteen hundred dollars mine will also be 30 gallons times the price of 60 which is eighteen hundred dollars the eighteen hundred two times is the thirty-six hundred that's the best we can do so let's suppose we agree on that we shake hands okay here's the problem with that so firm A let's say this if firm A expects firm B to bring 30 gallons which is the agreement that we have come to you expect me to bring 30 I expect you to bring 30 well let's suppose I'm firm A if I expect you to bring 30 then I can reason this way firm A can reason this way I could bring 30 that's one option I can abide by the agreement I can bring 30 that means that the total quantity equals 60 the price equals 60 and my profit is going to equal $1,800 that's the agreement or I can bring a little bit more than that I can bring 40 think about what happens if you bring 30 but I show up with 40 so if I bring 40 now while you bring 30 the total quantity will be 70 if the total quantity is 70 then that's going to drive the price down to 50 price goes to $50 but notice what that does for me if I bring 40 gallons and I sell each gallon for $50 my profit is all of a sudden $2,000 so if I expect you to bring 30 it's in my best interest to actually bring a little bit more than that bring 40 because I can get some profit above and beyond what I could have gotten if I abided by our agreement now notice what your profit is going to be this is profit for me this is firm A's profit now your profit the profit for firm B if you're bringing the 30 and selling it for $50 each your profit falls to 1500 so you get hurt by me breaking the agreement but I walk away with $2,000 that's $200 more than I would have gotten if I had abided by our agreement here's the catch you can reason this way also firm B can reason the same way so if you
expect me to bring 30 then it's in your best interest to bring 40 if you bring 40 while I bring 30 then the total quantity will be 70 the price will be driven down to 50 but your profit will be 2000 mine will fall to 1500 now if both of us are reasoning this way you're sitting at home tonight and I'm sitting at home tonight and I'm thinking oh they're gonna bring 30 so I'm showing up tomorrow with 40 and you're sitting at home tonight thinking oh no Doctor Azevedo is gonna bring 30 so I'm showing up tomorrow with 40 then the likely outcome is that we both show up tomorrow with 40 likely outcome we both bring 40 now let's think about the situation that leaves us in if we both bring 40 then the total quantity will be 80 if the total quantity is 80 then that means the price is going to be driven down to 40 and notice that in that case the profit for you is going to be the 40 gallons that you brought times the price of 40 that's $1,600 and the profit for me will be the 40 gallons that I brought times the price of 40 which is $1,600 notice that the likely outcome with a duopoly is right there both of us bringing 40 total quantity of 80 the price is 40 this is the likely outcome for a duopoly notice that the likely outcome results in a total profit equal to $3,200 which is not as good as the profit for us would have been had we stuck to the agreement so you might think well okay so if we show up tomorrow each with 40 and we both walk away with $1,600 we can do better than that we can do better than that if tomorrow we bring 30 if we both bring 30 then we'll walk away each with $1,800 so let's form an agreement you bring 30 and I'll bring 30 and then we're right back to this if I think you're gonna bring 30 I'm probably going to bring 40 because I can walk away with $2,000 so the collusive agreement contains the seeds of its own destruction and here's what it boils down to once we form the collusive agreement it
removes uncertainty if I know what you're going to do I can exploit that to my advantage and if you know what I'm going to do you can exploit that to your advantage so the agreement itself creates an opportunity to do better than the agreement for each player and the likely outcome is that you won't be able to form the agreement people are going to have a strong incentive to break the agreement notice that what we're seeing here is that there is oftentimes a difference between doing what's in your own best interest versus doing what's in the best interest of an organization or in this case the two players if we think about the two players as a team then what's in the best interest of the team is to act like a monopolist but if we're going to act like a monopolist then each player has to ignore what's in their own best interest because what's in their own best interest is to break the agreement and go walk away with $2,000 there's conflict there and that's true about a lot of things out there in the real world it is not the case that what is in your own best interest is also in the best interest of the organization and we can very easily see this in sports contests or we can see it in firms what's in the best interest of an individual baseball player is not always what's in the best interest of the team so sometimes for a team to win the players have to put their own individual best interest aside the problem with that is that that can hurt them in the long run and so they have this conflict it happens with working situations what's in the best interest of the business is not always what's in the best interest of the individual worker and so sometimes we see workers engaging in behavior that is in their best interest but not in the best interest of the business and it boils down to this there's conflicting incentives and you have to make a decision what's most important to me doing what's in the best interest of you and I together or
doing what's in my best interest so this really demonstrates the uncertainty that an oligopoly faces they don't know exactly what the other firm is going to do and that makes their decision harder they have to try to guess about what the other firm is going to do so what we want to do now is clear this off and then we're going to talk about a little bit different type of game that we call the prisoner's dilemma let's talk about one of the most basic games in game theory and that's a game that we call the prisoner's dilemma and you've probably actually seen the prisoner's dilemma played out maybe in a movie or TV shows you'll recognize it as soon as we start to talk about it so here's how the prisoner's dilemma works let's suppose there are two criminals and they get caught in say a minor crime so two criminals get caught engaged in a minor crime and they're caught red-handed the police have all of the evidence they need to convict them of this minor crime but let's suppose that the police suspect them of a major crime and they want them to confess so police suspect them of a major crime and want a confession so maybe these people get pulled over and they've got some drugs in the car or something or maybe they're pulled over and they're speeding on the highway and so they've got these people for this minor crime but in the trunk of the car they see things that would be used for robbing a bank and let's suppose there was a bank that had been robbed and they want them to confess to this major crime so let's think about how they get them to confess most people kind of take this shortcut to thinking and most people would argue that the police get confessions by somehow beating it out of criminals and I'm not going to say that's never happened I'm positive that that's happened but there's a much more elegant way to do it and the way that you do it is you understand the fact that criminals face conflicting incentives there's a set of incentives for the
team of the criminals the pair but there's also an individual incentive that each of them is looking out for individual self-interest and so if you can put them into a dilemma where they choose their own individual self-interest over the team's interest then you can get a confession so here's how the prisoner's dilemma works the first step you probably already know it you separate them and you put them in a dilemma and here's what the dilemma looks like you tell each of them separately right now we can lock you up for let's say a year right now we've got all the evidence we need to lock you up for a year say one year but if you'll confess and implicate your partner then we'll let you go free and your partner goes to jail or to prison for let's say twenty years so if you confess and implicate your partner you go free and your partner gets 20 years but you need to do it quickly because if everybody confesses if you confess and your partner confesses then we don't need your testimony everybody confessed and we'll lock you up for say eight years so if you both confess you'll get eight years so you better do it real quick before your partner confesses because if your partner confesses first you're gone for 20 years okay the beauty of this is it puts them in a real dilemma and we need to understand how to analyze this dilemma the way that we're going to do this if we want to understand the likely outcome of this game is we have to create what I'm going to call a game matrix so each of the players has two strategies there's two players each criminal is a player they have two strategies they can confess or they can remain silent and then depending upon the combination of who confesses and who remains silent we get to observe the outcome or their payoff so let's create what I'm going to call this game matrix it's going to be a box I'm going to divide it up into four cells here and I'm using the word cells to refer to it like you would in
matrix algebra not the criminal version of cells so let's put over here on one side let's put criminal two over here and let's put criminal one up here and then these two rows right here are going to correspond to criminal two's two strategies let's put confess here and remain silent and then criminal one has the same two strategies let's put confess here and remain silent now what I'm going to do is I'm going to divide each of these boxes in half this way and we're going to put the payoffs in the box and we're going to put criminal one's payoff up in the top right hand corner of each box and criminal two's payoff in the bottom left hand corner of each box so let's start with this one right now we can lock you up for one year so if neither of them confesses then both of them remain silent that puts us down in this box right down here where criminal two remains silent and criminal one remains silent each of them gets one year so we're going to say one year for criminal one and one year in prison for criminal two okay the other outcome let's do this one next if they both confess they each get eight years well if criminal two confesses we're in this row if criminal one confesses we're in this column so if they both confess we're right here and they're both going to get eight years so there's the payoff for criminal one here's the payoff for criminal two so now if that was the extent of the game then it's easy to solve right if that's the extent of the game then both of them remain silent and they get one year and we all know that the unwritten code of being a criminal is that you deny everything and stick to your story you don't confess everybody knows that it's not in the best interest of the criminals to confess because if they confess they get eight years if they remain silent they get one year but it's these off-diagonal elements that are going to be the fly in the ointment for each of the criminals it's this one
right here that causes the dilemma for them so now if you confess and implicate your partner you go free so let's think about how that looks if criminal one confesses and criminal 2 remains silent then criminal one is going to get zero years and criminal two is going to get 20 years if criminal one or excuse me if criminal two confesses and criminal one remains silent then criminal one is going to get 20 years and criminal two the one who confessed gets 0 years now we've got the game matrix put together and now we need to think about how do we analyze this game matrix so the best outcome in terms of the team of criminals is this one right there if they could collude with each other and both agree to remain silent that's the best outcome that would correspond to the problem we had over here with the monopoly outcome now that problem we had over here was a much more complicated problem because there each firm had more than two strategies so if we were going to put the game matrix together for that two firms selling water problem that would be complicated in any problem I would expect you to be able to solve on a homework or a test we're gonna have two players and two strategies and you will analyze it this way if you were to go on and study game theory and we've got a game theory class here at UCM and it's a very popular class you would do more complicated games than this one the field of game theory is fascinating and I would encourage you to take that class because a lot of people really enjoy it so let's think about how to solve this problem the way that we're going to solve this is we need to think about let's write first the best outcome for the team or for the pair best outcome for the pair is to remain silent what we're going to see is it's going to be very very very difficult to achieve that outcome and the likely outcome is that they end up confessing but let's figure out how we understand that so here's the way that we're going
to do this we want to take the point of view of the criminals one at a time so let's take criminal one's point of view criminal one they've got two options and let's think about what the other criminal can do from criminal one's perspective criminal two could confess or could remain silent so if criminal two confesses let's think about what's in the best interest of criminal one so if criminal two confesses then we're in this row right here criminal one is going to either get eight years in prison or 20 years in prison they'd rather have eight years in prison so if they know that criminal two is going to confess it's in the best interest of criminal one to confess so if criminal two confesses I should confess criminal two might not confess criminal two might remain silent so if criminal two remains silent well in that case we're in this row if criminal 2 is going to remain silent then criminal 1 is either going to get 0 years or one year and they'd rather have 0 years so if criminal one knows that criminal 2 is going to remain silent it's in criminal one's best interest to confess I should confess notice that it's always in criminal one's best interest to confess this is what we call a dominant strategy so confessing is a dominant strategy for criminal one a strategy is dominant if it's always the best strategy regardless of what the other player does let's do this for criminal two let's take criminal two's perspective and we're going to see that it works out exactly the same way so let's say if one confesses so now we're taking criminal two's perspective if one confesses then we're in this column that means criminal two is either going to get eight years or 20 and they'd rather have the eight so if they know that one is going to confess criminal two should confess but criminal one might not confess criminal one might remain silent so if one remains silent in that case now we're in this column and criminal two is either going to get 0 years or one
year and they'd rather have 0 years in prison than one so if criminal 2 knows that criminal 1 is going to remain silent criminal 2 should confess I should confess so notice that confessing is a dominant strategy for criminal two confessing is a dominant strategy for both of the criminals the likely outcome of this game is that they both confess and both spend eight years in prison even though they know that it's in the best interest of them as a team to remain silent to deny everything and stick to your story the likely outcome is that they confess and you don't need to beat it out of them you just need to understand the prisoner's dilemma so let's think about a couple of characteristics of this game we're going to think about the likely outcome both confess that doesn't always happen we'll talk here in just a second about the conditions under which we might expect them to be able to both remain silent to get that good outcome there are some times when that happens and notice also that we're making not so much a probability statement but we would never say that this is the guaranteed outcome every prisoner's dilemma situation would be a little different from every other one but what we're saying is that unless we have other information we would predict that the likely outcome is that they both confess now that's a bad outcome from their perspective but it is what we call the Nash equilibrium so this is a Nash equilibrium let me give you the definition of a Nash equilibrium and I'm just going to say it so that you can pause the video and then you can write it in your notes but a Nash equilibrium is a situation where given the strategies of the other players no player regrets their strategy given what the other players did no player regrets their strategy now let's think about why this is a Nash equilibrium while these other three outcomes cannot be a Nash equilibrium so let's suppose the
game has been played we have observed that let's say you're a criminal and I'm a criminal I observed that you confessed you observed that I confessed we don't like that because we're gonna spend eight years in jail but the question is do either of us regret our strategy do you wish that you had remained silent because if I had confessed and you remained silent you would be in here for twenty years and I wouldn't be in here and so you don't regret your strategy and I don't regret my strategy knowing that you confessed I'm glad I confessed because if I would have remained silent you would be gone out of here and I would be in here for twenty years instead of just eight so this is a Nash equilibrium let's think about this one so let's suppose that both of us are in prison for a year and we both remained silent do we regret that strategy well I regret my choice if I would have known you were actually going to remain silent I would have rather confessed because then I wouldn't be in prison at all so I do regret my strategy and you would regret your strategy knowing that I had remained silent it would have been better if you had confessed so this is not a Nash equilibrium if you go through the process I just went through you'll see that neither of these are Nash equilibria either so we've got a situation where this is the Nash equilibrium though it's not a good outcome for the players that are involved okay so knowing that something is a Nash equilibrium doesn't mean it's necessarily a good outcome whether it's good from society's viewpoint depends on the viewpoint you're taking but it's not good from the viewpoint of the players involved now let's talk about this dominant strategy stuff this is how you solve one of these problems if I give you a problem and it would be set up like this you put the game matrix together and then you figure out whether or not anybody has a dominant strategy if you were to see that there's not a dominant strategy then here's what it looks
like if criminal two confesses I should confess if criminal two remains silent I should remain silent that's what it would look like if there was not a dominant strategy these two things would be different if that's the case then you would not be able to predict a likely outcome unless you had some more information okay so if I were to give you a problem and you go through this process and you realize that neither of them have a dominant strategy then you would just say there's no dominant strategy or I can't predict a likely outcome but if one or both of them have a dominant strategy then they're likely going to play their dominant strategy and then you can figure out what the likely outcome is going to be so not all games have a Nash equilibrium some of them have multiple Nash equilibria any game I would give you if it's going to have a Nash equilibrium it would have just one let's think about when the criminals might be able to cooperate so when is cooperation more likely when is cooperation I think I forgot how to spell cooperation there cooperation more likely we don't always observe the criminals confessing sometimes there are situations where they do remain silent and so we need to think about when is cooperation more likely well one of the things that makes cooperation more likely is if this was a repeated game so if it's a repeated game then it's more likely that the players involved in the game will learn from the game so let's go back to that two firm example where we're selling water and let's suppose we're having a hard time cooperating we know that the best outcome is for both of us to bring thirty gallons but day after day we bring 40 then one thing that we could do is I could pull you aside and I could say look here's what's happening we're both bringing 40 but we would both be better off if we bring 30 how about tomorrow you bring 30 I swear I'll bring 30 and let's just see what happens
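The dominant strategy test just described is mechanical enough to check in a few lines of code. This sketch is added for illustration and is not part of the lecture; the payoffs are the prison sentences from the matrix we built, and fewer years is better:

```python
# Prisoner's dilemma payoffs from the lecture, written as
# years[(my_action, partner_action)] = years in prison for me.
# The matrix is symmetric, so one table covers both criminals.
years = {
    ("confess", "confess"): 8,
    ("confess", "silent"):  0,   # I confess, partner stays silent: I go free
    ("silent",  "confess"): 20,  # I stay silent, partner confesses: 20 years
    ("silent",  "silent"):  1,
}
actions = ["confess", "silent"]

def best_reply(partner_action):
    """My best action (fewest years in prison) given the partner's action."""
    return min(actions, key=lambda a: years[(a, partner_action)])

# A strategy is dominant if it is the best reply no matter what the
# other player does -- here confessing wins in both cases.
print(best_reply("confess"))  # confess
print(best_reply("silent"))   # confess
```

Since the best reply is to confess whether the partner confesses or stays silent, confessing is a dominant strategy for each criminal, which is why the likely outcome is the cell where both confess.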
and suppose that you trust me and I trust you and so that next day we actually bring 30 and we realize that we're able to walk away with more profit than if we both bring 40 then if we know that this game is going to be played tomorrow and the next day and the day after that we might be able to realize that in a repeated game we are better off by cooperating if it's a one-shot game if tomorrow is the last time it's going to be played then I'm going after that 2,000 dollars if you want to bring 30 that's great and I hope you do but in a one-shot game cooperation is highly unlikely so that's one characteristic under which cooperation is more likely or if they can punish each other if the players in the game can punish each other for breaking the agreement then cooperation is more likely okay so if the players before they go to prison let's say before they commit the crime they know that each of them is willing to let's say kill the other one if they break the agreement or have somebody else kill them or maybe have somebody else kill a member of their family then it's more likely that they're going to remain silent now we're adding something to the game there it's not just this as the payoff it's that there's another dimension it's one year in prison but if you confess you're going to go free but I'm going to have you killed okay so we would have to alter the game to build that in there but if the players can punish each other then certainly there's a higher probability that we will observe cooperation between them or if they have a relationship so if the two criminals are let's say brothers well there's a higher probability that we will observe them cooperating with each other so if I were going to engage in a crime I wouldn't but if I were and I needed to pick an accomplice rather than just picking somebody that I'm acquainted with that is handy at the time I would rather pick my brother one of my two brothers I can trust both of
them I know they're not going to rat me out and they know I'm not going to rat them out and so if I had to engage in a crime I'm going to choose somebody that I can trust completely and then there's a much higher probability that we're going to be able to observe cooperation the last one is if there's a small number of players so if there's a small number of players then it's more likely that we will observe cooperation although even with two players the smallest number of players there can be for it to be a game it's still unlikely that we're going to observe cooperation but imagine the situation if this is a group of twenty criminals that have all been caught all that it takes is one of them to confess and then the whole thing breaks down so the higher the number of players the smaller the probability they will actually be able to cooperate with each other and if you think about say criminal organizations that have been successful for long periods of time you'll start to see that those criminal organizations have some aspect of all of these involved they will emphasize the family nature of the organization and they will not hesitate to punish each other very violently and it's a repeated game small number of players usually if they're going to engage in a specific crime they have a small number of people that are going to engage in it so criminal organizations that are successful they do that so there are times when we observe cooperation but it's not really that likely a lot of times we observe confessing what I'm going to do is I want you to go to YouTube now I want you to pause this video I want you to go to YouTube and there's going to be a link for you to click on and I want you to follow the link and it's going to take you to a video that is a clip out of the movie A Beautiful Mind and in that clip you're going to observe A Beautiful Mind is a story about John Nash John Nash came up with the Nash equilibrium
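The definition of a Nash equilibrium given earlier, that given the strategies of the other players no player regrets their strategy, can also be tested mechanically. This sketch is an added illustration, not part of the lecture; it checks all four outcomes of the prisoner's dilemma matrix, using the prison sentences from above:

```python
# Nash equilibrium check for the prisoner's dilemma. An outcome (a1, a2) is
# a Nash equilibrium if neither player could get fewer years in prison by
# unilaterally switching while the other player's action stays fixed.
years = {
    ("confess", "confess"): 8,
    ("confess", "silent"):  0,
    ("silent",  "confess"): 20,
    ("silent",  "silent"):  1,
}
actions = ["confess", "silent"]

def is_nash(a1, a2):
    # Neither criminal regrets their strategy given what the other one did.
    p1_ok = all(years[(a1, a2)] <= years[(alt, a2)] for alt in actions)
    p2_ok = all(years[(a2, a1)] <= years[(alt, a1)] for alt in actions)
    return p1_ok and p2_ok

for a1 in actions:
    for a2 in actions:
        print((a1, a2), is_nash(a1, a2))
```

Only the both-confess cell passes the test, matching the argument that the other three outcomes leave at least one player regretting their choice; in particular both remaining silent is better for the pair but each player would regret it.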
he won the Nobel Prize for his work on what's become known as the Nash equilibrium and it has to do with this prisoner's dilemma and the inability to cooperate and the movie A Beautiful Mind was made about John Nash and his life and it's a fascinating movie I would encourage you to watch it but the scene I want you to watch in the movie is a scene where some guys have just passed their qualifying exams to get their PhD in economics they've just passed their qualifying exams and they go out to a bar to celebrate and while they're at a bar some ladies walk in and the way they portrayed John Nash coming up with the Nash equilibrium is in this scene where he's observing what's happening with the ladies who have walked into the bar so I want you to pause this for a second go watch that video and then come back to this one and we'll talk for just a second about whether or not they do a very good job of illustrating the Nash equilibrium so I'll clear this off while you pause the video and watch that clip so let's talk about that clip so in the clip the ladies walk into the bar and John Nash sees them and what he realizes is that the guys are all going to have an incentive to go after the one young lady who is better-looking than the other ones and the problem with that is going to be that they're going to go try to get her attention and they're all going to essentially block each other and then what's going to happen is once they've been rejected by the good-looking one then they're going to try to go after her friends but her friends are going to be upset because they didn't go after the friends first and so everybody goes home empty-handed and what he realizes is there's a better outcome and the better outcome is that what we need to do is ignore the good-looking one if we'll ignore her and just go directly after her friends then her friends will be much more receptive to our advances and then everybody's gonna pair off with
one of these girls and everybody's happy and that's where they end and he jumps up from the table and he says something about Adam Smith being wrong and he runs out and he thanks the lady on his way out and then he goes and writes his dissertation which won him the Nobel Prize in Economics and when I ask my classes we will watch that in a face-to-face class and if I ask them does that do a good job of illustrating what I just taught you a lot of times students will say yeah it does it demonstrates that sometimes there's a better strategy but I would argue that the movie does a terrible job of depicting the actual prisoner's dilemma and the reason is because the way they depict it is that John Nash says there's a better strategy the better strategy is that we all agree not to go after the one good-looking one but now think about what that collusive agreement creates it contains the seeds of its own destruction if we all agree not to go after the good-looking one if you know your friends are not going to go after the good-looking one what's in your best interest well it's in your best interest to go after the good-looking one but that's true for all of them so they can form this collusive agreement to ignore her but if you know everybody's going to ignore her then it's in your best interest to be the one person who doesn't ignore her because then she's going to be more receptive to you and yet all of them face that same incentive so the end result is that they all still go after the good-looking one right so in my opinion the movie while I like the movie and I think there are some good things about the movie it's very interesting and I think if you're interested in learning more about John Nash the book that that movie is based on is a very interesting book John Nash was killed not too many years ago when he and his wife were riding in a taxicab in New York that was hit by another car and
it killed both of them but John Nash had a very interesting life and in a lot of ways a very tragic life because he suffered from schizophrenia so I think the movie is decent but I don't think that they do a very good job of actually illustrating the conflicting incentives that the prisoner's dilemma is designed to demonstrate let's take that prisoner's dilemma because in this class we're not really that interested in whether or not criminals confess we're more interested in how firms behave and so let's take this prisoner's dilemma and let's think about how it would work if we had two firms competing against each other so let's think about a duopoly game and this will be a simpler game than the one we had with the two firms selling water so let's suppose we have two firms and these two firms are in a town suppose it's Walmart and Target and let's suppose both of the stores sell some gaming system each firm sells the gaming system I'm not going to demonstrate how out of touch I am with gaming systems by giving you the name of one because it'll just make me look foolish so whatever your favorite gaming system is that's the one we're talking about so Walmart sells it Target sells it they're the only two places that the people in town can buy it and let's pretend they can't go to another town to buy it if anybody wants to buy the gaming system they have to buy it from one of those two places and let's suppose that the manager of each store has to decide whether to charge a low price or whether to charge a high price so the two strategies are to charge a low price or charge a high price so let me go ahead and give you the game matrix here let's set this up the same way we set up our prisoner's dilemma so each of these stores each of the managers is going to have those two strategies let's put Target up here at the top and let's put Walmart over here and let's suppose the two prices are the low price is
$400 and the high price is $600 so those are the two choices we could have more choices but we want to keep this fairly simple so let's suppose we put our two strategies right here charge 600 or 400 Walmart can charge 600 or 400 let's divide these cells in half and we'll put the payoff to Target up in the top right corner and we'll put the payoff to Walmart in the bottom left corner of each of these cells let's suppose if both of them sell it for $400 let's suppose that they each make $7,500 profit on the other hand if they both sell it for a high price let's suppose they both make $10,000 in profit now if those were the only two things that they had to worry about then clearly what they want to do is both sell it for 600 both stores will make more profit if they will agree to sell it for $600 than they would if they both sell it for $400 here's the catch though let's suppose that one of them sells it for 600 while the other one sells it for 400 in that case the one that sells it for 400 is going to be able to steal customers away from the other one and so the one that sells it for 400 let's suppose they make $15,000 in profit and the one that sells it for 600 makes $5,000 in profit down here it's Target that's selling it for 600 and Walmart that's selling it for 400 so Walmart would get the $15,000 in profit and Target gets the $5,000 in profit and so what we've got here is the classic prisoner's dilemma the best outcome for the two firms is for them to both sell it for 600 the problem is if you'll analyze this game the way I taught you to analyze a game you'll see that selling it for 400 is the dominant strategy and so the likely outcome of this game is that outcome both of them selling it for $400 earning less profit than if they would sell it for $600 and you might think well they should just agree to sell it for 600 well keep in mind that colluding is against the law but let's suppose they do agree well if you're Target and I'm Walmart and I know we have
agreed to sell it for 600 and I think you're gonna sell it for 600 it's in my best interest to sell it for 400 because then I can get this fifteen thousand dollars in profit instead of 10,000 I'd rather have 15,000 than 10 so selling it for the low price is a dominant strategy even though there's a better outcome now again if this was a repeated game or if these firms could somehow punish each other every once in a while I have a student raise their hand and say hey you know what I've figured this out all Walmart and Target need to do is sign a legally binding contract to sell it for 600 that's illegal right you can't do that because the whole collusive agreement between firms is illegal it's outlawed by the Sherman Antitrust Act and a whole bunch of antitrust legislation so if you show up you know in front of a judge and say hey look I've got this agreement I'm the manager of Walmart and here's the manager of Target and we both signed this agreement to collude you're going to get in big trouble so you can't do that so it all has to be done secretly and we tend to see very little collusion now that doesn't mean we see no collusion if you look at kind of classic examples of collusion out there in the real world the airline industry there's been situations where different airlines have colluded with each other actually if you're interested there's a fascinating podcast the podcast is one episode of This American Life and the title of the episode is The Fix Is In it was done I think maybe three or four years ago it's been a number of years since I first heard it and it is a fascinating podcast about a story of a guy who had been engaged in some illegal activity but he agreed to be kind of an informant for the FBI to help them nail some international firms that were colluding to fix the price of something off the top of my head I can't even remember what it was they were selling but it is a fascinating discussion of exactly the
the see nature of collusion and and how firms do it out there in the real world and and how they can get caught doing it so I would highly encourage you to listen to that podcast let's talk about some other examples of how we can apply this collusion or inability to collude result that we've just gotten because it applies to a lot of very interesting things out there in the real world that don't really have anything to do with firms making decisions notice in this chapter that we are not drawing the cost curves for firms and looking where marginal cost equals marginal revenue but that would be going on behind the scenes here these firms would be to the extent that they can trying to figure out where marginal revenue equals marginal cost the problem for them is there's a massive amount of uncertainty they don't know exactly what demand curve they're going to face any particular day and so knowing what their marginal revenue curve it it's very challenging they don't know what it looks like and it can look different from one day to the next depending upon what the other firms are doing so that's what we're focusing on why we're focusing on the game theory aspect of it rather than just a marginal cost curve and a marginal revenue curve but keep in mind these firms are still going to be trying to maximize their profit it's just that there's uncertainty so let's think of some other ways that we can apply this this prisoner's dilemma idea one of them is in terms of advertising so we tend to see a fair amount of advertising with oligopolies the problem is for oligopolies the advertising typically doesn't help them we see a lot of it but let's think about why if we go back and we look at monopolistic competition we talked about the fact that there is going to be a strong incentive for those firms to advertise to market themselves and the reason is they need to try to further differentiate their product if they can do that they can shift that residual demand curve to the 
right and at least for a while they have a shot at making a positive economic profit so we see a lot of advertising in that type of market structure with an oligopoly we also see a lot of advertising you see Coke and Pepsi do a lot of advertising you see the airlines do advertising you see cable companies advertise you see DirecTV and Dish Network advertising you see a lot of advertising here the problem is the advertising for these firms tends to be relatively unproductive and the firms would be much happier if they didn't have to do it so let's think about a couple of cigarette companies advertising this is kind of a classic example that's described in a lot of economics textbooks let me show you kind of what the game matrix looks like for two cigarette companies that are deciding whether or not to advertise let's make our two companies Marlboro and let's say Camel let's suppose that each company has two strategies they can either advertise or not advertise I'll abbreviate that so advertise or not advertise let's break up the cells into halves so we can write the payoffs let's suppose that if both of them advertise Marlboro gets three billion in profit and let's say Camel also gets three billion in profit if on the other hand they don't advertise let's suppose that Marlboro gets four billion in profit and Camel gets four billion in profit and you might say well that doesn't make sense why would the payoffs look like that well actually it makes perfect sense if you think about why people choose to smoke people do not choose to smoke because of advertising so it's not the case that a nonsmoker is out getting gas at the pumps and they're sitting there filling up their car and they look up on top of the pumps and they see that hey lo and behold Marlboro cartons are on sale this week and think to themselves you know what I've never smoked in my life but they've got a sale on Marlboro cartons so I'm gonna go get me a carton and start
smoking that's not why people choose to smoke people choose to smoke because their friends smoke their parents smoke they choose to smoke for reasons other than advertising okay so the advertising doesn't get new customers for Marlboro or Camel so they would be better off if they didn't spend the money on the advertising so from their perspective this is the best outcome if they advertise they're not going to generate new smokers the dollars spent on advertising have no payoff for them and so they end up making less profit overall here's what does happen when you advertise though if you advertise and the competitor doesn't then people who are already smokers start to switch brands so if Camel chooses to advertise while Marlboro chooses not to advertise Camel can steal some of the market away from Marlboro and so the firm that advertises is going to make a bigger profit than the firm that doesn't so let's suppose that if Camel chooses to advertise but Marlboro doesn't we're up here in this box let's suppose Marlboro gets 2 billion in profit and Camel gets 5 billion in profit on the other hand if Marlboro chooses to advertise while Camel chooses not to advertise then Marlboro is going to get the 5 billion in profit and Camel gets the 2 billion now if you analyze this the way I taught you to do it what you'll see is that the likely outcome for this game is for both of them to advertise advertising is a dominant strategy and so the likely outcome is right up here even though there is a better outcome for these two firms they would rather not advertise but advertising is a dominant strategy let me pause here for a second before we talk more about this and just show you that the likely outcome is not always going to be up in the upper left hand corner because in this game the likely outcome is in the bottom right hand corner it depends on where you put your two strategies so you could analyze this game and another person could analyze the game and
switch the numbers here and their likely outcome would be in a different location okay so the reason I say that is don't get used to thinking that once you put the game matrix together you just need to circle the top left corner that's not how it's always going to work so let's go back to this example if you were to go back to the early 70s cigarette companies did a lot of advertising on television and you can actually get on YouTube and search for old cigarette commercials and they're incredibly odd to watch because we just can't imagine having a commercial like that on television but they used to have commercials and then Congress decided that what they would do is they would punish the cigarette industry by not allowing them to advertise on television they would have to advertise in magazines or they could advertise in stores or other places but they couldn't advertise on television and at the time the cigarette companies kind of stood back and said oh gosh don't do that to us please don't make it illegal for us to advertise but they didn't really oppose it very much and so what ended up happening was the legislation passed and they prohibited the cigarette companies from advertising on television and if you look at the profits that the cigarette companies earned they went up after that and you might ask well why well here's why because the government solved the prisoner's dilemma for the cigarette companies the government made advertising at least on television illegal and that moved them down here to this outcome that they wanted to be at from the beginning and so what you see is we still see a push for restrictions on cigarette companies and what they can do and whether or not they can advertise within 20 feet of a candy counter as if advertising within 20 feet of a candy counter actually has anything to do with whether or not people smoke we still see pressure put on to restrict the ability of cigarette companies to advertise
and the cigarette companies kind of stand back and say now you know what don't do that please don't do that but that's exactly what they want if you want to hurt the cigarette companies let them advertise let them advertise wherever they want now I'm not advocating that as something we should do but if your goal is to drain the cigarette companies of dollars you should make advertising easy for them because it's not going to help them get new smokers and they don't want to do it in the first place so this is a good example of how we can use this prisoner's dilemma and what we know from it to understand something that we see out there in the real world this also applies to a lot of other things if we think about the arms race I think your book may have a discussion of the arms race as a prisoner's dilemma here's the situation if we think about two countries let's make it simple two countries two strategies either armed or disarmed so you either have nuclear weapons or you don't well there are two inherently safe situations in the world one situation is where nobody has any nuclear weapons and the other situation is where everybody has a nuclear weapon and that's a safe situation because of mutually assured destruction if you're going to push the button to nuke another country you know they're also going to push the button so you're essentially pushing the button to kill yourself so both of those are inherently safe situations the unsafe thing is when one country has them and the other doesn't and if you look at the outcome of that game arming yourself is a dominant strategy and so we tend to see countries arming themselves and if you want to understand how we could somehow get to what is probably without argument a better outcome where nuclear weapons didn't even exist I don't think there's any doubt in anybody's mind that that is the best outcome of the game the problem is it's a dominant strategy to arm so it's not the case that politicians are just stupid and
that all they care about is killing everybody else and that's why they want these weapons it's a dominant strategy if we want to understand how to change that then we have to go I've erased them but if you go back to the conditions under which we would expect cooperation if we want arms agreements then we have to recognize that this world we live in is a repeated game we have to have relationships between countries so it's easier to cooperate if the countries have a relationship so if countries are isolating themselves that's not good for solving that prisoner's dilemma if there's an ability to punish a country if they break the agreement then we have a higher probability of observing cooperation so setting up international organizations that have authority to punish a country if it does something that is not abiding by the agreement that can be one way to get closer to that good outcome of not having any nuclear weapons at all so the arms race is a good example of the prisoner's dilemma another excellent example is dirty campaigning this is something that we're all used to seeing these days it's something that people complain about and if you go to the barber shop you'll hear people talking about how politicians are just dirty these days and all they ever do is dirty campaigning well if you think about it it's a dominant strategy let's suppose nobody campaigns dirty okay if nobody campaigns dirty that's one possible outcome or if everybody campaigns dirty if nobody campaigns dirty then voters tend not to switch if everybody campaigns dirty voters tend not to switch what happens though is if one person runs a clean campaign and the other person runs a dirty campaign then voters tend to switch towards the person that's running the dirty campaign because if somebody's talking bad about your candidate and your candidate is not responding then you start to think to yourself you know what if that wasn't true I'd speak up and I'd say something but
if your candidate just keeps their mouth shut then people start to think well maybe it's true and they start to switch and so what you start to realize real quickly is that dirty campaigning is a dominant strategy believe me the candidates would love to not spend all the money that they have to spend running those dirty ads they don't want to do that it's not that politicians are inherently more dirty than anybody else that's a shortcut to thinking that's not the way it works it's that if you are going to run a clean campaign no matter what you're probably going to lose unless somehow you are lucky enough to be running against another candidate who is not going to campaign dirty and we do see some of that sometimes we do see two candidates that don't really say anything bad about the other candidate but here's the real thing you might think well let's just outlaw dirty campaigning well how do you do that how do you outlaw something when you can't even define what it means to campaign dirty if somebody voted for something and I point that out as a candidate does that mean that that's a dirty campaign I mean you might not like that if you like the other person and I point out that they voted this way or they didn't vote this way so somehow defining what it means to campaign dirty that's impossible so it's a dominant strategy and I don't know how to fix it I guess we can think about using some of those conditions under which cooperation is more likely but that's really challenging with dirty campaigning so you can see that there are a lot of applications of this prisoner's dilemma not just to firms and how firms make decisions but to lots of other things that we tend to see game theory is very useful for analyzing sports situations it's very useful for analyzing military strategy situations it's really widely applicable to a lot of different things that we observe out there in the real world so let's
kind of conclude this discussion of oligopoly with a couple of different things so the first thing is that collusion is unlikely if I ask my students at the beginning of a principles class whether they think collusion is likely many students will say that it's happening all the time that all firms are colluding against us no they're not as a matter of fact we don't see collusion with perfect competition if you think about the gasoline market it's tempting to think that firms are colluding because when you drive into a town you see that all the gas stations have a price that's pretty close to everybody else's that tempts you into thinking that there has to be somebody behind it but what's happening there is just that the incentives are driving everybody to that price that's what happens when you have a market wherever the demand curve and the supply curve intersect the price is going to be driven to that price and so when you observe all of the gas stations having a price that's within a few cents of each other it makes you think as if they're colluding but they're not they're just responding to the incentives that are created at their particular gas station we don't see collusion obviously with monopolies and we don't see collusion with monopolistic competition because there are so many firms that the number of firms is too high for them to collude it's not until we get to oligopoly that we see collusion and even then it's hard for all the oligopolists to agree to collude and then stick to it so collusion is unlikely but it does sometimes happen so let's say but sometimes happens the second thing is that we have to keep in mind that firms have conflicting incentives this is true with regards to a lot of things in life most of the time there's not one right answer life would be simple if there was always one right answer but really that's what the study of economics is about is understanding that
everything has benefits and everything has costs and sometimes there's uncertainty about those benefits and costs and that makes problem-solving challenging it's one thing if you know exactly what the benefits are and exactly what the costs are but there's uncertainty there's uncertainty created by the fact that life itself contains randomness and is uncertain it's uncertain in part because other people's behavior is uncertain the behavior of firms is uncertain so firms have conflicting incentives and that makes problem-solving challenging here's the other thing it's good for society that firms have a difficult time colluding we're kind of taught to think that cooperation is always the best thing that if we could always just cooperate life would be way easier there are times when that's true I would argue though that there are lots of times we do not want people cooperating we certainly don't want firms cooperating with each other because when firms cooperate that drives the price up for consumers it's good for the firms if they can cooperate but it's bad for consumers it creates deadweight loss and so it's a good thing that firms have a difficult time colluding it's a good thing that criminals have a difficult time colluding it's a good thing that the police are able to get criminals to confess society is made better off by that so some types of cooperation are good but there are lots of times when we do not want to observe cooperation the other thing is and let's finish with this the more firms there are the less likely cooperation is I would argue that this is also a useful piece of information to keep in mind when you're thinking about something like a conspiracy theory so if you're thinking about some conspiracy I would argue that we can use what we've learned right here to come up with a good rule of thumb for whether or not there's
anything behind a conspiracy theory or not and this piece of information right here is very useful any conspiracy theory that involves a lot of people I'm not going to lend much credence to that if there's a conspiracy theory that involves a lot of people to pull it off and a lot of people to keep their mouths shut I'm not gonna buy it that's not to say that there aren't some conspiracy theories out there that are actually happening I don't know there have been conspiracies in the history of the world that we know of that involved some people keeping their mouths shut and they did or at least they did for a while but if we're talking about large numbers of people keeping their mouths shut or large numbers of people cooperating with each other to pull it off I don't know there's a lot of evidence that tells me that that's very unlikely it's not completely unheard of but the more people have to be involved in it the less likely it is that they're all going to be able to cooperate and then continue to keep their mouths shut unless there's some ability to punish them and if they know that speaking up is going to get themselves killed well yeah that increases the probability so we can generalize a lot of this information to other things besides just firms producing goods and services but keep in mind really what we're after here is this fourth market structure where we're talking about how this compares to monopoly and monopolistic competition and perfect competition and this fits in between monopoly and monopolistic competition closer to monopoly if we increase the number of firms we get closer and closer to monopolistic competition it depends on how differentiated the goods are so hopefully this gives you an idea of some game theory stuff and I will see you in a future video
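The advertising game from the first part of the video can be checked mechanically. Here is a small Python sketch, not part of the lecture, using the lecture's payoff numbers in billions; the function and strategy names are my own, and the code simply tests whether each firm has a strictly dominant strategy:

```python
# Payoffs from the cigarette advertising example, in billions of profit.
# Keys are (marlboro_strategy, camel_strategy); values are (marlboro_payoff, camel_payoff).
payoffs = {
    ("advertise", "advertise"): (3, 3),
    ("advertise", "dont"):      (5, 2),
    ("dont",      "advertise"): (2, 5),
    ("dont",      "dont"):      (4, 4),
}
strategies = ["advertise", "dont"]

def payoff(player, marlboro, camel):
    """Look up one player's payoff for a pair of strategy choices."""
    return payoffs[(marlboro, camel)][player]

def dominant_strategy(player):
    """Return a strategy that beats the alternative against every opponent choice, or None."""
    for mine in strategies:
        other = [s for s in strategies if s != mine][0]
        if player == 0:   # player 0 is Marlboro, choosing the row
            better = all(payoff(0, mine, theirs) > payoff(0, other, theirs)
                         for theirs in strategies)
        else:             # player 1 is Camel, choosing the column
            better = all(payoff(1, theirs, mine) > payoff(1, theirs, other)
                         for theirs in strategies)
        if better:
            return mine
    return None

print(dominant_strategy(0))  # advertise
print(dominant_strategy(1))  # advertise
```

Both firms come out with advertise as their strictly dominant strategy, so the predicted outcome is (advertise, advertise) with payoffs (3, 3) even though (4, 4) is available if neither advertises, which is exactly the prisoner's dilemma structure the lecture describes.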
Principles of Microeconomics: Chapters 10 and 11, Externalities and Public Goods
in this video I want to talk a little bit about a concept known as externalities and really what we're going to be thinking about is the issue of pollution where pollution comes from why it's created and then what we can do to try to have less of it okay so we're going to think about how markets result in the creation of pollution actually any economic activity results in the creation of pollution but we'll think about specifically how the issue of pollution is dealt with within market-based systems so let's start by thinking about this question what's the right amount of pollution and that's kind of a tricky question because if you go out and you ask people on the street what's the right amount of pollution everybody's knee-jerk reaction is to say zero but we have to think very carefully about that question because it's much more subtle than just saying no we need to strive for having none what we know about this world that we live in is that any type of economic activity the production of anything creates pollution pollution is not something like tires that is created solely in a factory we don't have pollution factories out there if we had pollution factories and that was where pollution came from then we'd just close those factories down pollution is a byproduct of the production of everything and so if as a matter of public policy we wanted to have zero pollution then the only way to achieve that is to have zero production of anything or everything well clearly we don't want that and so we have to figure out how do we balance the benefits that we get from the production of things like baby formula and medicines and food and transportation to different places and entertainment how do we pick the right amount of that so that we don't have too much pollution because pollution is something that we don't want to have we want to have less of that so we have to think about well what are the
benefits of the things that we're producing out there in the economy but also what are the costs and pollution is one of the costs of production and unfortunately a lot of times the costs created by pollution tend to get pushed aside and ignored because they can be hard to put a dollar amount on and so it's easy to ignore them it's easy to ignore anything that you don't have a dollar amount for but if we're going to make good economic decisions we can't ignore that so let's start by thinking about a concept known as externalities so here's the definition of an externality we can have a positive externality we can have a negative externality but here's what an externality is an externality is either a benefit or a cost that affects somebody that's not directly involved in the production or consumption of a good or service okay so let me give you a different less sophisticated definition of it an externality exists anytime one person's behavior affects somebody else so somebody's either getting a benefit or bearing a cost and that benefit or cost that they're bearing is a result of somebody else's behavior it's not a result of their behavior okay so let's think about some examples of negative externalities this would be a situation where one person's behavior is creating a cost that is being borne by somebody else okay so the classic economic example of a negative externality is secondhand smoke so this would be a situation where the smoker is exhaling smoke and then somebody else that's in the vicinity ends up breathing it and breathing that smoke is creating a cost for them that's not part of their behavior it's part of somebody else's behavior another example is production of a good that creates pollution so we've got let's say a firm a
business that's producing some good and during the course of production of that good maybe there's something that comes out of some smokestack or maybe it's something that is emitted into a waterway or maybe it's noise pollution and then there are people outside of the firm maybe people in the vicinity of the factory that are just going about their daily lives and they have to breathe in the stuff that comes out of the smokestack that would be an example of a negative externality it could be anybody driving a car if you drive your car then there are things that are coming out of your exhaust pipe that go into the air and other people breathe those things in and so anytime you're driving you're creating a negative externality it's not very big but it's still a negative externality and you might think well no I've got an electric car but it's still the same right it's just that the emissions aren't coming out of your car they're coming out of the electricity factory that's generating the electricity the emissions are just coming from a different location and keep in mind that a majority of the electricity that we use is still generated using fossil fuels and so it's still creating a negative externality again not a very big one the size doesn't matter it's a negative externality and there are lots of other examples playing your music loudly so that it disturbs somebody else would create a negative externality there are also positive externalities so if we think about some examples of positive externalities something like you getting a college education so if you get a college education then people with more education tend to make more responsible decisions they tend to be more productive in society and when people are more productive that benefits me so you getting your college education actually passes on a little bit of benefit to me because it improves society not a lot you actually capture
a vast majority of the benefit of your college education but there is some benefit that you pass on to other people so that's an example immunizations so you being immunized or you getting your kids immunized creates a healthier society in the future so other people benefit from that so those are examples of positive externalities another example of a positive externality would be say you spending time landscaping your yard to create a pretty scene that somebody could drive by and see so if I drive by your yard and you've spent a lot of time making it look nice then I could drive by and say wow I like driving by here that makes my day just a little bit better and you of course are getting the benefit but you're also passing along a little bit of benefit to me so earlier in the class we've talked about a situation where free markets maximize the economic well-being of both buyers and sellers we've seen that total surplus is maximized when you have a free market now what we're going to think about is what happens if there's an externality present and what we're going to see is that a free market does not maximize total surplus in the presence of an externality either a negative externality or a positive externality so if externalities are present this is a situation where the government may play a role to try to move the market outcome to a better outcome something where total surplus is actually bigger than it would be if the government didn't do something let's start by thinking about a negative externality so a negative externality is a situation where the supply curve the marginal cost curve does not capture all of the costs associated with the production of the good so we'll think about production of some good that creates a negative externality so let's start by thinking about what this picture is going to look like we've got the price up here and the
quantity down here let's think about say the production of electricity okay so let's say that this is the generation of some amount of electricity and so here's the price of electricity now we've got some demand curve a market demand curve all of us have some demand for electricity then there's going to be a supply curve and we'll label this supply curve private remember that's a marginal cost curve anytime we have a supply curve we know that the market supply curve is just the horizontal summation of all the individual firm supply curves so that market supply curve represents the cost of production it's a marginal cost curve but now let's label it private because what we're going to do is say s private represents the private costs of production of electricity now what we mean by that is that it represents the costs that the electrical utility faces okay so the utility is going to have costs associated with production of the electricity that it generates and they would maximize their profit by looking at where marginal revenue equals marginal cost but they're only going to respond to the costs that they themselves bear so when we say private costs we're thinking about the private sector versus say the public sector maybe the public in general that's probably a better way to think about it but now what that means is that there's going to be some cost that the utility doesn't bear maybe they emit something out of smokestacks particulate matter or maybe it's some pollution that clouds the sky or creates some health impact for people okay so let's say that the firm is able to pass some cost onto everybody else this would be the pollution so whatever form that pollution takes it could also be that maybe this is an electricity generating facility where
water is used to cool something down and then that water is emitted back out into a lake and it warms up the water temperature that's a situation where some cost would be passed on so the firm is able to pass some cost to everyone else okay we call this the external cost so if we think about what this supply curve represents if we were to pick a quantity out here and go up to the supply curve the height of that supply curve would represent their private cost but then there's going to be some additional external cost so if we were to go up a little bit higher at each quantity this vertical distance is going to represent the external cost so let's have a little arrow that says here's our external cost that's the cost that the firm itself doesn't have to bear they're passing that on to everybody else so if we were to think about the total cost if we were to have another supply curve that was everywhere that vertical distance higher than the original one we'll call this supply curve the social cost I'll just say social that's the full cost to society it's made up of the private cost which is this distance plus the external cost that's borne by people outside of the firm so out here at these quantities the private cost would be this vertical distance and then there's this extra external cost that's passed on so this curve represents the full cost to society now if we think about what happens if the firm is able to pass that cost on then the free market outcome is going to be right here let's call this Q free market there's the quantity that would be produced if the firm is able to pass this cost on to everybody else and here would be the price in the free market there's the free market quantity and price but now let's think about what the right quantity and price would be and the right quantity and price is
going to be the quantity that exists and the price that exists if no cost is passed on or let's say if no cost is ignored so from society's perspective the full cost of production at the margin would be represented by this curve and right here would be the right quantity this would be the quantity that the benevolent social planner would choose because the benevolent social planner would not just pass this cost on to somebody else and then not worry about it the benevolent social planner wouldn't ignore any cost and so right here is what we would call the efficient quantity and right up here would be the efficient price so let's call this Q star and we'll call this P star so here's the conclusion that we get anytime we've got a negative externality from production of some good and it's not just production it could be any of these things what we see is that the free market results in too much of the good being produced and the free market price is too low it doesn't reflect the full cost of production so once the externality is accounted for we would produce less of the good and charge a higher price okay so let's say a negative externality results in production of too much of the good with a free market so anytime you've got a free market and a negative externality is present the free market is going to result in too much being produced and we have a name for this we call this market failure I've never been a big fan of this term because I think that the term itself makes it seem as if the market is not adjusting to eliminate surpluses or shortages that's not what we're talking about here the market fails not in the sense that price doesn't adjust or that if a surplus is present there's no incentive to lower price that's not what happens the market clears it's just that the market results in the wrong quantity an inefficient quantity being produced okay so there's going to be deadweight loss created
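The geometry just described can be made concrete with a small numeric sketch. The linear demand curve, private supply curve, and per-unit external cost below are invented for illustration (they are not numbers from the lecture):

```python
# Illustrative linear curves for the electricity example:
#   Demand:          P = 100 - Q
#   Private supply:  P = 20 + Q        (marginal private cost the utility bears)
#   External cost:   10 per unit       (pollution cost passed on to everyone else)
#   Social supply:   P = 30 + Q        (marginal social cost = private + external)
external_cost = 10

def demand(q):        return 100 - q
def private_cost(q):  return 20 + q
def social_cost(q):   return private_cost(q) + external_cost

# Free market: demand meets the private supply curve  ->  100 - Q = 20 + Q
q_free = (100 - 20) / 2          # 40
p_free = demand(q_free)          # 60

# Efficient outcome: demand meets the social supply curve  ->  100 - Q = 30 + Q
q_star = (100 - 30) / 2          # 35
p_star = demand(q_star)          # 65

# Deadweight loss: triangle between social cost and demand over the overproduced units
dwl = 0.5 * external_cost * (q_free - q_star)   # 25

print(q_free, p_free)   # 40.0 60.0 -> too much produced, price too low
print(q_star, p_star)   # 35.0 65.0 -> less output, higher price
print(dwl)              # 25.0
```

With these numbers the free market produces 40 units at a price of 60 while the efficient outcome is 35 units at 65, so the market overshoots the efficient quantity and underprices the good, creating a deadweight loss of 25, just as the diagram predicts.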
that's what we mean when we say market failure now let's think about what happens if there's a positive externality so let me clear this off and then we'll take a look at what the picture is going to look like if we have a positive externality now remember a positive externality is when there's some benefit that's being passed on to somebody that's not engaged in the behavior themselves so for this one let's do college education so let's suppose our quantity here we'll just say is college education we could describe that in terms of the number of students getting educated or the number of college hours of education we don't even need to worry about that let's just say it's quantity of college education we could even talk about it in terms of maybe scholarships but let's just leave it as that and this will be the price of a college education so let's put our demand curve up here and our supply curve market demand market supply now in this situation there's a benefit that's being passed on remember the supply curve represents the cost and when we were talking about a negative externality and there was a cost being passed on we recognized the fact that the supply curve didn't represent the whole cost but now there's a benefit being passed on remember that it's the demand curve that represents the benefit so we're going to call this demand curve the marginal private benefit I'm just going to put private down here like we did with our supply curve it represents the marginal private benefit but now there's some benefit that's being passed on we could call it an external benefit and so if we were to pick any quantity and go up to the height of the demand curve the height of the demand curve would be capturing the private benefit that would be the benefit that the students themselves get from going to college but then there's going to be some external benefit that's passed on and so let's suppose
that that external benefit is say that vertical distance okay and it's going to be that vertical distance whether we're talking about this quantity or this quantity there's our external benefit so if we were to represent the full benefit the curve that captures all benefits associated with college education then we'll call this one D social that represents all of the benefits the benefits that are captured by the private individuals engaged in getting the education but also the benefit that's captured by society because other people are getting an education so keep in mind the benevolent social planner would never ignore that benefit that's going to other people now if we were to think about where the free market goes well individual students are going to respond to their own private benefit and so this would be the outcome in a free market we'll call that Q free market and this would be the price now if we think about what would happen if we don't ignore any benefit the benevolent social planner would point out okay well this is what happens in the free market but right up here is the quantity that would be provided if we were actually maximizing total surplus we'll call that Q star and here would be P star so we get a similar thing to what we saw with a negative externality except it's in the opposite direction what we see is that with a positive externality the free market results in too little of the good being provided so the practical conclusion that we could get from this is that there is a good justification for why the government would want to get involved in increasing the amount of college education that takes place whether that be through grants or scholarships of some type or subsidizing universities in some way there's a good theoretical explanation for why we wouldn't want to just rely on the free market okay results in too little of the good if there's a positive externality let's ask
this question what causes externalities what causes externalities and there's kind of an interesting explanation for what causes externalities what it boils down to is that there's some failure of the property rights structure so let's just say the main cause of externalities whether they're positive or negative is what we would call an incomplete property rights structure incomplete property rights structure that's what allows some people to pass on costs to other people and it's also the problem of not being able to capture all of the benefits associated with a positive externality so let me give you an example to kind of illustrate what's going on here let's suppose that you own a piece of land and that land has a lake on it and let's suppose that you lease some of the land to a paper mill well if you own the land and the lake is on your land and you lease that land to the mill and they pollute your Lake then you can hold them responsible the court system will allow you to sue them as long as you can prove that they somehow did this beyond the contract that you agreed upon then they will be held liable in that case there's no externality because they're not able to pass on the cost you can hold them responsible for it but now let's suppose that that mill is built on let's say privately owned land on the banks of a lake that's owned by the state now that creates a whole different situation because if the lake is owned by the state then in reality it's not owned by anybody if everybody owns it that's the same thing as nobody owning it because you yourself as a private citizen cannot sue somebody else for polluting a public Lake okay so in the absence of any type of government regulation that prohibits them and there are certain restrictions but emitting some pollutant into a lake is not completely illegal you have to abide by whatever the government regulations are but nobody can sue for that in that case the cost of
the pollution is passed on to society that is a situation where the firm would be able to make that cost external to them they don't have to worry about it and what is different between the two situations is who owns the property whether or not it's owned by a private individual or whether or not it's public property public property means that no private individual is able to sue based upon what happens to that property okay you can't challenge anybody else's ability to use the lake and that's what creates the problem it's an incomplete property rights structure there's a failure of anybody to own that so now let's think about how we fix that problem the way we fix it is we have to address the property rights structure and there's let's say a handful of ways that we can do that so let's talk about private solutions to externality private Solutions to externalities I'm just going to abbreviate that and let's start by reminding ourselves that the efficient amount of pollution is not zero we have to balance the good that we get out of production of the goods and services or in the picture that we drew the good that we get out of production of that electricity and remember that production of that electricity is allowing people to heat their homes when it's cold and allowing people to have lights and it's allowing people to run um let's say appliances in their house all of those things that enhance people's well-being but then there's this negative side effect and that is that it's also creating some pollution so we have to balance the good that we get from the things that we can use the electricity for with the bad that we get from production of the electricity so if we think about let's draw a little picture here let's suppose that we think about pollution reduction here quantity of pollution reduction so here's our quantity let's put our price let's put just dollars up here this will be the cost and benefit so if we think about what's the right
quantity of pollution reduction well there's going to be a marginal cost of pollution reduction it's going to be upward sloping the more pollution we reduce the more costly it is to reduce pollution let's suppose that we're not reducing any pollution then that means there's going to be some low-hanging fruit so to speak there's going to be some easy things that we can do that have relatively low cost that allow us to reduce pollution but then as we reduce more and more pollution it gets harder to reduce more and more you can install scrubbers on smoke stacks and that will capture a large part of the pollution but then there's always going to be very small amounts that are very hard to capture and you need much better technology and that better technology is much more costly so if we want to reduce a lot of pollution at the margin it's going to be very costly now in terms of the marginal benefit of reducing pollution if we do no pollution reduction then the marginal benefit of reducing a little bit is high but then as we reduce more and more and more and more the marginal benefit declines if you've reduced 99 percent of the pollution that's out there then the marginal benefit of reducing that last one percent is very low so just like any demand curve or marginal benefit curve it's downward sloping so let's call this marginal benefit this is downward sloping for the exact same reason that any demand curve is downward sloping when you don't have very much of something you value an additional unit highly but the more of something you have the less you're willing to give up to get another unit at the margin this is why the first bite at the buffet is always the best bite right but then as you continue to eat each consecutive bite gives you less and less benefit and by the time you're full one more bite's not doing you much good at all now what we've seen in this class is that we get the right outcome when marginal benefit and marginal costs are equal a decision maker
takes an action as long as the marginal benefit is higher than the marginal cost never when the marginal cost is higher than the marginal benefit so the right amount of pollution reduction is right here let's call it Q star don't confuse this with production of some good here we're talking about how much pollution we should reduce the conclusion that we get here is that we should not reduce all pollution we need to reduce pollution until the marginal benefit of reducing pollution is equal to the marginal cost of reducing pollution okay let's think now in terms of some private solutions to externalities let's talk about something that we call the Coase theorem the Coase theorem was first written about by an economist named Ronald Coase and what Dr Coase said is that as long as transaction costs are low then the firms or the parties that are involved in the negative externality are going to have an incentive to come to some agreement between themselves that will eliminate the externality okay so let's just say that um private bargaining that's the key thing private bargaining will fix the problem so here's the idea let's suppose that um I own the lake and um no let's change it let's suppose that um there's a river that looks like this and let's suppose I own a resort down here here's a resort and up here is a firm that generates electricity let's just say the Electric Plant and so I own this Resort here's the Electric Plant this is a public River and let's suppose that this plant as part of the generation of electricity let's suppose they take in some water they use it to cool off some of their I don't know turbines or something and it heats up and then they emit that warm water back out into the river okay so there's warm water getting emitted back into the river let's suppose the river flows this way towards my Resort and let's suppose that that warm water makes it uncomfortable for people at my resort to get into the water okay and
so that's hurting my business so here's what the Coase theorem says the Coase theorem says that I have an incentive as the Resort owner to go to the Electric Plant and offer to pay them to reduce the amount of warm water that they emit into the river I have an incentive because I'm losing out on some of my profits that I would make otherwise so up to the amount of my losses I would have an incentive to pay them to reduce the temperature and we can show theoretically that yeah there is an incentive and we can also show that the electricity plant would also have an incentive to make a deal with me as long as the transaction costs are low the problem with this is that it doesn't apply a lot it's nice to think that there could be some private bargaining that goes on but a lot of times what we see out there in the real world is that the firms tend not to negotiate with the resort and especially if there are a lot of people let's suppose it's not just one electricity plant suppose it's multiple plants the bottom line is that the Coase theorem is nice theoretically but it doesn't apply very often so let's think about some government remedies the first one that we'll think about is a tax if you look at this picture if we go back and look at this picture you should recognize this picture or at least let's say this picture should remind you of something that we did earlier in the class and that was when we were analyzing a tax um there when we analyzed a tax we had a curve shifted by the amount of the tax and this picture's similar now what we need to do is clear this off and draw a different picture because in our tax picture um we actually had it shifted the different direction than this one so let's clear this off and then we'll draw another picture and see how a tax can be used to fix an externality all right let's draw a picture of what this will look like we have a negative externality so we don't need to worry
about what the good is let's just suppose that production of this good creates some uh some negative externality Okay so we've got demand curve that represents the benefit that consumers get from the good and then we have our supply curve that represents the costs but remember we're going to have this private supply curve that represents the private costs of production for the firms and then there's going to be this external cost we can draw another supply curve that represents we called it social that represents the full costs associated with production of the good and the vertical distance here is the amount of the external cost that's passed on so the free market quantity we found was right here let's call it q1 P1 now this picture should look very similar to the picture that we saw with attacks so what we saw is that if we impose a tax then we can illustrate that tax graphically one of two ways actually there's three ways but two of them involve shifting a curve we could illustrate a tax by either Shifting the supply curve up by the amount of the tax or we could have shifted the demand curve down by the amount of the tax but if we're sitting here at this point let's call it point a if we're sitting at Point a and we wanted to get to this outcome which we would call point B that's the efficient outcome then if we imposed a tax of this vertical distance on the sellers then that would shift that supply curve up by the amount of the tax which in this case is equal to the external cost so the tax would be equal to the external cost the cost that's being passed on if we were to impose that tax then that would result in this outcome Q star and this price so you can see that if we were to impose a tax equal to the amount of the cost that's being passed on then that would fix this problem let's think about the challenges with that the first challenge the first and most obvious challenge is that the government would need to know how big the external cost was in order to 
figure out how big to make the tax and that's challenging because the costs that get passed on oftentimes it's hard to put a number on those actually if you're interested in that I teach another class called natural resource economics econ 4020 where we would look at how you put a dollar amount on that we can do it we've got some techniques that we can use to place dollar values on things that are very hard to place a dollar value on but that's one of the challenges with this is that for the government it's not obvious um what that tax should be and you might think well let's just go out and ask people what do they think the tax or the external cost that they're bearing is going to be well people have an incentive to exaggerate that there are some reliable techniques that we can use let's leave it at that here's an interesting thing about this what we know is that we could get that efficient outcome by taxing the um producers we could also get that efficient outcome if we tax the buyers politically that seems like uh an unpopular thing to do the general public typically doesn't understand that it doesn't matter who you place the tax on the outcome is going to be the same but if we're trying to create a tax that fixes a pollution problem and we tax anybody but the creators of the pollution that's easy to criticize even though from an economics perspective we know that it doesn't matter it has the exact same outcome in the end um if we wanted to have a similar remedy for a positive externality instead of a tax we would need a subsidy because we know that with a positive externality too little of the good is bought and sold too little of the good is produced so we would need a subsidy to increase production of that good um let's talk about some market-based systems actually before we talk about that let me just add on one thing to this government remedy and that is another thing that the government could do is what's referred to as
command and control and that simply means that the government instead of using a tax to try to influence the amount of the good that creates the pollution can just set a quantity the government could just say you know what here's the amount of pollution that you can create it's referred to as command and control the problem is that different producers have different technology different producers create different amounts of pollution and the government doesn't have the ability to know the quantities of pollution that are being produced by the producers so that makes it very challenging for the government to try to somehow tell each firm how much pollution to emit market-based systems one of the obvious ones is what's referred to as a tradable permit system tradable permit system essentially the way a tradable permit system works is that the government issues some rights to pollute and then once those rights to pollute are issued to the firms the firms can buy and sell them from each other and we don't have time in this class to really go into why that's a good system and a lot of times when you first hear that it seems weird because it seems odd to people that the government should somehow be in the business of giving people the right to pollute once you understand that the right amount of pollution is not zero it starts to feel less weird to you but if you're under the impression that the right amount of pollution is zero then you certainly would not agree with the government allowing or giving firms permission to pollute what we do know though is that these systems are very good ways of reducing the amount of pollution to the efficient amount as a matter of fact if you were to go back several decades there used to be a problem with what was referred to as acid rain actually it was a very big problem it was a problem with sulfur dioxide in the atmosphere
combining with rain droplets falling back to the ground and creating all kinds of problems health problems and problems with food production because it killed plants and problems with buildings because that acid rain starts to wear down the outer surfaces of buildings it created a lot of problems and the way that problem was solved was through a tradable permit system for sulfur dioxide acid rain is not a problem now we did not reduce the amount of sulfur dioxide in the atmosphere to zero we reduced it to the point where it's not creating problems that outweigh the benefits of the good products that we get that create it as a byproduct there are several issues with a tradable permit system and again if you're interested in that we have a class where we would talk extensively about the good things and the bad things having to do with a tradable permit system but it does reduce or it does result if done correctly in the efficient amount of the good being produced um let's talk here a little bit about four different categories of goods and we're going to think about a couple of Dimensions that we're going to use in this discussion the first is what we call rivalry rivalry so rivalry is when one person's consumption reduces the amount of the good there is available for other people to consume we would say that the good is rival most goods are rival an example would be a pizza so if we have some people in a room and we have a pizza and I consume a piece of the pizza then that's one piece of pizza that can't be consumed by anybody else we would say that a good like that like a pizza is rival most goods are that way right if I go to the store and I buy anything and I use it I consume it then that means that it can't be consumed by somebody else okay another dimension of a good that we're going to think about is what we call excludability a good is excludable if anybody who doesn't pay for the good can be excluded from consuming
it so for example you can't consume the pizza unless you pay the seller for the pizza and I know that you might be thinking well I could if I steal it we're not talking about breaking the rules most goods are excludable and what that means is that you don't consume it unless you compensate the seller for it okay so most goods are rival and most goods are excludable not all are though so let's now think about four categories of goods and they're all going to differ in terms of whether or not they're rival and whether or not they're excludable so the first one that we're going to think about is what we call Private Goods most goods are private goods and that means that the good is both rival and excludable so just about any good that you think of especially if it's a physical good most Services as well though so a haircut it's excludable and it's rival if you consume the haircut then that's a haircut that somebody else can't consume the person cutting the hair can't be cutting both heads of hair at the same time it's excludable in that you can't walk in and get a haircut without paying for it so pizzas haircuts lots of goods just about everything that you thought about would be in this category of private Goods let's say these are rival and excludable I'll just abbreviate it there's also what we call public goods public goods are non-rival and non-excludable so let's think about some examples once you hear some examples it starts to become obvious what falls into this category so these would be things like let's say public television so public television is non-rival in other words I can turn my TV set on and watch public television and that doesn't diminish the amount of public television that there is available for everybody else to watch we could all be watching at the same time it's also non-excludable in the sense that it's broadcast over the airwaves if you've got
an antenna you can pick it up for free you don't have to pay for it you don't have to uh somehow compensate the uh producers of public television they will have telethons where they ask you to do it but you can still watch it even if you don't so public television is non-rival and non-excludable other examples would be things like National Defense I can enjoy the benefits of having National Defense the security that we have and that doesn't diminish your ability to enjoy it and I get to enjoy it even if I don't contribute anything to it I don't have to support it it's non-excludable I realize that taxes go to it and all of that but it still falls into this category um here's what we tend to see in this type of situation with public goods we tend to see that the free market doesn't result in the right production of it and the main issue here is that what we see is that there's a lot of what we call free riding so if you've ever watched public television and not sent a check in what you were doing is free riding you were taking advantage of the fact that you can consume it without paying it's a very natural thing to do and there are other situations where we tend to see that and it's true for public radio public television National Defense um and again in a natural resource economics class we would talk more about that let's talk about the third category it's what we call quasi-public Goods a quasi-public good would be one that is non-rival but excludable so this would be something like satellite radio or satellite TV so if you don't pay for a subscription then you can't capture the signal but it's still non-rival because my consumption of it doesn't diminish the ability for anybody else to consume it and then finally we have what are referred to as common resources common resources are rival but non-excludable rival but non-excludable so these are things like the Buffalo in the American West so you couldn't exclude
people from hunting the Buffalo back when the U.S was expanding westward but they're rival because if one person captures a buffalo kills a buffalo then that's one that somebody else can't capture we could think about flowers in a public park and you might say hmm I don't see too many flowers in a public park well here's why because they're common resources they are rival if I pick a flower in a public park that means that nobody else can enjoy that flower but it's non-excludable we can't stop people from doing it what we tend to see is that common resources get exploited that's why the idea of seeing flowers in a public park might not be uh something that you're very familiar with because a lot of times they get picked and you don't see them let's talk about demand for a public good so let's go back to this one and think about demand for a public good you've talked about how to figure out what the market demand curve for a private good looks like and the market demand curve for a private good we know it's just the horizontal summation of all of the individual demand curves so it looks something like this I'll draw a little picture we've got a demand curve another demand curve this could be person one person two if we want to figure out the market demand curve we'd pick a price P1 we'd go over and see how much this person wants to consume at that same price of P1 we'd see how much this person wants to consume and that tells us that if these are the only two buyers in the Market at that price of P1 the total amount is going to be this amount plus that amount it's going to be something out here and that's going to be a point on our market demand curve that's how you get a market demand curve for a private good a good that is both rival and excludable if we wanted to figure out what the demand curve for a public good looks like it's going to be different because it's non-rival my consumption of the good doesn't
diminish this person's ability to consume it so you can't just add up the quantities it doesn't make sense and so let's clear this off and then we'll take a look at how we construct the demand curve for a public good in order to think about what the market demand curve for a public good looks like I'm going to draw a picture and it'll be similar to the kind of horizontal summation except now what we have to do is add vertically so let's do this let's start down here at the bottom I'm going to think about let's say person one down here let's put person two right here and then we're going to think about the market demand curve up here so now keep in mind that because this good is non-rival it doesn't make sense to think about adding up the quantities if we think about public television as a good example if I consume an hour of public television and you consume an hour of public television at the same time then there's still only one hour of public television that's been provided we can all consume it at exactly the same time it's very different from a slice of pizza if I consume a slice of pizza and you consume a slice of pizza then two slices of pizza had to be created so that's why we can't in the case of a public good add up the quantity if we're talking about public television there's only 24 hours of it in a day and all of us could consume 24 hours a day of public television and still they only need to supply 24 hours a day so instead what we do is we add up how much we value it so if you value a unit of it at a certain amount and I value that same unit at a certain amount we can certainly add up the value to get the total value that you and I place on the good so let's think about a quantity and let's go up to this demand curve to see what this consumer is willing to pay for that particular quantity of the good here would be consumer one's willingness to pay we'll call it willingness to pay one and then if we think about at that quantity what this
consumer is willing to pay this consumer is willing to pay this amount let's call it willingness to pay two so if this consumer is willing to pay that amount and this consumer is willing to pay that amount then the total willingness to pay for this quantity would be this distance plus that distance it would be somewhere up here so this would be willingness to pay one plus willingness to pay two that would be total willingness to pay and the market demand curve we would just add up the willingness to pay at each quantity it's a vertical summation so if we're thinking about the optimal amount of a public good to provide then what we would be doing is thinking about a market picture so if we were to think about this as being public television then there's some supply curve there's this market demand curve that we've gotten through this vertical summation here's the market demand curve the market supply curve and the optimal quantity of course would be that quantity right there where that demand curve is the vertical summation of the individual demand curves the question of course is would the free market provide that quantity of a public good and the answer is no it won't if we leave it up to the free market because the good is non-excludable you don't have to actually pay that amount to consume that quantity you can be a free rider so what we tend to see is that people's contribution to it in practice is actually much lower than the full amount that they value it which means that the demand curve the actual demand curve that would come from people actually contributing amounts of money to public television is much lower and so we end up with a much smaller quantity than the efficient quantity that's why it's provided typically through government grants because the free market would result in too little of the good so again too little of the good with the free market and the reason is because of the free rider problem think about common resources that last
type of good we thought about or common goods if we're thinking about a common property good or a common resource and the example that we talked about was flowers in a public park um the problem with that type of good is that um let's go with the hunting example if we think about Buffalo and this would be true of extraction of any resource whether it's a buffalo or whether it's timber in a forest or whether or not it's fish in a pond or flowers in a park it doesn't matter but if we're thinking about um Buffalo as an example if a hunter takes a buffalo then they incur the costs but not all of the costs and what's going on there is that um if somebody harvests a buffalo then it becomes harder for other people to harvest from that resource so that cost is being passed on to somebody else the hunter would capture all of the benefits but is able to pass on some of the costs um and what we see is that in that case Buffalo were over hunted they were over harvested the free market results in us abusing the resource not because anybody's necessarily doing anything wrong but because the property rights associated with a common pool resource like Buffalo or flowers in a park allow people to pass on a cost to other people and it just results in a misuse of the resource if you'd like to read more about that the concept here is known as the tragedy of the commons tragedy of the commons it is why common property resources or common property is a very very bad way to establish property rights um if you're interested in the tragedy of the commons um there's a paper that's called A Tale of Two Fisheries actually it's an article it was published in the New York Times years ago it is a very very good explanation for why the tragedy of the commons happens they write it in terms of um I think the article is written about different types of fishing but mainly they
focused on lobster fishing and um it's not written for an academic audience it was written for you know the popular press and it's a very interesting article I would highly encourage you to um read that if you're interested in how we can fix the tragedy of the commons it's a problem but it's also a problem that has some solutions and there are actually some um interesting Solutions so hopefully that gives you an idea of um some of the issues that we think about when we're talking about the right amount of pollution externalities positive and negative it's an issue that we need to think about there are lots of externalities out there and it's an example of when the free market doesn't work as well as we would hope but there are some solutions that we can use to get to an efficient outcome so I'll see you in a future video
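The graphical arguments in this lecture all come down to finding where a falling marginal-benefit line crosses a rising marginal-cost line, so they can be checked with a few lines of arithmetic. The sketch below is only an invented illustration, not anything computed in the lecture: every intercept, slope, and the $20-per-unit external cost are made-up numbers chosen to make the logic visible.

```python
# Toy numbers for three of the pictures drawn in the lecture: a corrective
# tax on a negative externality, the efficient amount of pollution
# reduction, and the vertical summation of demand for a public good.
# All values here are invented for illustration.

def equilibrium_q(benefit_intercept, benefit_slope, cost_intercept, cost_slope):
    """Quantity where a falling marginal-benefit line meets a rising
    marginal-cost line:
    benefit_intercept - benefit_slope*Q = cost_intercept + cost_slope*Q."""
    return (benefit_intercept - cost_intercept) / (benefit_slope + cost_slope)

# --- Negative externality and a corrective tax -----------------------
# Demand (marginal private benefit): P = 100 - Q
# Supply (marginal private cost):    P = 10 + Q
# External cost passed on to others: $20 per unit
EXTERNAL_COST = 20.0
q_free_market = equilibrium_q(100, 1, 10, 1)                  # firms ignore the external cost
q_star        = equilibrium_q(100, 1, 10 + EXTERNAL_COST, 1)  # benevolent social planner
# A per-unit tax equal to the external cost shifts the private supply
# curve up by exactly that vertical distance, reproducing the planner's outcome.
q_with_tax    = equilibrium_q(100, 1, 10 + EXTERNAL_COST, 1)

# --- Efficient pollution reduction: reduce until MB = MC -------------
q_reduction = equilibrium_q(80, 2, 0, 2)  # MB of reduction: 80 - 2Q, MC: 2Q

# --- Public-good demand: vertical, not horizontal, summation ---------
def willingness_to_pay(intercept, slope, q):
    """Height of one consumer's demand curve at quantity q."""
    return intercept - slope * q

# Two consumers valuing the same (non-rival) quantity, Q = 10:
total_wtp = willingness_to_pay(50, 1, 10) + willingness_to_pay(30, 0.5, 10)

print(q_free_market, q_star, q_with_tax)  # 45.0 35.0 35.0
print(q_reduction)                        # 20.0
print(total_wtp)                          # 65.0
```

With these made-up numbers the unregulated market produces 45 units while Q star is 35, and a tax equal to the per-unit external cost closes the gap exactly, which is the tax remedy described above; likewise the public-good demand is built by adding the two consumers' willingness to pay at the same quantity rather than adding quantities at the same price.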
Principles_of_Microeconomics
Chapter_1_Ten_Principles_of_Economics.txt
let's start by thinking about what economics is and we're gonna start that discussion by thinking about scarcity so if we think about scarcity the idea behind scarcity is that society has unlimited wants and limited resources so it's impossible for us to produce all of the goods and services that people want to consume and we'll talk about the fact that that scarcity is going to lead to trade-offs we'll get into that here in just a second but that scarcity also is what leads to the field of economics so let's talk for a second about what economists do what is the field of economics one of the exercises that I go through with my face-to-face classes is something that I used to do when I first started teaching I would send my students on the first day of class before we talked about anything we would meet we talked about the syllabus we talked a little bit about you know what I expected to have the students do in class and what they could expect from me and then we wouldn't really have time to get into the material so I would send them home with this little survey and the survey was just designed to get them to think about some of the stuff that we're going to talk about in class and then just gather some information about them one of the questions I asked on that first day survey was to try to get them to understand what economics was and so I would give them a list of topics and the question that I would have them answer was this one which of the following subjects do you think economics can help us understand so I wanted them to put a checkmark if this topic was something that we would study in an economics class and so some of them were some of them weren't I said here this isn't a trick question economics doesn't help us understand everything so let me just read some of these to you and you just think about whether or not this is something that you believed we would talk about in an economics class the
first one is the business cycle what I do now is on a first day of class I just read these off and I have the class raise their hand so when I say the business cycle well everybody recognizes that yeah that's an economics thing everybody raises their hand when it was a survey everybody would check that the second one what causes unemployment again people typically raise their hand let me just read some of these others the freezing point of milk what causes inflation how the human heart works how fast people drive on the highway how to maximize profit how much people eat at a buffet how much students study for a test the cause of earthquakes how long a homeowner waits to mow their lawn how long students pay attention during an economics lecture how the banking system works why stock prices change how long parents wait before checking on crying baby why criminals sometimes voluntarily confess to the police why gamblers tend to lose money when a baserunner decides to steal a base why fish swim in schools why politicians engage in negative campaigns and there's several more on there and of course what happens is whether its students checking a box on a survey or whether it's students raising their hand in a classroom what happens is hands go up when I say something like how to maximize profit or how the banking system works or why stock prices change everybody's hand goes up but then when I say those ones that are the ones like how long a homeowner waits to mow their lawn hardly anybody would ever raise their hand or how fast people drive on the highway or when a base runner decides to steal a base hardly any hands ever go up and of course if I were to say something like why fish swim in schools yeah and the reason I do this is to illustrate to people that if your hand went up every time I said something like the banking system or I said something like the stock market and your hand didn't go up when I said how long students pay attention during an economics lecture or 
how fast people drive on the highway then you don't really understand what economics is people typically raise their hand when dollars when when the thing that I've just read has something to do with dollars stock prices banking system stuff like that inflation but they skip the stuff like how much people eat at a buffet how long a homeowner waits to mow their lawn that's economics I mean all of those things that you do raise your hand on the business cycle that's economics too but that other stuff that most people don't raise their hand on a lot of that other stuff I read that's also economics so let's talk about the definition of economics it's at its most basic level it's the study of human behavior study of human behavior that is what economists do they study human behavior so you should once you understand that you should raise your hand on everything that involves somebody making a decision how long a homeowner waits to mow their lawn that is a human behavior how long Barents wait before checking on a crying baby when a baserunner decides to steal a base there's a lot of economics literature devoted to things like that so I guess the real reason that I do that in a class or the real reason that I would do that by sending home a survey is to get students to realize that economics is not about the financial fate of people that's not it we can talk about that an economist understand that and we will spend some time talking about it but economics is so much more than that economics is about human behavior and if I were to ask a thousand people out there on the street where should you go on a college campus to study human behavior a vast majority of them would say well you should go maybe to the sociology or maybe you should go to psychology and those are places where they are studying human behavior but if you want a study human behavior that's the most basic level the Economics Department is where that's going on we typically talk a lot about business types of 
situations because there are large numbers of dollars on the line so if you understand human behavior then it's natural to talk about human behavior in the context of large dollars large amounts of dollars being transacted but at its most basic level it's the study of human behavior turns out though that it's actually more general than that there's some economics literature where they study the behavior of animals so if you look at say a rat in a cage and you think about how much food that rat eats let's suppose that at the end of the cage there's a device there where the rat presses a lever and if they press the lever once a piece of food falls out and the rat eats it and if they press the lever again another piece of food falls out if we were to look at the amount of food that the rat consumes in a day and then change the price of food for the rat if I were to ask most people out there in the on the street what's gonna happen if we change the price of food most people will say it won't matter the rat won't respond to that because it's just a rat I mean they have nothing to do in this cage all day except press the lever and eat food so they're just gonna eat the same amount of food turns out that's not at all what happens rats respond to prices the same way we do the price of something goes up all other things equal prices something goes up we want to buy less of it if you raise the price of food for the rat it will consume less food raising the price of food looks like just making it press the bar more than one time for a piece of food if you raise the price it will consume less food so some economic literature's devoted to just the study of behavior whether it's a human or whether it's some other animal but obviously in this class we're not going to be think about the eating habits of rats so we're going to be thinking about the study of human behavior but it's at its most basic level that's what we study in an economics class we study how people make decisions 
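The rat experiment is the law of demand in miniature: quantity demanded falls as the price rises. Here is a minimal sketch of that relationship with a made-up linear demand schedule; the function name and every number are illustrative assumptions, not data from any actual experiment:

```python
# Toy linear demand schedule: Q = a - b * price, floored at zero.
# All numbers are invented purely for illustration.
def quantity_demanded(price, a=50, b=10):
    """Pellets eaten per day when each pellet 'costs' `price` lever presses."""
    return max(a - b * price, 0)

# As the price (lever presses per pellet) rises, consumption falls.
for presses in (1, 2, 3, 4):
    print(presses, "presses per pellet ->", quantity_demanded(presses), "pellets")
```

The exact shape of the schedule doesn't matter here; the point of the principle is only the downward slope: raise the price, and even a rat "buys" less.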
In this video we're going to talk about ten basic principles of economics. These are principles that you tend to see in any economics class you might take. The field of economics is very, very broad. Economics is an old discipline, and there are lots and lots of subdisciplines within the general field. The big distinction is microeconomics versus macroeconomics: in macro you're looking at the big picture, how an economy functions; in micro you're looking at the small picture, how an individual behaves or how an individual business makes its decisions. Within those two main categories we've got lots of subdivisions, so there are all kinds of different subfields: labor economics, international economics, public choice theory, game theory, econometrics, a subdivision called cliometrics, which is the overlap between history and economics, and lots of others. Regardless of whether you're taking a class in this subdiscipline or that one, there are some basic principles that tend to pop up in all of them, and so what we're going to do here is talk about those basic principles.

The first one, and the numbers of these are not important, but let's just run through them: people face trade-offs. That's one of those things you might think just goes without saying, but it doesn't. The idea is that to get something you want, you're always going to have to give up something else you want. We could be thinking about how you choose to spend your household income. You have some income you've earned from working, and you've got to decide whether to spend it on pizzas, or textbooks for a class, or going to a movie, or all the other things you could spend that money on. If you spend money on one thing, that's money that can't be spent on something else, so there's a trade-off there. We could also think about your time. How do you spend your time? There's a trade-off there too: if you spend one hour studying for a test, that's an hour you can't spend sleeping, or going out with your friends, or watching TV, or whatever you might be doing with your time. So there are trade-offs you face in terms of income, time, lots of things.

It's also the case that society faces trade-offs. Society faces a trade-off between the production of consumer goods and services and national defense; sometimes we refer to that as the guns-versus-butter trade-off. We like to have both. We like national defense, and we like consumer goods and services, and it would be great if we could just have more of everything, but we can't. Any resources we devote to national defense are resources that can't be devoted to building new roads or scholarships to go to school, things like that. There's also a trade-off that society faces between efficiency and equity. Efficiency has to do with the size of the economic pie; equity has to do with how fairly the pie is divided. And there's a trade-off there, unfortunately. It would be great if there were not, but the reality is that if we put more emphasis on equity, if we try to make sure everybody has an equal slice of the pie, then the economic pie gets smaller. The reason is that if we're making sure everybody has the same size slice, then we're taking resources from people who earned them and providing them to people who didn't earn them. Every economy does that, and that's not an argument that we should do none of it, but we have to recognize the reality that the more of that we do, the less of everything there is to go around. Okay, so society faces trade-offs just like we individually face trade-offs.

Let's think about the second principle, and that is that the cost of something is what you give up to get it. What you'll learn quickly in this class, and in any economics class, is that I'm going to use the word cost a little more generally than you've used it in the past. Typically the way any of us use the word cost when we're interacting with other people is to refer to the number of dollars you've given up to get something. Say you have a cup of coffee and I ask, hey, how much did that cost you? You might say, well, that cost me two dollars. You would recognize that I'm using the word cost to refer to the number of dollars you've given up to get it. I'm going to use the word more generally: cost is whatever you give up to get it. It could be that you had to give up two dollars plus the effort to walk over to the place to actually get it. Economists typically use the phrase opportunity cost to remind ourselves that it's more than just the dollars. The dollars certainly are part of the cost of something, and we'll talk more about that in a bit, but there are other things you're giving up, and we have to include whatever it is that you've given up.

So let's think about the opportunity cost of, say, going to class. There are no dollars transacted in going to class: when you walk into a classroom, the professor doesn't require any dollars, and the tuition has already been paid. We're talking about just going to that particular class, walking into that classroom, and sitting down to listen for however long you need to sit there while the professor talks about the material. There are no dollars transacted there. If we think about the opportunity cost of going to class, what you're actually giving up is the time it takes you to go to class, to sit there, and to get back to your apartment or your dorm room. So it's tempting at first to think that the opportunity cost is your time, but we have to be careful, because it's not the number of minutes that is actually the cost; it's whatever you would have done in those minutes, whatever your next best alternative use of your time would be. It could be that if you weren't going to class you would spend that time sleeping, so by going to class what you're giving up is sleeping. Or it could be that your next best use of time is playing video games, and so if you weren't in class, that's what you're giving up. Or it could be that your parents are in town and they're getting ready to leave today, and they call you up in the morning and say, hey, we'd like to take you out to lunch, and you say, oh, you know what, I've got class at 11 o'clock, I can't go out to lunch with you. Clearly that would be a bigger thing you're giving up than just sleeping. So it's not the number of minutes, it's not the time, it's whatever your next best alternative is. I'm going to put a line through "time," because again, it's not the number of minutes.

Let's think about something else: buying something. Consider the opportunity cost of purchasing a pizza. You call up the pizza place and they say it's ten dollars, so you pay with your credit card and they deliver the pizza. In this case there is a dollar component, because you will have paid ten dollars. So it's tempting to say that part of the opportunity cost is the ten dollars, but we have to remind ourselves that it's whatever you would have spent the ten dollars on; that's what you really give up. It's not the ten dollars itself, it's the other goods and services you could have bought with the ten dollars. So there's that, the other goods and services you give up, and then there's a little bit of effort: you had to make a phone call, and maybe when they knocked on your door with the pizza you had to get up off the couch and go get it. That's part of the cost. But the big thing is that you gave up other goods and services, because you used some of your purchasing power for that pizza. So you can see that when we talk about the cost of something, we're going to include the dollars you pay; that is part of the cost of a lot of things. But there are lots of behaviors you engage in, like going to class, where there's no dollar transaction but there's still a cost, and it's whatever your next best alternative is.

Let me mention one common mistake I see students make. Go back to going to class: sometimes it's tempting for students to say, well, if I didn't go to class, I could be watching TV, or playing video games, or hanging out with my friends, or going for a walk. You can list an infinite number of things you could be doing if you weren't going to class, and so sometimes students say, well, what I'm giving up is infinite. No: we're always going to think about just your next best alternative. Okay, so the cost of something is what you give up, and that leads us to this kind
of a basic idea in economics that you've probably heard before, and that is that there's no such thing as a free lunch. Nothing is truly free; everything has a cost. If you're defining cost as just the dollars, then sometimes you might be able to say, this is free because I didn't have to give up dollars. But we're not defining it as just dollars; it's whatever you give up to get it, and in that case there is no such thing as a free lunch. There is always going to be a cost associated with everything.

Let's talk about principle number three: people respond to incentives. You might think, when you first see that, that it's another one that goes without saying. It turns out that's actually really easy to forget. The reason we're going to think about people responding to incentives is that we're interested in understanding human behavior, and incentives are at the root of all behavior. It's not always going to be easy to see the incentives that changed, so you may not be able to understand why somebody engaged in a particular type of behavior, but you can rest assured that all behavior is a response to incentives.

Let's talk about some different types of incentives. First, what we'll call economic incentives. An economic incentive is typically what we mean if we're talking about, say, dollars, or points in a class. If I give a test, the reason I put points on it is to give you an incentive to try to earn those points and get a good grade in the class. So if you are studying in an attempt to do well on the test and earn more points, it's easy to understand why you're responding to that incentive. Or if there are dollars on the line: if I offered you a certain number of dollars to do some work for me and you did it, it would be easy to understand why, because of the dollar incentive I gave you. So economic incentives are often really easy to see, and it's easy to understand why people respond to them. But there are other types of incentives that are not so easy to see. One is social incentives. Social incentives are created by society; these are things like the desire for acceptance or the avoidance of ridicule. If you think about it, a lot of the behavior we all engage in is driven by our desire to be accepted by people we respect and our desire to avoid being ridiculed by them. Whether that's good or bad we could debate all day, but that's not what we're interested in; if we want to understand behavior, there are a number of social incentives we're all responding to. More of our behavior is driven by social incentives than by economic incentives. And then we could also think about moral incentives: a lot of your behavior is driven by your sense of what's right and what's wrong. These two types of incentives are much harder to see than economic incentives. So we can rest assured that behavior is driven by incentives, but that doesn't make all behavior easy to understand, because there are incentives that are very challenging for us to observe in the real world.

It's also the case, and this is important, that not everyone responds to the same incentive in the same way. I might give all my students in a class the exact same economic incentive, maybe in the form of points on a test, and what I observe is that they all react somewhat differently to it. Some students will study really hard and scramble for every point they can get, and other students, for whatever reason, maybe other time commitments that keep them from studying, maybe a lack of interest, maybe not even realizing they need to study that much, won't work very hard, won't earn very many points on the test, and won't end up doing that well in the class. And yet I've provided the exact same incentive to all of the students. So not everyone responds to the same incentive the same way.

Let's talk about the fact that it's very easy to forget that people respond to incentives. The way I typically illustrate this is with our consumption of oil. Let me give you some numbers. If we look at the amount of oil we've got, I'm going to give you a very big number: 531 with nine zeros after it. That's the number of barrels of crude oil in reserve, and this is a real number, not one I just made up. What it means for oil to be in reserve is that we know where it's at. Most of it would be in the ground, but we know we can get it out. It's typically not in your best interest, if you have oil in the ground, to pull it all out, because once you pull it out you've got to pay to store it, and it turns out it's already being stored in the ground. So: 531 billion barrels of crude oil in reserve in the world. We could also think about world annual usage. It's a smaller number, but still pretty big: 16 and a half billion barrels per year. So that's what we've got in reserve. Let's just pretend that's all there is; there's oil we haven't found yet, but pretend that's the maximum, and this is the amount we use every year. If you look at how numbers like that get used by politicians and activists, what you see is that they'll take this number, divide it by that number, and come up with a number of years we've got before we run out of oil, and that would be something in the 30s: 531 divided by 16 and a half. If we use those numbers that way, we've got about 30 years' worth of oil.

What if I were to tell you that that's a completely wrong way to use those numbers? What if I were to tell you that the right answer to the question "when will we run out of oil" is never? We will never, ever run out of oil. I can say that with 100% confidence. Anybody who tells you otherwise either has never taken an economics class, doesn't understand that people respond to incentives, or, I guess, could be lying to you. But we're never going to run out of oil. How do I know for sure? I know because people respond to incentives.

Let me give you a different example. Suppose that instead of oil, I call you up on your birthday and say, hey, I've got a birthday present for you: a giant room full of peanuts. It's a big room, it's very deep, there's a door at the top, and you walk in and it's just full of peanuts, and it's all yours. Just for convenience, let's pretend there are 531 billion peanuts in the room. If you are a person who is allergic to peanuts, then the peanut room is probably utterly terrifying for you; I understand that, and I've thought about maybe changing it to some other type of room, like a banana room or something, but that's just gross, so it's easier if you just pretend you're not allergic to peanuts for right now. So I give you this peanut room with 531 billion peanuts. The only requirement is that you can't take anything out of the room, so if you eat a peanut, the shell has got to stay in the room. Let's suppose you love peanuts, and you call up all of your friends, who love peanuts too, and say, hey, come over, Dr. Azevedo just gave me this peanut room, let's eat some peanuts. So your friends come over and you all start eating peanuts.

Think about the cost of eating a peanut on day one. You can reach anywhere you want and there's a new peanut; you don't even have to have your eyes open. All you've got to do is crack it open and eat it. And let's suppose you're smart, so you all decide to throw your peanut shells over in one corner, because you don't want them on top of your good peanuts. The cost to you of eating a peanut on that first day is practically zero; it's whatever effort you have to exert to pick it up, open it, and eat it. So peanuts are as close to free as they're ever going to be on that first day. Now suppose I run into you after a month and say, hey, how's that peanut room going for you? And suppose that you and your friends in that first month have eaten 16 and a half billion peanuts; pretend you never get sick of peanuts. If we used these numbers the same way we used them with oil a second ago, you would think to yourself, well, I've got 531 billion peanuts, in the first month my friends and I ate 16 and a half billion, so we've got about 30 months' worth of peanuts. But that would be ignoring that people respond to incentives.

Let's talk about what's going to happen as you continue to eat more and more peanuts. Eventually those shells you've been throwing in the corner are going to slide down and spill over onto your new peanuts, and it won't take long before you have a layer of peanut shells on top of your good peanuts. Now think about what's happening to the cost of eating a peanut. Peanuts are no longer very close to free. Now if you want to consume a peanut, you've got to go into the peanut room and dig through a layer of shells, some of which might have been in your friends' mouths and are kind of gross. Maybe you throw some of those good peanuts up on top, but the point is that the cost of consuming peanuts is going up, and you don't need an economics class to know what happens to the amount of something you want to consume as it gets more expensive: as things get more expensive, we want to consume less of them. So the number of peanuts you consume is not going to stay constant. As the cost of consuming peanuts goes up, you consume fewer peanuts, so that number is going to be falling. And remember that you can still drive to Walmart and buy a bag of peanuts for probably around two dollars and fifty cents. Eventually you would get so many shells on top of your good peanuts that it's just no longer worth it to dig down to them. If you call up your friends and say, hey, come on over and let's eat some peanuts, your friends are eventually going to say, look, it's going to take us 30 minutes and a lot of effort to dig through 10 feet of gross shells to get to the good peanuts; you know what, I'm just going to drive to Walmart and buy a bag. So eventually the cost of consuming peanuts rises enough that you just choose not to consume any more peanuts from the peanut room. So when are you going to run out of peanuts in the peanut room? When will you consume the last good peanut? Never. You never would. Eventually it would get so expensive to hunt down those good peanuts that you would voluntarily choose to consume an alternative. That's what will happen
with oil. When will we consume the last barrel of oil? Well, never. Eventually, oil will get expensive enough that we will voluntarily switch to some other alternative source of energy, and when we switch, it won't be because of education we've provided young people, and it won't be because of some overall environmental awareness; it will be because the alternative has become cheaper. So people respond to incentives. You have to keep that in mind when you start thinking about numbers like these: it is not appropriate at all to divide the reserves by the annual usage and come up with a projection of how much oil we've got.

Let's talk about principle number four: people think at the margin. Let me give you a couple of definitions. A marginal change is an incremental change to a plan of action. You can think of the margin as the edge of decision-making. That may not make sense at first, but once you think about a couple of examples it'll probably become clearer. People think at the margin: what that means is that you have a plan of action, but you respond to the incentives as they change. It's very rare to have a plan of action and then doggedly stick to that plan regardless of what happens to you; that almost never happens. What happens is you have a plan of action, you start to execute it, you pay attention to the incentives as they change, and your plan may switch right in the middle.

Let's think about studying, which is a great example of what it means to think at the margin. Suppose you've got a test you need to study for, and you know you need to spend some time this evening studying, and the time comes to start. How do you make the decision of how much time to study? Do any of you sit there and say, you know what, I'm going to study for exactly 82 minutes and 17 seconds? You don't ever do that, right? The only time you would even come close is if you know you need to study a whole lot and you've only got a small amount of time; if you've only got 30 minutes, chances are you'll study for the full 30 minutes, though even that might change, as we'll see in a second. Instead, what we do is we have a plan to study, and we sit down and start studying. Suppose the opportunity cost of studying is very low, none of our friends are doing anything, so we're not giving up very much; and suppose we are learning a lot about the material. Let's think about those incentives: what we're giving up and what we're getting.

It turns out that a decision-maker takes an action if and only if the marginal benefit of the action is bigger than the marginal cost. I'm going to abbreviate marginal benefit MB and marginal cost MC. Now, what is the marginal benefit? The marginal benefit is just the change in benefit; for the word "marginal" in economics, you can always substitute the phrase "change in" and understand better what it means. The marginal benefit is the additional benefit you get from continuing to take the action; the marginal cost is the additional cost you incur if you continue to take the action. So a decision-maker takes an action if and only if the marginal benefit is bigger than the marginal cost. That's really, really important; that's like a two-star important, maybe even three stars.

Now let's think about what that means in terms of studying. The marginal benefit of continuing to study is the additional knowledge you gain, the change in benefit. The marginal cost is the additional cost you incur: if you continue to study, you're giving up whatever your next best alternative is. But remember, we've said your friends aren't doing anything, so the marginal cost is low. If the marginal cost is down here and the marginal benefit is up here, you continue to study. Then suppose there's a knock on the door, and it's one of your friends, a friend you like, and they say, hey, we're going to go out and do something, you want to come? Now the incentives have changed, and people respond to changes in incentives. That doesn't mean you quit studying; we have to think about what's changed. The benefit of continuing to study at the margin hasn't changed, but the cost has, and if these are friends you like, the marginal cost has gone up. Does that mean you quit studying? It depends on how much the marginal cost has gone up. If it goes up but it's still smaller than the marginal benefit, you continue to study; but if it goes up and is now bigger than the marginal benefit, you might say, you know what, I'm going to close my book, I'm going to go out with you guys. So what's important is the marginal benefit and the marginal cost. It doesn't matter how big either one is by itself; what matters is their relative size.

Let's change it a little. Suppose you're sitting there studying and you're not giving up very much at all, so the marginal cost is relatively low, and suppose you are really learning the material well; you're having breakthroughs unlike anything you've experienced before, so the marginal benefit is tremendously big. When that friend knocks, it doesn't matter what they're doing; you're going to continue to study, because the clouds are parting in your brain and you're starting to understand things like never before. Is there any circumstance under which you would interrupt that, where the marginal benefit is tremendously high? The answer is: of course. It doesn't matter if the marginal benefit is sky-high; if all of a sudden the marginal cost is even higher, you quit. If you're sitting there studying and the clouds are parting, and then somebody walks in through the door, points a gun at you, and says, either stop studying or you're going to die, then clearly the marginal cost of continuing to study is now huge, right? You die. So the key is not how big the marginal benefit is or how big the marginal cost is; it's how big they are relative to each other. They could both be way up here, they could both be way down here, but what matters is the comparison. That's what it means to think at the margin: you're thinking about the additional benefit and the additional cost, you respond to the comparison of those, and you incrementally adjust your plan of action.

Let's think about the next principle; we'll call it number five, and that is that trade can make everyone better off. We can think about this in terms of trade between two people or between two countries. Start by noticing that it doesn't say that trade always makes everyone better off; what it says is that trade can make everyone better off, and we'll actually spend a whole future video on this very principle: why do people voluntarily engage in trade with each other? Let's think for a second about international trade. It's not uncommon for people to be skeptical of international trade because they think the world is a zero-sum game. A zero-sum game is a game where if I win, you have to lose. Poker is a zero-sum game: if we all walk into the room with $100 and I walk out with $300, some other people had to walk out with less than they walked in with. Any gains by me are losses by somebody else. It turns out that trade is not that way. Voluntary trade between people or between countries is not a zero-sum game; it's what we would call a positive-sum game, which means we can all walk away having gained. So trade can make everyone better off. An easy way to see why this has to be true is to think about what would happen if you didn't trade with other people. Suppose you decide, you know what, I'm going to trade less. Trading less means you're not going to be buying stuff from other countries. Suppose that you buy only stuff made in the U.S.
you buy American well then you may have reasons for wanting to do that but that's going to close off several options to you for buying goods and services from people in other countries if buying American is good then how about if you just buy Missouri well think about all of the things you won't be able to buy because they're not produced here or if buying Missouri is good then buy Warrensburg clearly you start to realize that the less i trade with people the more stuff i have to produce myself because it's not being produced in my limited circle that i'm going to trade with and so clearly the more stuff you have to do for yourself the less time that leaves for other things so trade can make everyone better off the absence of trade makes people worse off we'll talk about that again in an upcoming video let's talk about principle number six number six is that markets are the best way to organize economic activity markets are the best way to organize economic activity now what we mean by this I'm gonna insert the word free here free markets are the best way to organize economic activity we have to be careful about what we mean by this phrase free markets when I say the phrase free market I don't mean a situation where sellers can do whatever they want sometimes people define a free market as the complete absence of any regulation on sellers and that's not at all how economists mean that phrase what we mean is that sellers are free to sell what they want within the bounds of the law and consumers are free to buy what they want within the bounds of the law so you can't lie to customers about the quality of your product and you can't lie to them about other characteristics of the product and as a consumer you can't buy things that are deemed illegal we're not going to get into whether or not things should be legal or illegal we're gonna say that within the bounds of the law you're able to consume what you want and produce what
you want that's a free market and this principle is that free markets are the best way to organize economic activity the other alternatives the alternative to a free market is a planned marker excuse me a planned economy planned economy that's the other alternative so we could think about planned economies like socialism or communism so what this principle says is that free markets capitalism it's the best way to organize economic activity it is a better way to organize economic activity than communism or socialism communism and socialism for what we're going to do in this class the key characteristic of those two ways of organizing economic activity is that the means of production is owned by the government so if you look at sociology or excuse me if you look at communism or socialism both of those are situations where the government controls the means of production the businesses what's being produced it's a basic principle of economics that capitalism beats that that doesn't mean capitalism is perfect so this next principle that we have to think about we'll call it number seven is that sometimes the government can improve the free-market outcome so sometimes let's just say government can sometimes improve the free-market outcome that happens when there is what we call a market failure so free markets are great we'll talk more about that and in other parts of this class if you take a principles of microeconomics class you spend a lot of time talking about why free markets are good but sometimes there are situations where the free market doesn't work quite as well it still works better than planned economies so that this is not an argument for that certainly not but there are times when we have things that we call an externality an externality is when one person's behavior imposes a cost on somebody else so it's a type of market failure in those cases sometimes the government can improve the market or the free market outcome let's talk about the last few 
principles number eight has to do with a country's standard of living so a country's standard of living depends on its ability to produce other things to produce things other people want to buy a standard of living depends on its ability to produce things other people want to buy it's a lot of writing country standard of living depends on its ability to produce things other people want to buy it's also true for an individual your standard of living in the future will depend on your ability to produce goods or services that other people want to buy from you that's why it's important to have skills that other people are willing to pay for we'll talk about how that works in a market talk about principle number nine and ten number nine and ten I'm just going to say that's too much writing principle number nine is that prices rise when the government prints too much money so let's just put let's just write money here when the government prints too much money drives prices up causes inflation we'll spend time talking about that so that's kind of a basic principle of macroeconomics the tenth one is let's just put here there's a trade-off between inflation and unemployment in the short run and that may not mean much right now to you it doesn't need to but we'll spend some time talking about that in the short run the government may want to decrease inflation and it will want to decrease unemployment but the problem is that there's a trade-off if you decrease one it's going to drive the other up so we have to decide which one do we not like the most because that's the one we might want to decrease so these are the ten principles what we'll do in our next videos we'll think about some some basic things that we're going to be thinking about in a principles of macro class or principles of micro class we'll kind of think about what economists do how they do it and then we'll move on to start talking about this principle that trade can make everyone better off
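The thinking-at-the-margin rule from principle four can be sketched in code. This is an illustrative sketch, not from the lecture; the function name and the numbers are made up for the example:

```python
# Illustrative sketch of the marginal decision rule: a decision-maker
# continues an action if and only if marginal benefit (MB) exceeds
# marginal cost (MC). The absolute sizes of MB and MC don't matter,
# only how they compare to each other.

def keep_studying(marginal_benefit, marginal_cost):
    """Return True if one more unit of studying is worth it."""
    return marginal_benefit > marginal_cost

# Low opportunity cost, modest learning: keep going.
print(keep_studying(marginal_benefit=50, marginal_cost=10))      # True

# Huge breakthroughs, but an even higher cost appears: stop anyway.
print(keep_studying(marginal_benefit=1000, marginal_cost=5000))  # False
```

The second call mirrors the lecture's point: even a sky-high marginal benefit is abandoned the moment the marginal cost rises above it.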
Principles_of_Microeconomics
Chapter_7_Consumer_Surplus_Producer_Surplus_and_the_Efficiency_of_Markets_Part_1.txt
in this video we're going to talk about consumers producers and the efficiency of markets so what we're going to be thinking about is developing a measure of the well-being of consumers and a measure of the well-being of producers and then we're going to think about what a free-market results in in terms of human wellbeing so we're gonna essentially one of the main goals of this chapter is to think about whether or not free markets are good in terms of of human well-being now let's start by thinking about what we mean by the term free market and so if you've been watching previous videos there's probably a time or two where I've talked about this in a previous video but let's just review what we mean when we talk about a free market and and this is something that I didn't realize until you know I had been teaching for a little while economists used the word free market and sometimes forget that out there in the non economist world other people use that term a little bit or that phrase a little bit differently so when I use the word free market I do not mean a market or a situation where the sellers are free to do anything they want one time I was having a discussion with my daughter and it was about whether or not markets were good and I said well we can demonstrate that free markets are a good thing they're not perfect and we'll talk about that but all other things equal I would much rather have a free market than some some planned system and she said well we learned in school that free markets are not good and I was kind of shocked I just can't imagine that that people are being taught that but in the course of this conversation it turns out that what she was meant when she said free market was a situation where the sellers could do anything they wanted they could lie to the consumers and they could dump waste into rivers and they could do all kinds of stuff if it was just like a market with no rules is how she was using the term if that's how you use the term 
then you need to keep in mind that's not how I'm using the term so when I talk about a free market what I'm talking about is a situation where buyers and sellers are free to make the decisions that they want and engage in exchange with each other if they want but everything has to happen within the bounds of the law so ideally we would want the consumers and the producers to have the same information so that the producers can't lie to the buyers about the quality of the product or anything like that so our free market is going to be a free market within the bounds of the law people's property rights are protected etc so what we need to do is we need to come up with a measure of the economic well-being of buyers and sellers and so the best way to describe how we get to what we're going to use which we're going to call consumer surplus and producer surplus the best way to explain that is to first talk a little bit about Jeremy Bentham an early economist what happened was that Bentham was looking for a way of measuring the well-being of people and Bentham's idea was Bentham was around when the thermometer was invented and prior to the invention of the thermometer it was easy to hold two objects and say well this one feels warmer than that one but we weren't able to put an objective number on it and then the thermometer gets invented and now all of a sudden we can measure the temperature of something in terms of degrees and we can say well this item is five degrees warmer than that item and Jeremy Bentham wanted to do a similar thing with what he was going to call utility he wanted to do it with human wellbeing and he envisioned a point in time where a device would be invented he was going to call it a util-o-meter and that device would be able to measure the happiness or the well-being that a person experiences so maybe it was some device that you put under your tongue and the units of measure for the util-o-meter were
going to be utils so your satisfaction would be measured in terms of utils turns out that never happened right and Jeremy Bentham knew it wasn't going to happen but that was kind of the basis of some of the stuff that we still use today we still think about the utility maximization model not in this class but in a different class you might look at that so Jeremy Bentham came up with this idea of measuring the well-being of people in terms of utility we can't objectively measure utility but we can objectively measure something else now it turns out there's an interesting other story about Jeremy Bentham he became wealthy he left his fortune to one of the economic schools in London I can never remember which one it is but he left them a good chunk of his fortune with the provision that they preserve his body remove the head mount him in this display cabinet that he called the auto-icon and put his clothes on the skeleton and put him sitting on a little bench there and then his head had to be preserved and then mounted between his feet and they did it you can google auto-icon or Jeremy Bentham and you will see pictures of Jeremy Bentham's preserved head and his body and it's kind of one of the more unusual stories in economics but we're not going to be thinking about utility in this chapter we're going to be thinking about something we call consumer surplus and producer surplus and we're going to use those to measure the well-being of buyers and sellers then what we're going to do is we're going to think about how a market works we're gonna think about what that market results in in terms of the well-being of buyers and sellers and then we can compare that to other economic systems and see how a market system compares to other systems like a socialist system or a communist system a planned system okay so the way that we're going to do this is we first need to think about how economists measure value
and it turns out that if you think about something that you value highly and if you're taking notes right now then I want you to think about something you value highly and something you place some value on but not very much if you're taking notes whatever you're using to write could be the second thing the thing that you value but not very much for the other thing think about some physical thing that you value highly maybe it's something that somebody gave to you and they've passed away and you value it because they gave it to you for me it would be a guitar that my grandpa gave me and if I didn't have that I would be disappointed I place a lot of value on it so think about those two things and then think about me taking both of them away from you and think about which one of those you would be willing to pay the most to get back and if you've done what I asked you to do you'll realize real quickly you'd be willing to pay more to get the thing back that you valued the most so we can represent how much you value that by looking at how much you'd be willing to pay to get it back if I took it or if it bothers you to think about me taking it we could think about me offering to buy it from you and in that case we'd be thinking about your willingness to accept dollars to part with it so both of them are kind of different ways to look at a very similar idea and that is how much purchasing power are you willing to exchange for this and so we're going to be thinking about willingness to pay and I'm going to abbreviate willingness to pay wtp so I'll write that frequently so this is going to be our measure of value this is how economists measure the value you place on something whatever you're willing to pay to get it now let's think about an example where let's suppose you have a guitar and you're interested in selling that guitar and so you invite some people to an auction
that you're gonna have where you're gonna auction this guitar off and let's suppose that some bidders show up so we have some bidders let's call them a B C and D so these four people show up and let's suppose we think about their willingness to pay so we're gonna be thinking in this column about how much each of these four people valued this guitar that you're going to be auctioning off let's suppose that bidder a values it at $1,000 okay that's their willingness to pay bidder B values it at $800 bidder C values it at 600 and bidder D values it at 400 now you as the seller of the guitar you don't see these numbers all you see is that four people show up but what's going to happen is that the auction process itself is going to reveal the willingness to pay for three of these people and one person is going to end up buying the guitar so if we think about how the auction would work you would start the bidding and let's suppose that you started at $200 and all four of these people would be bidding once the price rises to just above 400 D falls out they don't want to bid anymore because they were willing to pay 400 but no more but a B and C would still bid for the guitar and so the price would continue to rise until it gets just above 600 and then bidder C would fall out a and B would be bidding against each other and the price would continue to rise until it's just above 800 and then bidder B would fall out and so bidder a is going to end up buying the guitar and they're going to pay one bidding increment higher than 800 let's just call it for convenience $800 so a buys the guitar for 800 and again it's going to be one bidding increment above 800 but we'll keep it at 800 now a is going to be happy about this because they were willing to pay a thousand but because of the way this worked out they only had to pay 800 so they have two hundred dollars left over that they were willing to give up to get that item but because of the way it worked they only had to pay 800 and they get to
keep that 200 we say that a gets two hundred dollars of consumer surplus which we will abbreviate CS consumer surplus let's put that in parentheses bidder a gets two hundred dollars of consumer surplus now let's think about a couple of things related to this so the first thing to think about is to notice that what ended up happening is that this auction resulted in the person that valued the good the most getting it so bidder a valued it the most and ended up walking away with it bidders b c and d may be disappointed that they didn't end up buying it but they didn't end up having to pay anything either so they don't get any consumer surplus because they didn't engage in a transaction here's the other thing we need to keep in mind these numbers here don't tell us anything about income necessarily so it's tempting sometimes to think well bidder a that's probably just a rich person and as usual the rich person walked away with it that's not necessarily an inference we can make it turns out that bidder a could be some kid that really wants that guitar and has been saving his money from mowing lawns for a long time and is willing to pay a thousand it could be that bidder d is somebody that's very wealthy and just doesn't particularly value the guitar very much so even though it's tempting to think that these somehow represent income they don't they simply represent the value that these four people placed on it also notice that the auction revealed bidders D C and B's willingness to pay because we could see when they quit bidding it didn't reveal the full amount that bidder a was willing to pay it only revealed that they were willing to pay more than anybody else they had a higher value than anybody else now what we want to do here is we want to draw the demand curve for the guitar and then we want to see where this consumer surplus shows up with that demand curve so before we draw it let's put together the demand schedule
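The ascending auction just described can be sketched numerically. This is an illustrative sketch, not from the lecture: the `run_auction` helper is my own, and like the lecture it ignores the final bidding increment, so the winner pays the second-highest willingness to pay:

```python
# Sketch of the ascending ("English") auction from the guitar example.
# Each bidder drops out once the price exceeds their willingness to pay
# (WTP), so the highest-value bidder wins at (roughly) the second-highest
# WTP, and their consumer surplus is WTP minus the price actually paid.

def run_auction(wtp):
    """wtp: dict of bidder -> willingness to pay. Returns (winner, price, CS)."""
    ranked = sorted(wtp.items(), key=lambda kv: kv[1], reverse=True)
    winner, value = ranked[0]
    price = ranked[1][1]              # bidding stops just above this; increment ignored
    consumer_surplus = value - price  # CS = WTP - price paid
    return winner, price, consumer_surplus

bidders = {"A": 1000, "B": 800, "C": 600, "D": 400}
print(run_auction(bidders))  # ('A', 800, 200)
```

A buys the guitar for about 800 and keeps 200 of surplus, matching the story above; B, C, and D transact nothing and so get no consumer surplus.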
so we're going to think about the price of the guitar and we're going to think about quantity demanded now remember there's only one guitar here but multiple people want to buy that guitar so the quantity demanded can certainly be higher than one let's think about prices above 1000 first and then we'll think about 1,000 to 800 and then 800 to 600 and 600 to 400 and then let's say below 400 so if we think about prices above a thousand bidder a is willing to pay more than anybody else and the most they're willing to pay is a thousand so the quantity demanded at any price above a thousand certainly is zero any price between a thousand and 800 it's only bidder a so quantity demanded is one I'm going to put a here because it's just bidder a that is willing to pay any price between those two points there between 800 and 600 quantity demanded is two and it's going to be a and B they're both willing to pay a price between 800 and 600 600 to 400 the quantity demanded is three it's bidders a B and C and then any price below 400 quantity demanded is four it's all four of them a B C and D so there's the demand schedule now what we need to do is we want to graph the demand curve and so any demand curve we know is going to have price up here on the vertical axis quantity down here our quantities go up to four so let's put one two three four and then let's put here our prices so our prices let's start at 200 and go up by 200 each time so 200 400 600 800 and a thousand now what we want to do is we want to graph the information in this demand schedule on that picture if we look at any price above a thousand the demand curve is going to be this vertical portion of the vertical axis okay nobody wants to buy any guitars at any price above a thousand at a price of a thousand our quantity demanded jumps to one so we get kind of a little flat segment on our demand curve any price between a thousand and 800 quantity demanded stays at one so we get a vertical segment on the demand
curve at a price of 800 quantity demanded jumps to two so we get another flat section on it any price between 800 and 600 quantity demanded is still two at 600 we get another jump quantity demanded jumps to three any price between 600 and 400 it stays at three and then at a price of 400 it jumps out here to four and any price below 400 quantity demanded stays at 4 there's the demand curve for this guitar now let's think about what we're seeing on the demand curve here and this is really important this is a single demand curve that represents four people notice that the height of the demand curve right here is 1,000 and that corresponds to a's willingness to pay so the height of the demand curve here represents a's willingness to pay the height of the demand curve right here represents b's willingness to pay here we see c's willingness to pay and right down here's d's willingness to pay so here's what we're seeing the height of a demand curve represents willingness to pay and we can observe especially with this demand curve we can observe where each person's portion of the demand curve actually is the consumers or the potential buyers with the highest willingness to pay are going to be represented up here on this end of the demand curve the potential buyers with the lower willingness to pay are going to be represented down here on this end of the demand curve now I realize that the demand curve looks like a set of steps and that's kind of weird but keep in mind that it's still downward sloping the higher the price the smaller quantity demanded is so that's really the most important thing that we're thinking about here the reason the demand curve looks like a set of steps is that we've got a story here where there are discrete units of the good right you either buy one guitar or zero guitars so that causes us to have these jumps from three to four or two to three the other thing is that we've got a small number of buyers in the market if we had lots of buyers
let's suppose we had twice as many buyers let's suppose we had another buyer here that was willing to pay nine hundred and another buyer that was willing to pay seven hundred and somebody willing to pay five hundred and somebody willing to pay three hundred then when we drew the demand curve we'd have twice as many steps and so you can see that as we increase the number of potential buyers in the market the steps get smaller and smaller and eventually we would get a demand curve that would look kind of like what you're used to thinking about with the demand curve so don't be bothered by the fact that it looks odd it's pretty straightforward what we want to do is we want to see where this consumer surplus shows up okay now let's think about how we calculated consumer surplus there the way we calculated it was we took bidder a's willingness to pay and we subtracted off the price that they actually did pay which was 800 so consumer surplus is willingness to pay minus price in our case it was a thousand minus 800 which gave us the consumer surplus of $200 now let's talk about the rest of this picture so if we think about how this market is working we've got one guitar that's going to be auctioned so the supply curve for guitars is going to be vertical here at one and so if we were to draw that in let's draw our supply curve if I draw the supply curve in it's going to come right up here like that there's my supply curve we can see that the intersection of our demand and supply curve gives us an equilibrium quantity Q star of 1 not surprisingly one guitar gets auctioned off we've got this overlap in terms of price but the way the auction works is that the price gets driven up until there are no bidders left except one and so our price ends up stopping right here P star ends up being 800 if you look at this little rectangle right here the vertical distance of this rectangle is 200 units or $200 times this
horizontal distance of 1 the area there is 200 which is the consumer surplus that goes to bidder a so the area of that little rectangle there represents the consumer surplus that went to or that's going to bidder a so what we get is a general conclusion that the area under the demand curve and above the price tells us consumer surplus now what we're going to do here in a little bit is we're going to switch away from a demand curve that looks like steps and we're going to think about demand curves that are closer to what or exactly like what we're used to thinking about but this general conclusion tells us that if we have just a plain linear demand curve like this not steps and we have a price that's right over here what we've just seen is that the consumer surplus is going to be represented as the area under the demand curve and above the price we're seeing it here with this kind of unusual demand curve but it's also going to be true there hey that's important what's happening here is that the height of the demand curve represents willingness to pay it represents the value and then we subtract off what gets paid and what's left over is the consumer surplus which is what we're seeing right there when we calculated that okay let's think about what happens if instead of just one guitar let's suppose we had two guitars so if we have two guitars then our supply curve is going to shift now our supply curve shifts to the right if you think back to that basic demand and supply model you know that an increase in supply is going to drive price down and it's going to drive quantity up and that's exactly what we're going to see so let's suppose we have two identical guitars let's let's think about it in terms of our story first so we if we have two guitars and we start and let's suppose they're identical okay if we start the auction price low let's say 200 then all four people are going to be bidding on the two guitars once the price gets above 400 D drops out but a B and C 
now each want a guitar but there's only two so they're going to be bidding against each other the price will rise to just above 600 and then C drops out and then there's no incentive for a or B to bid for one guitar or the other because they're both identical so now there are two guitars and two people who want to buy them so our price is going to stop at one bidding increment above 600 let's just call it 600 a and B are both going to buy a guitar for $600 each and they're both going to get some consumer surplus so with two guitars a is going to buy one pay a price of 600 and get $400 of consumer surplus B will buy a guitar for 600 and get $200 of consumer surplus now let's see where that shows up in our picture if we were to increase the supply to two guitars then we're going to be dealing with a supply curve that's right here our equilibrium quantity not surprisingly is going to increase to two so now here's our quantity the equilibrium price is going to fall to 600 and if we look at consumer surplus now that goes to bidder a all of the area under bidder A's portion of the demand curve and above the price is going to represent consumer surplus for a this vertical distance would be 400 times the horizontal distance of one so if there are two guitars price is going to end up being 600 consumer surplus that goes to a is going to be $400 which corresponds to this I'm going to shade it this time all of the area under the demand curve and above the price of 600 this area is 400 but now there's also another buyer there's consumer surplus that goes to B and B's consumer surplus ends up being right under their portion of the demand curve and it's right here this vertical distance is 200 and the horizontal distance is 1 that area is 200 it corresponds to consumer surplus that goes to bidder B and so we get $200 of consumer surplus for bidder B total consumer surplus let's call it consumer surplus total would end up being of course $600 there's
$600 of consumer surplus that's going to the people who bought guitars in this particular situation so you can see that when the supply increase it drives price down and when it drives price down that's going to increase consumer surplus now consumer surplus let's think about that for a second consumer surplus this is going to be our measure of the well-being of buyers if you think about why it works well as a measure of well-being it's because the more consumer surplus you get the better off you are right so if you think about when you go out to buy things you almost always get some consumer surplus you almost always pay a price that's below the maximum amount you would have been willing to pay sometimes there are times when you have a difficult time deciding whether or not to buy something maybe you think okay I'm going to buy this and then you you talk yourself out of it and you think I know I'm not going to buy it and then maybe you pick it back up again and say okay yeah I am gonna buy it that's probably a time when the price that you're being asked to pay is is pretty close to equal to your willingness to pay and you have to decide is it is it which is bigger am I willing to pay that price or am I not willing to pay that price most of the time that doesn't happen you observe the price and you buy it and you may not stop to think about the the maximum amount you would have been willing to pay and you probably don't stop to think about how much consumer surplus you're actually getting but you are typically getting some consumer surplus and the more consumer surplus you're getting the better off you are we call it consumer surplus because essentially its surplus value that you didn't have to give up to get the good so the the more of that you get the better off you are so we're going to use consumer surplus as our measure of the well-being of buyers in the market okay so now what I need to do is we need to clear this off and then we'll talk about what it looks 
like with just a regular demand curve because it's actually much simpler when we switch to that type of demand curve so let me clear this off and then we'll take a look at that okay let's take a look at what this consumer surplus looks like with just a plain linear demand curve so if we think about a demand curve like we're more comfortable working with let me bring it up here to the vertical axis so we've got P and Q Q down here so here's our demand curve let's suppose our price is right here that's our equilibrium price and here's the quantity transacted now I'm leaving off the supply curve in this picture but my supply curve would be going right up through that point so I don't want it to get any more complicated than it needs to be what we've just seen is that consumer surplus shows up as the area under the demand curve and above the price so our consumer surplus would be this area right in here now when we've got a linear demand curve consumer surplus is going to show up as a triangle in my previous picture it showed up as some rectangles here it's a triangle but keep in mind you still calculate the area of it and that gives you consumer surplus so let me draw another example down here where I put some prices and the quantity on there let's suppose our price is 10 excuse me our intercept up here is 10 let's suppose the price is 5 and the quantity over here let's suppose it's 20 okay so we're looking for this area right here and remember anytime you've got a triangle like this a triangle is half of a rectangle right so if you want the area of the triangle you calculate the area of the rectangle and divide it by 2 and the area of the rectangle would be the height this distance times the width that distance so consumer surplus is equal to the height which is 5 multiplied by the width which is 20 and then divided by 2 that's $50 so consumer surplus is easy to calculate now let's take a break here for a
second and think about why in our previous example consumer surplus was equal to one number minus another number and now all of a sudden it's showing up as an area so if you're willing to pay $50 for something and you only have to pay ten you get $40 of consumer surplus well what's going on here is that we're talking about multiple units of the good okay so we're thinking about not just this first unit and how much consumer surplus you get but then the next unit and the next unit and the next unit and by the time you add it up over all of these units you're getting all of this area under the demand curve and above the price so if you're just talking about a transaction of one unit then it's just your willingness to pay minus the price if we're talking about multiple units like we would be with a market picture like that then we're talking about an area okay now what we've just come up with if we're talking about a demand curve let's remember that we've just talked about the fact that that demand curve represents the value that consumers place on the good and we know from our demand curve that looks like a set of steps that the consumers with the highest value are going to be represented up on this end of the demand curve the consumers with the lowest value are going to be represented down on that end of the demand curve with a linear demand curve like this it's not as obvious exactly where each person is represented but you can still keep in mind that it represents value these people have a higher value than these people okay so now let's think about how consumer surplus changes if we change price I'm going to draw another picture here here's a demand curve let's pick a price like p1 and let's identify our initial quantity here at q1 and then let's think about what happens if price falls to p2 so what we want to think about here is what happens to consumer surplus if price goes down okay so we can put q2 out here I'm gonna label some areas
let's call this area a let's call this B and let's call this C now let's start by thinking about consumer surplus at a price of p1 consumer surplus at p1 well consumer surplus at a price of p1 is going to be the area under the demand curve and above the price which is area a consumer surplus at our initial price of p1 is equal to area a you would calculate that just by taking this height multiplying by the width dividing by 2 that would give you that area okay now let's think about what happens if price were to fall to p2 so price falls to p2 I'm leaving the supply curve off but for price to be p1 our supply curve would be right there and then if we have an increase in supply to right there it would drive price down to p2 so I'm just not drawing those supply curves but keep in mind they're there so price falls to p2 what we saw in our previous picture with the step demand curve was when that supply of guitars shifted to the right then the price fell and what happened was consumer surplus increased and that's what we're gonna see here consumer surplus at a price of p2 is now the area under the demand curve and above this price of p2 it's going to be area a plus B plus C now if you're calculating that you wouldn't calculate these three areas you just calculate the area of this bigger triangle right there okay what we want to do is think about how much did consumer surplus change so I'm gonna talk about the change in consumer surplus so how much did it go up by well it started out at a it went to a plus B plus C so it increased by B and C so the change in consumer surplus was B plus C you could calculate that if I asked you to calculate the change in consumer surplus you could calculate it by calculating the area of this rectangle height times width you don't have to divide by two plus the area of this triangle height times width divided by two or you could calculate the area of the small triangle calculate the area of the big triangle and
subtract the smaller one from the bigger one that would give you that area okay let's talk about these two chunks right here B and C so let's start by thinking about what happens when our supply curve increases well when the supply curve increases it drives price down consumer surplus goes up by that amount B plus C but let's think about where this consumer surplus B ends up going and where C goes so if you think about the consumers that are buying the good the consumers that are buying the good are going to be represented along this portion of the demand curve at a price of p1 at a price of p1 these consumers are buying the good and when the price falls to p2 those consumers continue to buy the good and they're made even better off because they get more consumer surplus they bought at the higher price of p1 they're definitely going to buy at a price of p2 and so area B represents more consumer surplus that goes to the original buyers so area B this is the increase in consumer surplus to the original buyers those were the buyers who bought at the higher price of p1 when the price goes down to p2 notice that that brings some additional buyers into the market these people represented along this portion of the demand curve from that point to that point those people enter the market when price goes down just like in our previous picture with the step demand curve when there were two guitars bidder B entered the market now we use that term entered the market they were there all along but at a price of p1 these people down here don't buy any of the good once the price falls to p2 these people all of a sudden decided to buy the good just like bidder B decided to buy a guitar once the price fell to 600 so area C represents the consumer surplus that goes to the new entrants into the market C this is consumer surplus to I'm going to put it in quotes here the new entrants into the market these were people who didn't buy at a price of p1 but now
decide to buy at a price of p2 because their willingness to pay is now higher than the price okay so you can see that the total amount of the additional consumer surplus can be broken up into different groups if we were to reverse this if we were to start with a low price a supply curve out here and have our supply curve shift to the left that would drive price up these people would leave the market they would switch from being buyers to being non-buyers they wouldn't buy the good we would lose some consumer surplus these people up here would continue to buy the good even at the higher price because their willingness to pay is still greater than the price but they would lose this chunk of consumer surplus area B but they'd still continue to buy what I need to do now is clear this off and then we'll talk about producer surplus let's talk now about developing a measure of the well-being of the sellers in the market so we're going to develop something that we're gonna call producer surplus and in a lot of ways it's going to be very similar to consumer surplus only obviously we'll be talking about the other side of the market so let's kind of do the same thing let's think about now instead of selling this guitar let's suppose that you're going to get bids to have a guitar built for you okay so let's suppose that you get some shops to bid on building you a guitar so let's have shops let's call them E F G and H and let's think about their cost of production so this is going to be the cost of production for each of the shops all of them are going to build an identical guitar let's suppose you've got the plans for it you've got maybe you've got the woods picked out you know exactly what the hardware is gonna look like and so they're gonna build the exact same guitar but they're all gonna have different cost of production let's suppose shop E their cost of production is $200 for them to build the guitar it costs them $200 these are not the prices they're going
to charge you this is their cost of production so don't think of that as being the cost to you okay let's suppose Shop F has a cost of production of 400 G 600 and H 800 and you might ask well if they're all building the same guitar then why would they have different opportunity costs well because everybody has a different opportunity cost of their time and so even though the good is identical they can have different opportunity costs so if we think about what's going to happen in this situation it's kind of a reverse auction you're gonna start the price high and you're gonna say okay who will build the guitar for $1,000 and of course all of them would be willing to build the guitar for $1,000 and then you're gonna say okay who would build it for 900 and all of them would be willing to build it for 900 and as the price falls as soon as the price fell just below 800 shop H would say okay we're out we can't do it for that the price would continue to fall because E F and G would bid against each other so the price continues to fall until it's just below 600 and then G drops out price continues to fall until it's just below 400 and then F drops out so we know that shop E will end up building the guitar and they're gonna charge one bidding increment less than 400 let's just for convenience call it 400 so shop E builds the guitar and they charge you $400 remember you don't see that number right they don't show up to the auction and tell you what their cost of production is going to be the auction process revealed the cost of production for H G and F but not for E so shop E builds the guitar charges you 400 we would say that they get $200 of producer surplus they're going to get $400 it's going to cost them 200 so they're going to get to keep $200 okay so we say that shop E gets $200 of producer surplus which we're going to abbreviate PS producer surplus here's the definition that we're using producer surplus is equal to the price minus the
cost of production producer surplus is the price minus the cost of production so the price here was 400 minus their cost of production of 200 that leaves them with producer surplus of $200 now at this point you may be looking at that and say hey that sounds a lot like profit well it turns out that producer surplus and profit are not the same thing but for what we're doing right here while it wouldn't be technically correct it would be okay to think about it that way at this point let's draw the supply curve for guitars now we could put together the supply schedule I'm going to kind of skip that since we did that for the demand situation we can just use this information we know what's going to end up happening let's put our prices here 200 400 600 and 800 our quantities go up to 4 because there's 4 shops if we think about let's start with low prices it's easier in this case to start down here so if we start with low prices like $100 no shop is willing to build it for a hundred so any price up to 200 our supply curve would be this portion of the vertical axis and at a price of 200 shop E is willing to build the guitar so our quantity supplied jumps to one unit at that price any price between 200 and 400 it's still just shop E so we get a vertical segment at 400 it jumps to 2 because now shop F is also willing to build it any price between 400 and 600 it stays at 2 at 600 it jumps to 3 any price between 600 and 800 it stays at 3 and then at 800 it jumps to 4 and any price above 800 all four of them are willing to build the guitar so we get a supply curve that looks like that and the key is that it's upward sloping it looks like a set of steps but again that's because we're thinking about discrete units of the good and we've got a small number of firms here but the nice thing about this supply curve is it helps us understand first off that the supply curve is representing the cost of production here so I'm going to
label this supply curve as representing the cost of production because if you look at the height of the supply curve right here the height is representing shop E's cost of production I'll abbreviate it COP E right here the supply curve is representing shop F's cost of production and right here this is G's cost of production and right here is H's cost of production so the height of the supply curve at every point is representing the cost of production now let's think about what the demand curve looks like in this case so the demand curve we only want one guitar produced and so the demand curve is perfectly inelastic at a quantity of one so I could draw it in there but I'm not going to we know that what's going to end up happening is our equilibrium quantity is 1 the equilibrium price is going to end up being 400 because of the nature of this auction and this area right here this represents the producer surplus that goes to shop E I'm going to call it producer surplus with a little E on there if you calculate the area of that little rectangle this vertical distance is 200 horizontal distance is 1 so the area here is 200 which is exactly what we got when we calculated it down here it was 400 minus 200 producer surplus was equal to $200 we could do the same thing if we wanted with our demand curve we could suppose that we want two identical guitars to be built and let's suppose each shop can only build one then if we shifted the demand curve to the right we know that an increase in demand is gonna drive price up and quantity up so if our demand curve shifted to the right then it's going to drive quantity up to two it's going to drive price up to $600 each shop E would still produce a guitar and get paid 600 so they would get $400 of consumer surplus or excuse me producer surplus and shop F would now build a guitar they'd sell it for 600 it would cost them 400 to produce they would get $200 of producer surplus we could look at the total amount of producer surplus as all of
the area under the price and above the supply curve so everything works the same as what we saw with demand it's just now that we're talking about supply we're talking about the area under the price and above the supply curve so if we were to switch now to just a plain upward sloping supply curve here's our price then we can go over here here would be the quantity I'm going to leave the demand curve off it would be going down through there but what we've just seen is that producer surplus is the area under the price and above the supply curve there's producer surplus make sure we label that supply curve what we want to do now is just think about what happens if price changes and let's go with our linear supply curve so let me draw another picture here and we'll do something similar to what we did with demand let's draw a supply curve let's start with a price of p1 okay let's look at a quantity of q1 and then we'll think about what happens if price goes up to p2 we'll think about a quantity of q2 I'm gonna put a little dashed line there let's call this a let's call this B let's call that C and I'll go through this fairly quickly let's start with producer surplus at a price of p1 producer surplus at the price of p1 is the area under the price and above the supply curve it's area a producer surplus at a price of p2 now our price is up here producer surplus is all the area under the price and above the supply curve so it's going to be a plus B plus C when the price goes up that increases producer surplus so the change in producer surplus how much did it change by well that's area B plus C and we can break that area up into two chunks just like we did with demand so the sellers that sell the good at a price of p1 are these sellers represented along this portion of the supply curve from that point to that point at a price of p1 there are our sellers just like at a price of 400 there is our seller it's shop E when the price rises to p2 shop E still
sells the good and it gets more producer surplus so area B that represents increase in producer surplus to the original sellers okay just like when demand increases and price goes up shop E continues to sell the guitar and they make more producer surplus but what also happens is when that price goes up shop F entered the market and so when the price rises to p2 these sellers right here enter the market now they were there all along on our picture it's just that they weren't selling the good at a price of p1 so area C is producer surplus to what we're going to call the new entrants into the market so now let's think about what we've got we've talked about consumer surplus it's going to be our measure of the well-being of buyers we've talked about producer surplus it's going to be our measure of the well-being of sellers keep in mind that the units of measure that we're using for all of these are dollars so the nice thing about consumer surplus and producer surplus is that the units of measure are something that we're really comfortable thinking about okay unlike Jeremy Bentham's original idea where we had utility and it was measured in utils I don't know what a util is but I know what a dollar is and I know what a dollar can buy and so these are really useful ways of measuring well-being what we want to do now is take these ways of measuring well-being and think about how a market works and whether or not a market works in a way that increases human well-being okay so I'm gonna do that in the next video we'll also see that we can come back and use these to kind of think about how public policy different types of government policy impact consumer and producer surplus so we'll do that in a future video
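The area calculations walked through in this video (the $50 consumer-surplus triangle, the B plus C change when price falls, and shop E's producer surplus) can be sketched in a few lines of Python. The function names here are just illustrative, not anything from the lecture; the numbers are the lecture's own examples:

```python
def linear_cs(intercept, price, quantity):
    # consumer surplus with a linear demand curve: the triangle under
    # the demand curve and above the price (half of a rectangle)
    return 0.5 * (intercept - price) * quantity

def delta_cs(p1, p2, q1, q2):
    # change in consumer surplus when price falls from p1 to p2:
    # area B (rectangle: extra surplus to the original buyers) plus
    # area C (triangle: surplus to the new entrants)
    area_b = (p1 - p2) * q1
    area_c = 0.5 * (p1 - p2) * (q2 - q1)
    return area_b + area_c

def producer_surplus(price, cost):
    # producer surplus on one unit: price minus cost of production
    return price - cost

# the lecture's linear-demand example: intercept 10, price 5, quantity 20
print(linear_cs(10, 5, 20))        # 50.0, the $50 triangle
# shop E builds the guitar at a cost of 200 and sells it for 400
print(producer_surplus(400, 200))  # 200
```

The same `delta_cs` shape works for the producer side: when price rises, sellers gain a rectangle to the original sellers plus a triangle to the new entrants.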
Principles of Microeconomics, Chapter 6: Supply, Demand, and Government Intervention, Part 2 (price controls and taxes)
let's talk about another type of government intervention into a free market and that is taxes so if we think about a tax there are several different types of taxes some of them are what we would call a per unit tax some of them are a percentage tax so you may be used to thinking about say a sales tax where you have to pay two percent more so it's a percentage of the total amount of stuff that you've bought what we're going to think about is a per unit tax so this would be a tax of say $1 every time you buy something or a tax of five dollars every time you buy something so for every unit you pay five dollars tax and the way these taxes work is very similar to the way a percentage tax works so we're going to go with the simplest one so let's start by thinking about kind of a hypothetical scenario let's suppose that you've gone down to a car lot and you've picked out a used car that you're going to buy or maybe a new car that you're going to buy and let's suppose the total price of that car is $10,000 and you haven't paid for it but you've told the people there that owned the car lot that you want it and they're going to hold it for you till tomorrow morning and you're gonna show up and write a check or however you're gonna pay ten thousand dollars for that car and then you go home tonight and you're watching the news and you see a news story that says that effective immediately the Missouri Legislature has just passed a tax on car sellers and every time they sell a car they're gonna have to send a thousand dollars to the government so now think about what's going to happen when you go down tomorrow to buy that car think about whether or not they're going to still sell you that car for $10,000 because remember you haven't signed any contract or anything you just pick the car out the price was going to be ten thousand and then now all of a sudden something's changed if I ask that to a face to face class if
I ask them so what's the price going to be everyone agrees that it's going to be higher than it was the day before because now there's a tax and if I ask them what's the price going to be most people will say it's going to be $11,000 because yesterday the car cost ten thousand and today there's a new thousand dollar tax on the car that the seller is going to have to send off to the government so they're gonna charge you eleven thousand dollars and that feels like the way a tax works it feels like all you have to do is take the tax and add it to the original price it turns out that's not at all how a tax works and so what we have to do is we've got to go through and think about how a tax affects the incentives of the buyers and the sellers and what we're going to see is that that tax will cause the demand curve or the supply curve depending on who's taxed it will cause that to shift and that's going to change the prices it's going to change the quantities so a tax is not as simple as everybody thinks everybody thinks out there in the real world everybody thinks they understand how taxes work and almost nobody really does so let's think about a tax let's start with a tax levied on the buyers so a tax levied on buyers let's suppose the tax is equal to t dollars okay so that could be one dollar per unit or five dollars or $50 or a thousand dollars per unit but this is a tax levied on the buyers so when the buyers buy the good they have to additionally send off some money to the government okay so we need to first think about which curve shifts we need to then think about which direction it shifts not how much which direction and then we need to draw the picture and think about what happens so clearly this is a tax levied on buyers and that's going to influence the demand curve because that's the curve that represents the buyers so let's just think about a
different example let's suppose it's lunchtime you're hungry and you're willing to pay ten dollars for a pizza if you call up the pizza place and they tell you that the price of the pizza is seven dollars you buy it or if they tell you it's eight dollars you buy it or if they tell you it's nine dollars and seventy five cents you buy it you're willing to pay ten but if they tell you it's ten dollars and twenty five cents you say no I'm not gonna buy it I'll buy something else okay so now let's suppose that all of a sudden when you pay that price for the pizza you have to turn around also and send a dollar to the government let's suppose there's a one dollar tax on pizzas now think about the most you'd be willing to pay the pizza place for the pizza if you call them up and they tell you it's five dollars you would say okay I'll take the pizza because you pay them five you send a dollar to the government that's six total that's fine you were willing to pay ten if they tell you seven you say okay I'll do it so you pay them seven for the pizza you send your dollar to the government that adds up to eight that's great but think about the highest price the pizza place can tell you before you decide not to buy it it's gonna be nine dollars right so now if they tell you nine dollars and twenty five cents you're gonna say you know what never mind because if you pay them nine dollars and twenty five cents and then another dollar to the government that's more than you were willing to pay for the pizza so what happens is that a tax on buyers reduces willingness to pay by the amount of the tax and that's important this is extremely important reduces willingness to pay by the amount of the tax now let's talk about how that shows up graphically if we think about a demand curve remember the height of that demand curve represents willingness to pay so if we look at a particular quantity and we go up to the demand curve the height there represents
the willingness to pay at the margin so let's suppose at this quantity you're willing to pay this amount let's call that Q 1 P 1 and then all of a sudden a tax of T dollars is imposed we know that that tax is going to reduce your willingness to pay by the amount of the tax if it's a $1 tax it reduces your willingness to pay by $1 so your willingness to pay is going to fall by the amount of the tax T and so if we draw the demand curve that represents your new willingness to pay this I will call the demand curve with the tax it's going to be T dollars directly below the old demand curve it reduces your willingness to pay because now if you buy this unit of the good you're only going to be willing to pay this amount to the seller and then you've got to turn around and send T dollars to the government and that adds up to your total amount that you're willing to pay okay so that's the effect of a tax on buyers so now we've figured out which curve shifts we've figured out which direction it shifts straight down by T dollars now we just have to draw the picture so let's analyze the effect of a tax here let's start with a demand curve and the supply curve let's identify our initial equilibrium here at point a here's our initial price of p1 and our initial quantity of q1 so let's put our story over here we're going to start at a price equals p1 quantity equals q1 and let's suppose that we impose a tax on buyers tax on buyers of t dollars per unit that's going to shift the demand curve straight down by T dollars here's the demand curve with the tax now let's think about where our new equilibrium is our new equilibrium is right down at point B all right there's point B let's start by thinking about the effect on quantity so not surprisingly the quantity is going to fall and most people find that to be pretty intuitive anytime the government imposes a tax less of the good is going to be bought and sold so quantity falls let's say to q2 now let's think about how this is going to
affect the prices so let's start by thinking about how it's going to affect the price that the buyer pays to the seller notice that it's actually going to drive down the price that the buyer pays to the seller right here is the price that the buyer pays to the seller we're going to call it PS I'm going to call that the sellers price because what's happened is that the tax reduces consumers willingness to pay and so consequently they're going to be willing to pay the sellers less than before so from the sellers perspective that's the end of the story once they sell the good to the buyer the seller's not being taxed so they get to put those dollars in their pocket that's it but if we look at what happens for the buyer this is not the end of the story for the buyer the buyer has to pay this to the seller and then pay T dollars to the government and if we add T to this price remember the vertical distance between these two curves is T so that takes us right back up to the original curve there's the buyers price I'm going to call it PB that's the final price that the buyer has to pay the distance between the price that the buyer pays and the price that the seller gets to keep is the tax t so here's the surprising thing if we look at the implications of this tax there are three surprising things that we're going to see or we could say two surprising things one that has to do with quantity one that has to do with price what we see is that quantity falls I guess that's not really that surprising but here's the surprising part this tax that's imposed on buyers ends up affecting both the buyers and the sellers it drives the buyers price up and it drives the sellers price down and we can think about how much it drives the buyers price up versus how much it drives the sellers price down if we look at where price started it's right here that distance right there is
how much it drove the price up for buyers that's what we would call the buyers burden or the buyers incidence of the tax now notice it does drive the buyers price up but it drives it up by less than the amount of the tax right if this distance between this price and that one is the amount T then this distance clearly has to be less than T notice that it drives the sellers price down that's what we would call the sellers burden of the tax or the sellers incidence of the tax so even though the sellers were not taxed this is a tax on buyers the tax still ends up affecting the sellers because the tax reduces the buyers willingness to pay okay so the tax burden is going to fall on both the buyers and the sellers now in my picture here it makes it look like the tax burden is a little bit bigger for the buyers and smaller for the sellers your picture if you've been drawing this along with me yours may be a little different yours may make it look like it's split right down the middle what we're going to see is that it does not get split right down the middle it depends on the elasticity of demand and the elasticity of supply we'll do that here in a little bit but for now the key is that the tax is going to fall on both the buyers and the sellers okay what I need to do now is clear off some of this and then we'll do a tax on sellers and we'll see how that looks all right let's think about now what a tax levied on sellers looks like so this was our picture of a tax levied on buyers I'm gonna put my tax levied on sellers right next to it and I'm going to put my story right over here so what we want to do at the end is we want to be able to compare these pictures and see what the similarities are between a tax on buyers and a tax on sellers so let's kind of draw a similar set of demand and supply curves my initial equilibrium is right here at point a my initial price of p1 and initial quantity of q1 okay so we're gonna start at a price equals p1
quantity equals q1 and now let's have a tax levied on sellers a tax levied on sellers now before we draw it on that picture let's talk about the effect that a tax is going to have on sellers so let's suppose that now you run a pizza business and let's suppose that it costs you $5.00 to make a pizza okay so that includes the cost of raw materials and that includes the cost of any labor that you've got to buy from other people and that includes the opportunity cost of your time let's suppose you're running the business and so you're accepting the risk of the business and you've put up the capital and so you're entitled to some of the money that gets paid to the business for the pizzas that $5 covers everything okay so the cost to you of selling a pizza is $5 you're going to be willing to sell the pizza for anything $5 or more you'd never sell the pizza for $3 all right because that's less than your cost of production but you'd start selling it at five five covers everything you'd love to sell it for more you'd love to sell it for eight or nine or ten or you'd really love it if you could sell pizzas for fifty or sixty or a hundred dollars each you'd make a lot of producer surplus but you'd be willing to sell it for anything greater than five now let's suppose that the government imposes a tax on you and that tax is $1.00 per pizza every time you sell a pizza you've got to send a dollar to the government well notice that what that does is that just adds a cost of production to you doesn't it it's just like the cost of your labor going up so what that means is if you were previously willing to sell pizzas for $5.00 and now all of a sudden the government imposes a $1 tax on you then the lowest price you'd be willing to take for a pizza is going to be $6 now so what a tax on sellers does is it increases their cost of production by the amount of the tax okay so let's write that over here a tax on sellers increases cost of production by the amount
of the tax, by T dollars, whatever that tax is, and that's important. Okay, if we were to draw a little picture of what that looks like, remember that the supply curve represents the cost of production. We talked in a previous chapter about the fact that any supply curve is nothing more than a cost of production curve, and the height of that supply curve at any particular point represents the cost of production. Well, if the cost of production is going to increase by T dollars, then that means every place along the supply curve is going to shift up by T dollars. So the supply curve with the tax, and I'm going to write that small and you may not be able to read it, the supply curve with the tax is going to be T dollars higher than the original supply curve. Okay, so now let's go back to this picture. This is our supply curve with no tax. Now all of a sudden the supply curve is going to shift up by the amount of the tax; let's suppose that that shift is T. Here's the supply curve with the tax. So now we're going to get a new intersection right up here at point B. Let's start by thinking about the effect this will have on quantity. Not surprisingly, a tax is going to decrease the amount of the good sold; any tax on a good discourages economic activity, so the quantity falls. Now look at what happens to the price that the buyer pays to the seller. Right there is the buyer's price, the price the buyer pays to the seller, and from the buyer's perspective that's the end of the story. This is a tax on sellers, so once the buyers pay that price, they're done. They take whatever the good is, they take the pizza, they go home, they eat it, and that's the end of the story. It's not the end of the story from the seller's perspective, because the seller doesn't get to keep all of those dollars right there. The seller has to take T dollars away and send them to the government. So if we take that price and subtract T, remembering the distance between these two curves is T, that takes us back down to the original
supply curve down here, and we get the number of dollars that the seller ends up getting to put in their pocket. The difference between the buyer's price and the seller's price, of course, is the amount of the tax, T. So let's think about the implications here. The first one is that quantity falls to, excuse me, to Q2; quantity falls to Q2. It drives the buyer's price up, and it drives the seller's price down, and we can think about the incidence, or the burden, of the tax that falls on both the buyers and the sellers. Let's start with the buyers. The buyer's price started at P1 and it ended up right up here, and that vertical distance represents the tax burden that falls on the buyers, just like it did over here. But we've also seen that the tax is going to fall partially on the sellers, and so that distance that it drives the seller's price down, that's going to be the burden of the tax that falls on the sellers. And remember, the difference between the two prices is always the amount of the tax, T. Let's think now about the implications of this tax. So this was the tax on buyers, and this is the tax on sellers. Even though this was a tax on sellers, it ended up falling partially on the buyers, because it drove their price up. Why? Because a tax on sellers increases their cost of production, and when their cost of production goes up, the buyers are going to end up having to pay more. But it also falls partially on the sellers. Okay, if we think about the implications here: quantity falls, and the tax ends up falling on both sides of the market. Now if we kind of step back for a second and think about how this tax on buyers is similar to this tax on sellers and how they're different, well, if you look at the two pictures there are some obvious differences. In this picture the demand curve has shifted, and in this picture the supply curve has shifted. We've got our points B in two different places. So those are the differences. But if you think about the similarities, what you start to realize is that
the pictures, in terms of what really matters, are exactly the same. We see that the quantity falls, we see that the buyer's price gets driven up, we see that the seller's price gets driven down. The tax burden doesn't fall only on one side or the other; it falls partly on the buyers and partly on the sellers. So in terms of what is happening in these pictures, in terms of the prices and the quantities, they are exactly the same, and that leads us to an important conclusion, and this is extremely important: a tax on buyers is exactly equivalent to a tax on sellers. That is very important. A tax on buyers is exactly equivalent to a tax on sellers. Usually in a face-to-face class I will put that on the board, I'll wait for everybody to get it written down in their notes, and then I'll just sit there and wait, hoping that it dawns on somebody what that means. And at this point in my career it has never happened; no student has ever realized the magnitude of what that means. I remember back when I was a student, I don't think it dawned on me until the professor said, now stop and think about what's happening right there, think about what that means. So if you're going to impose a tax on a market, it doesn't matter if you impose that tax on the buyers or if you impose that tax on the sellers; everything's going to work out exactly the same. The prices will end up exactly the same, and the quantities will end up exactly the same, regardless of which side of the market you place the tax on. Now here's the real reason why that's important. It's important because if you think about politics, and you think about how much of a particular politician's platform is based on which side of the market they think should be taxed and which side of the market they think should get a tax cut, and then you sit back and realize that it doesn't matter which side of the market you raise taxes on, and it doesn't matter which side of the market you cut taxes for, all of that is a smokescreen.
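This equivalence is easy to check with a small computation. The sketch below uses hypothetical linear curves invented purely for illustration (they are not the curves drawn in the lecture): demand P = 20 − Q, supply P = 5 + Q, and a $1 per-unit tax placed first on the sellers and then on the buyers.

```python
# Hypothetical linear curves: demand P = 20 - Q, supply P = 5 + Q.
# Place a $1 per-unit tax on sellers, then on buyers, and compare.
a, b = 20.0, 1.0      # demand: P = a - b*Q
c, d = 5.0, 1.0       # supply: P = c + d*Q
T = 1.0               # per-unit tax

# Tax on sellers: cost of production rises by T, supply shifts up.
# Solve a - b*Q = (c + T) + d*Q for the new quantity.
Q_sell = (a - c - T) / (b + d)
PB_sell = a - b * Q_sell          # price buyers pay at the register
PS_sell = PB_sell - T             # price sellers keep after the tax

# Tax on buyers: willingness to pay at the register falls by T,
# so the demand curve sellers face is P = (a - T) - b*Q.
Q_buy = (a - T - c) / (b + d)
PS_buy = c + d * Q_buy            # price sellers receive
PB_buy = PS_buy + T               # total price buyers end up paying

print(Q_sell, PB_sell, PS_sell)   # 7.0 13.0 12.0
print(Q_buy, PB_buy, PS_buy)      # 7.0 13.0 12.0 -- identical outcome
```

Either way, the quantity falls from the no-tax 7.5 to 7, buyers end up paying $13, and sellers end up keeping $12: the same outcome regardless of which side of the market the tax is placed on.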
It doesn't matter. You either tax a market or you don't; you either lower taxes on a market or you don't. It doesn't matter which side of the market you impose the tax on. Government is powerless to determine how much it drives the buyer's price up and how much it drives the seller's price down; the government does not have that power. And yet I would argue that there are lots of people out there casting votes based upon something they've been told about which side of the market should be taxed or which side of the market shouldn't be taxed, and now I've just told you that is an irresponsible way to determine your vote. So I would encourage you: when people talk about taxes, listen to which side of the market they think should be taxed, and in the back of your mind, every time they start talking about that, say, you know what, it doesn't matter. Here's an example. I've heard lots of people say stuff like, you know what, cigarette companies are selling a product where just the normal use of the product hurts your health, and so we should tax cigarette companies, because they shouldn't be entitled to keep the profits that they get from selling a product that hurts people's health. Well, that's exactly the same thing as saying, you know what, we should tax smokers. It doesn't matter if you tax the sellers of cigarettes or you tax the smokers; it doesn't matter at all. A tax on the cigarette companies is exactly equal to a tax on smokers. And yet there are lots of people that think, yeah, you know what, we need to stick it to those businesses. Well, if you think that that's somehow sticking it to the business, you're wrong. The tax is going to drive the buyer's price up and it's going to drive the seller's price down, and it doesn't matter which side of the market you tax. As a matter of fact, you could take half the tax and put it on buyers and half the tax and put it on sellers: same outcome. It's going to drive the buyer's price up by exactly the same amount
and the seller's price down by exactly the same amount. It will change the quantities by the exact same amount. You could put 10% of the tax on one side and 90% on the other side; it doesn't matter. It doesn't matter how you split the tax up. Now once you understand that, then we can think about a different way to illustrate the effect of a tax. Once you're comfortable, and you realize that it doesn't matter which side of the market you tax, there's a little bit easier way to do it than these. So I'm going to clear this off and then we'll take a look at that. Let's think about the third way to illustrate the impact that a tax has on a market. So, same thing: put a demand curve and a supply curve up here. Here's our equilibrium with no tax; the price would be P1 and the quantity would be Q1. Now if you were to look back at the notes that you've taken, you'll realize that in both those pictures, the tax on buyers and the tax on sellers, what really matters is that we've ended up, in each of those pictures, finding the place where the vertical distance between the demand curve and the supply curve was the amount of the tax. Okay, if you think about the first picture, where we placed the tax on buyers, we had the demand curve shifted down, and it was shifted down that much, and it went right through that point, and at some point in that problem we identified that the vertical distance between the demand curve and the new shifted demand curve was the amount of the tax, T. In the other picture, where we had a tax on sellers, we had a shifted supply curve, and it would have been going through that point right there; it would have been T dollars directly above the old supply curve, so it would have been running right through there. But what really matters is that we found the place where that vertical distance between the demand curve and the supply curve is the amount T. And it has to be a vertical distance; you can't angle it like that, or you'll get
the wrong answer; it's got to be a vertical distance. Once you've identified that, once you've found the place where this distance is T, then that tells you that that's the quantity; let's call that Q2. Right up here is the buyer's price, and right down here is the seller's price, and we've got the same picture: here's the buyers' burden of the tax, here's the sellers' burden of the tax. And the way that would look, let's suppose that this is a $3 tax. So let's suppose the tax is equal to $3. Then this distance, if the tax was $3, that distance would be 3, and you would find the place where the distance between the demand curve and the supply curve is $3. And clearly, if you haven't realized this yet, I would need to give you a picture with a background here where you could see some grid marks, like a piece of graph paper; that's how you would be able to identify how much that vertical distance is. With a freehand drawn picture like this, you'd never know that that distance was 3. Okay, so on a test problem there will be a background that will allow you to easily identify where the distance is 3. You know, back here the distances would be much bigger, this might be a distance of 12, and as we get closer the distances get smaller, and you just find the place where that distance is 3. What you're going to find is that part of the tax burden is going to fall on buyers and part will fall on sellers. In this picture it looks like it's split equally, so it could be that $1.50 of that $3.00 tax falls on the buyers, it drives their price up by $1.50, and it drives the sellers' price down by the other $1.50. So half the tax could fall on the buyers and half could fall on the sellers. Or it could be, and we'll do it here in a second, I'll change the elasticity of the demand curve compared to the supply curve and we can make more of the tax fall on one side than on the other, so it could be that $2 of the tax falls on buyers and one dollar falls on
sellers, or $0.50 of the tax falls on buyers and $2.50 of the tax falls on sellers. But the burden that falls on buyers plus the burden that falls on sellers has to add up to the amount of the tax. Okay, so now let's illustrate what happens when we change the elasticities. Let's draw a couple of pictures here. In my left picture I'm going to have a relatively elastic demand curve and a relatively inelastic supply curve, and I'm going to identify my initial price of P1, here's P, here's Q, and my initial quantity of Q1. Now over in this picture I'm going to have a relatively inelastic demand curve and a relatively elastic supply curve; here's my equilibrium, P1, Q1. So I've got two different demand and supply pictures. Let's start by thinking about the demand curves: in this picture I've got elastic demand, and in this picture I've got inelastic demand. Here I've got inelastic supply, and in this one I've got elastic supply. Now let's use this procedure for figuring out the effect of a tax. Let's suppose that we had a background here and it's, I don't know, a $2.00 tax, and so we find the place where that vertical distance is $2.00, and remember it has to be a vertical distance, so it's going to be something like this, not the straightest line I've ever made, but right there. Let's suppose that distance is $2.00. Then what we can do is see that right down here is Q2, right here is going to be the price that the buyers pay, we'll call it PB, and right down here is going to be the price that the sellers end up getting to keep. And you can see that the burden of the tax here is going to fall mostly on the sellers; it's going to drive the sellers' price down by much more than it drives the buyers' price up. Okay, let's find the place over here where we've got the same $2.00 tax. So we're going to take that same distance of $2.00 and move it right over here, and remember it has to be a vertical distance, so it's going to be something like that;
it's going to be somewhere right in here. So now let's suppose that that distance in this picture is $2.00. We can see that it's going to cause quantity to fall to Q2; right up here is going to be the price the buyers have to pay, PB, and right down here is going to be the price that the sellers get to keep, PS. And you can see here that the burden of this tax is going to fall mostly on the buyers; a very small chunk of it is going to fall on the sellers. So what you can see is that when we've got elastic demand and inelastic supply, most of the tax burden falls on the supply side of the market, which is the inelastic side of the market. Over in this picture we've got inelastic demand and elastic supply, and here a bigger chunk of the burden falls on the buyers, the inelastic side of the market. So here's our general conclusion: a tax burden falls more heavily on the inelastic side of the market. That's a general conclusion; all other things equal, for any tax, the tax burden is going to fall more heavily on the inelastic side of the market. Here the sellers are the inelastic side of the market, and it drives the sellers' price down by a lot more than it drives the buyers' price up. Over here the buyers are the inelastic side of the market, and it drives the buyers' price up by a lot more than it drives the sellers' price down. Now, this is relative elasticity; what matters here is the steepness of the demand curve compared to the steepness of the supply curve. So both demand and supply can be elastic: if you calculate the elasticity of demand and the elasticity of supply, you could get numbers greater than 1 for both of them, which would mean both are elastic, but whichever one has the smaller number will be the relatively inelastic side of the market. So it's relative elasticity that matters here, and that's very important. Let's talk now about what happens to consumer and producer surplus with a tax, so I
need to clear this off, and then we'll finish up with that. All right, let's take a look at the effect of a tax on consumer and producer surplus. Price, here's our demand curve, here's our supply curve, and there's where quantity goes. Let's identify our initial price of P1 and our initial quantity of Q1. Let's suppose that we've imposed a tax here of T dollars, and we've found the place where this vertical distance between the demand curve and the supply curve is the amount of the tax, T. So right here is going to be the quantity with the tax, let's call it Q2; right up here is going to be the buyer's price, PB, and right down here is going to be the seller's price, PS. Now I want to label those areas on there, and then we'll be able to figure out what happens to consumer and producer surplus. So let's call them A, B, C, D, E, and F. As usual, let's start by thinking about what the situation is going to be with no tax. With no tax, if we had the free market being able to operate, then the consumer surplus is going to be the area under the demand curve and above the price; that's A plus B plus C. Producer surplus is going to be the area under the price and above the supply curve, so that's D plus E plus F. And I know I've said this before, but obviously if you were given a market and you had to calculate consumer surplus, you wouldn't divide it up into these areas; you'd just calculate the triangle area. I'm dividing it because once we impose the tax, those are going to be the appropriate divisions. With no tax, deadweight loss is equal to zero and total surplus is maximized. Now let's impose the tax. With the tax, let's start with consumer surplus. Now there are two prices: there's the price the buyer pays and there's the price the seller gets to keep. The price the buyers pay is always going to be the highest price with a tax, and the price that sellers end up getting to put in their pocket is always going to be the lower one. Okay, so there should be no confusion as to which is the buyer's
price and which is the seller's price: the buyer's price is always going to be this one up here, because there's no way the sellers keep more than what the buyers paid, not with a tax. So now let's think about the area under the demand curve and above the buyer's price. With the tax, consumer surplus falls to just A, and so the tax causes consumers to lose B plus C. So consumers lose B plus C, and in terms of economic well-being, the tax clearly hurts consumers. Let's think about producers. Producer surplus is the area under the seller's price and above the supply curve; it's just area F. And if we think about the loss of producer surplus, producers lose D plus E, so any tax is also going to hurt producers. Let's think about the deadweight loss of the tax. Remember that with no tax, the free market quantity would be Q1. The tax causes a decrease in economic activity, so quantity falls to Q2, and we lose this chunk of total surplus: C plus E is the deadweight loss. Now B and D, those don't just vanish; B and D end up going to the government. So if we wanted to identify the amount of revenue that the government collects from the tax, here's government revenue: the government revenue is going to be equal to B plus D. Part of that, B, used to be consumer surplus that ends up in the pockets of the government, and part of it, D, used to be producer surplus that ends up in the pockets of the government. And then the government decides how they want to spend it; it could be that they spend it in such a way that some of that gets back to the consumers and producers. There's another way to calculate government revenue. You can also easily calculate government revenue by taking the amount of the tax and simply multiplying it by Q2. So if there's a $1 tax and 25 units get sold, then the government is going to collect $25: T times Q2. Notice that T times Q2 is also equal to the area of B plus D, because the vertical
distance, the distance between the two prices, is T, and this horizontal distance from right here out to there is Q2, and it's a rectangle, so we don't have to divide by 2. So if you take the vertical distance of T and multiply it times the horizontal distance of Q2, you're getting the area of that rectangle. Both of these are completely equivalent ways of calculating the amount of government revenue. Now let's stop here for a second and think about how to interpret this. What does this mean? If you were somebody who, I don't know, didn't like taxes or didn't like government, and you wanted to look at this and use it in a way that you think you'd like, then you could say, well, okay, here's the deadweight loss and here's the loss to consumers and here's the loss to producers, so we shouldn't have any taxes; we should allow the free market to operate freely all the time. Well, the fact of the matter is, if we're going to have a government, we have to have taxes. So I would not take this and say, okay, the general conclusion is taxes hurt consumers, taxes hurt producers, we should have no taxes. That's not the lesson to take away from this. We just have to be aware of the fact that any tax is going to create some deadweight loss, and so we want to be careful about what things we tax. We want to always remember that the government is powerless to determine which side of the market bears the biggest share of the burden; they don't have any power over that. The other thing that I would say is that we need to be aware of the fact that any tax is going to decrease the amount of that good or service that's transacted. And this is a conversation that I will have with a lot of face-to-face classes, a conversation where I say, let's pretend we were starting with a clean slate. Let's suppose there are no taxes. We have a government; we need to have a court system, we need to have a
law enforcement system, and there are lots of things that we need the government to do, and in order to do that we need some money, so we have to have some taxes. Let's start from a clean slate and think about what we should be taxing. That's a good conversation to think about. If you remember that anything you tax, there's going to be less of it, then I would argue, as an economist, if we're starting with a clean slate, let's start by taxing things we'd like to see less of. Okay, I would rather tax things that I'd rather see less of. If you look at what we do tend to tax in the US, the largest tax in the U.S. is the income tax, and the income tax is a tax on labor. When you look at it that way, you start to think, why would we choose that? We like people to be productive. We like people to go out there and do things that are productive for themselves and for the economy as a whole and for other people, so why did we choose to tax something that we like? If we tax labor, there will be less labor bought and sold; that's a fact. If you think about it that way, you start to realize that some of the things we tax, maybe we should think twice about. You know, I would argue that if you're somebody who would like to see less smoking, I certainly wouldn't have any problem with taxing that. Or if you would like to see people driving less, less congestion on the highways, I would be in favor of taxing gasoline; people would buy less gas, and they would drive less. So as an economist I would tend to target those things before I would target things we like. I always jokingly say that if it was up to me, I would tax D's and F's. I'd rather see fewer D's and F's in my class, so I would impose a tax on those: if you take my class and you get an F, you've got to pay me $500. We'd probably see fewer D's and F's. Of course you can't do that, but I would much rather tax things I would like to see less of. Let's talk for just a second about a subsidy. In a lot of ways, a subsidy is just the
opposite of a tax. Let's think about consumers' willingness to pay. Let's suppose we think about what would happen if the government paid you $1 every time you bought a pizza. Well, then your willingness to pay for pizzas would increase by $1, rather than decreasing by a dollar like it did with a tax. Or if you're a seller and the government decided to subsidize you, then that subsidy decreases your cost of production. So with a subsidy, instead of the demand curve shifting down by the amount of the tax, it would shift up by the amount of the subsidy. Our supply curve, with a tax, shifts up by the amount of the tax, but with a subsidy it would shift down by the amount of the subsidy. And when that happened, what you would see is that the quantity ends up increasing. So a subsidy, like I said, in a lot of ways is just the opposite of a tax: a tax reduces the quantity transacted in a market, and a subsidy increases the quantity transacted in a market. Here's an important point, though. You might think, well, if they're kind of opposites of each other, and if a tax creates deadweight loss by decreasing quantity, then wouldn't a subsidy increase total surplus? No, it turns out it doesn't. Remember that when we're talking about a market, here's the right quantity, Q star. We don't want too little of the quantity, and we don't want too much of the quantity. We don't want a policy that pushes the quantity out here to Q2, because that is going to create deadweight loss as well. That creates a situation where here's the cost of production at the margin and here's how much people value it at the margin; we would never want the quantity of Q2. Having too much of something is just as bad as having too little. Now again, you have to think about what's being subsidized and what the objective of the government is, and there's a whole lot of stuff that goes behind it. But when I say that a subsidy is the opposite of a tax, I don't want you to think that that means that
the deadweight loss all of a sudden somehow becomes a gain. There's deadweight loss with a subsidy also. Let's do one quick example. I think your book probably includes this example; if it doesn't, we can talk our way through it here real quick. In the 90s the government imposed what they called luxury taxes, and those were taxes on luxuries, things that rich people bought. Now, if you asked Congress they would not describe it as punishing rich people, but the idea behind those luxury taxes was that rich people needed to pay more, and so what they wanted to do was tax the things that rich people buy. So they imposed these luxury taxes; these were taxes on private jets and taxes on things like yachts and a whole bunch of other things. Let's just pick one of those and think about the impact that a tax on, say, yachts has. So let's suppose that this is the market for yachts. I think that's how you spell yacht; I don't write the word yachts enough to know, and the more I say the word yachts the weirder it starts to sound to me, but I'm pretty sure that's pretty close. Now let's think about the demand for yachts and the supply of yachts, and let's think about which is elastic and which is inelastic. Let's start by thinking about the demand curve. Remember that, all other things equal, demand tends to be more elastic when there are more substitutes for the good, and if we're thinking about rich people buying yachts, if the price of a yacht went up, there are lots of other things that rich people can spend their money on. So there are lots of substitutes for yachts, and what that results in is that the demand for yachts tends to be fairly elastic. So if we draw the demand curve for yachts, it tends to be pretty elastic. Now if we think about the supply curve for yachts, it turns out that for the people who make yachts, the yacht-making skills are pretty specialized; they tend not to transfer very easily to the production
of other goods. So what that means is the supply of yachts tends to be fairly inelastic; it tends to look something like that. That's what this market tends to look like. Now if we look at the equilibrium price P1 and the equilibrium quantity Q1 and we impose a tax on this, let's impose a fairly big tax: let's find the place where the vertical distance between our demand curve and our supply curve is the amount of the tax, and let's suppose it's something like that. So there's our tax. We can see that the quantity is going to fall; fewer yachts will be bought and sold, and it will indeed drive the buyers' price up, so rich people will pay more for yachts. The problem is that because the demand for yachts tends to be fairly elastic, they're not going to pay that much more. It's going to drive the price of yachts up, but not by very much, not compared to how much it drives the sellers' price down, because the supply curve for yachts is very inelastic; it's going to drive the sellers' price down relatively a lot compared to how much it drives the buyers' price up. So if we look at the burden of the luxury tax on yachts that fell on rich people, it's relatively small. If we look at the burden of the tax that falls on the suppliers, the sellers of yachts, who tend not to be rich people, we see that they end up bearing the biggest share of the burden. And that is exactly what happened in the 90s with the luxury taxes. Congress passed the taxes, the tax burdens fell disproportionately heavily on the sellers' side of the market, and so what ended up happening was yacht manufacturers were going out of business and saying, look, you guys are passing this tax, putting a tax on buyers, but it's falling on us. And of course politicians were saying, well, some of them know how this works and some of them don't, so there were politicians saying, okay, we're not sure exactly what's happening there, but it's rich people doing something behind the scenes. And the end result was that the luxury taxes were
repealed. But it's all very easy to understand if you just understand elasticity and understand that it doesn't matter which side of the market you tax. You either tax the market or you don't tax the market, and the government is powerless to determine which side of the market the burden is going to fall most heavily on. And unfortunately, you always hear politicians talking about luxury taxes; they may not call them luxury taxes, but these things are still getting discussed, and people are still talking about them without understanding anything about what's going on there. Let's draw one final picture, actually two final pictures. Let's just think about a price floor or a price ceiling compared to a tax. So let me draw a demand-supply picture here and a picture there; let me label things and then I'll get out of the way so that you can see it. We've got a demand-supply picture up top and a demand-supply picture at the bottom. Let's do a price floor up there in our top picture, and let's do a tax down here in our bottom picture. Let's think about a price floor that's imposed right here. What we know with a price floor is that our market is going to want to push the price down, but it can only push it down as far as the floor, and the result of that is that right there is our quantity and right out here is the number of units that sellers want to sell, and we get a surplus. So let's identify our quantity; we get a difference between the quantity demanded and the quantity supplied. If we had a price ceiling, our price would be down here, and we'd have a difference between the quantity supplied and the quantity demanded. A price floor or a price ceiling, when they're binding, creates a surplus or a shortage; in other words, quantity demanded is not equal to quantity supplied with a binding floor or ceiling. We get a picture that looks something like that. With a tax we get something different: we get a situation where we're at a lower quantity, and we get a buyer's
price up here and we get a seller's price down here. So we get a picture that in some ways kind of resembles the price floor and price ceiling picture, but let's think about what's going on down here. When you think about a tax, a tax creates a situation where there are two prices. Essentially, what a tax does is drive a wedge between the buyer's price and the seller's price, and the size of that wedge is the amount of the tax, T; that distance is T right there. But notice that a tax does not create a surplus or a shortage. If we look at where quantity demanded is and where quantity supplied is on that picture, we'll see that they are equal to each other. Now here's the interesting thing: they show up at different places on the picture. If we wanted to find quantity demanded, we would go over from the buyer's price to the demand curve, and we would hit it right there; there's quantity demanded. If we wanted to find quantity supplied, we'd go over from the seller's price to the supply curve, and we hit it right there; there's quantity supplied. They show up at different places in the picture, but notice they're both equal to that quantity. A tax does not create a surplus or a shortage; a tax simply drives a wedge between the buyer's price and the seller's price, and quantity demanded is still equal to quantity supplied. So the pictures look similar to each other, but there are some important differences. Keep in mind that both of these policies create deadweight loss equal to this triangle right there. So they look similar, but they're different. Hopefully that gives you an idea of some different types of government policies and how they may or may not achieve the objectives that people who argue for them claim, and I'll see you in another video.
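The main results of this video can be collected in one small computation. The linear curves below are hypothetical, chosen only for illustration (not taken from any of the lecture's drawings): demand P = 20 − Q, supply P = Q, and a $4 per-unit tax. The sketch finds the quantity where the vertical gap between the curves equals the tax, then reports both prices, the burden on each side, government revenue (T times Q2, the rectangle B + D), the deadweight loss (the triangle C + E), and checks that the tax drives a wedge between the two prices without creating a surplus or a shortage.

```python
# Hypothetical linear curves: demand P = 20 - Q, supply P = Q, tax T = 4.
a, b = 20.0, 1.0      # demand: P = a - b*Q
c, d = 0.0, 1.0       # supply: P = c + d*Q
T = 4.0               # per-unit tax

# No-tax equilibrium.
Q1 = (a - c) / (b + d)            # 10.0
P1 = a - b * Q1                   # 10.0

# With the tax: the quantity where the vertical gap (demand - supply) is T.
Q2 = (a - c - T) / (b + d)        # 8.0
PB = a - b * Q2                   # buyer's price: 12.0
PS = c + d * Q2                   # seller's price: 8.0 (PB - PS == T)

buyer_burden = PB - P1            # 2.0 of the $4 tax falls on buyers
seller_burden = P1 - PS           # 2.0 falls on sellers

gov_revenue = T * Q2              # rectangle B + D: 32.0
dwl = 0.5 * T * (Q1 - Q2)         # triangle C + E: 4.0

# Surplus accounting: free-market total = with-tax total + revenue + DWL.
CS_free = 0.5 * (a - P1) * Q1     # 50.0 (areas A + B + C)
PS_free = 0.5 * (P1 - c) * Q1     # 50.0 (areas D + E + F)
CS_tax = 0.5 * (a - PB) * Q2      # area A: 32.0
PS_tax = 0.5 * (PS - c) * Q2      # area F: 32.0
assert CS_free + PS_free == CS_tax + PS_tax + gov_revenue + dwl

# The tax is a wedge, not a shortage: quantity demanded at the buyer's
# price equals quantity supplied at the seller's price.
Qd = (a - PB) / b                 # 8.0
Qs = (PS - c) / d                 # 8.0
print(Qd == Qs == Q2)             # True
```

With these made-up numbers the $4 tax splits evenly only because the two slopes are equal; steepening either curve shifts more of the burden onto that (relatively inelastic) side, exactly as the elasticity discussion above describes.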
Principles of Microeconomics, Chapter 14: Perfect Competition (Part 1)
in this video we want to talk about how firms that are in perfectly competitive markets make decisions so we're going to think about how they choose what quantity to produce that maximizes their profit in our previous chapter we talked about the costs of production and so we're going to use what we learned in that chapter and we're going to combine it with the revenue information for a competitive firm and we'll think about how the firm makes its decisions so let's review quickly the characteristics of this type of market so in terms of a perfectly competitive market you already know two of them we're going to add a third one but the first two that you know there are lots of buyers and sellers and each buyer and seller is small compared to the size of the market so remember that a good example of a perfectly competitive market would be the gasoline market and so each of us is small compared to the size of the market I don't buy enough gas to have any impact on the price of gas you don't either so there are lots and lots of buyers and sellers and then the goods offered for sale are identical so I'm just going to say the goods are identical remember the discussion that we had in a previous video where we talked about the fact that what matters here is not literally whether the goods are identical but whether or not consumers perceive them as being identical so a good example of that is we talked about aspirin and we talked about gasoline and so what's important here is the consumers believe that the goods are identical okay now there's going to be a third characteristic that we're going to add here and the third one is that there are no barriers to entry sometimes you'll hear me or textbooks just call this free entry now let's think about what that means so if you wanted to start a business free entry means that you can start a business there's not anything that prohibits you from doing it of course you have to
obey all the rules you have to have a business license and you've got to pay your taxes and you've got to follow the laws but there's nothing to prohibit you from starting a business okay so there's free entry so if we think about what these three characteristics together imply actually let's think about what these two imply remember that anytime there are a lot of buyers and sellers and the goods are identical then both the buyers and the sellers are price takers so I'm going to say buyers and sellers are price takers that means they take prices as given that does not mean that price never changes every once in a while I'll have a student that mistakes this for me saying that the price does not change the price will change we've talked about what happens in a market if demand changes or supply changes it's going to change the intersection of the demand and supply curve it's going to change the price what we're saying here is that the buyers and the sellers just take that as given it's not under their control now let's talk about in terms of a particular firm what this means if we think about the demand curve that a particular firm faces this means that each particular firm if I were to label this a firm the demand curve that a particular firm faces is perfectly elastic at the market price there's the demand curve that this firm faces and here's the market price the example that I would have used back if you've been listening to the videos then you've heard this example before in a face-to-face class the way that I would motivate this is to say let's suppose that all of you are selling the exact same thing there's lots of you and you're all sitting in a room and you're all selling say these pins so you've got a box of these pins in front of you and all of you have a price of a dollar well if I walk into the room to buy a pin from you and you see me walking up to you and you say to yourself oh here comes the doctor I'm gonna gouge him a little bit and when I walk up you say oh for you the price is a dollar fifty then I'm just going to step to the person next to you and buy it for a dollar so in this case if you tried to raise your price any above the market price you sell nothing so at this price you sell zero and this price you sell zero and that price right there you sell zero but as long as you charge the market price there are enough buyers in the market that you can sell all you want there are lots and lots of buyers so you can sell this quantity at that price or you can sell that quantity at that price you can sell some quantity way out here at that price okay so the demand curve that you face is perfectly elastic now we are not saying that the market demand for this good is perfectly elastic let me draw you a little picture here that relates this to the market if I were to draw a picture of the market right here and then a picture of the situation that the firm is in our market is going to look like this we've got a market demand curve and a market supply curve here's price and quantity and right there is the equilibrium price and this demand curve right here is not perfectly elastic it's just downward sloping but what that results in is that the demand curve that each firm faces is a different demand curve it's perfectly elastic at the market price so this is the market and right here would be a particular firm that we're talking about Q P so each firm faces a perfectly elastic demand curve remember that the examples that we typically use in a classroom about what perfect competition looks like out there in the real world there aren't a ton of examples the stock market is a good example of a perfectly competitive market commodity markets like corn and soybean those are often used as examples of perfectly competitive markets and then we could think about the gasoline market but we have to be careful there because if we're talking about a small town with just a couple of sellers just a
couple of gas stations then we're violating that that assumption but if we're talking about a bigger town or a city that's got lots of gas stations then it's going to behave very much like a perfectly competitive market let's talk about the revenue of a competitive firm so we know so let's think about revenue we know that the profit of a firm is equal to total revenue minus total cost we've seen that before we spent a previous video talking all about how costs behave now we're going to think about how revenue behaves and once we understand how revenue behaves we can put those together and we can see how a firm's going to maximize profit so revenue total revenue is just price times quantity and then we're going to subtract off total costs so there's what profit looks like what I want to do is put together a little table here let's let's put it over here so let's put Q up here let's think about a firm that's going to be producing some different levels of output let's go from 0 up to 8 so we could think about this as maybe a dairy firm that's producing gallons of milk let's think about the price that they can sell milk for let's suppose milk sells for six dollars a gallon so the market demand curve in the market supply curve intersect at a price of six dollars that's not under the control of the firm the firm is a price taker so the price that they can sell milk at is six dollars whether they sell zero gallons of milk or eight gallons of milk or 20 gallons of milk the price they can sell it for is six dollars now it could change tomorrow could change next week but for right now we're gonna focus on the situation they're in and right now they can sell any number of gallons they want at six dollars each from that we can calculate their total revenue we just take price times quantity before we do that let's remind ourselves of what I've got right here right here I've got the demand curve that the firm faces if we were to graph that this would be a picture at a quantity 
of zero the firm can sell a gallon of milk for six dollars at one the price is six at two the price is six so this would be a horizontal demand curve at a price of six dollars so that's what I've got right there is the demand curve that the firm faces now let's figure out what total revenue looks like total revenue is just price times quantity if they sell zero gallons of milk they obviously get zero total revenue one gallon of milk for six dollars they earn six dollars of revenue two at six dollars each that's 12 so you can see this is going up by six each time so 18 24 30 36 42 up to 48 they sell eight gallons at six dollars each they make 48 now that's not profit that's just total revenue right now let's think about a measure that we're going to call average revenue I'm going to abbreviate average revenue AR average revenue is just going to be total revenue divided by quantity remember that total revenue is just price times quantity so average revenue you can see these two Q's would cancel out average revenue we're going to see is always equal to price that's going to be true for all firms but we can calculate it right now if we want to figure out average revenue we can take total revenue and divide it by Q we can't divide by zero so I'm not going to do it right there we sell one gallon of milk for $6 of total revenue divided by the one gallon of milk that's 6 then 12 dollars divided by 2 that's 6 and 18 divided by 3 that's 6 this is going to be 6 all the way down and that should make sense because average revenue is equal to price and price is 6 and then finally we can calculate what we're really after and that's going to be marginal revenue marginal revenue which we're going to abbreviate MR marginal revenue is equal to the change in total revenue when you change quantity and we're typically going to be thinking about a change in quantity of one unit I can rewrite this this way I can take these Greek deltas out and I can insert an English D I get that marginal
revenue is equal to the change in total revenue divided by the change in quantity and that simply tells you that the marginal revenue is the slope of the total revenue curve it's the change in total revenue when you change quantity we can figure out what marginal revenue looks like here here's we're going to be thinking about starting at 0 and going to 1 so I'm not going to calculate marginal revenue for the 0 unit I'm going to go to 1 we can see that when we go from 0 to 1 units our total revenue goes from 0 to 6 so our change in total revenue was 6 dollars when we go from 1 to 2 our total revenue goes from 6 to 12 so again it changes by 6 dollars from 2 to 3 our total revenue goes from 12 to 18 it's always changing by six dollars 6 all the way down so that leads us to the first important conclusion that we get from this and that is for a perfectly competitive firm price and marginal revenue are equal okay that is important we're going to use that we will see that that will not be true for other types of firms what I want to do now is clear this off and then we're going to reproduce part of this table and then we're going to link it up with some cost information and we're gonna see what's the right quantity here to produce so we've got some of the information from that previous table that we had we've got our quantity and we've got the amount of revenue that they earn at each one of each level of quantity let's add to that some cost information so let me give you some total cost numbers okay so let's suppose the total cost if they produce zero gallons of milk their total cost is three dollars and then five eight twelve seventeen twenty three thirty thirty eight and forty seven now hopefully you remember that what this means is that fixed cost is equal to three dollars you could figure out what your variable cost is you could go through and figure out average fixed cost average variable cost average total cost let's go ahead and figure out just what profit is 
because we know profit is equal to total revenue minus total cost so let's calculate our profit if the firm produces no milk they have no revenue they incur three dollars of cost so their profit would be negative three dollars they lose three dollars if they don't produce anything if they produce one gallon of milk they're going to sell it for six dollars they're going to incur five dollars of cost so they make profit of $1 the rest of these would look like this four six seven seven six four one first thing that we can see is that profit is not made in volume you can see that if you think that profits made in volume then you would think Oh need to produce and sell as much milk as possible but what's happening is that profit initially goes up it reaches a maximum and then it starts to go back down right if we were to graph profit here's what profit typically looks like if we put profit on the vertical axis and quantity on the horizontal axis profit is typically going to look like this profit is not made in volume if you produce enough you can drive your profit right back down to zero and if you produce past that you can drive your profit negative what happens is we want to know where's this what's the quantity that we should produce that maximizes the amount of profit that we can earn in this is not profit per unit this is the total amount of profit that you're going to earn let's put up here so we've just seen that right here that's the place where profit is maximized so what we can tell from this table is the firm should produce four or five gallons of milk now we can narrow this down a little bit more and so let's do that but just by looking at the profit numbers we can see get a good idea of where this firms going to want to be let's figure out what marginal revenue looks like so we already knew that marginal revenue was always six right at each one of these levels of output marginal revenue is six let's add up here marginal cost so all we have to do is look at 
our change in total cost so we're not going to calculate it at zero we need to go from zero to one and when we do our total cost goes from three to five so our marginal cost is 2 then from five to eight it's three then from eight to twelve it's four you can see that our marginal cost is going up by a dollar each time it's going to be five six seven eight nine there's the marginal cost now let's look we already know where profit is maximized it's right in here but let's look at what's happening to marginal revenue and marginal cost right in there what we see is that right in there marginal revenue and marginal cost become equal every place down here they are different marginal revenue is greater than marginal cost and then up here marginal cost is greater than marginal revenue so what we can see is that in order to maximize profit the firm is going to produce the quantity where marginal revenue and marginal cost are equal right there is the place where profit is maximized so let's put up here profit is maximized when marginal revenue is exactly equal to marginal cost that's going to be very important this is going to be true for all firms to maximize profit any firm out there is going to produce the quantity where marginal revenue equals marginal cost let's just kind of take it one gallon at a time and think about a few different decisions that the firm would need to make so let's suppose we didn't know this suppose we didn't have this table put together let's suppose we were just thinking about the first gallon of milk well if we were thinking about the first gallon so let's put up here first gallon of milk for the first gallon of milk marginal revenue is equal to six and the marginal cost is equal to two so if we think about what's going on there remember the decision rule for a decision maker the rule for a decision maker is that you take an action if and only if the marginal benefit is bigger
than the marginal cost well in this case marginal cost is obvious the marginal benefit to the firm would be the revenue that they earn so this certainly passes the test the firm would want to produce the first gallon of milk because it adds six dollars to their revenue and only adds two dollars to their cost and then if we were to go to the second gallon of milk then we see that so for the second gallon what we see is that the marginal revenue again is six and now the marginal cost is different oops it's three but this still passes the test marginal revenue is still bigger than marginal cost and so the firm would certainly want to produce the second gallon of milk and this would be true about the third gallon of milk in the fourth gallon of milk let's skip over the fifth one for just a second and let's go to the seventh gallon of milk okay so let's put a little line here and let's just think about the seventh gallon so would the firm want to produce the seventh gallon well the seventh gallon this one right down here has a marginal revenue of six dollars and a marginal cost of eight dollars so this gallon would add six dollars to our revenue but eight dollars to our cost so we certainly would not want to produce that gallon so you can see that any place up here marginal revenue will be bigger than marginal cost and we wouldn't want to stop here we wouldn't want to stop there as long as marginal revenue is bigger than marginal cost you should do more of that and we would never want to be down in this range because our marginal cost now is bigger than our marginal revenue so if you're doing something and at the for the level that you're doing it the marginal cost is bigger than the marginal benefit then you need to do less of that so we never want to be back here we never want to be down here that just leaves right there for the place where marginal revenue is equal to marginal cost let me just show you as an aside profit is equal to total revenue minus total cost the 
profit function looks like this if you want to maximize a function then what we need to do is we need to find the place right up here's the maximum and what we see is that at that maximum the slope is equal to zero so all we need to do is find the place where the slope of profit function is equal to zero well here is the profit function if we want to find the place where the slope is equal to zero we need to take the derivative of this I will not ask you to do this on a test you do not need to know how to take a derivative you don't need to reproduce this I just want you to see from a calculus perspective why this has to be true if we take the derivative of profit with respect to quantity that's equal to the derivative of total revenue with respect to quantity minus the derivative of total cost with respect to quantity here's the slope and we want that slope to be equal to zero this is just the definition of marginal revenue it's the change in total revenue when you change quantity so this says that marginal revenue - this is marginal cost it's the change in total cost when you change quantity this says that for profit to be maximized marginal revenue minus marginal cost must be equal to zero if we move that to the other side this says marginal revenue has to be equal to marginal cost so we can prove that this has to be true now we're just simply looking at it a different way we don't need to do the calculus of it to be able to see that profit is maximized where marginal revenue equals marginal cost what we want to do now is is clear this off and then think about what this looks like graphically and it's we'll see that it's actually very easy to apply this given the cost curves that we've learned about let's graph the marginal revenue and demand curves that we had i erase the table there but let's think about what those look like so we drew the demand curve that the firm faced in this particular example the demand curve that the firm faced was horizontal at the 
market price so it looked like that there's a demand curve remember our marginal revenue column was all a bunch of sixes also so a quantity of zero at 6 and at a quantity of 1 it was six and under quantity of eight it was six so at all of these quantities it was also six that means that the marginal revenue curve lies right on top of the demand curve that the firm faces so there's the marginal revenue curve that the firm faces so now what we want to do is we want to take this picture and put it together with our cost curve picture that we talked about in a previous chapter and then we'll be able to see that it's very easy to identify where marginal revenue and marginal costs are equal for a firm so let's think about the firm's marginal cost curve and its supply decision so I'm going to draw a competitive firm so here's a firm Q P let's think about the cost curves so remember we've got a marginal cost curve that's upward sloping we've got an average total cost curve and we've got an average variable cost curve there's the cost curves for a firm our average curves are you shaped and they intersect that marginal cost curve at the bottom of the average curves what we want to do now so this represents the costs of the firm we want to put in here the revenue of the firm so I'm going to actually move mine up a little bit I'm going to do it in a different color I don't know if this will show up as a different color but let's suppose the market price is right there there's our market price so we know that the marginal revenue curve is going to be horizontal at the market price so we can draw our marginal revenue curve up here there's marginal revenue it's equal to the demand curve that the firm faces and we also know that what we're looking for is we're looking for the place where marginal revenue and marginal costs are equal well here's marginal revenue here's marginal cost right there is the place where marginal revenue and marginal cost are equal in that picture and so 
the profit maximizing quantity for this firm to produce is right there I'm going to call it Q star there is the profit maximizing quantity for this competitive firm to produce if they produce any less than that they're not going to be maximizing profit and if they produce any more than that they will not be maximizing profit let's draw another picture here where we think about a little bit more about why other quantities do not maximize profit in this picture I'm just going to put the marginal cost and let's just put the marginal revenue curve let's suppose right there is the market price I'll put the marginal revenue curve it's right there so the profit maximizing quantity for this firm to produce would be right here let's think about this quantity just think about what the problem is at that with that quantity well here's the problem we can go up from that quantity to see what the marginal cost is so right there would be let's call that Q 1 here's marginal cost 1 that's the marginal cost of producing that quantity we can go up to the marginal revenue and see the marginal revenue of producing that quantity we'll call it M R 1 so you can see why we don't want to stop at Q 1 at Q 1 the marginal revenue is bigger than the marginal cost and so we certainly do want to produce Q 1 but we don't want to stop there right we could produce this quantity or this quantity or this quantity and the marginal revenue for all of those will be bigger than the marginal cost that's going to be true all the way up to this point at that point the marginal revenue is no longer bigger than the marginal cost they are equal so we stopped increasing let's think about this quantity right here Q 2 why would we not want to produce a quantity higher then let's go ahead and call that Q star well at Q 2 if we go up we can see that here's the marginal revenue call it mr2 and if we go up here we see that there's the marginal cost MC 2 so we can see that for that quantity Q 2 the marginal cost is 
bigger than the marginal revenue that doesn't pass the test we don't want to produce Q 2 because we get less in terms of our revenue at the margin than the costs that we incur at the margin so we don't want to produce any quantity to the left of Q star we don't want to produce any quantity to the right of Q star that just leaves Q star as the right quantity to produce mmm so now we can understand very easily how the firm is going to choose the profit maximizing quantity to produce it all depends on where the market price is if the market price was lower then our marginal revenue curve would be horizontal at a lower price and we would see where that marginal revenue curve intersects the marginal cost curve and that would give us the quantity that they want to produce so let's think about what would happen let's call this P 1 and we'll call that Q 1 let's think about what would happen if the market price fell so if the market price fell and remember the thing that would cause the market price to fall would be something that happens over here in the market picture this is the market price but remember this is just one firm out of many that are in the market this firm has no control over that price so when the market drives the price down there's nothing that this firm can do about it let's suppose the market drives drives the price down to P 2 when the price falls to P 2 then what we see is we can draw the marginal revenue curve there so this is our new marginal revenue curve and now marginal revenue and marginal cost are equal at that point and so what we see is that at a lower price the firm will produce a smaller quantity there would be the optimal quantity for the firm to produce at a price of p2 so when the market changes the price then of course the firm will react if the price goes down the firm will react by producing less if the price goes up the firm will react by producing more so what we see is that for the firm the price quantity combination is always 
found along this marginal cost curve now if you think back to the very beginning when you first started learning about demand and supply what you would have seen is that when we first introduced demand and supply we draw a supply curve and we say that a firm has a supply curve it looks like this what this tells us is that when the price is low the firm wants to supply say this quantity we'll call it p1 q1 if the price goes up to say p2 the firm wants to supply a higher quantity q2 sellers like high prices so at higher prices they want to sell more well what you should start to see real quickly is that what we were calling the supply curve back then is nothing more than the firm's marginal cost curve right the supply curve back then showed us the relationship between the price and the number of units that the firm wants to sell which is exactly what is happening in this picture if you ignore these average cost curves just focus on the marginal cost curve what you see is that for any price over here all we do is go straight over to that marginal cost curve and that tells us the quantity that they're going to produce so for a competitive firm the marginal cost curve is its supply curve so let's just put here this is important the marginal cost curve is the competitive firm supply curve marginal cost curve is the competitive firm supply curve now it turns out that not all of the marginal cost curve is important for us so what we want to do now is think about how low the price can go and the firm's still produce the quantity founded by the marginal cost curve so let's think about the what we're going to call the short-run decision to shut down short-run decision to shut down now let's talk about what we mean when we say shut down so shut down is simply going to be a short-run decision to not produce anything that's different from another decision that we'll talk about and that is the decision to exit a market okay so if we're thinking about exiting exit is a long-run 
decision to leave the market shut down is just a short-run thing most firms shut down at night so if you think about most businesses they shut their doors at night they're not open and then the next morning they open back up now the reason they shut their doors at night is because demand for their services is so low that it's just not worth it to be open so they shut down that's different from a firm that exits if you're a firm and you exit then you would sell all of the equipment all of the buildings you get out of the market okay so shut down is a short-run decision exit is a long-run decision okay we're gonna focus on this one let's focus on the short-run decision to shut down if you shut down then let's say it this way a firm that shuts down is able to reduce its variable cost to zero but shutting down does not reduce your fixed cost to zero so let's say a firm shuts down for that firm of course their total revenue will be equal to zero if you shut down and you're not selling anything then you bring in no revenue but your variable cost is also equal to zero because if you shut down you don't need any workers so shutting down allows you to move your variable cost to zero fixed cost is not zero remember your fixed cost you have to pay those no matter what and the period of time during which you have to pay them is what we call the short-run so let's talk about when a firm will shut down a firm will stay open as long as the revenue that it earns from producing is greater than its variable cost so let's write that a firm shuts down if total revenue is less than variable cost now let's think about that for a second and let me give you an example of when you've probably seen something like this and you might have thought about it but not thought about it in these terms but if you've ever been at a restaurant say for lunch and let's suppose this is a popular restaurant in
the evening but let's say you're there at lunch and there's hardly anybody there let's say you're the only person there and you're eating lunch and you're thinking to yourself they can't be making money I'm the only person in here right and for a lot of people they see that and they don't understand it well here's what's going on you're probably right that they are not making money their profit is probably negative but for a lot of people they go from there to thinking that if your profits negative you shouldn't be open at all but the reality of it is that there are going to be some times when your profit is negative and you should still be open and it has to do with this okay so let's work our way through this a firm shuts down if its total revenue is less than its variable cost let me give you an example we'll come back to this here in a second let me give you an example let's consider a business consider a business and let's suppose that the fixed cost for this business is equal to five hundred dollars let's suppose variable cost for this business is equal to a thousand dollars and let's suppose total revenue is equal to twelve hundred now let's think about what this business should do so if the firm produces remember profit is equal to total revenue minus total cost so if they produce if they produce then our prop the profit of this business is going to be total revenue which is twelve hundred minus total cost now their total cost is going to be fifteen hundred so if they produce they will lose three hundred dollars and your first reaction is to say oh boy that's a bad idea shouldn't be doing that but now let's think about what's going to happen if they shut down so if it shuts down then we can calculate their profit if they shut down they get no revenue and they incur no variable cost but they can't get out of their fixed cost so if they shut down they lose five hundred dollars so once you realize that the two options here are to either lose three hundred 
dollars or to lose five hundred dollars then you start to realize this is the best outcome they will stay open and they will lose money but they will lose less money by staying open than if they shut down okay now let's change this just a little bit let's suppose that now our fixed cost is 500 let's suppose our variable cost is still a thousand but now let's suppose our total revenue is only $900 now let's think about what this firm's going to do so let's figure out what their profit will be if they produce if they produce they're going to earn 900 dollars of revenue minus the $1500 of cost they're going to lose $600 if they stay open if they shut down it's the same situation as over here their revenue would be zero their variable cost would be zero they lose the fixed cost if they shut down their profit would be equal to negative 500 so in this case this firm will choose to shut down they would rather lose 500 than $600 so notice that in one situation they're going to produce in the other situation they're going to shut down in every possible scenario that I've put up here their profit is negative but when your profit is negative what it means to maximize profit is to lose the smallest amount possible so there's nothing that the firm can do here they're going to be on the hook for their fixed cost no matter what now let's think about what's different between this right here and what I've got right here notice I didn't change anything about my costs all I did was right here my revenue is bigger than my variable cost and down here my revenue is smaller than my variable cost so what we get here is that a firm will shut down if its revenue is less than its variable cost not if profit is negative but if revenue is less than variable cost now let's take that right here and let's work with it a little bit I want to change it let's do it right down here so if we take you're going to shut down if total revenue is less than variable cost
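the produce-or-shut-down comparison in the two examples above can be sketched as a few lines of Python (the numbers are the lecture's own two scenarios; the function names are just illustrative):

```python
# Short-run shutdown comparison: stay open and earn TR - (FC + VC),
# or shut down and eat the fixed cost, earning -FC.

def profit_if_produce(total_revenue, fixed_cost, variable_cost):
    """Profit when the firm stays open: TR - (FC + VC)."""
    return total_revenue - (fixed_cost + variable_cost)

def profit_if_shut_down(fixed_cost):
    """Shutting down zeroes revenue and variable cost, but FC is still owed."""
    return -fixed_cost

def best_choice(total_revenue, fixed_cost, variable_cost):
    produce = profit_if_produce(total_revenue, fixed_cost, variable_cost)
    shut = profit_if_shut_down(fixed_cost)
    # equivalent test: produce whenever total revenue covers variable cost
    return ("produce", produce) if produce >= shut else ("shut down", shut)

# first example: TR = 1200, FC = 500, VC = 1000
print(best_choice(1200, 500, 1000))   # ('produce', -300): lose 300, not 500
# second example: TR = 900, same costs
print(best_choice(900, 500, 1000))    # ('shut down', -500): lose 500, not 600
```

in both scenarios profit is negative either way, and the code just picks the smaller loss, which is exactly the revenue-versus-variable-cost comparison.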
that's what we've got right there I'm going to divide both sides by Q I haven't changed anything that says we're going to shut down if average revenue that's the definition of average revenue is less than average variable cost you're going to shut down if average revenue is less than average variable cost well we saw just a little bit ago that average revenue is always equal to price so this gets us this condition you shut down if price falls below average variable cost that's important that's what we're going to call the firm's short-run shutdown condition so what I need to do is let's clear this off and then I'll show you what that looks like graphically let's take a look at what this shutdown condition looks like graphically so remember our shutdown condition looks like this you shut down if price falls below average variable cost and let's for a second just think about why that makes sense because remember the firm that we had in our previous example they had to pay their fixed costs no matter what there's nothing they could do that's the idea behind a fixed cost you can't get out of it so the question for the firm is should we stay open should we bring our workers in and stay open or should we shut down well if you can bring your workers in and cover all of the cost of your workers then you're going to be able to cover at least some of that fixed cost so you lose less than the full fixed cost so in that case if you can bring your workers in and cover the cost of workers the variable cost then it's worth it to stay open on the other hand if you're gonna bring the workers in and you can't even cover all of that cost then you shouldn't do it so if your revenue is less than what your cost of workers your variable cost would be then you just shut down just lose the fixed cost rather than also part of the variable cost now we put this on a per unit basis so the original example we had was you're going to shut down if your total revenue is less
than your variable cost that's true this says the same thing we're just putting it on a per unit basis so if your price per unit is less than your average variable cost per unit you shut down and now we can see what that looks like on our picture here so if we've got our marginal cost let's put our average total cost and our average variable cost here's what this says this says that as long as the price stays above the bottom of that average variable cost curve as long as it's up in here you're fine but if price ever falls down here below average variable cost then you need to shut down so the bottom of the average variable cost curve is right here that's going to be what we're going to call the shutdown point that's the price above which the firm will continue to produce and below which they are going to shut down so as long as price is below that they shut down they produce nothing that would be part of the firm's supply curve I'm going to connect these with a line here once the price gets above this point that minimum average variable cost then the supply curve is the marginal cost curve wherever our price was we would draw the marginal revenue curve and see where it intersected marginal cost so this is the competitive firm's short-run supply curve it's the marginal cost curve all the way down to the bottom of the average variable cost curve and then from that point on down it's the vertical axis okay they would shut down they would produce nothing at those prices so that's what our shutdown condition looks like let's think now about sunk costs let's talk a little bit about what a sunk cost is we would call a cost sunk if it's already been committed and it can't be recovered so you've probably heard something about sunk costs somebody may have said something to you like don't cry over spilt milk and the idea there is that once the milk has been spilt it doesn't matter what
you do from that point on you can't undo that so in terms of making good decisions from that point on it's not useful to cry over the fact that the milk was spilt okay so once a cost has been committed and can't be recovered then it's not useful for making good decisions it's very different from an opportunity cost remember with an opportunity cost the magnitude of the cost depends on the behavior that you engage in you have control over an opportunity cost you can change your behavior and change its magnitude a sunk cost you can't what this means is that sunk costs are not useful for good decision-making okay so not useful for decision making so if there's something that you can't control you need to avoid thinking about it for making decisions from this point on that doesn't mean that you can't feel bad about it but it's not useful for making decisions the example that I usually use in class would be let's suppose you have two options to choose from a and B and let's suppose the first characteristic of option A is you lose your car and the first characteristic of option B is you lose your car let's suppose the second characteristic of option A is you lose your house and the second characteristic of option B you lose your house the third characteristic of option A is you flunk this class the third characteristic of option B is you pass this class now if you have to choose between a and B then what you start to realize real quickly is this is not useful for making good decisions it doesn't distinguish between a and B you're going to lose your car no matter whether you choose a or you choose B so we could erase that off the board because it's not useful for good decision-making it's also the case that this is not useful you're gonna lose your house whether you choose a or b so you've lost your car and you've lost your house the only thing that's different between a and B is that with a you flunk this class
and with B you pass this class these two things are the only pieces of information that are useful for distinguishing between a and B those other things are sunk costs sunk costs are not good for making decisions only the things that you have control over are good for making decisions let's think about why this matters if we think about what happens in this picture let's suppose we go to this picture and let's think about what would happen in this picture if we change the magnitude of fixed costs remember that average fixed cost shows up as the difference between these two curves right here we saw that in the previous chapter so if I were to increase fixed cost our average variable cost curve doesn't shift but this average total cost curve is going to shift up well notice that changing our fixed cost is not going to have any impact on the firm's supply curve because the thing that determines the firm's supply curve is this average variable cost curve it's the marginal cost curve all the way down to that point it doesn't matter where that average total cost curve is it could be up here it could be down here it could be right there that doesn't matter so what we're seeing here is that firms are going to ignore their fixed costs when they make production decisions that's counterintuitive for a lot of students I can put on a test true or false firms ignore their fixed costs when making production decisions and students will say oh gosh surely well-run firms are not going to ignore anything and so they'll answer false but that's true firms ignore their fixed costs when making production decisions so now let's think about the long-run decision to exit so long-run decision I'm going to say to enter or exit because we can think about when firms will want to enter a market when firms will want to exit a market and this one's actually easy to understand if you're an
entrepreneur and you're looking for a market to enter you're going to enter a market where positive profits are being made you're not going to be entering a market where firms are losing money so a firm exits the market if their profit is negative a firm will exit if total revenue is less than total cost a firm will enter if profit is positive if total revenue is greater than total cost now let's manipulate this one for a second we're gonna exit if total revenue is less than total cost I'm going to do the same thing that I did with my short-run shutdown condition I'm going to take total revenue less than total cost I'm going to divide by Q this is average revenue so they're going to exit if average revenue is less than average total cost average revenue is equal to price so a firm will exit if price is less than average total cost firm exits if price is less than average total cost this is just another way of saying that you exit if profit is negative so let's think about what that means if we wanted to think about the firm's long-run supply curve let me draw a picture of it right down here so let's put on here the firm's marginal cost curve and then we're going to put average total cost here we would not need to put an average variable cost because all costs are variable in the long run so our average variable cost curve would be the same as that average total cost curve all costs are variable but what this tells us is the firm's long-run supply curve is the portion of its marginal cost curve all the way down to where price is equal to the bottom of that average total cost curve if price ever falls below average total cost then the firm is going to exit the market they would produce nothing so this would be the part of the long run supply curve I'm going to connect it with a line and then the rest of the firm's long-run supply curve would be its marginal cost curve above that average total cost
curve this is the competitive firm's long-run supply curve it is the portion of the marginal cost curve above the average total cost curve so now let's think about the competitive firm's profit maximizing strategy how does a competitive firm maximize profit well here's the first rule you produce the quantity where marginal revenue is equal to marginal cost but now here's what we've seen for a competitive firm we've seen that price is equal to marginal revenue well if price equals marginal revenue and marginal revenue equals marginal cost then the profit maximizing condition for a firm is to produce the quantity where price equals marginal cost that is extremely important produce the quantity where price equals marginal cost the way that looks is we've got our marginal cost curve here we know that all the firm does is it looks where that marginal revenue curve intersects marginal cost there's the quantity okay but now what we've seen is that that doesn't happen at every price we've now got some ranges what we see here is that in the short-run if price were ever to fall below the bottom of the average variable cost curve the firm is going to shut down they will produce nothing that's what the short-run looks like in the long run they would exit the market if price were ever below average total cost okay so our short-run firm's supply curve has this little portion on it right here that little tail where the firm would be losing money for any price in this range down in here profit will be negative but the firm will still produce a positive quantity but if price were ever to fall down here they shut down okay so this gives us a good idea of what the firm's supply curve looks like in the short run in the long run their marginal cost curve is the supply curve but not all of it this little tail of the marginal cost curve down below the bottom of the average variable cost that's irrelevant the firm never
pays attention to it what we want to do now is clear this off and we'll take a look at what profit looks like for a perfectly competitive firm let's talk for just a second about how to visually see the magnitude of profit on the picture that we've been drawing for the firm let's talk first about what profit looks like algebraically so let's think about profit profit is equal to total revenue minus total cost so I'm just going to do a little bit of manipulating things here what I'm going to do is I'm going to multiply and divide each of these by Q so if I take total revenue multiplied by quantity divided by quantity I haven't changed anything minus total cost multiplied by quantity divided by quantity that's another way to write profit and again these Q's would cancel and those Q's would cancel I haven't really done anything but now I can recognize that's average revenue and that is average total cost so we can write it this way this is equal to P times Q minus average total cost times Q now if you're looking at this you might be saying hold it why did we go through this step right we ended up with total revenue is equal to P times Q we already knew that and you're right I mean this is really kind of an unnecessary step to go through but we get to the same place in the end and it demonstrates that a lot of times there's more than one way to get to the answer that you're thinking about what we can do now is we can factor out Q so I can write profit is equal to price minus average total cost multiplied by Q and this is an important version of profit that you need to remember the first version is what profit looks like in total if we're thinking about total revenue and total cost this version is what profit looks like on a per unit basis and remember the pictures that we're drawing are on a per unit basis we're talking about average revenue and marginal revenue and marginal cost so we're thinking about it per unit this term right there that's just profit per unit whatever the price is minus your cost on
average this is how much profit you make per unit so if you can sell it for ten dollars and on average it costs you eight dollars to make it then you make two dollars per unit and then if we just multiply that by the number of units that you produce and sell then that gives us total profit so this and this say the exact same thing they're just different ways of looking at it okay but now this one we can graph so let's take a look at what that looks like graphically I'm going to put up here a marginal cost curve and an average total cost curve and let's put a price up here let's suppose that the price for this good is right here there's the market determined price the firm didn't have any control over that now we can draw the marginal revenue curve we know that the marginal revenue curve comes over like that there's marginal revenue we look where marginal revenue equals marginal cost and that happens right here so the firm is going to produce that quantity that's their profit maximizing quantity now if we want to identify profit we've got price we've got quantity right here now what we need is average total cost well if we go up from this quantity to the average total cost curve we hit it right there there's the average total cost of producing that particular level of output now we've got the three things we need this distance right here this is price minus average total cost this distance right there is that term and then this distance right here is Q right the distance from the origin out to the quantity is Q if the quantity were 90 then this distance would be 90 units so what this says is this vertical distance multiplied by a horizontal distance is profit that's telling us that the area of this rectangle is equal to the profit that this firm earns that is profit greater than zero this firm is earning a positive profit the reason they are earning a positive profit is that their price is greater than their average total cost curve where it's greater than average 
total cost any price that is up above the average total cost curve will create a positive profit for the firm all right if price is equal to average total cost then this term would be equal to 0 and then 0 times Q is zero profit would be equal to zero let's draw a picture of a perfectly competitive firm earning a negative profit so let's put our marginal cost curve up here here's average total cost now what we need for profit to be negative is we need average total cost to be bigger than price well that's easy all I would need to do is put my price somewhere down below the average total cost curve so if I put it right here suppose there's my price then I draw my marginal revenue curve I look where marginal revenue equals marginal cost that happens right here there's the quantity I go up from that quantity until I hit the average total cost curve and I hit it right there so there's average total cost this area would represent the loss the firm earns this firm would earn a profit that is negative a profit that's less than zero so you can see that if price is above that average total cost curve above the bottom of it then the profit is going to be positive if price were below the bottom of this average total cost curve like it is here profit is negative let's just finish up here by thinking about where the price would need to be for profit to be equal to zero so if we had our marginal cost curve and our average total cost right here well if price is ever above the bottom of this average total cost curve profit's positive and if price is ever below the bottom it's negative which means that if price were right there then we would draw the marginal revenue curve there's marginal revenue it intersects marginal cost right there there is the quantity that the firm would produce and in this case the profit for the firm would be equal to zero so you can see that it's very easy to see what profit looks like visually on this picture so let's kind of
review what we've done here so we talked about what the revenue curve looks like for a perfectly competitive firm in particular the marginal revenue curve and we saw that it's horizontal at the market price we know that the firm maximizes profit by producing the quantity where marginal revenue and marginal cost are equal so graphically all we have to do is draw this horizontal line at the market price and see where it hits the marginal cost curve that gives us the quantity the firm's going to produce we talked about the short-run shutdown condition and we saw that the marginal cost curve is the supply curve but only down to the bottom of that average variable cost curve in the short-run I talked about the firm's long-run supply curve and that's going to be the marginal cost curve down to the average total cost curve and then we talked about how to identify profit what we're going to do in our next video is we're going to think about what the market supply curves look like these are the firm's supply curves the marginal cost curve we still need to think about what the market supply curve looks like in the short-run and what the market supply curve looks like in the long-run and then we're gonna talk about what happens in a market how do these markets function when we put lots of firms together with each other so I'll see you in part two of this
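the whole short-run story reviewed above can be sketched with a concrete cost function in Python the quadratic cost C(q) = FC + a*q + b*q**2 is my own illustrative choice not the lecture's so then marginal cost is a + 2*b*q average variable cost is a + b*q and the bottom of the average variable cost curve is a:

```python
# Sketch of the competitive firm's short-run behavior under an assumed
# quadratic cost function C(q) = FC + a*q + b*q**2 (hypothetical numbers).
# MC(q) = a + 2*b*q, AVC(q) = a + b*q, so min AVC = a.

FC, a, b = 500, 2.0, 0.5   # illustrative fixed cost and cost parameters

def short_run_supply(price):
    """Quantity supplied: q where P = MC, or 0 if P is below min AVC (shutdown)."""
    if price < a:                      # price below bottom of AVC curve
        return 0.0                     # firm shuts down, produces nothing
    return (price - a) / (2 * b)       # solve P = a + 2*b*q for q

def profit(price):
    """Profit at the chosen quantity: P*q - (FC + a*q + b*q**2)."""
    q = short_run_supply(price)
    return price * q - (FC + a * q + b * q**2)

print(short_run_supply(12.0))  # 10.0 units: P = MC at q = (12 - 2) / 1
print(profit(12.0))            # -450.0: open and losing money, but less than FC
print(short_run_supply(1.0))   # 0.0: price below min AVC, shut down
print(profit(1.0))             # -500.0: shut down, still on the hook for FC
```

notice at a price of 12 the firm stays open even though profit is negative because losing 450 beats losing the full 500 of fixed cost which is exactly the shutdown condition from the lecture.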
Principles_of_Microeconomics
Chapter_6_Supply_Demand_and_Government_Intervention_Part_1_price_controls_and_taxes.txt
now we want to talk about that demand and supply model and what happens when the government puts some restriction on the way that demand or supply functions so we're gonna think specifically about what happens if the government puts in a price control a price floor or a ceiling and what happens if the government imposes a tax in a market so let's start by thinking about a price ceiling and a price ceiling is going to be a legislated maximum price so this is a law that would be passed to keep price from going above a certain level and politicians typically justify these laws by saying that they want to protect buyers from high prices and so they want to pass a law that says that price can't go above a certain level so we're going to think about what effect that has in a market and we're going to think about whether or not that actually achieves the objective of protecting buyers from high prices so the first thing we want to do is think about where that price ceiling gets imposed so if we think about a demand supply picture here if we were to think about a maximum price a law that says price can't go above a certain level if we put that maximum price right here I'm going to use P with a bar above it to indicate my maximum price that's my price ceiling then what I always encourage my students to do when they're first starting out thinking about these floors and ceilings is to put in here what the allowable range is so if it's a price ceiling if it's a law that says price can't go above $20.00 it can be five or two or seven or 19 but it can't be 22 it can't be 50 well that means that the law allows anything below that price ceiling so this would be what I'm going to call our allowable range so when we first start to do a price floor or a ceiling I'll typically draw in that allowable range once you get comfortable with it I'll stop doing that so here's our price ceiling it turns out that the market wants to take price to right there
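the binding versus non-binding distinction this section develops can be sketched numerically the demand and supply curves below are hypothetical linear curves of my own choosing the ceiling logic is the lecture's:

```python
# Hypothetical linear market: qd(p) = 100 - 2p, qs(p) = -20 + 4p,
# so equilibrium solves 100 - 2p = -20 + 4p, giving P* = 20 and Q* = 60.

def qd(p):
    return 100 - 2 * p       # quantity demanded at price p

def qs(p):
    return -20 + 4 * p       # quantity supplied at price p

P_STAR = 20.0                # free-market equilibrium price

def ceiling_outcome(p_bar):
    """Effect of a legal maximum price p_bar on this market."""
    if p_bar >= P_STAR:
        # ceiling above equilibrium: the allowable range contains P*,
        # so the ceiling is non-binding and nothing changes
        return {"binding": False, "price": P_STAR, "shortage": 0.0}
    # binding: price is stuck at p_bar and quantity demanded exceeds
    # quantity supplied, leaving a persistent shortage
    return {"binding": True, "price": p_bar, "shortage": qd(p_bar) - qs(p_bar)}

print(ceiling_outcome(25.0))  # non-binding: market still trades at P* = 20
print(ceiling_outcome(15.0))  # binding: shortage of 70 - 40 = 30 units
```

with a ceiling of 15 buyers want 70 units but sellers only offer 40 so 30 units' worth of willing buyers go unserved which is the shortage the lecture describes.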
we've already talked about the incentives that are created that will push price to that intersection between demand and supply and that is in the allowable range so this would be a price ceiling that would be what we call non-binding the outcome of this market would be the normal free-market outcome P star and Q star so this is a non-binding price ceiling so let's think about what it has to look like if we have a binding price ceiling and you probably can already guess that what has to happen is that the price ceiling has to be placed below the equilibrium price in the market so if we have a price ceiling here now the allowable range is anything below that and that's going to create a problem because the market is going to want to push price to right here but it can't there's going to be upward pressure on price but the law says that the price has to stop rising right there and so what we need to do is we need to look at what happens in this case so when we go over from that price to the demand curve and the supply curve we hit the supply curve first right there's quantity supplied right over here's quantity demanded and we already know that anytime you have a situation where quantity demanded is bigger than quantity supplied you're going to have a shortage so what ends up happening if the government places a binding price ceiling and this one would be binding a binding price ceiling will result in a shortage of the good and remember what a shortage means a shortage means that there is not enough to go around that there are people that are willing to pay this price and they want to buy the good but everywhere they look they can't buy it okay so occasionally if I'm not in a classroom if I'm just talking to somebody out in the world about something like this a lot of times it's kind of natural for people to say okay well I can see why that might cause a shortage but over time people will just get used to it and the shortage
will go away it turns out no actually the opposite happens we know that demand and supply are more elastic over longer time horizons and so if we've got this shortage right now as the demand curve and the supply curve start to flatten out over longer time horizons that shortage gets even bigger this is not a problem that just goes away it's a problem that if anything tends to get a little bit worse so what's happening here and this is important is that price is no longer rationing the good so the way we would describe the outcome of a binding price ceiling is that price no longer rations the good now we've talked about what that means that phrase price rations the good that means that in a free market the price is determined by the interaction of demand and supply neither the buyers nor the sellers have control over the price but what happens is that that price creates a signal that the buyers and the sellers respond to and the buyers sort themselves into two groups those who end up buying the good and those who don't buy the good and the sellers sort themselves into two groups those who sell the good and those who don't now they don't identify with the group you just make a decision I either buy it or I don't but you are part of a bigger group now if we all of a sudden pass a law that says price can't go above a certain level and we put that level below where the market wants price to go then we've created a problem there's one cure for a shortage or a surplus in a market and that is for price to adjust that's the law of demand and supply the law of demand and supply says that the price of any good or service will adjust if it's allowed to in order to make sure that quantity supplied and quantity demanded are equal but if it's not allowed to then you're going to create a problem you're going to create a shortage in the case of a price floor we'll see that it creates a persistent surplus so price no longer rations the good price does not serve as a
signal that you can look at and say okay I am willing to pay that price so I can go out and buy it you can look at this price ceiling price and say I am willing to pay that price I want to go buy it and there's not going to be enough for everybody to be able to buy it what happens then is some other rationing mechanism has to emerge other rationing mechanisms let's just talk a little bit about those so some other way the good will get rationed out and what we mean when we say that it gets rationed out is that there's some mechanism that will determine who gets it and who doesn't that's a rationing mechanism with a free market it's price and everybody getting to make their own decision let's think about some other rationing mechanisms so if we think about a situation where there's a shortage of something a good example of that might be something like a sold-out concert let's suppose they just put the price of tickets below what would occur in a free market and there's more people that want to buy tickets than there are tickets to go around or maybe the government has passed a law that says the price of gas can't go above a certain level say a dollar in that case there's going to be a shortage of gas more people will want to buy gas than there will be quantity to go around so some other rationing mechanism has to emerge and the most common one is a line so if we think about a line that's just first-come first-serve so it doesn't matter how much you're willing to pay in that situation if you happen to be at the front of the line before they run out you get it and if you don't happen to be at the front of the line if you're towards the end of the line it doesn't matter if you were willing to pay that price it doesn't matter if you valued the good more than anybody else in front of you if they run out you don't get it and so what we're going to see is that this is going to create what we call deadweight loss this is going to cause the economic pie to shrink because now the
goods are not being allocated to the people who valued them the most in the case of a line that rationing mechanism allocates the goods to the people who have the ability to go stand in a line in other words it allocates the good to the people with the lowest opportunity cost of their time not the people who are willing to pay the most it's similar to what would happen if let's say the university told me hey you can no longer allocate grades based on points you have to come up with some other allocation mechanism then in that case one possibility is to say okay in the class I'm going to have two A's and I'm going to give those A's away on the steps of the County Courthouse at 3 a.m. on let's say March 24th and if you're one of the two people in line right there on that day then you will get an A everybody else let's say gets a C well what that's going to do is that's not going to reward the people who know the most about economics it's not going to reward the people who have worked the hardest in the class it's gonna reward the two people who don't have anything better to do than go stand in a line so at some point before that day a couple of people will go down there and start camping out on the steps of the courthouse and those will be the people who don't care if they don't do well in other classes or maybe they hire somebody to go stand in the line for them I don't know a whole range of things could happen but the fact of the matter is it's not going to go to the people who have learned the most economics so lines are inefficient another type of rationing mechanism would be discrimination we could think about this as favoritism so if you happen to know somebody let's suppose there's a price ceiling on gas and now all of a sudden there's a shortage if you happen to know somebody who owns a gas station then you might be able to say hey look can you just kind of off to the side make sure that I get some gas and if they like you
they could and so discrimination treating different potential buyers differently notice what this does a rationing mechanism like this forces the sellers into that position I mean they can choose to have a line but a lot of times it gives them a very strong incentive to engage in favoritism okay and that's the last thing we want and we're not saying that if we never had price floors and ceilings all discrimination would vanish but what we are saying is that we certainly don't want the government to give people an incentive to engage in more of it so that's a rationing mechanism of course we could think about the black market the underground economy so if you're willing to pay for it now remember paying more for it through the market is illegal you can't pay a higher price than P bar you can pay that or you can pay less but let's suppose you try to get it whatever the good is let's say gas you try to get some gas there's a price ceiling you can't get any well you could probably always go to the black market and secretly pay more than that now if you get caught by the government they're going to prosecute you and they're gonna prosecute the seller but you can buy anything you want if you're willing to pay the price so that gives people an incentive to engage in this underground economy okay the problem with these is that all of these are inefficient compared to allowing price to ration the good what ends up happening is remember the justification of this is that you want to protect buyers from high prices and I can tell you that I've mentioned in this class previously that first day survey that I would give my students when I would give my students that first day survey I would ask them questions like this would you support a politician who believes that we should have a maximum amount on rent maybe you guys spend a lot on rent and so let's suppose we had a rule here in Warrensburg that landlords could not charge more than let's say $200
a month for rent that's going to allow you to keep more dollars in your pocket and when you're young and struggling and and you know not making that much money you really need that extra incentive to get you that boost so you can go out there and be successful well that sounds great right and my students would overwhelmingly say oh yeah I would support that or or they would support a law in Warrensburg said no gas station could sell gas for more than a dollar they they would even write stuff like gosh these are great ideas why don't you run for City Council and do this and then we get to this and you start to realize that the result of those is that there's going to be a shortage if you're one of the people that actually get to buy the good then yeah you've paid a price that's lower than the free market but there's gonna be a chunk of people who want to buy the good and can't buy it and they are clearly worse off than if they had just paid the free market price even though that free market price would be higher so if we think about these rationing mechanisms they are inefficient let's talk about now the effect that the price ceiling is going to have on consumer and producer surplus because we can talk about how these things are either going to make people better off or worse off but until we look at consumer and producer surplus and deadweight loss we don't have any real way to put numbers to that so let's clear this off and then we'll take a look at that all right let's take a look at consumer and producer surplus with a binding price ceiling so we've got our demand curve here supply let's put on here what the free market price would be in the absence of any kind of price floor or excuse me any kind of price ceiling so here's our quantity here's our free market price but remember that the law says price can't be right there it's got to be lower and so right here's the price that the law says is the ceiling so we know that in that case here's going to be the 
quantity that ends up getting sold so that's the quantity in the market let's call this let's just call it q1 I'm going to call that P bar but just so I can distinguish between these two quantities I'll call that one q1 so that's the quantity that's going to be available for sale that's going to be the quantity that sellers sell and the quantity that buyers buy now let's put some areas up here let's call this a and let's call this B and this C and D and E and let's think about consumer and producer surplus first without the price ceiling and then with the price ceiling so let's start with consumer surplus so let's begin with no price ceiling just the free market so we already know that consumer surplus would be the area under the demand curve and above the price so it's gonna be area a plus B and you wouldn't divided this up into these two areas to calculate it you just calculate the area of that triangle if we think about producer surplus it's going to be the area under the price and above the supply curve it's going to be C plus D plus E and again you wouldn't if you were just using the free market or thinking about producer surplus in the free market you wouldn't divide it up into these three areas but deadweight loss is going to be zero right we know that the free market is efficient the free market maximizes the total amount of consumer and producer surplus but now we've got a price ceiling so with the price ceiling we've got a different situation now the price is going to be pushed down to here quantity is going to fall to q1 and so let's start by thinking about consumer surplus consumer surplus is going to be the area under the demand curve and above the price but you only go out to the quantity that gets transacted so consumer surplus is going to end up being a plus C now notice that consumers lose B but they gain C we'll come back to that here in just a second first let's talk about what producer surplus is and then we'll come back to this it actually 
turns out that consumer surplus will not be this big in the real world but let's talk about producer surplus first so producer surplus is going to be the area under the price and above the supply curve so producer surplus now with the price ceiling is going to be just area E so clearly beyond a shadow of a doubt a price ceiling hurts sellers and we can put a dollar amount on the lost producer surplus that sellers experience it's areas C plus D so the lost producer surplus sometimes I will call it the change in producer surplus how much in terms of producer surplus do the producers lose well it's area C plus D and then we can think about deadweight loss so how much does the economic pie shrink how much well-being are we missing out on because the government has created a policy that artificially lowers price and pushes us away from that free market quantity well area B plus D B plus D is the deadweight loss of this policy it is well-being some of it consumer well-being some of it producer well-being that vanishes out of the economy notice for those quantities at the margin the value people place on them is higher than the cost of production right at the margin they have a higher value than it costs us to provide them and yet because of the government policy they are not transacted it prevents mutually beneficial transactions between buyers and sellers because of this law let's talk about area C so area C is what we call a transfer it's a transfer from producers to consumers so if we think about it it's just right here it's a transfer from producers to consumers originally with no price ceiling area C used to be part of producer surplus it ends up being part of consumer surplus it's like taking dollars out of producers' pockets and putting them into consumers' pockets so area C is a transfer so this policy redistributes wealth from one group to another not because of anything they've done productively it's
just an arbitrary redistribution let's go back to this area a plus C this area right there turns out that in the real world it's actually going to be smaller than area a plus C this is what we would call an upper bound that's as big as it could possibly be but let's think about why we know it won't be that big remember that your consumer surplus is directly under your portion of the demand curve so for this to be consumer surplus these consumers would have to be the people that purchase the good right those consumers represented along that portion of the demand curve would have to be the consumers that purchase it well if this was a free market then the good would be allocated to the consumers who valued it the most it would be allocated to them but remember when you've got a binding price ceiling other rationing mechanisms emerge other things like lines other things like favoritism right discrimination and so what's going to happen is that there's no guarantee at all that the good will go to these consumers with the highest willingness to pay as a matter of fact notice that if we extend this price out here every consumer represented from this point all the way down to there is willing to pay a price that's at least as big as the price ceiling and if some of these consumers out in here happen to be at the front of the line then they will get the good at the expense of some of these people with the higher willingness to pay and if one of these consumers out here gets the good then the amount of consumer surplus they're going to get is smaller than the amount of consumer surplus some of these people are going to get so in reality because of the fact that the good is not allocated based upon willingness to pay it's not allocated based upon value the consumer surplus that consumers actually experience is going to be lower than this okay now on a test if I give you a price ceiling and I ask you to calculate consumer surplus I'm going to want you to calculate
areas a plus C okay and the way that you would do that is you would need to divide that into a little triangle up here and then just calculate the area of this rectangle right here okay let's talk now about a price floor so a price floor works just like a price ceiling does it and except now we're thinking about a legal minimum so let's start by thinking about a non-binding price floor let's put a little picture up here if we had our demand and supply curve I'm gonna put a bar under my P now so let's suppose we had a price floor right down here I'm gonna use that symbol to indicate a price floor well the allowable range for a price floor is anything above the floor so anything up there is allowable and what we see in this picture is that the market wants to take the price to something in the allowable range so this would be non-binding what we need for a price floor to be binding is we need that for that price floor to be set above the equilibrium so let's take a look at what a binding price floor would look like if we had demand supply if we had a price floor that's set right here P bar now we've got a situation where if we go over here's our allowable range anything above that but the market wants to take down here and that's not allowable the law says that is illegal so if we look at quantity demanded at that price floor remember the markets going to push price down as far as it can it's got to stop right here and here's quantity demanded at that price and here's quantity supplied at that price what we end up with is of course a surplus and that's going to be a persistent surplus it's going to be a surplus that does not go away and if anything tends to get worse over time because demand and supply are more elastic over longer time horizons let's think about some examples of some price floors so the obvious one is a minimum wage and if I on that first day survey asked my students well do you think that we need a minimum wage to guarantee workers an adequate 
standard of living a lot of them will say yes and part of that is that they've never really thought about the impact that a minimum wage has on a market they've grown up with it a lot of them would have worked for it and thought well heck I had a job and I was earning minimum wage and you know everything seemed to be fine and politicians will talk about it and a lot of people would say well if it was as bad as economists say it wouldn't have been around for this long well I would beg to differ there's a lot of things that don't work that have the opposite effect that people think they have and they've been around forever so the minimum wage gets a lot of support but the reality of what it creates is that it's going to create unemployment all right a surplus of labor if the good is labor that's being bought and sold as you increase the wage firms are going to want to buy less labor and as you increase the wage the price of unskilled workers firms are going to start to substitute away from unskilled workers towards skilled workers if you raise the price of people with no skills you create a situation where firms want to buy less labor from them and as you artificially raise the wage more of those people want to work than before there's a higher quantity supplied and so there's going to be a difference between quantity supplied and quantity demanded and we can think about how elastic the demand and supply curves are you can see that if I were to draw my demand curve and my supply curve very steep that the amount of unemployment might be small so it could be the case that if demand and supply for labor in a particular market was very very inelastic not perfectly inelastic but very inelastic then it might create a small amount of unemployment and you might look at that and say okay well that's worth it but if we had a situation where the supply curve and the demand curve were more elastic then it could create a
lot of unemployment and it depends on the market so economists disagree with each other as to how much unemployment is created but let's just go with this picture here so we've got a surplus of the good let's think about whether or not that protects sellers from low prices because that's the justification right well if you're one of the people to keep your job then yeah you get to work at a wage that's higher than what the free market wage would be but if you're one of the people that want to work and can't find a job then clearly the minimum wage doesn't help you and so it's like a price ceiling it helps some people it hurts other people it in general does not achieve the objective of helping every low income worker achieve an adequate standard of living it's clearly going to hurt some of them so let's clear this off and let's do what we did over there with our consumer and producer surplus but let's do it with a price floor all right let's take a look at the effect on consumer and producer surplus here so let's put our demand supply we're gonna do the same thing I'm going to identify my equilibrium price and my equilibrium quantity if there were no price floor let's suppose we have a price floor right here then here's going to be the quantity that gets transacted let's call it q1 like we did in our previous picture the amount of the surplus is going to end up being right there but let's just label these spaces these areas let's call this a B C D and E so same thing let's start with no price floor consumer surplus is going to be the area under the demand curve and above the price a plus B plus C producer surplus is going to be the area under the price and above the supply curve which is D plus E deadweight loss is going to be 0 the economic pie is going to be as big as it can possibly be and now we're going to impose the price floor so with the price floor let's start with consumer surplus our consumer surplus is going to be the area under the demand curve
and above the price which is now P bar so consumer surplus is going to fall to just area a consumers lose B plus C which is of course this area so the loss of pretty of consumer surplus is going to be B plus C producer surplus it's going to be the area under the price and above the supply curve but remember you only go out to the quantity that gets transacted which is right here and so producer surplus is going to be B plus D but now the same conversation that we had in our previous picture is going to apply here that's going to be an upper bound in actual practice it won't be that big if we think about deadweight loss deadweight loss with no price ceiling our quantity is out here excuse me price floor our quantity is out here but the price floor pushes quantity back to here so we lose out on C plus D that's economic well-being that is just gone area B that's a transfer that starts out as part of consumer surplus and ends up being part of producer surplus so B is a transfer and it's a transfer from consumers to producers it's an arbitrary redistribution of wealth that has nothing to do with productivity or willingness to pay or cost of production or anything like that it's just that this policy takes dollars out of the pockets of consumers and puts them into the pockets of producers so this area right here let's again say that this is an upper bound and let's just review the reason why this would be producer surplus if these low costs producers the low-cost sellers were the ones that actually sold the good but remember all of these sellers all the way out to this point on the supply curve are willing to sell the good because here's the price they can sell it for and here's their cost of production and so since price is not going to be allocating the goods some other rationing mechanism has to emerge and so it could be that these sellers are not selling the good and some of these are and these sellers do not get as much producer surplus as these sellers do so that 
is clearly an upper bound I've actually gotten a couple of textbooks to review over the years well I've gotten lots of textbooks to review over the years but a couple of them have actually had a chapter where they make an argument that a price floor and a price ceiling actually make people better off which is complete nonsense but their argument goes like this if you look at what happens here let's consider producers so producers lose E but gain B and so if you look at what the producers lost in this picture this little triangle right here and you compare it to what they gained which is this area right here this rectangle the area of that rectangle is going to be bigger than the area of that triangle and so in those textbooks they've argued that a price floor is an attempt to help sellers and it does indeed help sellers because this area is bigger than that and they forget or they just conveniently don't mention that this is not going to be the magnitude of producer surplus when you have a price floor because the rationing mechanism is going to be something that's inefficient so it's an argument that is either dishonest at worst or at best it's uninformed it doesn't take into consideration the fact that any type of prohibition on price adjusting results in some other rationing mechanism so when thinking about price floors the most obvious price floor is a minimum wage and remember that in a market like we're talking about here neither the buyers nor the sellers have any control over price so some people who argue for the minimum wage their argument is this if we didn't have a minimum wage then the sellers or the firms would just pay them nothing the firms can't do that firms don't have any control over the price the price would go to the market clearing price to the equilibrium price they're not going to be able to pay a price down here because people aren't going to work for that and so if you think about what's going to
happen the price is going to go to the equilibrium and that equilibrium is out of the control of either the buyers or the sellers but a minimum wage is just one example of a price floor the farm bill includes price floors on commodities so there have been price floors on things like corn to protect farmers or price floors on things like milk if we think let's let's think about something else that doesn't show up in this picture and let's think about say a let's go with a minimum wage let's think about the other incentives that are created for employees and employers with a minimum wage and there are things that are bad beyond this deadweight loss so let's suppose that we have a minimum wage we know that that's going to create a surplus of Labor so the employers are going to have a big stack of application so that they can choose from and so rather than having the free market allocated they've got to somehow sort through those applications and decide who they're going to hire and what that does is it gives employers an incentive to search for people based on other criteria things that we don't want them necessarily choosing who to hire based on so they're going to have a big stack of applications for people who want that job because there's not enough jobs to go around let's think about the employees so the employees if they get a job and we've got a binding minimum wage that means there's a shortage of jobs so employees tend to stick around in jobs they don't like longer than they would have to if the free market was operating and the reason they have to stick around is that if they quit this job it can be hard to find another one it also distorts the incentives in terms of the incentive for an employer to be good to the employees because if you're an employer and you've got a big stack of applications for people who want that job you don't necessarily have to treat your workers that well because you can always tell them look if you don't like it take off I've got 
a stack of applications I can go in there right now and hire somebody to replace you so those things don't show up in this kind of picture but there are other incentives that are created by these price floors and ceilings in general what we can say is that any floor or ceiling is going to help some people and hurt other people they don't achieve this overall objective of somehow protecting buyers from high prices or protecting sellers from low prices so let's talk about some things that are better than that you might be saying well okay so if those are bad then what do we do is there nothing we can do if there are people let's say people who are the heads of a household and they don't have any job skills and they're trying to work a minimum-wage job and raise kids yeah a minimum-wage job it can be really tough to make much money at that what else can we do well most economists would say that rather than fixing the price another option would be something like wage subsidies so a wage subsidy would still allow the market to push price wherever it needed to go it would simply subsidize the wage of people who needed help and so it's a way of helping the people you want to help without somehow inhibiting or prohibiting the market from being able to clear because the market remember if you leave it alone is going to result in quantity supplied being equal to quantity demanded so wage subsidies in the case of something like a price ceiling rather than arguing that we should keep landlords from being able to have a rent that is the market clearing rent we could have rent subsidies and a rent subsidy would look like a certificate that you take to your landlord and it says this certificate is good for say $400 in rent this month and so you pay them the certificate plus whatever extra as to where the total adds up to the price and in that case you help people that need the help and you don't prohibit the price from moving so that
quantity supplied and quantity demanded come together these aren't perfect they cost money sometimes people abuse programs like this but in general they are much better than a program like that and yet most people tend to think that that's the kind of program that we need to have what we want to do now is talk about taxes but I'm going to leave that for the next video
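The surplus areas walked through above can be checked with a short numerical sketch. This is a hypothetical example, not from the lecture: it assumes a linear demand curve P = 10 - Q and a supply curve P = Q, so the free-market equilibrium is P* = 5, Q* = 5, and a price ceiling of 3 binds.

```python
# Hypothetical linear demand P = 10 - Q and supply P = Q,
# so the free-market equilibrium is P* = 5, Q* = 5.

def quantity_demanded(p):
    return 10 - p

def quantity_supplied(p):
    return p

def surpluses(price_ceiling):
    """Upper-bound consumer surplus, producer surplus, and deadweight
    loss under a binding price ceiling, using the triangle/rectangle
    areas from the lecture (areas A + C, E, and B + D)."""
    q1 = quantity_supplied(price_ceiling)      # quantity actually traded
    demand_at_q1 = 10 - q1                     # inverse demand at q1
    # CS upper bound = triangle A above demand_at_q1 plus rectangle C below it
    cs = 0.5 * q1 * (10 - demand_at_q1) + q1 * (demand_at_q1 - price_ceiling)
    # PS = triangle E under the ceiling and above supply (supply intercept is 0)
    ps = 0.5 * q1 * price_ceiling
    # DWL = free-market total surplus minus what remains under the ceiling
    total_free = 0.5 * 5 * 10                  # area between the curves out to Q* = 5
    dwl = total_free - (cs + ps)
    return cs, ps, dwl

cs, ps, dwl = surpluses(3)   # a ceiling of 3 is below P* = 5, so it binds
print(cs, ps, dwl)           # 16.5 4.5 4.0
```

Running it reproduces the lecture's bookkeeping: consumers pick up rectangle C as a transfer from producers, and triangle B + D (here 4.0) vanishes as deadweight loss; the same functions would handle a price floor by using quantity demanded instead of quantity supplied for q1.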
MIT 2.003J Dynamics and Control I, Fall 2007
2. The spider on a Frisbee problem
the following content is provided under a Creative Commons license your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free to make a donation or to view additional materials from hundreds of MIT courses visit MIT OpenCourseWare at ocw.mit.edu you know I didn't get to say in person to you how fun this class is but the material itself is definitely the most exciting in all the 2.003 curriculum um I want to take a minute and just kind of reintroduce myself my name is Sanjay Sarma please call me Sanjay right I'm very informal I demand informality I will be informal to you as well I will learn all your names trust me how many people do we have in class 130 okay I'll learn half your names okay but we're going to have a lot of fun some of the classes are going to be videotaped uh because we're doing it slightly differently this class is going to be a little more MIT than Harvard in other words we're going to do stuff in a little more formal fundamental way um and it'll help us address and understand some of the concepts especially in kinematics more intensely than if you took it the way that it's taught in other classes um as I said my style is going to be very informal we're going to see who speaks we want you to speak up I'll ask questions uh you will never ever be penalized for saying anything in class you'll only get credit for it okay even if it's funny okay so in the last class we set up what we're going to do we explained that in the field of dynamics there are actually two pieces there's kinematics and there's kinetics right and we said we would do kinematics in this class and we're going to do kinetics afterwards kinematics is all about motion and as I said it's the most fun part of dynamics so let me just lower this sorry so there's no better way to get going than to simply start by writing out a problem and trying to solve it in
class and that's what we're going to do now get this all lined up focused okay so the problem that we're going to start with is imagine that I'm standing here in the corner of this room okay I have a Frisbee and I throw it across the room the Frisbee translates and as it moves it rotates and um let's make it more interesting there's an insect on the Frisbee okay and when the frisbee is moving across the room and rotating an insect starts running away from the center okay now this is a made-up problem but this is the problem I'm going to solve today and what does solving a problem mean I'll solve a piece of it but let's examine what this could mean first of all this is obviously a silly you know exercise but um if you imagine an astronaut on the space shuttle hurtling through space um this is a similar problem and the question you might ask is the astronaut's boots let's say they're magnetic exert a certain force on the space shuttle right now obviously uh there are certain conditions under which the astronaut would lose contact with the space shuttle assuming the surface actually the surface of the space shuttle isn't iron it's aluminum so my example doesn't really hold but that's okay okay so assume it's iron why isn't it iron because yeah it's heavy yeah we know all right but assume it's iron right so let's assume that the astronaut's holding on to the space shuttle using magnetic boots under what conditions would the astronaut stay in contact with the space shuttle anyone so under what conditions would the astronaut stay in contact with the space shuttle the space shuttle's swirling through space it's hot in here isn't it let's open the windows I don't know if there's a way to do this go ahead how does the astronaut stay in touch with the space shuttle okay we'll figure this out it's broken anyone raise your hands yes excellent the magnetic force is strong enough to keep him or her in contact
and able to overcome whatever force is pulling him or her away from the space shuttle what force is pulling him or her away from the space shuttle what else inertial force what's that mean yeah that's right look it's very simple right think of this astronaut forget the spaceship imagine you know just imagine the spaceship isn't there right if you just think of the astronaut this guy this woman is hurtling through space right and he or she's doing it under the influence of a certain force and that force had better be equal to mass times the acceleration of that astronaut right if the magnetic force cannot equal or exceed m times whatever the a is of the astronaut right get it because the magnetic force cannot provide the rate of change in momentum required to keep the astronaut in position get it so the magnetic force is weak and the space shuttle is kind of spinning away right if the force isn't enough to keep the astronaut accelerating with the space shuttle then they're going to separate right so this problem of you know me throwing a frisbee across the room an insect on the Frisbee running away from the center of the Frisbee the practical application of the problem could be hey what is the attractive force for example you know spiders have adhesive pads right so what is kind of the adhesive force that the spider should be able to exert if it wants to stay on the Frisbee as I hurl it across space and if it wants to kind of move on the Frisbee in some predetermined way get it now in order for me to answer this question what question do I need to answer first in order to answer the question what is the force required right let's say I know the mass of the spider what is the one thing I need to figure out yep coordinate system you need yes that's how I start and then what do I need to do that's right all you need to do is find the acceleration of this dude the spider as it
stays on the Frisbee and tries to move as it's hurling through space right that's all we need to do and answering that question is the subject of a topic which begins with k which is kinematics right okay all right so how do you do it how would you do it where do you start say it again okay and then what do you do very good right look if I'm sitting on the Frisby let's say I'm sitting on frisbee I'm the pilot of the Frisbee right and I see my buddy the astronaut trying to I move on the space shuttle right I'm or similarly I'm sitting on the Frisbee and I see this insect the spider running away from the center of the Frisbee with respect to myself what's a spider trying to do in a what what sort of path straight line right as far as I'm concerned just moving away in a straight line right what about with respect to the ground Observer of the Space Shuttle or with with respect to me what's the uh what's the uh Spider doing yeah it's kind of going like this right it's kind of bigger and bigger spiral kind of thing right so is the acceleration different depending on where I'm looking from yes okay which frame should I pick where should I look at it from let's say I'm on Earth where do I look at it from should I should I pick the frame on Earth should I pick the frame attached to the spider I mean to the to the spider should I pick a frame attached to the Frisbee which one should I pick where should I be looking at it to do this F is equal to ma because obviously the accelerations are different right a frame attached to the spider sees no acceleration whatsoever right okay this is irritating so which frame should I pick come on let's warm up we have another 15 weeks to go we going to be doing a lot talking that's excellent Insight so you got to pick some frame because if you pick the Frisbee then you're ignoring some things right so which frame should we pick that's exactly right but which one would you pick where you wouldn't have to add anything H which one the Earth 
what do you think the Earth but isn't the Earth moving but that's true of any frame but you're actually on the right track should I tell you the answer the answer is actually you pick an inertial frame remember that word but there is no such thing as an inertial frame really okay what it means is it's a frame that is motionless in space and the Earth moves with respect to that frame and the Frisbee or the space shuttle moves with respect to that frame and you need to calculate the acceleration with respect to that frame but it turns out that the relative motion of the Earth with respect to the inertial frame is much lower compared to the relative velocity of the space shuttle so we can just pick the Earth get it so it's actually an approximation we're just going to pick the Earth right because the space shuttle moves really fast relative to the Earth so if you actually picked the inertial frame it wouldn't make such a big difference so from a dynamics point of view we pick the Earth because it's an inertial frame or an approximation of an inertial frame but from a kinematics point of view we don't care what an inertial frame is an inertial frame is simply the frame in which f is equal to ma right what we're going to concentrate on now is calculating the a in whichever frame you choose get it it just so happens that we start writing dynamic equations by the way the adjective is dynamic or dynamical they're both correct okay it took me 20 years to figure this out but anyway from a kinematics point of view we should be able to figure out the velocity of anything with respect to any frame right and then when we come to dynamics we'll make sure we do this with respect to an inertial frame and write f is equal to ma but right now our focus is on the a all right so let's figure out how to calculate the velocity and the acceleration of the spider with respect to different frames and we know by the way that the Earth is the frame we really care about but we
don't care in kinematics right because kinematics is all motion we don't care about okay so what's a frame let me start by asking you to guess and then I'll tell you what it is what's a frame perspective yeah it's a perspective any other words for it I'm actually holding a frame here's what a frame is when in doubt imagine a frame to be some sort of three-dimensional transparency okay now let's limit ourselves to two- dimensional space a frame is a transparency that's all it is it is this imaginary set of points that are kind of rigid with respect to each other get it so here's a frame let us say that I say that draw it here this is the corner of the room okay and this is the Frisbee now I can pick attach a frame to the Frisbee as well okay now I'll call let me remove this I'm going to call this guy from name f for Frisbee for the time being and every point of the transparency moves kind of rigidly and that's that frame that's frame F get it I used a capital letter F I put a circle around it I put a squiggly line right and I called it a frame so this is the frame right and this is let's call it Earth e right so let's call this planet Earth where most of us live and uh so this is the frame attached to this frisbee right it can turn it can move whatever right this is the frame attached to the Earth right now stationary and this guy's moving spect to this got it so with respect to the Frisbee oh oh by the way now we have an insect here the spider let's call it s and it's kind of heading out in some Direction like this right so with respect to the frisbee if you're sitting on the Frisbee f is frozen right so all you see is the spider kind of walking away but respect to the ground or the Earth e right S is doing all sorts of funny things right so that's what a frame is so now with that let's jump in and get a little more detail I'll use transparencies once in a while but I really be doing chalk and talk right so you need to stop me I'll have my back turned towards you 
you need to stop me if you have any questions; I'll have my back turned towards you. I'll put this away. In future, by the way, if you don't mind (what's your name? Kie), this thing being right next to you is hot. It's OK, but if it bothers you, leave that chair unoccupied. All right, I've learned one name, Kie; getting there, another 129 to go. OK, so let's start the serious portion of the class.

So I said a frame was basically a transparency, and that's the best way to think about it. A frame is essentially, in two dimensions, a rigid, imaginary set of points that are rigidly connected to each other, like the transparency. It doesn't mean there's any stuff there. Is an actual rigid body a frame, anybody? Yes? No? Maybe? Sometimes? Yes, it is; it just happens to be a set of points where there's mass. A rigid body is a set of points where there's mass, so a rigid body can also have a frame embedded in it, and in fact it is kind of a frame; it just has mass. So let's start: a frame is a rigidly connected continuum of imaginary points. Now, one of the things I did in that frame was pick a point and put two little vectors there. Do you know what those two little vectors are called? You could say axes; you could even call them unit vectors; but the technical term is that they're basis vectors. So I can identify a frame: I could take the bottom-left corner of that transparency and put two little unit-length vectors there, and those become the basis vectors of the frame. I pick them once. I can pick any two vectors; I don't even need them to be mutually perpendicular, they just shouldn't be parallel. But for convenience we pick two mutually perpendicular vectors of unit length and call them basis vectors. So we can embed two mutually perpendicular unit vectors, which are frozen in that frame once and for all, and call them the basis vectors.

So what that means is that for this frame, the edge of the transparency is here; I could have picked a little unit vector along the horizontal and one along the vertical and called them the two basis vectors of that frame. Does it matter where I pick them? No, I can pick any basis vectors, so I'm going to pick them here, and I want to give you a little convention: I'm going to refer to them as f1 and f2, and the underline indicates that it's a little vector. Now I can similarly take planet Earth, where most of us live, and call its basis vectors e1 and e2; that's my convention. I'm going to limit my descriptions to two dimensions, but everything I tell you about in kinematics easily and seamlessly translates to three dimensions; that's the beauty of the way we do it. We'll do everything in 2D, but when you go to 3D, with cross products and dot products, everything will be the same. So these are the basis vectors, of unit length, and now any vector can be expressed with respect to frame F using these basis vectors as something times f1 plus something times f2, or with respect to E as something times e1 plus something times e2. I might come back to this.

Let me ask you a question. Say I pick a frame F, and I take a little vector r and I freeze it in that transparency: I draw it on it. Can it ever change within the F frame? If I draw it there and I'm sitting in the F frame, will I ever see a change in length or direction? No. But that same vector, if I'm sitting in the E frame while the transparency moves around, will it change? Will the length change? No. But what will change? Its direction will change. So that means a vector embedded in a frame will not change in that frame, but with respect to some other frame it might change.

All right, so here's what I'm going to say, and then I'll explain it more. By the time we're done with today's class and next week's class you'll understand this completely, because we're going to do it in gory detail. Bear with me; we're going to make a few leaps here, but when we're done you'll be completely, totally copacetic with it. By the way, I haven't said anything profound yet; everything is very intuitive, so you're not missing anything. I'm just going slowly; we'll gather speed in a minute. Now I'm going to say what I just said in English in a slightly more mathematical way. I'm going to write something for you which might seem a little odd, but this is the thing you need to get; once you get this, all of kinematics is a breeze: d/dt of a vector fixed in a frame. Let's consider frame F. Give me an example of a vector that's fixed in frame F; make one up in terms of f1 and f2. It's an odd question. "vS?" vS is a velocity; it could be fixed, but let me just ask: is the vector f1 fixed in frame F? Yes. What about a constant combination of f1 and f2; is that fixed in frame F? Yes. Right, so any vector which can be written in terms of the basis vectors and constant numbers is fixed in frame F. So take any vector r that is fixed in frame F. Here's what I'm going to say: d/dt of that r is equal to zero. I haven't told you precisely what this d/dt is; I'll explain it. But you know that that vector, if it's fixed in frame F and I take the derivative, should give zero; it isn't changing, it isn't moving.

Now, if I take the derivative of that same vector not in the frame of the Frisbee but in the frame of the Earth, is it going to be zero? No. So I need some way of saying: listen, I'm taking this derivative in frame F, not in the frame of the Earth. How would I denote that? Very good: I could put a parenthesis and a little F, to say I'm taking the derivative with respect to frame F. I could do that, but it turns out a better convention is to simply write the frame of reference on the d/dt itself. So when you see something like that, this is the way to read it in English: the derivative of vector r with respect to frame F. And if I tell you that r is fixed in F, the statement I'm making is that that derivative is zero. Got it? Makes sense? I insist on at least one question at this point; I get tired, I need a break. Anyone? Yes, thank you; what's your name? Rachel. Rachel asks about the mark under the F: it wasn't even meant to be a line; it's not a vector, it's just a frame, just a capital letter. F is just the name of a frame; I could call it Rachel. Any other questions? Anyone else? Yes, it's oppressive in here; we should call facilities and figure out how to turn on the air conditioning or open a window. If you remind me, I'll call them later. Thanks. OK, so that's all there is to frames when you look at one frame. Now the fun starts, because now we're going to start taking derivatives and looking at stuff across frames, and once I do that we'll jump into the Frisbee problem and nail it. It's about 10:05; we might not finish the Frisbee problem, but we'll certainly set it up and start doing some of the math, and worst case we'll finish it off next class.
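In cleaned-up notation, the point made here can be written compactly. This is my own summary of the board notation, not a verbatim transcription; the underline marks a vector and the left superscript names the frame in which the derivative is taken:

```latex
% A vector fixed in frame F has constant components in the F basis:
\underline{r} = c_1\,\underline{f}_1 + c_2\,\underline{f}_2,
\qquad c_1, c_2 \ \text{constant}
% Its derivative taken in F vanishes:
\frac{{}^{F}d}{dt}\,\underline{r} = \underline{0}
% But the derivative of that same vector taken in the Earth frame E
% is generally nonzero, because f_1 and f_2 rotate with respect to E:
\frac{{}^{E}d}{dt}\,\underline{r} \neq \underline{0} \quad \text{in general.}
```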
OK, so now let's look at multiple frames. I'm going to call it "multiple," but we'll always look at pairs of frames, two frames, because if you can do the math across two frames, three frames is just a chain of two frames. So we'll just look at two frames, but we'll call it multiple because it generalizes to multiple. Imagine one transparency, one frame, which is planet Earth; let's attach basis vectors to it, e1 and e2. Now let's think of another frame; actually, we called it F, for Frisbee. What are the basis vectors in F? f1 and f2. First question: can you write e1 and e2 in terms of f1 and f2, or f1 and f2 in terms of e1 and e2? Yes. What do you need to tell me what e1 and e2 are in terms of f1 and f2? It starts with an A: the angle, right. Now, one of the things I'm going to do in this class, especially because we're going to do a lot of stuff and I want to keep you engaged, is have these snap quizzes. AJ may have explained the snap quiz to you, but the way the snap quizzes work is we look at them and make sure you put in some effort, and we absolutely don't penalize you for making any mistakes; we just see what we succeeded in transmitting. And some of these snap quizzes aren't asking you to recall something; we're actually asking you to look ahead. So here's a snap quiz: I want you to write f1 and f2 in terms of e1 and e2, given that this angle is θ. It's basic trigonometry. Pull out a sheet of paper, write your name on top, and have a go at it; we'll collect them at the end of class. "Sorry, what is the angle?" Very good, let me explain. I have defined θ, and I could have defined it any way I wanted, because I'm the professor; I can do what I want. It's the angle between f1 and e1 at a snapshot in time, at an instant. I'm calling it θ, and I want you to tell me what f1 and f2 are in terms of e1 and e2 as a function of θ. f1 and f2 are both unit vectors; e1 and e2 are unit vectors. I want you to write something like: f1 equals something times e1 plus something times e2, and f2 equals something times e1 plus something times e2. That's what it will come out looking like, and it had better have some sin θ's and cos θ's floating around. By the way, we will put up a handout, the kinematics PDF, today or tomorrow, which explains all this; I'll tell you about the book in a minute, when you're done with the quiz. "Um, is it just f1 equals sin θ e1...?" Hold up. OK, good; I know it's easy, and I want everyone to know it's easy. What's your name? Joe. What's your full name? Joe Khoury, K-H-O-U-R-Y. Excellent. Only because Joe is a common name; there's a dean at MIT with your last name, Phil Khoury. Everyone done? Another 90 seconds. It doesn't matter if you don't get it; once I write it down you'll get it. I want you to put in the effort of trying, that's all. With things like this it really helps to grit your teeth. OK, 10 seconds. All righty, we're done. Anyone? Joe, what's f1 and f2 in terms of e1 and e2? "f1 is cos θ e1 plus sin θ e2, and f2 is minus sin θ e1 plus cos θ e2." Did everyone get this? "Would they be normalized?" They would automatically be; think about it, and hold that thought. So, does everyone agree with this? AJ, is this correct? AJ is a TA; he wasn't looking. OK, that's fine; let me look at my notes and confirm.
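While the notes are being checked, the transformation Joe just read out is easy to verify numerically. This sketch is my own, not part of the lecture, and the function names are made up; it checks that the transformed basis vectors come out unit length (the self-normalizing property discussed just below) and that replacing θ with −θ inverts the map (which comes up a little later):

```python
import math

def f_basis_in_e(theta):
    """Frisbee basis vectors f1, f2 expressed in Earth components (e1, e2)."""
    f1 = (math.cos(theta), math.sin(theta))
    f2 = (-math.sin(theta), math.cos(theta))
    return f1, f2

def e_basis_in_f(theta):
    """Inverse map: Earth basis in Frisbee components; just replace theta by -theta."""
    return f_basis_in_e(-theta)

theta = 0.7
f1, f2 = f_basis_in_e(theta)

# Self-normalizing: cos^2(theta) + sin^2(theta) = 1, so unit vectors stay unit.
assert abs(math.hypot(*f1) - 1.0) < 1e-12
assert abs(math.hypot(*f2) - 1.0) < 1e-12

# Round trip: e1 written in F components, then expanded back into E components,
# should come out as (1, 0) again.
e1_in_f, _ = e_basis_in_f(theta)
e1_back = tuple(e1_in_f[0] * a + e1_in_f[1] * b for a, b in zip(f1, f2))
assert abs(e1_back[0] - 1.0) < 1e-12 and abs(e1_back[1]) < 1e-12
```

The same check works for any θ, which is the numerical counterpart of the cos²θ + sin²θ = 1 argument given in the lecture.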
OK: f1 = cos θ e1 + sin θ e2, and f2 = −sin θ e1 + cos θ e2. That's correct; excellent. So, what's your name? Rane, R-A-N-E. Excellent. So the question Rane is asking is: look, f1 and f2 are unit vectors, and e1 and e2 are unit vectors, so obviously this had better come out to be a unit vector. And as it turns out, if you transform between unit vectors this way, what comes out will end up having magnitude one, and the reason is that if you take the magnitude of this you get cos²θ + sin²θ, which is one, so it kind of self-normalizes. Did people understand what I just said? So this is correct. Any questions about this? Does anyone want to know how I got this? It takes a lot of courage to say yes, and I'll show you. It's very simple. By the way, this is a terrible classroom, I apologize, so if you can't see stuff from a distance, just call me and I'll try to make it bigger. We're trying to calculate f1 and f2 in terms of e1 and e2, and there are a couple of ways to do it. One is you just take components and add them up: I can take the projection of f1 on e1 and the projection of f1 on e2 (what's your name? Erica), using these θ's, and then take the projection of f2 on e1 and e2, and that's what you get. Just add the projections and you'll get that; it's in the notes, and I can show you after class.

OK, what is e1 and e2 in terms of f1 and f2? What's a quick way to do this? You could solve the equations, or there's another way to do it: if you put yourself in f1 and f2's place and kind of turn it around, it's like replacing θ with −θ. So if you replace θ with −θ, what you get is e1 = cos θ f1 − sin θ f2 and e2 = sin θ f1 + cos θ f2. Got it? So this is how you go back and forth between two frames, as long as you know the angle between the frames. Now here's the deal: if this frame is rotating with respect to that frame, θ will change, but at any given point in time, if you know what θ is, bingo, you can calculate f1 and f2 in terms of e1 and e2 and vice versa. Any questions about this? "Will this work for the spider?" Yes, it will, but it takes many more steps; we'll come to that. We'll nail the spider now. OK, so this is how you translate, how you transform, between frames, and now we have all the math we need. There's one little trick I want to tell you, which will seem like an empty trick until we do the spider problem; I was thinking about whether to give you the conceptual point first, but since a lot of people asked about getting the velocity, let's just jump into the spider problem, and I'll make the final point I want to make after that. So now what we're going to do is actually try to figure out what the velocity and acceleration of that spider are. Let's do that. This is the corner of the room, and in the corner of the room I have two unit vectors; let's make them a little shorter, these are too long. We'll call the frames E and F, just because we've been using those for Earth and Frisbee.
Right: E for Earth, and this is our Frisbee, which we call F, and it's spinning; we call its basis vectors f1 and f2. So first, a comment: when you draw a picture like this, you must always go out of your way to label the points that are of importance. For example, this point we'll call P; I could have called it C, for center of the Frisbee. Let's call the position of the spider S. Now, have I lost any generality in claiming that the spider is along this basis vector f1? I haven't, because I picked the basis vectors arbitrarily, so I might as well pick f1 that way. OK, so here's the question: how do I parameterize the position of the spider at any given moment in time? What do I need to know to tell you where the spider is right now? Say it loudly. "The relationship of f1 to reference frame E." And what would that relationship be? Excellent: one parameter you need for sure is this angle θ, which we just used up there. What else do you need? If I want you to give me the exact coordinates of the spider, what numbers do I need? If I know the angle, and I know it is a function of time, that's one parameter; what are the other parameters I need? Don't think of motion yet; just think of a snapshot. Just give me distances; we'll get to velocity. This is what I want you to think through: velocities are derivatives of location, so go ahead. Distance, exactly. So what else do we need? We need the distance of this point P from some point embedded on the Earth. It doesn't matter which point we pick, but we need to pick one point that's embedded on the Earth, so I'm just going to pick the corner where I've drawn these axes; I'll call that point O. So this horizontal distance I'm going to call u, and what else do I need? It's two dimensions, so I need the vertical distance; I'm going to call that v. What else do I need? Come on, quick. Excellent: I need this distance from P to the spider, and, making this up as I go, I'll call that L.

Now, and this is something you should do every time you solve a problem like this: what is the position of the spider with respect to that point O we picked on the surface of the Earth? Anybody? OK, so watch this. I'm going to write it out: the position vector from O to S, which is this vector r_OS, is equal to what? "u plus L cos θ..." Keep going: (u + L cos θ) e1, and for the vertical, (v + L sin θ) e2. That's perfectly correct; you did it right. But now I'm going to give you a slightly different way to think about it, which will lead to the same answer. You did a shortcut in your head, which is brilliant, but I'm going to give you a different way that is a little more methodical and also brings out the point I want to make. I'm going to write this vector r_OS, because I'm not as smart, as r_OP plus r_PS. Is that fair? I can break it into two pieces. Then I'm going to say r_OP is simply u e1 + v e2; that's this part. And what is r_PS? Here's the rub: I'm going to write it as, get this, L f1. Does that make you queasy? It's like putting strawberry ice cream on a cheese sandwich. My daughter did exactly that this morning, so that's probably what made you queasy. Now, does this make you queasy? It should at least make you think about it: I'm mixing bases. Is it OK to mix bases? Yes? No? All in favor? All against? I set the "all against" up; I was misleading you, and it's my fault, but I wanted to expose any hidden latent queasiness. It turns out it's absolutely OK to mix bases. "But we defined them up there in terms of each other." Well, it's OK because even if I hadn't defined them, it's a vector; I can add vectors, and that's perfectly OK even if they're in different bases. So what you did was down-convert everything to e1 and e2, but I'm taking the lazy approach, which is to write it like this, just because you'll see in this class that certain things come out if you write it this way. What you did was perfectly right, and in fact I'm going to do what you just said in a moment.

Now, here's the deal: what is the velocity of the center of the Frisbee? First of all, that's an incomplete question; I haven't told you everything you need to know. But what's the velocity of the center of the Frisbee? I can't hear you; you might have said it right, but everyone spoke, so you need to speak louder. What was incomplete about the question I just asked? Yep: I didn't tell you a reference frame. So, what's the velocity of the center of the Frisbee with respect to planet Earth? This is how I'm going to write velocity with respect to planet Earth: an E on the v, and then the point, P. This notation is very important; if I just said "the velocity of point P," it makes no sense, because I could be asking for the velocity of point P with respect to the Frisbee, which
would be zero, right. So the velocity of point P with respect to Earth is equal to d/dt of r_OP. Is that complete, or am I missing something there? Someone in the back, someone in that quadrant: the way I wrote d/dt, am I missing something here? Yeah: I need to say, from here on, when you take a derivative (which we still need to completely define), which frame I'm taking the derivative with respect to, so the E goes on the d/dt as well. And I should have written r_OP; I'm still a little rusty, I've been gone for the summer, so I'm making mistakes, but you caught me, and that's good. "Do you need the second frame label? Why?" That's a beautiful question; hold that thought and remind me if I don't get back to it. But you can see that it doesn't hurt to write it, and it's complete. It turns out that if I flip the frames, it might look correct but it will turn out to be wrong. What's your name? Sam. All right, I'll remember that, and we'll come back to it. And what is the velocity of the spider? How do I write the velocity of the spider with respect to the Earth? Quick: the E-frame velocity of S is equal to E d/dt of r_OS, which is the same as E d/dt of r_OP plus E d/dt of r_PS. So I've done nothing differently from what Sam did; I'm just doing it in a more painstaking way. Nothing special. This is very simple stuff, so if you're thinking "I know this," that's OK; I'm just doing it slowly; nothing profound.

So let's go directly to the first term and analyze it: E d/dt of r_OP, where r_OP is u e1 + v e2. What do I do now, folks? I'm simply trying to calculate this term, very slowly and painstakingly. How do I take the derivatives of the vectors? "You want constants so you can pull those out." Exactly, and e1 isn't just constant, it's constant in E; nailed it. So I can do it by parts: E d/dt of (u e1) is (E d/dt of u) times e1 plus u times E d/dt of e1, and what is E d/dt of e1? Zero, because e1 is constant in E. So this term corresponds to simply u̇ e1, and likewise v̇ e2. Anyone good with this? OK, so that's the first term; we just nailed it in excruciating detail. Now let's write out the other term, E d/dt of r_PS. Is f1 constant in E? No, so strictly speaking I'm in a bit of trouble here. What do I do? Sam actually did it, and he was right: let's write f1 in terms of our wonderful transformation there. Those transformations are very important, by the way; you need to nail them and understand them, and put them on your sheets. In the exams we permit formula sheets, and these transformations are exactly the kind of important tool that belongs there; don't make mistakes with them, because mistakes there cascade and screw everything else up. So r_PS = L f1 = L cos θ e1 + L sin θ e2. Now what do I do? Chain rule, right. Is L constant? Is θ constant? Is e1 constant in E? Take this term, use the chain rule, and read out to me what it comes out to be. "e1 times, in parentheses, L̇ cos θ minus L sin θ..." Is there a θ̇? "Yes: minus L θ̇ sin θ." Very good. "And then plus e2 times, in parentheses, L̇ sin θ plus L θ̇ cos θ." Makes sense.

So here's the trick: when I write vectors this way, it's perfectly reasonable to write u e1 + v e2 and then put an L f1 next to it; I said it should make you feel queasy, but it shouldn't really; it's OK. But when you take the derivative with respect to a frame like E, the first and cleanest way to do it is to down-convert all the contaminating terms which are not in the e1, e2 basis down to the e1, e2 basis, and then take the derivative, because in the e1, e2 basis the basis vectors are constant in E. If you take the derivative of f1 by itself, you'd have to figure out what it is and you'd end up converting anyway. So the bottom line is that we've just computed E d/dt of r_PS, and the velocity of the spider, when you put it all together with respect to the Earth, is: the E-frame velocity of S equals (u̇ + L̇ cos θ − L θ̇ sin θ) e1 + (v̇ + L̇ sin θ + L θ̇ cos θ) e2. And that, my friends, is the velocity of the spider with respect to the Earth. That's the cleanest way to do it, and we used no magic: we used frames and a very basic geometric transformation between frames, and we calculated the velocity. That's it; there's no magic to it. It's painful, and when we get to acceleration it's going to get even more painful, trust me; we're going to calculate the acceleration now. But that's what it is. So, any questions about this? If not, we'll set up the next point. I insist on at least one question at this point; I'm out of breath. Anybody? Totally get it? OK, then I have a question for you: how do you calculate the acceleration of the spider?

Actually, let's take a minute here and just appreciate what we've done. If the Frisbee were not translating, which terms would vanish? Say I was spinning the Frisbee on my finger: u̇ and v̇ would go to zero, and the rest of it would turn into some sort of θ̇ L term; this is the vector equivalent of θ̇ L, which is actually ω cross r, omega cross the L vector. We haven't done any cross products here, by the way, but you can see how it comes out. Does everyone understand what I just said? What if the spider were at the center of the Frisbee and not running away? Which terms would vanish? L̇ would go away, and L would go to zero, so what would the velocity be? Simply translation. Makes sense, right? So this is a composition: it's an addition of the translational velocity and the rotational aspect, so from basic physics you can see it makes sense. OK, now what we're going to do is calculate the acceleration, and now we get into no-nonsense vector derivatives, but there's no magic either; it's just painstaking stuff. We'll introduce the subtle stuff in a minute, actually in the next class. So how do you calculate the acceleration? Yep, that's it, no magic to it: just take the derivative of the whole thing.
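The velocity formula just derived can be checked against a brute-force numerical derivative of the position. This sketch is my own, not from the lecture; the particular trajectory functions u(t), v(t), L(t), θ(t) are arbitrary smooth choices made up for the test:

```python
import math

# Arbitrary smooth choices for the four parameters; any smooth functions would do.
u = lambda t: 0.3 * t
v = lambda t: 0.1 * t * t
L = lambda t: 1.0 + 0.2 * t
theta = lambda t: 2.0 * t

def position(t):
    """Earth-frame components of r_OS = u e1 + v e2 + L f1."""
    return (u(t) + L(t) * math.cos(theta(t)),
            v(t) + L(t) * math.sin(theta(t)))

def velocity_formula(t):
    """The derived result: E-frame velocity of the spider."""
    udot, vdot, Ldot, thdot = 0.3, 0.2 * t, 0.2, 2.0  # derivatives of the choices above
    th = theta(t)
    return (udot + Ldot * math.cos(th) - L(t) * thdot * math.sin(th),
            vdot + Ldot * math.sin(th) + L(t) * thdot * math.cos(th))

def velocity_numeric(t, h=1e-6):
    """Central-difference derivative of the position, component by component."""
    p_plus, p_minus = position(t + h), position(t - h)
    return tuple((a - b) / (2 * h) for a, b in zip(p_plus, p_minus))

t = 1.3
vf, vn = velocity_formula(t), velocity_numeric(t)
assert all(abs(a - b) < 1e-6 for a, b in zip(vf, vn))
```

The two components agree to within the finite-difference error, which is the "no magic" claim made in the lecture: the formula is nothing more than the time derivative of the position written in Earth components.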
So let's do that, because we don't do things halfway here; we do it all. Let's see: the E-frame acceleration of spider S is equal to E d/dt of the E-frame velocity of S. Let's start with the first term; what does this term become? Since e1 is constant in E: ü, yes, keep going, keep going. Did we miss a term? That should give us three terms, right: the L̈ term, the L̇ θ̇ term... oh, are we coming to that one now? Two terms, I'm sorry; go ahead. Is everyone in agreement with this? You did it, I didn't do it. OK, so the e1 component: ü + L̈ cos θ − L̇ θ̇ sin θ − L̇ θ̇ sin θ − L θ̈ sin θ − L θ̇² cos θ. Plus the e2 component, and try to keep it in the same pattern, because I'm trying to pull out a pattern for you here: v̈ + L̈ sin θ + L̇ θ̇ cos θ + L θ̈ cos θ − L θ̇² sin θ + L̇ θ̇ cos θ. No magic; it was just pure hard work. We just took the derivative of the velocity and calculated the acceleration of the spider. And what you just did encapsulates 300 years of physics, believe it or not. There are famous names attached to some of these terms, and I'm going to give you those names now. I said it to the last class and I make the same promise to you guys: if you come up with a term here that the other physicists have missed, I'll name it after the person in the class who came up with it. For example, some of these terms are called Coriolis terms; we'd call one the Rachel term if you came up with it.

All right, so let's examine this. This is the vector form of it. What is this first term, in an English translation? The translational acceleration of the center of the Frisbee, right. And what does this one capture, kind of in English? The spider translating out along the Frisbee: the spider-translation term. These are informal terms, but I'll formalize them later on. Does anyone recognize this term? Yeah, that's called the centripetal acceleration. Remember θ̇ squared? Does that ring a bell? Something spinning accelerates inward: L θ̇², and here there's an L θ̇² sin θ and an L θ̇² cos θ, so that's the vector form of it. It's named after a famous French engineer whose name was Centripetal; I'm kidding, centripetal is just the name. OK, does this term kind of make sense? It's the angular acceleration multiplied by the moment arm, L θ̈, but again written in vector form; and that, for some reason, is called the Euler acceleration. And then we have two terms left over. Anyone remember what those are called? Yep: Coriolis. Here's the deal: they're just two straight terms; if you do the math, they show up. We did the math, they showed up; no surprise. Now, actually, the funny thing is, you see it's minus L̇ θ̇ sin θ, the same term twice, so I can actually make it 2 L̇ θ̇ sin θ and remove the duplicate; and the same term appears twice in the other component, so I can put a two there and remove the duplicate as well. I'm just consolidating terms. There was this French engineer by the name of Coriolis, and I think the history is that when you shoot a projectile from a ship, or from any vehicle, but usually ships, in the northern hemisphere, the gunners found that they were missing to one side, and they had to compensate; and once the gunners perfected the skill and they crossed the equator, they found that their skill was
useless because they would go to the other side and the straight term is referred to as the coris acceleration okay this is also the reason hurricanes spin one way in the northern Hemisphere and another way in the southern hemisphere we'll talk about all this the beginning of the next class but what you've just done with me is from Ground Zero using the most basic basic geometry you have derived terms that go back 300 years or took 300 years to derive so this term this collection of terms is called coris and this is the most general statement of the calculation of acceleration that you will need in kinematics to compute the motion of complex systems and now if you have now look we can we can do frame e to F right let's say those let's say the Earth spinning and there's someone watching us from other another planet well no problem you can kind of compose it and you can you know compute the acceleration with respect to that planet the math will get hairy but there's nothing profound get it questions all right we'll see you next class
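The derivation the lecture just walked through can be checked mechanically with a computer algebra system. A minimal sketch in Python with SymPy, assuming the setup from the lecture (spider position r = u A1 + v A2 + L B1, with B1 = cos(theta) A1 + sin(theta) A2; the variable names here are mine, not the lecture's):

```python
# Sketch only: verifying the spider's acceleration terms symbolically.
# Assumed setup (from the lecture): r = u(t) A1 + v(t) A2 + L(t) B1,
# where B1 = cos(theta) A1 + sin(theta) A2.
import sympy as sp

t = sp.symbols('t')
u, v, L, th = (sp.Function(n)(t) for n in ('u', 'v', 'L', 'theta'))

# Components of the position vector along A1 and A2.
r1 = u + L * sp.cos(th)
r2 = v + L * sp.sin(th)

# Differentiate twice in frame A (A1 and A2 are constant in A).
a1 = sp.diff(r1, t, 2)
a2 = sp.diff(r2, t, 2)

# Project the acceleration onto the rotating basis B1, B2.
aB1 = sp.simplify(a1 * sp.cos(th) + a2 * sp.sin(th))   # along the string
aB2 = sp.simplify(-a1 * sp.sin(th) + a2 * sp.cos(th))  # perpendicular to it

# aB1 should contain L'' and the centripetal term -L*theta'^2;
# aB2 should contain the Euler term L*theta'' and the Coriolis term 2*L'*theta'.
print(aB1)
print(aB2)
```

Every named term (Coriolis, Euler, centripetal) falls out of the blind differentiation, just as in the lecture; no angular-velocity machinery is invoked.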
MIT_2003J_Dynamics_and_Control_I_Fall_2007
3_Pulley_problem_angular_velocity_magic_formula.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation, or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. Before we start, just a few words about the book, the way I teach, reading assignments, and stuff like that. The book is written by a very dear and very brilliant professor at MIT by the name of John Williams... I'm sorry, Jim Williams, Professor James Williams. In my opinion, it's the most complete book for dynamics for undergrads on the planet. There are grad-school books which are more thorough, but this is the best reference book I have seen in how complete it is, and I haven't found any book better. So it's with great pride that we use this book as our text. He wrote it about ten years ago and we kind of brought it back, and he's a very interesting man; in fact, at some point I might ask him to come and give a talk in the class as well. There's a picture of him in the back. Now, the way that we're going to use the book, though, is not in the traditional sense of the way classes are taught. At MIT we tend not to go by the book, because we modernize, we change, we invent new ways of teaching stuff, and we adopt other people's techniques, and so on. But everything that I'll talk about is in the book. So the way we'll follow it is in two ways. First, I will give you reading assignments, which will not correspond exactly with the way I teach it, but when you read it, if you really understand what I'm saying, you'll see what the book is saying. The essence is not different, but some of the terminology, et cetera, is different; I'll tell you what that difference is in a minute. The second thing is, we will use the
book for problems, and the solved problems are terrific. So I suggest, in fact I insist, that you read the book and the sections that I recommend in class. The third point is that we will also give you handouts. These are things that I wrote up last year, because what happened last year was that I was doing the class differently, and, because of a snafu with the publishers, the book didn't show up. So I just wrote up kind of a parallel textbook, not that many pages, you know, 50 to 100 pages, and we'll give those handouts to you in bite-size chunks, and we recommend you read those as well. So now, the reading assignment from the book is going to be like this: I want you to read chapter 1, and I want you to read chapter 3. It might seem like a lot, but actually it's a very quick read. Chapter 1 is the history of dynamics, and it's a gorgeous summary of several hundred years of dynamics; I strongly recommend you read it. You know, the thing about some of these subjects is, the Coriolis force: why on Earth is it called the Coriolis force? If you read the book, if you read the history, it gives you some reason. We write things in a way because of the way they evolved over history, and if you understand the history, you'll understand better why we write things in a certain way. It'll just make more sense, trust me. Chapter 3 is on kinematics. Now, the fundamental difference between the way I teach the class and the way the book does it is in my maniacal insistence on using frames of reference. So for example, I would never write a bare time derivative; I would insist that if you are taking the derivative of some vector, you specify which frame you're taking it with respect to. And this is a modern way of doing it. The book would write the derivative without the frame, and the frame would come from the context, which is okay; that works, as long as you keep track of the context. But the reason I prefer to write the frame explicitly is, first of all, even I get confused about context, and the second thing is, the relative notation works when you have two frames, but when you have multiple
frames, the whole "relative" concept becomes confusing: relative is with respect to another frame, but let's say you have three frames. If you specify the frame and say it's with respect to A, it's very clean. So fundamentally that's where the differences creep in, and once I introduce this notation, you'll notice the way I proceed with things is slightly different. But when you read the book you'll see what the differences are, and you'll understand why I do it this way and why Professor Williams does it the other way; there's no fundamental difference. Yeah? That's actually a much deeper question than you might imagine; let me explain why. All editions are good except the editions which had typos. The reason we had a snafu last year was because Professor Williams went to the Coop and pulled out some of the books there, and he found they had typos, I mean significant typos. So he and I started screaming at the publisher, and the publisher made one mistake after another, and it took us about three months to get the final textbooks in. So there are textbooks floating around with typos, and perhaps we can just put up a little note on how to discover whether your edition is a bad edition or a good edition. So if you inherited it from someone else, you might have a bad edition; but if you have a good edition, I don't care which one. Any comments? Mhm. Yeah, so if it's December 2006, you're good for sure. Anything you bought from the Coop now is good, but if you
inherited anything published, say, in October of 2006, it has some evil stuff in it. Okay. So, you know, do you know what a gimbal assembly is? A gimbal: anything with a lot of degrees of freedom. This Earth thing here can rotate this way, it can rotate this way, and I can rotate it this way; it's got three degrees of freedom. This is a very typical dynamic system; in space systems this is standard. But the moment you get to three frames, the whole concept of "relative" (relative to the first one, or the second one?) kind of breaks down. That's why I prefer to write A, B, and C and just flesh it out. So that's the difference from the book. But the handouts that we're putting up on the web today say it in the right way, or say it in a way that's compatible with my thinking, and we should be good with that. Okay, so those are the reading assignments; do read those, and the handouts. Let me write that as well. Okay. Now, I have one favor to ask of you guys, because I left one segment of my notes at home, and it'll be good for you to help me out with this. Tell me the transformation if you have two frames: A, with A1 and A2 (I may not write A1 and A2 each time, because it's repetitive; you'll just assume it's okay). If you have two frames, we have those transformations. Can someone read out the transformations to me? This is theta. Someone, please, from your notes: first tell me what A1 and A2 are in terms of B1 and B2, and then tell me what B1 and B2 are in terms of A1 and A2. Andrew, do you have your notes? Anyone? Yeah, just convert E and F to A and B for me; I'm writing it as A and B. All right. Just the trick: one sine is going to be negative and one positive; it's skew-symmetric. This is going to be negative, this is going to be positive, these two are positive. That's the pattern, or you can switch them: if theta gets
switched, these negative and positive signs switch. All right, that's just a pattern you want to look for. So what are B1 and B2? Yep. Oh yeah, sorry. Yeah, what's this? You know what's going to happen: this negative is going to switch, that's all that's going to happen, because sine of minus theta is minus sine theta. So these two will switch. I'll just write it down quickly, and tell me if I'm doing it right:

A1 = cos(theta) B1 - sin(theta) B2
A2 = sin(theta) B1 + cos(theta) B2
B1 = cos(theta) A1 + sin(theta) A2
B2 = -sin(theta) A1 + cos(theta) A2

Okay, we'll keep this as a reference, and we'll keep coming back to it. All right, so here's where we're going to start. We'll start where we left off in the last class, obviously, which is: we examined the velocity and the acceleration of the spider on the Frisbee as the Frisbee hurtled through space. We did it in two dimensions: the center of the Frisbee was moving, the Frisbee was rotating, and the spider was also, simultaneously, with respect to the Frisbee, walking in a straight line away. And we computed the velocity and the acceleration. So, if someone could help me here... in fact, let me just draw the picture for you. We had a frame which we called Earth; I'm going to make it A now, A and B. I'm not going to call them E and F anymore. We had a frame B, and the Frisbee was in this frame. Call this point O, call this point P, and call this point, where the spider was, S. In frame B: we call this distance u, we call this distance v, we call this length L, and the angle theta, I think. Actually we said E and F, but I'm just going to change it to A and B, because F is also used for force, and eventually it'll get confusing. We're not at forces yet. We never considered a force in analyzing this. Why do we not even mention the word "force" in analyzing this? Anybody? We are in dynamics. Yep: we haven't assigned anything a mass yet. There's no mass; it's just pure kinematics, it's all geometry. We're trying to
figure out, as a function of theta, theta dot, theta double dot, L, L dot, L double dot, u, u dot, u double dot, v, v dot, v double dot, what the acceleration is. Later on we might assign a mass and say F = ma or something, but we're not there yet. Okay, so this is the picture. Now, what I'd like you to do is read out to me what that final acceleration was, actually both the velocity and the acceleration, but I want you to replace E and F with A and B. Any volunteers? I just want one person. Yep, go ahead. "How should I write it?" Right, so let me ask you a question; I'll come back, I just want to ask something: does a rigid body have a velocity? Yes? It's a trick question; actually I want you to say yes so I can correct you. Say it again. It doesn't sag, it's rigid. What did you say, static? Oh no, no: assume the rigid body is moving. Does the whole body have a velocity? How does that work? Think about it. So, good, here's point number one: there is no such thing as the velocity of a rigid body. There is a velocity of a point on a rigid body. It may be, by coincidence, that all the points on the rigid body are moving with the same velocity, but there is no such thing as the velocity of a rigid body. Get it? This thing doesn't have a velocity; every point has a different velocity. It's like saying, what's the position of a person? Well, the center of mass of the person, okay, I can give you a position, but different points can have different velocities. So when you talk about velocities, the right superscript is always a point; in this case it's the center of mass of that spider that we care about. Only a point has a velocity, on a rigid body or a frame or whatever. Make sense? Okay, so go ahead. Actually, let's break it out: A1 u dot, yeah, plus A2 v dot. Help me write it this way. Then what's the next term? I'm going to write it this way just to point something out to you. Okay, I just
wrote the same thing; I just bunched the terms differently. So this is what we derived as the velocity. All right, tell me what the acceleration was; I want you to break it up like that, and I know what it is in my head. Any other L double dot terms? Sin theta. We did consolidate some of those terms; the L dot theta dot, does it show up again? It shows up twice, right, so I can put a two here. And so then the other one is 2 A2 L dot theta dot cosine theta, and then we have one more set of terms. Is there one more term? Oh yeah, we missed that. Where shall I write it? I'm out of space; shall I forget it? No, I'm just kidding, I can't do that. Give me the other term; I'll put it here: A1 theta dot squared L cosine theta. Yeah, okay, so these are all the terms. Did we miss anything? So I put them up for two reasons. One is, towards the end of the last class I was trying to point out a pattern to you; that's the first thing. The second thing is, I want to keep a record, because I'm going to re-solve this problem using some additional math later on today, and I want this around when we get there. Okay, so first let me show you the pattern again. Let's look at this guy first, the velocity. What is this term? What's the pattern? Just in English, forget the math, just intuition: what's this term? Yeah, Claudio, that's right: it's the velocity of the center of mass... sorry, not center of mass, we have no masses... the velocity of the center of the Frisbee. Excellent. So we've taken care of this guy. What is this term? The spider? Translation of the spider on the Frisbee, yep: this is the translation of the spider on the Frisbee. So now we can write this first piece as the velocity of the center; that's that term. We're going to recognize these terms more and more. You could be completely mechanical about it and just write it out, and you're perfectly right; it's completely reasonable, you're done. Now I'm trying to
tease out some intuition, and when I do the next thing this intuition becomes useful. So this is very fuzzy, hand-wavy stuff I'm doing. So this is the velocity of the spider with respect to frame B, is that correct? Now, do you notice: A1 cosine theta plus A2 sine theta. A1 cosine theta plus A2 sine theta... is that B1? Yeah. So this term is L dot B1. Nice pattern to it, yeah? That makes sense, because this is in fact the velocity of the spider with respect to frame B. And what is this term? Yeah, it's a rotational thingy: it's rotating, so it's the lever arm L multiplied by theta dot. What's theta dot? Yeah, it's the angular velocity. But think about this: I never even used that word. I never even said "angular velocity". Keep that in mind; so far I have not said it. You said it first (Amelia said it first), I didn't say angular velocity. So you don't need angular velocity to do all this; it kind of comes out of the derivatives. It just so happens that theta dot shows up a lot in this math, so we call it angular velocity, we foist a kind of official word on it, and then we use it to simplify the math. I'll do that today. But this is simply theta dot, which is the angular speed, times L. Say again? Who said that, sorry? Oh, bless you, sorry. So this is minus A1 sine theta plus A2 cosine theta, which is B2, see that? Which is kind of the angular-velocity component. I won't write the word here, but it's kind of the component that comes from the angular velocity. Got it? If the angular velocity is zero, this term goes away. So that was some of the intuition we got from this. Kind of makes sense? We did it purely with math; we didn't make anything up, we didn't need any angular velocity, but we got the right answer. If I asked you to do this whole thing again using angular velocities, would you have gotten this answer? Yes. Nothing profound. The problem is,
when you get to the second derivatives, there are all these terms: Coriolis, centripetal, Euler, all this stuff, and you can miss them. But if you do it methodically, as I did, they all come out naturally. You know, there used to be an age when you had the Coriolis effect and you had to add it in; you had the centripetal effect, you would have to add it in; the Euler effect, add it in. But we don't care for all that: we just do the math, and it comes out. Done. But still, let's call out those terms. I won't call them all out because there's no room here, but this term is actually the acceleration of point P, and this term is the acceleration of the spider in frame B. This guy, I won't write it, is the Coriolis term, this is the Euler term, and this is the centripetal term. And the cosine theta, sine theta will become just B1 or B2 or something, and it'll all fall into place. All right, so that was the intuition. Now that we have this, I'm going to leave it, put it aside; we'll come back to it later. But now you have some intuition, and the two pieces of intuition are: we never used an angular velocity, we never formally defined it, and yet we completed all this math and we had a completely good answer. What happened to the angular velocity? Why did we not need it? Anybody? How could we solve this problem without ever invoking an angular velocity? Yeah, that's part of the answer: because you're taking the velocity... Look, here's what I did, folks: I just wrote the geometry out, I took derivatives, and I got the answer. Angular velocity is just a trick to simplify the math, but it's the same thing: if I take derivatives, I'll get the rate of change of the vector, and that's velocity. Do that again, I get acceleration. Angular velocity is just a trick to make the math simpler. Okay, done. Now, how about we solve another problem using this technique before we move on to angular velocities? Let
me take a vote: would you like to do that? Another problem, gory detail, just nail it? Yeah? You feel like... was it too early in the morning? Okay, all right. You guys are going to help me solve this problem, because I don't feel like it yet; I've just had one coffee. Okay, so this problem: AJ and I made it up. We were thinking about what other problem we could solve, so we made one up last night. Here's the problem. You have a pulley that is nailed into place on a wall, so it's not really a rotating pulley; it's frozen. From it you have a rope hanging with a weight at the bottom. And you pull the weight up and you let go. What's the weight going to do? Well, first it's going to fall... well, we pull it up tangentially, like this: you kind of unwrap it, and then you let go. So it looks like this: I pull it as far as I can, because unwrapping makes the cord longer, and then I let go. So the weight is going to fall like this. It's not a perfect circle, it's kind of a spiral, because the cord is winding up. It comes here, then it keeps going, and then it'll swing up, and eventually something's going to happen to it dynamically. But we don't care about the dynamics; this is a kinematics problem. My question to you is: I would like to calculate the velocity and the acceleration of the weight in terms of some geometric parameters. I want you to struggle with this a little bit and help me frame the problem. How would I go about it? What parameters capture the configuration of the system at any moment in time? Just before that: you are given the radius of the pulley, and you're given the initial length of the cord, and it'll wind up. So, go ahead. Yeah, so that could be one thing you could use: you could take the distance from the center of the pulley to the
block, and that could be one number which, if I give it to you, completely describes the configuration of the system. Right. So when you parameterize a geometry, you're trying to find the one or two parameters with which you can completely describe the geometry. That's a perfectly reasonable measure, but is there a more straightforward one? Yep, Claudia: one line that goes from the center of the pulley to the tangential location of the string; that's just going to rotate, and then just the length of the string. Okay, so you're saying this length. I'm sorry, what's your name? Amanda. Amanda said this length, and Claudia is saying, why don't we just take this other length? That's a little more intuitive. That's true: if I give you that length, which is the unwrapped length, the remaining length of the cord, then you know exactly where the weight is. But is there an even simpler measure? Yep, Ted: use the angle. Yeah, the even simpler measure is the angle. And by the way, all of these are perfectly reasonable, and I could have solved the problem any of these ways; they're all completely correct answers. Now it's intuition. This is the only kind of place in dynamics where you need to exercise some intuition, where if you pick one parameter rather than another, the math just becomes easier. So Amanda was right, Claudia is right, and what Ted is suggesting is: use the angle. In fact, an even nicer, even more convenient measure is this angle. Get it? If I tell you what the angle is, you know what the situation is for the system. Any questions about this? It's the angle of the line from the center of the pulley to the last point of contact between the cord and this fixed pulley. If I tell you what that angle is, you know everything about the system. So, the way to think of
kinematics is this: as this thing does stuff, I take snapshots, and I need enough parameters to pick which snapshot I'm in. If I tell you the angle, you've nailed it. So the question now is: can you tell me, in terms of theta, what the position of that point is? I'm going to redraw this, just clean it up, because there's a lot of scratching here. Let's use that angle as the metric, because it makes the math simpler; nothing profound, it just makes the math simpler. Pretty good circle; I have the circle thing going today, I should do more circle problems. Okay, we're going to say this is just a point, we don't care about it, and this angle we're going to call theta. So if I've captured this angle, then I know everything I need about this system. Yes? No? Get it? Okay, any questions about this? Yeah: why not from the top of the pulley? There's no reason why I couldn't. I could certainly do it from the top of the pulley; I could measure theta from anywhere, as long as it's a unique metric, as long as, once I know theta, there's only one solution for which snapshot this thing is in. It's perfectly reasonable, I could do that. Why didn't I? Because, as you'll see, the math will be easier, that's all. Just convenience, laziness; I just happen to know the back door into this puzzle. But there's no reason why you couldn't do that. In general, I should tell you that framing problems as rotations makes the math easier. Okay, any other questions? I want you to think about this a little bit; this is important. Now, if I tell you what theta is, do you need to know what the exposed length is? Why not? Yeah: if you know what the initial length is, which is a constant, and I tell you what theta is, you know what the remaining length is. Right, so you don't need it; it's an additional thing you don't need. In fact, that is what you'll
later see called a kinematic constraint. I could have treated this L and this theta as independent, but they're not. I could have solved the problem as if L and theta were independent, and then later on stated the truth: that L is equal to L naught minus r theta, which is the wind-up. Get it? But I don't have to do that; I can just do it explicitly. So, if the length of this cord at the initial condition is L naught, then what is this length? Anybody? L = L0 - r*theta. Okay, so with that one theta I've captured everything about the geometry. I'm going slowly because you guys need to get this, and if you get kinematics, everything else follows. Nothing profound; I've just written it this way, and we're ready to go. Okay, now I ask you to tell me what the acceleration of this point P is. This is P. So what's the next thing we need to do? I want both the velocity and the acceleration. What should you ask me? Yeah: with respect to what? Right. Okay, I'm going to tell you: I want the acceleration with respect to the Earth, the ground to which this pulley is nailed. Okay, how would you do it? Say it again. Yeah, that point P is kind of the center of mass; assume it's a point mass. Not that it's relevant, but just assume that. Okay, so how do I start? What do I need to do? Say it again: the position. Yeah, find the position vector. Absolutely, always the first step. When you talk about a velocity or an acceleration, it's simply the rate of change of a position vector, whether it's getting longer or rotating or whatever, with respect to some frame. So let's write the position vector. Now, you can write the position vector directly, just using A1 and A2 in the math, or, if you want to make the math a little simpler, you can introduce an intermediate frame. You don't have to; it just makes your math easier, that's all. Would you like to introduce your intermediary frame? Please say
yes, because that's how I did the problem in my notes. Yes? Oh yes, thank you, that's a good idea; I knew you would agree. B1, B2: the intermediate frame here is simply aligned with the string. Okay? Makes sense? I just introduced an intermediate frame, and you'll see why; it just makes my math a little easier. Now tell me what the position vector r-O-P is. Quick, somebody; it's very easy, I know you know it, so just say it. Yeah, let me help you: let's call this point C. It's okay to be painstakingly simple with these things. In fact, the whole genius of dynamics is being able to break a problem down. When you see a problem, you will have a tendency to freeze. What makes you a good engineer (it's like a baseball player, a football player) is your ability to fight that nausea, that panic, and then break the problem down into smaller steps, each of which is tractable. Yes, perfect. Does everyone get this? It's very simple. Remember what I said: take a deep breath, break the problem down. r-O-P is actually this vector plus this vector. What is this vector? It is r, the radius, but in the negative B2 direction; that's this term. And then this vector is L in the B1 direction, which is L0 minus r theta in the B1 direction. Done. Make sense? It's perfectly okay to say "I don't get it, do it again"; I'll do it again. Do you want to go sit next to AJ? It might be a little easier; there's a seat right there. Can you see? Okay. Yeah, so that's a very deep question Amanda asked: does this account for changing theta? Is theta frozen? What's going on here? Where does the change come from? That's right. So here's the deal; this is the thing you need to understand. A is frozen; B is moving. Now, at an instant, B makes an angle theta with A, but B is also moving, so the next instant it's
going to have a different angle, and the next instant a different angle again. So write it in that frozen state, write the math out, and then, when you take derivatives, don't assume theta dot is zero; just give it some number, call it theta dot. Get it? That way you account for the fact that B is actually moving. Make sense, Amanda? This is the sort of thing where, as you do more problems, it gets into your gut. So this is r-O-P. Believe it or not, this was the toughest part of the problem; in fact, if you think about it, if you can do that kind of panic-control thing, you can make the rest easy too. Once you have it here, you're done. So, what is the velocity of point P with respect to A? Can I write the velocity of frame B in A, can I write AVB? Yes, no, maybe, or only on Tuesdays? No, because B is a frame, and it needs to be a point. The point P has a velocity. Yep, Andrew, yeah, good. I said it fast; I said it correctly, I think, but it's easy to get confused here. Here's why this is an interesting problem: when we get confused about frames, think of transparencies, rigidly attached to this block and always parallel to this remaining segment. Make sense? Yep. I can't do AVB; I've got to do AVP. It is a coincidence if all the points in a frame have the same velocity; if a frame has any rotation at all, then different points will have different velocities. So for me to say AVB is an incomplete statement. Get it? Of course, it is possible that, by coincidence, my mistake turns out not to matter. So if I'm a frame and I'm walking this way, you can certainly say all my points are moving; but my arm isn't moving at the same speed, because I'm not behaving as a rigid body. And if I'm spinning with respect to the Earth, my head has a different velocity, my knee has a different velocity. So there's no AVB; it's AVP. Velocities are always of points. Very
important it'll all be intuitive but you know once you do the homework this will all be very intuitive got to do it okay so what's the what do we do now AVP we're trying to find the velocity of Point P we the first step already there that was the hard part what do we do now quick take the derivative right so the derivative is a d by DT of this guy what's the problem yeah right was what you going to say right there's no problem per se it's just that if I take the derivative of this resp this this basis Vector with respect to frame a it's not constant in it it's kind of you now right so I need to rewrite b and a you know in terms of a and um if you if you if I hadn't introduced an intermediary frame right I would have written everything in terms of a anyway right so you might say Sanjay why on Earth did you introduce that intermediary frame if you're going to rewrite it and there would be a fair question to ask and the answer is it just made it easier for me to write that vector and later on you'll see those intermediary frames help a heck of a lot right now it's just a pain in the neck okay so it is minus r what is B2 in terms of A1 and A2 anybody please sorry say it again A1 yeah and then this becomes I'm going to break the terms out okay what is B1 I can't see thank you I you take the derivative of all this with respect to time but now all the a1's and a2s are constant in the a frame right so I'm good and now it's just painful math I'll do the velocity part and I'll let you do the acceleration part okay and just to warm you guys up I'm actually going to have a snap quiz right now and have you do the accelerations I'll do the velocities first to get you going so for right just pain sticking math okay I'm bored so I just wrote that and you guys can figure out what the acceleration is yes the third term I got them mixed up the third term should be this should be a sign and that should be a co sign is that what you're saying Levi yeah if you if you if you know where 
to start how many of you at least have gotten started you know what to do it's not such a big deal okay anyone who you know if you want to talk to me later if you want to ask me now if you're not getting started I can understand that if you can't kind of get going because you don't know wa to start let me know I'll come and tell you or we can talk after class it's not a big deal this is new stuff right a you done you want just put it on the board why don't you put it here because I huh that's okay if you're wrong you're wrong AJ do you mind putting it over there on the yeah thanks no just just put it here just put it in like all different terms there a lot of terms I think you can break it up yeah so AJ is going to write the answer all right you can stop now uh how many of you got going I was just kind of just working through the details okay anyone struggling with even getting started it's reasonable and brave for you to say it and we'll all be the better for it or if you can ask me later on okay so as AJ writes us down I just want to make a point this is very UNS subtle stuff do not look for a great amount of subtlety here but it's perfectly correct it's the long but totally robust way to do stuff you will see in Dynamics and in other fields the long and the UNS subtle way to do something is usually also in some ways the most mechanical way to do it when you for example analyzing the Dynamics of a satellite with multiple moving parts right equations will get long they you know it's only in exams that tend things cancel out and that's because we arrange it to make your math easier in real life things might not cancel out and then you to get smart and make approximations and delete terms okay when you analyze a really complicated satellite with multiple moving Parts this is how it looks and there are there is a there are computer programs known as uh symbolic computation engines like Mathematica maple have you heard of these right they do this for you this first 
couple of weeks in class you will have to go through it you'll get it and then we'll start pulling out the subtle insights the problem with the insights is they're great but you can also go completely wrong if you miss Q you understand it's a shortcut but you know you know how shortcuts are in Hansel and grle right you can get lost so from here on we're going to depart from this boring and mundane approach with which is actually beautiful in its own way and start pulling out some insights and taking shortcuts okay but when in doubt you can fall back to this this is fundamental does that good make sense okay but the reason we want you to do this so that you understand this works it's painful but it works thanks a lot AJ uh AJ just did it live in class because I made the problem up last night okay and I solved it at 5:00 a.m. this morning and left my notes at home so I just did it live there might be a typo here but I doubt it because usually very good and I can tell you that from the appearance there's kind of what I got and it's probably right okay if you find an error you can fix it but this is the way to solve problems in the most fundamental way to solve kinematics problems any questions how many of you find this completely different from where you were heading with your answer anyone H jro yeah yeah he digested it for you a lot of good yeah if you finish it the all come out all right here's the here's just common pitfalls when you take the derivative of a term like this Theta do sin Theta right it's going to break up into two terms Theta do sin theta plus Theta do^ 2 cosine Theta right so that's the kind of thing where you'll if you're not careful you know you fall off the bike you got to kind of concentrate on the bike and stay balanced do the derivatives properly okay does that answer your question Geral yep andrew1 and2 and found out frame beautiful I I paid him to set me up because that's what we're going to do next okay five bucks later okay right okay so 
I'm going to leave that up there I'm going to delete this erase this and now we're going to start looking at a beautiful beautiful thing along the lines of what Andrew was just saying for simplifying the map we have 20 minutes and I think we'll make a lot of progress okay so I said to you earlier I did all this without using the word angular velocity never used it right what once the Theta dot just fell down fell out of my math and you recognize that to be angular speed but I never called it out I never did Omega r remember in physics you see Omega r a lot right and centripetal force is like Omega squ centri acceleration is Omega squ R remember that I never called it out it just fell out directly now let's call it out so the first thing I'm going to do now is Define the term angular velocity so anyone take a stab at it what's angle of velocity in the context of what we've said so far about frames anybody what's your name k s isn't there a a singer by that name is I okay sorry sh sh oh sorry sorry orientation perfect that's exactly right but there's a condition under which that might happen for example let's say I have one frame here and um I need something to toss right I toss this in the air can you tell me what the angle of velocity is at any point in time can you FR has to be what a polar frame of reference yeah you're on the right track actually right track but I haven't used the word polar in the class I don't want to use it anybody else but ktie you're right actually sorry s sh sorry s here's the answer if when you look at two frames two frames there is one line between those two frames that remains I'm sorry the two each in in both those frames the two lines that remain parallel right then that means that one frame can be described as having an angular of velocity with respect to the other and vice versa okay so if this is a frame and right now I'm rotating like this then a line a vertical line through me and a vertical line through the frame remain parallel 
that means I'm rotating you understand now if I take there's a reason I brought my daughter's Globe here right this thing's got one degree of Freedom right it's got another degree of freedom and I can rotate it this way it's got three degrees of freedom so if I do all simultaneously right you can say well San there's no you know what's the angle of velocity right now there is some Advanced geometry which I won't go into you can actually find an angle velocity but it turns out that in the worst case I can say hey look there's an angle of velocity this way and there's an intermediate frame right there's an angle of velocity this way and there's an intermediate the two intermediate frames and there's an angle of velocity this way so I can take any combination of motions and break it down into the composition of pairs of frames Each of which have an angle of velocity with respect to each other get it now you can ignore everything I just said because in this class we're going to remain in two dimensions and in two dimensions any two frames will always have the vertical axis parallel right right yes right if I take two transparencies the vertical axis always remains parallel so that means that there's always an angle of velocity which is the change in angle between the two frames the rate of change of angle right and that's the angle of velocity so let me define angle of velocity to you yeah the angle of velocity we'll liit this definition to two Dimensions very simple right rate of change of angle between [Music] two frames okay but that's just a scalar pointing in perpendicular Direction so s is exactly right so if you have one frame like this a another frame kind of another transparency sliding in it B could be moving could be rotating it doesn't matter at any instant in time if you take the angle and see how much that's rotating Theta Dot and upm Mark it with the vertical axis which could be B3 here the same as A3 here then angle of velocity with respect to a of 
frame B is equal to Theta dot A3 is equal Theta dot B3 okay that's a that's a lot to say in one go but I want you to stare at it for a second and then we'll talk about a little more first of all does that make sense you know I make mistakes on purpose sometimes you need to stare at me and tell me if I that makes sense or if it doesn't does it make sense yes or no yes anyone with a no no reasonable answer could be small probability right so that's what it actually that is correct it is essentially the rate of change of angle right we saw we saw Theta dots show up many times remember we saw Theta Dots here right so Theta do B3 which is the same as the do A3 because A3 and B3 are aligned when you look at two dimensional situations that is the angle of velocity now in the past we have shown we use we've used Omega for angle of velocity we do the same here now let me ask you a question how would you say this whole thing in English say it in English or French for that matter Fring yep um you set it all right except one little thing it's the angle of velocity of B with respect to a right so here's how you say it remember the top left corner this is always the frame with respect to which you're taking the angle of velocity get it so this is the angular velocity of frame B with respect to frame a okay get it now yes teda with B on the left and a on the right negative ah excellent see all these guys I paid them off before class so let's do a couple of kind of twists on this make sense look with respect to the Earth I'm rotating like this if I freeze the Earth seems to be rotating the other way to me right what is this a no what is it Z zero okay that's the angle of velocity any questions now we come back to Andrew's comment he said why don't we take you know this derivative thing we're doing this right painful way isn't there a way to take the derivative of B1 B2 once and for all instead of converting everyone to A1 and A2 and just kind of making the math messy so I'm going 
to tell you now um I'm going to answer that question and this is very profound and you might hear some music in your head when you hear this okay be ready because you'll be like it's struck back by this Vision that I'm going to paint in front of you okay are you ready for a religious experience okay I'm going to write a formula to give you an intuition erase it and write the more general formula because that's the one thing I want you to remember so the intuitive stuff first promise me you won't write this down it's correct but I don't want your mind to enter this rot I want you to think of it more generally so put your pens away you know why uh where the shake can comes from anybody yeah right so you should need put your weapons away do you know where cheers comes from you know when you drink you shouldn't drink but if you did you know where cheers comes from anybody how do you know all this have you just been transported here from the medieval times oh okay yes so the idea was that if you you know if you're drinking with people people still poison each other a lot you know the usual right so they would pour a little bit of drink into each other's glasses to make sure that you would both die right assured Mutual destruction okay um so this is the religious thing right you ready right here's the deal if you take a vector v that is fixed in uh frame B it's fixed in frame B the derivative that respect with respect to frame a remember this is fixed in B don't write this down is this [Music] Tada okay just St at it for a minute I hope you're having this religious experience I promised Gregorian chance all that stuff right okay so a vector if you take the cross product of a vector if you take omega cross you know a Omega B Cross of a vector you're actually calculating calculating the derivative of that Vector with respect to this Frame it's beautiful okay and here's why look at my arm right so it's going in a circle so right here if I move a little bit up the difference 
is this little Vector right so the derivative is kind of that Vector isn't it at 90° to this Vector right but it's 90 along the plane so if I take the cross product of that vector and this Vector then I get this Vector get it it's very beautiful did everyone there get it let me do it again so this is my arm's going in a circle right so this is the plane okay so my arm's going in a circle so at this moment in time snapshot an instant later snapshot the motion of my arm was this right think about it first of all that's perpendicular to this vector and it's perpendicular to this Vector which is angular velocity so if I take the cross product of this and this I'm going to get 90° right so in vector land which is a space you're now inhabiting more and more comfortably taking the derivative of a vector is kind of like taking the 90° to it right and the angular velocity taking the cross product which you've seen is a way to just do that derivative now you can pick up your pencils because I'll give you the even more profound version of this formula and as a joke this is not the official name for this but it's in the notes as a joke so please do not go to you know Lockheed Martin or wherever you end up and use what I'm going to call it we call it in this class and only for this class the magic formula only because to me this is religion okay so the magic formula which is kind of an extension of what I just wrote a minute ago is a d by DT of a vector v is equal to b d by DT of that vector v plus a Omega B cross that vector v okay and the only difference from what I said earlier was I said imagine that Vector was fixed in B as in it didn't change it didn't change in length or anything right so I'm generalizing it so now imagine a vector that's actually changing in B like becoming longer like l in that example right it's getting longer so it has a derivative in B as well you can add it good you understand now why is this profound well it's profound because I told you but apart
from that anybody with you that's right that's right look with this if you need to calculate the frame the derivative of something in frame a but it's naturally expressed in frame B no problem just add a correcting term and you're done got it how much time do we have we have about 5 minutes 7 minutes do you think that's enough to solve that terrible uh frisbee problem with this let's do it yep Ted say it again that's not a velocity that's a vector let's call it something else that's just some Vector just some random Vector that's in B just some Vector it's not velocity please don't assume it's velocity I run out of characters in the alphabet right and V is for vector and V is for velocity so could be confusing but it's just a vector I could use n p Q whatever it's not velocity okay so let's look at this problem frisbee problem again okay and the way I'm going to do it is I'm going to leave that live erase this and use that board that board and that board I want this so we can compare it right so that we end up with the same answer so I'm going to erase all this all this stuff is in the handout that is on the web that AJ and I wrote up so what is the position Vector the spider quick we have 5 minutes quick position Vector of the spider with respect to uh uh from point A come on click U A1 Plus V A2 plus L well we can just write it in B let write it in B okay the intermediate frame so it is l B1 this is the Beast that we need to take the derivative of the problem the pesky term was this guy because we had to down convert it to A's and that's what exploded on us right you don't have to do that anymore so what is uh a velocity of spider is equal to the derivative of this guy with respect to frame a right these two terms nothing special right same as before but is this term where we end up with this kind of mismatch right and that's what exploded on us because of all the conversions so what do we do now use the magic formula and every time you use it you'll hear this 
kind of music it's cool okay so let's use the magic formula so AVS tell me what does it become it's actually oh let's write it in full glory right it's equal to b d by DT lb1 plus a Omega B cross lb1 get it that's use of that formula the magic formula where instead of vector v I have lb1 and I just so essentially instead of taking the derivative of this guy I can take the derivative with respect to B and use this correction term get it yes you got to be really copacetic with this have you heard the word copacetic before yeah where did you hear it like seventh grade you know I've been looking in the dictionary for years and I haven't found it I learned it from a guy called Alex Slocum who teaches 2.007 and he uses it all the time and I just like it let's just Define it stipulate that it's a valid word okay good okay so you're copacetic with this good now what is a Omega B quick in Vector form write it for me what is a Omega B it's what Theta dot huh Theta dot A3 or B3 remember this guy is B1 so it's more convenient to write this as B3 right so this right so AVS what is this and what does this become was that a whole lot easier or what done is that the same answer as before yes is that beautiful now how would you find the acceleration of S I have nearly done 1 minute let's try and do it I'll redo this in the next class but I'm going to go really fast and for those of you who are with me for something like this okay we'll do this in the next class I you know I just wrote it really fast okay we'll stop here we'll pick it up at the next class um and if you don't understand that last bit are you good with the are you good with AVS everyone's good with it I just did that again all right I just did it fast we'll do it again in the next class see you guys
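The "magic formula" from this lecture (known more formally as the transport theorem) is easy to sanity-check numerically. The following is a minimal sketch, not from the lecture or its handouts — the sample motion l(t) = 1 + t, θ(t) = 2t and every function name here are made up for illustration. It writes b̂₁ in A-frame components, confirms that for a vector fixed in B the A-frame derivative is ω × b̂₁, and then checks the full formula ᴬd/dt v = ᴮd/dt v + ᴬωᴮ × v for v = l b̂₁:

```python
import math

def cross(u, v):
    # 3-D cross product of tuples
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

def b1(theta):
    # b1-hat written out in A-frame components (2-D rotation by theta)
    return (math.cos(theta), math.sin(theta), 0.0)

# made-up motion: l(t) = 1 + t, theta(t) = 2t  =>  l_dot = 1, theta_dot = 2
def v_in_A(t):
    return tuple((1.0 + t) * c for c in b1(2.0 * t))   # v = l * b1-hat

t, h = 0.3, 1e-6
omega = (0.0, 0.0, 2.0)            # A_omega_B = theta_dot * a3-hat

# 1) vector fixed in B: A-frame derivative of b1-hat equals omega x b1-hat
num_db1 = tuple((p - q) / (2*h)
                for p, q in zip(b1(2.0*(t+h)), b1(2.0*(t-h))))
ana_db1 = cross(omega, b1(2.0*t))
assert all(abs(a - b) < 1e-5 for a, b in zip(num_db1, ana_db1))

# 2) full magic formula: A d/dt v = (B d/dt v) + omega x v
lhs = tuple((p - q) / (2*h) for p, q in zip(v_in_A(t+h), v_in_A(t-h)))
B_ddt = b1(2.0*t)                  # derivative seen in B: l_dot * b1, l_dot = 1
rhs = tuple(bd + oc for bd, oc in zip(B_ddt, cross(omega, v_in_A(t))))
assert all(abs(a - b) < 1e-5 for a, b in zip(lhs, rhs))
```

The finite-difference derivative on the left matches l̇ b̂₁ + l θ̇ b̂₂ on the right, which is exactly the AVS expression worked out on the board: the B-frame derivative plus one correction term, with no conversion of b̂₁, b̂₂ into A components needed.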
MIT 2.003J Dynamics and Control I, Fall 2007
Lecture 8: Single particle, Two particles
the following content is provided under a Creative Commons license your support will help MIT open courseware continue to offer highquality educational resources for free to make a donation or to view additional materials from hundreds of MIT courses visit MIT opencourseware at ocw.mit.edu I want to ask you guys something what are the applications of Dynamics the subject that we're studying applications homeworks and exams I know that I mean beyond that following yeah trajectories so missiles stuff like that anything to kill people you need Dynamics what else build a what building a car okay that's good good and what for that's actually that's right but what for well actually in automotive design right uh Dynamics plays a very important part because it's not rigid body Dynamics it's a bunch of rigid bodies with springs the Springs are called starts with an S suspens suspensions right and there's a trade-off between how comfortable the ride is and how tightly the car handles has anyone driven a BMW it tights it's very tight handling right it's a beautiful car huh beautiful love it anyway uh the point is that that's actually hardcore Dynamics satellites um CD players okay and in Dynamics there two sides to it one is here's the system what is its trajectory going to be in other words how will its various degrees of freedom behave over time if you you know stretch it and let it go and it goes twang you know boing right and you want to figure out how it goes in time that that's analysis so that's the first kind of the yin you know the yin yang the yin of Dynamics the Yang is you here's a system how do you modify it so that it behaves in a way that you want all right and that is called it starts with a C control so control is given a dynamic system how do you put an actuators Motors Rockets Etc so that the trajectory is what you wanted to be so can you give me an example of a control system H actually suspensions of cars tend to be passive although some suspensions are 
active but actually in a car cruise control right cruise control you you know if you hold the throttle at a certain level depending on the road conditions the wind the slope the car is going to uh go in a certain way right but then cruise control puts in this you know this artificial foot essentially on the throttle and it tries to maintain the speed at a constant level right so we're doing Dynamics end of the course we're going to solve the dynamical equations 2.004 is controls where you actually try and put in extra things like cruise control so to make the uh system behave in a way you want it to behave so if you have a rocket open loop right it's called open loop and an open loop system is a system where you don't close the loop you just have a rocket you have you know boosters you fire it it goes where it wants to go that's an open loop system a closed loop system is you might have a GPS system some sort of you know gyroscope in it and you try and change the um the burn you try and change the thrust so that the rocket goes where you want it to go so that it deposits a satellite in some location where after that the Dynamics can passively take over and the you know satellite does what you want it to do you understand right you put the satellite in orbit now after that it's pure Dynamics by the way do you know why a ballistic missile is called a ballistic missile or a missile why is a ballistic missile called ballistic say it again that's right once you launch the missile it's ballistics ballistics is trajectory as in passive once you launch it it's on its own you know so the missile only the the uh the thrust is only for the first few minutes of its um journey and then it's ballistic ballistic means trajectory it's all Dynamics after that you control it initially to make sure it's pointing in the right direction compensates for wind Etc now a lot of missiles there's a little bit of fuel left so in the end you can do some adjustment right but a ballistic
missile most of its journey is ballistic okay okay so in the last class we did um I should also warn you I'm a little woozy today uh I had some serious drugs this morning prescription drugs and the result is that I might I think Sam was saying if I might start babbling but you won't notice because I Babble anyway right so all right Sam didn't say that I said that I bet but anyway s was very respectful okay so uh in the last class we did uh uh we did a problem essentially the whole Pro class was we looked at you know the skier situation and there's a handout AJ's published right on the web where we do the energy formulation and we solve the problem we do the uh and in in um actually we did the energy formulation sorry in class but he also solves the problem the direct way and by coincidence the problem not comes out to be the same is it a coincidence no it's not I'm joking it's not a coincidence the answer is the same both ways so today we're going to now we're going to kind of flow the accelerator a little bit we've been slow and steady we've kind of pounded the concepts in and now we're going to kind of take it up a notch and we're going to get into uh um rigid bodies but the way we'll do it is we this this lecture is also but points I'm going to introduce to you the concept of angular moment and angle momentum there's a little bit of a surprise there it's good it's good news because there's something about angle momentum you didn't know all right and I want you to know it and after that once you have angular momentum for a point and angular momentum just a point Mass angular momenta have only limited value all right because linear momentum kind of solves all the problems but the angular momentum concept is our stepping stone into Dynamics of rigid bodies because then you can start looking at two and three particles more easily so we'll start with that okay any questions about the last class comments ridicule jokes nothing okay all right so so today we're going to 
do angular momenta we're still in points Point masses but this is the stepping stone the link to rigid bodies then we'll do a problem we're very problem oriented in this class and then we're going to do yet another problem but we're going to do multi-particle okay now just a couple of announcements uh one is that Pset 4 was due uh is due on Wednesday the 10th and the reason is there's no class next Monday and um we posted the solution to that problem from class uh just one last thing I won't have office hours today only because you don't want to hear me babble I'm really sick um but I'm also going to change my office hours um several people suggested the timing isn't right um so we'll talk about it at the end of class but I I might go to like a Monday office hour um like later on a Monday or maybe Wednesday later or something like that okay does it make sense would you prefer it not be midday Wednesday everyone who's in favor of my changing it away from midday Wednesday oh okay all right get it okay but just one final thing I you know historically AJ and I and some of the other TAs we've done the following we come in on Saturday sometimes just before the exam like a week before the exam it's an open session it's not like it's not mandatory it's nothing we'll come in for an hour and we'll just shoot the breeze solve problems just hang out like at 3:00 p.m.
on a Saturday is that something that you you folks would find useful okay we'll do that okay um so here's what we're going to look at what no okay we're going to see our first angular momenta and torques today so consider a frame consider a point fixed in frame and this is an inertial frame and a point fixed in that frame o okay now let's consider a particle P by the way when I say particle a point mass or in context point I'm referring to the same thing essentially I'm referring to something with no Dimensions but with a finite Mass you know that someone asked me the other day and I just want to be sure to say this and let's say that you have a particle Point Mass heading that way some direction and let's define its velocity we'll call it a v p okay so I have um two questions both of which you probably know the answers to the first is what is the angular momentum of that particle just from your memory go ahead say it aha so I need to tell you which point right so let us say I could have done it around o but here's the deal when you do angular momenta it's very useful to pick some point that's convenient you'll see this a lot right if I'm trying to calculate the angular momentum of that door I prefer to do it about the axis for various reasons right forces vanish all sorts of cool stuff happens so I need to tell you which point I just want to to pick a random point and do it that that way so let's say it's Q so what's the angular momentum of particle of Point p and by the way let's assume the mass is m what's the angular momentum quick say it huh l v m yeah but now let's do it in Vector form that's correct by the way how would you define it you can say it I you know yep R well R is a vector should angular momentum be a vector very good excellent it is this it's actually the by the way m is the mass the point is p so we call it rqp so I'm going to define the angular momentum and I'm going to write it with a little H okay don't don't ask me why angular momentum is written
as H I guess it's angular momentum something right it's a vector I'm going to write little H because I'm referring to the angle momentum of a particle it's a little U notational thing for me that when we refer to multiple particles we use capital a single particle write little so little F is for a single force on a single particle okay this are just definitions no profound concept here um of particle P about Point Q very important what else do I need to say I'm going to have to say frame frame a which is in this case inertial Okay is equal to let me raise this R the vector QP cross what M ABP and that is what the angle momentum okay it's a definition folks this isn't some sort of derivation it's a definition okay and um I'm going write it as a definition I'm defining this term for you I can choose to call it whatever I want I called it angular momentum okay and I'm telling you it'll be useful i't told you why it will be useful just take it from me for the time being I need the a here because there's an a here because if I use a different velocity to you know it's going to be a different angle of momentum um P because that's a point and Q because that's the point I'm thinking it about that's how I'm going to write it questions okay I'm going to Define one more little concept which you've seen before I'm just trying to put it in our notation I'm going to define torque it's purely a definition so I'm going to define the torque caused by let's assume this particle is moving because there's a force on it I'm going to call this Force F Vector P it's the force in particle Peak right so can anyone Define to me what the torque is going to be just from memory from stuff you've seen before you know it just say it R cross F done right nothing new I'm just writing it there by definition two definitions and now I'm ready to go okay and torque is something I made up too right you guys have an intuitive feeling about it I just made it up I could have called it a sanj but I'm going 
to call it a torque right so now let's examine the relationship between angular momentum and torque yep it's a vector sorry uhuh uhhuh that's good I didn't write it so what else do I need to say about torque by the way I'm going to write torque on a particle with a tau because Little T looks like time so I write it as a tau torque on a system we'll write with a capital T going forward okay now I missed something there what else do I need to say on point P or particle P what else with respect to Q now do I need a frame no and why is that anybody say it again that's right we assume we do it from an inertial frame and if you look at the definition the a doesn't show up anywhere right so it's the same the torque is torque is a torque all right okay no raised hands so now let's try and relate these two things right and the analog is this is like momentum this is like force except this is kind of this rotational R cross thing right how do you think they're related just now intuition just taking the analog a little further the analogy a little further say it you know it so you would think and that's generally correct torque is equal to rate of change of momentum right angular momentum just like force is equal to rate of change of linear momentum so let's try and prove it all right and the MIT way is to do it exactly right using all the mechanisms we've used and guess what you're going to find a stray correction term which you can only ignore in some cases okay in other words torque is not always equal to rate of change of angular momentum there's a correction term there are conditions under which it's a rate of angular rate of change of angular momentum but not always you understand because you know it's an analogy so it all kind of makes sense but there's a little you know pesky little term we'll point out okay so let's uh write it out so um we have defined a uh so let's take the derivative of okay first of all if I take the derivative do I need to say which frame I'm taking the
derivative in yes and this thing is going to be I need I'll put it out big let say scaler does that make sense I'm taking the if I'm going to take the derivative of this guy I got to take the deriva to the right side there two pieces to it I need to use the uh chain Rule and yeah nothing special right everything is completely reasonable according to the laws of math and physics that we've learned so far now what is this folks what is this this this thing acceleration right so this is going to come out to be so now I'm going to do the next line this is going to come out to be M or in fact I can re you know I can just kind of write the terms simply like this R QP cross m a acceleration of Point P right that's that term H that's right we'll write it yeah we'll get there that's exactly right okay exactly you're getting but you're you're exactly right so give me I'll write in The Next Step all right let's write this what do we do with this guy it's an ugly term right what do you do with this guy here's what we're going to do check the out I'm going to write see that Vector is this guy I'm going to write it like this that Vector is this guy right that's rqp I'm going to write rqp as r o p minus r oq go with me because you know that's right I'm leading it on a path which you would think I mean I'm not doing anything wrong why I'm doing that will be obvious in a second okay everybody Lauren everyone yeah okay so this I can write as this this thing a d by DT of r o minus r oq cross M A VP you understand I haven't done anything wrong I'm just going down I'm doing something odd but not wrong because I know with where it needs to end up question yep very likely there shouldn't be another M here yes uh well where should the M be well I put the M here I just you know I'm moving the M around okay is that cool so let's do be to be clear this whole thing is this term and this is this term now let's let's write this The Next Step what is this what this m a acceleration of P or 
acceleration of p with respect to a hm H it's the force on particle P right if before we did all this kind of cross product nonsense this is simply the force so this is force on particle P right cross product rqp yes now so I'm just going to put a dotted line so you know that that's what this is is this I'm just going to write it out for you and I don't think you'll disagree let me just write some draw some lines I'm trying to save space as I said there's a really badly designed classroom in the sense that there's not enough room but it's a huge classroom so I'm going to write it big right and I'm not that neat so you're going to have to bear with me so this is a velocity of Point P minus a velocity of Point Q cross product M velocity P I'm going to rewrite that because I have a little more room so it just looks a little nicer make sense and now I'm going to write the final step what is the cross product of something so what what will this so we have to Let's expand this right associative what is AVP cross AVP excellent so that term is going to vanish what is avq cross AVP this this is what it's going to be so what we're going to end up with is this minus a VQ cross what is mavp is a linear momentum right so it's the momentum of particle p with respect to a right plus what is this guy look here what is it okay so so far I've done things that might be like why is he doing that but you you can't disagree with what I did now let me show you why I did that all that what this tells you net net is that if I now move this to the left hand side check this out I'm just rearranging terms get this guys plus a pesky term so tell me is a to work equal to rate of change of angular momentum nor in general right you know what 99% 99% of all engineering graduates who take a car class in Dynamics Berkeley Stanford Princeton might not even see this there is this term and you need to know about it okay it just so happens the term vanishes in many situations but it's key that you know 
see for Force f is equal to D by DT of P momentum completely coer Crystal Clear when you come to angular momentum it's say artifice yeah there's this funky term which vanishes let's identify the conditions but you need to know it exists when do you think this vanishes so what is this so what is what it's saying is look let's examine this term first what it's saying is look let's say this is an inertial frame right and uh this is the Earth planet Earth where most of us live many of us in fact and that's the corner right o this is the corner o and I see a satellite flying overhead and I calculate its angular momentum and I take its rate of change and I say hey that's equal to torque right okay well if I do the same thing instead doing it about o if I do it about a car that's driving down at constant velocity it's also inertial I'll get a different angle of momentum weight of change and the torque will appear to be different right unless I put this correction term in you understand it's very interesting and basically what happens is if I'm in that car the angular momentum of that satellite will seem a little less because the car is kind of keeping up with the satellite right you understand but the torque hasn't changed that's where you need the correction to okay so question to you when does this term vanish okay so let's call this let's give it a nice official sounding name how about we call it pesky ter that's nice and you know when you work at JPL you can say you know the pesky term i' be very impressed right okay the pesky term vanishes if it's astonishing that no one thought to give that term a name again stunning vanishes on under the following conditions one a V Q is equal to zero what else exactly condition two has in general and a sub example of that is if Q is equal to P obviously right if I'm taking the moment angular momentum but myself it's going to be zero right but it's also not going to be changing I mean the correction term vanishes it's kind of silly 
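The identity derived above, τ = dh/dt + v_Q × p, can be checked numerically. This is a sketch of my own, not from the lecture: the particle's trajectory and the moving moment point Q are made up for illustration, chosen so the pesky term is nonzero.

```python
# Numerical check of  tau_Q = d/dt(h_Q) + v_Q x p   (the "pesky term" identity).
# Particle P follows r_P(t) = (t, t^2, 0) with m = 1, so F_P = m*a_P = (0, 2, 0).
# The moment point Q moves: r_Q(t) = (2t, 0, 0), so v_Q = (2, 0, 0) is nonzero.

def cross(a, b):
    """Cross product of two 3-vectors (tuples)."""
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

m = 1.0

def r_P(t): return (t, t*t, 0.0)
def v_P(t): return (1.0, 2.0*t, 0.0)
def r_Q(t): return (2.0*t, 0.0, 0.0)

v_Q = (2.0, 0.0, 0.0)
F_P = (0.0, 2.0*m, 0.0)                  # m * a_P

def h(t):
    """Angular momentum of P about the moving point Q: h = r_QP x (m v_P)."""
    r_QP = sub(r_P(t), r_Q(t))
    return cross(r_QP, tuple(m*v for v in v_P(t)))

def dh_dt(t, eps=1e-6):
    """Central finite difference of h(t)."""
    return tuple((hp - hm) / (2*eps) for hp, hm in zip(h(t+eps), h(t-eps)))

t = 1.0
tau = cross(sub(r_P(t), r_Q(t)), F_P)            # torque about Q
pesky = cross(v_Q, tuple(m*v for v in v_P(t)))   # v_Q x p
lhs = tau[2]
rhs = dh_dt(t)[2] + pesky[2]
```

At t = 1 the torque alone (z-component −2) does not match dh/dt (z-component −6); only after adding the pesky term (+4) does the identity close. Taking Q fixed instead makes v_Q zero and the naive τ = dh/dt holds.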
But the reason this will become important is that when you look at rigid bodies, you'll actually end up doing exactly that. So those are the two conditions. — "What if Q is fixed?" — If Q is fixed, v_Q is zero, so yes. Now, it just so happens that we often take angular momenta about fixed points — like the hinge of this door — so v_Q is zero and everything's good; or the velocities are parallel, so it vanishes. But often it's not parallel, and you need to be careful. That's the point I want to make. This term will crop up later, and we'll either account for it or get rid of it.

Yes? — Absolutely, you could. See, the problem here is that we're calculating the velocity with respect to reference frame A, but Q is moving with respect to A. Say there's a car: if I'm sitting here, calculating velocities with respect to me, but taking angular momentum about the car, that's when the problem occurs. But if I'm sitting in the car and calculating velocities with respect to the car, then everything's fine. You understand?

OK, why is this important? Let me tell you why this is important in dynamics. Inertial frame; my arm is a robot: this is one link, one degree of freedom; this is another link, another degree of freedom. That's a robotic arm. Now, you'll be very tempted, when you do problems, to take the angular momentum of this arm about this elbow point — trust me, you will. But tell me: if the arm is doing this, moving up and down, is the velocity of that point zero with respect to the inertial frame? No. Is its velocity necessarily parallel to the velocity of the end of my robot? No. So it's not kosher — you can't do it; if you do, you need to include that term. And you've got to figure that something as simple as my arm robot is a very typical dynamics application. Common mistake.

OK. Now that we've done this, let's figure out why angular momentum is useful — let's solve a problem. Any questions about this? Bottom line: if you're taking angular momenta about moving points, be careful. In many conditions it'll be OK, and we'll nail down those conditions — in fact, that's the whole point of the next two weeks: where you can take torques about and where you can't. It turns out you can take torques about centers of mass; we'll nail it, but this is the reason why. Any questions?

OK, so let's do a problem. You know, it's strange — you never know whether it's going to be hot or cold; that's why I'm falling sick. We've got to complain to MIT about the weather — it's ridiculous; maybe they can change it. What do you think? There are experiments right now in China, where the Olympics are going to be next year, where they're actually trying to modify the weather. Let me tell you this other story. A lot of MIT grads — old friends — work in very important positions all over the world. For example, the foreign secretary of the new British government is an MIT grad — Miliband. Benjamin Netanyahu, who might end up being the next prime minister of Israel — MIT grad. Kofi Annan — MIT grad. And one of my friends helps Congress analyze things from a physics point of view. When Katrina occurred, someone asked whether it would be possible to change the temperature as a hurricane approaches — to, for example, dissipate it. You have a hurricane heading for Florida: could you turn on a few air conditioners and make it disappear? She did some analysis and showed that you'd need something like a nuclear weapon — the largest nuclear weapon ever conceived — to even impact it by 2%, because of the energy in a hurricane. All right, I told you I'd babble.

Let's do the problem. Imagine a table — you notice I've drawn it in perspective; it wasn't intentional, but it looks good; I see everything in perspective right now. There's a hole in this table, and the table is frictionless. A hockey puck — basically a point mass — is attached to a string; the string comes through the hole, and someone's holding it underneath with their fingers and pulling down on it. But the puck is also spinning — you kicked it; it's frictionless, no losses — so it goes around; you gave it some initial velocity. Let's draw it more schematically — a schematic drawing isn't necessarily realistic, but it captures the physics more easily. The puck is here; call this angle θ. The initial length is L1, and the initial speed — we'll call it a scalar, because this is how I'm defining the problem — is v1. When we actually solve it we might have to define a vector. And the question is: as the puck goes around, this person pulls the string down, so it spirals in and ends up at a new length L2. What is v2 going to be? I'll let you mull that over for a second. — "Is it a constant force?" — I don't know; that's a good question, but I'm not saying. All I'm asking is: what is the velocity of the puck when the person has pulled the string until the cord's length has gone from L1 to L2, where L2 is smaller than L1? (It doesn't have to be, but let's assume it is.)

So how would you go about solving this problem? Let's forget angular momentum and all that stuff for a moment; let's examine it from a basic-intuition point of view, from what we've studied so far. First of all, when is linear momentum conserved? In general, linear momentum is conserved when what condition occurs? No external force, right. So let's study this particle as it moves around: does it feel an external force? Let's do a free body diagram on particle P. As the particle moves around, intuitively, which direction is it accelerating in? Kind of centripetal — it has an acceleration toward the center. What else is happening? Potentially a tangential, Euler-type acceleration as well — maybe it's slowing down or speeding up; we don't know. So there might be some net acceleration in that direction too. Now, in 2D, where does the force on this puck come from? The string — the string applies a tension force on the puck. Any other forces? Gravity is not an issue here. Any other forces? No. So the only direction it can really accelerate in, in fact, is toward the center.

So there is a force on the puck — is linear momentum conserved? No. Is it conserved in any direction? How about tangentially? Yes — tangentially it is conserved; it's just not conserved toward the center. But the problem with the tangential momentum is this: when the puck is here, tangential momentum is conserved this way, but when it comes around here, the direction has changed. The tangential direction is always changing. So is momentum conserved instantaneously in the tangential direction? Yes — but the tangential direction, by definition, keeps changing. So can we apply linear momentum conservation conveniently here? No — which is basically where angular momentum comes in. When things go in circles, or kind of circularly, angular momentum is a nice way to do instantaneous linear momentum conservation. That's basically what it is.

So: is angular momentum conserved? Let's think about it — is there any torque on this particle? — Yes, go ahead. — Well, there are no forces in the tangential direction, are there? — "But doesn't the velocity change?" — Ah, but look at it this way. Say x–y is the surface of an ice rink. Suppose a blower is putting out a strong wind in the x direction, and I kick a hockey puck in the y direction. The wind will cause it to accelerate in x, but the velocity in y, assuming no drag, will never change. Linear momentum conservation is a vector equation: even if momentum is not conserved in one direction, if there's no force in another direction, it's still conserved in that direction. So for our puck there's no tangential force, so instantaneously the tangential velocity stays the same — but then the puck moves to a new position, the force points toward its center again, and it's a slightly different tangential direction. Linear momentum is conserved instantaneously but not continuously, because the direction is changing. So we need a rotational momentum — which is what angular momentum is.

Now examine this particle: the force on it at any point is F_P, and it points inward. Call the hole Q. The torque on particle P about point Q is τ = r_QP × F_P, and F_P points along the line from P to Q. So what is going to happen to that cross product? Zero, right. So there is no torque
on this particle at any point in time. That's just a very long way of saying: when things go in circles and the only force is radial, the cross product vanishes — which is why this is such a convenient formalism. It works in situations where the force always points toward a center. I don't get the feeling everyone's convinced — there's a frisson of disbelief. Or it could just be that it's morning. Do you get it, yes or no? Yes? OK.

So the torque is zero. Now, what is the right-hand side of our equation? It's zero too, which means

0 = d/dt h_P/Q + v_Q × p.

Now look at that second term: is point Q — the hole — moving? No, so that's zero. And that's what I mean: in a lot of situations the pesky correction term doesn't make a difference; you just need to be careful that it exists. In a lot of situations angular momentum is conserved precisely because that term goes to zero. Which means d/dt h_P/Q = 0: angular momentum is conserved. Understood? Now, if I said the table was sitting on a flatbed truck that was actually moving, then I would have had to do one of two things: either calculate that term, or attach my frame to the truck — one of the two. That's the gotcha there.

So, long story short, using a lot of notation, what I've said is: hey, angular momentum is conserved. You do this in physics, but I've just done it in a very precise way, so in the end there's no surprise. The whole point is to show you there could be surprises — be careful. Any questions? All right, snap quiz: in the next three minutes I want you to calculate the final velocity for me. Literally three minutes, because I have to do the dumbbell problem. You know the irony? You could have done this problem before you took this class; the only difference is that now you know all the ways you could have done it wrong — all the pitfalls you need to avoid. Very simple, guys: calculate the angular momentum at length L1 and speed v1, calculate the angular momentum at length L2 and speed v2, equate the two, and get v2 in terms of v1, L1, and L2. [Bless you — all of you.]

How many of you have done it? OK, good. Actually, having asked you to do it, I'm going to change it a little bit to make it more interesting: I won't give you the initial velocity; I'll give you the initial angular speed, θ̇1. So I'm solving a slightly different problem, but it's the same problem in terms of the math. If I give you the initial angular speed, then the angular momentum is

h = (L1 b1) × m (ω × L1 b1),  with  ω = θ̇1 b3.

If I do all the math, the angular momentum comes out to be a vector out of plane: m L1² θ̇1 b3. Initial angular momentum equals final angular momentum, so L1² θ̇1 — sorry, there's no square on the θ̇, what am I doing — L1² θ̇1 = L2² θ̇2, which means

θ̇2 = (L1/L2)² θ̇1.

You understand? I just did it with the angular speed θ̇1 instead of v1; you could have done the same thing with v — it would just be proportional. What this means is: if I halve the length of the cord, the angular speed goes up by a factor of four. And the reason that's interesting is that when you watch a figure skater do the — what's it called — the spin, thank you, when they pull their arms in, their angular velocity increases with the square of the ratio, which is why it's so spectacular. I just wanted to note that. So that's angular momentum conservation — and you've also seen, by the way, that angular momentum is a cross product, so in 2D it's a vector out of the plane. You see that? OK.

Now: is energy conserved for this particle? Why not? Think about it — has the energy gone up or gone down? Well, the speed has gone up linearly, so the kinetic energy has gone up with the square of the speed. So energy is not conserved. Where did the energy come from? Right — you were doing work by pulling the string. Potential energy is constant, so the kinetic energy changed. You can also do this with energy, by the way, but for that you need to formulate the problem differently — you need to figure out what the forces are, what work was done, and so on. So there are three ways to solve this problem: you could do it using Newton directly, without angular momentum; you could do it using angular momentum, which we did; or, if you formulated the problem with enough detail, you could do it using energy — we're not doing that. For example, I could have said: assume there's a weight pulling the string down instead of a finger; I could have formulated it that way.

The only thing is: does the particle have a velocity toward the center, yes or no? Well, not necessarily — I could have brought it to a state where I stopped pulling, so it's just going in a circle at the new radius. But I could also keep pulling, in which case it does have a radial velocity, and that would go into the energy term. So for energy you need a little more detail about what I'm doing here. Any questions? — "Can you get the angular momentum equation from F = ma?" — We did, in fact. The angular momentum formulation came precisely from F = ma. This is the nice way to look at it: angular momentum is simply F = ma with an r-cross in front of the F and in front of the ma — that's essentially where it comes from. Effectively that's what I did; I just gave you a canned way to do it instead of deriving it each time. You have a question? — Yes, on the board that was meant to be the velocity of P, not the acceleration — thanks for catching that; I told you I'm on drugs. And there's a mass missing as well — there you go; the m's will cancel out anyway. Go ahead. — Yes, it could have a velocity inwards. That's a good question — in fact I kind of skimmed over that when I was stating the problem; I didn't state it, and it's my fault. So let me go ahead and state it very clearly — I said it
fast when I was talking about energy. So, great question: if the particle has an inward velocity when I look at the L2 state, then I've missed some terms. If, however, I state — which I did not; I screwed up, I made a mistake — that you start at length L1, just going in a circle, then I pull it in, bring it to length L2, stop pulling, and wait a second, and then ask for the velocity — then it applies, because there's no inward velocity. Otherwise you're right: this would not necessarily hold — although that term might cancel out anyway. Go ahead. — "Does it work even if there was an inward velocity?" — For the angular momentum, yes: even if there's an inward velocity, it cancels out, because you're taking the cross product of a radial velocity with the radial position vector. But I should have stated the total situation, and I didn't. OK, any questions?

Get it: angular momentum conservation, linear momentum conservation — it all comes from F = ma. All of this is F = ma; so far we've done nothing beyond it. Energy comes from F = ma. These are all just tricks.

OK, so with that, let me set up a multi-particle system. This is a stepping stone into rigid-body dynamics in its full glory. What we're going to analyze now is this system — think of it in the horizontal plane, like a skating rink. This is frame A, inertial; this is the corner of the frame. What I have now is two particle masses — basically a dumbbell — attached rigidly by a massless bar, and I scoot them across: it's rotating, hurtling across the rink, and I'm going to try to understand its behavior. In fact, I'm going to make it even more complicated: I'm going to attach two rockets — one rocket here and one rocket here — but the rockets are designed such that they always point north.
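Before going on with the dumbbell, here is a quick numerical sanity check of the puck result above — my own sketch, not from the lecture, with made-up numbers: conservation of h = m L² θ̇ about the fixed hole gives θ̇2 = (L1/L2)² θ̇1 and v2 = (L1/L2) v1, while the kinetic energy grows by (L1/L2)², the work done by the person pulling.

```python
# Puck-on-a-string check: the tension is radial, so there is no torque about
# the fixed hole and m*L1^2*w1 = m*L2^2*w2 (angular momentum conserved).

def pull_in(L1, w1, L2, m=1.0):
    """Return (w2, v2, KE1, KE2) after the cord shortens from L1 to L2."""
    w2 = (L1 / L2) ** 2 * w1        # theta2_dot = (L1/L2)^2 * theta1_dot
    v1, v2 = L1 * w1, L2 * w2       # tangential speeds: v2 = (L1/L2) * v1
    KE1, KE2 = 0.5 * m * v1**2, 0.5 * m * v2**2
    return w2, v2, KE1, KE2

w2, v2, KE1, KE2 = pull_in(L1=1.0, w1=2.0, L2=0.5)
# Halving the cord quadruples the angular speed: w2 = 8.0, and v2 = 4.0.
# Angular momentum is conserved: 1.0 * 1.0**2 * 2.0 == 1.0 * 0.5**2 * 8.0.
# Kinetic energy is NOT conserved: KE2/KE1 = 4 (supplied by the pull).
```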
They always point in the same horizontal direction — on the whiteboard, say, this way. Little gyros — or maybe they're magnets and there's a big magnet at that end, so they're only pulled in that direction. And the question I ask now is: how does this thing behave?

So let's examine this. First of all, how do I parameterize — or, to use the more technical term, what non-standard coordinates do I need to describe the configuration of this dumbbell? Let me make a proposal. First, let's label them: I'll call this guy P and this one Q. By the way, I didn't give you the masses — let's give them different masses, m1 and m2 — and I need to give you a length: I will state here that the length of the bar is 2r. So here's my proposal for a set of generalized coordinates: I could use x and y for particle P and x and y for particle Q. Is that reasonable? It's perfectly reasonable — but do I have a kinematic constraint, and what would it be? Right: they're connected by a rigid bar, so they can do whatever they want as long as the distance between the two of them remains constant, equal to 2r. By the way, if instead of a rigid bar I had a string connecting them, what would the constraint be? Less than or equal to 2r. So with a rigid bar you get an equality constraint; with a string, an inequality constraint. The former — and you can erase this from your memory the moment I say it — is called a holonomic kinematic constraint; the latter is nonholonomic, and there are lots of people who make their whole careers studying path planning with nonholonomic kinematic constraints, things like that, if you do a PhD. There are also scleronomic and rheonomic — time-independent and time-dependent — constraints; imagine the string changing length over time. That does show up, for example with temperature effects in strings, but don't worry about all that. The point is: if I went with four non-standard coordinates — x and y of this, x and y of that — I would have four coordinates and one kinematic constraint.

Is there a better set of non-standard coordinates? Anybody? Claudio? — "Are we assuming the masses are equal?" — No; although from the kinematics point of view it makes no difference. Go ahead. — "Maybe a coordinate system on the bar: if the masses were equal, a point in the middle of the bar — the center of mass of the system — and an angle facing P or Q; it doesn't matter, since they're on the same line." — OK. What Claudio is suggesting is a very, very nice suggestion: use the center of the bar as the x–y location of the dumbbell, and then the angle of the dumbbell. How many non-standard coordinates do I have now? Three. Do I need to consider a kinematic constraint? No — it's implicit in that choice; the way I defined the coordinates, the constraint is absorbed into them, so I don't have to spell it out. Net-net, I could have done it both ways — I'd have as many equations as unknowns either way — but by picking the minimum number of non-standard coordinates, I don't have to explicitly spell out the kinematic constraint. Got it? OK. Now, going forward, just to make our math easier, I'm going to assume the masses are the same — it doesn't make much of a difference; it just makes our lives easier later. Yes?
claudo if they had been different would you have placed them the point in the center of mass for reasons that are not clear yet if it had been different if the masses have been different you'll find out later I would have picked the center of mass which would not have been the geometric Center Center okay but you don't know why so I can't tell you why so I just did some slate of hand and I made the masses equal all right a lot of terms cancel out that's the only difference I could still have by the way I could have picked a point on that line anywhere there would still be reasonable generaliz coordinates there would still be three just the math is easier that's it okay so I don't have to pick the center of mass but I pick the center of mass a lot of terms cancel out the rest from here on a lot of everything we do is about get getting terms to cancel out it's as simple as that literally getting terms to cancel out there's a plus term there's a minus term they cancel out Central Mass L terms cancel out done that's the reasoning okay all right so what what what we'll do is we'll describe the location of this guy I look I'm doing this slowly because I want you guys totally understand the uh um right totally understand the uh reasoning behind kinematic constraints I'm going to call this point c r o That's when one vector now this angle of this thing is Theta and I'm defining um because this thing I've rotate a lot right uh I just you know it's a little confusing but let me just draw it for you like this if I had if if the way I had drawn it I hadn't rotated that dumbbell so much if I drawn it like this it'll make more sense so let me draw it here and then I'll redraw it there but if I had drawn the dumbbells like this and if I did this right then I would have said this is B1 and this is B2 like we usually do right and this is B and our called this angle Theta you understand but I rotated it a heck of a lot so B1 is going to look like this get it and this is B2 because 
I've rotated a heck of a lot just you know just and that whole angle is Theta get it so that's how we're going to do find this R OC which is some component this way plus some component this way and an angle that we know the configuration of this dumbbell at any point in time so now what we're going to do is solve this problem we're going to figure out how it behaves what it trajectory is going going to be over time we won't solve it we write the differential equations to solve it right and you'll see that they come out to be very beautiful and you'll see out of the primordial soup you'll recognize the shape say hey that's a moment of inertia we'll do that but we'll do it without using any moment of inertia Concepts okay so let's write it out so we know that uh let's calculate so the what's the first thing we need to do we're going to write f is equal ma for both particles so first we will do some free body diagrams then we'll figure out the accelerations of both particles and we'll write f is equal to Ma and we have differential equations we'll make sure we have as many equations as we have unknowns and we'll the do all right so from a free body diagram point of view I'm not going to do it right now but very simply there are each bar applies a force on the part on the particle I'll do it later on because I want to the kinematics first so I'm breaking my own rule I'll do the free body diagram later on okay let's just do the kinematics first all right so R of part the position of particle p is equal to r o Center plus since the length of the rod is 2 R I can just write it as did I do that right make sense because the length of the particle is R little r right and B1 B2 yes okay now I want to figure out the accelerations of the particles I can actually just use the ultra super cool magic formula here directly I need you need have done that but we can do it brute force and just figure it out so um actually let's just use the um the uh super cool formula and what we 
will get is: the acceleration of particle P is equal to... can you tell me, just use the ultra super cool formula and tell me, what's the acceleration of particle P? a_C, yeah. Look at all the terms. Is there a B-frame acceleration of P? That's zero. What's the next term, is there a Coriolis term? No, right, because the points P and Q are rigid, they're not moving with respect to frame B; they're attached to frame B. Is there a centripetal term? Yes, and that's going to be what? Well, there's the Euler term, so there'll be an Euler term as well, right. So let's do the Euler term first, just because you named it. Right, it's A-alpha-B cross, what, r_PQ... sorry, r_CP, and that's equal to minus r B1, right. And what about this term? Okay, that's the result of the ultra super cool magic formula, right. And the acceleration of point Q, I'll write it a little more neatly: the acceleration of point C, plus A-omega-B cross (A-omega-B cross r B1), plus A-alpha-B cross r B1. So the only difference between these two is: where I had an r B1 for the upper point, I have a minus r B1 for the lower point. Got it? So we figured out the accelerations of these particles; it's very cool. In principle now we should be able to multiply them by m and equate them to the forces, do a free body diagram, and we should be done, right? Nothing really special here. Any questions about this, anybody? Okay, so I'll just write it out now. I won't do it because we have only about two minutes left, but let me do this side, this whiteboard, sorry. Now let's do the free body diagram on each particle. So there are two particles, P and Q, and I'm applying a force from the rockets, right. So let's examine particle Q first, free body diagram. Oops. What other force is that particle Q facing? Tension, right, it's facing a tension. Okay, why does the tension point along the axis? Is it really tension? Tension comes from strings; this is a massless rod. Why does that force point along the axis? Yep. Yes. So that's fair, because you're using kind of the
acceleration and saying, well, it must only be a tension. But if you ignore that, right, can a stick... if I have something at the end of a stick, does it only have to be a force inwards? Right, if I take a stick and I have a little ball at the end of it, can I apply a force tangentially with that stick? I can. I didn't assume it was a string, right? You know what I mean? You're getting it, that's right. So there are a couple of ways to interpret this, okay: it's a massless rod and I'm invoking the strong form of Newton's third law, right. And what we'll do is next week we'll pick up on this, and essentially what I'll do is I'll do the free body diagram, I'll take components and write F equal to ma, right. I'll write f is equal to ma in two directions here, and I'll write f is equal to ma in two directions for point P. I'll get four equations; one of those equations will turn out to be the same, so we get three equations, three unknowns, and we'll solve it. And what we'll show, essentially, is that the acceleration of the center of mass is related to the total force, and the angular acceleration of the whole rigid body is related to the torque, and we'll show it, okay? And then we'll generalize it and define angular acceleration and moment of inertia in a more general sense. Okay, so let's stop here because we are over time
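The "ultra super cool formula" the lecture leans on, the acceleration of a body-fixed point P, a_P = a_C + alpha x r_CP + omega x (omega x r_CP), is easy to sanity-check numerically. Here is a minimal self-contained sketch (my own illustration, not the lecturer's code; the helper names and the numbers are assumptions):

```python
# Sketch (illustrative): acceleration of a point P fixed on a rigid body B,
#   a_P = a_C + alpha x r_CP + omega x (omega x r_CP)
# No B-frame acceleration or Coriolis term, because P is rigidly attached.

def cross(u, v):
    """3-D cross product of two 3-tuples."""
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def add(u, v):
    return tuple(a + b for a, b in zip(u, v))

def accel_of_point(a_C, omega, alpha, r_CP):
    """Acceleration of body-fixed point P from the center's acceleration,
    the body's angular velocity/acceleration, and the vector from C to P."""
    euler = cross(alpha, r_CP)                       # Euler term
    centripetal = cross(omega, cross(omega, r_CP))   # centripetal term
    return add(a_C, add(euler, centripetal))

# Planar example: omega = 2 k, alpha = 3 k, P one unit along x from C.
a_P = accel_of_point((0.0, 0.0, 0.0), (0, 0, 2.0), (0, 0, 3.0), (1.0, 0, 0))
```

With these numbers the centripetal term points back toward C and the Euler term is tangential, just like the two terms written for the dumbbell's particles P and Q.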
MIT 2.003J Dynamics and Control I, Fall 2007
Lecture 10: Three cases; the rolling disc problem
the following content is provided under a creative commons license. your support will help mit opencourseware continue to offer high quality educational resources for free. to make a donation or to view additional materials from hundreds of mit courses, visit mit opencourseware at ocw.mit.edu. okay, we are exactly one, two, three more lectures before the exam; the exam is next wednesday, right. today, here's what we're gonna do. remember the derivation? and most of you slept through it. does anyone remember the derivation? you know, we derived a lot of stuff. anyway, so we'll pick up there; it will lead to certain simplifications, we will talk about that, okay, and then we're going to do problems, and in particular we're going to do rolling. now rolling is usually introduced in dynamics as a case all of its own, and i hate that, right, because to me it's just another example of dynamics. okay, and, i think i should take my cell phone off, it's another case of dynamics, it's another case of a kinematic constraint. i don't like to treat it as any special behavior; rolling is just basic dynamics. everything we've done, i want you to see it as a powerful technique that you can apply to any situation, and rolling's just a special case. i hate this kind of special treatment of rolling, okay. all right, so today we're going to pick up with the ugly equation we left off with, simplify it, and do problems. wednesday, unfortunately, i'll be in brazil. yep, sao paulo, the birthplace of Ayrton Senna, the Formula One driver. remember him? i'm going to go to his gravesite. that's kind of my... that's not why i'm going to brazil; i'm going to brazil for work, but i'm going to go to his memorial because i'm a big formula 1 fan. anyway, so i won't be here, but we're actually beautifully set up to do something that i'd announced. remember i said right in the beginning that the first class i did by video? this is the other one that i couldn't get out of
this trip uh so we've arranged but we're in perfectly in in time um if i had to do a lot of content on wednesday i'd have requested one of my colleagues one of my faculty colleagues to teach but as it turns out this is just right for problems so what we're going to do on wednesday i'll be doing a bunch of problems today and then rj is going to do a bunch of other problems and he's going to introduce the concept of energy in rigid bodies just like in in regular point particle masses you have kinetic energy and potential energy in rigid bodies you get kinetic energy and potential energy potential energy remains the same in rigid bodies you just take the center of mass take its height that's the potential energy okay the only thing that changes kinetic energy and kinetic energy in i'm telling you right now has two terms in rigid bodies half mv squared where v is the velocity of the center of mass plus what's the other term omega square only works for certain reference points about which you take the i that will get you today okay so he's going to do that and then just do a bunch more problems and then i will come back on i'll be back friday and uh on monday i'll do more problems and your exams on wednesday are you folks interested would you like me aj and me to do an informal review on saturday yes let's nail it right now 3 p.m 3 p.m 3 p.m here's what aj and i'm going to do this is not formal all right aj and i will happen to be here at 3 p.m i'm sure the rules were breaking i have no idea i don't care we'll be here at 3 p.m just shooting the breeze you know just kind of chatting and if you guys show up we'll chat with you for an hour okay three to four wednesday a saturday this room aj can you talk to rachel and make sure that we can nail the room okay cool so that's the plan and the next wednesday is the exam exam is closed book it'll be held here it'll be 80 80 minutes right and uh you will be permitted a crib sheet and it is one sheet both sides and i assure you 
you don't need that much space. okay, yes, you need to, huh, actually, yeah, you can keep them. we used to take them back, but keep them, right, it doesn't matter; you can make a xerox copy, worst case. okay, so let's start. here's where we left off in the last class: we went through a horrible derivation. no, it wasn't horrible, it was quite... it was fun, i found it fun, but many of you slept through it. it's like a red-eye, like the type i'll be taking tonight. and what we ended up with, let's take a step back. remember that we had said earlier that if you have an ensemble of points, and this ensemble of points we call e, and this is point j and this is point i, and each point has a certain force on it, f_i. remember that we said that if you treat the whole thing as rigid, you can show that the torque on the ensemble about point q, some point q, this is point o, point q, some other point, right, a truck driving down the road, a satellite, whatever. we said that torque is equal to d by dt of H, the angular momentum of ensemble e about point q, this is a vector, plus v_Q cross P, the total linear momentum of the ensemble. remember that formula, right? and we said that's all great, but listen, this guy, we need to figure out what this guy is in terms of the particles, in terms of the locations of the particles, in terms of how fast they're moving, and stuff like that. that's kind of where we started the ugly derivation, right? and where the ugly derivation left off was here. what we derived, just confirm this, okay, was that H about q is r_QC cross a big P, remember, yeah, there's a big P, that means it's the momentum of the whole ensemble, obviously, plus, don't worry, there's a pattern here, we'll simplify stuff, the sum, i equal to 1 to n, of m_i r_Ci cross (a-omega-b cross r_Ci), where c is the center of mass of the ensemble. so we said one of
these points of all these points one of these points is the center of mass and we refer to that as the center of mass of ensemble e right sometimes we just call it c okay plus uh this is what we derived in all its glory in the class okay so we're going to pick up here now do people remember what this term is all right it's the moment of inertia right now i'll you've done it in the in recitations you've done it in previous classes i'll only touch on it briefly here but i is essentially for a circle right a full circle it's half if the mass of the circle is m and the radius of the circle is r what is it half m r squared what is it for a hoop huh what is the i for a hoop m r squared right for a rod you can do it right it's 1 12 m l squared so okay so that's i we'll come back to that in a minute but this is where we left it off and in the recitations i asked i requested the registration instructors to make sure you understood what i was for different shapes right here we treated the points as of the individual point particle masses but if you think of them as a continuum right it's just a kind of a smeared continuum of mass then you get the i that you're more familiar with which is half mr squared for a circle for example for a disc okay anyway this is where we left off now this is the angular momentum really what we're trying to do is we've calculated this so we're going to try and reconstruct this equation right we're trying to equate torque to the angular acceleration so what we can do right now is the torque on ensemble e about point q is equal to what come on the derivative that's it yeah so i'm just going to write you that i'm going to take the derivative a d by dt of this term plus d by dt a right of this term i'm going to even bother to write it plus what is the derivative of this guy i alpha i are we missing any more terms right surely we're missing some terms which which term have you missed so which term the pesky term that's right so now this you have to 
admit, is a bit of a mess, right? before you took this class, before you started 2.003, did you even know the existence of this and this and this? did you? seriously, you didn't, right? but we went out and we did the math with a lot of honesty and clarity, and that's what we end up with. where's the i about? oh, that's right, it's about the center of mass, okay. so what do we need to do to the terms? what would any self-respecting engineer do? let's brainstorm a little bit. go ahead, anyone. figure out the ones that are zero, yeah. here's what we're gonna do. it turns out that if you take the, you know, the moment of inertia, and if you try and take the torque about some random point, if you're not careful you get into a lot of trouble. all right, so what we do now is we find the points q about which many of these terms vanish, you understand? and it turns out that there are many complex conditions, for example if the point q is any point which is accelerating towards the center of mass, and this happens and that happens, yeah, you know, some terms cancel. but i'll give you the three most important things to keep in mind. if you walk away from this class and you remember these three things, a, you'll know that these three points are important because you're ignoring those ugly terms and they only vanish for these three points, and b, you will have pretty much the three points that are important in formulating dynamic equations. all right, so there are three cases where this equation simplifies. well, there are actually many more; these are the only three cases that are important, okay. and those three cases are, and i'll write the corresponding equation and then we'll solve some problems using this. one, anybody? i'll give you a hint, last time, right, yeah, q equal to c, okay. and you can see when q is equal to c, this term goes to zero because r_QC is zero, this term will go to zero because r_QC is
equal to zero this term will survive and this term will go to zero because this velocity is parallel to this momentum got it so v is equal to c sorry q is equal to c and the corresponding equation is are beautiful elegant the q of course is ce so might as well just write it as ce okay the second simplification anybody it's actually a tougher one and i hinted at it last time anyone parallel velocity actually is good enough to cancel this term for a single particle but the other two terms not necessarily okay so you're you're right for us we have you see this equation kind of the little t the tau little h little v little p version of it applied to particles there if they're parallel you were good but this is a more complicated beast here so you can't slay this dragon with just that arrow in your quiver it's a tough one it's a it's not tough it's an easy one i'll tell you if the point q is the instantaneous center of rotation remember that we'll go through it again instantaneous center of rotation if the point q is the instantaneous center of rotation of a rigid body then so if q is equal to the instantaneous center of rotation we'll go into this in a little more detail in a second then and i'm going to write this in a subtle you need to be very careful when the way i'm writing it tau that's our torque of the ensemble about i'll just write icr is equal to i of the ensemble about the icr get this okay times a alpha b okay that's the second point if i do this okay let's say i haven't take my arm here right i have the my uh this is the forearm what is this called the hind arm i don't know anyway this section of my arm on the forearm right if my arm is doing this right the forearm the the this part of the arm is swinging back and forth and now i rotate my forearm as well right first of all what is the instantaneous center rotation of this piece shoulder what is the instantaneous center of my forearm trick question huh it's not the shoulder is it the elbow what do you 
think yes right the instantaneous center rotation is not the first hinge you can find the instantaneous center of rotation actually is kind of more complicated what you really need to do is attach a large transparency to my arm to my forearm and find the point in the transparency that it's instantaneously not rotating which is not actually this point because this guy this point is actually moving see get it for this rigid body this point is still moving right so actually there's no nice cute instantaneous center rotation for this piece you understand so if you wanted to use instantaneous center rotation it works with this guy but not that well for this guy for this guy you just want to use the center of mass all right the problem with that is when you use the center of mass the hinge force you know the reaction force that internal force of the hinge you can't make it you know if you if you could have taken the for the torques about this point those forces would have vanished those unknown forces right it would be more convenient you understand right so there you can't make them you know you can't conveniently make them vanish you need to write them out as unknowns spell them out and figure them out right that's the only thing it's just more inconvenient but this is not an instantaneous center of rotation a lot of people think oh i can just take the talk about any hinge no cannot do that you can only take talks about instantaneous central rotation right what is the instantaneous center of rotation of this door anybody yeah so that's okay so you can take torque which is force times this distance equal to i which is the moment of inertia of this door about that axis which is the moment of inertia about its center of mass plus using the parallel axis theorem what m d squared right so i alpha that you can do but if it were a double door right and both were rotating and the double the double door part was rotating you couldn't do to t fly alpha about this point if it 
just if it's moving, because instantaneously it is not the center of rotation. if this part were fixed and the double door were flapping back and forth, right, or this part instantaneously has zero velocity, then that would be the instantaneous center of rotation for the second wing of the door. get it? so it's very important that you think about it that way. if i give you a sheet, if i showed you a transparency, let's say that there's a transparency, i'm going to erase this, let's say that i give you two points on a transparency, right, and the two points, one is point a and one is point b, and there's a transparency i'm holding up, these are points on the transparency. then i move the transparency: point a moves to here and point b moves to here. how would you find the instantaneous center of rotation? that's right. first of all, this is a large motion, but if it were a small motion, you would... that's the instantaneous center of rotation, get it? you understand, so you get the velocity at every point on the transparency and take the normals, and where they meet, that's the instantaneous center of rotation. i'm saying this to you because this is actually a powerful formula, but it's also a dangerous one, because you'll be tempted to think that something is the icr when it isn't. are you good with this? well, don't worry, we'll use an example now. the center of mass you have no control over, it's where it is; the icr you have no control over, it's where it is. but let us say that you insisted on taking torques about some other point. you might, you know, the joke i usually crack is that let's say you're from ohio and you have to take torques about some point in ohio, right? it may not be the center of mass, it may not be the instantaneous center of rotation, but grandmom is in ohio and by god you want to take moments about ohio, right? there is a formula for that, and that formula is the third one, and that is: if q is in ohio, in other words if it's a general
point, then the formula is this: the torque on the ensemble about q is equal to, very important, i of the ensemble about the center of mass, times a-alpha-b, and then all the three ugly terms reduce to this beautiful term: r from q, which is grandmom's home in ohio, to the center of mass, cross, the mass of the ensemble times the acceleration of the center of mass. okay, so these are the three simplifications, okay. and there's kind of an intuitive way to think about this: if you're not taking it about the center of mass, then there's kind of an inertia force, you know, and r cross this inertia force is kind of an inertial torque that you need to put in. it's just kind of a correction term. so there's a way to think about it intuitively, but the intuition might let you down if you're not careful, okay. keep in mind, though: here i took the moment of inertia about the center of mass; here i took it about the icr, okay. these are the three ways you can simplify this formula. now why, you might ask, and go ahead and ask, why do you need three ways? go ahead and ask. thank you. the reason you need three ways is because it's all about convenience. they're all, in terms of information, the same; there's no difference at all. you can solve every problem in any of the three ways. in fact, we'll take one particular problem and skin the cat three ways. it's a terrible expression, i know, because i own a cat... okay, anyway, i mean it's my wife's cat, but i still won't skin it. the three ways you can do it, right. the reason is, when you take torques you're taking cross products, and if you find a convenient point you can make certain unknown forces that you don't care to find out go to zero, right? you're going to get the same number of equations, but if you can eliminate certain terms, you have fewer simultaneous equations to solve, you get it? it's just a convenience thing, that's all. does everyone have an intuitive feel for what i'm saying? please raise your hands, otherwise i'll say it again. do
people have an intuitive feel of what i'm saying? okay, let's solve a problem and then we'll address this. okay, so the first problem we're going to solve, and we're going to solve it three ways, is rolling, right? as i said, in a lot of classes that's a subject on its own; in our class it is but an example. rolling. so the rolling problem consists of a circular disc in 2d rolling down a ramp. actually, i didn't mean it has to be rolling down a ramp, it could be rolling under any circumstances, but we look at the situation where it's rolling down a ramp. let me make sure i use the same notation that i generally use. okay, there you go. so let's assume that there is a ramp, the angle of the ramp is phi, and on the ramp, for what it's worth, is a disc. a hockey puck, how's that? it's pretty good, yeah. did you know there's a contest for the most perfect circle you can draw? and there's this one professor dude who draws incredible circles, and if you go to youtube and do a search for circle drawing, you should see this guy. you know, he kind of loosens his arm up, and i think he's like double-jointed or something, and he just draws a circle, it's perfect each time. not a contest i would win or participate in, but anyway. okay, this is the circle. now, first of all, we're going to assume perfect rolling. anyone remember what perfect rolling is? no slipping. what does that mean? that's right: it's like you have a string wound around this, right, so if it goes down, the string is going to unwind, so the angle is related to the distance traveled by the radius, okay? so we'll do that right now. okay, so what we'll do is, let's assume, i'm going to do it in a funny way, i'm just going to assume that theta positive is this way, all right? i do it just to make a point, which is, i'm going to assume it rolls uphill, which clearly it doesn't, right, it doesn't roll uphill, but i'm going to assume it does, okay? when i solve it, it'll come out that actually
rolls downhill but the reason is i just want theta to point in the normal positive direction if it comes out negative so be it all right now first of all if i want to solve this problem and there's gravity it's got a mass right i want to start solving this problem it's a rigid body in two-dimensional space how many degrees of freedom does it have overall whether without the constraints right if you remove the constraints the rigid body has three degrees of freedom it can translate x translate y and rotate what what are the constraints on this guy very close you're very close i think what you mean is it can only the tip the bottom must always contact this surface is what you're saying right so one one constraint is it can't move this way right but it's almost implicit so we want you know you'll see when i write it out that will be taken care of the other constraint is the rolling constraint which is that how much ever it rotates it must be related to the distance it travels right so really if you take those two constraints and three degrees of freedom the thing has only one degree of freedom remaining right okay so what should i do now to solve this guy what should i do we're going to because i want to start getting into the rhythm of solving problems now what should i do what's the first step i need to take hmm well the first step kind of the the first step i always say is draw the free body diagram but there's kind of a step zero before everything else which is finish the labeling so let's label this right so the label is i'm gonna call this distance l and i'm to assume the radius of this thing is r okay so next step draw the free body diagram what are the forces on this guy what gravity so that's mg what else normal force which direction would it be where's the normal force being applied precisely on the object where at the bottom right so when you first draw the free body diagram be very careful to label where the force is being applied because it might affect 
the torque right let's call that normal force anything else friction could it slip without friction oh it would slip could it roll without friction is what i meant to say no okay all right okay and theta is the angle okay so your unknowns are going to be theta f and n you'll get three equations of motion you solve them and you're solved you solve the problem right all i care about is i want to know what the theta is going to be as a function of time so really only i don't really care about f and n all i care about is figure out for me theta equation for theta with the f and n eliminated that's what i care about right but i could you know in some cases i might care about f i attended a talk by a guy from michelin the tire company and their entire lives is spent analyzing f and n and the pressure on the tire and it's very interesting right um anyway but we don't care about that but the problem is that in solving this and writing out the equations we'll end up having to care about f and n all right so let's talk about it so how do we solve it how do we solve formulate the equations of motion for the sky anybody say it again okay so here's what we need to do what are the three equations of motion for a rigid body f is equal to m a in the x direction in one direction and physically i mean the y direction whatever those two directions are and what's the third one torque right okay so which direction do you think we should be writing a physical m a in anybody what's the convenient direction okay parallel to this all right so here's what we're going to do so f is equal to m a so let's assume that this is the a1 direction and this is the a2 direction right so in the a1 direction what is the net force in the x direction by the way what is this angle anyone is it fee yeah right so in the a1 direction that is in this direction what is the total force on this rigid body come on you know just say it help me here minus is equal to what what was that m what a of what a of this 
point a of this point center of mass so this is the center of mass so what we care about is m oh i'm sorry that should be a figure sorry right m a of the center of mass and you know that this is only going to accelerate the center of mass is only going to accelerate in this direction so i'm just going to call it a i'll leave out the frames and all that for the time being okay if you want i can write this as a center of mass but it's you know i'm taking the because i'm only taking components right it's a scalar so it's the a center of mass right a center okay in the a2 direction what do i get look it's very simple stuff you know this i just want you to go through it with me i know it's a little boring but we need to do it so the a2 direction is equal to zero because it's not accelerating in the a2 direction okay now i have three unknowns n f and the third unknown is is what theta right so i need to somehow get theta into this right we'll bring it in so the way we can get theta the kinematic constraint is this distance that is rolled down l right so the way i can say it is a i'm just going to write it and i want you to understand it by including it right now does that make sense i want everyone to stare at this kinematic constraint ac is equal to minus r theta double dot where do i get that l is equal to some constants some constant minus r theta right l dot is equal to minus r theta dot l double dot is equal to minus r theta double dot right is everyone are you good with this okay and so the kinematic constraint is ac is equal to minus r theta double dot okay do i have enough to solve the problem anybody because the way i've written the equations i actually have four unknowns the unknowns are this guy this guy this guy and this guy right just the way i wrote the equations so i need a fourth equation where does the fourth equation come from torque okay so what is the total torque on this guy first of all let's pick a point now which point should we take the torque 
about? okay, now if you take the torque about the center of mass, the problem is this f guy, the friction force, which is an unknown, you know, it's an ugly force, we don't care about it, and it's going to show up explicitly, right? wouldn't it have been more convenient to take the torque about this point, because both f and n would vanish? right, but can we? is it the center of mass? in fact it's the icr. all right, but let's not do that right now. we know you can do it with the center of mass; the center of mass is always kosher. so let's do it about the center of mass, ugly as it might be. so let's do that now, right. so what is the torque about the center of mass? anyone? sorry, say it again. minus rf, right, right-hand rule. so it is r cross f, so this cross this is going to be negative, so it's going to be minus r f, is equal to what? i alpha. and what is alpha? theta double dot. so now we have four equations, four unknowns, and we can solve the problem. i wrote it out this way just to... look, it's pretty obvious what we're doing, right, but i wrote it out this way just to show you that the unknowns might balloon depending on how you write the kinematic constraints; you might add one or two, but the kinematic constraints will compensate for them, you get it, right? but you have four equations, four unknowns, and it's a very easy system to solve, which is the objective of the first snap quiz. do it. so although it would have been more convenient to take moments about this point, sorry, this is the icr, we chose to take moments about c. it just makes the equations more complicated, but the answer is going to be the same. so do it and let's get the answer. this is approach one, by the way, where we take the moments about the center of mass, where the reference point is the center of mass. i want you to solve for theta double dot. all right, give me theta double dot. if anyone
is struggling: let's get rid of a_C and f. replace that a_C with this guy, okay, so all that's left in that term is f, and replace that f with this guy, and you're done, right? one more minute. good, how many of you are done? okay, i'll give you another 30 seconds. if you're not done, you're probably just struggling with the equations in terms of manipulating them; nothing profound. here's what's going to happen. the solution is very simple: insert from equation 4 into 1 and insert from equation 3 into 1, and what you will get is m g sine phi minus... all right, did i miss something? oh, it's a plus here, okay, if you insist. okay, so you bring the theta double dot to one side and you'll get... what's the final answer? what's the answer? well, okay, i was trying to do it fast, let me have a look. okay, one person speak. what do you want to change here? everyone agree with that? yeah, actually that is correct, i had that negative in the wrong place, i was doing it in my head. all right, so theta double dot will come out to be, what, final answer? negative m g sine phi times r, over i plus m r squared. okay, done, it's as simple as that. question, you raised your arm? yeah, so if you want to, absolutely: i is m r squared, right? i'm sorry, half m r squared. so then you can simplify the i, it will become half m r squared, and that denominator will become three halves m r squared, and you can come up with the final answer, okay? the funny thing is, i don't care about the i's exactly; this formula still holds. for example, if we've got a hoop, right, this i would be m r squared. in fact, let me ask you a question based on this: if i gave you two circular entities of the same mass, one is a hoop and one is a solid disc, but the total mass is the same, which will accelerate faster? which one? the solid disc will accelerate faster, right? makes sense, because the inertia term will be less for a solid disc compared to a hoop. right, to your point, why is that, intuitively? yeah, you're considering more mass at the edge, right, in a hoop, so there's more
resistance to rotation. Get it? The mr² there — there's an extra (1/2)mr² compared to the disc. Does everyone understand what I just said? OK. So we just solved this problem soup to nuts, using approach one, where we used the center of mass, and that was the final answer. That was approach one — and it's vaguely satisfying. Now let's do approach two quickly. We said it earlier, we hinted at it: what is the ICR of the rolling disc? The point of contact, right. Let me ask you a question: if that wedge were sitting in a truck moving 100 miles an hour north at constant velocity, would the point P be an ICR? What do you think? Very good. So here's the deal — very important, there's a lesson in there. You're exactly right: that is the ICR, but I put a twist into it. If the truck is heading north at constant velocity, or east to west, it still works the same. The point is, an ICR is the instantaneous center of rotation of a rigid body with respect to an inertial frame. So you just attach the inertial frame to the truck, and it would still be the ICR. However, if you're trying to find the ICR of some other rigid body in the same system, you'd better use that same truck. Get it? You pick one frame, and you can make that the ICR — but if in that truck there's another tiny truck heading some other direction, with the same rolling situation, it won't be the ICR. Do people understand? You get only one choice of inertial frame, and it's an ICR in that frame. Everyone cool with this? OK. Now — if it were not perfect rolling, if it were slipping, would that be the ICR? Yes, no, or maybe? If the disc were not like a gear but were slipping a little and rolling — slipping, rolling, skidding — would the point of contact be the ICR? No, it would not. Where would the ICR be — lower, under the floor, or higher? It would be lower.
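The disc-versus-hoop claim can be spot-checked numerically. This is a sketch, not from the lecture: it assumes the standard rolling-without-slipping result a = g·sin φ / (1 + I_c/(mr²)), which is exactly what the equations on the board reduce to; the specific numbers are made up for illustration.

```python
import math

def rolling_accel(g, phi, inertia_ratio):
    """Acceleration of a round object rolling without slipping down an
    incline of angle phi, where inertia_ratio = I_c / (m r^2)."""
    return g * math.sin(phi) / (1 + inertia_ratio)

g, phi = 9.81, math.radians(30)
a_disc = rolling_accel(g, phi, 0.5)   # solid disc: I_c = (1/2) m r^2
a_hoop = rolling_accel(g, phi, 1.0)   # hoop:       I_c = m r^2

print(a_disc, a_hoop)   # disc ≈ 3.27 m/s², hoop ≈ 2.45 m/s² — the disc wins
```

Note that neither mass nor radius appears: only the dimensionless ratio I_c/(mr²) matters, which is the lecturer's point that the formula "still holds" whatever the I is.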
All right — now imagine you're on a motorbike. Let me give you another one: I'm on a motorbike, I floor the throttle, and the wheels really start spinning in place. Where's the ICR? The center of mass. Get it? So with slippage, the ICR can move up or it can move down. If it's slipping in this situation — kind of not rolling but slipping and rolling — it moves down; if I'm revving up an engine and spinning my wheels, it moves up. So the ICR is not always the point of contact — but for perfect rolling, it is. Everyone cool on this? Yes. OK, so how do we solve this problem with the ICR? What do we do? First of all, would the ICR be more convenient in this situation, yes or no? Yes — and the reason is: when we took the torque equation about the center, we got this silly term rF, and F is an unknown. If we had taken the torque about the ICR, F and N — both unknowns — would have gone away. We can still write equation 1, so let's do it. Equation 1, which is F = ma in the a1 direction — same as before. Equation 2, F = ma in the a2 direction — same. OK, what is the torque equation about the ICR? Somebody, anybody — what's the net torque on this rigid body about the instantaneous center of rotation? Quick, quick: r cross mg. That's right. And is that positive or negative? Yeah — so the torque equation comes out to be −r·mg·sin φ = I·θ̈. But where do we take the I about? That's right: the I of that rigid body about the ICR, times the angular acceleration of B in A, which is θ̈. Now let's expand this: what is the moment of inertia of that rigid body about point P — not about its center of mass? Quick. I_c plus mr², right. So let's write that equation down again: −r·mg·sin φ = (3/2)mr²·θ̈, or the other way to
write it — I'll write it in the more general way — is (I_center-of-mass, which was just called I earlier, plus mr²) times θ̈. And what's the answer for θ̈? Quick. The same, right — it's the same, thank God. I got different answers last year, and it was very embarrassing: all my derivations, my whole life, flashed before my eyes; I questioned everything. Now — did I use equation 1 in doing this? I could write it, but I didn't need to use it. Did I use equation 2? No — I used equation 3. Did I even use the kinematic constraint explicitly? Kind of not. So just using this one equation I got the answer. It's just a matter of convenience — it's a shortcut, that's all. And the judgment about the point Q: all it helps you with is solving the simultaneous equations more quickly. That's what I was saying earlier when I said you'd come to understand why certain points are more effective than others for writing equations. This was just a trick. It's like a knot: you pull the string at the right place and it just unties. That was it — nothing profound. When in doubt, you can always take the center of mass; you'll have a few more terms, but you can solve it. Get it? Is everyone copacetic with this? Yes? OK. What happened here was that, of all the knowns and unknowns, you got one equation with one unknown, and you solved it. Now, there were three other equations and three other unknowns, but you didn't care about them — those three equations had three unknowns embedded in there. If you had to calculate the normal force, though, you would have to go back and solve them anyway. OK — a lot of dynamics is really all about convenience. That's it. Once you know F = ma, you can always solve the heck out of anything by brute force, and a lot of dynamics is just simplifying the math. Even the
concept of a moment of inertia is just a way to simplify the math for a lot of particles. Is everyone good with this? OK — now let's take on a bigger challenge, which is your grandmom. So here's how grandmom enters the equation. You have the same situation: a ball rolling downhill — I'm sorry, not a ball, a disc. Well, not bad — I'm having a good day today. All right. Now, this is the center — check, we did that: the first approach, solving the equations using Q equal to the center of mass, and we got an answer. This is the ICR, point P — we did that, and we got the same answer, thankfully. Now let's pick some other point Q. I could pick it anywhere — I could take this point and say τ = I_c·α plus the correction term, and it would be perfectly kosher. But you wouldn't normally do that — why would you? It just complicates your equations; there's no convenience to it. But it's perfectly reasonable. So let's say that, for sentimental reasons — because you have this aged relative who lives at point Q — you want to take torques about point Q. We'll just do it; let's have a go at it. And let's say grandmom lives in not too perverse a place: a distance r below the surface, just to make our math a little easier. And let's see if we get the same answer. What do you think — will we get the same answer? Yeah, of course — you know the script. OK, so let's write the whole thing again. Does equation 1 change? Same. Equation 2? The same. So what happens to equation 3? We're taking torques about — we're going to call this point Q. Help me out here, folks. Sorry, I can't hear you well. OK, let's talk about it: you take all these forces and take torques about this point. Is N going to contribute? No, because r cross N —
r and N are parallel, so it won't contribute. What will the contribution of F be? Plus rF. OK, any other torques on this guy? Gravity — at an arm of 2r. Let me write that neatly, I apologize: minus 2r times mg sin φ. So that, plus rF, is the net torque on this thing. What is I for this guy — where do we take I about? Look at our formula: which I should we use? The center of mass — that's just the same I_c that we usually use — and then we need this correction term. So what's that guy? Let me put it up there so you can stare at it. What's our correction term going to be? Think about it. Yeah — exactly, it's exactly right. Say it again: 2r. Look, it's very simple: we have a = −rθ̈ here, so it's going to be — that's right — 2r, that vector, cross m·a. Is everyone good with this? I just took r cross m·a: a is −rθ̈ — we've already calculated it — and r is 2r along a2, so take that across m·a, and mind the sign. OK, and the fourth equation is the same as before. And now we come to the last part of your snap quiz: solve it, and for God's sake, please give me the same answer. So — you know the answer. This should be a plus, actually — yeah, thank you, that should be a plus. That makes it easier. Let me step back and see if I can see it from here. OK — this is approach number three we're doing. How many of you have got it? OK, for those who haven't, let me give you a very simple explanation of why this is right — why you get the same answer. Look at equation 3. Equation 1 is the same as on the top right board, equation 2 is the same, equation 4 is the same. Look at equation 3:
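The point-Q equation just written — torque about grandmom's point equals I_c·θ̈ plus the r_QC × m·a correction term — can be checked numerically against the center-of-mass solution. A sketch with made-up numbers (x down the slope, y out of the slope, z = x × y); it assumes the rolling solution a = g sin φ/(1 + I_c/(mr²)), f = I_c·a/r² from approach one.

```python
import math

m, r, g, phi = 1.0, 0.5, 9.81, math.radians(25)
I_c = 0.5 * m * r**2                             # solid disc about its center

# Approach 1 (torques about the center of mass) gives:
a = g * math.sin(phi) / (1 + I_c / (m * r**2))   # center accel, down-slope
f = I_c * a / r**2                               # friction force, up-slope
alpha = -a / r                                   # angular accel about +z (a = -r*thetadd)
N = m * g * math.cos(phi)                        # normal force

def cross_z(rx, ry, fx, fy):                     # z-component of r x F
    return rx * fy - ry * fx

# Q sits a distance r below the surface, so relative to Q the contact point
# is at (0, r) and the disc center at (0, 2r).
tau_q = (cross_z(0, 2*r, m*g*math.sin(phi), -m*g*math.cos(phi))  # gravity: -2r mg sin(phi)
         + cross_z(0, r, 0.0, N)                                 # normal: zero, r parallel to N
         + cross_z(0, r, -f, 0.0))                               # friction: +r f
rhs = I_c * alpha + cross_z(0, 2*r, m*a, 0.0)    # I_c*thetadd + (r_qc x m a_c)_z
print(tau_q, rhs)                                # the two sides agree
```

The gravity term reproduces the board's −2r·mg sin φ, the friction term the +rF, and the correction term the 2r·a2 × m·a_c that grandmom's equation needs.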
is it the same as equation 3 up there? It's not — they're different. But if you stare at it, you will realize that the equation 3 we have here is actually a restatement of a couple of those equations — namely equation 3 and equation 1 up there. If you take equation 1, multiply it by 2r, and subtract it from equation 3, you'll see this is in fact the same as minus 2r times this, plus this. Get it? So if you shuffle equations, nothing changes — you get the same answer, and you can see why. And you can therefore understand that this r cross m·a term acted kind of like a parallel-axis theorem, at some level. Kind of — it's not really, because it's not really a parallel axis, but it has the same effect. So essentially this set of equations is equivalent to that set of equations, and when you solve it, as before, you're going to get the same answer. Why is the last term positive? OK, so where did the last term come from? It was this term. Let's write it out. r_QC — let me write it in full vector form — is 2r·a1 — sorry, a2; that's what I had wrong — 2r·a2, cross m·a_c. What is a_c? It is minus rθ̈ times a1. So it's 2r·a2 cross m·(−rθ̈·a1): a2 cross a1 is −a3, and there's a negative there, so it becomes positive. Got it? So that's the origin of this guy. I was trying to do it in my head and I put a negative in there, but RJ and Sam caught me — it's actually a positive. And once you put it all in, the answer comes out the same. So those are the three ways in which you use torque and acceleration. OK, now it's very interesting — this was a system with one degree of freedom, really. If you wrote the equations with the most elegant approach, which was this one, you got one equation, one unknown, and you were done. Brute force is the most inelegant approach — but that's not the wrong approach; in fact, in many ways that is the most
elegant approach, because it's the same technique for every problem: you get four equations, four unknowns. And the reason you get four equations and four unknowns is that the system has three degrees of freedom plus a kinematic constraint. In fact, if you'd written it even more brute-force, it could be five equations, five unknowns, because you could even have said: I will state that the acceleration perpendicular to the slope is an unknown, and then set it equal to zero — one more equation, one more unknown. Get it? So in its most general sense you might get a lot of equations, but you'll always get as many equations as there are unknowns. The smarter you are about picking the reference point to take torques about, the fewer unknowns you have to deal with in your equations to get to your solution. But if you want to know all the hidden forces — all the reaction forces, all the internal forces — then you have to write it all out. If you solve for F using this, you figure out what the friction force needs to be — and if you're designing tires, you need to know how much friction force the tread can support. If you figure out N, you know what the air pressure in the tire needs to be so that it doesn't bottom out and the rim doesn't touch the ground. So — I attended this great talk by Michelin. A very interesting thing: they've invented something called the Tweel. How many of you have heard of the Tweel? You heard about it? Tell me about the Tweel — stand up, stand up, this is great, I want everyone to hear about it. Go ahead. Yeah — it's a tire that doesn't have any air in it. That's right. So it turns out that Dunlop invented the rubber tire with air inside it in like 1898 — that's where the company Dunlop came from. A couple of years later, someone was using a Dunlop tire in France and it burst, and there were two brothers who fixed it; their
last names were Michelin — that's where the company Michelin comes from. They were like, cool, we can make tires too. And for the last hundred years, every automobile worth its salt has used an air-filled tire. Now Michelin has invented a new tire — in fact, I showed Ajay the design a few months ago. Essentially, they have the tire, and instead of putting air between the inside and the outside, they put in these little spoke-like things — they look like spokes, but they're not spokes — and when you get contact, you get a little bit of shear, which you will see in 2.001, and it kind of flexes like a tire; it spreads out. The beauty of this is it has no air, so it can't go flat — great in military applications, et cetera. But number two: if you do it right, it could be more efficient and cheaper than a tire, and it might actually reduce power consumption — like three to five percent power wastage — because tires flex. Brilliant stuff. And you know the Segway scooter? A Segway has only two tires; if one of them blows out, you fall over. So they're already thinking of using this — many forklift trucks, et cetera, go into this. So they came and gave a presentation, and you know where they started? They calculate the normal force, they calculate the friction force, and then figure out how much pressure you need. It all starts here. So I hope you're now very clear about the three approaches, and that they're all the same — it's all a matter of convenience. Next week RJ is going to cover energy — very simply, very basic energy — and then we'll solve problems. Saturday at 3 p.m. you'll see me here again, and Monday we'll do more problems, and I'll give you a technique for solving problems that's, I think, pretty good. OK, see you guys.
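The "brute force: four equations, four unknowns" route that the lecture keeps contrasting with the shortcuts can be written out literally and handed to a linear solver. This is a sketch with assumed numbers; the sign conventions (x down-slope, θ̈ negative when rolling downhill) follow the lecture's a = −rθ̈.

```python
import math

m, r, g, phi = 1.0, 0.5, 9.81, math.radians(25)
I_c = 0.5 * m * r**2            # solid disc about its center

# Unknowns, in order: x = [a, N, F, thetadd]  (a down-slope, F = friction up-slope).
# One row per equation; the last column is the right-hand side.
A = [
    [m,   0.0, 1.0, 0.0, m * g * math.sin(phi)],  # F = ma along the slope
    [0.0, 1.0, 0.0, 0.0, m * g * math.cos(phi)],  # F = ma normal to the slope
    [0.0, 0.0, r,   I_c, 0.0],                    # torque about center: rF + I_c*thetadd = 0
    [1.0, 0.0, 0.0, r,   0.0],                    # rolling constraint: a + r*thetadd = 0
]

# Plain Gauss-Jordan elimination with partial pivoting.
n = len(A)
for col in range(n):
    piv = max(range(col, n), key=lambda i: abs(A[i][col]))
    A[col], A[piv] = A[piv], A[col]
    for row in range(n):
        if row != col:
            k = A[row][col] / A[col][col]
            A[row] = [x - k * p for x, p in zip(A[row], A[col])]

a, N, F, thetadd = (A[i][n] / A[i][i] for i in range(n))
print(a, N, F, thetadd)   # a = (2/3) g sin(phi), thetadd = -a/r
```

Solving all four at once also hands you the "hidden" forces N and F for free — which is exactly what you'd need for the tire-design question at the end.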
MIT 2.003J Dynamics and Control I, Fall 2007
Lecture 9: Dumbbell problem, multiple-particle systems, rigid bodies, derivation of torque = Iα
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation, or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu.

OK, in the last class, this is what we did. First, torque: we defined the torque of a force on a particle. Don't look at that, look at this. The torque of a force on a particle — it's a vector, little τ — on the particle P about point Q is equal to r_QP cross the force on the particle P. You can pick any Q you want. The second thing we did was angular momentum — thanks, AJ. (That's just recreating stuff from the last class; I just want to give you some context — don't worry, we'll get there.) Angular momentum we defined, with respect to frame A, as H. ("H is for angular momentum — the H is silent." I'm joking, you know that.) It is equal to r_QP cross p, the linear momentum of the particle. So all of this is: this is O, Q is the reference point, this is a particle P, and there's a force on it. Little p, of course, is the momentum — linear momentum; take the cross product with r to get angular momentum. And finally we said that the torque — kind of the equivalent of F = ma for rotation — is this: the torque about Q is equal to d/dt, in frame A, of the angular momentum — the rate of change of angular momentum. Is that strictly correct? That's only true in some special cases. The real, complete formula has this correction term, which has no name — so I called it the pesky term, because I need to refer to it and I need a name for it: the velocity of Q in A, cross
the momentum p in A. And we call this the pesky term. OK — so those are the three things we did in the last class: torque, angular momentum, and torque related to angular momentum. Two definitions — these first two are definitions — and a law, which comes from F = ma. And the other thing we did in the last class was start the dumbbell problem. Now, all of this was for a single particle — a point mass. The dumbbell problem was two point masses connected with a link, remember? And I said: look, F = ma — assume some unknown forces in the link, and we'll just solve it like it's two single particles, and we should be able to handle two particles. Everything that follows in this course we should be able to solve with F = ma. We started that but didn't finish, so I'm going to finish it today — that's what RJ has written up for me on the right-hand side; there's a lot of stuff we've already done, and he's going to save me some writing. And when you're done with the dumbbell problem, you'll be like: well, listen, I solved the dumbbell like it was two single particles, but really I solved a rigid-body problem, because it actually is a rigid body. And you'll see the moment of inertia just kind of drop out, magically, and you'll be like: huh — the moment of inertia comes naturally when you solve multiple-particle problems. So then we will formally define the moment of inertia and formally introduce you to torque equals I·alpha — that equation. It will actually come out of this: you'll see torque = Iα right here. So that's what we're going to do today. Any questions? OK. So if you remember the dumbbell problem, what we had was a dumbbell — two point masses — and
we applied two forces, F_P and F_Q. The forces were unequal just to make the math easy, and we made them parallel — it really doesn't matter; you can solve the problem anyway. And we said the reference frame A is inertial, the origin is O, and we call the center of the two point masses — the center of the dumbbell — C. This is Q and that is P. We picked these non-standard coordinates because they are convenient — they're minimal, because this thing has three degrees of freedom — and we picked θ as the other non-standard coordinate. (Oh — you missed P and Q? Are you sure? OK.) And the two forces on it: think of the dumbbell as sitting there with two rockets attached, both applying forces heading that way — F_P and F_Q. The two forces are constant but always head that way, and what we're trying to figure out is what this dumbbell does. OK, so we'll come to the free-body diagrams. The first thing you do is draw a free-body diagram. (What's the u here? Oh, I see — just the unit vector. OK — we could have called it b1.) So we take each point mass. This point mass has only two forces on it: one is this F, and the other is the tension in the rod — we'll come to that in a second. This point mass has the force F_P and the tension in the rod this way. Yes — why is the tension along the rod? Anyone? Anybody? And why does it have to be coaxial — can a rod not apply a sideways force? I mean, I can do it, right — I can pick up a stick and wave it. Well, the reason is actually this: if you just did a free-body diagram on the rod, and the tensions were not collinear — they were kind of skewed, they didn't meet —
there would be a net torque on the rod. But the mass of the rod is zero, so its angular momentum is zero forevermore, so the rate of change of angular momentum is zero — so the net torque must be zero. Even if you take all of this into account, the right-hand side is going to be zero, so the torque is going to be zero — which means the tension forces have to be collinear. If you don't understand this, I'll explain later; it's not a big deal. Or assume that this is a sort of strut which can only support compression and tension — it can't support any bending. That's the other way to think about it. So those are the two forces; we've drawn a free-body diagram. The next step is to calculate the accelerations of points P and Q — all this I did in the last class. The acceleration of point P is basically the rate of change of this vector plus this vector, and you can use the ultra-super-cool magic formula if you want. What you get is: the acceleration of point P is the acceleration of point C, minus this — what term is this? The centripetal term, straight from the magic formula — plus this — what is this? The Euler term. (This is r; I'm just writing it in my handwriting.) And then a_Q — same thing. So we know the accelerations of the two endpoints. I'm going to stop here because I see some quizzical faces. Everyone good with this? Copacetic? Not a big deal — all I'm saying is: if I want the accelerations of the endpoints in terms of the coordinates I picked, which are the x and y location of the center and the angle of the dumbbell, this is how I would write it. Andrew — everyone good? Go ahead. See, I knew it. OK, let's talk about the tension. Here's why — I don't have a place to write, so let me just write it here. This is the rod, and these
are the spheres — the point masses. I'm saying the tension has to be this way. If the tensions were skewed this way, they would still be equal and opposite, so they would still satisfy the weak form of Newton's law — even without invoking Newton's strong form. But the fact of the matter is: if they're this way, then if you just look at the rod, does it have a net torque on it? Think about it — say I take torques around this point, the center of mass of just the rod. Do you understand? If the rod is massless and it has two forces creating a torque around it, it would have to accelerate with an infinite alpha. At some intuitive level, do you understand what I'm saying? A massless rod therefore cannot — the forces on it must be along the line connecting the points; they have to be collinear. Get it? Anyone not get this? I can say it again. OK. So now I have the accelerations a_P and a_Q, and I can write Newton's laws. For this guy: F_Q minus T·b1 equals mass times the acceleration of particle Q. And: F_P plus T·b1 equals mass times the acceleration of particle P. Good — I've actually done something very simple; sometimes I say things so simply that you might get confused. Any questions from anyone? You can stop me here if you want. First step: label and draw everything. Second step: draw the free-body diagram. Third step: figure out the accelerations. Fourth step: write F = ma. That's what I just did — well, that's what RJ did. Now, what happens next? Look, you have two equations, and they're both vector equations, so there are actually four scalar equations. But how many unknowns? What are the unknowns? The x location of the rod, the y location of the center, the angle — is there any other
unknown? Anybody? Anyone? OK, let's list them again. What are the unknowns? You want to know the rod's x, y, and angle — that's three. I have four equations, so there must be one more unknown. Tension. If the two masses weren't connected, the fourth unknown would be the x, y of the two bodies, and there wouldn't be a tension. So when there's a kinematic constraint, one of the degrees of freedom goes away, but an unknown force replaces it. It's very important that you get this gestalt. OK — four equations, four unknowns; you should be able to solve it. But now let's solve it, and I want to show you something interesting. What do you do when you have simultaneous equations? You subtract them, you add them, you eliminate terms. I'm going to do it all in vector form. If I add these two equations, 1 and 2, the tension term cancels out, so I get F_P plus F_Q equals the sum of these two guys. So that's the first equation here: F_P + F_Q = 2m times the acceleration of the center of mass. That's the only little bit of shell game I played here — it's the definition of the center of mass. If you take the acceleration of one end times the mass of that end, plus the acceleration of the other end times the mass of that end, what you get is like twice that mass, sitting at the center of mass, accelerating with the acceleration of the center of mass. This will become clearer in a minute when I do the actual derivation — you'll see it — but essentially, when you add m·a_P plus m·a_Q, it's the same as 2m times the acceleration of the center of mass; the center of mass is the weighted average. Don't worry, I'll nail this in a minute. So everyone's good with adding up the equations? Good catch, by the way — that was a little bit of shell game I played. Actually RJ played it, not me. If I'd derived it, I would have
shown it to you very carefully. What did you derive? Say it again — I'm sorry — yeah, actually, thank you: if you write a_P and a_Q in terms of these two terms, it all cancels out. Good catch. But it's also the definition of the center of mass, just so you know. So if you just expand this — just write a_P — this term cancels out, this term remains, and this term remains; even this cancels out, so all you get is two times this. Everyone good with adding up the equations? So that's the first reduction: take equation 1 and equation 2 and add them. The second reduction: I'm going to take equation 1 and pre-cross it with r_CP — which is basically taking the moment. Let me explain: I take equation 1 — this guy — and go r_CP cross this, plus r_CP cross this, equals r_CP cross this. Still a valid equation, right? And I take equation 2 and go r_CQ cross this, minus r_CQ cross this, equals r_CQ cross this — and add them up. It's like taking simultaneous equations, multiplying all the terms by something, and adding — that kind of thing. (We'll give you a handwritten version of this, don't worry — you'll see it.) I wanted to make a very simple point: if you do that and grind through the math, a lot of terms cancel out, and guess what you end up with. You end up with — and I'll just do it now; all those terms cancel — r_CQ cross (F_Q minus F_P) equals 2m·r² times the angular acceleration of B in A. So you don't have to write this down, guys; we'll give this to you. I'm trying to make a deeper point here, not the math point — the math, trust me, if you check through it, all works out. And I want you to stare at this for a second, because it's really cool. Here's what's really cool about it:
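The cancellation claimed here — that the moment-weighted sum of the two F = ma equations collapses to torque about C equals 2mr²·α, with the a_C terms dropping out because r_CP = −r_CQ — can be checked with arbitrary numbers. A sketch: all values are made up, and `accel` is just the rigid-body kinematics a_C + α × r − ω²·r from the magic formula.

```python
import math

m, r = 1.3, 0.4                           # each point mass; half-length of the rod
theta, omega, alpha = 0.7, 2.0, -3.5      # arbitrary orientation, spin, angular accel
acx, acy = 0.6, -1.1                      # arbitrary center-of-mass acceleration

b1 = (math.cos(theta), math.sin(theta))   # unit vector along the rod
rcq = ( r * b1[0],  r * b1[1])            # center -> Q
rcp = (-r * b1[0], -r * b1[1])            # center -> P  (exactly -rcq)

def accel(rc):
    """Rigid-body kinematics: a_c + alpha x r_c - omega^2 r_c (2D, alpha about +z)."""
    return (acx - alpha * rc[1] - omega**2 * rc[0],
            acy + alpha * rc[0] - omega**2 * rc[1])

def cross_z(u, v):
    return u[0] * v[1] - u[1] * v[0]

aq, ap = accel(rcq), accel(rcp)
# moment-weighted sum of the two F = ma equations, taken about the center:
lhs = cross_z(rcq, (m * aq[0], m * aq[1])) + cross_z(rcp, (m * ap[0], m * ap[1]))
print(lhs, 2 * m * r**2 * alpha)   # equal: the a_c and centripetal parts cancel
```

However you change a_C or ω, the left side stays 2mr²·α — which is why the moment of inertia "drops out magically" of the dumbbell algebra.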
and by the way, I eliminated T. You can come up with another equation that tells you what T is, but essentially I've solved the problem. This first one says: take the two particles and treat them like one particle with twice the mass at the center of mass — the sum of the forces equals the mass of this lumped particle times the acceleration of the center of mass. That's insight number one. Can anyone interpret this one for me? Torque equals I·alpha. Do you understand? This is simply the torque about the center of mass, and out of our math popped what is actually the moment of inertia — although I haven't officially defined it yet — the moment of inertia of this dumbbell, multiplied by alpha. Torque equals I·alpha. And there's a reason we did all this: to give you a hint of what's coming ahead. So my question to you first is: do you understand this? Yep — Levi? "I don't always understand what you are talking about." OK — we didn't emphasize it, but in the drawing last week we defined the length of the rod as 2r. There are a lot of r's floating around: r with a subscript is a position vector, but an unsubscripted, unvectored r is simply the half-length of the dumbbell. Say it again? Yeah — maybe we shouldn't have used r; maybe we should have called this l. Then this would have been 2m·l² times the angular acceleration of B in A. Got it? It's just a length — the rod is 2r long; think of the half-length as l and this is 2m·l²·α. Someone else had a question here? Do you have a question? Anybody else? Look, here's what I was trying to say. You want to use F = ma for particles — totally get it. Here's the deal,
guys: if you have multiple-particle systems — for example, a two-particle system where the two particles are rigidly attached — you can apply F = ma to each particle, assume an unknown force, and just solve it. That's completely, completely correct; we just did it. And when you do, what you'll find is that when you solve all the equations, they reduce to a very interesting form: F = ma for the center of mass, plus this funky new term that keeps cropping up — which we've kind of already identified — the torque equal to this mr²-type term, 2m·r²·α. And this will keep cropping up: three particles, the same thing; four particles, the same thing; 300 million particles — a lot of equations — you would still end up with the same thing. So why not derive the same thing once and for all? We can show that τ = I·α about the center of mass, and then every time we see a rigid body, we can just use that directly. Get it? There's nothing new or shocking in what I'm doing: I'm just going to derive this for any number of particles and show that torque equals I·alpha — and you can already see from a two-particle system that torque equals I·alpha. Get it? No? Yes? Maybe? Good. OK — that was the objective of this exercise: to give you a taste of what we're going to do next. What we'll do next is take millions of particles and do the same thing, except in a generalized format — that's called a derivation — and the output will be that same result, but as a general formula. (Oops — huh? Oh, "raise it"? I'm sorry, I thought you said "erase it"; I was like, what do you want me to erase, guys?) Seriously — I want you to get the point of what I'm
saying. I'll give this to you, I promise: we will scan it and post it for you on the web, so you can read it. OK? Writing it down won't add a lot for this; you should write down all the other stuff I say and do, but not this. You can stare at it, but don't write it down, because we'll give it to you. The point I'm making is a very simple one: torque equals I alpha for rigid bodies. That's all I'm trying to say. OK, and really, this is all just boring math; this is the result I want you to see. And I didn't cheat; I did it all from the physics. All right, so what we're going to do now is look at multiple particles like that in one shot. So first, consider again a frame of reference A, and this is point O. Consider a particle i, a particle j, and thousands, millions, of other particles. OK, now on each particle i or j there is a force, which I'm going to write with a superscript: little f^i is the force on i, and f^j is the force on j. And each particle has a position vector, r_Oi and r_Oj. OK, same picture as before; I'm now going to do it in a general way. Now, first you need to tell me: how is f^i related to the acceleration of particle i? Anybody? Is that right? OK, actually, let's not assume that. Thank you: m_i a_i, the acceleration of particle i. Right. What is the torque on particle i? Let's pick a point Q; we'll call this the reference point for torques. What is the torque on particle i, and how is that related to its motion? Anybody? Sorry, yeah, so that's the definition of torque. What about the angular-momentum side? It's just going to be the formula; I didn't ask the question clearly, it's just going to be this formula. Sorry. Right: these are the two statements we can make about those particles. Yes? OK, now what happens when you have multiple particles, i, j, k, 1 through n, you know, 300
million, n? How do those equations work? Look at these two; look what happened here. What happens when you have multiple particles? Anybody? Say it again? Oh, let's do it. That's right, this should be Q; is that what you're saying? Yeah, thank you, that should be Q, the reference point. Now, what happens with multiple points? Take a guess: what's the equation going to look like for multiple points? Anybody? Let me say it: it's going to look the same. Now, by the way, I'm going to call this entire set of points, all of them, the ensemble E. So when I say E, I'm referring to the entire set of points, because I need a term for it. All right, here's where we're going to end up: the total external force on the ensemble is equal to the mass of the ensemble, where the mass of the ensemble is the sum of the m_i (it's the sum of all the masses), times the acceleration of the center of mass of the ensemble. That's where we're going to end up: the center of mass. And I'm going to write it like this, right now for emphasis: the center of mass of that ensemble, C subscript E. When I say C subscript E, it's the center of mass of that ensemble. OK? After some time I'll get lazy and I'll stop writing the E; it'll just be C, and when you see a C, assume it's the center of mass of the ensemble. I'll try to keep it up as long as I can. And you know what else we're going to find? We're going to find the torque on the ensemble about point Q, which is the sum of all the torques about point Q. I'm just giving a heads-up: it's going to look very similar to what we have there, d/dt of H in frame A. When I write a capital letter it's the total: the total torque T, not tau; the total force, not the little force; the total angular momentum of the entire ensemble about point Q. Plus v_Q in frame A, cross... what's this term going to be? What's the equivalent here going to be? Say it again: yeah, capital P. And what's the reference frame? A.
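The ensemble force result previewed here, F_E = M_E times the acceleration of the center of mass, is easy to spot-check numerically. Below is a minimal sketch in Python with invented masses and forces (nothing here comes from the lecture's worked example): each particle obeys f_i = m_i a_i on its own, and the summed forces match the ensemble mass times the center-of-mass acceleration.

```python
# Each particle independently obeys f_i = m_i * a_i ("hockey pucks with
# rockets"); then check that the summed external forces equal the ensemble
# mass times the acceleration of the center of mass. Numbers are invented.
masses = [1.0, 2.0, 5.0]
forces = [(2.0, 0.0), (0.0, 4.0), (-1.0, 1.0)]  # external force on each

# Acceleration of each particle from its own equation of motion.
accels = [(fx / m, fy / m) for m, (fx, fy) in zip(masses, forces)]

M_E = sum(masses)  # mass of the ensemble, M_E = sum of the m_i

# Acceleration of the center of mass = mass-weighted average of the a_i.
a_c = tuple(sum(m * a[k] for m, a in zip(masses, accels)) / M_E
            for k in range(2))

# Total external force on the ensemble.
F_E = tuple(sum(f[k] for f in forces) for k in range(2))

print(F_E)                           # (1.0, 5.0)
print(tuple(M_E * a for a in a_c))   # same vector: F_E = M_E * a_c
```

Note that nothing in the sketch assumes the particles are connected; the result holds for unconnected particles and rigid bodies alike, exactly as argued below.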
And what is it? The linear momentum of the center of mass of the ensemble. Now, as it turns out, in this one case the momentum of the center of mass of the ensemble is the same as the total momentum of the ensemble. Got it? So this is simply going to be this. OK, so this is what we're going to do; this is what you've seen, and this is where we're going to go. And by the way, we will show that under certain circumstances this reduces to I alpha. Really, these are the two equations we seek, these two. OK, so that's the roadmap we're going to follow. This equation we're going to turn into this guy; this equation we're going to turn into this guy; under certain circumstances this equation becomes I alpha, and we're done. That's what we're going to do now. Any questions? I insist on at least one question at this point; I'm feeling unquestioned, I need to be questioned. Yes? Thank you, that's what we're going to do now. That's where we're going to go, and what I'm going to do is take you there; I'm going to show you these results. Good question. OK, so what we're going to do now is derive this, then we'll derive this, and then we'll derive this. So let's derive this first. In order for me to derive that, the first thing I need to do is introduce the concept of the center of mass, officially. You've seen it unofficially, but I need to do it officially. OK: if I have an ensemble, can anyone tell me the definition of the center of mass of the ensemble? Take a guess. I'll say it in English, and let's see if you can put it in formulae: it is the mass-weighted average of the position vectors of all the individual particles. Right? Does that make sense in English? Yes? Here, let me tell you how to do it; I'm just going to write it down and you'll see it's correct. By the way, you were exactly right; the only thing I'll differ with is that you don't have to do it
separately, coordinate by coordinate; you can do it in one vector form. Let me explain. Look, this here is the center of mass: take the position vector of every particle, multiply it by its mass, add the whole thing up, and divide by the total mass. The vector that comes out, some sort of mean vector, is the position vector of the center of mass. See what I mean? And in fact you were right in principle; in practice you might do it in coordinates, but you can write it this way. So now I'm going to stop and ask: does anyone have any questions about this? I've simply written an age-old concept, something you've seen in high school, in vector form. Do you get it? (You twitched; was that just a twitch, or did you have a question? Just kidding.) Anyone, do you understand this? If you think I went fast and you want me to say it again, I'll say it again. Who wants me to say this again? Anyone? OK. That is the center of mass. Again: take every position vector, multiply by the mass of that particle, add up all these weighted position vectors over the ensemble, and divide by the total mass. Boom: it's the average position vector. Got it? OK. Now, if I wanted to find the velocity of the center of mass, it would be this; I would write it as this. I'm only doing this because in the end we'll need the acceleration of the center of mass, so I'm going to take the first derivative, and I'll write it for you and you'll see that there's no great magic here. By the way, we can expand this summation: if there are n particles, it's the summation from i = 1 to n of m_i... Can anyone tell me what the velocity of the center of mass is going to be? For the position vector we take the weighted average of the position vectors; for the velocity, what do I need to take? Excellent: the weighted average of the velocities. I'm going to replace this summation over the masses with just this term, which means the mass of the ensemble. OK, and finally, if I want the acceleration of the center of mass,
it's going to be, what? The weighted average of the accelerations of the individual particles. Make sense? Now I'm going to erase this, only because I need some room. So now I've told you what the center of mass is. (Ajay, can you publish the handwritten dumbbell problem? If you just scan your notes. Thanks.) All right, that's the center-of-mass material. OK, now you understand the center of mass. I remember when someone asked me where that acceleration-of-C expression came from: there were two ways to do it, the Ajay way, which is to expand it and just show that it comes out to be that, but my explanation was actually alluding to this. You understand? OK, now let's take this guy, F = ma for each particle, and sum it for the entire ensemble of particles. OK, that's simply that equation summed up over the entire ensemble of particles. Right, everyone? Yes: that's the same thing, believe it or not. In fact, and I'm glad you're asking that question because it's important to think about it: those are not different things; they're not incompatible. If you remove the summation, it's an individual particle, and when you sum it, you're just saying this individual particle plus this individual particle plus this individual particle. A counter is simply saying: consider each individual particle, and then repeat. It's like writing a program. Get it? Everyone understand this? OK, so where does this lead us? What is this written as? The total force on the ensemble, capital F_E. OK, and what is this right-hand side? Look up there. That's correct. So we have that first result: F_E equals M_E times the acceleration of the center of mass. That's the first result; we've just proved it. It's very simple: I just took the equation of motion for each particle, added them up, used the definition of the center of mass, and this is what I get. OK, it's just a nice way to do that once and for all, and we're done. Two quick questions for you. First question: does this
assume that the particles are connected, or rigid? Did I make that assumption anywhere? I did not. In fact, I'm assuming these are like independent hockey pucks with rockets attached to them. Right? These are independent hockey pucks with rockets attached to them: add up the forces of all the rockets, take the total mass, find the center of mass, and F = ma holds. So I made no assumption that the particles are connected. Now here's the tough question: if the particles were rigidly connected by massless struts, would this formula still hold? This is actually a deep question. I won't do it in detail right now; you'll see the same question pop up later, so I'll give you the verbal answer now and do it in more detail later. So, my question to you: if the particles are rigidly connected by massless struts, would the formula still hold? Question mark. Yes, or no, or maybe, or only on Tuesdays? Yes. And why is that? Anybody? Yeah, go ahead. Yep, that's right. Look, if the particles are rigidly connected there might be internal forces, for sure; in fact there will be internal forces. But whatever one particle gives to another, the other gives back. Get it? Remember how here the internal forces kept canceling when you added them? These two canceled out. So even if it's rigid, that holds; but the beauty is it also holds if the particles are unconnected. So it applies to multiple unconnected particles, and it applies to a rigid body. Is everyone good with this? I'm not going to show you the internal forces canceling out; it's pretty obvious, because you've kind of seen it here. Good. So we have just derived this guy. Done. Now I'm going to derive this guy, and then I will reduce it to this. That's the roadmap for the remaining 35 minutes. OK. I'm going to keep referring to that diagram, that picture there. OK, so now, how did I do this? I just took a summation of the F = ma
equation, right? I'm going to do the same thing with that: tau_Q equals d/dt of H_Q in frame A, plus v_Q cross P. I'm just going to sum it, no great magic; I'm going to add it up and show that you get the same equation. OK. You know, the reason we do derivations, even though derivations can be boring, is this: when you take something like this, it makes a lot of sense, great, and I do like to do this. But when you generalize, things generalize only sort of; there are pitfalls, and the reason to be precise is that it's those pitfalls you need to be aware of. Right? When you're a supervisor at NASA, you're the person who's going to catch the "dude, you missed a term." You're the mister-term person. (Speaking of "dude": Dude, Where's My Car? Terrible movie. I finally saw it; I was trying to be cool, I thought, yeah, I'll see it, and it's terrible. I'm sorry if you guys liked it; I'm not cool. Did anyone like it? Come on. And I had to detox, so I watched a Monty Python; I was like, that's better. OK, I know I just dated myself.) OK. So, we know that r_O... I'm just rewriting that guy now, the torque. I'm actually just writing the definition of torque: this is the torque from the external force, the total torque on particle i from the external force. OK? Now, what if there's an internal force? What if there's an internal force from particle j onto particle i, a force like that? If you look at particle i, it'll have two forces. Let's write it down; remember, here I just hand-waved it, but in this case let's write down what happens. You will get another torque, which I'm going to write with the force on particle i from particle j, f_ij. OK, and I'll get k, and so on: I'll get a bunch of these terms, one from every
other particle that pushes an internal force onto this particle. Right? So that's the total force on this particle. Now, if you look at particle j (because it's a sum, I need to look at each particle), the torque on it is r_Oj cross f^j, plus r_Oj cross f_ji... What am I going to get here? What's the equivalent here? Dot dot dot. Right. Now, if I add them up, what do I get? The sum of all the torques. (By the way, why does this cancel out? Oh, sorry, we'll come back to that.) So add them up; what do I get here? You said it: the total torque on the entire thing, and the way I'm defining that is capital T. It's the sum of the torques on the individual particles, added up about reference point Q. All right, excuse me: the torque on the ensemble about point Q, and I define it as the sum from i = 1 to n, if the ensemble has n points, of r_Oi cross f^i. And here's the deal: what is f_ij? What's the relationship between f_ij and f_ji? They're equal and opposite, so all these terms cancel out; all the internal terms cancel out. Get it? I mean, look, here's the deal. Let's say you're flying a plane. If two kids sitting next to each other push on each other, does it make any difference to the flight of the plane? It doesn't, as long as they're just pushing against each other, stably, in their seats. It makes no difference to the flight of the plane; it's an internal matter, and someone sitting outside doesn't see it. You understand? So that's the way to think about the internal forces: they make no difference, they cancel out. They only make a difference, by the way, if the kids are moving around; but if it's static, which is what happens in a rigid body, they make no difference. OK? Yep, how's that? That was quick, huh? (Sorry: it should be written Q. Thank you.) OK, so that's the definition of the total torque on a rigid body. I write
it as capital T, and it's simply the sum of all these, and I can ignore the internal forces. Got it? OK, so that's the first result I wanted to show you in trying to prove that other equation. Now let's do the same for angular momentum, because there are three terms: I'm going to sum up the torques, sum up the angular momenta, and sum up this last term. OK? Yep? Yes, so you're asking about a body whose particles aren't right next to each other. That's a very deep question. It turns out that even in that case the internal terms go to zero. I won't write it out here, but let me explain why. First of all, if it's a continuum, then these two points are very close, so the terms go to zero. If they're attached by massless rods, then you've got to use that kind of argument: you can show that f_ij is the negative of f_ji, and you can expand it all out, write it with r_ij, and show the term still vanishes. OK? And if you would like, if you're interested in that, come and see me and I can show you some of the details. All right: so for that sort of body you're talking about, which is not a real body (a real body has all its particles right next to each other), even in that sort of thing these terms will cancel out. It's very cool, actually; in fact, Ajay and I bang our heads against this every year. We're like, "ah, why does it cancel again? ...oh yeah, it finally cancels." OK, all right, that's a good question; it's actually not that obvious. You know, Einstein was giving a speech, I've forgotten where, I think it was in Berlin in the '50s, and he wrote something, an equation, and said "obviously," and wrote the next result, something very profound. Obviously, right? And a young physicist in the back
apparently said, "Professor Einstein, it's not so obvious to me," and all the old bearded professors turned and stared at him, and the guy was like, "oh, sorry," and sat down. And then Einstein said, "you know, actually, you're exactly right," and it took him another four blackboards to derive that result. And if I remember correctly, the story is that that young man was Lev Landau, who then won a Nobel Prize himself. So you might win a Nobel Prize, unlike me. All right. OK, so let's do the next term: H_Qi in frame A equals r_Qi cross p_i in frame A. If you write that out and sum it up for all the particles (I'm not going to do it; there are no terms to cancel), it comes out that the total angular momentum of the ensemble about point Q can be defined simply as this summation: the sum from i = 1 to n, where n is the number of particles in the ensemble, of r_Qi cross p_i in frame A. Hm, did I miss something? The Q? Right, yeah, OK. And then finally, let's plug this and this into that equation. So now we can say: tau on particle i about point Q (I'm just recreating that equation; I'm going to add it all up, now that we've done the hard work) equals d/dt of H_Qi in frame A, plus v_Q cross p_i in frame A. Repeat n times. When you add it up, we can see that this, summed, gives the total torque on the ensemble; that's it, by definition. Right? What is the summation of this? Sorry, yeah: it's just d/dt of H, capital H now, about Q. And what is that last term going to become? Anyone? Look, it's just this: v_Q is constant over the sum, so summing this just adds up the linear momenta of the individual particles to get the linear momentum of the total ensemble. So that's going to be capital P, because it's the ensemble. And that is that equation. Got it? Sum, sum, sum. It's done. Any
questions? OK. By the way, the same things apply: torque equals the rate of change of the angular momentum of the ensemble, and if the reference point is stationary, this last term goes to zero. Right? If the reference point is moving parallel to the center of mass, this term goes to (starts with z) zero. Right? Same thing, similar concepts, just added up. All right. OK, so now we have our second result, and now we're going to do the final thing, vis-a-vis the dumbbell problem: we're going to show that under certain circumstances this reduces to I alpha. OK, and does everyone remember moment of inertia? What's moment of inertia? Anyone? Definition, what it means, anything? Yep: it's hard to rotate something, right, and the basic insight is that a mass that is twice the distance from the axis is four times more difficult to rotate. It weighs in with the square; it's the integral of r squared dm. Right? And that's why we ended up with this mr squared here: if the dumbbell is half the length, that equation doesn't change its form, but it'll rotate four times as fast. Right. So now let's do that final thing, which is going from here to introducing the concept of moment of inertia. OK, and this is where, here be dragons, you need to be careful, because you cannot do that about just any reference point. Only some reference points are valid, and that's the difference between you and a Harvard student, right? Because you know that. OK, so in order to go from here to I alpha, what we'll do is express this guy; we'll actually expand this guy. So what is H of the ensemble about point Q? It is equal to, as we said earlier... and there's a lot of math and just a lot of writing here, so you'll have to bear with me, but it's useful to pay attention. All right: even now, after all these years, I get myself confused and I have to go back to the derivations to kind of figure things out, because a lot of things go unsaid. Oh: r cross what? Anyone, anyone? What's this? Cross p, yeah, exactly. But I'm going
to write it out like this: I'm going to expand p. Right. Now, here's a very important thing. What did we do in the dumbbell problem? If you remember correctly, we expressed the velocities of the particles not by themselves but in terms of the velocity of the center of mass plus the lever-arm term, omega cross r. Right? Because we're trying to get the whole rotation thing into it. So we're going to do the same thing here. (Yes, that should be Q; they should both be Q. Thank you.) So, how do we rewrite this velocity? Anyone? Just to keep you awake for a boring derivation, can anyone figure this out as I'm writing? How about this: does this make sense? So think about what I'm doing. Here's the ensemble; here's the dumbbell; here's the center of the dumbbell. I want to find the velocity of the end point of the dumbbell, so I find v_C plus omega cross that distance. That is the magic formula. Yeah: so fundamentally what I'm saying here, and this is where I'm doing it, is that I'm assuming those particles form a rigid body, which means that if the ensemble rotates, all the points rotate. You know how, if you look at the stars in those time-lapse photographs, they all trace circles? That's kind of what I'm doing here. I'm saying: look, whatever the velocity of a particle is, it must be related to all the other particles in the following way, which is the center-of-mass velocity plus omega cross their distance from the center of mass, as a vector. Get it? Are you OK with this? If you're not, you stop me. You're asking where the B-frame v_i went? Yeah, that makes sense: B-frame v_i is zero, because the particle is not moving with respect to B; I'm just not writing out the whole formula here. All right, if you wanted to use the entire formula, you're right: it should be v_C in A, plus v_i in B, plus omega of B in A cross r. Right? And the B term is
zero. Thank you, RJ; I think that's where you were going. Because it's rigid, if you're sitting on the ensemble, those particles aren't moving. OK? Is everyone good with this? Sure: so if you call the ensemble a frame of reference E, then the velocity version of the super ultra magic formula would be: the velocity of i in frame A equals the velocity of some reference point C in frame A, plus the velocity of i in frame E (we call the ensemble E, so I'll just use E), plus omega of E in A cross r_Ci. You understand? So, space shuttle: a point on the space shuttle. The space shuttle is E, Endeavour; C is some point on it; we want to find the velocity of an astronaut who's running away from the center of the shuttle with velocity v_i in E. Right? It doesn't matter: this formula holds for any reference point. But as it happens, I'm going to do everything about the center of mass of the space shuttle. This formula holds, right, but this term is going to vanish because it's a rigid body: with respect to the center of mass, in a frame attached to the ensemble, the point i isn't moving. Get it? This term is not zero, because the whole ensemble is moving, and this term is not zero. Get it? Just using the super magic formula. Does that answer your question? OK, that's this. So let's rewrite it; I have, let's see, ten or twelve minutes. This equals the sum from i = 1 to n of m_i r_Qi (I just pulled the m_i out) cross... can you help me out here, folks? Just to be clear, the C is the center of mass; what do I get here? The velocity of C. Right. Plus the sum from i = 1 to n (I'm just expanding out the two additions, and again I'm going to pull the m_i out), and I'm going to rewrite this guy in terms of r_Ci. How can I rewrite r_Qi in terms of r_Ci? Anybody? r_Qi equals r_QC plus r_Ci. All right: basically, r_Qi is this vector, and the center of mass is some point C here, so I'm going to go this way. I want to express
everything in terms of this guy, so this equals r_QC plus r_Ci. That's what I just did here; that's this guy. OK, and then I have cross this term. I'll summarize in a second. OK, here's what I did. First step: I wrote the definition of H of E about Q in frame A. Then I rewrote this; then I rewrote v_i in frame A in terms of the center-of-mass velocity and the angular velocity; I took this term out here, and then the remaining term, this term, I split up this way. Just drudgery. It's just drudgery; I'm not doing anything particularly profound here, just vectors: express a vector this way, that way. All right, and in some ways you can't even see why I'm doing it until you get to the end point and you go "ah," or hopefully you'll go "ah" if you're not sleeping. All right. So this rewrites to this (this is actually the definition of the center of mass, at some level), plus this rewrites to a really, really ugly term, this term, plus a term which will look ugly for a moment but is actually very elegant. OK? So I just expanded that out. So just so you know, from here to here it's just expanding and rewriting; there's nothing profound. If you fell asleep and just woke up, it's OK, you didn't miss much. Right? So that is simply rewriting the angular momentum of ensemble E about point Q in terms of the center of mass, distances from the center of mass, stuff like that. OK, now we get to the fun part. This term is in fact nothing more than the distance of the center of mass from point Q, cross product with what? Anyone? Anyone? Hmm? It's the linear momentum. Yeah, that's cute. OK, now this middle guy I'm not even going to bother to expand, because it's just a mess and there's no saving it. Right? I can't simplify it; even I can't do it. Now, this last one is very interesting. This is called the vector triple
product. Here's a beautiful little formula: if you have a vector a cross (a vector b cross a vector c), this is called the vector triple product, and you can rewrite it as b times (a dot c) minus c times (a dot b). Did you know that? Have you seen this before? It's a vector identity; you've seen it in math, trust me, you've forgotten. It's called the vector triple product. You can prove it easily; I'm not going to prove it, it's a well-known mathematical result, and we'll just use it here. So with that you can rewrite this. OK, that's the vector triple product formula, for any three vectors. So if you rewrite it, here's what you get. OK, so this is the mess that we've just inherited, and I'm going to clean up the mess. OK, first of all, we're looking at two dimensions; by the way, this can be generalized to three dimensions. What happens to this term in two dimensions: the position vector of point i with respect to the center of mass, dot product with the angular velocity? What happens to this term? It vanishes. Why? Because if this is the plane, the angular velocity points out of the plane and the position vectors are all in the plane, and the dot product of two orthogonal vectors is zero. Right? So this is zero. Yeah: so I used the vector triple product rule to rewrite this; this is the position vector of point i from the center of mass, in the plane of the blackboard, and the angular velocity is outward. Yep, maybe I wrote it in the wrong order; it's fine. It doesn't matter, but only because the term goes to zero; it does matter in the sense that I may have slipped up. So in what way was it a mistake? "You have your parentheses on the second part as opposed to the first two." Ah, good point; we'll have to check. Let me check... I don't want to chase it now, just because I'm running out of time. It may turn out my answer is wrong, so I've made two mistakes that canceled out, or maybe it doesn't matter; I can just look at that. Yeah, we'll just confirm that; thank you. All right, but the point is, this term vanishes anyway, because it'll be a dot product. Now, does anyone recognize this term? m r squared omega. So what is m r squared?
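The triple-product identity invoked on the board, a x (b x c) = b(a . c) - c(a . b), is easy to spot-check numerically. A minimal sketch with arbitrary invented vectors (not values from the lecture):

```python
# Numerical check of the vector triple product identity:
#   a x (b x c) = b * (a . c) - c * (a . b)
def cross(a, b):
    """3-D cross product of two tuples."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

a, b, c = (1.0, 2.0, 3.0), (-1.0, 0.5, 2.0), (4.0, -2.0, 1.0)

lhs = cross(a, cross(b, c))
rhs = tuple(b[k] * dot(a, c) - c[k] * dot(a, b) for k in range(3))
print(lhs)  # (-27.0, 13.5, 0.0)
print(rhs)  # same vector, confirming the identity for this sample
```

Note where the parentheses sit: the identity is for a cross (b cross c); grouping the first two vectors instead, (a cross b) cross c, gives a different result in general, which is exactly the bookkeeping point raised in the exchange above.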
All right, OK: so this is I in its primordial form. Anyone recognize this other term? Yeah, it's a terrible term, this term, whatever. So the point I want to make is that these guys only vanish under certain circumstances, and that's the pitfall you need to be aware of. All right: for example, if Q, the point about which you're taking the angular momentum, is the same as the center of mass, this term vanishes and this term vanishes, because this is r_QC. Right? So if Q is the center of mass, then what we have just shown is that the angular momentum of the ensemble about the center of mass equals I times omega of B in A. And that, in fact, my friends, is the definition of I. Got it? And if that holds, and only if that holds, can you also say that the total torque on the ensemble, which is the rate of change of that about the center of mass, equals I alpha. OK? By the way, in a continuum a summation is an integral, so when you do integrals it's the same thing but for very tiny particles; it becomes an integral over dm. This will be done in recitation. So the definition of the moment of inertia is this guy. OK? Yes: same velocities? If I can't answer that question right away, let me think about it; I'll answer it in class next week.
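The whole chain of results in this lecture can be verified on a small planar ensemble: compute the center of mass as the mass-weighted average of the position vectors, assign rigid-body velocities v_i = v_C + omega x r_Ci, and check that the angular momentum about the center of mass comes out as I times omega with I = sum of m_i |r_Ci| squared. A minimal sketch with an invented three-particle layout (none of these numbers come from the lecture):

```python
def cross_z(a, b):
    """z-component of the planar cross product a x b."""
    return a[0] * b[1] - a[1] * b[0]

masses = [1.0, 2.0, 3.0]
positions = [(3.0, 1.0), (2.0, 2.5), (-1.0 / 3.0, 0.0)]  # invented layout
M = sum(masses)

# Center of mass: mass-weighted average of the position vectors.
r_c = tuple(sum(m * p[k] for m, p in zip(masses, positions)) / M
            for k in range(2))
# Positions relative to the center of mass (these mass-weight to zero).
r_rel = [(p[0] - r_c[0], p[1] - r_c[1]) for p in positions]

# Rigid-body motion: v_i = v_C + omega x r_Ci, with omega out of the page.
v_c = (5.0, -1.0)
omega = 2.0
vels = [(v_c[0] - omega * r[1], v_c[1] + omega * r[0]) for r in r_rel]

# Angular momentum about the center of mass, H_c = sum m_i (r_Ci x v_i),
# and the moment of inertia I = sum m_i |r_Ci|^2.
H_c = sum(m * cross_z(r, v) for m, r, v in zip(masses, r_rel, vels))
I = sum(m * (r[0] ** 2 + r[1] ** 2) for m, r in zip(masses, r_rel))
print(H_c)         # agrees with I * omega
print(I * omega)
```

The v_c terms drop out of H_c precisely because the mass-weighted relative positions sum to zero, which is the "only about the center of mass" pitfall the lecture warns about: about a general point Q, the extra r_QC cross P term would survive.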
MIT_2003J_Dynamics_and_Control_I_Fall_2007
4_Magic_and_supermagic_formulae.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation, or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. All right, folks. So last week we first saw our first magic formula, which you've probably forgotten because it was a wonderful weekend, so I'll remind you. And from there we will resolve the spider problem, and then we're going to look at something called the super magic formula. These are not official names; I made them up, so please, please, never ever say these in public. OK? But it's a useful formula. OK. So last week, as I said, first of all we looked at angular velocity for the first time. And in fact, everything before I mentioned angular velocity, the machinery I provided you, is powerful enough to solve pretty much any kinematics problem, because it's based purely on geometry. Angular velocity is a kind of made-up concept that just simplifies the math, and it's kind of intuitive: stuff is rotating, theta-dot multiplied by the lever arm gives you a velocity. But you can see how it kind of slotted into place. So we introduced the angular velocity like this (I'll fill that in later): omega of B in A, and I wrote it as this. So help me write this: what is the English statement here? It is the angular velocity of frame B with respect to frame A. Right? OK, so we saw that for the first time. Now, a little bit on the lore of angular velocity, which you've seen in previous classes but we won't plunge into. The first thing is: can a point have an angular velocity? Now, can a point have a velocity? Yes. Can a frame have a velocity? Kind of, no, because if a
frame's rotating some point different points will have different velocities so yeah a frame can have a velocity but doesn't really mean much and you tell me which point in the frame you're referring to right look this thing is a frame what's the velocity at any given point in time of the frame well no you I need to tell you which point I'm referring to right so if I look at the corner of this guy this corner yeah you can tell me the velocity at any point in time right but the velocity of the frame doesn't mean anything unless I give you a reference point so points have velocities frames have angular velocities good points of accelerations frames have angular accelerations okay so just to talk a little bit more about angular velocity why is angular of velocity a vector um say it again I'm sorry it's a cross product uh I guess well I guess you could look at it that way but I want something even more kind of in you know just very basic hand wavy more basic yeah magit Direction okay and why does it have magnitude and Direction back there oh that's I was to say but why you know how is rotation and an you know obviously rotation is how can you add rotations what's up with that you know it You' seen this in Cloud yeah huh and that's right and why is that say again okay that's right but let me ask you something if I said look I'm standing at this right I'm going to rotate about my vertical axis and then do this my hands up here right but if I rotate my hand first and oh that did the same thing but you know what I mean angles don't add up right if I change the order in which I add angles they don't add up right so if I said 45° up and I'm going to trade 45° my arms up here right why is the why is the angle adding up here a has something changed over the weekend if I turn 45° anyway you know the point if you add up angles they shouldn't add up something happened this weekend and all my angles are adding up yep well look should vectors add up right angles don't add up right 
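[The claim just made — that finite rotations don't add up (the order matters) while infinitesimal rotations effectively do — can be checked numerically. This is a sketch, not part of the lecture: it composes 3×3 rotation matrices about z and x in both orders and measures the disagreement. For a 90° angle the two orders give very different results; for 0.1° the discrepancy shrinks like the angle squared.]

```python
import math

def rot_z(t):
    """Rotation matrix about the z axis by angle t (radians)."""
    c, s = math.cos(t), math.sin(t)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def rot_x(t):
    """Rotation matrix about the x axis by angle t (radians)."""
    c, s = math.cos(t), math.sin(t)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def matmul(A, B):
    """3x3 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def max_diff(A, B):
    """Largest entry-wise difference between two 3x3 matrices."""
    return max(abs(A[i][j] - B[i][j]) for i in range(3) for j in range(3))

# Finite rotations: z-then-x vs x-then-z, 90 degrees each — order matters a lot.
big = math.radians(90)
d_big = max_diff(matmul(rot_z(big), rot_x(big)), matmul(rot_x(big), rot_z(big)))

# Infinitesimal-ish rotations: 0.1 degree each — the two orders nearly agree.
small = math.radians(0.1)
d_small = max_diff(matmul(rot_z(small), rot_x(small)), matmul(rot_x(small), rot_z(small)))

print(d_big, d_small)
```

The small-angle discrepancy is on the order of the angle squared, which is exactly why rates of rotation (dθ/dt) add like vectors even though finite angles do not.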
So yeah — clockwise and counterclockwise means there's a plus and a minus to angles, right? But you know where I'm going with this. I'll just tell you. The simple thing is: angles don't add up, but infinitesimal angles do add up. You remember this — you did this in physics, I'm assuming. So if I rotate about one axis a certain amount, say 30 degrees, and then rotate about another axis 30 degrees, and then I reverse the sequence, they don't add up. But if I rotate about one axis like a tenth of a degree, and then rotate about another axis a tenth of a degree, and then reverse the sequence, they actually come pretty close. So it turns out that infinitesimal angles do add up. While in general rotations about different axes don't add up, infinitesimal rotations do. And because angular velocity is dθ/dt — it is about some direction, and there's a dθ in it — it turns out that angular velocity is in fact a vector, in the sense that there is an axis, the angle is about that axis, and if you have another angular velocity and you add them up, they do add up. The math does work out. Am I making sense or not making sense? Any questions about this? Anyone? Yes?

What do I mean by "angles don't add up"? What I mean is this: take a rigid body, and apply two rotations. I was a little quick because I assumed you'd seen this before, but I'm noticing that you haven't — perhaps they don't talk about it in physics anymore — so let me take a step back and describe it in more detail. Let's take this rigid body. This is the z direction — call these x, y, and z. Let's say I rotate it about z by 90 degrees, and then I rotate it about x — x is this way — so it ends up like this. Remember this: it was facing you, right? I started like this — z, then x — so it's facing you now
instead I if I said rotated about X first and then rotated about Z you understand so if I just took Theta as a Theta with this vector and I said apply these two rotations well depending on the sequence in which you added the rotation vectors up you end up with a different position right but vectors don't have the problem when you add two vectors it doesn't matter what sequence you add them in right if I have two vectors whether I do this or whether I write this first and do this the net Vector is the same you get it so if I'm going to treat angles like vectors they should add up correct regardless of sequence they must give the same result but they don't but it turns out that here if I take a very tiny rotation not 90° but like. 1° and then. 1° then the sequences do actually add up okay and this is the sort of thing you need to kind of walk around with a book you know and people will think you're nuts but you know do that and you'll see that the angles kind do add up you get it so that's why when you take small angles they do add up and angle of velocity is a measure of the rate of change of angle so it's a d Theta it's a very tiny angle get it in a moment of time right in a very tiny moment of time so angular of velocity does add up so you can treat it as a vector in the math works out in the answer to Sam's comment okay so what that means is and this is what I wanted to tell you if I have one frame a another frame B and another frame C so let's say this is frame a right and um I'm frame B I'm going to rotate okay and as I rotate uh this this thing which is frame C which is moving in my on me right then a Omega B is my my angle of velocity right what's the direction this way that's the fra that's frame a I'm frame B I'm rotating what's the direction upwards right straight up so that's a Omega B is Theta dot this way and the vector upwards but now let's say that I'm moving this what's the frame what's the vector here this way right depending on the direction this 
way, or backwards, depending on the direction. Right. So here's the deal:

ᴬω꜀ = ᴬωᵦ + ᴮω꜀

and this is what I wanted to tell you. The beauty of it is that if you can identify an angular velocity between two frames, and another between two other frames, and you can create a sequence of frames, then you can calculate the angular velocity of the final frame. So if you look at a really complex system — a gimbal assembly, for example — you can always find little joints. Mechanical engineering is full of these one-degree-of-freedom joints: we call them hinges, we call them axes. And you can always add up the angular velocities, and the net angular velocity is the angular velocity of the final frame with respect to the first frame. That's a very beautiful thing, because it means we can take really complex situations, break them down into pairs of frames, analyze the pairs, add them up, and get the behavior of the final frame in terms of the initial frame. Get it?

So we introduced angular velocity, and then the next thing we did in the last class was I revealed to you this magic formula. I said: if you have a vector — call it N — and you want to take its derivative with respect to frame A, the way we did it before was to rewrite N in terms of the basis vectors of frame A, in terms of a1 and a2 — a lot of math, a lot of derivatives. That's how we did it. But — and actually it was Andrew who asked, isn't there a nicer way to simplify the derivatives — here it is: if it's more convenient to take the derivative of that vector in B, you can do that, but you need a correction term, which is very beautiful:

ᴬd/dt (N) = ᴮd/dt (N) + ᴬωᵦ × N.

Done. Okay, so this was the magic formula I introduced in the last class. How would we go about proving it? Anybody? Yeah — do the math. How would we do the math? Here's what we'll do. Let's prove it. Let's consider a frame A. By
the way, you'll notice I will always show frames with this little circled letter, saying that it's a frame, with a little wiggly line pointing to it. I will always do that — if I don't, stop me and make sure I do. Okay. Let's say there's a frame B. So frame A, for example, could be the projector, and frame B could be a transparency sliding on the projector. Got it? Now, you can always assume that if I have a capital A, then little a1 and a2 are the basis vectors — the unit vectors — and likewise that little b1 and b2 are the basis vectors in B. I'm going to stop writing them because it just clutters up the diagram.

Now let's assume I have a vector — call it N — and it's given to us for some reason. Maybe it's fixed in frame B; maybe it's a spider walking away from the center of frame B, which is the frisbee; whatever. Let's assume the vector is written in terms of frame B's basis vectors:

N = u1 b1 + u2 b2,

where u1 and u2 are not constants. Now, what I'm trying to prove is that formula. How would I go about doing it? Someone — wake up, everyone's asleep, wake up! Yeah, exactly: we're going to express b1 and b2 in terms of a1 and a2. Here's what we're looking for: ᴬd/dt of N. And N is expressed in terms of b1 and b2 — that's really not nice; we need it in terms of a1 and a2 to do it the old way, the long way. So convert it to the a1, a2 representation. Look at your notes and tell me what b1 is in terms of a1 and a2 — quick. Is it plus or minus? So:

b1 = cos θ a1 + sin θ a2,   b2 = −sin θ a1 + cos θ a2.

Is that right? Everyone in agreement? Okay, good, excellent. All vectors. So

N = u1 (cos θ a1 + sin θ a2) + u2 (−sin θ a1 + cos θ a2).

Now will you please expand the derivative for me? This is the left-hand side of the formula; we'll do the right-hand side separately and show that the two are the same. I'm trying to prove it to you, so I'm doing very boring stuff, but we need to establish it. What is d/dt of u1 cos θ? It is u̇1 cos θ − u1 θ̇ sin θ. And the next term, d/dt of −u2 sin θ, is −u̇2 sin θ − u2 θ̇ cos θ. I'm bunching the terms separately just to get all the a1's lined up, and likewise for a2. So:

ᴬd/dt (N) = (u̇1 cos θ − u1 θ̇ sin θ − u̇2 sin θ − u2 θ̇ cos θ) a1 + (u̇1 sin θ + u1 θ̇ cos θ + u̇2 cos θ − u2 θ̇ sin θ) a2.

Any simplification — does anything cancel? No, it doesn't simplify. So this is the left-hand side. Now, is there some scope to rewrite this in terms of b1 and b2? What is cos θ a1 + sin θ a2? It's b1. And −sin θ a1 + cos θ a2 is b2. So, collecting:

ᴬd/dt (N) = u̇1 b1 + u̇2 b2 + u1 θ̇ b2 − u2 θ̇ b1.

That's the left-hand side. Any questions about this? You can even ask me, "Sanjay, why are you doing this?" Go ahead, you can ask me that. Thank you. I'm doing this because I want to show you that the magic formula is simply a compaction of a lot of math — that's it. I'm writing it out brute force, taking derivatives using geometry that you understand, and showing you that you get the formula and it really works, so that when you use it you don't feel uncertain. That's it. Okay, so that's the left-hand side.

Now let's compute the right-hand side. The right-hand side would be: take u1 b1 + u2 b2 and take the derivative with respect to frame B. So tell me what it comes out to be — what's the first term?

ᴮd/dt (N) = u̇1 b1 + u̇2 b2.

Let's see if these terms show up on the left — do they? Aha, yes: this matches, and this matches. So now let's take ᴬωᵦ. What is ᴬωᵦ, folks? Quick — θ̇, and it's a vector: θ̇ b3. Why θ̇ b3 and not θ̇ a3? They're the same. What if they're not the same? Well, here's the deal: by the definition of angular velocity, if you can define an angular velocity between two frames, there's got to be one axis that's aligned. Get it? So for the correction term we'll use b3 instead of a3, just because our math is going to be simplified:

ᴬωᵦ × N = θ̇ b3 × (u1 b1 + u2 b2) = θ̇ (u1 b2 − u2 b1).

And if you do the math, does that come out to be exactly the two remaining terms on the left? Yes or no? Yes. I'm not grinding through it on the board because I want you guys to stay awake in class — I don't want you to just blindly copy down notes. So it all works out. I will never ask you to prove something like this in an exam; this is almost like a solved example. I want you to understand that it all works out — that's it. And if you're confused, that is fantastic, because it's a symptom of something else, and I'd like to surface it and help you figure it out. Any confusion about this? Anything? Yes — Claudio, right? Go over how I got that last term again? Sure. So look — what's the magic formula? The left-hand side, ᴬd/dt (N), is equal to ᴮd/dt (N) — the derivative of N taken in the B frame — plus the correction term, ᴬωᵦ × N. The first term here was ᴮd/dt (N); the second term is simply that correction. Make sense? John, are you convinced? Convinced enough? Okay. Anybody else?
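[The formula just proved can also be checked numerically. This is a sketch, not from the lecture: the particular motions θ(t), u1(t), u2(t) below are made up purely for the check. It compares a finite-difference derivative of N in frame A against the right-hand side ᴮd/dt(N) + ω × N, where in the plane ω = θ̇ out of the page and ẑ × (x, y) = (−y, x).]

```python
import math

# Made-up motions for the check: theta(t) = 0.7 t, u1(t) = 1 + t^2, u2(t) = sin t
def N_in_A(t):
    """Components of N = u1 b1 + u2 b2, expressed in frame A."""
    th = 0.7 * t
    u1, u2 = 1 + t * t, math.sin(t)
    b1 = (math.cos(th), math.sin(th))    # b1 in A components
    b2 = (-math.sin(th), math.cos(th))   # b2 in A components
    return (u1 * b1[0] + u2 * b2[0], u1 * b1[1] + u2 * b2[1])

def magic_formula_rhs(t):
    """B-frame derivative of N plus the correction term omega x N."""
    th, thdot = 0.7 * t, 0.7
    u1dot, u2dot = 2 * t, math.cos(t)
    b1 = (math.cos(th), math.sin(th))
    b2 = (-math.sin(th), math.cos(th))
    # derivative taken in B: u1dot b1 + u2dot b2 (basis vectors frozen)
    dB = (u1dot * b1[0] + u2dot * b2[0], u1dot * b1[1] + u2dot * b2[1])
    # correction: omega x N, omega = thdot out of the plane, z x (x, y) = (-y, x)
    Nx, Ny = N_in_A(t)
    return (dB[0] - thdot * Ny, dB[1] + thdot * Nx)

t, h = 1.3, 1e-6
fd = tuple((a - b) / (2 * h) for a, b in zip(N_in_A(t + h), N_in_A(t - h)))
rhs = magic_formula_rhs(t)
print(fd, rhs)  # the two sides should agree
```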
you've totally got to get this Andrew yeah you had a rotating and Des okay let's do that so let's imagine I'm spinning a basketball on my finger I could never do that but let's just imagine I'm doing it right I'm standing at this and spinning a basketball on my finger the room the world the Earth is a frame the basketball is another frame the kind of the U you know you guys need to be like uh Neo in Matrix have you seen that movie right it's very very important if you understand Matrix algebra you need to see Matrix all right so go see Matrix and you know how Neil kind of you know he when he's totally you know after Lawrence fishburns you know kind of trained him Etc he goes down and he can see he stops seeing buildings and he sees these characters you know flowing instead of the building right he sees right through all the the video game right so similarly so what others see is a is a basketball spinning what you see once you're done with the scores is you will see an embedded V embedded basis frame you know with its AR you know pointy arrows sticking out of the basketball get it and you would see it spinning do you see it spinning you see it right but one of the axes which is the one one pointing up ain't sping right it remains vertical got it and you can kind of do this Freeze Frame where you frame you freeze it then a second you freeze it again you measure the angle and that's Delta Theta over Delta time so you get angular of speed and this this the vertical axis is frozen get it so you get that the angular velocity of the basketball is upwards is my hand waving can are you following me here can you see it right anyone not see it I'll it's basically all I'm saying is that here's me this is my finger right here's the basketball okay my hand here's the basketball it's spinning what you guys should see is these embedded vectors basis vectors sticking out so you'll only see kind of the surface you know sticking out right and then you see these two rotating like 
that right but that guy stationary and so in your mind there's an angle of velocity that that basketball has with respect to the ground got it now if I was spinning it this way which I couldn't because of gravity but let's say that you know it was at a different angle you can always introduce a frame on the earth such that you know everything lines up right you just want to pick a frame and be done with it okay any more questions about this very important hey let me ask you another question when I say frame is rotating with this is this is a tough one so let us say there is a frame rotating with respect to another frame is it rotating about a point or let me put it differently is there such a thing as angle of velocity about a point no who said that that's right that's right but if you think in your mind you can't understand why an angle of velocity isn't about a point look I'm rotating is there an angle velocity about a point if you think there's an angle of velocity about a point right then I can understand that but I want to talk about it how many of you kind of feel there should be a point about which there's an angle of velocity right come on say it I know I felt that way you can admit it good right I felt that way when a frame rotates on another frame for an instant with respect to that other frame there might be a point that does not move get it that is called the instantaneous Center of rotation but that but the angle of velocity is in about that point it's the angle of velocity of the frame because the instantaneous Center might move and you know with respect to another frame that's translating it might be some other point right get it so look if I do this if I'm spinning it you know the center is staying stationary so that is the instantaneous Center of rotation but this Frame there's no kind of special point because with respect to someone who's flying overhead with a constant velocity the instantaneous cental rotation is going to be some other point the 
frame has an angular of velocity it's not an angular velocity about a point get it so you need to dispel that if you if you have that confusion try and think about it there is no such thing as an angular velocity about a point it is an angular velocity of the frame period end of story just so happens that with respect to some other frame there might be a point which is instantaneously not in motion and that's the instantaneous Central rotation but that's just a special thing just a the ey picks that up but that's not fundamental okay so that's angle of velocity so here's what I'm going to do now I'm going to solve uh our old problem Spidey spider the spider on the Frisbee using um your claudo right yeah okay using um frames uh using angle of velocity so the problem was we have the ground we have the spider I'm sorry the Frisbee we put a frame here we call this point o we call this P the spider is some distance away from the center we call that point Q now and I want to make another editorial comment here we call this length L we said this distance is U and this distance is V the editorial comment I want to make was and this is very informal I'll formalize this later on how many degrees assume 2D right assume this it's like a hockey puck it's sliding around the ground how many degrees of freedom does the um does the U frisbee have two dimensions in two Dimensions I have a Frisbee and it's you know I'm I'm throwing it in a flat on a flat plane right how many degrees of freedom does it have three okay how many degrees of freedom does the insect have in the same plane in the same plane what was that can it depend on which frame you're looking at well I'm just asking you how many degrees of fedom does an insect have on a plane treat is a point okay it's a treat like a DOT it's only two right but yet we characterize this situation with three parameters okay and the parameters are the positions of the center of the frisbe right and the distance of the spider and the angle 
of the spider from the center of the Frisbee right and assuming the Frisbee the spider is only moving in a straight line with respect to the Frisbee away then we just need a distance right so what we're doing is we're saying those are the three parameters that change in our geometry and with those three parameters we're going to capture the blocation of the spider but we're using three parameters but in the end the SP spider is only going to have two degrees of freedom right but that's okay because when you get the velocity of the spider it'll have only two components when get the acceleration only have two components so it'll work itself out what are the three param the three parameters are u v and L the way we're doing our geometry we're saying the the spider look this is just in formal okay I'll formalize this ston right uh actually that's right we're using four thank you we're using four thank you I forgot about Theta thank you Ted right using four parameters right but in the end that insect can go right or left or up or down in the black on the Blackboard right so it has only two degrees of freedom the insect but we're using four parameters to capture this whole system and the reason is that we need to capture the location of the spider but we also need to figure out where the frisbee is so that we can do the math with respect to the Frisbee Etc right so we're capturing it in terms of these parameters I just want to say this leave it for the time being we'll come back and I'll you know in the context of Dynamics it'll become more clear when we talk about degrees of freedom and parameters and things like that it'll all become more clear but I just want to say this the parameters are the things that change okay so how do I use the the magic formula to find the the acceleration the velocity and acceleration of the of the insect anybody how do I begin write down the yeah always begin by writing down the position Vector so r o s is equal to U A1 Plus V A2 plus L B1 
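[The position vector just written down, together with the spider's velocity and acceleration obtained the long way last week, can be checked numerically. A sketch, not from the lecture: the particular motions u(t), v(t), θ(t), L(t) are made up for the check; the formulas being verified are v = u̇ a1 + v̇ a2 + L̇ b1 + L θ̇ b2 and a = ü a1 + v̈ a2 + (L̈ − L θ̇²) b1 + (L θ̈ + 2 L̇ θ̇) b2, with the Coriolis, centripetal, and Euler terms labeled in the code.]

```python
import math

# Made-up motions for frisbee (u, v, theta) and spider (L), just for the check
u  = lambda t: 0.5 * t
v  = lambda t: 0.2 * t * t
th = lambda t: 1.1 * t
L  = lambda t: 0.3 + 0.4 * t

def pos(t):
    """r_OQ = u a1 + v a2 + L b1, with b1 = (cos th, sin th) in A components."""
    return (u(t) + L(t) * math.cos(th(t)), v(t) + L(t) * math.sin(th(t)))

def formula_vel(t):
    udot, vdot, thdot, Ldot = 0.5, 0.4 * t, 1.1, 0.4
    b1 = (math.cos(th(t)), math.sin(th(t)))
    b2 = (-math.sin(th(t)), math.cos(th(t)))
    return (udot + Ldot * b1[0] + L(t) * thdot * b2[0],
            vdot + Ldot * b1[1] + L(t) * thdot * b2[1])

def formula_acc(t):
    uddot, vddot, thdot, thddot, Ldot, Lddot = 0.0, 0.4, 1.1, 0.0, 0.4, 0.0
    b1 = (math.cos(th(t)), math.sin(th(t)))
    b2 = (-math.sin(th(t)), math.cos(th(t)))
    rel  = Lddot                   # spider's acceleration relative to the frisbee (b1)
    cent = -L(t) * thdot ** 2      # centripetal term (b1)
    cor  = 2 * Ldot * thdot        # Coriolis term (b2) -- the "shows up twice" term
    eul  = L(t) * thddot           # Euler term (b2)
    return (uddot + (rel + cent) * b1[0] + (cor + eul) * b2[0],
            vddot + (rel + cent) * b1[1] + (cor + eul) * b2[1])

t, h = 0.9, 1e-5
fd_v = tuple((a - b) / (2 * h) for a, b in zip(pos(t + h), pos(t - h)))
fd_a = tuple((a - 2 * b + c) / h ** 2
             for a, b, c in zip(pos(t + h), pos(t), pos(t - h)))
print(fd_v, formula_vel(t))
print(fd_a, formula_acc(t))
```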
L is the distance of the spider from the center of the frisbee — from the point which, oh, I call Q, sorry. Now let me ask you: when I write a position vector, does it need a reference frame? The only place I can get away without naming a reference frame is when I write a vector like this — because the frame, when I write a vector this way, is implicit in the basis vectors. I don't need to say it's with respect to A or with respect to B; it's just a mishmash of little vectors that are all defined in terms of parameters. It's an important question; you need to understand this. This expression actually encapsulates the geometry of the entire problem: if I change u, the whole thing moves to the right; if I change v, the frisbee moves up; if I change θ, the whole thing rotates, because that changes b1; if I change L, the spider runs away from the center of the frisbee.

By the way, if you ever end up writing video games — graphics, etc. — this is it. This is how you do graphics: it's the geometry of points. Imagine you were writing a video game where you had a spider running around a frisbee, and the frisbee is flying about, and you want to put in, I don't know, a bigger spider and chase the spider toward the middle, whatever. The geometry would be exactly this. So think video games — kinematics is video games, actually. And when you change the parameters, things change parametrically; that's also called parametric design.

Okay, so how do we solve the problem now? I want both the velocity and the acceleration of the spider. Quick — take the derivatives, right. So the velocity of the spider is

ᴬv_Q = ᴬd/dt (u a1 + v a2 + L b1).

The first two terms are no problem; they give u̇ a1 + v̇ a2 — u-dot and v-dot, not u1, u2. Now I have this kind of mismatched term, where I'm taking the derivative with respect to A but the vector is expressed in terms of the b1 basis vector. Before, I would convert it all to a1, a2 and do the math. Now, with the magic formula, we can rewrite it. There are a couple of routes we can go here, and I just want to give you the routes. One thing we could do, since L is a scalar, is chain-rule it: L̇ b1 plus L times ᴬd/dt of b1. But the simple way — well, it doesn't really matter — is to treat the whole thing as one vector:

ᴬd/dt (L b1) = ᴮd/dt (L b1) + ᴬωᵦ × L b1,   with ᴬωᵦ = θ̇ b3.

So the whole thing comes down to — and I'll write the answer here to save a blackboard — what's the answer?

ᴬv_Q = u̇ a1 + v̇ a2 + L̇ b1 + L θ̇ b2,

which is the answer we got before as well. Now, what I was mumbling about in the middle of class — I do that often; I also drool, that's okay — was that you can go a couple of ways with this. You can say L is a scalar, pull it out, and apply the chain rule, or you can just treat the whole thing like one vector, and they'll both come out with the same answer. That's because L is a scalar — L doesn't care which reference frame you take its derivative in. b1 is a vector; it cares which reference frame you take its derivative in. L doesn't. You understand? So you can do this with the chain rule and say L̇
b1 plus L times ᴬd/dt of b1 — whichever way you do it, the answer will come out to be the same. Just go naturally, understand the basics, and it'll work itself out; that's my point. All the rules apply. Okay.

All right, so now we want to calculate the acceleration. How do we do this? Come on — say it again — yep, take the derivative again. So we take the derivative of the velocity. I'll write it all out, and we'll do it neatly this time:

ᴬa_Q = ᴬd/dt (u̇ a1 + v̇ a2 + L̇ b1 + θ̇ L b2).

The first two terms are easy: ü a1 + v̈ a2. What do we do now? Come on, someone back there — the magic formula, right. Applied to L̇ b1, it breaks down into two terms: ᴮd/dt (L̇ b1) plus ᴬωᵦ × L̇ b1 — and I'm going to stop writing ᴬωᵦ and just call it θ̇ b3, to save our souls. And for the next term I repeat it: ᴮd/dt (θ̇ L b2) plus θ̇ b3 × θ̇ L b2. Is that complete? Have I missed any terms, anybody? I might have — on purpose, by mistake, you never know with me. I think it's correct. And that comes out to

ü a1 + v̈ a2 + L̈ b1 + θ̇ L̇ b2 + (θ̈ L + θ̇ L̇) b2 − θ̇² L b1,

where the θ̇ L b2 term had to be chain-ruled, because there are two time-varying factors in it — and notice, by the way, that the θ̇ L̇ b2 term showed up twice. We'll remember that. Minus or plus on the last one? Minus, because b3 × b2 = −b1. So let's summarize:

ᴬa_Q = ü a1 + v̈ a2 + (L̈ − θ̇² L) b1 + (θ̈ L + 2 θ̇ L̇) b2.

And you have to admit — wasn't the math simpler than the first, long way? Because we used a trick: we compacted the math by recognizing the magic formula. And it was all initiated by Andrew, who asked the question, why don't you do it that way.

I'm going to show the videos now, just to break the monotony — we'll start with the merry-go-round and then the other one; I'll tell you when to start. So these are all the terms — there should be one, two, three… did we miss anything? Oh, we missed something for sure. Okay — help me, remind me what these terms are. What is ü a1 + v̈ a2? The translational acceleration. What is L̈ b1? Spidey's acceleration with respect to the frisbee. What is 2 θ̇ L̇ b2? Coriolis — and you'll notice that with Coriolis, these two terms always show up in different places and they add up, and it becomes two. What is θ̇² L b1? Centripetal. What is θ̈ L b2? It's called the Euler term — whatever. As I said, if one of you invents a new term, we'll name it after you.

So far I've used these words — Coriolis and all that — I've kind of called them out without telling you a lot more about them. So what we're going to do now is take a break, in the following way. I'm going to pull down the screen and show you some videos on the Coriolis — not force, sorry — the Coriolis effect. When you're done with these videos, you will (a) totally understand what the Coriolis effect is, and (b) have a headache. All right? And then we're going to go back and do the super magic formula, where we actually lay these things out. We're going to derive a very general formula for something moving on something that's moving. Get it? No, seriously, it's going
to be a general formula. You see, all the formulas you have so far are for something moving on a frame, right? But this frisbee is something moving on something that's already moving, so we're going to come up with a general formula for that. And for lack of a better term — because it doesn't have a name — I named it the super magic formula, as a code word in this class.

So it turns out that several people have done research — I mean, not hard-core technical research, but research involving merry-go-rounds, bowling balls, billiard tables, and a lot of courage — on demonstrating the Coriolis effect, and these videos, if we can get them going, are actually very, very interesting. Okay, just one second — before I start it, let me set it up. These are four kids sitting on a merry-go-round, and then someone comes and starts spinning it. There are two cameras: one camera outside, attached to the Earth, looking down. So what you see from above is what really happens in the inertial frame, which is that things want to go straight. But the kid on the merry-go-round sees something different. Can we get the volume turned up? [Video narration: "To an observer above the merry-go-round, the path of the ball appears straight."] Isn't it straight to someone sitting on it? ["The ball appears to curve to the left. This exemplifies the Coriolis force. To an observer on the rotating Earth, the path of an object appears to be deflected, and this is a result of the Earth's rotation."] Get it? Kind of got it? Kind of. All right, let's do the other one.

So this one I discovered, I think, last night. There are two dudes who went to a high school, and they put a billiard table — a pool table — on top of a merry-go-round, and then they did some experiments. These are very committed people, I must say — it was obviously also a cold day — and I just thought I'd put it up to show you the power
of human Innovation go ahead but these these dudes are totally into it so they don't stop but the real the funny thing is that the kids actually can't see all this because they're below the level of the pool table so they're like jumping off you know okay so let me explain what's going on here so it turns out and then we're going to do the super magic formula in about a minute right it turns out that with respect to someone standing in space and not rotating with the planet earth right so if you're standing above the North Pole and you're looking down at the Earth rotating it's like a mer around right a particle of air which is moving for one reason or another with respect to me it's going to move in a straight line but to people sitting on the earth because they kind of move rotating away they're going to see it kind of Swing Away get it and what happens with things like hurricanes and we'll show you I'll show you some videos later if we have time is so now you're you're on the North Pole right you're sitting in the North Pole and this is the Earth this is the North Pole okay you're looking down a particle when it's let's say there's a low pressure right let's say this's a low pressure region here low pressure so when all the particles start rushing towards it because it's low pressure right but they think they're going in a straight line but meanwhile the uh this guy right is attached to the Earth kind of it's moving with the Earth so with respect to us the particle kind of vears off to the right get it and then it collapses into a spin this way and that's why hurricanes and tornadoes and stuff like that that's why it spins in a certain direction in the north on in the Northern hemosphere and the southern emisphere spins the other way right because the particles going to rush in and then they Veer to the right and then they get sucked in in this kind of rotation so you've probably heard about uh about sinks right about about uh toilet bowls and stuff like that 
and how you know water rotates in One Direction or not in the northern hemisphere another Direction on the southern hemisphere is that right is that valid yes or no why not yeah it's turns out that in that scale in fact there is a Coriolis effect but the slight deviation angle is so slight it's like a fractions of a degree per meter at that scale that it doesn't build up enough to really impact the direction in which the vortex takes place okay the vortex is created far more by asymmetries in the sink or by uh rotation that was created earlier that can of remained right okay the angle of momentum which we'll cover later on the class it stays for a long time and it influences things to go in one way or the other so it sinks you in fact you can construct a really large sink and you know apparently you can show it but it's very hard to do it it's very hard to kind of isolate environmental factors so sinks not a valid example hurricanes a valid example okay so do you get a sense of what the Coriolis effect is and remember the Coriolis effect is this guy and and we'll now formalize it when something is moving that is translating with respect to frame which is itself rotating other words there's a Theta Dot and an l dot term get it then there is a deflection in the B2 Direction in the way we've written it now I'll give you the general formula and you know B2 what it is ETC it become more clear any questions about coris by the way in the last class I told you this anecdote I said that artilleries artillery guns right in the 1700s 1800s um they would have to compensate for directionality because of the coris effect it turns out that that is true but I made one mistake it turns out that when the Coro effect was discovered or when someone formulated it coris right the range of guns wasn't high enough for that that to have been you know really valid so it is true that in artillery you need to correct for the coris effect but that wasn't the reason the corus effect was 
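The merry-go-round videos can be reproduced with a few lines of arithmetic: a ball that moves in a perfectly straight line in the inertial frame traces a curved path when its position is re-expressed in the rotating frame. This is only an illustrative sketch; the rotation rate and ball speed below are made-up numbers, not anything from the lecture.

```python
import math

# A ball moves in a straight line in the inertial frame; an observer on a
# merry-go-round rotating counterclockwise at omega sees the path curve.
omega = 1.0   # rad/s, rotation rate of the merry-go-round (made up)
v = 2.0       # m/s, ball speed along the inertial x-axis (made up)

def rotating_frame_coords(t):
    x, y = v * t, 0.0                        # straight line in the inertial frame
    c, s = math.cos(omega * t), math.sin(omega * t)
    return (c * x + s * y, -s * x + c * y)   # same point, rotating-frame axes

for t in (0.0, 0.5, 1.0):
    xr, yr = rotating_frame_coords(t)
    print(f"t={t:.1f} s  position in rotating frame: ({xr:+.3f}, {yr:+.3f})")
```

The y-coordinate grows increasingly negative: to the rotating observer the ball veers off to the right, just like the air rushing into a Northern Hemisphere low.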
So I take it back. But it is also true that in World War I, when a British fleet near the Falklands ran into a German fleet, they shot a thousand shells applying the Coriolis correction for the Northern Hemisphere (well, they were in the Southern Hemisphere) and missed; and then with sixty shells they actually used the right correction, and they nailed the German fleet and won that battle. So it's certainly true that by the time World War I came along, the range of these guns was high enough that the Coriolis effect was relevant, but I take back my earlier statement that it was the motivation. Any questions about this? Any comments? Like the videos? Want to see more videos like this? In fact, why don't we just watch YouTube videos all day; it'll be fun, right?

All right. What I'm going to derive now is a general formula for something that is moving with respect to something else which is also moving, and this is a very common thing: a ship is moving and someone's moving on the ship; a spaceship is moving and an astronaut is moving on the spaceship. What is the force that the astronaut feels, or what force must the astronaut exert, in order to move in a straight line with respect to a spaceship or a space shuttle, given that the space shuttle itself isn't moving in a straight line? That's the question we seek to answer, and what we're going to do is basically generalize the frisbee problem. We will define things as usual, and you'll find this all repetitive, because really it's all the same stuff over and over again, just getting more and more compacted, with more insights teased out of it. And we're pretty much done with kinematics once we finish this concept; it's beautiful. By the way, this kinematics that we've done is completely 3D; you can do any 3D system with this stuff. In fact, the way that we're doing kinematics is called Kane's method of kinematics. Kane is a professor at Stanford (I think he's still there, retired), and Kane's method is used to analyze the dynamics of complex robots and space systems. Anyone who works on complex aero-astro space systems, things like space shuttles with robots moving around, etc., uses this method, this symbology of A, B, and all that. Very few undergraduate courses on the planet teach the mechanisms that you've learned and understood; this is it, that's all there is on the kinematics side. Now, Kane also has a way of doing dynamics, which I won't teach you; I'll do the Newtonian approach. But certainly on the kinematics side, you've got it.

So the problem we seek to solve now: I'm actually trying to derive a general formula that captures what we did for the frisbee. You have a frame A and a frame B, and frame B is hurtling through space, while frame A is our ground reference frame. And in frame B, let's say this is point P (that's what I called it in my notes) and this is point Q. I'm just generalizing; earlier, the spider was going along the b1 basis vector, right? So now there is a point Q, and that point Q is moving, both with respect to A and with respect to B. We want to express the velocity and the acceleration of point Q with respect to frame A, assuming that it's more convenient to naturally denote it in terms of frame B. Get it? Like, I'm walking in a spaceship: I can measure my distance to the back of the spaceship, but I really want to calculate my acceleration with respect to the Earth. That's what we're trying to do now. So this is r_PQ, we'll call this point O, and this is r_OP. What we seek is the velocity and the acceleration of point Q with respect to frame A; it's just a generalization of stuff you've done now several times.

Okay, so let's write it out. We're looking for these two guys; how do we begin? The position vector, always. Yes, correct: r_OQ. Now I'm going to compact things and go a little fast. r_OQ is equal to (expand it for me) r_OP plus r_PQ. I'm trying to calculate ᴬv_Q, which is the A-frame derivative of this, and what do I do? The answer I was looking for: what is the derivative of r_OP with respect to A? What do we call that? It is ᴬv_P, yes it is. If I take the derivative of this whole thing, it is ᴬv_P plus the derivative of r_PQ; I'll write it in smaller steps so you can follow it. This first term, you will recall, is ᴬv_P. And what do I do with the second one? Apply the magic formula, right? I can write it as ᴮd/dt of r_PQ, plus ᴬωᴮ × r_PQ, just applying the magic formula. And the reason I'm doing that is that it reduces to a simple form:

ᴬv_Q = ᴬv_P + ᴮv_Q + ᴬωᴮ × r_PQ

(isn't ᴮd/dt of r_PQ just ᴮv_Q?). So this is the velocity version of the magic formula, and I'm sure you'll admit that I did nothing particularly profound in deriving it. In that typed handout that you hopefully downloaded and read from the web, you will see this expanded out for you. So that means that if you have a frame A which is stationary, for example, and a frame B with something moving on it, then the velocity of that thing with respect to A is simply the velocity of a point on frame B with respect to frame A, plus the velocity of the thing with respect to frame B, plus omega cross r, basically ᴬωᴮ × r_PQ. Makes sense, right? Questions? It's obvious; nothing profound. Okay, so this we've done; now let's do the acceleration.
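The velocity formula ᴬv_Q = ᴬv_P + ᴮv_Q + ᴬωᴮ × r_PQ can be sanity-checked numerically for a planar example by comparing it against a finite-difference derivative of the inertial position. All the motions below (P's path, B's rotation angle, Q's coordinates in B) are invented test functions, not anything from the lecture.

```python
import math

# Planar check of  aV_Q = aV_P + bV_Q + aWb x r_PQ.
# Frame B's origin P translates along (t, t^2/2); B is rotated by theta = 0.3 t;
# Q sits at coordinates (2 + t, 1) measured in B. All motions are made up.

def inertial_pos(t):
    px, py = t, 0.5 * t * t                  # P's position in frame A
    c, s = math.cos(0.3 * t), math.sin(0.3 * t)
    q1, q2 = 2.0 + t, 1.0                    # Q's coordinates in frame B
    return (px + c * q1 - s * q2, py + s * q1 + c * q2)

def formula_vel(t):
    w = 0.3                                  # aWb, about +z (planar case)
    c, s = math.cos(0.3 * t), math.sin(0.3 * t)
    q1, q2 = 2.0 + t, 1.0
    vP = (1.0, t)                            # aV_P = d/dt of P's path
    vQ_in_B = (c, s)                         # bV_Q = (1, 0) in B, seen in A axes
    r = (c * q1 - s * q2, s * q1 + c * q2)   # r_PQ in A components
    w_cross_r = (-w * r[1], w * r[0])        # aWb x r_PQ for w along +z
    return tuple(vP[i] + vQ_in_B[i] + w_cross_r[i] for i in range(2))

t, h = 1.0, 1e-6
fd = tuple((inertial_pos(t + h)[i] - inertial_pos(t - h)[i]) / (2 * h) for i in range(2))
print("finite difference:", fd)
print("velocity formula: ", formula_vel(t))
```

The two printed vectors agree to many digits, which is the point: the three-term formula is just the chain rule applied to the position vector, packaged frame by frame.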
If I want the acceleration of point Q with respect to frame A, what do I do? Yes, just take the derivative of that one: ᴬa_Q equals ᴬd/dt of ᴬv_P, plus ᴬd/dt of ᴮv_Q, plus ᴬd/dt of (ᴬωᴮ × r_PQ). I just wrote it out again, boringly; I strive to be boring at some level, because if I'm doing anything that's not boring, that means I'm making a leap that is not grounded. Did I miss something? Yes, and thank you, AJ, just so you know: that point P is a point fixed in frame B. Very important. Space shuttle, astronaut cavorting on the space shuttle: find one point fixed on the space shuttle, for example the instrument panel, that tells you where the shuttle is. That's point P, and r_PQ is that vector.

Okay, so now let's expand this. The first term is ᴬa_P. What do I do with the second one, guys? The magic formula, so that the terms line up: ᴮd/dt of ᴮv_Q, plus ᴬωᴮ × ᴮv_Q. And what do I do with the last one? Same thing, absolutely the same thing; correct, precisely. Don't worry, it's all going to work out; this is perfectly kosher, I just took the whole thing and applied the magic formula. Anyone feel queasy about it? No; it's perfectly reasonable.

All right, so now let's expand these. What is ᴮd/dt of ᴮv_Q? Very good: ᴮa_Q. And ᴬωᴮ × ᴮv_Q? It is what it is, but does anyone recognize it? It's half of the Coriolis effect. The Coriolis effect is actually two terms that come together, and it'll become two eventually, so keep an eye open for another one of these guys. Now, what do we do with the last term? Leave it as it is? No, we want to expand it, take the derivatives. What's your name? Jeremy? Yeah, but it's a vector; you're on the right track, though. Can you do it using the product rule? Why not; let's do that: ᴮd/dt of ᴬωᴮ, cross r_PQ (I have a caveat here, which I'll come back to, but what I'm doing is fine), plus ᴬωᴮ cross ᴮd/dt of r_PQ, plus ᴬωᴮ cross (ᴬωᴮ × r_PQ). Could I just leave that last piece as it is? Can I change the order? No, you cannot change the order. Say it again? Ah, we'll come to that; hold that thought. If it's zero, it'll go away, so let's not expand it yet. Notice that ᴬωᴮ cross ᴮd/dt of r_PQ is the second half of the Coriolis term we were watching for.

AJ is kind of my guardian angel; if I make typos, he catches me. AJ is a PhD student, getting his PhD in wireless sensing. Oh, sorry, sorry, Simone, I'll be out in a minute, one more minute. Next time set your watch three minutes fast, like mine. Okay, folks, I'm going to write out the super magic formula; actually, this is it. I'll name the terms and we'll finish it next time:

ᴬa_Q = ᴬa_P + ᴮa_Q + ᴬαᴮ × r_PQ + 2 ᴬωᴮ × ᴮv_Q + ᴬωᴮ × (ᴬωᴮ × r_PQ)

Read it out to me: the acceleration of point P, the fixed point; plus the acceleration of Q with respect to B; plus the alpha term; plus the Coriolis term (the two halves have come together, hence the factor of two); plus the last term. We'll end now, because the next class needs the room. This is it, and it is correct: acceleration of P, acceleration of Q with respect to B, Euler, Coriolis, centripetal. And you know why the last one doesn't go to zero? Because I'm not taking ᴬωᴮ cross ᴬωᴮ; I'm taking ᴬωᴮ cross the whole term ᴬωᴮ × r_PQ, which is at 90 degrees to ᴬωᴮ. Got it?
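The five-term super magic formula can also be checked numerically: pick arbitrary motions, compute the acceleration term by term, and compare against a second finite difference of the inertial position. The specific motions below (a parabolic path for P, a quadratic rotation angle, Q sliding outward in B) are invented for the test, not taken from the lecture.

```python
import math

# Planar check of  aA_Q = aA_P + bA_Q + alpha x r + 2 w x bV_Q + w x (w x r).
# B's orientation is theta(t) = 0.3 t + 0.05 t^2, so w = 0.3 + 0.1 t, alpha = 0.1.

def theta(t): return 0.3 * t + 0.05 * t * t   # B's orientation in A (made up)
def q(t):     return (2.0 + t * t, 1.0)       # Q's coordinates in B (made up)
def p(t):     return (t, 0.5 * t * t)         # origin P of B, in A (made up)

def inertial_pos(t):
    c, s = math.cos(theta(t)), math.sin(theta(t))
    q1, q2 = q(t)
    return (p(t)[0] + c * q1 - s * q2, p(t)[1] + s * q1 + c * q2)

def formula_accel(t):
    c, s = math.cos(theta(t)), math.sin(theta(t))
    w, al = 0.3 + 0.1 * t, 0.1                    # aWb and its rate alpha
    q1, q2 = q(t)
    r = (c * q1 - s * q2, s * q1 + c * q2)        # r_PQ in A components
    aP = (0.0, 1.0)                               # aA_P, second derivative of p(t)
    aQ_B = (2.0 * c, 2.0 * s)                     # bA_Q = (2, 0) in B, seen in A
    vQ_B = (2.0 * t * c, 2.0 * t * s)             # bV_Q = (2t, 0) in B, seen in A
    euler = (-al * r[1], al * r[0])               # alpha x r
    coriolis = (-2 * w * vQ_B[1], 2 * w * vQ_B[0])
    centripetal = (-w * w * r[0], -w * w * r[1])  # w x (w x r) = -w^2 r (planar)
    return tuple(aP[i] + aQ_B[i] + euler[i] + coriolis[i] + centripetal[i]
                 for i in range(2))

t, h = 1.0, 1e-4
fd = tuple((inertial_pos(t + h)[i] - 2 * inertial_pos(t)[i] + inertial_pos(t - h)[i]) / h**2
           for i in range(2))
print("finite difference:  ", fd)
print("super magic formula:", formula_accel(t))
```

Note the Coriolis term is where the two halves from the derivation meet: one from differentiating ᴮv_Q in frame A, one from differentiating r_PQ inside the omega-cross term.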
MIT 2.003J Dynamics and Control I, Fall 2007
Lecture 1: Course information; begin kinematics
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation, or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu.

Hello, welcome to 2.003. My name is Sanjay Sarma; I am a professor in mechanical engineering, and I'm the instructor for this course. It's a great pleasure to have you all in this class. Unfortunately, as you can tell, I am not in Boston; I'm in India right now. I was supposed to teach a class in India, because I thought I'd be on sabbatical this semester; as it turned out, I'm teaching this class. I'm going to fly back this Thursday, and I'll see you in person on Monday. But I have taught 2.003 a couple of times before, so it's a great pleasure. Welcome. 2.003 is, in my opinion, probably the most fun class in the mechanical engineering curriculum. It is all about stuff that moves, which is what mechanical engineering is all about.

So what I'm going to do today is go through the class outline. You have with you the class outline, with a definition of the class, the schedule, instructors, contact information, etc., but I'll just go through it in slides. Then we're going to do two other things. After I introduce you to the course, we're going to have a quiz, but don't worry: this quiz does not count for anything. It is to check how well you know your prerequisites, and also how well you understand what we will be doing in this class. There's a portion of this quiz which you should not be able to do, and if you can, that's great; it is an indication of the sort of problems that you will learn to solve in this class. And the last piece: I will begin actually getting into the course, and we will start doing kinematics, which is the topic of the first lecture. I'll tell you what kinematics is in a minute, so don't
worry. Let's get going. I'm very informal; I'm going to cold-call you guys, and I'm going to make sure that you speak. This will be a very interactive class; if I were there right now, I would have cold-called someone already. I think we'll have a lot of fun. We have a terrific staff of other professors who will be doing recitations, and teaching assistants who will make this a good class. Now, this class has over 120 registered students, so it's going to be a very tightly run unit. We will have very precise schedules, and you need to try and play ball, because we don't want to fall behind; it becomes a bit of a mess. The instructors and the TAs are all raring to go.

First, recitations. Recitations in 2.003 are important and mandatory; in other words, you must show up. Attendance will be taken, and in fact you will see later that a part of your grade will depend on recitations. It's not optional. If you can't show up for some particular, very important reason, make sure to talk to your recitation instructor and tell him why.

One of the key things about this class is that we will have a section on MATLAB. MATLAB, as you know, is a mathematical programming tool that's very important in dynamics and controls; it's actually very important in engineering generally, and MATLAB, I believe, was founded by an MIT alumnus as well. That section will be taught by Professor Peter So, whose contact information is also listed here. If the instructors could wave so that people know who they are, that'd be great. Excellent. So that's the teaching staff; you'll get to know us all very well during the course of the semester.

Perhaps equally if not more important than the professors are the teaching assistants. The head TA is Mr. Ajay Deshpande. AJ, I hope you're waving. AJ is going to be very key, because he's going to run a lot of the review
sessions, and he'll be kind of quarterbacking the homeworks, etc. His room number is 35-10, the same building that I'm in, but in the basement; his email address and telephone number are below. Mr. Fas Eid is the other TA, who will be doing the bulk of the non-MATLAB portion of this class. Between AJ and Fas, they'll be in charge of all the homeworks and a lot of the organization of the class, and, between the instructors and these two gentlemen, all the reviews. Finally, for the MATLAB portion, we have Mr. Deon Kim, who is going to be TAing the MATLAB assignments, and so on. And then we have graders who will be helping us grade the homeworks, both in MATLAB and in the actual dynamics part of the course.

This is a pretty heavy course. We have two lectures, running 9:30 a.m. to 11:00 a.m. on Mondays and Wednesdays. They'll obviously be in this room, but as you can see, access to this room is limited because you can only come in through the front, and it's a little disruptive. Again, we're very informal, so if you are late, that's fine, but please try and be on time; we will get going at about 9:35 sharp. We have a lot of material to cover in this class, and we'll be going at a fast pace. By the way, one of the things about the way I teach is that I encourage people to interrupt me: if you have any questions, raise your hand. And I'll be asking the audience questions; answer them, and if I find someone particularly unwilling to answer or to volunteer, I will call on you. So we'll all get to know each other very well.

MATLAB sections: there will be four MATLAB sections, all on Fridays, in the four contiguous hours between 1 p.m. and 5 p.m.
They'll all be in room 343 of Building 3, and Professor So will conduct the MATLAB sections. After this slide, I'm going to ask AJ to pause this video, and Professor So can spend a few minutes describing MATLAB: what it is, how the section is going to be run, and what he expects from you. Some of these sections will be tutorials or recitations, and some of them will be lectures; Professor So will describe that in a minute.

Then, finally, the actual recitations for the material that I will be covering in class. There will be six, and you've all registered for them; four will be taught by Professor Makris, and two by Professor Patrikalakis. We're going to try and balance these recitations; we're looking at the compositions right now, and strictly we need about 24 to 25 students in each. We don't encourage you to change. If you have to, please put up your hand, or actually come to AJ when we pause in a minute, and we'll try and sort it out. But you must try and stay with your original recitation unless it's something very, very important. As I said, this class is going to be a bit of a circus if we have too much change, so please try and cooperate.

We'll also have office hours, which we'll announce very shortly; in fact, by the time you see this video, we might have them all cleared up, and I'm going to ask AJ to announce them as well and write them on the blackboard. So, three things now: we're going to pause this video; Professor So will describe the MATLAB portion of the class; we'll sort out the sections; and finally AJ will announce the office hours. We're going to pause now.

Okay, we're back. Going on now with the actual material: we have two textbooks. There's only one that you really need to buy; the other
one is a reference. You can get it from the library; if you want to buy it, it's a good book to have. The first is by Professor Jim Williams, who is a professor in our department and a very well-known author and engineer, and it's an excellent textbook. Now, we will be following this textbook in a loose way. We will have reading assignments, which we encourage you to do. Our notation is slightly different, but many of the problems in the problem sets will come from the textbook (not all of them, but many), and it's just a fantastic book to own, because it addresses a lot of questions in great depth. The way we're going to teach this class will be slightly different, in the sense that the notation and the way we parse the topics will differ, but there's a one-to-one correspondence between what we cover and the textbook, although the sequence is off.

The other book, the one the rest of the world uses, is Engineering Mechanics: Dynamics, by Meriam and Kraige. The way we teach dynamics here at MIT is for MIT students: it's not simplified, we don't dumb it down, and we're going to teach it in a way that is profound, interesting, and exciting, the way we feel MIT students should know it. Meriam and Kraige, a great book by the way, just a very good book, teaches it in a slightly simpler way. So it's a good book to read, but you can't depend on it for everything we will cover in this class. We will take the differential-equations approach to dynamics, whereas Meriam and Kraige take the instantaneous approach; I'll explain what that means when I'm back in town.

The prerequisites for this class are 8.01 and 18.03, basic courses in physics and math that you've taken before, and we will pick up on a lot of that stuff
and we will proceed from there. But once we start the class, we will very rapidly start doing stuff that isn't covered in those classes. There will be a few terms, like the parallel axis theorem, F equal to ma, conservation of angular momentum, conservation of linear momentum, that we will pick up on. The physics is obviously the same; it's just that we're looking at more interesting and complicated problems. But you need to know this material, and we will use it as the foundation.

Finally, we're going to be using the Stellar website. All homeworks will be posted on Stellar, and all solutions (I'll talk about homeworks in a minute). We will have exams, obviously, which you will take in class, but the solutions to the exams will be posted; schedules will be posted; class notes that AJ and I make up will be posted; example problems too. So it's going to be very key, and we encourage you to refer to that site on an hourly basis. No, I'm kidding: look at it at least a couple of times a week. We might send you email reminders as well, but do keep looking at that website; it's an important one for this class.

Okay, so that's the supporting material. Problem sets are a key part of this class, and you will need to pay a lot of attention to them. They will usually go out on Mondays, and they'll be due the following Monday. There will be exceptions, for example when we have holidays on Mondays, or when we have an exam; we have certain rules about what we can make due in weeks with exams, and so on, so we might bend this a little bit. But in general the problem set goes out on Monday and it's due back at 9:30 a.m.
the following Monday. We will not accept late homeworks unless an extraordinary event prevented you from doing the homework. We expect a high level of honesty (that slide should say honesty, not honest). Groups are okay; copying is not, and we will disallow homeworks that we think have been copied. We're going to be fairly diligent about it. This is going to be a fun class, folks, but you've got to take it seriously; we'll all have fun, but we have certain rules that we expect you to follow.

There are two midterm exams in 2.003: one on the 24th of October, and the second on the 19th of November; plus one three-hour final exam, on a date that the registrar hasn't announced yet but will announce very shortly. All exams, including the final, will be closed book. You will not be permitted to bring in a textbook, but you will be permitted crib sheets: one sheet for the first exam, two for the second, and three for the final. So those are the tests.

This is the way the grades break out. 35% of the grade will come from classwork outside of exams: 10% for homeworks, 7% for recitations, 15% for MATLAB, and 3% for class participation. So what is class participation? Here's the way we do it; you might consider what I'm going to tell you bad news, but I actually think it's good news. You guys pay a lot for your MIT education, and we want to make sure you get a good education, right? So we're going to have snap quizzes: five to ten minutes, at random, during random classes. I might, in the middle of a lecture, say "snap quiz"; the quiz would most likely be written on the blackboard, and all you do is have a go at it. We won't grade the quiz beyond giving you a zero or a one, so it's as much a metric of attendance and, for us, a
metric of whether you're getting the material or not. So we're not really trying to treat it as a quiz, but we will have these snap quizzes. You can miss one or two and be fine, but you can't miss a lot. We'll also keep track of who's talking in class. We really want you to be communicative; we want you guys to go on and become the leaders of industry, not work for some Harvard MBA who didn't go to MIT and just wants to give you orders. We want you to be articulate engineers and leaders. So that's the 3%, and we'll take it very seriously. The 7% from recitations also has to do with your participation and, perhaps more importantly, your presence; there will be attendance taken in recitations. Then two midterms at 20% each, conducted in this room during class, and one final at 25%.

One comment here: in this class we cover a lot of material, MATLAB, a lot of activities, and steadiness pays off, so you need to be steady. The final exam is only 25%, which means you can't redeem yourself just by nailing the final. You need to really go at it; it's a bit of a marathon, a kind of triathlon really, and you need to join us in it. So I'm going to pause now; AJ is going to give you about 20 minutes or so for the snap quiz, and then he's going to restart the video, and we will get into the basics of the class, which will set up my lecture when I return next Monday from India and I'm here in person. Okay, we'll pause now. AJ, go ahead, pause the video.

Okay, now we are going to talk about the class itself, and we're actually going to get into our first lecture, the first content of the class beyond organizational stuff. This class is about dynamics. What is dynamics? Dynamics is about stuff that moves. You know, if I take my keychain and I toss it in the air and I grab
it, it moved. Now, obviously, studying my keychain isn't as important as studying, say, a satellite, or the dynamics of a baseball, or a truck, seeing why and how much it vibrates when it goes on a road, so that you can design the right shock absorbers, the right stiffness for the spring, and the right damping coefficients in the shock absorber. This is the essence of dynamics. It's a very interesting topic; as I said, I think it's the most interesting topic in the mechanical engineering curriculum, although I've also taught 2.001 and 2.007 and 2.008 and made the same claim in those classes. But I actually do think this is the most interesting topic in mechanical engineering.

I have just four slides with which I'm going to set up the class today; this is kind of the first mini-lecture. What is dynamics? All of dynamics can be captured with this one equation. That's it; all of dynamics is this. It is Newton's law, or it comes from Newton's three laws, which we'll go into in more detail later: essentially, force is equal to mass times acceleration. With this fundamental, simple law, and with math, you can do everything you need to do in dynamics that has to do with rigid bodies. You will see deformable bodies in 2.001 and 2.002, which some of you are taking, but for rigid bodies this is all you need, and in fact even deformable bodies use just this, plus some constitutive relationships that tell you how much things can stretch, like elasticity and so on.

All right, so that's it, folks. So why such a big rigmarole, and why such a big course, so many professors, why is this such a big deal? Well, the analysis of this equation of motion becomes difficult when you talk about complicated objects. Actually, it's not difficult; it's interesting and exciting and intriguing.
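As a concrete aside on that one equation: F = ma for a single particle can be marched forward in time in a few lines. This is only a sketch; the mass, speeds, and step size are made-up numbers, and simple forward-Euler stepping is just the easiest possible integrator, not anything prescribed in the course.

```python
# F = m a, component by component: Fx = m ax, Fy = m ay, Fz = m az.
# Integrate a projectile under gravity with forward-Euler time steps.
m, g, dt = 1.0, 9.81, 1e-4
pos = [0.0, 0.0, 0.0]
vel = [20.0, 0.0, 15.0]          # initial velocity, m/s (made up)
F = [0.0, 0.0, -m * g]           # the only force: gravity along -z

t = 0.0
while t < 1.0:
    for i in range(3):
        pos[i] += vel[i] * dt    # x_i <- x_i + v_i dt
        vel[i] += F[i] / m * dt  # v_i <- v_i + (F_i / m) dt
    t += dt

# Closed form for comparison: x = 20 t, z = 15 t - g t^2 / 2
print(pos)
```

After one second the result is close to the closed-form answer (20, 0, 15 - 4.905), and shrinking dt shrinks the gap, which is exactly the differential-equation character of F = ma discussed next.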
But you can get lost, and so we try and put structure around it, and that structure is this course. So let's examine this. First of all, you will notice that this is actually a differential equation. Why? Because F is a force, m is a constant, and a is the second derivative of position, d²x/dt². So what we're saying is F = m d²x/dt², and that is a differential equation. The way I've stated it, it's a simple differential equation to solve, and it applies to a single particle, but it is a differential equation. Actually, it's three differential equations, because in three dimensions it's a vector differential equation: F is a vector with three components, and a has three components, ax, ay, and az. If you break it into its component pieces, you get Fx = m ax, Fy = m ay, Fz = m az. That's three equations; they just happen to be differential equations. A particle in our space has three degrees of freedom (it can move in the x, y, and z directions), so you need three equations, and bingo, you have three equations. Three equations, three unknowns; solve them and you're done. That's the essence of particle dynamics.

Now, the problem is that when you look at more complicated situations, calculating that a gets more exciting and challenging, so much so that we have a name for it: kinematics. And kinematics, as I just said, is simply geometry. That's it: it is motion and geometry. It has no forces, nothing; it is simply trying to figure out what the velocity of something is and what the acceleration of something is. So you might say, well, that's easy, right? But let me ask you a question. Suppose I told you that the space shuttle was orbiting the Earth at a certain velocity, at a certain distance from the center of the Earth, and I told you that an astronaut was spacewalking on the
surface of the space shuttle, moving at a certain velocity with respect to some reference point on the shuttle; and I said that the astronaut is actually holding a robot, and the tip of the robot is moving with respect to the astronaut at yet another velocity. And I asked you: what is the velocity of the tip of that robot? And then: what is the acceleration of the tip of that robot? How would you calculate it? No forces, no dynamics, just kinematics. The way you would do it is the geometry: try to figure out what the velocity and the acceleration are. And you know what? You have all the tools to do it right now. You could write down the equations, you could draw vectors, and you could figure it out. That's what kinematics is, and what we're trying to do in the first piece of this class is understand how to do that well.

Now, if you know the velocity and acceleration of the tip of the robot, and you know the mass of that tip, then you know the force you need to apply to accelerate it; and then you can figure out what the hydraulics need to be to generate that force, what the rocket thruster needs to generate, and so on. So that's why kinematics is at the core of the subject. After kinematics, once you've figured out the a part, you can write down the F = ma equation and you get a differential equation, and that is called kinetics: once you know the kinematics, relating the kinematics to the force and the mass. Kinematics and kinetics together are what we refer to as dynamics. Kinematics is geometry; kinetics means motion due to force and mass; and together kinetics and kinematics form dynamics.

Now, that's for one particle. When you have multiple particles, for example if you examine the desk in front of you, or the chair you're
When you have multiple particles, for example the desk in front of you or the chair you are sitting on, if that desk or chair were spinning through space, all the particles apply forces on each other, and you could of course write a million F = ma equations and solve them all; it just becomes computationally burdensome. So one of the most utterly brilliant developments in dynamics is an equivalent equation, which you will see more of (don't worry if you don't fully understand it right now, although you have seen it in previous classes): torque equals the moment of inertia multiplied by the angular acceleration, τ = I α, where these are also vectors, and it is for handling rigid bodies. This moment of inertia is a strange beast which is hard to explain in a word, but you have a sense of it, and we will explain it in great detail later. It is just one more equation with which you can handle rigid bodies of infinitely many particles, and it is pure genius: F = ma is three equations, τ = I α is three more, six equations, and a rigid body in space has six unknowns, the x, y, z motion of the center of mass and the three rotations, roll, pitch, and yaw. Six degrees of freedom, six equations. It is beautiful, and once you understand this, you have understood dynamics. The development of a moment of inertia I, to replace thousands of equations for thousands of particles with one equation for a rigid body, is one of the beautiful things that make dynamics such an interesting topic. So with that, let me go through a map of the course and how we are going to traverse the topics. As I said, there are broadly two topics in dynamics: kinematics and kinetics.
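As a quick sketch of that six-equations-for-six-unknowns bookkeeping (my own illustration, not from the lecture): with an assumed body-frame-diagonal inertia tensor, and ignoring the gyroscopic ω × Iω term of the full 3D Euler equations that the lecture defers, the six equations solve directly for the accelerations:

```python
# F = m*a gives the 3 translational equations, tau = I*alpha the 3
# rotational ones. Diagonal inertia assumed; gyroscopic coupling ignored.

def rigid_body_accels(F, tau, m, I_diag):
    a = tuple(Fi / m for Fi in F)                          # 3 translational eqs
    alpha = tuple(ti / Ii for ti, Ii in zip(tau, I_diag))  # 3 rotational eqs
    return a, alpha

# Made-up force, torque, mass, and principal moments of inertia:
a, alpha = rigid_body_accels(F=(6.0, 0.0, 0.0), tau=(0.0, 4.0, 0.0),
                             m=2.0, I_diag=(1.0, 2.0, 3.0))
print(a)      # (3.0, 0.0, 0.0)
print(alpha)  # (0.0, 2.0, 0.0)
```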
In the handout, which I will distribute later, you will see that there is another topic called constitutive relations, which matters more in 2.001, but we will bunch it into kinetics for now. First we will examine kinematics and kinetics. You can examine the kinematics and kinetics of a single particle, or of four particles, or of many particles; but when the particles number in the hundreds of thousands and form a continuum, you obviously cannot write equations for each one individually. If those particles are constrained so they cannot move relative to each other, we call the result a rigid body, and then you need to analyze the kinematics and kinetics of rigid bodies. It turns out that Lagrange, a scientist who lived several hundred years ago, developed a way to analyze the dynamics of systems, rigid bodies, single particles, complex systems, without teasing apart the kinetics and kinematics, and that is a very beautiful if less intuitive way of doing things. We will cover that. And finally, the whole point of all this is that we come up with a differential equation; you need to solve it, and in the last part of the class we will solve the differential equations. So this is how we are going to navigate the subject. We will do all of kinematics first; we will barrel through it at the beginning of the class. We are going to nail kinematics in its full glory, in three dimensions. You will see terms like Coriolis acceleration and centripetal acceleration, and when you are done with kinematics you will be masters of it, way beyond any undergraduate anywhere in the world. The reason is that kinematics is, at some level, the toughest part of rigid-body dynamics. Then we will do the kinetics of single particles: now that you can figure out the "a", we will write F = ma for single particles. We will only touch on systems of many particles, because I find them somewhat tedious, and I am the instructor, so I can do what I want; but we will do rigid bodies again in a great deal of glory and really understand them. One simplification: we will limit ourselves to two-dimensional rigid-body dynamics, because a lot of machinery goes into three dimensions which you will eventually see in grad school if you want to; you can pick it up later, but it is very distracting for a class like this, so that is one compromise we make. Then we will do Lagrange, which is kinematics and kinetics all bunched into one, and then we will solve the equations of motion. So that is what we are going to do in this class. The way the class is laid out, let me walk up to the chart here: this whole first section of the class is kinematics, then kinetics; that is the first midterm, basically. Lagrange is the second midterm, and then we finish the differential equations in the piece before the final exams. I will refer to this map over and over again; it will tell us where we are in the class. For those of you who have read Harry Potter, it is a Marauder's Map. By the way, 2.004, the follow-on class to this one, picks up here: 2.004 is all about taking the equations of motion. You will know where the equations of motion come from for rigid bodies; you can also derive dynamic equations for solid bodies that are not rigid, and you can do it for fluid mechanics, and 2.004 deals with the behavior of these systems expressed in terms of differential equations. OK, I have a couple of last slides. Now we are going to look at kinematics, and the first thing you need to know is what goes into it. In kinematics we really have only three or four key concepts. The first key concept is the position vector. A position vector is fundamental: whenever you are trying to find the velocity or acceleration of something, you need to ask yourself, what is the position vector? It is the basis for everything we do. We will take derivatives of position vectors and build from there; in fact, if you get position vectors and the derivatives of vectors, then everything else I do later is just a building block you can derive from them. So we will concentrate on this, and it will be the focus of the next class, when I am back on Monday. The way we are going to nail it is that I will describe a problem to you, and I will describe it verbally at the end of this class as well, and we will solve it. It will be very similar to that problem of the astronaut on the Space Shuttle as it hurtles around the Earth, and we are going to figure out what the accelerations and the velocities are. So: position vectors. Then frames of reference, which is just a simplifying building block; in particular, when you come to kinetics, that is, Newton's equations, there is one special frame that we will keep referring to, the inertial frame. Then we will talk about rotations, just another simplifying concept. Angular velocity is actually just a way of taking derivatives, believe it or not, and it is very useful for rigid bodies.
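The "velocity is the derivative of a position vector" idea can be checked numerically. A small sketch with made-up numbers (a particle on a circle of radius R at angular rate w), comparing a finite-difference derivative of r(t) against the analytic speed w·R:

```python
import math

# Sample a position vector r(t) on a circle and check that a central
# finite-difference derivative matches the analytic speed w*R.

R, w = 2.0, 3.0  # assumed radius and angular rate

def r(t):
    return (R * math.cos(w * t), R * math.sin(w * t))

t, h = 0.4, 1e-6
v = tuple((b - a) / (2 * h) for a, b in zip(r(t - h), r(t + h)))
speed = math.hypot(*v)
print(abs(speed - w * R) < 1e-5)  # True: |dr/dt| = w*R on a circle
```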
You also have rotation in fluid mechanics, but the way you handle it there is different. (The bottom bullet on the slide, constraints, was not supposed to be there, so you can ignore it.) So this is my last slide, and what I am going to do now is describe what a frame of reference is and what a position vector is, and then we are done for today and I will see you on Monday. All right, here is a question, a thought experiment. Suppose I show you this picture of a particle, a dimensionless particle blown up so you can see it, P, traversing some trajectory in space, and I ask: what is the velocity of this particle? How would you figure it out? Think about it for a second. If you think about it, it is actually an incomplete question: can I ask for a particle's velocity without providing some additional information? The question you should ask me is: Sanjay, what are you measuring it against? A particle does not have a velocity by itself; you need to measure it against something. For example, if I am trying to track a missile coming in from someone I do not like, I can consider a ground station, a radar, and I can take a position vector (remember, we are talking position vectors here) from some point O fixed on the ground: r_OP, which goes from O to P. I can call the ground and the ground station, which are all stationary with respect to each other, reference frame A, and I can take the derivative of r_OP with respect to time, and that is the velocity of point P with respect to frame A. Now, a quick thought experiment: instead of point O fixed to the center of that radar, suppose I pick some other point O′ fixed to the ground, such that O and O′ do not move with respect to each other. Will the velocity be the same? Well, the position vector will be different, but the derivative, the velocity, will be the same. So it does not matter where the origin is when I am looking at velocities and accelerations; but the position vector does depend on the point of origin, even within a single frame. So this missile has a certain velocity in this frame. Let us write it down, and let us plunge into some fairly involved symbology. The velocity of this particle, this missile, is the velocity of point P; if there were two missiles, the other missile would have another velocity, so I need to say it is the velocity of P. And I am going to write a superscript A, because it is the velocity with respect to frame A, and it is the derivative, taken in frame A, of the position vector of point P from point O with respect to time:

ᴬv_P = ᴬ(d r_OP / dt).

That is the velocity of point P with respect to frame A. Now, I might be tracking that same missile from a different frame of reference, as you can see on the slide; for example, a low-Earth-orbit satellite could be tracking these missiles, and the satellite could be moving around the Earth and rotating. The position vector of the missile from a point Q on this satellite is r_QP. Clearly the velocity this observer sees is going to be different, because this observer measures along different unit vectors, and first of all we cannot call it ᴬv_P. (By the way, the v here is in bold, or I might use an underline, to show it is a vector, and r is in bold as well.) If we call the satellite frame B, the velocity of point P with respect to the satellite is

ᴮv_P = ᴮ(d r_QP / dt).

What this is telling you is that, obviously, the frame of reference is important, and there is such a thing as the frame with respect to which you take the derivative of something. That is a concept that, when I am done with you in the next two or three classes, you will understand at a very deep level. We will constantly write things in terms of frames: when you look at dynamics, derivatives cannot be defined with respect to time if you do not specify the frame in which you are taking them. This might seem an ethereal concept, but trust me, we are going to do a problem next week in which we will nail it, and if you nail this, the rest of the class is easy. It is a bit of a tough pill to swallow, but it is going to be the essence of next week. Here is what I am going to do next week: consider a frisbee thrown across the room, and a spider running around on that frisbee. We are going to find the acceleration of that spider with respect to a frame attached to the frisbee and with respect to a frame attached to the ground, and we are going to show that they are different, by how much, and what the terms are. Believe it or not, that is actually the most general situation; we will do it in full vector notation, and once we nail that, you will have understood a lot about kinematics, and everything that follows in this class will be about simplifying it and doing it in quicker, more efficient ways.
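That frame-dependence can be seen numerically. A hedged sketch with assumed numbers: a point fixed on a disk spinning at rate w has constant coordinates in the disk frame B (so its velocity in B is zero), but moves at speed w·R as seen from the ground frame A:

```python
import math

# The same point, differentiated in two frames: zero velocity in the
# rotating frame B, speed w*R in the ground frame A. Values are made up.

w, R = 2.0, 1.5

def r_in_A(t):          # position components as seen from the ground frame A
    return (R * math.cos(w * t), R * math.sin(w * t))

def r_in_B(t):          # position components as seen from the disk frame B
    return (R, 0.0)     # never changes in the disk frame

t, h = 0.7, 1e-6
vA = tuple((b - a) / (2 * h) for a, b in zip(r_in_A(t - h), r_in_A(t + h)))
vB = tuple((b - a) / (2 * h) for a, b in zip(r_in_B(t - h), r_in_B(t + h)))
print(math.hypot(*vA))  # ~3.0, which is w*R, in frame A
print(vB)               # (0.0, 0.0) in frame B
```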
So listen, as I said, welcome to 2.003. Thank you for taking this class; we will have a lot of fun, and I will see you on Monday. Have a great weekend. Sorry I am not there in person; glad we could do this by video, and we will connect shortly. Thanks, bye.
MIT 2.003J Dynamics and Control I, Fall 2007
Lecture 5: Supermagic formula, degrees of freedom, nonstandard coordinates, kinematic constraints
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation, or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu.

Let me give you a sense of what is coming in the rest of the course. In the beginning, in my video, I explained that the field of dynamics consists of three things; I did not tell you about all of them then, but I will now. First, kinematics: pure motion, pure geometry, no forces, which is what we have been concentrating on for the last couple of weeks. Second, kinetics. Kinetics is about motion caused by forces; any chemical engineers here will recognize the word from chemical kinetics: it means dynamics, when stuff happens. Kinematics is more like geometry; it can describe motion, but it is not enough once forces are involved. Finally, the thing I left out: constitutive relationships. How many of you are taking 2.001? You have heard the term, I am sure. Can anyone tell me what a constitutive relationship means? That is right: things like F = kx, stiffness, and, for that matter, gravity. So a constitutive relationship is something like F = kx, or gravity. Now here is the deal. Kinematics just deals with finding x, x-dot, and x-double-dot: position, velocity, acceleration. That is what we did with the spider on the frisbee: how much does it accelerate, what is its velocity. Constitutive relationships build the force models: F = kx is not really true, but it is a good approximation, a model; gravity is correct in nonrelativistic situations, also a good model. Those constitutive relationships come from experiments and so on. And kinetics deals with a particularly important example of a constitutive relationship, also referred to as Newton's laws. When you say F = ma, force equals mass times acceleration, the acceleration comes from kinematics (mass is also related to kinematics in a way, but we will deal with that in a different context later), and the force comes from the constitutive side: contact, a stretched spring, gravity. When you relate the two, that is kinetics. So you have kinematics, kinetics, and constitutive relationships, and I will not be too explicit about this, but when you do electromagnetics you will learn one particular source of constitutive relationships, which you have already seen in physics, Maxwell's laws; in 2.001 you see another source, spring stiffness and the like; in fluid mechanics you will see another, viscosity and gravity; in thermodynamics you will see another, pressure. They are all different sources, and that is where the domain-specific material comes in; this part, kinematics and kinetics, is secular: it does not matter which field you are in, it is correct. Now, the way the course usually proceeds is: particles, then systems of particles. How many degrees of freedom does a particle have in two-dimensional space? Two. In three-dimensional space? Three. How many degrees of freedom does a rigid body have in two-dimensional space? Three. And in three-dimensional space? Six. Can you name them for a rigid body in three dimensions? Translation in x, y, and z, and the three rotations; do you remember their names? Roll, pitch, and yaw, that is one way to look at it. So, six degrees of freedom. Now, can you give me an example of a system of particles that is not rigid? A fluid, yes. A human body contains fluids and is not rigid. Rigid bodies are things in which we assume there is no motion of the particles relative to each other. Our course has mostly to do with particles; we do a little bit on systems of particles, but I do not find that particularly exciting, and besides, you have entire courses on this, like 2.001, where particles move but do not run away from each other (it is not rigid), and fluid mechanics, 2.005 and 2.006, where the particles are not tethered to each other in any way; they just rub against each other, completely untethered. So in a way the courses 2.001 through 2.006 all have to do with systems of particles, while 2.003 and 2.004 have more to do with rigid bodies and particles. In the end they all give differential equations, so it does not matter: just solve them. So: particles, systems of particles, rigid bodies; and then we introduce another formulation, the Lagrangian formulation.
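The degree-of-freedom counts quoted above can be captured in a tiny helper (my own illustration, not from the lecture):

```python
# A particle has one translational DOF per spatial dimension; a rigid
# body adds rotations (1 in 2D; roll, pitch, and yaw, i.e. 3, in 3D).

def dof(kind, dim):
    rotations = {2: 1, 3: 3}[dim] if kind == "rigid_body" else 0
    return dim + rotations

print(dof("particle", 2))    # 2
print(dof("particle", 3))    # 3
print(dof("rigid_body", 2))  # 3
print(dof("rigid_body", 3))  # 6
```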
You may have noticed, and you will see this more and more, that there are two ways to solve dynamics problems in physics classes: the Newtonian approach, and then just using energy, where it is like, boom, something happens, energy is conserved, I have one equation, I solve it, and the problem gets solved; and you might have wondered what the relationship is. They give the same answer, but energy gives only one equation. It turns out the Lagrangian method is an energy-based, indirect approach which lets you solve really complex problems in a very easy way, although it is not as intuitive. So once you get dynamics the Newtonian, direct way, then we will do Lagrange. All of this, all the way through Lagrange, has one point: to generate the differential equations of motion. F = ma is a differential equation: F = m d²x/dt². It seems like an easy one, but when you see some of the acceleration terms we came up with, F = ma can be a really complicated differential equation. Imagine the acceleration of the particle on the frisbee: multiply that by m, equate it to F, and now say F is some funky function, and solve for the path of the particle. That is a really complicated differential equation; it happens to be nonlinear, and you may never be able to solve it in closed form; equations of motion can even be partial differential equations. But it turns out a large class of differential equations can actually be solved, or simplified, in a very intuitive and completely mathematically correct way, and you can get great intuition out of it, and we will do that in the last section. Then you go on to 2.004, whose whole point is to analyze this piece in great detail, because what we are ultimately trying to do is figure out how stuff behaves under the influence of constitutive relationships and Newton's laws. So that is the course.
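To make "constitutive relation plus F = ma equals a differential equation" concrete, here is a hedged sketch (made-up parameters, not from the lecture) that time-steps m·x″ = −k·x with semi-implicit Euler and compares against the exact cosine solution:

```python
import math

# Three ingredients together: the constitutive model F = -k*x, Newton's
# law F = m*a (kinetics), and time-stepping the resulting ODE.

m, k = 1.0, 4.0           # mass and spring stiffness (assumed)
x, v = 1.0, 0.0           # initial position and velocity
dt, steps = 1e-3, 1000

for _ in range(steps):
    a = -k * x / m        # kinetics: a = F/m with constitutive F = -k*x
    v += a * dt           # semi-implicit Euler update
    x += v * dt

# Exact solution for these initial conditions: x(t) = cos(omega*t)
omega = math.sqrt(k / m)
x_exact = math.cos(omega * dt * steps)
print(abs(x - x_exact) < 1e-2)  # True: the numerical solution tracks it
```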
Now, on the kinematics side, the traditional way to traverse it is to zigzag back and forth. I do not like that; I want to take it and beat it to the ground, and in fact we already have gone all the way down. Everything you did with frames applies completely to rigid bodies; you have done it, you are done: you have understood kinematics in a very complete way. There is only one piece left to complete. We have concentrated so far on velocities and accelerations: you found the velocities and accelerations of points, and trust me, you know this better than any other undergraduate on the planet; I really mean that, because no mechanical engineering undergraduate curriculum covers it as completely as we did. And you have to admit it was fairly straightforward: complicated, but not complex. Most concepts, in my opinion, can be broken down into a lot of simple steps, and we have done that, so you should be in good shape. Today I am going to finish that: I will do the spider problem once more, and then we will get into configurations, which is just a few words, and with that, all of kinematics will be done, check. Then we will get into the kinetics of particles, with the occasional digression back into kinematics, which you will use a lot, so kinematics remains very useful there. Here is one reason, which you can forget if you are not interested: the Newtonian approach is all about F = ma, so you need to calculate accelerations. The Lagrangian approach is based on energies, potential energy and kinetic energy, and there is no acceleration involved: kinetic energy has a velocity term, not an acceleration term, 1/2 m v². So as long as you have the velocity and square it, you have an energy; you never have to calculate accelerations. That is why it is a little more convenient, as we will see; but you certainly still need kinematics, because you need the velocity. Any questions? And please do not try that burning trick at home. Actually, someone nearly did that to me when I was a new professor here: I had bought myself a little portable projector, back before we had these LCD projectors, and I was very proud of it; I went to my first presentation, set it up, surveyed the crowd with their old clunky machines somewhat contemptuously, and set fire to my projector. OK. By the way, from a reading point of view, we are still on the same kinematics chapter in the textbook and in my notes. When we start particles I will recommend a new chapter, Chapter 4, Section 2, from Professor Williams's book, which you should read, but we are not there yet; I will tell you when we get there. I do strongly recommend you read that book, because it is a little more traditional: it bridges you with how the rest of the world talks about these things, in their less sophisticated way, and you need to be able to speak to them like that when you deal with the rest of the world. One thing I will keep doing is take the same problem and do it again and again in different ways, so you will see continuity and hopefully appreciate what you are learning. So let us do the spider on the frisbee one last time. The ground frame is A; the frisbee frame is B; the spider is somewhere at a distance L from a point P on the frisbee, at an angle θ, with unit vectors b1 and b2 aligned as shown; call the spider's point Q, the reference point on the frisbee P, and the ground point O. You should be used to this problem; you have seen it a bunch of times. The super ultra magic formula, in its velocity version, says the velocity of point Q is

ᴬv_Q = ᴬv_P + ᴮv_Q + ᴬω_B × r_PQ.

We derived both the velocity version and the acceleration version. In the Newtonian approach we will be using the acceleration version more, but keep in mind that when you get to Lagrange and do energies you will need the velocity version, so do not forget it; it is the lesser cousin. Here ᴬv_P is the velocity, with respect to frame A, of the point P on the frisbee. ᴮv_Q is the velocity of the spider as seen by an observer sitting on the frisbee, to whom the frisbee is stationary: it is like you are in a spaceship, and as far as you are concerned the spaceship is stationary while your colleague heads off in some direction, so you know that velocity vector, and you know how far the colleague is from you as a vector, which is r_PQ, the position vector from P to Q; very important. And ᴬω_B is the angular velocity of the frisbee, or the spaceship. Now let us use it to solve the spider-on-the-frisbee problem one last time. The velocity of the spider: ᴬv_P, plus ᴮv_Q = L̇ b1, plus ᴬω_B × r_PQ = θ̇ b3 × L b1 = L θ̇ b2. This substitution is much simpler than earlier, because we used all the tricks in our bag. Now, what is the acceleration of Q? Let us write it out and then expand it:

ᴬa_Q = ü a1 + v̈ a2 + L̈ b1 + 2 θ̇ b3 × L̇ b1 + θ̈ b3 × L b1 + θ̇ b3 × (θ̇ b3 × L b1).

The Coriolis term 2 θ̇ b3 × L̇ b1 becomes 2 θ̇ L̇ b2; the term θ̈ b3 × L b1 becomes L θ̈ b2; and think about the last term intuitively: it is centripetal, pointing toward the center, so it should be along minus b1, with magnitude ω²r in our earlier terminology, which is L θ̇². Well done. So, just for aesthetic completeness, let me write it down:

ᴬa_Q = ü a1 + v̈ a2 + (L̈ − L θ̇²) b1 + (L θ̈ + 2 θ̇ L̇) b2.

It is much easier this way, and you nailed it: you now totally understand how to calculate the velocity and acceleration of a point, expressed in some basis. This is awesome; remember, this is 200 years of evolution of physics. Just a couple of notes. RJ asked me to point out that on an exam or your homework, if we ask you to do things in closed form, you do not need to plug in numbers, and it is fine to leave answers in terms of b1 and b2; that is the whole point. When you are simulating, actually trying to solve the math problem, you will probably want to convert everything to a1 and a2, but we do not want to burden you with all the unnecessary algebra, so expressions mixing a1, a2 with b1, b2 are fine. I want to emphasize again: everything we have said so far works totally in 3D as well, completely. That formula works in 3D as long as you define the angular velocities correctly. We did this three ways. The first was the completely general way, which works completely in 3D regardless of angular velocities, because we did not even invoke an angular velocity. The magic formula works because you recognize the angular velocity and take advantage of it to take derivatives. And the super ultra magic formula is a compression based on that, for a specific situation; as long as you apply it to the right situation, everything is completely 3D. Any questions about this before we wave it goodbye?
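As a sanity check on the worked example above, here is a hedged numerical sketch (P held fixed so a_P = 0, with made-up motions L(t) and θ(t)) that compares the formula's b1 and b2 components against a brute-force second derivative of the spider's position:

```python
import math

# Verify  a_Q = (L'' - L*th'^2) b1 + (L*th'' + 2*L'*th') b2  (P fixed)
# against a numerical second derivative of the absolute position.

def L(t):      return 1.0 + 0.2 * t       # assumed spider motion along b1
def theta(t):  return 0.5 * t * t         # assumed frisbee rotation angle

def r(t):  # absolute position of the spider in ground-frame components
    return (L(t) * math.cos(theta(t)), L(t) * math.sin(theta(t)))

t = 0.8
th, thd, thdd = theta(t), t, 1.0          # theta, theta', theta'' at time t
Lt, Ld, Ldd = L(t), 0.2, 0.0              # L, L', L'' at time t
b1 = (math.cos(th), math.sin(th))
b2 = (-math.sin(th), math.cos(th))

radial     = Ldd - Lt * thd * thd         # b1 component (incl. centripetal)
transverse = Lt * thdd + 2 * Ld * thd     # b2 component (incl. Coriolis)
a_formula = tuple(radial * e1 + transverse * e2 for e1, e2 in zip(b1, b2))

h = 1e-4
a_numeric = tuple((p - 2 * q + s) / (h * h)
                  for p, q, s in zip(r(t + h), r(t), r(t - h)))
ok = all(abs(f - n) < 1e-4 for f, n in zip(a_formula, a_numeric))
print(ok)  # True: the formula matches brute-force differentiation
```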
can I write this as a Omega B magnitude squared L yeah that's the magnitude of it correct so absolutely right but you know don't think of it in those terms just write it in vector form right because you usually care about the net acceleration and you're going to have to take the magnitude of the whole thing anyway so think vector okay think pointy things all right any other questions I'm sure someone has some question lurking yeah with you yeah I the reason is very simple differentiation is trying to figure out how much it changes right now if I say what is the length of an acorn or something let's say I take something in space right I say what's the length and it hasn't changed the links length doesn't change the same right but with the vector if the angle changes then the difference is the sky nice little angle which is at 90 degrees which is why we need take the Omega cross stuff right scalars don't care this is just a scalar you know it doesn't care what orientation does right it's just a number get it you know it doesn't care what frame looking at it unless you look at relativity or something you know which we don't do right the length is the length is the length doesn't matter where you look at it from it the length of changing it changes the same amount in terms of length right regardless which frame but the hang you know when you look at vectors that angle thing comes into the picture okay any other questions and the math works it's the right ad by DT of B 1 we're saying hey B 1 is not constant today express it in terms of a 1 and a 2 and bingo everything works itself out because they only need to a constant in a 1 in a why any other questions as a deep question whether were any other questions about life mechanical engineering pursuit of happiness no ok I mean I wouldn't know the answer I'll make something up ok so now what we're going to do is get into something a little more interesting and different a little more hand-wavy but I'm going to I'm going to 
now firm it up. I've been hand-waving my way through this because I wanted to give you the concept, but now we'll firm it up. So what we're going to talk about right now is configurations, okay? Let us say, forget dynamics: let's say I give you three unknowns and I give you two equations. Can you solve for the unknowns, yes or no? No, you can't. If I give you three equations and two unknowns, what's that called? It's an overdetermined system, all right. So either the third equation is basically the first two equations rewritten, right, or something's got to give. By the way, in 2.001 that giving is called bending, right? Think about it: if I have a beam and I say, hey, this point's got to be here, and this point's got to be here, that's two equations, right? If I had a third equation, which is this point's got to be down there, my beam's got to bend, right? So it's an over-constrained, or overdetermined, system. If it's underdetermined you can't really solve it, okay? By the way, fluid mechanics deals with under-constrained systems, okay. Now, what does solving mean? I want to get into that a little bit, and that's all we're going to talk about now, you know, equations, unknowns, stuff like that. Okay, let's look at our frisbee-spider situation here. How many degrees of freedom does the frisbee have in two-dimensional space? From here on when I say degrees of freedom I'll mean in two-dimensional space. How many degrees of freedom does the frisbee have in two-dimensional space? Three, okay. How many degrees of freedom does the spider have? It's a point; treat it as a point. How many degrees of freedom do the frisbee and the spider have in total? In total: in order for me to specify exactly what position and orientation the frisbee is in and where the spider is, how many parameters do I need? Let's say in... that's a good question; in some ways it kind of doesn't matter, but let's assume... I should say, deep question,
I'll come back to that. Let's say in the reference frame of the earth. Good question, okay. In the reference frame of the earth, how many degrees of freedom does this whole system have? If I'm trying to animate this, how many numbers do I need to tell you where the spider and the frisbee are, together? Think about it for a second; it's not that obvious, is it? Is it 3 plus 2, 5? These days I think they updated the math on that... I'm kidding. Yeah, there's something wrong there, right? I want you to struggle with this for a second. Claudio: the spider, with respect to the frisbee, is only walking up and down that line, right? So if you know where the frisbee is, which is three degrees of freedom, you only need one more for the spider. Yeah, you put your finger on it, and we'll come back to it; you're exactly right. But let me put it to you differently and confuse everyone a little bit, okay? Let me give you a fake counter-argument, which is: the spider has two degrees of freedom, right, it can move in the x direction and it can move in the y direction; the frisbee has three degrees of freedom; that's five. So how many degrees of freedom does this problem have? So I've done a little bit of prestidigitation here, you know, hiding something behind my back. What do you think? Yep: three degrees of freedom for the frisbee and one for the spider. That's exactly right. So okay, let me explain. By rights this system actually has five degrees of freedom, but we made an assumption, okay? By rights the frisbee can be wherever the heck it chooses, and the spider can be wherever the heck it chooses, but we made one assumption, which was that the spider can only walk in a straight line away from the center of the frisbee, okay? Remember, I didn't say the spider can go off to the left or to the right; I just said it's walking away from the center. I said it repeatedly and I made sure I said it right, but you didn't pick up on it,
obviously, because I knew I was going to ask you this question. But I said the spider can only be on a straight line and walk in a straight line from the center of the frisbee. So in fact, unbeknownst to you, I introduced a kinematic constraint, and that kinematic constraint, I'll write it in a minute, reduced it to four degrees of freedom, okay? And the kinematic constraint I took was this: I could have created a theta 2, which is not the theta 1 between b1, b2 and a1, a2; I could have created a theta 2 which is the angular position of the spider. The spider could have not walked in a straight line, but we don't want it wandering off in some bizarre way: instead of locating the spider with just an L, I could have had, say, an m and an n to define the coordinates of the spider with respect to the frisbee, get it? But I said no, it cannot walk in the b2 direction, only in the b1 direction, right? So I introduced the constraint that its wandering in the b2 direction is zero, right? And in doing so I implicitly introduced a kinematic constraint. I never spelled it out, because we wrote the configuration of the system implicitly assuming it; the kinematic constraint got sucked in implicitly, get it? In fact, in going from three dimensions to two dimensions there were kinematic constraints there too: I said the z motion is zero, right, and I just very glibly called it two dimensions, but those are also kinematic constraints, and with them I took off three degrees of freedom for the frisbee and one degree of freedom for the point. So in fact those are kinematic constraints too. It's very important that you understand this. What I'm trying to say to you is that part and parcel of a lot of what we do is this concept of a kinematic constraint, okay? It's very central to us, because sometimes we'll state it and sometimes we won't; sometimes it's so implicit that we'll just not account for it, and we will pick
the parameters in such a way that it automatically accounts for the kinematic constraint get it and sometimes we'll write things in with more parameters than you need and then introduce the kinematic constraint right so help me here how could I have redefined that previous frisbee problem in more general terms and by the way the magic formula does not assume that the spider is only moving in a straight line from the center of the frisbee remember in the magic formula right this rpq is pretty general bvq is pretty general but when we wrote bvq we assumed it is only lb-1 right it wasn't an l1 b1 personnel to be - you see where the kinematic constraint slipped in guys it's very important that you get this I want to see everyone nodding or disagree very important okay if we had done that it should have been 5 degrees of freedom Thanks right in this there actually five parameters but in the instantiation of the frisbee problem there were only four parameters and those four parameters were you theta and L get it very important okay so I will now what I'm going to do now okay now you have four unknowns in this right we want to solve for the trajectory and find out these unknowns right that's what dynamics is about how many equations do you need four and let me now give you a kind of a heads up into where dynamics is going to lead we have four unknowns u dot u now write B L and theta we have four unknowns and we want four equations right so when you do dynamics you will get four equations relating this to you know various forces and masses you will get four equations this is just a heads up okay the only difference is that unlike in algebraic equations where you have only you in our equations you'll have u u dot u double dot be V dot and V double dot L l dot and L double dot theta theta dot theta double dot you know because you're going to take an acceleration and say F is going to MA for example right there'll be theta double dot terms so you'll get four differential 
equations to solve the problem or you know some of the equations might simplify but in general get four differential equations get it so you'll have as many unknowns as the equations and it has our goal to generate as many unknowns as there are equations and solve it and when we solve it we solve the differential equation you'll get X as a function of time and that's the trajectory of the system so the objective of dynamics at some level is to formulate enough equations as many equations as you need which is usually which is always equal to the number of unknowns right solve them and get theta as a function of time the equations themselves will be differential equations instead of U U in terms of U U dot u double dot right in the algebraic world you would just have this stuff and you solve for u v double and theta but in dynamics you solve for u as a function of time get it such that the differential equation is satisfied do you get a sense of where we're going with this yes no yes maybe a little bit okay we haven't done dynamics yet by the time we're done with all this you'll see it okay let's uh you know a couple of minutes and we're going a little ahead in the class which it's good to discuss let's take a step back let's look ahead into this class and recall your physics right we have a particle that's moving and a frisbee that's moving what equations can you write for a frisbee in two dimensions F is equal to Ma how many what equations can you write for a rigid body equating forces and force like things to motion and accelerations let's start with the particle anybody let me get you going here okay this is we just discussing this is just nothing formal okay so let your head down just get out here don't relax okay you can write this with a particle right assuming for let's say for example the particle has some you know blue right or the feet have some addition then it's walking the frisbee and you know what the force is you can write this right how many 
equations is that in two dimensions two to direct to two equations in two dimensions for the frisbee you can also write F is equal to Ma you'll see this in later you take the center of mass and equate episode ma on the equations will you get to alright and finally what's the last equation you'll get right torque is equal to I alpha right in general there are five degrees of freedom in this problem right in the special case though before and you'll see one of the equations will go away and you'll get four degrees of freedom for equations you write for differential equations solve them and you can solve for four trajectories okay and when you're done you know if for example let's say the frisbee I threw it right and it's being dragged by air so it's slowing down and it's forget gravity because it's two dimensions right and let us say I know how much power the spider's little legs are applying so I know how much force it's applying right once you solve all these differential equations if I give you the forces and I tell you what the drag is you should be able to come up with the trajectory of the spider and it'll look like something like this all right if this is the a one direction and this is the a two direction you know if I throw the frisbee and the spider is kind of running away from the center it will probably look to something like this right it's kind of this outward bound spiral and that's kind of what we're trying to solve for and the way you capture this is a solving for the position of the spider you're trying to solve for R o Q as a function of time right you solve for that but you know exactly what the spider is okay if you solve for R o P as a function of time what am i solving for the center of the frisbee which is kind of going to look like this with time different instants in time right think of it as animation right and if I solve for R P Q as a function of time what am i solving for sit again yeah now I'm actually sitting on the frisbee right 
sitting on the frisbee and so that's me and I'm looking out to me the frisbees and rotating right I just see the spider kind of walking away right but I know the spiders getting all these forces right spider doesn't know stuff is rotating like just like we don't feel the earth rotating right spiders feeling this kind of you know being heaved by you know Coriolis forces centripetal forces all that stuff that's kind of you know making its way through right and you can see what that is get it this is kind of what we should employ here folks okay this is why we did velocities and accelerations kind of the first step okay so with that and then when I raise this and kind of define things more formally and we call those numbers parameters but I want to now say things maybe a little more formally a mechanic yep Davey Levi I was kind of leaving a little bit so but let me give you a hand with the answer before I formalize it okay yeah the differential equations will end up being something like position of the spider right which is that there will be effect it will be a vector differential equation so it'll be some some really complicated in fact I'm going to go into that right now so I'm really complicated relationship between this the mass of the spider and the force on the spider okay and in two dimension that's two equations okay so hold that thought Levi and we'll we'll talk about it more okay any other questions this was meant to be hand wavy because I know we can't see it all now I could kind of hold my breath and tell you on the end of the class or I could kind of take a stab at kinda giving a sense of it going right and then hopefully things will become firm later right I don't like this business of keeping you in the dark and you know just kind of hoping things fall into place on the end okay so a mechanical system consisting of points and rigid bodies okay has in general infinite possible okay the frisbee on the point there are infinite possible locations right 
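The outward-bound spiral sketched on the board can be illustrated numerically. To be clear, this is my own minimal kinematic sketch, not the solution of the differential equations the lecture is building toward: I simply assume a constant-velocity frisbee center, a constant spin rate theta-dot, and a constant walking speed for the spider along b1 (all numbers hypothetical), and evaluate r_OQ = r_OP + L(t) b1.

```python
import math

# Hypothetical numbers (not from the lecture): center velocity in the
# a1, a2 directions, spin rate theta_dot, and the spider's walking speed.
vx, vy = 2.0, 0.5        # velocity of the frisbee's center P
omega = 3.0              # theta_dot, rad/s
l0, ldot = 0.05, 0.02    # initial distance from center, walking speed

def spider_position(t):
    """r_OQ = r_OP + L(t) * b1, with b1 = (cos theta, sin theta) in a1, a2."""
    u, v = vx * t, vy * t            # center of the frisbee, r_OP
    L = l0 + ldot * t                # spider walks straight out along b1
    theta = omega * t
    return (u + L * math.cos(theta), v + L * math.sin(theta))

# Relative to the center, r_PQ traces the outward-bound spiral:
# the radius |r_PQ| = L(t) grows while the angle winds around.
for t in (0.0, 1.0, 2.0):
    x, y = spider_position(t)
    u, v = vx * t, vy * t
    print(round(math.hypot(x - u, y - v), 3))   # prints 0.05, 0.07, 0.09
```

Solving r_OQ(t) like this animates the spider in the ground frame; subtracting r_OP(t) recovers the spiral seen from the center, as described above.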
each arrangement is called a configuration, okay? So if I told you the frisbee is here in this orientation and the spider is three inches from the center, bam, I've frozen it; that's a configuration, okay? Now, a little bit of history before I write the next sentence. It used to be that the way you located something was using Cartesian coordinates, x, y, z, right? You know, two meters north, three meters west, two meters up: x, y, z. So that word "coordinate" might be ingrained in your brain as being Cartesian, x, y, z, right? But it turns out that's not always the best way to define the location, the configuration, of something. For example, for the spider, did we use x, y, z only? No. We used x and y, kind of u and v, but we also used the angle of the frisbee and the distance from the center, so it's more like a polar coordinate, right? We used some combination of coordinates. So the next sentence is: a set of parameters that defines a unique configuration of a system is referred to as, and we say non-standard to emphasize that it's not Cartesian, the non-standard coordinates of that system, okay? And what are the non-standard coordinates of the system we looked at here? They are u, v, L, and theta. They're not Cartesian coordinates; they're the non-standard coordinates of that system, get it? So what we do with mechanical systems, and we did this with the frisbee, but from here on we'll do it more formally, is: when you look at a system, you ask, what are the parameters I need to define exactly where the thing is, right? And I don't mean just, you know, the thing, but everything that moves in it. And that set of coordinates, and it doesn't have to be Cartesian, it's a bunch of numbers that you would think works, that set of numbers is called the non-standard coordinates of that system, get it? And we just say non-standard because Cartesian has really taken over the word coordinate, okay? And how many
non-standard coordinates will you find in a system? It is equal to the degrees of freedom of that system, okay? Can you have more non-standard coordinates than degrees of freedom? That's right, you can have more non-standard coordinates than there are degrees of freedom; that's okay, but then you'd better specify the kinematic constraint that makes one of them dependent on the others, get it? So the number of non-standard coordinates minus the number of kinematic constraints must equal the degrees of freedom of that system, okay? So let me write that as the next fact; the previous one was actually a definition, this one is a fact. It's very intuitive, you should understand this, right? Okay, and how many equations of motion will you get? As many as there are degrees of freedom. That's kind of how we're going to build this, right? Or you can just say you'll get as many equations of motion as there are non-standard coordinates, but then you need as many differential equations as there are degrees of freedom, plus the kinematic constraints as another set of equations. But this is the fundamental thing you're going to be dealing with, okay? So I'm sure there are questions about this, but I'm going to go very slowly; I have another 20 minutes and I want to just totally nail this. I thought I'd get into point dynamics, but I knew I'd only kind of get into it today, so I don't mind starting that next Wednesday. Monday is a holiday, by the way, or rather there's no class, it's not a holiday. So we're getting into point dynamics in the next class, but I'm going to spend a few minutes and just totally nail this, and then we're going to do some more kinematic constraints. Yep? Shouldn't it be degrees of freedom minus the kinematic... four... Let me see. That's true too, right? Oh no, no, hold on: it's four non-standard coordinates minus one kinematic constraint. Hold on a second. Good point, let me just
see. No, let me explain, let me explain. Okay, good point, Claudio, thank you. This is how we probe and understand this. Anyone else confused about that? Claudio, would you stand up and say loudly what you just said? By the way, it's a perfectly reasonable statement. Thank you. So you're asking why, for example, four non-standard coordinates minus the one kinematic constraint would give three degrees of freedom, when we agreed we had five degrees of freedom without the constraint, or four with the constraint. So let's look back; I'm going to write the DOF here. Very good. I have to tell you, 20 years after studying this for the first time, I still have to prepare for this class, okay? This is fairly profound stuff, so you need to think about it; once you get it, it's really cool, it all falls back into place. Essentially I wake up in the morning, I drink a lot of coffee, I think about it apace, and it all falls back into place. So it's a perfectly reasonable question. Let me explain; it actually sets up what I was going to do anyway very well. I could have set up the frisbee problem in two ways, okay? I could have said, here's the frisbee, this is in two dimensions, right, and, hey, I'm trying to draw in three dimensions here: I need its location in terms of u and v, right? And I could have said, listen, our friend Spidey here could be in any location, it's wandering, you know, generally, right? So I need u and v, and some m and n with respect to the frisbee for the spider, and I need the angle of the frisbee. That is five non-standard coordinates to describe the configuration of the system, okay? Fine, yes, five, right: three for the frisbee, two for the spider. And then I could say, but
guys you know what I'm going to do spider the spider I'm not going to let him wander around what I'm going to do is I'm going to build a rail here like this okay and we're on the rail by the way this is called a linear guide in mechanical drawing prevent professor Slocum teaches you design he will tell you what a linear guide is so just like a track so Spidey can just walk in any direction right the spider has to remain on this rail and can kind of you know paddle forward a paddle back if I do that I've introduced one constraint which is n is always equal to zero which is you know I call em and end the kind of coordinates on the frisbee and I'm saying that it can't move this way get it so five minus one is equal to four okay alternately accurate picked which is what I did in the class throughout I could have just picked for non-standard coordinates right which was I just assumed that the spider is completely specified by an L a distance from the center how many kinematic constraints do I need to explicitly list them get it so it's good I'm glad you asked me I don't think but for a second but that's the key okay and what we do is we will naturally take some kinematic constraints and some will write out explicitly you'll see that okay and by the way some kinematic constraints it is impossible to kind of do it naturally you have to express them explicitly and we won't look at systems like that we look at systems where the kinematic constraint can be stated explicitly but if you really tried you can make them implicit you understand so good so you can see what I'm saying here right any questions anybody okay and by the way all we did in this class so far what we've done is take these non-standard coordinates I want to introduce one more concept and I'll do an example we take these non-standard coordinates and calculate the accelerations or the points and rigid bodies angular backed angular accelerations etc in terms of the non standard coordinates right look look at 
this the acceleration of point Q is U double dot right V double dot theta double dot theta dot L l dot L double dot right so can what we've done so far is Express we have expressed accelerations this is by the way the short form from accelerations and velocities in terms of non-standard coordinates Garet that's what we've done and why we're doing it because when you write F is equal to MA we have a write in terms of these non-standard coordinates we'll multiply it by m and equate it to an F which will come from some constitutive relationship and we have the differential equation got it so this is the tough part now I want to make one final comment to you and this will only become important when we do Lagrangian so the way we do Newtonian which is what we're doing right now is just be clear that they're non standard coordinates the kinematic constraints and the degrees of freedom when we do Lagrangian as Claudio pointed out there was many sets of non-standard coordinates I could have picked right and I could chose them to introduce a kinematic constraint when we do Lagrangian you're also forced to pick the minimum number of non standard coordinates get it you're forced to if you do that it just map becomes really simple in Newtonian kinda doesn't matter okay so the thing that Claudia is pointing out this minimum actually if you could do that it's the best in Newtonian doesn't matter we're in the Newtonian phase right now okay and there's a special word for that reduce it when we get there okay by the way there's one other concept that happens with these non-standard coordinates which is some non-standard coordinates are more general than others does anybody understand that okay ma'am let me take it the opposite way some non-standard coordinates are more natural than others you want to share that let me explain you don't doesn't fight let me explain so I stated the first frisbee problem as a throwing it in space right forget the spider I just look at the frisbee so 
the best way to describe its location was U and V right but what if I told you that it's the same thing mathematically but really what I had was a robotic arm which extends very rapidly it's a telescoping robotic arm and the end of it is a spinning disk right and the robotic arm can turn and it can change in length would you still have pick U and V as the non-standard coordinates what would you have picked l1 and theta one right good because it's more natural it reflects the mechanism a little more and this is the difference between mechanical engineers and mathematicians right for you you're trying to figure out how much should I change l1 you know how much should I change the angle so another concept here is one of okay and this is the hand wavy part red judgment comes in but you know what I mean right it's pretty obvious you would use u 1v1 in this situation you would use l1 and theta1 right everyone copacetic with this yes so that's what I want to say about coordinates so we've covered three concepts here we've covered the concepts of non-standard coordinates we've covered the concept of kinematic constraints and degrees of freedom and we've just covered the concept of naturalness which is the fuzziest of all but that's where your judgment comes in okay for example if you wanted to accelerate if you want trying to figure out you know I am trying to design a hydraulic system to power this telescoping arm what pressure does it need so I can buy the right compressor and I can size things properly right this is what you would solve in well with any question you would ask is well what's the mass at the end of this guy right well so then until you hear the mass is you know 2 kilograms are you rotating this yes is it symmetric or is there a spider a really heavy spider up you know metallic object running out now there is a spider running out on that case when I try and extend the arm because of centripetal all that stuff there's going to be kind of a you know a 
vibration, right? Oh, that's right, I need to think of the vibration. That means the hydraulic system must exert enough force to overcome the maximum load, plus the system must be able to handle the vibration, get it? This is the sort of thinking, right, and this is what we do. By the way, do you know how a lot of cell phones and other things buzz, you know, vibrate? There's a little eccentric thing in there, right, an eccentric mass that's spun to make it vibrate. So you can see how a rigid body can create vibration just because you're spinning it, and that's what we balance our tires for, by the way, right? Okay, so that's that. Now let me pose the problem, and I'm going to have you guys do this as a snap quiz, because this is a really cool problem. I had a simple problem in mind, but while giving the lecture I thought of a more complex one, and I thought, why not have them solve it as a snap quiz? So here's the problem, folks. Spider, our buddy the spider, right: frisbee thrown, frisbee spinning away, Spidey is running. Here's the complication. This particular frisbee has a little pulley attached to it, like a little twine winder: I tie a string, and there's a lot of twine wound around this, so I'm actually tossing a ball of twine, get it, attached to a frisbee, and as this thing goes out, right, it unwinds the twine, and there's plenty of twine, the twine keeps running out, okay? And I tell you that this radius is R1, and I ask you to come up with, for this frisbee problem, the four parameters, the four non-standard coordinates, right? Actually you can ignore the spider if you want and just look at the frisbee itself. What is the kinematic constraint we need to include? First of all, do you see that there is a kinematic constraint, right? Which parameters does this kinematic constraint affect, two parameters, three parameters? Think about it, and try to frame it for me. I don't have the answer worked out, because I was going to give you a simpler problem
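Before the twine question is resolved, the counting rule stated earlier (non-standard coordinates minus kinematic constraints equals degrees of freedom) can be sketched as a trivial bookkeeping check. This is just my own arithmetic on the two frisbee-spider formulations discussed above, not anything from the board:

```python
# Degrees of freedom = non-standard coordinates - independent kinematic
# constraints, checked on the two formulations of the frisbee-spider system.
def degrees_of_freedom(n_coords, n_constraints):
    return n_coords - n_constraints

# Formulation 1: u, v, theta for the frisbee plus m, n for the spider,
# with the rail constraint n = 0 listed explicitly.
assert degrees_of_freedom(5, 1) == 4

# Formulation 2: pick the minimal set u, v, theta, L up front; the
# constraint is absorbed implicitly, so none needs to be listed.
assert degrees_of_freedom(4, 0) == 4
print("both formulations agree: 4 DOF")
```

Either way the count lands on four, which is why the lecture says some constraints get "sucked in" implicitly by the choice of coordinates.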
I thought it complicated because I felt like you were getting it right select test your mettle a little better the question is very simple this is an extension to the frisbee problem right initially earlier the problem was you're throwing a frisbee now I'm telling you you didn't see anything throwing the frisbee but there is this ball of twine of radius R that is going to unwind as the frisbee goes away and I proclaim my claim I assert that there is a kinematic constraint that is created by that the R one is constant here right on the kinematic constraint for me that's it you get approximated if you'd like you a kinematic constraint is an equation relating some of the non-standard parameters with others right you are trapped so I look I described you in English right this problem and I'm saying hey you know the fact that there's a string it's going to constrain this frisbee right and I'm saying give me an equation that captures the fact that this frisbee somehow constraint what we have done so far is list and that's so that's all I can say those I'm solving the problem for you we have listed some non-standard parameters we listed four right we've instituted introduced a kinematic constraint so there's the four some of the non-standard parameters are going to relate it to the others there won't be independent anymore and I want you to struggle with this because if you're not able to convert what I said in English to the equation that's okay that's part of what we're I'm asking you to do this because if you struggle toward once you'll get it okay we're running out of time so let me do it for you okay here's the deal and a couple of you got it I think and you can kind of self judge with you got it but do turn these things in very simple guys what's going to happen to the frisbee as it traverses space under this new circumstance which is this twine unwinding someone yep right so is that what you wrote as well okay so that's one where it will let me okay that's that's 
okay. So, okay, okay, hold on, what's your name again? Connie. So that's right. So when this frisbee flies, it unwinds, right? What is the constraint that happens when it unwinds? Right, that's right. So what that means is that it can't spin randomly; as it spins it's unrolling, right? You see what I mean, Connie? So you could say something like: this distance, and actually take the geometry with a small correction, because it's really not this distance but that distance, but assume the spool is small, that distance, which is square root of u squared plus v squared, right, in the u-v terminology, is equal to, sorry, that distance is equal to R theta. Not some other R: this R, times theta. Get it? Very simple. Sam? Yes, I did miss a correction, but if the radius is small... Why? Because, remember, let's say that L is constant: it would just go around a circle, so this angle is irrelevant, this theta is irrelevant, right, the one we called theta 2 in my alternate formulation; that angle is irrelevant. Yes, yes, it's actually minus theta 2, right? Okay, yeah, what, right? No, no, you're exactly right; by the way, you caught me, and actually, Ted, you kind of missed that, and Claudio, I don't know if you got that. Thank you, Sam, brilliant, actually, thank you. I took theta, so I need to write this properly. Tell you what, I'll solve it for you and you'll get it. I made up the problem on the fly, but we'll solve it and I can get it to you. We call this theta, right? If in fact the frisbee is rotating that way, in other words if this angle changes, and we call it theta 1, then that's actually what it will come out to be; it's exactly correct, you're exactly correct. Okay, so if you don't get this, don't worry. By the way, the other way to say this is that if we had picked coordinates L1 and theta 1, then this would simply have
become L 1 right whether this theta 1 in the UV it should really be the inverse tan of V over you okay or is this R is the radius of this little coin thing yeah that's correct that okay in this case I just wrote it like that if it didn't there'd be some constant amount right yeah it'll just be a constant number and yeah I would add that okay guys I did this fast okay I should probably have done the simpler example I kind of innovated here but actually this example don't lie to be much more pretty than I expected the hidden depth stood sure is this actually non holonomic we'll think about it may not be but it might it has some hidden depths to it but you but before you go do you kind of get get what I'm saying here it's a kinematic constraint it's an additional constraint that means if you know L you don't need theta if you know Taylor you don't need L right they are not independent of each other but we've kind of captured it as a constraint now as it turns out writing this constraints a little more complicated than I kind of thought I was doing the lecture so we'll write it or three completely I'll give you a handout on Wednesday okay the homework is due next Wednesday in the morning see you next week
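Written out, the unwinding constraint described above looks like the following. This is my reconstruction of the blackboard relation; treat the sign conventions as tentative, since they were being corrected live during the lecture:

```latex
% Unwinding (kinematic) constraint for the string-and-spool frisbee example:
% the payed-out length equals the wound radius times the unwound angle.
\sqrt{u^2 + v^2} \;=\; R\,\theta
% Equivalently, in the (L_1, \theta_1) coordinates mentioned at the end,
% the left-hand side is simply L_1, with \theta_1 = \tan^{-1}(v/u).
```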
MIT 2.003J Dynamics and Control I, Fall 2007
Lecture 7: Impulse; the skier separation problem
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation, or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. All right, folks, lots to do. The exam is on the 24th; today is the first, so 23 days to go. Lots to cover. I actually think this is the sort of exam where, if you really understand the fundamentals, everything follows. It's not a test of your memory — yes, you need some memory, but it's a test of your logic more than your memory. So we need to keep plowing ahead with the logic and get to a place where we can rework everything. Last class — let me turn more of the lights on here — what did we do last class? It seems ages ago; anyone remember? Inelastic collisions, yes — actually elastic collisions first. We started with momentum, and I told you that bizarre momentum joke, the pimento joke — it's true, I didn't make it up; I make up some stuff, but that's true — and I forgot to get you the PDF file; I'll find it for you. So: momentum, then Newton's laws — in particular the strong form of Newton's third law — then the work-energy principle (I have a comment about that; some of you had questions), then collisions, both elastic and inelastic. The difference is in the coefficient of restitution: if it is one, energy is conserved; if it is zero, energy isn't conserved. Is momentum always conserved? Yes. Okay, so that's what we did last class. Today, I'm going to spend a few minutes tying up some loose ends from the last class, so I'm going to start with a problem. First loose end — I wonder why there's a role in football called a tight end; any football players here? Then we're going to do a
problem, an example problem. Then I'm going to introduce you to equations of motion formally — I've hinted at them through emails, etc., but I want to really nail it today; you kind of know what it is, but I want to do it formally once. Then we'll do another problem, and then angular momentum, a very cool concept. All right, two loose ends to tie up. The first is something you know, but let's do it carefully anyway: the concept of impulse. It's very simple. First of all, what are the units of impulse? Newton-seconds — say it again? Newton-seconds is correct. Is there another way to view it? Kilogram meters per second — it's the same thing. The reason it's useful to say it both ways is that impulse is the product of force and time, and it is also a change of momentum. It's the same thing. The way to think about it: imagine two point masses heading towards each other. They collide — if it's inelastic there might be some deformation involved — and then they bounce off each other, with smaller velocities if it's inelastic. Call them A and B. You can plot the force from B onto A — the force felt by A, which is equal and opposite to the force from A onto B — as a function of time. How will it look? Initially, when they're not in contact, the force is zero; then they touch and the force goes up as both of them deform; then they start bouncing away and the force goes down; eventually they separate. That's force versus time. Now, something very profound: impulse is equal to the integral of F dt over the period of the collision — which in this
case we can just take from minus infinity to plus infinity. But you need to be careful: if there are other collisions, be very clear about what you mean — the impulse is for that one event. Effectively, it is this area, and all the impulse says is that this force is changing the momentum of A: the area under that curve is the change in momentum of A over that period of time. So impulse is also a change in momentum; it's simply the integral of F dt, and the units are Newton-seconds, or kilogram meters per second — same thing. Nothing profound. Now, if the collision is instantaneous — if the two spheres are infinitely rigid — what happens to this shape? The width goes down, but because the total impulse needs to remain the same, the height has to go up. It goes to infinity, and that is our idealized view of an impulse. So you can think of these elastic collisions as: bam — the two particles rush up to each other, instantly shake hands, exchange momenta, and then head their own ways. The net amount of momentum exchanged is transmitted as one infinite spike of force over a vanishingly short period of time. That's the extreme version of looking at impulse. Impulse is a shorthand — a laziness we indulge in to say: yeah, they collided, something happened, they exchanged momenta, here's the momentum change. And since "momentum change" is a long phrase and sounds clunky, we make up a word and call it impulse. That's it. All right, so you've seen that before, and that's what impulse is. So let's say there's a — a basketball, actually, that hurts — a soccer ball heading towards me. Have you ever tried kicking a basketball? It really hurts; don't do it. Anyway, a soccer ball heading towards you, and let's
say there's no drag, no losses, and — unfortunately — it's an infinitely rigid soccer ball, so that might be painful too. It's heading towards me at 1 m/s, and I kick it. I rock back, because my momentum is conserved as well, but let's ignore me for the moment; the ball heads back at 1 m/s. What is the total impulse? Initially it's 1 kg·m/s heading this way; afterwards it's minus 1 kg·m/s, because it's heading the other way. 1 minus (−1) is 2, so the impulse I imparted was 2 kg·m/s. That's the impulse. Done — very simple concept, nothing profound; I just wanted to put that to rest. The second thing I want to say — I'll say it now and then say it again more formally later — let me ask you a question: does energy conservation give you a different equation than F = ma? Think of a one-degree-of-freedom system: a particle on a straight line. I apply a force on it, it's going to accelerate, F = ma. Say the particle is only going to move on a straight line, like a train on a track, starting with zero velocity, and I apply a certain force — for example, I turn on a rocket that applies a constant force for a second — and I look at the final velocity; and say it's going uphill a little bit. I can formulate one equation that equates kinetic energy to the increase in potential energy plus the work done by the force, or I can just take the force, write F = ma, and figure out what the final velocity is. Will I get the same equation or not? What do you think? Yes — absolutely, fundamentally, unavoidably, yes. It is the same thing, and energy conservation is just an easy way to do force balance when it's easier to do energy, if you want one equation and it's one degree of freedom. That's it, you're done. Look what I derived for you in
class last week. (And sometimes — there's a dude outside kicking a tree; seriously, I'm not joking. Whatever.) Here's what I said to you. Very simple, guys: F = ma. That's an equation of motion — don't worry, we'll do this more formally in a minute. You know that's Newton's equation of motion; that's the only equation that exists, and everything else follows from it. Energy is simply integrating it, so the equation doesn't change — it's the same thing. And that's what I did for you in class. The way I did it was: integrate F over dr, write ma as m v-dot, and note that dr is the same as v dt — remember that? Doing it very informally, the right-hand side becomes m v squared over 2, the kinetic energy; and if there's no external force except a potential-energy kind of term, F is the gradient of the potential, so the left-hand side becomes the potential energy difference, delta PE. It's the same thing: Newton's law, integrated, gives you energy. It's the same equation — if you got new information out of it, you'd be in trouble, because particles can't be in two places at the same time, at least in Newtonian physics. So it's very fundamental, and I want to make sure you're all with me on this: energy is fundamentally the same as Newton's law, just a different form of the same equation. Now, here's the problem: with one degree of freedom you need just one equation, and everything is hunky-dory. But if you have two degrees of freedom, you need two equations, and energy only gives you one. If you have five degrees of freedom, energy still gives you only one, because total energy is conserved — so you still need four more equations. That's why you have to go down the Newtonian path of writing F = ma four times, five times. Right? Yeah —
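Both loose ends from this stretch — impulse as the integral of F dt equaling the momentum change, and work as the integral of F dx equaling the kinetic-energy change — can be sanity-checked numerically. A minimal sketch, not from the lecture: the half-sine force pulse and all numbers here are illustrative choices of mine.

```python
# Numerically check: integral of F dt = change in momentum (impulse),
# and integral of F dx = change in kinetic energy (work-energy theorem).
# A free particle is hit by a half-sine force pulse; pulse shape and
# numbers are made up for illustration.
import math

m = 2.0      # mass, kg
v = 0.0      # initial velocity, m/s (starts at rest)
T = 0.1      # pulse duration, s
F0 = 50.0    # peak force, N
dt = 1e-6    # time step, s

impulse = 0.0
work = 0.0
t = 0.0
while t < T:
    F = F0 * math.sin(math.pi * t / T)  # half-sine pulse, zero outside [0, T]
    impulse += F * dt                   # accumulate integral of F dt
    work += F * v * dt                  # accumulate integral of F dx = F v dt
    v += (F / m) * dt                   # F = ma, explicit Euler step
    t += dt

print(impulse, m * v)             # impulse matches delta-p = m v (from rest)
print(work, 0.5 * m * v * v)      # work matches delta-KE = (1/2) m v^2
```

The analytic impulse for this pulse is 2 F0 T / pi, and the two printed pairs agree to within the integration error, illustrating that energy gives no information beyond F = ma for one degree of freedom.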
for one of them you can cheat and use energy, but the other four you have to get by writing F = ma. Get it? So when you're doing F = ma, you get them direction by direction: the force in the x direction equals mass times acceleration in the x direction, and the force in the y direction equals mass times acceleration in the y direction. When you do energy, you're looking at the magnitude of the velocity — it's v dot v — so the components all get bunged together, combined, and you get just one overall equation. Get it? All right, so let's solve a problem. Those are the loose ends: I've told you about impulse, and I've started talking about energy. Now let's actually solve a problem, and I'll give you the equations of motion more formally. (Yes — it's v-dot there; actually v-squared is the way I would have written it, just to write it correctly. Look at my derivation in the notes; I just scratched something out here to give you a sense of the integral, so please look at the written derivation.) So here's a problem we're going to solve, folks — the first problem where we're going to not just find the acceleration but actually find a dynamical equation. (It's okay to say "dynamic equation" or "dynamical equation" — they're both adjectives, by the way.) The problem is this: you have a quadrant of a circle. Think of a ski ramp, except it's circular, and here it's completely vertical. Now you have someone on it — we'll treat that thing as a point mass; when I say "point," I'm referring to a point mass here, but it could be a skier — in
other words, ignoring the fact that this person is actually a three-dimensional entity, we treat this person as a point. And it's stationary — actually, it doesn't matter; let's assume it's stationary for the time being. The question we ask is: we're going to push them just a little bit, and we want to know where, if at all, this person will depart from the ski ramp — in other words, separate. For sure they're going to depart here at the end — if the circle continued, they'd fall off — but will they depart earlier? That is the first problem we're going to solve in dynamics. So think about it for a second, then I'm going to ask you a question. First: will they separate, and why? Just think about it in English — forget equations. Good, that's right. And what will happen when they separate? That's the deeper, next-level question. Claudio: the force between the two objects — the normal force — will become zero. That's right. The normal force between skier and ramp goes to zero; in other words, the skis will stop flexing, and in fact the skier would find that he or she needs some sort of attachment to the snow to stay on it. It's like driving over a vertical curve in the road, a hump. So how would we go about solving this? Find the velocity as a function of angle — good, that's exactly right — and then, in English, what would we do? Claudio, you've spoken; someone else in that quadrant. Find the theta where the normal force — yes, kind of right. That's correct: find the angle at which the centripetal
acceleration is not sustained by gravity. Kind of right: the centripetal acceleration is the very minimum acceleration that the component of gravity in that direction must provide to keep them attached, and if gravity doesn't provide it, at some point they're going to fly off. Okay, so how do we start? Let me check my notes — yes, I know how to start it. It starts with an F — a word that starts with an F. A fixed frame and a moving frame? We'll need those too, in fact, but actually it's a three-letter acronym that starts with an F. From here on we enter the land of free body diagrams. From here on, pretty much everything you do — when in doubt, draw a free body diagram. In fact, in life: if a cop pulls you over, as happened to me yesterday, and you don't know what to say, just draw a free body diagram and try to argue your way out of the ticket. (I didn't — it didn't come to that; he let me go. Anyway.) So what's a free body diagram? Very simple: we're going to identify all the forces on this thing. Why is it called a free body diagram — anyone remember? That's right: you remove all the supports, draw the thing as if it were suspended by strings, and figure out what the forces in the strings need to be to keep it suspended there. That's the cleanest way to think of a free body diagram: remove all supports, replace them by strings, figure out what the string forces must be. So let's do that. Let's draw it for some position here — and Andrew, I'll come back to what you said; it was correct, but it's the next step, and I want to break it down a little. Call this angle theta, call the radius of the ramp rho, and let's draw the free body diagram.
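Before the free-body machinery, the condition just described in English — the normal force vanishing when gravity's radial component can no longer supply the centripetal acceleration — can be written symbolically. This is my summary of the verbal argument, with theta measured from the horizontal and rho the ramp radius:

```latex
% Separation condition for the skier on the circular ramp:
% the normal force vanishes when the radial component of gravity
% exactly supplies the centripetal acceleration.
N = 0
\quad\Longleftrightarrow\quad
m g \sin\theta \;=\; \frac{m v^{2}}{\rho}
```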
So, the free body diagram: what forces does this guy feel? Let's label things — this is O, and I'm going to call this point Q. What are the forces on this particle — what forces would you make up? You know them, come on. Gravity — yes. (Let me assume — this has to be the worst-designed room on the planet. Well, there are others, but it's pretty bad.) So, gravity. And now let me introduce our first frame; I'll call it A, because I need some directions, so I have a1 and a2. Gravity is then minus m g a2 — it's okay, we'll come back to that. What are the other forces on this? The normal force — and the normal force points which way? That way, radially outward: a normal force N. So the total force equals what? Let's just use vector notation for the time being; I'll write it, and I think you'll agree. The total force on particle Q — I'm using lowercase f for force, because it's a particle; later on, when I do bunches of particles and rigid bodies, I'll use capital F. And Q is the name of the particle. (Is it Q? Oh, okay — just checking, just testing.) It's equal to minus m g a2, plus — and now, Andrew, I'm going to define a second frame, because it will also come in handy for another reason. The way I'm going to define it is: I'm going to pivot it right here at O. It's pivoted — it's not moving with the particle, it's just pointing towards the particle. You understand? Very important. I just don't have space to draw it there, but that frame is pivoted right here at O. Frames can move, but the point is this one isn't accelerating; it's just rotating to follow the particle, like a gun pointing at the
particle, but from point O. You understand? In fact, what I could say is: there is this frame B — think of it as a transparency, like frame A is a transparency — and B rotates in A, and the point of B that does not translate with respect to A is right here, where they coincide. Please be very clear on this. Do you want me to redraw it with B right there? Will you remember this? Because if B were translating, I'd have to do the accelerations differently. I'm drawing it that way, but be careful, please. All right — so the net force is correct. What do I do now? Levi: you can break up the mg force into components. Okay — at this point, though, I'm going to be more general. What you're doing is looking ahead, because I told you about the problem — which is good; that's a skill you'll start leveraging, especially in exams, to minimize your work, which is great. But what I'm going to do is figure out the equation of motion of this particle in its true and most general sense. So I won't break it out and all that; I'll write it in its full and most general sense. So what do I do now — what's the equation? What is Newton's law? F = ma. I've figured out the F; what do I need to figure out next? The acceleration. So this is what we want: the acceleration of Q in frame A — and it's a little f, sorry, and the point is Q. We've figured out the force; m is a scalar, we know what it is; so now we're going to figure out the acceleration of Q in A. And that's the fundamental Newtonian method: figure out all the accelerations, figure out all the forces, equate F = ma. So now I want you to tell me what the acceleration of Q in A is. Anybody — how do we do that? (Sorry, I can't hear you.) The super-ultra-cool magic formula — that's right. Why? Okay, so how do we use it? Let's
write it out, shall we? The way to think about it is: remember, this B is just sitting here and rotating — B is not translating with respect to A, just rotating there. So think of A as the Earth, and B as the space shuttle or the frisbee or whatever — stationary in the sense that it's right there, not actually moving, just rotating — and there's a particle, like its nose, whose acceleration we want to figure out. So what is the super-ultra-cool magic formula? I'm going to run out of room, so I'll finish it up there: the acceleration of Q in A is — can you please read it out to me? I'm very forgetful, and I need you to go back and look at your notes and tell me, because this is part of the exercise; we're going to use all that stuff. First of all, how many terms are there in the formula? Five, right. First term — let's reconstruct it, or look at your notes. (Can you imagine no one has named this formula? It's amazing. I had to make that name up; I think it's a cool name, but you'd have thought someone would have named it after some buddy of theirs.) Plus — what? Ah, very good point, because I was thinking ahead when I wrote this problem. Let's examine this a little bit. We're applying the frisbee setup here: in the frisbee problem, in Sanjay's garden, you have the frisbee hurtling through space; we call that frame B. Oh — did I label that point Q? I'm sorry, I screwed that up; in the formula that should be P. But let me still finish what I was saying — I think that's what you were asking; it's a long answer, but I'll give it to you anyway. We call one point P and one point Q, and that's what the super-ultra-cool formula relates. But what I'm going to do is say the frisbee actually stays pinned right here, so it's just spinning, and then I'm going to use the formula. That's what I was trying to do, but I'm sorry, I mis-wrote that — that
should have been P — this should have been P, I apologize; in my notes I had Q, and I changed them around. I apologize. Okay, what else? Anyone? Plus — this is the Coriolis term. What else? This is the Euler term. Plus — all right, now I'm doing, in a painstaking way, something you guys intuited right at the beginning, which is that there's got to be a centripetal term. The reason I'm doing it painstakingly is that I think all of you also forgot this term in the general statement of the problem. Now, it won't show up in solving this particular problem, but the term is there, so let's just do it carefully. What is the acceleration of P — in our case Q — in B? And what is this guy, the Coriolis term? Get it? B is rotating with respect to A, but Q is not moving with respect to B — should that be a B there? Yes, it should; good catch. Since Q is not moving with respect to B, there is no Coriolis term, so that only leaves these two terms. So let's compute them. What is the acceleration of — sorry, of Q — now? Here's what I'm going to do: this is going to be your snap quiz, a very simple snap quiz. Take out a sheet of paper and write it out for me; you've only got two or three minutes. Yep — hold that thought. Okay, guys, excellent question: right now we are simply analyzing, trying to come up with the equation of motion in its most general sense for the ball as it goes down the ramp, before it leaves the ramp. We're just starting to write the equation of motion, and then we'll do the smart thing, which is figure out when it leaves the ramp. But right now we're writing the equation of motion. Good. Okay — the snap quiz comes in two halves, so use a large sheet of paper if you can, because I'm going to ask you to do something on the back side. How are you doing — is it easy? Let me give you a hint. Number one: I know the particle is
falling — I know it's going to slide down — but since I've labeled theta as increasing when the particle goes up, formulate the equation of motion as if it's going up. Don't get confused by that: the negatives will take care of themselves. When theta increases, the particle is going up. I know it's counterintuitive, because the particle is really going to come down, but don't worry about it — write it that way, and it will come out negative. In general, for any non-standard coordinate or parameter, assume it's going to increase; if it's supposed to go the other way, Newton's laws will drive it negative. Just don't worry about signs that much; don't double-think. Done? Yes? Okay, so let's write it out; let's solve it. What is alpha-of-B-in-A cross r-PQ? And what is the omega-of-B-in-A term, and what does it come out to be? Is this correct? Yes — oh, that's b1 — that's right, it's pointing inwards. Yep. Okay, now, in English: what is this term, and what is this term? So now we have both F and the acceleration of Q, so what is the equation of motion? It is — get this — it is simply — I forgot one term, which is the m. Make sense? That, folks, is the equation of motion. There's no magic here: I got F from the free body diagram, I got the acceleration from all the kinematics you've learned, and I wrote F = ma in vector form. Done, that's it. So let's ponder this a little bit, in the following sense: we just assumed a certain direction for theta, and this is a differential equation. Okay — this is actually where I wanted to go with what I was going to say anyway: this is a differential equation. When you solve a differential equation, you get some function. For example, if you solve m x-dot equals a constant, then x-dot equals the constant over m, and integrating once gives you x. Solve the differential equation here, and you'll get theta as a function of time — and by the way, it will go the other way: theta will become
more and more negative over time — it's falling. So trust and have faith that that's how things will work out. You understand? It's very simple: there's a differential equation, and differential equations obey the same principle — the number of unknowns must equal the number of equations. Then you solve the differential equation, which we mostly won't do yet (well, we'll solve this one, because it's easy). Once you solve it, you get theta as a function of time, and the trajectory just happens to come out with theta decreasing. So if you solve this differential equation, you'll end up with something that looks like this: theta plotted against time. The initial condition says theta was at pi over 2 — the top of the ramp — and what's going to happen to it? It will decrease, and at some point the particle separates. Once it separates, our equations of motion as written are no longer valid, because we assumed the particle was in constant contact. But that's what you're going to solve for. Now: in this first piece of the course, we concentrate entirely on writing as many equations of motion as there are unknowns. That's the entire focus before the first midterm; we don't solve them. We'll solve them once in a while just to give you a taste — in fact, I might solve this one here — but the objective is simply to write the equations of motion. That's what I want you to understand. The second third of the course is Lagrange, which is just an easier way to write the differential equations, and we won't solve them there either. It's in the last piece that we actually solve the differential equations formally; informally, we'll try to solve this one as well. Okay. Now: how many equations are embedded in this? Two, because this is a vector equation in the plane. How many unknowns are there, and which are they? Excellent: there are two unknowns — one is this, and the other is
this. Okay, so this is very interesting. In two-dimensional space, how many degrees of freedom does a particle have? Two — but usually they're both dimensional degrees of freedom, so the unknowns would be something like an x and a y. Here's what happens when you constrain something: all that happens is one of the dimensional degrees of freedom goes away, but an unknown force makes an appearance. You get it? Every time you constrain something, you take out a degree of freedom, but an unknown force crops up. Now I'm going to write out for you something that's fairly profound. I want you to think about it, because there are many ways to go about it, and you'll ask: does the math work out? In the end, you need as many unknowns as there are equations, and we ended up there — but now let me tell you how we ended up there. If you have a system with m points and n rigid bodies in 2D space, how many degrees of freedom does it have in general? Come on, you know it: m points and n rigid bodies. In general, you'll have 2m + 3n degrees of freedom — 2m + 3n unknowns. Now, when you draw the free body diagrams and write the equations, for every point you'll get two equations, and — you'll see later — for every rigid body you'll get two force equations plus the torque equation, which is three. So the number of equations is also going to be 2m + 3n, and things balance out. You understand? So in the end, when you study systems of points and rigid bodies, you will always end up with as many equations as you have unknowns.
Understand? But now I want you to pay attention — really grit your teeth. It also depends on whether you call out the kinematic constraints implicitly or explicitly. All right, let me explain that; it's profound, and if you don't get it now, you'll understand it at some gut level as you do the problems. I'm trying to articulate it for you. Look at this particle: really, that point has two degrees of freedom — it can move left or right, up or down. But by constraining it to be on the slide, we're saying that as long as it's on the slide, its distance from the center must be rho. Now, naturally, we could have said: hey, write the equations of motion with x and y as the degrees of freedom, and we'll impose the rho constraint later. Or we could pick a natural, non-standard coordinate — which is what we did: we just picked theta, enforcing the constraint implicitly. So if you pick coordinates that are natural and enforce the kinematic constraint implicitly, this is what will happen. If, however, you pick awkward coordinates, you will get k extra unknowns in the form of unknown constraint forces, but you'll also get k extra constraint equations. You understand? Either way, again, the number of unknowns equals the number of equations. Get it? If you don't, I'll say it all over again — I'm very happy to do that. Do you want me to say it again? Okay, forget that, just look at me. How many degrees of freedom do I have in space? I'm a rigid body, but treat me as a point mass in space. Right now, as I stand here, can I move up and down? Let's say I can't jump. So how many degrees of freedom do I have? Two. Now, I can ignore that up-down degree of freedom and just say my z height equals zero, write two equations of motion with two unknowns, and I'm done. Or I could say: no, no, no — Sanjay has three degrees of freedom, x, y, and z. Oh, and by the way, z is equal to zero. I
called out that z, made it an unknown, and then I said it's equal to zero. That's three unknowns and three equations, right? That's a very kind of silly example, but in fact that's an example of how, by explicitly calling out a variable and setting it to zero, I bumped up the number of degrees of freedom by one, right, and then introduced one more constraint. Or the natural thing is to say, forget that z, he isn't going to move up and down because he's on the floor, right? Two unknowns, two equations. Get it? So in this case it's a little more complicated. Really, the point, you know, if it could travel up and down, has two degrees of freedom, right? So really I need two unknowns, and there's an unknown force, so there'll be three unknowns, right? And I can write three equations: two from F equals ma, and one is the constraint that the distance from the center must be rho. Or I could just naturally say, forget that, and collapse it, and just go for the minimum number of degrees of freedom, right? So essentially what I'm saying is, look, and if you don't get this, it's OK, it'll become clearer and clearer over time, the point I'm trying to make is that depending on how you parameterize the configuration of something, whether you pick minimum parameters or you say, forget it, I'll go with some other parameters, you know, some non-standard coordinates, and I'll put in constraints, you might end up with different numbers of equations, but you will always end up with as many equations as there are unknowns. Got it? OK, you're going to have to ponder this and think about it. But in the end, the objective of this course is to write equations of motion. Count the number of unknowns; you'd better have that many equations. In the first third and then the second third of the class, we just want to make sure the balance is met. Then we'll figure out how to solve the simultaneous differential equations, right? And that's the third piece of this course, you understand? OK.
So I will never say, hey, uh, at least in the first third, I'll just say, make sure they're equal. You know, you'll scratch around initially, you'll be like, did I... was it implicit? Don't worry about it, just do it. And if you see that the number of unknowns exceeds the number of equations, there's probably some kinematic constraint or something you forgot, and you'll find it and you'll write it down. OK? Now, you guys holding up OK? This takes some pondering. You will ponder it, you will get it, I'm confident, but sometimes it takes a little longer to get it. OK, now I want you to do the second piece of the snap quiz. I want you to solve this problem, the same problem, using energy. Oh, by the way, we didn't solve the problem. Let's just... we didn't solve it, I beg your pardon. Let's solve it. So what we're going to do now: we wrote it in vector form; now let's write this equation of motion in scalar form. In order to do that, we can rewrite a2, if we want, in terms of b1 and b2, right? And then everything becomes b1 and b2; we can split it up into two scalar equations. So look at your notes and tell me, what is a2 in terms of b1 and b2? Can everyone verify and make sure you agree with this? Seems correct? OK. All right, so let's rewrite this. Um, I'm going to try and do it really quickly. There'll be a b1 term, so minus mg sin theta, um, plus N. I'm trying to do all the b1 terms together and then all the b2 terms together. Tell me if I'm doing this right, OK? I'm doing it on the fly with you guys. This will become... is this correct? Anyone disagree with me on this? OK. Yep. OK, when you write the answer, right, um, even if you write it in vector form... let's reach a consensus here from a grading point of view: if you write the equation of motion in vector form, that's good enough. If we say, find the equation of motion, and you write it in vector form, that's good enough. All right.
Now, we don't even have to pull them apart if you don't want to. But if we ask an end goal, which is a step further, for example, figure out when the thing is going to separate, then you're going to go all the way, you understand? But the end goal here was to figure out when they separate. OK? Yes, that's right. So now let's examine these two guys. What is this saying? This is saying that the component of the force this way, tangential, is going to be the Euler acceleration term, basically, right? Makes sense. And this is saying that gravity plus the normal force is equal to the centripetal acceleration. So gravity plus the component of the normal force is the centripetal acceleration. Beautiful, right? It's exactly what we expected. It's exactly what, you know, someone was saying earlier, right? You were saying that earlier, right? Which is, this is exactly what we expected; we just did it brute force, correctly, and it all worked out. OK, now let's solve them. I know that solving isn't really a part of this first third, but I asked you to solve it in this particular case because it's a simple one. So let's solve it. So what do I do next? I'm trying to figure out when the skier takes off. What do I do next? That's right, set this equal to zero. OK, and we get... right. And, um, for this we can go... OK. Anywhere to simplify this? How would you simplify this? Yeah, you would have to integrate it, right? And the way you would have to do it: this is a differential equation, OK? It's a differential equation, and by the way, it's separable. So I'm going to do it now, because now I'm going to show the energy method. I'm actually going to ask you to do it in a snap quiz, right? And I won't do this right now; I'll solve it and give it to you as a handout, because it's a pain in the neck. But I want to show you how the differential equations would go, and let's kind of set it up. So this is the theta of separation; this is what we're
trying to get at, right? So what we're going to do is first solve this, you understand, get theta as a function of time, and then solve this and figure out the theta of separation. Got it? So how do we solve this? This is actually a differential equation: d^2(theta)/dt^2, the m's will cancel here, equals minus cos theta over rho... times g. Where's the g? Uh, here, right. So d^2(theta)/dt^2 = -g cos(theta) / rho. It's a differential equation, folks. And do you know how to solve this? Huh? Go ahead. Yeah, basically what you're going to do, I mean, the long and short of it is, you'll propose some solution. For example, you'll say, assume theta is some function, and you're going to propose it, solve it... yeah, you'll basically integrate, right? So I'm not going to do it right now. You would solve it and then get separation, but this is how you write the differential equation. You can see that writing out all the differential equations and solving them can be somewhat painful, right? So what is a nice way to solve it? I'll solve it for you and give it as a handout; I want to get to the last part in this class. You can solve this, you'll get a handout on this, but now the question is, it's a pain, right? You solve this differential equation, I want to get the separation angle. Is there another way to do this? I'm sorry? Yeah, yeah, that's exactly what you would do. That's what solving it is, right? You would guess theta is some function, solve for it, right, and then from that get the separation angle. But is there an easier way? And by the way, solving differential equations is often also called integrating a differential equation, right, to make it more algebraic. But this is very painful, just kind of a lot of differential equations. And yeah, you know, you can do it; in this case, actually, you can do it in closed form. I'm not going to do it right now; I'll give it as a handout, because the point of this is not to show you how to solve differential equations right now. The point of it was to get to that separation angle. Is
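Since the closed-form solution is deferred to a handout, here is a quick numerical check (an editorial sketch, not from the lecture; the setup assumes theta measured from the horizontal, the skier starting at rest just below the top of a hemisphere of radius rho). It integrates d^2(theta)/dt^2 = -(g/rho) cos(theta) step by step and watches for the normal force to vanish:

```python
import math

def separation_angle(rho=1.0, g=9.8, offset=1e-3, dt=1e-5):
    # theta is measured from the horizontal; start at rest just below
    # the top (theta = pi/2 - offset) so the motion gets going.
    theta = math.pi / 2 - offset
    omega = 0.0
    # Normal force: N = m*(g*sin(theta) - rho*omega**2); separation when N = 0.
    while g * math.sin(theta) > rho * omega ** 2:
        omega += -(g / rho) * math.cos(theta) * dt  # semi-implicit Euler step
        theta += omega * dt
    return theta

theta_sep = separation_angle()
# sin(theta_sep) comes out very close to 2/3, agreeing with the
# energy-method answer the lecture arrives at shortly.
```

The stopping condition is just the force balance from the free body diagram rearranged as a test on N, so the simulation and the energy argument are two routes to the same separation angle.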
there another way to do it which will give me the same answer? Energy, right? Because with energy, the whole point was, we kind of integrated this in advance, right? You'll only get one equation from energy, but it's for free, almost. So let's do that, and let's try and figure it out now from energy, and that's your other snap quiz. Remember I said I'll ask you again: I want you to figure out an energy formulation for solving this problem. Go. And I want you to struggle a little bit, right? For example, potential energy... it's actually a potential energy change. What is the potential energy at the top? Yeah, but why not just assume it to be zero? Just assume that to be zero, just make your math a little easier, you understand? Or it doesn't matter, take the bottom, actually. Yeah, it doesn't matter; you can do it a number of ways. But try and solve it. I want you to struggle through it, then we'll solve it, right? Right now: will energy give a different answer than if I just integrate the heck out of this, or is the energy equation just a restatement of this equation? Anybody? Is there any new information in the energy equation compared to this? There is not. It cannot give a different answer. It's the same equation written in another way. It's just these things kind of manipulated to look different; that's it. No new information. Yeah, very simple, right? And Ted, I saw his notation; he had it in his head and he just threw it down. Right, so what we're going to do is find the potential energy at the top, which we can set to zero, 100, 216 and a half, whatever you want; it's just a number, right? And the kinetic energy at the top. What's the kinetic energy at the top, assuming it starts from rest? Then, as a function of theta, we're going to find the potential energy and we're going to find the kinetic energy, right? We're going to equate them, we'll solve for the velocity, and we'll see where the reaction vanishes,
and that should give us the answer. OK, so Ted's going to follow one path of solving this problem. Now, how do you separate variables? You integrated it with respect to what, time? Yeah, how can you integrate that with respect to time? That's true. So let's figure it out, and I'll write a little handout. So, Ted, you're going to have to, uh, state what you're doing. Yeah, go on, take over the lecture, man. You need to be a little weird, because I'm weird too. So... you equate the change in potential energy to the change... OK. So what's the potential energy at the top? It's zero? Did you write it as zero? How did you write it? OK, you took height, OK, and height was the same as rho, right? And then you defined y from, you know, from the ground to the top, so then the change in height going downwards is h minus y, I guess. That's going up. So this is the change in potential energy. OK, so has the potential energy gone up or gone down? What? Has the potential energy gone up or down? Uh, the potential energy is decreasing. Decreasing, OK. So the decrease in potential energy must be equal to the increase in kinetic energy, right? So what is the kinetic energy? Half mv squared, OK. OK, so let's rewrite that, just in terms of... just stay there, stay there. Um, I'm going to use this space here and just kind of comment on what Ted did there. So basically what he did was: half m times v_P squared (remember, v squared is the same thing dot-producted with itself, right) is equal to, um, the change in potential energy, which is mg times, in our notation, this height, right, this gap, and that is rho times (1 minus sin theta). OK, keep going. And then you just solve for v_P, the magnitude of v_P, which is velocity, right? I'm just going to write that as: velocity is equal to the root of, the m's cancel... you don't need to take the root, but I'm just solving for it for what it's worth, right; we don't need it because we'll use it in its full form: 2gh(1 minus sin theta). All right, then what do you do? So
then, at the point where it leaves, there's no normal force, which means that, um, the radial component of gravity is equal to the centripetal force needed to keep it going in a circle. OK, so hold on, let me just comment on that. That is this guy, right? He's just taking components along the b1 direction, you understand? So he's saying N minus mg, you know, and it'll be, uh, a sin theta, right, mg sin theta. Yeah, go ahead, keep going. Yeah, no, just write it down, right? OK, so you have: the centripetal force is mv squared over rho, and that has to be equal to, um, the radial component of gravity. So you have... so this becomes... that's correct, by the way. Yep. Um, so now you want to solve for the theta where this is true. This is only true at this... where, uh... so, um, we know the m's go away, and everything else goes away, so you just... write the final answer. Good, excellent. Thank you. So we'll go to lunch. You're free today? I'll take you out to lunch today, OK? Done. OK, it's amazing what people do for a free lunch, including standing on a stage. Great job. Thanks, Ted. That's exactly right. So let me kind of just reinterpret that, right? So essentially what he did was... are you clear up to here? Up to here, that's what I just wrote, right? So essentially what he then did was, he said that, look, in this term we calculated the net force, right? And if you just do a force balance along the b1 direction, essentially what it'll come down to is that the net force along the b1 direction is, um, mg sin theta minus N is equal to... sorry, sorry, mg sin theta minus... how do I say that? Let me do it right here. So it's accelerating downwards, so the force on it is mg sin theta, the net force is N... so the, uh, force balance is... help me out here, Ted, how do I do this? I'm trying to interpret it in your words here: mg sin theta, right, is equal to, in your words... excuse me, let me just reinterpret it here. Now I'm just trying to rewrite it in your words. Say it again? Yeah, is equal to mv squared over r. That's right,
which is equal to m times 2gh(1 minus sin theta), right, all over h... actually, it wasn't h, it's rho. Is that right? OK. So the m cancels, the rho cancels, and the g cancels, and what this becomes is: sin theta is equal to 2 minus 2 sin theta, right? That means 3 sin theta at separation is equal to 2, right? And theta is equal to the arcsine of 2/3. I didn't do much; I think Ted did a very good job, I just rewrote that. OK, now, coming back to this guy: we'll solve this for you, but here's the deal. I'm actually going to end now in a couple of minutes, but it's very simple. If you actually calculate velocity using energy and calculate theta using energy and plug it into this, this will turn out to be true, the same equation, you understand? So if you solve it this way and plug it into this, it will turn out to be true. Yeah? Right, so if you use that and plug it in here, because the energy is like a pre-integration of a differential equation, right, if you plug it in here, of course it's going to turn out to be true. So that's actually also the solution of the differential equation. Now, differential equations of this sort, some of them can be solved in closed form, by guessing, by tricks, et cetera. Some of them you have to use MATLAB for, right? That's the way it is. Really complicated systems: use MATLAB, do a simulation. Some of them you can do in closed form, and you get some nice insights. This one you can do in closed form, and we'll give it to you as a handout. Yeah, how do we get that equation? OK, so what we did here was, we said, look, if you take a free body diagram here, right, we said there's some reaction force N, um, there is some force on this, mg, right? If you take components in this direction, that's mg sin theta, right? And this thing's got some acceleration, and the only acceleration inwards is v squared over r, right? Now I'm doing it in English. All I'm saying, all Ted is saying, actually, is: hey, at what theta is the mg sin theta
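The algebra in that last step can be checked directly in a few lines (an editorial sketch, not from the lecture; the particular values of g and rho are arbitrary, since both cancel in the balance):

```python
from math import asin, degrees, isclose

sin_sep = 2 / 3                    # from 3*sin(theta) = 2
theta_sep = asin(sin_sep)          # the separation angle in radians

g, rho = 9.8, 1.0                  # arbitrary; both cancel in the balance
v_squared = 2 * g * rho * (1 - sin_sep)   # energy: drop in PE goes to KE
# radial force balance with N = 0: g*sin(theta) = v**2 / rho
assert isclose(g * sin_sep, v_squared / rho)

print(round(degrees(theta_sep), 1))  # 41.8
```

So measured from the horizontal, the skier leaves the surface at arcsin(2/3), roughly 41.8 degrees, regardless of the radius or the local value of g.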
component barely enough to overcome the centripetal acceleration? Get it? At some point it won't be enough, right? At some point it's going to be not enough to keep this guy in a circle, and zero is the point where it crosses over, right? Past that point, the reaction force would have to be a sticky force to keep the guy on the cliff, right? So when the reaction force vanishes is the point where gravity is tipping him off, where he transitions from that to somehow trying to hang on for dear life, right? So that transition happens when N goes to zero. Does that answer your question? Did you have a question? No, same thing? OK. So, yep, you had a similar question? Well, this was meant to be interactive, so we're interacting. Good. Yeah. OK, so, um, so the assumption here is that at the top the potential energy is zero? That's right. And no, no, it doesn't matter. No, no, that's not an assumption... well, no, I would say it's slightly different. OK. The thing I don't understand is how... so when you say mgy, you're saying that at any given point the kinetic energy is equal to the potential? That's not true. No, no. Here's what's exactly being said. So actually, this is what you need to understand. Take the state on top; take the state at which it's at some angle. Whatever potential energy the guy lost sliding down must have gone into kinetic energy and nowhere else. Very, very important. Potential energy is like a battery, all right? It's like a battery. This person is discharging the battery by going down, right? And that's going into kinetic energy. If, for example, the person was losing energy to drag, to friction, right, then energy is not conserved. But we're assuming energy is conserved, so the drop in potential energy is the increase in kinetic energy. Is everyone good with this? Copacetic with this? Right. Now, what I was going to say to you, this final comment, is: differential equation, painful to solve, not obvious. You know, yeah, you can solve it, and we'll solve it for you, right? But guess what?
A pre-solved differential equation: energy. Only one degree of freedom, so energy is good enough. Solve it, insert it back in here, and hey, you have a solution for the differential equation, if you want it. Get it? But it's no different; it's the same thing. OK, done. So that's equations of motion. Next time we'll do angular momentum and we'll get into rigid bodies and multiple particles. Lunch!
MIT_24900_Introduction_to_Linguistics_Spring_2022
Lecture_12_Syntax_Part_2.txt
[SQUEAKING] [RUSTLING] [CLICKING] NORVIN RICHARDS: So last time, we started doing syntax, and we had started drawing trees like this one. What I said, I think, was-- if you remember when we were looking-- when we were doing morphology and we were looking at words like "unlockable," we were convincing ourselves that it was profitable to think of words like that as not just consisting of three morphemes-- "UN-LOCK-ABLE." It does consist of three morphemes, but that it's useful to think of these three morphemes as being composed in pairs. So there are two ways to make the word "unlockable." You can put "un-" together with "lock," and then you can plop the result of that together with "-able," or you can put "lock" together with "-able," and put the result of that together with "un-" and those mean different things. That was what we were saying when we were talking about "unlockable." We're going to talk about syntax the same way. Yes, this is a string of words-- however many words it is-- 8, I guess-- but there's more to say. It isn't just a string of eight words. It's a string of 8 words that are composed via this pairwise operation of merge that takes pairs of things and puts them together to form larger things. The other thing we said-- so I've just gotten started on building a syntactic structure for this sentence here by saying, yeah, we've got these two nouns, "book" and "garage," which are preceded by determiner "the," which should be merged together with these nouns to form larger objects. In the case of "unlockable," we were saying affixes like "un-" are going to come with instructions like "Put me together with a verb and I'll give you a verb." And instructions-- sorry, morphemes like "-able" are going to come together with instructions that say "Put me together with a verb and I will give you an adjective." And so when we put two things together, we're adding these-- putting labels on these nodes that we create via merge. 
In the case of morphology, what label you put on the larger node is determined by the affixes. And we decided affixes have to come with instructions, like "I turn verbs into adjectives," or "I attach to adjectives, and the result is an adjective"-- things like that. For syntax, similarly, we're going to take these pairs of things and we're going to merge them together, and we're going to give a label to the result. In syntax, the label of the result is always going to be the label one of the two things that you put together. So in this case here where I merged "the" and "book" or I merged "the" and "garage," I'm going to give the result of that the label "N," because we decided that the rules-- whatever they are-- for determining which things can go where seem to care about the N. That is, there are a variety of things-- take a sentence like this one-- that can go in the slot occupied here by, for example, "the book." And what they have in common is that they have nouns in them. So you can say "I will find the book in the garage," you can say "I will find books in the garage," you can say "I will find purple books in the garage," you can say "I will find the purple books about syntax in the garage." There are various kinds of modifiers and other things that you can add or not, but what they all have in common is that there's a noun. And so we're going to reflect that fact by naming that larger object-- the result of merging "the" with book-- giving it the label "N" as well, passing on the fact that there's a noun in here to that node. In the case of morphology, like I said, we had nice clear rules for what label to put on each of these nodes. The rule was look at the affix and the affix will tell you what you should do. So the affixes say things like, if you merge me with a verb, the result will be a verb, or if you merge me with a verb, the result will be an adjective. 
Syntacticians have not gotten that far with figuring out the rules for how to label things, so I am just going to label things. And this is a current topic of fairly hot debate, actually, among syntacticians-- like, why are we getting the labels that we are? We're pretty clear on what the label should be, but why is not as clear. So that was one part of the-- start of building a tree for this sentence. Are there any questions about this? This is all quasi-review. So from here, we've created "the book" and "the garage." We decided "in the garage" is also a constituent. It deserves a node. It's a prepositional phrase, so we're going to merge "in" together with the node that we created by-- that first instance of merge, "the garage"-- that N-- that larger N. Yeah, so we'll merge P with that larger N and we'll get a larger P-- a phrase with the label P-- what you could call a prepositional phrase. And then we said similarly, "find"-- we're going to merge "find" together first with "the book," and then finally, merge those two things together to get this structure for that much of the sentence. And we'll stop here for now. We're going to build the rest of the sentence in just a second. And I think this was as far as we got. And I said, is anybody alarmed or disturbed by this. And several of you raised your hands, because, in fact, this is only one of a couple of possible structures this tree can have that's-- sorry, this is only one of a couple of possible structures this sentence can have. We're going to spend a lot of time talking about that today. You were one of the people who raised your hand last time. Yeah? AUDIENCE: So we have P for both N and raised in the variety [INAUDIBLE].. So is N by itself a prepositional phrase or is it just verb phrase? NORVIN RICHARDS: So good question. Let's see if I can answer your question in a way that will not cause more confusion than it gets rid of. 
I have used now the phrase "prepositional phrase" a couple of times, and "noun phrase," and "verb phrase," and I'm going to dramatically introduce what those mean in just a second. I think it's almost the next slide. Let me just ask you to hang on to that question. It's a good question. But are there other questions about this much-- this far? So if you're looking at this and thinking, wait, I think of it as having a different structure, that's a good thought to have. Hang on to that thought. Ah, good-- this is the answer to your-- I'm sorry, what's your name again? AUDIENCE: Kirai. NORVIN RICHARDS: Say it again? AUDIENCE: Kirai NORVIN RICHARDS: Kirai. Kirai's question-- so he wanted to know, what do you mean when you say phrase? There's a convention, and it's really just to make it easier to look at these trees. But it's a very widespread convention, and I'll be doing it all the time when I draw trees, so let me introduce you to it. If you have a tree-- so the tree we had before-- let's go back a slide. The tree we had before-- whoa-- had various nodes that have the same label. There are three nodes here that have the label V, for example. And there are two nodes that have the label P, and there are four that have the label N, actually. One-- two associated with "garage" and two associated with "book." And if you look at this tree, it can be fairly confusing looking at all of these nodes that have identical labels. So the convention-- one convention just for making it easier to look at trees is to mark the highest node that has a given label when you have a sequence of nodes that all have the same label, because they all started with one word and they're inheriting a label from that word. When you get to the highest of those things, you give it the label P, where P stands for phrase. So the answer to this question-- when we say a prepositional phrase, what we typically mean is the largest thing with the label P from a given-- from a given node. 
So "in" by itself is a preposition, and that larger thing with the label P is the prepositional phrase. Again, this is just to make it easier to process trees so we won't see so many identical labels. We're just putting a mark saying, this is as far as this label got. That's all that means. Yeah? AUDIENCE: [INAUDIBLE] NORVIN RICHARDS: Ah. So in this tree, it's not because there is another thing with the label V that is higher than "find the book"-- yes. But you are right-- there is another thing we do for exactly that case. Let me show you that. Nice question-- it allows me to segue nicely into the next slide. We have another thing that we do for exactly that case. Nodes that are not-- that don't get the label P, because they're not the highest thing and that also are not-- just the bare word, like find-- just the verb-- things that are in between that. It's an unfortunate label. We call things like-- so I'll draw it again here-- "find the book"-- so I won't draw the whole thing, but "find the book"-- there's this thing and then there's this. This is the VP because it's the highest thing with the label V. This is the V because it's just the verb-- it's just a word by itself. And this thing, we use a prime symbol to mark it, and it is called a V bar. I'm sorry, that's just the way we do things in syntax. I think it's called this because it used to be that people would write it like this with a bar over the top. This was in like the-- I don't know-- the 1960s, 1970s people began doing this. And I think people were literally using typewriters to go back and do an underscore over the top of the V or something like that. I think that's what they were doing with Stone Age word processing technology. So the result of all that is that although nobody writes V bar, typically, you will sometimes see this in textbooks and things like that. It's not very common. What people much more commonly do is just write V prime, but it is never ever called V prime.
It's always called V bar. If you tell a syntactician, "Find me the V prime node," the syntactician will not know what you're talking about. Sorry. Yeah? AUDIENCE: So if "find the book" itself will not, like, considered a verb phrase, I guess, would it have the bar or? NORVIN RICHARDS: If "find the book"-- in this tree? AUDIENCE: Yeah. NORVIN RICHARDS: So it's a-- I'm sorry, can you ask your question again. AUDIENCE: Yeah, because I'm just thinking from grammatical standpoint, I guess, if "find the book" would be considered a verb phrase. But since it's not the highest standard of the bar, but if it were not a verb phrase, would it still not apply? NORVIN RICHARDS: So if this sentence ended with "find the book"-- if it were "I will find the book," then "find the book" would be a verb phrase. A phrase is just the name for the highest thing with that label. So just like in "the book"-- yeah, the book is a noun phrase. If you had something-- and "book" is not a noun phrase, it's just an N. So there are places where you don't get a bar just because you only have two things with that label. And you just get the N, and then there's the NP. In this particular case, you're getting a bar because there are three things with the label V. And if there were more things-- if this verb phrase were more complicated, you could end up with a bunch of bars in between the verb and the VP. In general, you're going to get one P up at the top and one thing with no extra doohickey down at the bottom, so just V. And everything in between it will be bar. Again, this is all just conventions. This is not meant to reflect anything deep. It's meant to make it possible for human beings to look at these trees and process the information in them. That's all it's for. Raquel? AUDIENCE: For things like "will find" or like an auxiliary where you squish multiple verbs together, is there something more complicated going on there, or could we say that they're both part of a verb phrase? 
NORVIN RICHARDS: We have not gotten there, but you're right. And so Raquel wants to know what am I going to do with "I will"? And I will-- I will tell you what I'm going to do with that now. We will give "will" its own label in just a second. Good question. Other questions about this? Reasonably clear? So-- but yeah. So again, P is the marker for the highest thing with a given label. The lowest thing with a given label doesn't get a mark. Sometimes, when people want to make it clear that it's not a bar and it's not a P, they'll put a little raised 0 on it. I won't do that, but you'll see people do that sometimes. So you've got the unmarked thing-- that's the word itself. You've got the P-- that's the highest thing, and everything else is a bar. One consequence of all this actually is that there are things which are both phrases and heads. This is connected to Kirai's question from a little while ago. So the D "the" in this-- both of the instances of D "the" in this tree, they are the lowest thing with the label D, so they don't get marked. They're also the highest thing with the label D, because neither of them is higher than the other. They're just two instances of D. And so an influential school of thought about what's going on in cases like that is to say, yeah, those are both phrases and not phrases. Try not to be too uptight about these labels. That's one way to talk about them anyway. So now, I drew you a tree-- and some of you last time objected. I drew you a tree in which a prepositional phrase was modifying a verb phrase. That's-- I'll show you the tree again. You've got a prepositional phrase, "in the garage." And I've got that prepositional phrase merged together with something with the label V. That's a prepositional phrase that's telling you where the finding of the book is going to take place. It's going to take place in the garage. Now, prepositional phrases can also modify noun phrases. 
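The merge-and-label procedure, together with the phrase/bar naming convention just described, can be mimicked in a few lines of Python (an editorial sketch, not part of the lecture; the `Node`, `merge`, and `name` identifiers are invented for illustration):

```python
class Node:
    def __init__(self, label, word=None, children=()):
        self.label, self.word, self.children = label, word, children

def merge(a, b, head):
    # Merge is binary; the new node projects the label of its head
    # daughter.  Which daughter counts as the head is stipulated here,
    # just as the lecture stipulates the labels on the board.
    return Node(head.label, children=(a, b))

def name(node, parent_label=None):
    # Convention from the lecture: bare label for the word itself,
    # label + "P" for the highest node with that label, and
    # label + "'" (pronounced "bar") for anything in between.
    if node.word is not None:
        return node.label
    if node.label == parent_label:
        return node.label + "'"
    return node.label + "P"

# build "find the book in the garage"
book, garage = Node("N", "book"), Node("N", "garage")
np1 = merge(Node("D", "the"), book, head=book)      # "the book"
np2 = merge(Node("D", "the"), garage, head=garage)  # "the garage"
in_ = Node("P", "in")
pp = merge(in_, np2, head=in_)                      # "in the garage"
find = Node("V", "find")
vbar = merge(find, np1, head=find)                  # "find the book"
vp = merge(vbar, pp, head=vbar)                     # the whole verb phrase

print(name(vp), name(vbar, vp.label), name(pp, vp.label))  # VP V' PP
```

Note that `name` needs to know the parent's label, which reflects the point that "phrase" and "bar" are purely relational bookkeeping over a chain of identical labels, not properties of the node itself.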
You can say things like "I will find books about syntax," where "about syntax" is telling you what kinds of books you're going to find. Now, here are just some consequences of that we can think about. First, we've said that noun phrases can contain prepositional phrases-- books about syntax. And we know that prepositional phrases can contain noun phrases, which can contain prepositional phrases, which can contain noun phrases. So there is no reason to ever stop talking. We talked about this when we were talking about-- I'm wearing my mask again-- the competence / performance distinction-- the fact that we're idealizing the actual kinds of sentences that people say. Here's another case of this. So you can have a book about islands, and you can have a book about islands on lakes, you can have a book about islands on lakes on islands, you can have a book about islands on lakes on islands on lakes, and so on. Eventually, you will run out of things that your book can be about if you keep repeating this, but that is a fact about geography, and this is not a geography class. There actually is a website-- or used to be-- that listed the largest island on a lake, and the largest island on a lake on an island, and the largest island on a lake on an island on a lake. It's kind of cool. There's a lot of stuff going on around Lake Taal in the Philippines, I remember. Is this clear? So there's another-- I just said there is another instance where we might want to care about the difference between competence and performance. We've got a theory that we're starting to build about syntax that makes it possible for noun phrases to be arbitrarily long. No one's ever going to utter an arbitrarily long noun phrase. Nobody's going to keep talking as long as this grammar would allow you to, but that's a fact about, as I said, geography and life. People have better things to do than to continue repeating these things.
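[EDITOR'S NOTE: The recursion just described -- noun phrases containing prepositional phrases containing noun phrases, with no grammatical bound -- can be illustrated with a toy generator. The function name and wording are my own, not from the lecture.]

```python
# A toy illustration of the NP-inside-PP-inside-NP recursion: the grammar
# never forces you to stop; only geography (and patience) does.

def island_chain(n):
    """Build 'islands on lakes on islands on ...' with n PP layers."""
    words = ["islands"]
    for i in range(n):
        words.append("on")
        words.append("lakes" if i % 2 == 0 else "islands")
    return " ".join(words)

print(island_chain(0))  # islands
print(island_chain(3))  # islands on lakes on islands on lakes
print(len(island_chain(100).split()))  # 201 words, and no end in sight
```

The point of the sketch is the same as the lecture's: the rules place no upper bound on the length of a noun phrase, even though no speaker will ever exhaust them.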
But it's clear we want a grammar that allows this. So yeah, you couldn't actually utter an infinitely long noun phrase, but we now have a grammar that's capable of producing one. Another thing that it gets us-- and this is the heart of the reaction that I was getting last time when I showed you the tree that I wanted to do for that sentence-- is that there are cases of ambiguity. So this is a classic Marx brothers line in Animal Crackers, I think, Groucho Marx says to someone, "I once shot an elephant in my pajamas." And there's a pause, and he says, "How he got in my pajamas, I'll never know." So Groucho Marx here is making use of the ambiguity of the original sentence. "I shot an elephant in my pajamas" has two things that it could, in principle, mean. It could mean I shot an elephant while I was-- so here's in my pajamas. The prepositional phrase in my pajamas could modify the elephant. That is, it could be it was the elephant who was wearing the pajamas-- I shot an elephant in my pajamas. Or it could be that we put together "shoot an elephant," and then we merge "in my pajamas" with that so that "in my pajamas" is modifying the way in which I shot an elephant. That is, I shot an elephant while I was wearing my pajamas. And so I shot an elephant in my pajamas, it's an ambiguous sentence. It could mean either I was wearing pajamas or the elephant was wearing pajamas. Do people get that ambiguity? It's a classic joke. It's odd that we enjoy this. There's a lot of humor that has this shape, where someone says an ambiguous sentence that has one normal meaning and one strange meaning, and then the punchline reveals that they meant the strange meaning. That's the form of this joke. "I shot an elephant in my pajamas"-- you, of course, think that he means he was wearing his pajamas. And then he reveals that, no, it was the elephant that was wearing the pajamas, and then everybody laughs. 
As I said, it's odd that we-- I mean, normally, being told, haha, you misunderstood me, that's not fun. But we actually pay people to do this. It's kind of strange. Yeah? AUDIENCE: So it almost seems like the one on the left that's like started reading is-- almost seems like a simpler tree-- NORVIN RICHARDS: Uhuh. [LAUGHS] Yeah. AUDIENCE: --but compared to the more complicated one, is that some of the reason why this is ambiguous? The one that has the normal meaning has this complicated structure, but the one that has the wrong meaning is a very simple structure [INAUDIBLE]? NORVIN RICHARDS: Yeah. No, it's interesting that you have this-- I mean, right now, what we have is just a grammar that predicts that it could mean either of those things. So that prepositional phrase in my pajamas can be merged in a couple of different places, and the consequence is that it could mean a couple of different things. You're absolutely right that the tree on the left is prettier than the tree on the right. And people who work on how people deal with ambiguities like this in real-time-- this is something people work on-- processing-- develop theories about where the preferences are for where prepositional phrases should be attached in a place like this, whether there's a preference for trees like the tree on the left or trees like the tree on the right. I think there isn't supposed to be a general preference for trees like the tree on the left. I think this is just supposed to be a case where, in principle, it's ambiguous and-- but people who work on processing have found places where there's a preference for one kind of tree over another, so that is something people work on. Raquel? AUDIENCE: I guess a simple question-- so at the bottom of the left, there's an NP, like my pajamas, and there's N-bar that's higher. I guess why isn't an NP down there if there's another NP up above? NORVIN RICHARDS: Yeah, so that's a very good point. 
When I said before that you give P to the highest thing with a given label, I should have said something more sophisticated-- something like you give P to the highest thing with a given label that is produced or that is being projected from a given head-- from a given word. So this-- so "my pajamas" is an NP because it's the highest node that has an N that comes from "pajamas." And then there's a higher NP that's right under the VP-- the one that you merge together with the verb-- and that's an NP that's got its label from "elephant." So there are two NPs here-- one of them coming from "pajamas" and the other coming from "elephant." That's a very good point. What you're pointing out is that when I said the highest thing with the label N, I was being too fast. It's the highest thing with a given label N, and we have to distinguish N's from each other. Ooh, lots of questions. Yes? AUDIENCE: Actually, I think the thing about [INAUDIBLE] about the [INAUDIBLE] simple are not likely [INAUDIBLE]. Because I didn't get the normal reading. NORVIN RICHARDS: Oh, yes. [LAUGHS] AUDIENCE: [INAUDIBLE] imagining this guy was shooting himself while there was an elephant [INAUDIBLE]. NORVIN RICHARDS: Oh, the elephant is inside his pajamas and he is also wearing them. Ah, yes. I don't know what kind of tree we want to build for that. That's an interesting question. Actually, we're going to develop some tests in a second. So far, all I've done is assert this-- assert a couple of things. That string of words ought to have two different structures. So that prepositional phrase ought to be able to modify either a noun phrase or a verb phrase. And we have the sense that that string of words is ambiguous. It can mean a couple of different things. I haven't yet given you any reason to believe that these structures really are the structures that are associated with those meanings. I'm going to try to do that in a second to motivate that claim.
And let's bear your reading in mind as I do that, and we'll see how it behaves with respect to the tests that I'm going to show you. This should all feel kind of familiar, right? So when we were doing "unlockable," we were saying, yeah, there are three morphemes-- there are two ways you can combine them. And if we think about what these morphemes do with the things that they combine with, we can understand why the word "unlockable" is ambiguous in the way that it is. We're doing something similar here. So our rules for how things combine allow us to create these strings of words in a couple of different ways. And it's ambiguous-- we wonder if our freedom of building-- having a couple of different ways to build trees for this-- has anything to do with the fact that it seems to mean two different things. Joseph, did you have a question? AUDIENCE: Yeah. So looking at-- when you said that-- because on the left tree, there's two different noun phrases. And so following up on that, if you have-- would it also be appropriate to say-- so "pajamas" down all the way on the right is part of a noun phrase-- and that's part of a prepositional phrase, so that kind of chain of the noun phrase is terminated, so it's OK to start over with that? NORVIN RICHARDS: That's-- yes, that would work fine too. Your version of this raises the question of whether we'll ever find a noun phrase that has-- that is merged together with something else that also has the label N, or whether we'll ever find a verb phrase that is merged together with something else that also has the label V. I don't know whether we'll get around to seeing examples like that in this class. It's not clear that there are such examples, and if there aren't, then that's interesting. We might want to make something of that. So yes, your amendment to what I said and my amendment to what I said are both good ways of talking about the answer to Raquel's question from earlier. Yeah?
AUDIENCE: So just to follow-up, I know someone asked about "I will," and so that, of course, has a modifier. But "I" in the sentence, do we just not [INAUDIBLE]?? NORVIN RICHARDS: Oh, no. See-- I'm sorry. I'm being very unfair to you guys. I keep only showing you trees for parts of sentences. So we've not-- we haven't gotten any further than the verb phrase. AUDIENCE: That [INAUDIBLE]. NORVIN RICHARDS: So I'm trying to get as much mileage out of the verb phrase as I can before we go on to larger things. But you're right, eventually, we will do sentences, I promise. Questions about this? They're all good. I was saying to the TAs the other day, you guys are-- it's nice being in a class full of talkative people. I appreciate it. So two trees, and a hypothesis that we might entertain is that the tree on the left is the one where the elephant was wearing the pajamas and the tree on the right is the one where the shooting involves me being in the pajamas-- sort of the normal reading-- normal for some of us, although it's been established that not all of us are normal. Now, remember back when I was trying to convince you that there were such things as constituents-- that syntax cared about whether a particular string of words were all descended from a single node or not. I was showing you tests for constituent structure, and in your recitation sessions, you may have played around with different tests for constituent structure. So one of the tests that we were fooling around with was what I called topicalization. It's possible to emphasize something by moving it to the beginning of the sentence. So you can say things like "the elephant I shot in my pajamas," or "the elephant in my pajamas I shot." Now, let's consider the tree on the left-- this tree. Is there a constituent in that tree-- an "elephant" that doesn't have anything else in it? Several of you are appropriately shaking your heads. There are nodes that are above the words "an elephant." 
There's a D that's above "an," there's an N that's above "elephant." And there's an NP up there which has "an elephant" as part of it, but it also has "in my pajamas." So in the tree on the left, there's no constituent "an elephant." How about in the tree on the right-- is there a constituent "an elephant"? Yeah, there's a noun phrase like that. So when we do the topicalization-- "the elephant I shot in my pajamas"-- we should only be able to do the tree on the right and not the tree on the left. That is, it should only have the meaning that the tree on the right has-- that I was wearing the pajamas and not the elephant. And several of you are raising your hands and I'll call on you in just a second, but before I do that, do people have the feeling that that's true? That if I say "The elephant I shot in my pajamas-- the aardvark I shot in my tuxedo," that I'm wearing these things. It's not my victims. And I think that's right. So the ambiguity that we have if we haven't done any topicalization-- that ambiguity was there because we had both of these trees. And if we run a constituency test, that collapses the ambiguity, because it makes it so that we could only have the tree on the right, not the tree on the left. And so we can only have the meaning on the right and not the meaning on the left. Yes? Sorry, I'll get to you in just a second. AUDIENCE: So yeah, on the tree on the left, would it be more correct to combine "an elephant" first before adding it to the big upper tree? NORVIN RICHARDS: Oh, you mean combine "an elephant" first and then put "in my pajamas" together with that? AUDIENCE: Yeah. NORVIN RICHARDS: So we could do that. Notice that if we did that, there would again be a constituent "an elephant." So we would lose the result that we've got here, which is that if you front "the elephant"-- I'm sorry, I keep switching back and forth between "the" and "an." I'll just ask you to believe me that they're the same as far as this is concerned.
But if we topicalize "the elephant"-- if we move that to the beginning of the sentence, then we lose the tree on the left. So you're-- let me make sure I'm understanding what you're saying. You're saying, wait-- what about a tree that would look like this-- shot, and then we've got a noun phrase "an elephant," and then we've got a prepositional phrase here "in the (noun phrase)," and then determiner "my," and then noun "pajamas." You're imagining this tree. And yes, notice, though, that if we had that tree, there would be a constituent here-- "an elephant." So now, there are two ways we can go. One would be to say aha, we're learning that although prepositional phrase can merge with a projection of a noun-- something with the label N-- it can't-- it has to merge with N, or maybe it has to merge before you merge a D-- that there are rules about the order in which you merge things. We're going to get a chance to talk about things like that soon-- other places where you are so far-- let's see. We're at the stage of syntax where life is easy and free. There's ambiguity-- you can create trees however you want. You've now come up with a tree that we want to exclude somehow in order to avoid the result that first sentence would have this tree as a possible tree. We want to avoid that. So there are two kinds of things we could do. One would be to say, no, you may not draw this tree, and maybe that would be about the order in which you can merge D and prepositional phrases. That would be one thing we could do. Another thing we could do would be to say, yeah, in order to be topicalized, you have to be a constituent. But actually, there are some constituents that cannot topicalize-- that is, it's not a bi-conditional. That would be the other move to make. So this has been a very long and elaborate version of "yes." And so you're right-- this is an imaginable tree, and so we must do something. So I showed you a couple of things that we could. Does that make sense at all? OK? 
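[EDITOR'S NOTE: The constituency reasoning above can be checked mechanically. Below is a sketch using my own nested-list encoding (node labels omitted, since only the bracketing matters here) that collects every constituent of each tree for "shot an elephant in my pajamas" and confirms which strings are constituents where.]

```python
# Two bracketings of "shot an elephant in my pajamas", as nested lists.
# Left tree: the PP modifies the noun -- the elephant wears the pajamas.
left = ["shot", ["an", ["elephant", ["in", ["my", "pajamas"]]]]]
# Right tree: the PP modifies the verb phrase -- I wear the pajamas.
right = [["shot", ["an", "elephant"]], ["in", ["my", "pajamas"]]]

def leaves(tree):
    """Flatten a nested-list tree into its sequence of words."""
    if isinstance(tree, str):
        return [tree]
    return [w for child in tree for w in leaves(child)]

def constituents(tree):
    """Every string of words dominated by a single node of the tree."""
    found = {" ".join(leaves(tree))}
    if not isinstance(tree, str):
        for child in tree:
            found |= constituents(child)
    return found

# Topicalizing "the elephant" requires "an elephant" to be a constituent:
print("an elephant" in constituents(left))    # False -- left tree ruled out
print("an elephant" in constituents(right))   # True
# Topicalizing "the elephant in my pajamas" needs the longer constituent:
print("an elephant in my pajamas" in constituents(left))   # True
print("an elephant in my pajamas" in constituents(right))  # False
```

On this encoding the topicalization facts fall out exactly as stated in the lecture: each fronting is only compatible with the tree in which the fronted string is dominated by a single node, so the test collapses the ambiguity one way or the other.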
Good. Kateryna? AUDIENCE: I was also going to bring up that you could combine "an elephant" before merging to the prepositional phrase, and so-- NORVIN RICHARDS: Good point. And so I've answered. Good-- excellent. Good question. Yes, Joseph? AUDIENCE: This might be getting ahead of this-- NORVIN RICHARDS: Where we want to be? AUDIENCE: --but obviously, I [INAUDIBLE]. So if you break that constituent off "an elephant" in the second tree corresponding to the first [INAUDIBLE]. Breaking off "an elephant"-- I don't know what the actual topology of these trees are and what the rules are, but aren't you crossing that over the I branch? NORVIN RICHARDS: Why don't you remember that question and ask it again once we have complete sentences? I mean, so far, the only condition on topicalization-- this operation that puts something at the beginning of the sentence-- that I've offered you is that the thing that you topicalize has to be a constituent. I haven't said anything else about where-- like, I haven't offered to draw you a tree for "the elephant I shot in my pajamas," for example. Eventually, I will, but I'm not going to do that yet. I may not even do it today, partly because eventually, we have to get "I shot an elephant in my pajamas" just for an example. [SNEEZING] Bless you. So we'll get that first. Yes? AUDIENCE: I know this is kind of [INAUDIBLE] but with the example of "an elephant" [INAUDIBLE] is that its own constituent so that we could technically be like, "the elephant, comma, in my pajamas, comma"? NORVIN RICHARDS: Oh, dear. AUDIENCE: Like, separated it if it was [INAUDIBLE]. NORVIN RICHARDS: If it was-- you mean if we were able to draw trees like this? AUDIENCE: Yeah. NORVIN RICHARDS: I shot an elephant-- AUDIENCE: Oh, wait. Did we explain why couldn't-- NORVIN RICHARDS: Why we had better not do this?
So we had two people point out-- one of them actually got to point it out, and then Kateryna pointed out that she was going to point it out-- that this is a tree that you could imagine. You could imagine being able to build a tree like this. And my reaction was, oh, dear-- we must do something to stop that. So here-- I'll mark this tree with a frowny face or something. We've got to exclude this tree somehow-- maybe. We've got to avoid the following problem. This sure looks like a tree for a meaning where the elephant is wearing the pajamas. So a tree that has the same meaning as the tree on the left. But it's also a tree in which there's a constituent "an elephant." And the fact that I wanted to get across to you with this slide was that if you use topicalization to make it-- to find out-- I was only going to choose between these two trees-- to find out whether an elephant is a constituent or not, then you have to have the tree on the right where I'm wearing the pajamas. The elephant can't be wearing the pajamas. Two people immediately said, wait, what about this-- imagine the frowning face tree that I don't have on the slide. And I said there are two things we could do. One would be to say, no, you may not do this maybe because there are rules-- which maybe someday, we'll get to-- that constrain the order in which you can merge a prepositional phrase and a D with a noun-- rules that will guarantee that if you're going to put a prepositional phrase and also a D in projections of a noun, you better do the prepositional phrase first and then the D. And then we get to ask, well, why? Where did that rule come from? And the answer is, well, this is day two of syntax. But we'll get there. AUDIENCE: [INAUDIBLE] NORVIN RICHARDS: Did that-- no, no, no, that's fine.
The other thing we could do-- and here, let me say something I said before, but I'll say it a little more coherently possibly-- we'll see-- would be to say in order to topicalize-- in order to be fronted-- you have to be a constituent. But it's not enough to be a constituent. Notice, the thing that you would be fronting in the tree on the right. It's not just a constituent, it's a noun phrase. And what you would be fronting over here is not a noun phrase, it's an N bar. So you could say, ah, we're learning something new about topicalization. Topicalization-- yes, you must be a constituent, but it's not enough to be a constituent. You must actually be a phrase. You cannot just be a bar. And then that would be a new and exciting thing to learn. We have to find out which of these ways of dealing with this tree is the better way. You had a question a while ago. I'm sorry. AUDIENCE: Yes, so I might be jumping the gun here, but it almost seems to me like in the first tree-- no, that's a lie. Never mind. NORVIN RICHARDS: Good. [LAUGHTER] Yes? AUDIENCE: We're doing all of this based on the fact that we don't want the kind of meaning where the elephant is wearing your pajamas, but what if you want to say a sentence like that? NORVIN RICHARDS: Well-- AUDIENCE: We're trying to exclude that meaning from the sentence, but what if someone wants to say that. NORVIN RICHARDS: Well, so-- AUDIENCE: "I exterminated the bugs in my walls." NORVIN RICHARDS: So wait. Leave the bugs alone for a second. [LAUGHTER] We have-- there's enough death in this class already. Whoa, go back. If somebody wants the meaning-- so let's back up. Here's what I was going to do with this pair of trees. I was going to say, hey, look, "I shot the elephant in my pajamas" is ambiguous in the way that Groucho Marx took advantage of, and the ambiguity goes away. And so far, we've only done the first sentence-- "The elephant I shot in my pajamas." That has to be the tree on the right. 
The tree on the right is the only tree in which that's a constituent. Immediately, people began doing the frowny face tree. Forget the frowny face tree for a second. Those are the only two trees. It has to be the tree on the right. And "The elephant in my pajamas I shot"-- the ambiguity also collapses, but in the other direction. People have that feeling? So if I say, "The elephant in my pajamas I shot-- the elephant in my tuxedo I offered some bananas." That doesn't mean I shot an elephant while I was wearing my pajamas. That second one-- "The elephant in my pajamas I shot"-- only means that the elephant was wearing the pajamas. Why? Well, if we look at the trees we can see why. There's a noun phrase-- "an elephant in my pajamas"-- in the tree on the left, and there's no constituent "an elephant in my pajamas" in the tree on the right. So these constituency tests-- topicalization-- forces us to one or the other of those two trees. And depending on which tree you're forced into, you only have the reading that's associated with that tree. And so the ambiguity collapses in the way that it should. So once we nail down which tree we're looking at, we also nail down which meaning we have. Kateryna wanted to know what if I'm interested in the tree on the left? The answer is, well, either don't topicalize or do the topicalization-- the second topicalization-- The elephant in my pajamas I shot." Those are both consistent with the tree on the left, but you can't do the first topicalization. Now, have I already warned you about this? There's a danger if you go further in linguistics that you will lose the ability to distinguish grammatical from ungrammatical sentences. So your feelings about sentences will all become this kind of gray blur. It's sometimes called syntacticians' disease. And I have been a syntactician for longer than you have been alive, which is depressing. And so I no longer have any judgments at all about sentences. But I think this is how it works. 
Does anybody want to object when I say these topicalizations only have one meaning and not the other? Yeah? AUDIENCE: So how would you say that, while this topicalization [INAUDIBLE], and while [INAUDIBLE], it doesn't-- the first sentence still points to the second tree on the top slide. NORVIN RICHARDS: To the tree on the left? AUDIENCE: Right, because while they both have the constituent "an elephant," the difference is that "shot" has a verb-- has a V-bar that's combined with "in my pajamas" while "shot" is never directly combined with [INAUDIBLE]. So I only point out that there might be a rule that only when they're directly combined together, they have [INAUDIBLE]. NORVIN RICHARDS: So I was with you right up until-- I thought I was with you right up until the end there. The sentence, "I shot an elephant in my pajamas," the verb phrase can have either of these structures. Is that-- no? AUDIENCE: Because I think the first sentence, you said I shot in my pajamas. So it was shot [INAUDIBLE] pajamas [INAUDIBLE]. NORVIN RICHARDS: Oh, I see. And so you can only do that with the tree on the right, because "in my pajamas" got merged together with a projection of the verb "shot," whereas in the second one-- "The elephant in my pajamas I shot"-- what's wrong with having either of these trees? AUDIENCE: So "The elephant in my pajamas I shot" couldn't mean [INAUDIBLE] because an elephant is never [INAUDIBLE]. NORVIN RICHARDS: Ah, I see. Yes, I think-- I could be wrong, but I think what you are saying is another way of what I am saying. I think we're saying the same thing in different ways. So I am talking about these trees as though they are objects that you can grab nodes in them and move them around. You are talking about these trees as if they are sequences of events-- which is the way I encouraged you to talk about trees earlier on. They're records of the order in which you put two things together.
And you're saying, yeah, if you put "in my pajamas" together with "shot," then you can do the first sentence. And if you put "in my pajamas" together with "the elephant," then you can do the second sentence. That is, which kinds of things you can topicalize cares about which order you put things together in. And I think we are saying similar things. I'm just saying it representationally, if you want. I'm inviting you to look at these trees and think about them as objects that you can move around, and you're thinking in terms of the order in which you put things together. So I think you're right, but you're right in the same way that I am right. This is the kind of conclusion of this kind of conversation that I always like-- it's the one where everyone involved is right. Other questions about this so far? OK. So elephants-- is this enough elephant violence? Possibly. Let's see. Oh, no, there's so much more violence. I'm sorry. So this is just graphically illustrating what I just showed you-- or told you. In the first sentence, there needs to be a constituent "the elephant." And it's-- yeah, the blue constituent. It's the one in the tree on the right. And in the second sentence, there needs to be a green constituent-- a constituent "the elephant in my pajamas," and it's the constituent in the tree on the left. Notice that this also makes predictions about other kinds of sentences. So you might have thought: these trees-- all this stuff about merge, the order in which you'd merge things-- you don't really need any of this. Here's an imaginable thing you could think. You would have needed to move fairly quickly through that last slide. But forgetting about topicalization for a second, if I told you, "Hey, look, I shot an elephant in my pajamas," it's ambiguous as to who's wearing the pajamas. And I proposed to deal with it with these trees, you can imagine being someone who says, no, look, "in my pajamas" is just vague.
Somebody's got to be wearing the pajamas, and you figure it out from context whether it's me or the elephant, and that's it. There's no need for any trees. Just so we're clear, this is not the right way to think about this, but it's something you could imagine thinking. Here-- so the stuff we talked about in the last slide is one reason not to think about things that way. Constituency tests seem to nail down where the pajamas are in the structure, and that forces you to one reading or another. Here's another kind of sentence that also suggests-- where we also make a prediction, which I think is correct. So if I say "I shot the elephant in my pajamas in a tuxedo"-- I think that's grammatical-- and I think it's a sentence on which somebody is wearing pajamas and somebody is wearing a tuxedo, who's wearing the pajamas? The elephant's wearing the pajamas and I am wearing the tuxedo. Does anybody not have that reading-- or have instead the reading where I am wearing the pajamas and the elephant is wearing a tuxedo? Please cast out of your mind-- hold on-- readings in which the pajamas are inside the tuxedo or something. [LAUGHTER] I can see all of you thinking of alternatives, but just try to imagine cases where one person-- one entity is wearing pajamas and another one is wearing the tuxedo. If that's what's happening-- yes, [INAUDIBLE]? AUDIENCE: [INAUDIBLE]. NORVIN RICHARDS: No, the pajamas are not the tuxedo. [LAUGHTER] Ah! MIT undergrads-- man. No, there are two sets of clothing and two entities, and I am also not an elephant just in case any of you were thinking of that. And the clothes are not nested, or mixed, or anything like that. There is a set of pajamas and there is a tuxedo, and somebody is wearing the pajamas, somebody is wearing the tuxedo. Who's in what? I just asserted the elephant is in the pajamas and I'm in the tuxedo. It can't be the other way around. And we can see why.
If I was smart enough-- we'll see if I was smart enough to draw a tree in the next slide. Yes, I was! Hot dog. Why? Well, because if you're going to have two prepositional phrases-- "in my pajamas" and "in a tuxedo"-- and you're going to attach them in the ways that we've been attaching prepositional phrases before, this is the only way to do it to get this order. Unless you're going to cross lines or something like that, this is the only way to get this order of words. So "in my pajamas" has to modify "elephant" and "in a tuxedo" has to modify the verb phrase, and that's the only reading this, in fact, has. So day two of syntax and we're already successfully predicting facts about reasonably complicated sentences involving too much clothing in inappropriate places. Yeah? AUDIENCE: We haven't really discussed this directly, but is it a rule that you can't cross lines in a tree? NORVIN RICHARDS: I hereby declare that it is a rule that you cannot cross lines in trees. Yes, you're right, we have not discussed that. Back in the day, I would have called this the Ghostbusters rule just because the Ghostbusters had a rule that you couldn't cross streams. Never mind. Yeah-- other questions? Now, I have this vague sense that some of you would like to see what the rest of the sentence looks like. Call it a hunch-- I don't know. When you've been an instructor as long as I have, you get these vague hints like the fact that 12 of you have asked me where is the subject going-- where should I put "will"-- how do we do the rest of the sentence-- why, oh, why are you only showing us verb phrases and not entire sentences? So let's do the rest of the sentence. To do the rest of the sentence-- first of all, "the girl" is a noun phrase that's kind of boring-- "the girl" is like "the book" or "the garage." And for "will," we are going to invent a new kind of node. We're going to call it T for tense.
Just to prepare you for disappointment, there are all kinds of things that can go under T. You can also say "The girl can find the book in the garage," or "The girl might find the book in the garage." I can't talk today. So there are a variety of things that can go under T, not all of which it's plausible to think of as instances of tense-- like "might" is probably not a tense. If you look it up in traditional grammar, they're not going to call it a tense, but we're going to call it a tense. We basically just need a name for it. So there's this thing. Notice it's not a verb, so you can't say things like "the girl will the book." It's something else, and our name for it is going to be "tense." People have called it other things. They've called it the auxiliary or whatever, but tense is a conventional name for it in the literature, so I'm going to call it that. Yeah? AUDIENCE: So how would you differentiate the tense from [INAUDIBLE]? NORVIN RICHARDS: Well-- so those-- AUDIENCE: "The girl quickly found--" NORVIN RICHARDS: "Quickly found the book in the garage"-- yeah, good question. Let me give you the-- let me give you an answer that I won't be able to explain soon, but that's a very nice question. So the girl will find it-- I'll just say that-- and the girl quickly found it. And you want to know what's the difference between these two things. Here's one difference. There are a bunch of phenomena that this gets to participate in that this doesn't. So for example, if I want to make this into a question, I can ask the question, "Will the girl find it?" where I put "will" at the beginning of the sentence, and that makes the sentence a question. You can put "quickly" at the beginning of the sentence too, I guess-- "Quickly, the girl found it." That's a grammatical sentence, but it's not a question. It doesn't make it a question. So that gives us some reason to want these to have different statuses. I'll give you another reason to have them have different statuses.
If I ask you-- well, is this true? Maybe I'm about to lie. Maybe I should quit while I'm ahead. I think this is true. If I ask you, "Who's going to find the book?" you can answer "The girl will." But if I ask you "Who found the book?" it would be odd for you to say "The girl quickly." That's-- I think-- is that true? Did I just lie? I think that's true. So that's another reason to think there's some difference between "will" and "quickly." Now, what's the difference? Well, we'll get to that, but-- or like how long are we going to explain these things, and we'll get to that too. This is just meant to convince you that maybe they're different things-- they have different properties. OK, cool. So far, so good. Now, we need to do some more merging. Here's what we'll do. We'll merge T with the verb phrase, and then we'll merge "the girl" with the result so that the entire sentence will be a TP-- a tense phrase. Everything is a phrase. There are noun phrases, there are prepositional phrases, there are verb phrases. And now, we have a new kind of phrase-- a tense phrase-- which is the result of merging the tense "will" with a bunch of things. There-- that's how we'll do sentences. Cool. That was easy and painless. Yeah? AUDIENCE: What does topicalize mean again exactly? NORVIN RICHARDS: Oh, it's the name that we've got for taking constituents, putting them at the beginning of the sentence, and it adds some kind of oomph to the thing that you put at the beginning of the sentence. So if I say things like "in the garage, the girl will find the book," that's a sentence of English. It's not the most natural way to say it-- you would only say it if you wanted to put some kind of special emphasis on the garage. And we will not ever, in this class, try to be more definite than that about the meaning of topicalization. But it's a handy phenomenon for finding constituents. Yeah? AUDIENCE: Every node on the tree is a constituent? NORVIN RICHARDS: Yes.
AUDIENCE: What is "will find the book in the garage"? NORVIN RICHARDS: It's a T bar. AUDIENCE: Well, yes, but we have constituency tests that we use to tell whether something is a constituent. Does that work for a tense phrase? NORVIN RICHARDS: Yeah-- OK, that's a good question. [LAUGHTER] That's a good question. Let's see. So let me-- so let's see. You would like to know-- is this what you're asking me? Let's see. We've got three things-- the noun phrase, "the girl," and the T "will," and the verb phrase find "the book in the garage." Oh, here. We have a technical term for this. It's called a triangle. It's used when you do not wish to draw the inside of a structure. So this is just an abbreviation for that. Is that OK? So I've just used a triangle here so that I won't have to draw that entire phrase. What I really need is like a stamp or something that I can use on the blackboard that will create these phrase structures for me. We'll tell you on problem sets if we want you to not use triangles. Sometimes, we'll ask you not to use triangles so that we can make you show us everything that you think about the insides of structures, but for now, I'll use a triangle. So I just casually said, hey, I've got a good idea-- we'll merge "will" first with this, and then we'll merge the result with that. If we're doing merge the way we've been doing merge, there's only one other option, which is this. And so you would like to know, how do we know that it's what's on the slide? How do we know that it's not what I've got here on the board? Just so nobody gets confused, this is not the right tree. That's the right tree, but now let me see if I can come up with anything that will convince you that it's the right tree. 
First of all, if we tried to use topicalization, which is the test that we had before-- if we tried to use it on the tree here and we tried to topicalize T-bar, we'd end up with-- "I said the girl would find the book in the garage and will find the book in the garage, the girl." Now, as I mentioned before, my grasp of English is now a little shaky after years of abuse, but I think that's not great. Do people agree? So if there is a constituent T-bar here-- I tried to soften you up for this possibility earlier-- we need it to be the kind of constituent to which topicalization can't apply. That is, topicalization doesn't get to apply to everything. And I think I also floated the possibility that it cares about whether you're looking at a P or a bar-- that you can't topicalize bars, you can only topicalize P's. This might be another case where we want to take that possibility seriously. Yeah? AUDIENCE: It seems like that-- the one on the board works better as a constituent, because you could be like, "Who will find the book in the garage?" and then say "The girl will." But if you say "What will the girl do," you can't say "Will find the book in the garage." NORVIN RICHARDS: You can't say "will find the book in the garage"-- if-- I am astonished by what you have just said. If I've just-- you've just said "The girl will find the book in the garage." Can I say "Will find the book in the garage?" Some think yes, others think no. We'll have a wrestling match outside later. AUDIENCE: That makes sense. But if you're astonished, you could say "The girl will find the book in the garage"-- "The girl will?" NORVIN RICHARDS: (QUESTIONINGLY) "The girl will?"-- ah, yeah, that's true. You guys are discovering a phenomenon called ellipsis. Ellipsis is a phenomenon whereby you can take chunks of the sentence and fail to pronounce them. You can remove chunks of the sentence. So if I ask you a question like, "Who will find the book in the garage?" you can say "The girl will." 
Or I can say things like "The boys will find the book in the garage and the girl will too." All of these-- the way we would talk about them in terms of trees, like the one that's on the slide there, is in terms of what's called VP ellipsis. It is this process of silencing constituents-- silencing phrases-- that has applied to the verb phrase when you say things like "Who will find the book, (blah, blah, blah)?"-- "The girl will." So there's a blank here, and the blank is understood as being the same as the preceding verb phrase. And VP ellipsis is a huge phenomenon, which we will get a chance to talk about actually because it's going to be useful in a couple of weeks for other phenomena. Notice-- I'm trying to think if there's a way that I can convince you. And so that's not where you were going with this. Where you were going with it was, hey, look, "The girl will" is a constituent. There is a thing that we want to have be a constituent-- we want it to be a T-bar like here, whereas what I'm saying is, no, we want the structure on the slide, and there is such a thing as VP ellipsis that allows you to fail to pronounce verb phrases. Let me see if I can come up with another test that will help you believe in one of these over the other. There are other phenomena that people use to diagnose constituent structure, and I'm trying to think if there are any that I can introduce painlessly. You-- ah, no. So I was about to introduce coordination, but I think I won't. You know what? Your TAs will prove to you that it's the tree on the left and not the tree on the right in the recitation section. So definitely go to the recitation session this week, because your TAs will help you with this. I don't want to go too far down this path because there are-- if I try to show you the constituent tests that would help us to find that, I think we'll be in trouble. There is one-- well, maybe I can do this now. Can I do this now? I'm going to get in so much trouble. This is not going to go well.
I'm going to regret doing this, and we're almost out of clock. Tell you what-- I will prove to you-- you don't have to-- so you should go to recitation session, but I will prove to you in the next class that that's the right tree and this is the wrong tree, but it's going to require some setup. And if I try to do the setup now, we'll be in the middle of the setup at the end of class, and we'll all go away unhappy. So this is a really excellent question, and you're right to be skeptical and alarmed. But everybody write the tree that's on the board here, and I will also reproduce it on the next set of slides for Thursday. And then you will go into spring break understanding why I think it's that tree and not this tree. But if I tried to show it to you now, I'll just hurt all of us. Yeah, good questions. Are there other questions about this? I'll very quickly move away from this issue. So I want to go back to the verb phrase just as an exercise in making sure that we all know what we're doing. So we're going to do "I will tickle the child with the feather." Ambiguous sentence-- does everybody agree? So what are the things that it can mean? Can somebody paraphrase them for me? Joseph? AUDIENCE: Either there's a child that is holding a feather and you're going to go tickle him or her, or you're going to go tickle the child, and you have a feather in your hand, and you're going to use the feather. NORVIN RICHARDS: Yep, nicely paraphrased. So we can get both of those things. We think that we can build this in a couple of different ways. So "I will tickle the child"-- we have two places to attach "with the feather." So two trees for this. I'm going through this fast because we're running low on time, but is this all clear? This is just the same kind of tree that we've been drawing up until now. And constituency tests, like topicalization, tell us that we're right to associate these two different trees with two different meanings.
So these trees are basically just like the trees we did-- in fact, they're identical to the trees that we did for "I shot an elephant in my pajamas." The prepositional phrase that's at the end of the sentence can either be part of the noun phrase that's also at the end of the sentence or it can modify the verb phrase. It depends on whether we're doing, in this case, the tickling with the feather or whether it's the child who has a feather and I'm going to tickle them some other way-- maybe with my fingers. And then if we do topicalization, that makes the meanings collapse. So "The child I will tickle with the feather" only has the tree on the left. "The child with the feather I will tickle" only has the tree on the right and it only has the meaning on the right. Does everybody see that? This is a review of where we are so far with the constituent structures-- constituency tests that we have. And as I said, I will reveal another one to you on Thursday, which will help me to get rid of this tree. Yeah? AUDIENCE: Oh, so that second sentence-- I think this is a feature of English grammar, which is that you could understand "the child with the feather" [INAUDIBLE] as a nonrestrictive modifier. So that I have a child with a feather and I will tickle-- NORVIN RICHARDS: "I will tickle people generally. The child with the feather, I will tickle." Really? Maybe. If so-- AUDIENCE: "I, the child with the feather"-- NORVIN RICHARDS: "I, the child with the feather, will tickle." AUDIENCE: --[INAUDIBLE]. NORVIN RICHARDS: I think I want "the child with the feather" to be after "I" for that. I think that was Kateryna's point. AUDIENCE: I am the child. NORVIN RICHARDS: "I, the child with the feather"-- "I, the professor in this class, demand cookies." I think that's grammatical-- even plausible. Cool. So yeah, good point. There could be other structures to think about here. Because we're almost out of time, let me talk quickly through some terminology, and then I'll let you guys go.
First bit of terminology-- when people are talking about trees they often use feminine kinship terminology to talk about relationships between nodes in the tree. So if I want to say something like "I will tickle the child," we have pairs of nodes that were merged together to form larger objects. Like in this tree, we merged the T-- "will"-- with the verb phrase-- "tickle the child"-- to form the T-bar that's just above them. What we say about those is that they are sisters. So if you have two things that are the two things that were merged together to create one larger thing, those two things are sisters. Similarly, we say that VP is the mother of V and NP. And we also use the word "daughter"-- sorry. We also use the word "daughter." Although I don't have it on the slide, I should add it. So you say that the V is the daughter of VP. That's as far as people go with kinship terms, so you don't hear about grandmothers, or aunts, or cousins, or anything like that. But people use those terms to talk about relationships between nodes in the tree. The relationship of motherhood is-- so the relation that is created between two nodes that are merged and the new node that you create via merge, that's the relationship of what's called immediate domination. So the verb phrase immediately dominates the V and NP. That's just another scarier way to say that the VP is the mother of the V and the NP. The VP is the thing you created by merging V and NP together. And dominate-- so immediately dominate is kind of the basic relation in trees. It's the relation that's created by merge. Dominate is the transitive closure of immediate dominate. So VP dominates everything it immediately dominates, and everything those things immediately dominate, and everything those things immediately dominate all the way down to the bottom of the tree. So that verb phrase dominates the verb and the NP and also the D and the N-- "the" and "child." 
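The relations just defined-- sisters, mother and daughter, immediate domination, and domination as its transitive closure-- are easy to see if you encode the tree as a data structure. Here is a minimal sketch in Python; the nested-pair encoding of "I will tickle the child" is my own illustration, not anything from the course materials.

```python
# Each node is (label, children): children is a list of nodes,
# or a string if the node is a leaf dominating a single word.
TREE = ("TP",
        [("NP", [("N", "I")]),
         ("T'",
          [("T", "will"),
           ("VP",
            [("V", "tickle"),
             ("NP", [("D", "the"), ("N", "child")])])])])

def daughters(node):
    """The nodes this node immediately dominates (its daughters;
    they are sisters of one another)."""
    label, children = node
    return children if isinstance(children, list) else []

def dominates(node):
    """Domination: the transitive closure of immediate domination."""
    out = []
    for d in daughters(node):
        out.append(d)
        out.extend(dominates(d))
    return out

def words(node):
    """The string of words a node dominates."""
    label, children = node
    if isinstance(children, str):
        return [children]
    return [w for d in children for w in words(d)]
```

Here `daughters` of the VP returns just its two sisters, V and NP, while `dominates` also reaches down to the D and N inside-- "the" and "child"-- matching the lecture's point that the verb phrase dominates everything that was there when it was put together.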
It dominates the things that are below it that were there when it was put together. And I've been using this word "constituent," but now we can talk about constituents using this terminology. We'll say that something is a constituent if all and only the words in that thing, alpha, are dominated by a single node. So there is a constituent-- "tickle the child," for example-- that verb phrase-- because there is a verb phrase that dominates just those words, "tickle the child." There is no constituent "will tickle" in this tree, because there's no node that just dominates the words "will tickle." Let me-- because we are just about out of time, but not quite-- and this would get us into something new-- let me get started on getting rid of this tree, and then we'll pick it up here next time. Let me show you some data. Consider sentences like "She likes Mary." Let's start with that one. No-- here. Let's do "She thinks I like Mary," and for that matter, "She thinks I"-- no-- "Mary thinks I like her." These are two grammatical sentences in English-- "She thinks I like Mary" and "Mary thinks I like her." In the second sentence, we've got "Mary" and we've also got "her." Can "her" refer to Mary? AUDIENCE: Yeah. NORVIN RICHARDS: Yeah. It could also be somebody else, but it could be Mary. We'll mark that by putting a little subscript on "Mary." I'll call it subscript I, and we'll put the same subscript on her. So "her" could refer back to "Mary." It could also refer to any other person-- any other female person. "She thinks I like Mary." Can "she" be Mary? No. So here, she-- Mary can't be the same person. It would be bad for "she" to refer to the same person as "Mary." Why is that do you think? Yeah? AUDIENCE: Because that's [INAUDIBLE] "I like Mary" is a constituent, and so when I define "Mary" inside of the constituent-- oh, shoot-- this is not working out. NORVIN RICHARDS: No, no. AUDIENCE: [INAUDIBLE]. NORVIN RICHARDS: Does anybody have a simpler theory? Yeah? 
AUDIENCE: Is it simply because in the first sentence, Mary hasn't been established as a person yet [INAUDIBLE]?? NORVIN RICHARDS: So good. That's where I was hoping the first person would go. Joseph is already going more sophisticated. So one theory we could have would be "Mary" can't refer to somebody-- so "she" can't refer to somebody who hasn't been mentioned yet. She can't refer to somebody who's later in the tree. Here, "her" is referring to somebody on its left. Notice-- think about a sentence like "Everyone who knows her likes Mary." Can her refer to Mary there? Yeah. So not to immediately dump on your hypothesis, but I'm immediately dumping on your hypothesis. So pronouns actually can refer to things that are later in the tree. Sorry, let me say that again more coherently. Pronouns can refer to things that are later in the sentence. You can tell I'm a syntactician, because when I think of sentences, I think of trees. So the relationship that holds between "she" and "Mary" here-- between "her" and "Mary" here-- the thing that tells you whether a pronoun can refer to a person-- it can't just be about whether the person's name is later in the sentence. It's going to turn out to be about the structure of the sentence. So we'll develop a theory of these kinds of facts, which will be another test for constituent structures. We'll develop a theory of that, and it will tell us that this tree is wrong. But I'll first finish developing that theory, and then we'll be able to use it as an additional test for structure. That's the incentive to come to class next time. You'll get to see how that's going to work. Let's stop there unless people have any questions about this stuff. Good. Let's just pause it right there, because if I start trying to show you this any further, we'll just all get confused.
MIT 24.900 Introduction to Linguistics, Spring 2022
Lecture 8: Phonology, Part 1
[SQUEAKING] [RUSTLING] [CLICKING] NORVIN RICHARDS: All right. So last time, we were talking about English plurals. And we went through this fairly fast so I wanted to talk about it again a little bit just to make sure we're all on the same page. The descriptive fact is that English plurals, putting aside "children," and "oxen," and "fish," and "sheep," and other things like that, there's a general plural that's used most of the time in English that we typically spell with an S. But you've heard me talk about how I feel about English spelling. It's pronounced in at least three different ways, sometimes as a "z," like in "dogs," sometimes as an "s," like in cats or giraffes, and sometimes as a schwa followed by a "z," like in "bridges," or "brushes," or "messes." And so descriptively, it's schwa "z" after what's called a strident. So the stridents are a particular class of fricatives that have a lot of high frequency noise. They are "s," and "sh," and also the affricates that end in those fricatives, the "chu," and the "juh," "tsuh" for that matter. Those are all stridents. And so you get schwa "z" after a strident, and then if you're not looking at a strident, it's just voicing. You get "s" if the sound before the plural is not strident, is voiceless, and you get "z" if it's voiced. And we had talked about all that before. And what I asked you to do last time, and I just want to go through this exercise again real quick just to make sure we're all following it. Oh, I get to take this off. I keep forgetting that. Want to go through the story again real quick. Forget for a second about the strident thing. We said there are at least two imaginable stories. We're in a room full of very imaginative people. There are lots and lots of imaginable stories, but let's concentrate on two that seem particularly attractive. One would be, the plural suffix is underlyingly an "s,", and after a voiced sound, it becomes a "z." 
So it starts off as an "s,", that's the one you're getting in "cats." And when you put the "s" after the "g" in "dog," it voices to a "z." And the other imaginable story would be the opposite. It would say the plural is underlyingly a "z," and after a voiceless consonant it becomes an "s." So that would say the "z" that you're getting in "dogs" is the basic plural suffix. And it's devoicing in words like "cats" because it's preceded by a voiceless consonant. And we convinced ourselves that one of these is-- or I tried to convince you, let me put it that way, that one of these is more attractive than the other, because one of these allows us to treat the sound-changing question as a general property of English phonology, rather than as something we must specifically state about this morpheme. So I'll just go through this again. On a theory where the suffix is an "s" and it voices after a voiced sound, we ask ourselves, could we make that a general fact about English? That is, is it generally true that if you have an "s" at the end of a word, you can't have a voiced sound before it? Are there any English words that end with an "s," and before the "s" there's something voiced? Can anybody come up with one, a word like that? Do people see why I'm looking for one? Yeah, Faith? AUDIENCE: What about "ribs"? NORVIN RICHARDS: So ribs, if I were going to write that in IPA, would be that. That's a "z" sound. So is there a word where there's an "z" sound that's, right before it, there's something voiced? Yes? AUDIENCE: "Tense." NORVIN RICHARDS: "Tense." Yeah, that's a good one. So that's "tense" in IPA. I guess if I'm being more specific, there's an aspirated "t" there, "tense." OK, so that's one where that could be the right theory, but it would have to be a theory that made a specific reference to this morpheme that said, special thing about the plural morpheme, it voices after a voiced sound. 
Because there's nothing about English phonology that prevents us from having words that end in an "s" immediately preceded by a voiced sound. Suppose we do it the other way around. If the plural is underlyingly a "z" and then we say it devoices after a voiceless consonant. Can we think of any English words that end in a "z," and before the "z," there's something voiceless? Sorry, you have another comment? AUDIENCE: Are we not thinking something like plushes, or something that's like [INAUDIBLE]?? NORVIN RICHARDS: OK, so plushes, suppose I write that down, "pluh," "shes." This is an instance of what I asked you to not think about for a while, but you're a rebel, I'm sorry. Here we've got the plural suffix and it's after a strident, so we're getting this schwa that's put in here because there's a strident before it. I started this by saying, let's put that aside for now and we'll come back to it, so good point. But actually, this isn't an example of a word that ends in a Z and before there's something voiceless because this is voiced. So we're looking for a word that ends in a Z, and before the Z, there's something voiceless, sort of the opposite of tense. Are there any words like that? I heard an "uh." AUDIENCE: "Blitz"? NORVIN RICHARDS: Sorry? AUDIENCE: I don't know, I was thinking "blitz," but that's [INAUDIBLE]. NORVIN RICHARDS: "Blitz"? OK. Well, so that's a word that when we spell it, we spell it like this. AUDIENCE: But it still sounds like an "s." NORVIN RICHARDS: Yeah. Right, exactly. When we pronounce it, we pronounce it like that, I think, "blitz." Yeah, we spell it like that because we got it from German. Yeah? OK. So yeah, no. There aren't any English words like that. We don't have words like that. There aren't any words like "cat-z." That's not a possible English word. 
And so if we're willing to say that the plural starts out as a "z," well, then we can handle this part of the-- not Raquel's part, not the part about "brushes" or "plushes," where there's a strident before it. But putting the stridents aside, we can handle this fact that the plural suffix is sometimes a "z' and sometimes an "s." We can make that part of English phonology generally. We don't have to make any statements specifically about this suffix, which is pleasing. So maybe that's an argument for making that move. So yeah, hypothesis two, the plural is underlying a "z," and it devoices after voiceless consonants. That can be part of a general English fact, which is English doesn't have words that end in a voiceless sound followed by a voiced sound. Similarly, after a strident, we get schwa "z," "brushes," "messes," "latches," "mazes," "plushes," that's Raquel's example. And English doesn't have words that end in two stridents. So the plural of "plush" is "plushez." It isn't "plushz." But we generally don't have sequences like that. And actually, this is an instance of we don't have "z" after a voiceless sound. But we also don't have-- yeah, what would we be looking for? I'm trying to think of something that ends in "dz." Is this a real word, or is it just something people use in crossword puzzles? AUDIENCE: I think it's a tool. NORVIN RICHARDS: Tool, like an ax, right? So if I get to pronounce this like this, the plural of it can't be "adz-z," right? It's "adzez," I guess. Yes. Actually, I don't know why I went for obscure tools. I could have done "mazes," which we have up here on the board. OK, yeah. AUDIENCE: "Rose"? NORVIN RICHARDS: Sorry? AUDIENCE: "Rose." NORVIN RICHARDS: "Rose." "Rose" would have done even better, yes. "Rose." Yeah, plural is "roses." So English doesn't have words that end in multiple stridents, and so yeah, we're putting a schwa there. 
Not something that we have to state maybe specifically about this morpheme, it's just a general fact about English phonology. So yeah, we can state rules that say things like an obstruent-- I'm sorry, I haven't introduced you to the word "obstruent." Obstruent is a word that covers stops and fricatives. Obstruents are sounds that create a build up of pressure in the oral cavity. So stops and fricatives are obstruents. Liquids, and glides, and vowels, and what else, liquids, and glides, and vowels, and nasals are not obstruents. So oral stops and fricatives, those are obstruents. Obstruents become voiceless after something voiceless. So underlying "cat-z" becomes "cats." And we have another rule here that inserts a schwa, and this is really just an exercise in showing you what phonological rules can look like. That second rule inserts a schwa between two stridents at the end of a word. So "inserts a schwa," the way you say that with these rules is "nothingness becomes a schwa." Yeah, I always liked that way of describing insertion. So you have this place where there isn't anything and it turns into something, namely a schwa, between stridents, so "brush-z" becomes "brushez." So we can state these rules, but we can think of these rules as things that English does in order to enforce its general conditions on what words can look like in these particular cases. Does that make sense so far? Questions about any of that? Oh, yeah, and then, if the only goal-- sorry, I should have remembered I was going to do this here. If the only goal of these sound changes is to create words that obey general conditions on English phonology, then we might ask ourselves, suppose you start off with that word. What is that word I've got down there at the bottom? Forget about the plural suffix on it. What's the singular? AUDIENCE: "Brush"? NORVIN RICHARDS: "Brush." Yeah. So suppose we start with "brush" and we're going to add the "z." Why not change the "z" to an "s"? 
After all, it's right next to a voiceless sound, giving you "brushs." And then they insert a schwa, giving you "brushes." That's not what you do. And we could wonder, why not, particularly if I've posited the two rules that I've got here, right? So I've got a rule that is going to devoice the "z" if it's at the end of a word and after something voiceless. And I've got a rule that inserts schwa between stridents. If I applied that first rule first and then the second rule, I would get this consequence. Does that make sense? And there are at least two ways of thinking-- sorry, you're making hand gestures that suggest you can see where I'm going with this-- There are at least two ways of thinking about this. There's the mechanical way, which is to say, well, we're learning something about these rules. They don't apply in that order. You don't do the first rule and then the second rule. You do the second rule and then the first rule. That's one way to think about this. There's another way to think about it, though. Look, what we've been saying right along is these rules are things that English does in order to make these plurals obey general English rules about how sound can be put together. I'm waving my arms a lot because I'm avoiding a technical term. And that's dumb, so let me just teach you a technical term. Phonotactics, this is the conditions on how sounds can combine, so what the rules are for which sounds can be where in a word. And I've now several times in the course of this lecture, I've almost said the word "phonotactics," and then stopped and said, wait, I haven't taught them that word yet. So I've been saying things like "the rules for how words can combine." So now you know that word, too, "phonotactics." Rules of English phonotactics include things like "English words can't end in a voiced sound that's immediately preceded by a voiceless sound," or "English words can't end in two stridents in a row." 
So we have these general conditions on English phonotactics, these conditions on how sounds can combine. And those two rules, those two things that happen to sequences of sounds are ways of enforcing those general conditions on phonotactics. Does that make sense? If we think about it that way, that is, if we think, there is, on the one hand, the rules-- the descriptions of the things that are happening. And then there is, on the other hand, the reasons why these things are happening. Does that make sense? So the reasons why these things are happening are things like, English doesn't like two stridents at the end of a word. That's why that second rule is happening. That's what it's for, it's to stop that from happening in the case of the English plural. So we were going to draw this distinction then between phonotactics, which say there are some combinations of sounds that are bad, and things like these rules which say, and if you have these combinations of sounds, here's what you ought to do to fix the problem. If you think about it, if I just tell you, having a word that ends in two stridents is bad in English. So if I start off with "brush"-- oh, dear. Now I have to write an "r" upside down. That's always hard. If I start off with "brush" and we're saying that the plural is underlyingly a "z," what we in fact do is to introduce a schwa. And we think the reason that we're doing that is, well, we're going to make reference to the fact that English generally doesn't allow words to end in multiple stridents. But if our only goal is to make sure this doesn't end in multiple stridents, well, there are lots of ways we could fix that problem. We could get rid of a strident. The plural of "brush" could be "bruhz," or "brush." We could introduce something other than a schwa. We could introduce a different vowel. It could be "brush-eez." We could introduce a consonant. It could be "brushts." Actually, that's not so easy to say either. "Brush-dz." Yeah. 
We could introduce some other consonants that would work better than these. Heck, we could introduce lots of consonants and vowels. The plural of "brush" could be "brush-glors." You can just introduce a whole chunk of stuff in here. There's all kinds of stuff you could do to fix that problem. This is what you, in fact, do. So when we say we're going to cover these facts about the different allomorphs of the English plural with general English phonology, is what I said, we got ourselves halfway there. We came up with these generalizations about English phonotactics. English doesn't like words that end in two stridents, let's say. And then we also have to have this solution to the problem, insert a schwa. The account of what's happening in English plurals needs both of those components. Does that make sense? Some of you are looking at me as though I'm making sense. It's always hard because you're all wearing masks. So I appreciate it, you've spent all this time working out ways of communicating emotions just with your eyebrows, which is great. All of you are waving your eyebrows in a semaphore-like fashion at me. I appreciate it a lot. So getting back to this question, why not first devoice the "s," and then introduce the schwa? The answer to that might be, well, that first-- so I said, there are two ways of talking about the answer to that. One is the mechanical way, which says, no, that's not the order these rules apply in. That's the way we had talked about this before. We went through these Lardil cases, and I said, sure looks as though we need to be willing to say, there are all these things that happen to these words, but we need them to happen in order so that we get the right answers. And that works. And in this particular case, we could say that. It would also work. 
We could say, the reason you don't do what I've got up there on the board, the reason you don't first devoice the "z" and then introduce the schwa is that that's not the order the operations take place in. But maybe this way of talking in terms of phonotactics is also helpful here. So we could say to ourselves, the reason that you don't first do that first thing is that it doesn't solve all the problems. Inserting a schwa solves all the problems at once. By inserting a schwa, you now no longer have a word ending in two stridents, and you also no longer have a word ending in a voiced sound preceded by a voiceless sound. Because the "z" at the end is now preceded by a schwa, which is voiced. So maybe there's some kind of principle of minimal repair, maybe, that says, look around at the various things you can do. Do the most effective one. That could be a way to talk about this. We're going to need to look at more cases to try to decide. But the point of going through the English plural in this level of detail was to introduce you to this idea. And having introduced you to this idea, we're now going to spend some time with this idea, this idea that when we're looking at a phonological problem, it's useful to think about it in these two halves. There's, on the one hand, what's the problem? Why are sounds changing? And on the other hand, what are the sounds doing? What's the fix? What's the repair? And we want to think about both of those things. Yeah. So in English, we've talked about a couple of kinds of problems that English plurals can create, and now we're also talking about some specific repairs. And we raise questions for ourselves. Why those repairs? Why not other repairs? We'll come back to that issue. One consequence of all this is that we're able to apply the rules of English plural formation to words that don't have the phonology of English words. Here are two words from other languages. Anybody want to pronounce them for us? Yeah. 
AUDIENCE: "Bach" and "rouge." NORVIN RICHARDS: Yeah, "Bach" and "rouge." Yeah. So with "Bach," if I'm being unbearably pretentious and I want to demonstrate for you the fact that I speak some German, I might pronounce the name of the great family of German composers, the Bachs. It was a big family, generations of Bachs writing all kinds of things. I might pronounce it with a velar fricative. The standard English way to pronounce it is with a velar stop because we don't have velar fricatives. But if I feel like being pretentious, I'll pronounce it with a velar fricative, "Bach." And then I've already done this, but what's the plural? "Bachs," right? It's with an "s." Why is it with an "s"? Well, because the repairs that we've been talking about are repairs for specific problems. So by hypothesis, the plural of this starts off as this. And now you have a word that ends in a voiced sound preceded by a voiceless sound. And the standard English repair for that is to devoice, so we change this "z" to an "s." I get "Bachs." Shows us that what we've got in our heads is not just-- so we've been going around this fact for a while now. What we have in our heads is not just a list of English nouns in their singular and plural forms. What we have are rules, algorithms for making plurals. And you can apply those algorithms to words that are not English words that have sounds in them that we don't have in English, like "Bach," or "rouge," which is the French word for "red," but I guess has been borrowed into English as a name for a component of makeup. If you're talking about a bunch of different kinds of rouge, which of these various kinds of rouge do you like best? Well, I like those. What's the plural of "rouge"? "Rouges" [with schwa and "z"]. "I like those rouges over there. I like these lipsticks better than those rouges." What did I do here? I inserted a schwa. Why? Well, because there's a strident there. It doesn't matter that it's not an English strident.
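The derivations just walked through for "Bachs" and "rouges" are algorithmic, so they can be sketched as a small program. This is only a toy illustration under stated assumptions: the phoneme symbols, the "@" sign standing for schwa, the "x" standing for the velar fricative, and the strident and voiceless sets are all simplifications, not a real inventory of English sounds.

```python
# Toy sketch of the English plural rules from the lecture: the plural
# starts out as /z/; insert a schwa after a strident, devoice to /s/
# after a voiceless sound. Symbols here are illustrative assumptions:
# "@" stands for schwa, "x" for the velar fricative in "Bach",
# "zh" for the final strident of "rouge".
STRIDENTS = {"s", "z", "sh", "zh", "ch", "j"}
VOICELESS = {"p", "t", "k", "f", "th", "s", "sh", "ch", "x"}

def plural(stem):
    """stem is a list of phoneme symbols; return the plural form."""
    last = stem[-1]
    if last in STRIDENTS:
        # Repair 1: a word can't end in two stridents, so break them
        # up with a schwa ("brush" -> "brush@z").
        return stem + ["@", "z"]
    if last in VOICELESS:
        # Repair 2: a voiced /z/ can't follow a voiceless sound, so
        # devoice it ("Bach" -> "Bachs").
        return stem + ["s"]
    return stem + ["z"]

print(plural(["b", "r", "uh", "sh"]))  # schwa inserted
print(plural(["b", "a", "x"]))         # devoiced to "s"
print(plural(["r", "u", "zh"]))        # the French strident still triggers the schwa
print(plural(["d", "o", "g"]))         # plain voiced stem just takes "z"
```

The point about "rouge" falls out of the ordering of the checks: the strident test runs first, so the schwa repair fires even though "zh" is not a native English sound.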
We don't have that, except in words that we borrowed from French, like this one. Yeah? OK. All right. So this has all been an attempt to-- whoops, I spelled the "r" right side up again. This has all been an attempt to get you to take seriously the idea that it's useful when you're looking at a phonological problem, a phonological process, to ask yourself both, what is happening? So what has changed? What were the original things that I was combining? Let's say if I was adding an affix to a word, and what has happened to them? What form do they have now? Ask yourself that. And also ask, why? So what general properties of the language are we trying to enforce with this sound change? Yes? AUDIENCE: I was thinking about how you were saying that certain sound combinations just don't sound right. And I was thinking about how if you didn't know what part of speech the word "pore" is, then if you were like, OK, I'm going to pluralize it, then it would sound weird to say "por-s." But then if you knew it was supposed to be an adjective, then "porous" would actually sound kind of fine. So maybe there's some element of more structure that needs to be known to define what sounds good or weird? NORVIN RICHARDS: I think I might see what you mean. What was the word that you started off with? AUDIENCE: "Pore." NORVIN RICHARDS: "Pore." You mean that word? Yeah. And so we have those in our skin, little holes in our skin. And the plural is that, right? Yeah. Oh, I see. But you're saying-- so yeah, there's an adjective, "porous." Is that what you're thinking of? Yeah. And yes, so I'm trying to think about whether I can think of any adjectives that end with "-orz." I have "pores" on the brain, and "s'mores," and "floors," though everything I'm thinking of off the top of my head is plural. There are some generalizations about phonological differences sometimes between, occasionally, under certain circumstances, things of different syntactic categories. 
So there are languages that draw distinctions between, say, nouns and verbs, or nouns and adjectives. And nouns have a particular phonological signature, and adjectives have a different one. We'll actually get a chance to talk about an example like that when we get to Japanese, which has something like that. It's common, for example, for-- well, yeah. I'll leave it there. When we get to Japanese, we'll see something a little bit like that. And so maybe, getting back to your original comment, your original comment was, I'm talking as though what makes something look like a good English word is just going to be true of all English words. But maybe there are cases where we have to ask, does this look like a good English adjective or a good English verb? Yeah. And people have found things very vaguely like that, and I'll talk more about it when we get to Japanese. Yeah, nice point. Other points people wanted to make before we do Yawelmani? Yeah, Joseph? AUDIENCE: I'll ask you a question about transcription of "porous." If you use the power, would you use-- why would you use [INAUDIBLE] backwards C shape? NORVIN RICHARDS: Oh, you would use this? I think I would be tempted to pronounce that "pah," "ah," "pars." That's an "ah" sound, and so I guess that's why I wanted to spell it like this. I thought you were about to ask me why I'm not spelling it, say, like this, "porz" which I guess I could also do, like indicate the aspiration. I'm going to quickly erase this before anybody has any worries along any of these lines. Oh, and this actually brings me to something. I was raised, for some reason, to talk about decisions like this, decisions about whether to spell aspiration on English voiceless stops, to talk about that as a choice between what I was taught to call a "loose" and a "tight" transcription. But there is a more general term, which turns out to be used a lot more. I don't know who did this to me. The terms that people use more commonly are "broad" and "narrow." 
One of the TAs pointed this out to me after last time. He came up and said, why are you calling them "tight" and "loose"? I'm like, because I'm weird. I'm sorry. So "broad" and "narrow" is what you'll see more often, if you look in the literature. So I'm glad you asked your question because it allowed me to come clean about this. Other questions about why I am weird? Yes? AUDIENCE: Which one is broad, and which one is narrow? NORVIN RICHARDS: I'm sorry. Narrow is when you are trying to represent every fact. Broad is when you leave out facts that are predictable. So a broad transcription of "tense" would leave out, say, the aspiration on the "t." Because an English voiceless stop in that position is going to be aspirated, whereas a narrow transcription is trying to come as close as it can to a spectrogram. It's going to represent everything about the speech signal. I was taught to call that tight, but apparently I was taught by strange people. Yeah. I knew that I was taught by strange people. I just didn't realize how strange. Any other questions about that? All right. Yawelmani. OK, so I'm sorry. Let me do the intro to this slide again. So what I just did by taking you through the English plural forms was to try to introduce you to the idea, get you used to the idea which we're now about to see in action, again, in a different place. The idea that when you are looking at a phonological problem, it is useful to think both about what exactly is happening, what sound changes are happening, so in English, here we are inserting schwas or devoicing consonants. That's one kind of thing that's happening. It's useful to think about that. And it is also useful to ask yourself, why are these things happening? Can I think of this as a way of enforcing a general pattern in the language? It's useful to think about both of those things. So I'm about to show you another example of that in action. And let me just spoil the surprise for you right now. 
If there's anybody here who hates spoilers in movies, you should cover your ears or something. I'm going to show you a bunch of data from a language originally spoken in Northern California. I believe it's no longer spoken. It was called Yawelmani. And it was a language that had a general ban on having three consonants in a row. Now we're going to see that general ban in a bunch of different places with different fixes in different places. So the recognition that these different things that Yawelmani is doing under different circumstances are all instances of this ban was kind of a discovery, and people saying, yeah, it's doing this, and it's doing this, and it's doing this. And what all of those things achieve is avoiding sequences of three consonants in a row. So you guys have an advantage on the phonologists and the Amerindianists who were studying Yawelmani. They were mostly concentrating on what's happening in this form of the verb, what's happening in that form of the verb. Yeah. But you have the advantage that you have seen this slide. So remember what this slide says, and now let's look at some Yawelmani. Here are a bunch of Yawelmani verbs in their future forms. I don't speak Yawelmani so I won't try to pronounce these for you. But you can see that there's a suffix here, which is spelled E-N. And that suffix is the future tense suffix in Yawelmani. So far, so good? Now here's the gerund. The gerund is a form of the verb. Doesn't matter what the gerund is, but just so you know, a gerund is a form of the verb that allows it to be used as a noun, turns it into a noun. So when you're saying things in English like "swimming is fun," what you've done is take the verb "swim," and by adding "-ing," you've turned it into a gerund, that is a noun, the kind of thing that can be the subject of a sentence like "swimming is fun." These are gerunds in Yawelmani, and as you can see, Yawelmani gerunds involve a suffix, "-taw." 
Except in that last sequence of verbs, "sing," "pulverize," and "fight," you don't just add "-taw." You also insert a vowel, the vowel that I'll pronounce "ee", the vowel that's written with the letter "i," inside the original verb. So you don't get "ilk-taw," you get "ilik-taw." And you don't get "logw-taw," you get "logiw-taw." And because you saw the last slide, you know why Yawelmani is doing this. Yawelmani is doing this because it hates having three consonants in a row. So it's OK to add "-taw" to "mut" (swear) or "xat" (eat). I said I wouldn't try to pronounce Yawelmani but I guess I lied. Because there you're just putting two consonants in a row, and that's fine. But "sing," "pulverize," "fight," those verbs underlyingly end in two consonants. And if you add "-taw" after them, you're going to have three consonants in a row, and that's bad. Yawelmani doesn't like three consonants in a row, so you introduce this vowel. Now I just said that's why Yawelmani is doing this. At this stage, you could just look at this and go, well, no. Surely you're overreacting. Look, there's a rule. If you're forming the gerund, you add "-taw," and if the verb ends in two consonants, you insert a vowel between the two consonants as well. But wait, there's more. So here's one rule. Nothingness changes into an "i," so you insert an "i" between the first and second consonants when you have three consonants in a row. That's what that rule says. Here's another rule. Here's the desiderative. So you can take a verb in Yawelmani and add a suffix, "-hatin," that gives you a new verb that means, "want to (verb)," so "want to know," or "want to sink," refraining from wondering why you would ever want to sink. The desiderative suffix is usually "-hatin," except if you're adding it to a verb that ends in two consonants, like "speak" or "lift," well, then the desiderative suffix is no longer "-hatin." It's just "-atin." You take off the "h." OK. So here's another thing that Yawelmani does. 
If you've got two consonants and then an "h," you get rid of the "h." Again, we have two rules, both fine rules. They both work. But by just stating them, by saying, here are two rules, here are two things that Yawelmani does, we're obscuring something, we feel. Yawelmani does both of these things, but they both have the same consequence. They are two ways to avoid strings of three consonants in a row. Yes? AUDIENCE: Is this three consonants [INAUDIBLE]? I see [INAUDIBLE] double Ls. I don't really see that. Is that actually just two consonant sounds that are [INAUDIBLE]? NORVIN RICHARDS: Yeah. So it's two, yes. It is a geminate "l." I say this out of my vast, vast knowledge of Yawelmani. I'm being ironic. I don't know anything about Yawelmani, but this is a geminate "l." It's an "l" sound that's held for two beats. So yes, we need that to count as two consonants. You're raising a really good point. We have to ask ourselves, when we say it's bad to have three consonants in a row, what counts as a consonant? This had better be one. Yeah. Good question. All right, more Yawelmani. Here is a passive suffix, spelled H-N-E-L. I won't attempt to pronounce it. So here we've got the verb "to be tied" and the verb "to be hit." So you can take the verb "to tie," "t'ik'e," which has two of those cool ejectives, and you can add this passive suffix to it, H-N-E-L. Maybe all of you can imagine what's going to happen next. If we add this suffix to something that already ends in a consonant, well, then we get rid of the "h." So "to be helped" or "to be held under the arm," those are two verbs that end in consonants, and now the suffix is just "-nel." So in the last slide, we saw that "h" goes away if it's preceded by two consonants. In this slide, we see that "h" goes away if it has consonants on either side of it. Fine, we can say it that way. We can state different rules for all of these morphemes. And in fact, the first people who worked on Yawelmani did just that.
They were like, OK, the passive suffix is "-hnel," unless the verb ends in a consonant, and then it's "-nel." The desiderative suffix is "-hatin," unless the verb ends in two consonants, and then it's "-atin." You can do that but you're missing a generalization, which is Yawelmani is a language that hates having three consonants in a row. So this is like the distinction that we were drawing when we were talking about English plurals. When we were talking about English plurals, we said, yeah, English doesn't like words that end in two stridents, or words that end in a voiced sound preceded by a voiceless sound; English doesn't allow words like that. And English has repairs, things it does, it devoices, or it introduces a schwa. We're seeing something similar in Yawelmani. Yeah? AUDIENCE: So in two examples, we had "-hatin" and "-nel," which were both [INAUDIBLE]. NORVIN RICHARDS: Yeah. AUDIENCE: [INAUDIBLE], should we try to go through the sounds themselves? Or would this make someone who's trying to come up with the rules mad to consider something like, if "h" is part of the suffix, then drop the "h," instead of the position of "h," the three consonants? NORVIN RICHARDS: I'm sorry, can you ask your question again? I'm not sure I'm getting it. Can you just say again? AUDIENCE: So in "-hatin" and "-nel" example, the conclusion was about [INAUDIBLE] and the three consonants. Could it also be about whether "h" is part of the added suffix? NORVIN RICHARDS: Oh, I see. I see, I see. That's a nice point. So could you say, suffixes that begin with an "h" won't begin with an "h" if the "h" is going to create a sequence of three consonants? Yes. Yes, indeed, that covers everything I did with my last two rules, though it's not something that's easy to state with the rule formalism that I showed you. Maybe the response to that is, so we need a better rule formalism. But no, that's absolutely right.
Really what you're asking, in a way, is, is it important that those examples involved an "h" that was at the beginning of a suffix? Could I give you any examples where there was a verb that ended in two consonants, let's say, of which the last was an "h"? And then when you add a suffix, the "h" will go away. That's what these rules are predicting. And your rule is predicting something else. And I don't know enough about Yawelmani to know whether there are examples that allow you to draw the distinction. But you're absolutely right, what you'd want to go do is find out whether the suffix matters. AUDIENCE: So if you have a suffix which has "h" in the second place of a consonant-- NORVIN RICHARDS: That would be another good place to look, wouldn't it? AUDIENCE: --in that case, would the "h" drop, or [INAUDIBLE]? NORVIN RICHARDS: So yeah, let's make up a Yawelmani suffix: a consonant, and then H-E-L. I guess these rules predict that if you added that to a verb that ended in a consonant that the "h" would drop. And so that's what these rules predict. The imaginable rule that you're talking about that says "h" at the beginning of a suffix is particularly vulnerable, and you'll drop an "h" at the beginning of a suffix as a way of enforcing this general rule. And then if you can't do that, then you insert a vowel. Maybe that's a way to talk about it. That makes a different prediction about this kind of suffix. I'm pretty sure there aren't any suffixes that have that shape. I don't know much about Yawelmani but I'm pretty sure there aren't any things like that. But all of us should now go find things on Yawelmani and read them. Yeah, so this stuff is all out there. It'd be interesting to find out. But you're right, this is exactly-- what you're doing is exactly what we want to do now. So I'm showing you one set of rules that would cover these data. And it's worth asking yourself, what other kinds of rules would cover these data?
And how would we find out which of these rules are the right ones? And the only answer is the boring one, which is, go learn more about Yawelmani. But you're right, that's the first thing to do. AUDIENCE: Yeah, my main question now is more of the lines of what you said. This rule of formalism doesn't allow these kinds of rules, right, so do we just not want to [INAUDIBLE]?? NORVIN RICHARDS: No. Well, so we want to find out. Because if there are rules-- you're raising a really good point. If there are rules like that, then we want a rule formalism that allows us to state them. It's not like we would have to kill ourselves making that rule formalism either. So the rule formalism that I showed you has a symbol in it for a word boundary that says there are special things that happen at the end of a word. So there's this symbol that says, in English, if you have two stridents at the end of a word then you've done something wrong and you must introduce a schwa. No reason we couldn't introduce a symbol for morpheme boundary. And then we would use it to capture your generalization. In order to find out whether we need that for Yawelmani, well, we need to know more Yawelmani. But the question you're raising is exactly the right question, and it's how we find out whether we need devices like that. Yeah, cool. That's linguistics you're doing there. Yeah? AUDIENCE: What exactly are the apostrophes? NORVIN RICHARDS: Oh, good question. These particular apostrophes are markers of ejective stops. So the verb to hit is "tok-o" with an ejective "k." That's what those are. People remember ejective stops. All of you were ejectively stopping at me during the phonetics part of the class. This is where you make a glottal stop and express some air from your mouth during a stop by moving your larynx, shoving the air out that way, "ka." OK, cool. So to summarize then, we have these rules and this interesting discussion about whether they're the right rules. 
There are various ways of categorizing the rules. And what I'm suggesting here is that we might want to be willing to abstract away from the particular rules that I posited and say, Yawelmani doesn't like strings of three consonants in a row. And it has these general principles that say, you can introduce a vowel to break up sequences, and you can get rid of an "h." And the question that was getting raised over here was, is it just, you can always get rid of an "h"? Or is it, you can get rid of an "h," but only if it's the beginning of a suffix? Because that was the case in all of the examples that I showed you, to which, unfortunately, the only answer is, go ask someone who knows more Yawelmani than me, which is most people. No, that's not true. There are several people who know more Yawelmani than me. So do people see this? We're making the same move here that we made when we were talking about the English plurals. So we said, yeah, English has these particular things that it does, these repairs. And these repairs are there because of these conditions on English phonotactics, rules about how sounds can combine in English. Similar kind of thing here. There's a principle of Yawelmani phonotactics. Don't have three consonants in a row, and there are these repairs that you use to repair violations of it that you have created by adding suffixes to things. Yawelmani was the first example that made people think about phonological problems this way. It was described as a conspiracy, that is, the phonologists who first noticed this about Yawelmani were saying, yes, we have all these rules and they do various things. But they all have the consequence that you get rid of three consonant sequences. And so it's as though there's a conspiracy, and all of these different rules are conspiring to have the consequence that you never have three consonants in a row. And then we might want to know, why? Why doesn't it like having three consonants in a row?
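The Yawelmani conspiracy can itself be sketched as a small program: one phonotactic ban with two repairs. Everything here is a hedged simplification; the segment inventory is a toy, and the open question raised above, whether "h" deletion should be restricted to suffix-initial "h," is deliberately left out.

```python
# Toy sketch of the Yawelmani "conspiracy": a single ban (no three
# consonants in a row) enforced by two different repairs.
VOWELS = set("aeiou")

def is_consonant(seg):
    return seg not in VOWELS

def add_suffix(stem, suffix):
    """stem and suffix are lists of segments; repair any CCC cluster."""
    word = stem + suffix
    for i in range(len(word) - 2):
        if all(is_consonant(s) for s in word[i:i + 3]):
            if "h" in word[i:i + 3]:
                # Repair 1: an "h" in the cluster deletes, as in
                # "-hatin" -> "-atin" and "-hnel" -> "-nel".
                word.remove("h")  # the first "h" is the one in the cluster here
            else:
                # Repair 2: otherwise insert "i" between the first two
                # consonants, as in "ilk-taw" -> "ilik-taw".
                word.insert(i + 1, "i")
            break
    return word

print("".join(add_suffix(list("xat"), list("taw"))))    # only CC: no repair
print("".join(add_suffix(list("ilk"), list("taw"))))    # vowel insertion
print("".join(add_suffix(list("logw"), list("taw"))))   # vowel insertion
print("".join(add_suffix(list("ilk"), list("hatin"))))  # "h" deletion
```

Stating the grammar this way, constraint first, makes the separate rules from the slides fall out as alternative fixes for the same problem, which is the point of calling it a conspiracy.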
There's all kinds of work to do. I want to show you another example of a conspiracy. And we can probably get some ways into this before we have to stop. This one is from Japanese. So in order to show you this, I need to tell you some things about how Japanese pronunciation goes. This is all going to be from the Tokyo dialect of Japanese. So this is how pitch accent works in Japanese. So here are four words. Does anybody here speak Japanese natively? Any Japanese people here? Oh, cool. Can you pronounce these words for us? AUDIENCE: "Makura-wa, kokoro-wa, atama-wa, sakana-wa." NORVIN RICHARDS: Cool. So I just broke my heart listening to you say those words that way. There are various ways to pronounce these words. Let me pronounce them a different way and then you can tell me why I pronounced them wrong. The first one I think we agreed on, it's "makura-wa." There are speakers who will pronounce the second one "kokoro-wa," and the third one, "atama-wa," and the last one, "sakana-wa." But for you, that's apparently-- so you were pronouncing all but the first one the same way, I think, low on the first one, and high on all of the others. This is a point of variation between dialects of Japanese. So people from different parts of Japan, when I was living in Japan, I would talk about going to other places. And the people I was living with in Tokyo would say, oh, they speak Japanese so strangely over there. They don't say "HAshi," they say "haSHI." They put the high tone in a different place, put high pitch in a different place. So I want to show you some things about how accent works in Japanese, how accent is pronounced. So first, I'm going to tell you some things about the laws of accent realization, the way I was taught them. And then I'm curious, actually, where are you from, where? AUDIENCE: I'm half-Japanese. My mom's from Japan, so I just went to a Japanese school in the US. NORVIN RICHARDS: Oh, wow. Awesome. Cool.
That's cool that you speak Japanese so well. My son, whose mother is Japanese, is attempting to follow in your footsteps. Yeah. I have to get you to tell me how it worked for you. Cool. So let me tell you some things about how accent realization works. By accent, we're talking about the pitch accent system of Japanese. So if you decide to study Japanese, one of the things you must learn is where the pitch of your voice needs to be high and where it needs to be low. And the basic rules for this in Tokyo Japanese go like this. There are words that don't have an accent anywhere. The example on the slide here is "fish," the last word. And in words like that, the first syllable is low and the others are high. So you get "saKANA-WA," low, high, high, high. And then, when there is accent, the accent perturbs this distribution of lows and highs in a particular way that I'll now try to describe for you. It goes like this. The accented syllable is high, so "pillow" has accent on the first syllable. "Heart" has accent on the second syllable. "Head" has accent on the third syllable. So you get a high on the accented syllable, and every syllable after an accented syllable is low. So "pillow" is "MAkura-wa," high, low, low, low, because it's got accent on the first syllable. And then, "heart" and "head" have accent on places that are not the first syllable. And so they have high on their accented syllable and then low from then on. So "heart" is "koKOro-wa," low, high, low, low. Low on the first syllable because, well, low on the first syllable is the general thing, and high on the second syllable, and then low from then on. Don't worry. You're not going to be quizzed on this. Actually, I'm really just telling you this so that you will know why we might care about where accent goes in Japanese. Mostly what we're going to be talking about in just a second is what the rules are for where accent goes in Japanese.
Though if none of this makes any sense, or if you're wondering, why on Earth are we talking about all these highs and lows? It's really just so you'll know what we're talking about. We're talking about what the rules are for what goes high and what goes low in Japanese. So there are these regular rules that go for "fish." It's like, if you have an unaccented word it's low at the beginning and high from then on. If you have any accents, the basic rule is, put a high on the accented syllable, low from then on, and otherwise, act like a fish. So "pillow" has a high tone at the beginning because it's accented on the first syllable and then it's low from then on, "MAkura-wa." And then, everything else tries to be low, high, high, high, with the overriding principle that you have to have a high on your accented syllable and low from then on. That's the basic way that this works. And I promised Raquel that when we got to Japanese I would tell you about a way in which languages sometimes distinguish verbs from nouns and other kinds of things. Japanese nouns, as you can see, get to have accent or not. So "fish" doesn't have an accent and the rest of these words have accents. And if they have an accent, the accent can be on any syllable of the word. Verbs in Japanese only get to choose whether they're accented or not. They don't get to choose where the accent is. And that's not uncommon for languages to make that kind of distinction to have a richer array of possible accent types in nouns than in verbs. That's one of the kinds of noun-verb phonological differences people sometimes settle on. So really, this is just here for your greater education. If any of you were thinking about learning Japanese, this is one of the things you will have to learn. I'll feel bad if, now, any of you were thinking about learning Japanese but have now changed your mind. It's not that bad, really. And then there are minimal pairs. So "rain" in Tokyo Japanese is "ame." 
"Candy" in Tokyo Japanese is "ame," so they have the same consonants and vowels in them in the same sequence, but different positions of pitch accent. "Candy" is unaccented and "rain" has an accent on the first syllable. Or the one that everybody always uses is "chopsticks," which is HAshi, "bridge," which is "haSHI," and "edge," which is "haSHI." "Bridge" and "edge," there's a lot of beautiful Japanese phonology trying to figure out whether there's a difference between the pronunciations of "bridge" and "edge." The place where everybody can hear a difference is when you add suffixes. So if you add, say, the nominative suffix, "-ga," to those, "edge" is like fish. So not only will edge be low, high, but then the "-ga" will also be high, so you get "haSHIGA." Whereas "bridge" has an accent, that means there'll be a high on the "-shi," but then it'll be low from then on. So "edge," the nominative is "haSHIGA," but "bridge," the nominative is "haSHIga," low, high, low. So all fun things to look forward to if you decide to try to learn Japanese. Now here's a generalization. And remember, I told you when we were doing Yawelmani and also when we were doing English, we're going to talk about various kinds of things this language does under different circumstances. But here's a generalization that holds in Japanese, at least Tokyo Japanese. Words have at most one accent. They don't have more than one accent. And so we're going to look at cases where words are going to experience a temptation to have more than one accent. And they're going to do various things to avoid that, various things, kind of like Yawelmani was experiencing the temptation to have three consonants in a row, and it saved itself in various ways under different circumstances. We're going to see the same thing in Japanese and the moral is going to be the same. Here's a language that has a bumper sticker. It has an inspirational poster on its wall. Don't have more than one accent in your words. 
There's a cute picture of a kitten or something under it. And then the way it achieves that is going to vary from word to word. That's going to be the moral of what we're going to see. Let's look at that. There's "pillow," "pillow" has accent on the first syllable. Yeah, so "from the pillow" is something like, "MAkura-kara," and "to the pillow" is something like, "MAkura-made." The heart, the head, yeah, same kind of deal. So "from the heart" is "koKOro-kara," with accent just on "koKOro." "To the heart" is "koKOro-made," with accent just on the "KO." "From the head" and "to the head," same deal. There's an accent on the last syllable of "head." But when we look at "fish," and remember that fish is underlyingly unaccented, that's why there aren't any accented vowels there in "sakana," the word for "fish." When we look at "fish," we look at it and we find a difference between "from" and "to." So for "from the fish," the whole sequence, "sakana-kara," is unaccented. But for "to the fish," the sequence "-made" has an accent on "-made." It's "sakana-MAde," with a dip at the end there. So "from" and "to" act the same for "pillow," "heart," and "head," the accented nouns. But for the unaccented noun, the word for "fish," which doesn't have an accent on its own, then we see a difference between "from" and "to." "From" doesn't have an accent but "to" acts like it has an accent all of a sudden. A conventional way of talking about what's going on says, yeah, there are, what, four, or five-- six! There are six morphemes in this slide. There's "pillow," "heart," "head," "fish," "from," and "to." "Pillow," "heart," and "head" are accented. "Fish" is unaccented. We already knew that. I showed you that on the earlier slide. "From" is also unaccented. "From" is like "fish," but "to" is accented, and that's why it's showing up as accented in that phrase that means "to the fish," "sakana-MAde." "Made" doesn't just mean "to," it means "as far as."
It suggests that someone is crawling on the beach up to the fish. They got to the fish. They got that far, "sakana-MAde." So there are six morphemes on this slide and four of them are accented, "pillow," "heart," "head," and "to." So we only get to see that "to" is accented when we combine it with an unaccented noun, because there's this general principle that you can't have a word that has more than one accent in it. So when you add "to" to a noun that's accented, "to" loses its accent. "To" and "from" act the same way. Does that make sense? So here's case number one of Japanese working out a way to make sure that a single word only has a single accent in it. Here's a repair that it takes. It deletes the accent on "to," on the word for "to," "-made." There are other repairs you could imagine. It could have gotten rid of the accent on the noun instead, but it doesn't. That's not what it does. "Fish," "pillow," here's another Japanese word, "even," as in "even a fish." Try to imagine a circumstance in which you would want to say, "even a fish," or "even a pillow." "Even" is accented. We saw that "fish" is unaccented and "pillow" is accented. But when you add this word for "even," "-gurai," both of those nouns become unaccented. So it's sakana-GUrai, and it's makura-GUrai, even though it's MAkura and saKANA. So "pillow" is accented on its first syllable normally, but it loses its accent if you add this "even" thing. That's the repair that I just alluded to accidentally. So "even" is like "to" in that it has an accent. So if you were going to add "even" to these nouns, with "fish," no problem, with "pillow," there is a problem. You're in danger of having two accents. And the repair in this case is to get rid of the accent on "pillow." Raquel, did I bulldoze a question? AUDIENCE: I had a somewhat related question to the topic in general, which is just you're saying that languages will be like, I don't want this to happen? NORVIN RICHARDS: Yeah.
AUDIENCE: Do we observe times in languages where they seem like they don't want things to happen, and they just break their rules all the time? Not like English and spelling, but something like they have certain things they definitely don't want to do, so you get just lots and lots of examples of times [INAUDIBLE]? NORVIN RICHARDS: Of apparent counterexamples? Well, we saw something sort of like that when we were looking at Lardil. Maybe you remember, I showed you that it was useful to think that nouns don't like to end in the vowel "u." The vowel "u" changes to "a." So there was an underlying [NON-ENGLISH], which means "blood," which you get to see in the accusative, which is [NON-ENGLISH]. But the nominative is [NON-ENGLISH]. And we said, well, that's because Lardil doesn't like words to end in "u." It changes the "u" to an "a." Lardil also doesn't like words to end in "k." So there's an underlying word for "boomerang," which is underlyingly [NON-ENGLISH], which you see in the accusative. It's [NON-ENGLISH], but the nominative is [NON-ENGLISH]. And so we said, yeah, it doesn't like words to end in "k." But Lardil also has words that are underlyingly ending in a "u" and then a "k." So this is the word for "story." The accusative is [NON-ENGLISH], but the nominative is [NON-ENGLISH]. So you get rid of the "k" at the end, but now what you've got at the end is a "u," which we said when we looked at "blood," Lardil doesn't like. Is that the kind of thing you're talking about? AUDIENCE: Yeah. So just is it actually common that languages will break these important rules, or is it actually pretty infrequent? NORVIN RICHARDS: So we're going to see, I think it's fair to say that it's common. So I'm deliberately finding some cases, it may not feel this way, but I'm finding some cases that are comparatively simple, where there is a single principle that a language is trying to achieve and it achieves it in various ways.
That's why we're talking about this kind of case. But there isn't any conflict between the various things that a language wants to do. So we're talking about cases where these languages only have one thing they want to do and they always achieve it. There are cases where a language has multiple things that it wants to do and they're not compatible with each other, and the language has to choose. So you can convince yourself that it wants to do both of these things, but sometimes it just has to give one of them up. And Lardil might be a case like that. That's a kind of thing that happens. So what we'll end up with is a picture in which languages might have many things that they want to do, but they have a list of priorities. And we're talking in these cases about languages that-- we're only talking about priority number one. It always wins. You always do whatever you have to to achieve that. But there are also languages that have priority number one, and number two, and number three, and number four. We find examples like that, too. Does that answer your question? AUDIENCE: Yeah. NORVIN RICHARDS: Cool. Yeah. Yeah? AUDIENCE: So taking the case to a study of English. I would assume that no double stridents is very high priority. NORVIN RICHARDS: Yes. AUDIENCE: Because I've never heard it. NORVIN RICHARDS: We don't have any words like that. Right. Yeah, that's a good example. So that's a case where English just will not tolerate cases where a word ends in two stridents. Yeah, we don't have words like that. Good example. Yeah. All right. Let me continue crunching through Japanese here. So again, Japanese doesn't allow for words with more than one accent. We've seen that "-made," this word for "to," loses its accent after an accented word. And now we're seeing that if you have an accented word, it will lose its accent before "-gurai." So again, Japanese has this general thing, very high priority, no words with more than one accent in them. 
And then it has various fixes depending on the particular circumstances. We've talked about two so far. We'll see some more. OK. So Japanese avoids having more than one accent in this unit consisting of a word along with its suffixes through various means. You either get rid of the first accent or the second, depending on which accent exactly it is that you're talking about. We can maybe think of this as an example of the kind of thing that [INAUDIBLE] Japanese's highest priority is, don't have a word with more than one accent. And then it has these other priorities, like keep the accent on the noun, and keep the accent on "-made," the word for "to," and keep the accent on "-gurai," the word for "even." And what we're seeing maybe is that we should list the priorities as only one accent. That's number one. And number two is keep the accent on "-gurai," this word that means "even." And number three is, keep the accent on nouns. And then, number four is, keep the accent on "-made," this word for "to." And these priorities we should think of as being in this order. So Japanese really only wants one accent per word, that's the most important thing. And then also, it really wants to keep the accent on "-gurai." It would like to keep the accent on nouns. So if it just has to choose between the accent on a noun and the accent on "-made," well, it'll keep the accent on the noun. It'll lose the accent on "to." But if it's choosing between the accent on a noun and the accent on "-gurai," on even, well, it'll choose the one on "-gurai." So it's as though we have, again, this ranked list of priorities, these things that Japanese cares about. And this sort of way of thinking about things raises all kinds of questions, like, what kinds of things do languages get to care about, and why? And what determines which things are more important than which other things? Is there anything to say about that? These are all open topics in phonology, things people work on. 
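This ranked-priorities story can be mocked up in a few lines of code. The sketch below is my own illustration, not anything from the lecture slides: the data encoding and function names are invented, and only the four priorities just listed are modeled. Each candidate output keeps some subset of the underlying accents, each candidate is scored against the priorities in rank order, and tuple comparison makes a higher priority beat any number of violations of lower ones.

```python
# A toy, ranked-priorities sketch of Tokyo Japanese noun + suffix accent,
# following the four priorities described in the lecture.  The encoding
# is an illustrative choice, not standard phonological notation.

from itertools import combinations

# Each morpheme: (form, is_accented, kind) where kind is "noun" or a suffix name.
MAKURA = ("makura", True,  "noun")    # 'pillow', accented
SAKANA = ("sakana", False, "noun")    # 'fish', unaccented
MADE   = ("made",   True,  "made")    # 'to / as far as', accented
GURAI  = ("gurai",  True,  "gurai")   # 'even', accented

def best_output(noun, suffix):
    """Choose which underlying accents survive, using the ranked priorities."""
    underlying = [m for m in (noun, suffix) if m[1]]          # accented morphemes
    # Candidates: every subset of the underlying accents could in principle surface.
    candidates = [set(c) for r in range(len(underlying) + 1)
                  for c in combinations(underlying, r)]

    def violations(kept):
        return (
            max(0, len(kept) - 1),                                          # 1: at most one accent
            int(suffix[2] == "gurai" and suffix[1] and suffix not in kept), # 2: keep accent on -gurai
            int(noun[1] and noun not in kept),                              # 3: keep accent on the noun
            int(suffix[2] == "made" and suffix[1] and suffix not in kept),  # 4: keep accent on -made
        )

    # Lexicographic tuple comparison = strict ranking: a higher-ranked
    # priority always beats any number of lower-ranked violations.
    winner = min(candidates, key=violations)
    return sorted(m[0] for m in winner)

print(best_output(MAKURA, MADE))    # noun beats -made   -> ['makura']
print(best_output(SAKANA, MADE))    # no clash           -> ['made']
print(best_output(MAKURA, GURAI))   # -gurai beats noun  -> ['gurai']
```

Because the violation tuples are compared lexicographically, priority 1 can never be traded away for priorities 2 through 4, which is exactly the "most important thing first" behavior the lecture describes.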
I'll show you a few more facts about Japanese before we have to stop. Here are some Japanese compounds. So compounds where you take two nouns and put them together. And of course, you may all be imagining potential problems, because as you know, a single word can only have one accent in it. What's striking about compounds is that the compound word, the word that you create when you put two words together, indeed, it can only have one accent in it. But the accent it has in it doesn't have to be the accent of either of the two components. So take the last example, there's a place. It's a place I lived in Japan for about a year, Chiba. Chiba ken. So Japan is divided into what are called prefectures. They're like states, so there are a certain number of them in Japan. There's a city that's the capital of Chiba prefecture, Chiba City. If you've ever read any William Gibson, you've read about it. That's science fiction. Chiba is not like that. The place itself has accent on the first syllable, so it's "CHIba." And the word for "prefecture" has an accent on it, it's "ken." But the compound, "Chiba prefecture," has one accent. That's sort of what you expect, words should only have one accent, but it is on the one syllable that it is not accented ordinarily. It's "chiBA ken." Sorry, "chiBA ken." That's right, "chiBA ken," so accent just on the second syllable, not the syllable that's ordinarily accented in "Chiba," and not the syllable that's ordinarily accented in "ken," the other one. You also get accent in compounds. You can see in the first example, you get accent in compounds even if the components of the compound are not accented themselves. So a "milk drinking child," "milk drinking" is itself a compound, but anyway, "milk drinking" is not accented. It's "tinomi," and "child" is not accented, it's "ko." But the compound has an accent, which is on the "i" there, so it's "tinoMIko." Yeah, so a nursing baby, "milk drinking child." Which prefecture is your mom from? 
Do you know? AUDIENCE: She's from Tokyo. NORVIN RICHARDS: She's from Tokyo? Cool. Great city, Tokyo. So yeah, so all of these compounds, they have one accent in them, exactly one accent. And the accent is not necessarily the accent of either of the components. In fact, I think I've carefully set it up so that it isn't in any of these examples. Where is the accent going in these compounds? Yes? AUDIENCE: The second to last syllable. NORVIN RICHARDS: So it's going on the second to last syllable in all of these examples. Yes. Is there another way you could say it? Yes? AUDIENCE: But would it be, like, on the last syllable of the first part of the compound? NORVIN RICHARDS: It's going on the last syllable of the first part of the compound. I haven't given you any examples here yet that distinguish those two theories but those are both perfectly good theories, what's going on. Yeah? Here are some places to distinguish them. I especially like the last one. So the Japanese word for "fried potato," or for french fries-- "furaido poteto," fried potato. Well, what we can see actually is that neither of those theories is right. If we ignore "nursing baby" and "Kagawa prefecture," where is accent going in "raw egg," and "field mouse," and "fried potato"? Yes? AUDIENCE: Isn't it called the antepenultimate? NORVIN RICHARDS: Oh, it is called the antepenultimate, yes. That is true in all of these examples. It's going on the antepenultimate syllable, that is, the syllable which is before the penultimate syllable, that is, the syllable which is three syllables from the end. Yeah, that's true for all of these. There's another way of saying it, though, sort of like there were two ways to say it the first time. Yeah? AUDIENCE: The first syllable of the second. NORVIN RICHARDS: The first syllable of the second part of the compound. Yeah, that works for all of these. I can't remember if I have any slides that show this, but that's actually the right way to talk about it. 
The right way to talk about where accent goes in a compound is, it goes at-- so first, it has to only have one accent. The accent has to be next to the boundary between the words and it can't be final, so it can't be on the last syllable. So in words like "Kagawa prefecture," when you take "kagawa" and "ken," what you get is "kagawa ken." So there the accent goes at the end of "kagawa" because it can't be final. It can't be on the word for "prefecture" because that's only one syllable long. But on "fried potato," it can be on "potato," on the first syllable of "potato," because it's still next to the boundary between the words and it's not final. And then, if possible, that's what you prefer. So you prefer "furaido POteto" to "furaiDO poteto," where the accent would be at the end of the first word. Does that make sense? Make sense? So here's another place where-- so I did something like this over here. We'll just end with this and we'll pick it up here next time. Here's a place where it's useful to think of Japanese as having a bunch of priorities which it sort of has in an order. So if it's starting off with "kagawa" and "ken," Kagawa prefecture, it has to decide, what should I do? Should I keep both of the accents I've got? Should I just have one accent, and if I'm going to have one accent, where should it go? We've seen already from other examples in Japanese that it has a very high ranking preference for there only to be one accent. And so it's not going to keep two accents. That's the first candidate there, the first thing that it's imagining maybe being able to do. And then when it's trying to decide where the accent should be, the single accent, it has these various considerations. It wants the accent to be near the boundary. It would like the accent to be on the last word. But more important than having the accent on the last word is avoiding having accent on the last syllable.
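The compound rule stated here, one accent, next to the boundary between the two words, never on the final syllable, is concrete enough to code up. This is a simplified sketch of my own: it counts syllables, where a serious treatment of Japanese would count moras, and the syllable lists for the examples are rough romanizations.

```python
# A sketch of the Japanese compound-accent rule from the lecture:
# exactly one accent, adjacent to the boundary between the two words,
# and never on the final syllable of the compound.

def compound_accent(word1_syllables, word2_syllables):
    """Return (word, syllable_index) for the compound's single accent."""
    # Preferred spot: the first syllable of the second word...
    # ...unless that syllable is also the last syllable of the compound.
    if len(word2_syllables) > 1:
        return ("word2", 0)                        # e.g. furaido POteto
    # Otherwise retreat to the last syllable of the first word,
    # which is still next to the boundary and is not final.
    return ("word1", len(word1_syllables) - 1)     # e.g. kagaWA ken

# "fried potato": "poteto" is long enough, so accent lands on its first syllable.
print(compound_accent(["fu", "rai", "do"], ["po", "te", "to"]))  # ('word2', 0)
# "Kagawa prefecture": "ken" is one syllable, so accent retreats into "kagawa".
print(compound_accent(["ka", "ga", "wa"], ["ken"]))              # ('word1', 2)
```

With a two-syllable first member like "chiba," the fallback case puts the accent on its second syllable, matching "chiBA ken."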
So when it's deciding between having the accent on the last syllable, which would put the accent on the last word, and putting the accent at the end of the first word in the compound, it chooses to put it at the end of the first word of the compound, because, well, that puts it near the boundary and avoids having it be final. So Japanese has these various things it's trying to do and it can't have everything that it wants. So it would like to accent the second word. It would like to put the accent on the second word. That's what we've seen when the second word is long enough, then the accent will go on the second word. But when the second word is so short that that would put it at the end of the compound word, then it doesn't go on the second word. It doesn't go at the end of the word. So avoiding having the accent at the end of the word is more important than putting the accent on the last word. So here's a place where Japanese is acting as though it has many things that it wants and it chooses the best one. This is a very influential way of thinking about phonological problems, which we will talk about more next time when we have more time. Are there any questions about it before I let you guys go? Yes? AUDIENCE: Does this have a formal name? NORVIN RICHARDS: Yes. It is called Optimality Theory, and we will talk more about it next time. You are seeking to make your choices optimal, to make them as good as they can be. Other questions? OK. So again, I'll put up a new problem set today, and I'll see you guys next time.
MIT_24900_Introduction_to_Linguistics_Spring_2022
Lecture_3_Morphology_Part_2.txt
[CREAKING, CLICKING] NORVIN RICHARDS: All right, so welcome to day two of morphology. Last time, I was trying to get you to believe that it's worth-- for trying to think about what you have in your mind, the representations you have in your mind of what you can do with your native language-- if we're trying to think about that, that it's useful to think, yeah, your mental list of the basic elements of your language consists, maybe, of a list of morphemes, where morphemes are these units that can combine in various ways, as we've seen. Each morpheme we're going to need to tag with information about its sound, how it's pronounced, its meaning, and then information like is it a bound morpheme or a free morpheme? That is, is it the kind of morpheme that needs to combine with another morpheme or not? So there are free morphemes like "cat," where, when we're listing its sound, will represent the fact that it's pronounced "cat," and when we're listing its meaning, will represent the fact that it refers to a certain type of small mammal. And when we're listing whether it's bound or free, we'll say that it's free because "cat" is a word. You can say it by itself. As opposed to the S at the end of a word like "cats," we'll say that's a bound morpheme because it's not a word on its own. And if we're talking about bound morphemes, we ran through bound morphemes from a bunch of different languages. We saw there are prefixes out there, there are suffixes, and there are other things we talked about-- infixes, various other kinds of things. There are other kinds of morphemes to talk about. We eventually will. But first-- so what I said here was first, another kind of information we're going to have to list. But actually, before that, is any of that unclear? Does that all make sense? Is everyone convinced of the existence of morphemes? Are you willing to at least assume that they exist? OK.
So you're used to looking words up in the dictionary, but in your mental dictionary, maybe what you have are not words but morphemes-- things that are at least sometimes smaller than words. So another kind of information we're going to have to list. Think about the bound morpheme "-al" that shows up at the end of industrial, or national, or autumnal. That's limited in where it can go. So there aren't words like "assert-al" or "impress-al," or "industrializ-al," yeah? Again, let me pause and make sure that I'm not saying things that are just false. I think I told you there's a danger when you're a linguist you gradually get out of touch with your native language. But I'm pretty sure this is true. Is there a generalization about the kinds of things you can add -al to? What are they? Yeah? AUDIENCE: Nouns. NORVIN RICHARDS: Nouns, Yeah. So "industry" and "nation" and "autumn" are all nouns, as opposed to "assert," and "impress," and "industrialize," which are not nouns. What are they? AUDIENCE: Verbs. NORVIN RICHARDS: Verbs, Yep. And if anybody is listening to this and thinking, nouns, verbs? What the heck? Don't worry. We will talk more about this eventually. Yep, OK, so we just said there is a word "industrial" because "-al"-- maybe I should write this on the board. Or I would if I thought there was chalk anywhere, anywhere at all. Here we go, I'll write it on the board right here. That will be helpful. No, I won't. I'll put it over here. We've just said "-al" attaches to nouns and can't attach to verbs. That's what's going on in the first of the list there. So we've got a noun can have "-al" on it. I'll write that again over there. This is surely not the most efficient way to use the blackboard. Is it necessary? Does it work? Can everybody see one or another of the things that I wrote? Is there anybody who can only see one of the things that I wrote? Yeah, OK, so it's worth it to write it twice. OK, so "-al" attaches to nouns and doesn't attach to verbs. 
That's why you can say "industrial." It's why you can't say "industrializ-al," yeah? Why can you say, "industrializational"? That's a word. It's kind of a long word. "Industrializational," I guess, means having to do with industrialization. Yes? AUDIENCE: Because "industrialization" is a noun? NORVIN RICHARDS: "Industrialization" is a noun. So "-ation" is attaching to "industrialize," and it is apparently creating a noun. So at least for some kinds of morphemes, we're going to want to list the kinds of things they attach to, their input, and then their output. So "-ation" can attach to a verb like "industrialize," and it can create a noun. Yeah, so "-ation" attaches to a verb, and then what you get is a noun. Say that again, if you take a verb and add "-ation," what you end up with is a noun. And that sort of-- we have intuitions about which things are nouns and which things aren't. We have the intuition that "industrialization" is a noun, but now we can check our intuition by asking ourselves, well, we know that "-al" attaches to nouns. Can we attach "-al" to "industrialization"? Yes. Yes, our intuition is right; that's a noun. OK, cool, so there are at least some kinds of morphemes which convert words into other types of words, convert nouns into verbs or, in the case of "-al," verbs into adjectives-- sorry, nouns into adjectives. OK, so adding some things to the list I showed you before, the lexicon has morphemes in it, and morphemes all contain the information about sound and meaning, and whether it's bound or free, and if it's bound, what kind of bound morpheme it is-- a prefix, or a suffix, or a tone, or an infix, or whatever-- and also what kind of morpheme they can attach to. Do they attach to nouns, or verbs, or what? And, at least for some kinds of morphemes, what the result is-- whether you create a noun, or a verb, or an adjective, or whatever. Yeah, they both sound OK. Actually, sometimes, we'll have to say more than that. 
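Here is one way to picture those lexical entries as data, with each suffix listed along with its input and its output. The category facts ("-al" takes nouns and makes adjectives, and so on) are the generalizations from the lecture; the encoding and the `derive` function are just an illustrative sketch.

```python
# A toy lexicon fragment: each bound morpheme records the category it
# attaches to (input) and the category it builds (output).
# N = noun, V = verb, A = adjective.

SUFFIXES = {
    "-al":    ("N", "A"),   # attaches to nouns, makes adjectives (industri-al)
    "-ize":   ("A", "V"),   # here: attaches to adjectives, makes verbs (industrial-ize)
    "-ation": ("V", "N"),   # attaches to verbs, makes nouns (industrializ-ation)
}

def derive(stem_category, suffixes):
    """Apply suffixes in order; return the final category, or None if a step is illegal."""
    category = stem_category
    for suffix in suffixes:
        needs, makes = SUFFIXES[suffix]
        if category != needs:
            return None          # e.g. *assert-al: "-al" won't attach to a verb
        category = makes
    return category

# industry (N) -> industrial (A) -> industrialize (V)
#   -> industrialization (N) -> industrializational (A)
print(derive("N", ["-al", "-ize", "-ation", "-al"]))  # 'A'
print(derive("V", ["-al"]))                           # None: *assert-al
```

The point of the chain is the lecture's argument about "industrializational": the second "-al" is legal only because "-ation" has turned the whole thing back into a noun.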
So "sincere," "chaste," "scarce," "curious," "deep," "wide," and "warm"-- what are those? They're adjectives. And when you add "-ity" to "sincere," you get "sincerity." When you add "-ity" to "chaste," you get "chastity." And what are "sincerity" and "chastity"? AUDIENCE: They're nouns. NORVIN RICHARDS: They're nouns. So "-ity" is attaching to an adjective, and it's making a noun. When you add "-th" to "deep," you get "depth." Now, when you add "-th" to "wide," you get "width." So "-th" is attaching to an adjective, and what is it creating? AUDIENCE: Noun. NORVIN RICHARDS: A noun, yeah. So here we have two suffixes that both seem to do the same thing. They both attach to adjectives and create nouns. But we're going to have to say something else, because you cannot say "deepity," or "sincereth." Yeah, those are not words. So you can't add "-th" to the adjectives at the top, and you can't add "-ity" to the adjectives at the bottom. So sometimes it's not enough to say, this attaches to adjectives. You have to say, it attaches to adjectives in this list, or we have to understand something more about the adjectives. Anybody have a guess about why there are two kinds of adjectives in English? Yes. AUDIENCE: I was going to say that some are Latin-derived, and others are just from Old English or Germanic. NORVIN RICHARDS: I think you were going to be right, but did you convince yourself not to say that? AUDIENCE: Because "scarce" does not come from Latin, so-- NORVIN RICHARDS: Yeah, but it should have, yeah. Yeah, so the ones at the top, yes, look Latinate, you're absolutely right. And the ones at the bottom look Germanic. Yeah, that's right. So there are a few things like this where it's a result of English history-- that we have suffixes that we got from-- "-ity" is something that we got from French and "-th" is something we got from Germanic. And so yeah, these suffixes are maintaining their history to a certain extent. Yeah? 
AUDIENCE: "-ity," I don't know if this is a real thing or just me thinking, but it reminds me of Spanish "-idad." NORVIN RICHARDS: It is related to that, yes. Yeah, score. Yeah, good point, yep. OK, so I mean, you can be a native speaker of English and not know this. It's not like we are all remembering, oh, yeah, those are the Germanic adjectives, those are the Latinate adjectives. Be sure to add "-ity" to that one and "-th" to that one. But on some level, we have to say, yeah, there are adjectives of class 1 and adjectives of class 2 or something. There are purple adjectives and green adjectives or something. We have to distinguish these from each other. I'm giving you English examples, but this is a very common situation. Lots of languages have been in intense borrowing situations at some point in their history with weird results like this. OK, now we've talked about this a little bit. So I said when you add the plural suffix to "cats" and "dogs," one of the first things I said to you, it's pronounced differently on "cats" than it is on "dogs." Yeah, and we talked about why, and we're going to talk more about why later. This is the kind of thing that happens sometimes when you add one morpheme to another morpheme-- one or another or both of the morphemes changes a little bit. We'll talk a lot about this as we start talking about sound change, phonology, which is our next topic. But for now, we can just pause and notice, if you add "-al" to "electric," you get "electrical." But if you add "-ity" to "electric," you don't get "elec-trick-ity." You get "electricity." It's not because "elec-trick-ity" would be hard to say. Yeah, it's just we have this idiosyncratic sound change-- which we inherited, again, from French-- that softens the K to an S there. Yeah? AUDIENCE: [INAUDIBLE] idiosyncratic? NORVIN RICHARDS: I'm sorry, say it again? AUDIENCE: What does "idiosyncratic" mean? NORVIN RICHARDS: Oh, "idiosyncratic" means-- what does it mean? What do I mean when I say "idiosyncratic?"
It means you sometimes have to say, this particular morpheme has two different forms depending on what it combines with. So you have to say more about that. So "idiosyncratic" means special, having something special to do with that. So the suffix "-ic" at the end of "electric" will change to "-is" before "-ity." And there's more to say than that, but you have to say something that has that consequence. Similarly, the past tense, it's like the plural suffix, which can be either "z" or "s," like in "cats" and "dogs," has a bunch of different forms. Sometimes, it can be more of a "d" sound like in "hummed." Sometimes it can be more of a "t" sound like in "leaped" [pronounced like "leapt"]. And the verb itself can undergo changes when you add the suffix to it. So the past tense of "hum" is "hummed." That's all fairly peaceful. But the past tense of "leap" is not "leaped" [with "d" sound], it's "leaped" [with "t" sound]. The past tense of "go" is not "goed," it's "went." This is a case where we-- it's what's called suppletion. So English speakers at some point in the history of English, for mysterious reasons of their own, decided that the past tense of "go" should borrow-- that "go" should borrow its past tense from another verb. So for the past tense of "go," we use the past tense of another verb, "wend," as in "to wend your way." And so the past tense of "go," it's no longer "go." It is now "went." Languages do things like this. It'd be interesting to try to understand why. Or similarly, "sing" and "sang," past tense of "sing," there's no "d" or "t" or anything. You just change the vowel. So there are cases where a morpheme will have different forms depending on what it's combining with. That's the quick and dirty way of describing everything that's on the screen here.
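One simple way to picture "memorized exceptions plus a default" is a lookup table with a fallback. The sketch below is illustrative: the spellings are simplified, the whole-word listing is not a claim about how speakers actually store this, and a real account would derive the d/t difference in "hummed" versus "leapt" from the verb's final sound rather than listing it.

```python
# A toy model of past-tense allomorphy: irregular forms are memorized
# in the lexicon; anything unlisted gets the default "-ed".
# Spellings are simplified (no doubled consonants).

IRREGULAR_PAST = {
    "go":   "went",   # suppletion: the old past tense of "wend"
    "leap": "lept",   # vowel change plus the "t" allomorph
    "sing": "sang",   # vowel change with no suffix at all
}

def past_tense(verb):
    # Memorized allomorphs win; otherwise the default applies.
    return IRREGULAR_PAST.get(verb, verb + "ed")

print(past_tense("leap"))   # 'lept': the listed form wins
print(past_tense("go"))     # 'went'
print(past_tense("fleep"))  # 'fleeped': a made-up verb gets the default
```

The made-up verb falls through to the default, which is the point of the lecture's "fleep" example later on: a new verb never gets the irregular treatment unless you are told about it.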
The technical term for what's going on here, we say that when you have a morpheme that has two different forms, the two different forms-- or multiple different forms, sometimes it's more than two-- the different forms of a morpheme that it takes under different morphological circumstances, those things are called allomorphs. So we say that, for example, the "d" and the "t" past tenses are two allomorphs of the past tense, or that "leap" and "leap" [pronounced like "lep"] are the allomorphs of the verb "leap," the word that means "jump." Yes? AUDIENCE: For example, when we [INAUDIBLE]. NORVIN RICHARDS: Oh, I'm just trying to call your attention to the fact that it's pronounced differently. Eventually, those brackets will mean something, but we haven't gotten far enough yet to get them to mean anything. So right now, all I'm doing is calling-- if I had said "leap, leaped," I would have spelled "leap" [pronounced like "lep"] the same way I'm spelling "leap." We would just pronounce it differently because English spelling is insane. And so I'm just putting the brackets there as a warning sign-- here's a place where the spelling is unreliable. Good question. Eventually, we'll figure out how to talk about that. Yes? AUDIENCE: [INAUDIBLE]. NORVIN RICHARDS: Yeah. AUDIENCE: So does that define [INAUDIBLE].. NORVIN RICHARDS: Yeah, so right, so I called it "d" and "t" because "hummed," the past tense of "hum," although we spell it with an "e" before the "d," you say it, I think there's only one vowel in that word. It's "hummed," right? We're not saying "humm-ed." That's not the past tense of "hummed." AUDIENCE: [INAUDIBLE] NORVIN RICHARDS: Yeah, yeah, so I'm trying to just spell out sound, which is hard because we haven't done phonology yet. Yeah, but that's a really good question. This is one of many places where English spelling will get you in trouble because English is spelled according to rules that are designed to repel invaders, yeah. Yeah?
Any questions about any of this? These are really good questions you guys are asking. Yeah? OK. OK, yeah. So sometimes-- so we will get a chance to talk more-- in fact, we'll talk more very soon-- about how exactly to talk about allomorphy, this property of morphemes of having more than one form, of having allomorphs. All we've said so far is, sometimes morphemes have allomorphs. So depending on what they combine with, they take different forms. There's often what's called a default allomorph. That is, there's a morpheme that is the basic one that shows up most of the time. And when we're making our special statements about other forms, well, they're going to be about the other forms. So there's going to be the form of the morpheme that you start off with. And then, you will do things to it to create the other allomorphs. So for example, the default assumption if I'm telling you about a verb, I've made up a verb. It's "fleep," "to fleep." It's a special move in soccer in which you do a headstand and kick with your feet over your head. I just made that up. The past tense of "fleep," you're going to assume that it's "fleeped." Yeah, it could be "flept." Yeah, it could be like "leap." But you're going to assume that it's "fleeped." That is the default. You assume that morphemes don't have allomorphs other than the one, in this case, that shows up in ordinary present tense circumstances when they're not combining. OK? Is any of this surprising or weird? Yes? AUDIENCE: Does that particular example have anything to do with the fact that it looks like the word "leap"? NORVIN RICHARDS: I made it up that way to make you think that. So she asked, does that have anything to do with the fact that it looks like the verb "leap?" I guess the point I was making is we can't have a general rule that if you're making the past tense of a verb that ends in "-eap," that it's going to change to "-ep."
When we're trying to figure out what's going on with "leap," that can't be what's going on. I guess I could have made that point in other ways. We have verbs like "heap," like to heap up sand, and the past tense of "heap" is not "hept," right? It's "heaped." But similarly, if I make up a new verb, you're not going to apply what happens in "leap" to the new verb. Yes? AUDIENCE: Are there any specific characteristics that would deter someone from going towards the default allomorph? Like if you made another made-up word, would there be any characteristics about that word that would have us moving towards [INAUDIBLE] the "t" at the end? NORVIN RICHARDS: Oh, I see. I don't think there's any way for me to induce you to do the "leap"/"lept" thing other than by telling you, oh, by the way, the past tense of "leap" is "lept." But it's interesting to think about. Sorry, Raquel, you were about to comment on that. AUDIENCE: I was thinking like maybe if the word, you know it was in a specific discipline where the default one for that discipline is to do something that's viewed as a regular compared to the average thing-- like if you had an "ae" mean the plural, and you normally don't do that for all words, but you might do it for, I don't know, weird plants or something. NORVIN RICHARDS: Yeah. AUDIENCE: You might default to weird plants. NORVIN RICHARDS: I see. I see what you mean. And there are languages, certainly, in which it might be easier to make a point like that. I guess the other thing to say-- the point you're raising brings this up-- people sometimes claim anyway that these kinds of irregular allomorphy, these things you just have to memorize about how things combine, they'll go away if you-- they'll sometimes go away if you combine a morpheme with something else. So the classic example, for some reason, the plural of "leaf," the things on trees, is "leaves." And here, I'll put it over here, too. 
But I've heard it claimed, anyway, that if you're talking about the athletic team, the Toronto Maple Leaf-- so the singular would be the "Toronto Maple Leaf"-- that the plural is the "Toronto Maple Leafs," and not the "Toronto Maple Leaves." I'm going to pause. I'm not enough of a sports fan to know whether this is true. So I'm getting a thumbs up. Anybody here from Toronto, or is anybody a Maple Leafs fan? Anyone know whether this is true? Presumably, it's the kind of thing you could find out by looking at their website-- what does it say on their T-shirts? But this is a claim that I've heard, anyway. So there are some cases where this weird allomorphy goes away because you've made it part of this compound, or you've made it part of a name. So it isn't just about-- if we're talking about actual maple leaves, like leaves on a tree, then those are definitely "maple leaves," and something about them being the name of an athletic team. And somebody else had a hand that I ruthlessly ignored. Oh, Joseph? AUDIENCE: [INAUDIBLE] NORVIN RICHARDS: Oh, they do? You checked. This is the kind of thing. [LAUGHS] I can tell sometimes that I was born in a different century, because there are all kinds of things where I raise interesting questions, and then in class, someone will be like, yes, and here is the answer. Like, oh, right, yeah, we have the ability to do that now. Anything else? AUDIENCE: What do you think the past tense of "slingshot" is? NORVIN RICHARDS: Ooh! AUDIENCE: So "slingshotted" and "slingshot" both feel wrong. NORVIN RICHARDS: OK. [LAUGHTER] "Slingshoots." So the question was, what's the past tense of "slingshot"? Two possibilities-- "slingshotted" and "slingshot." And there was another possibility, which we will ignore for reasons of timing. Who thinks the past tense of "slingshot" is "slingshotted"? Who thinks it's "slingshot"? OK, interesting. There are some examples-- does anybody have another alternative? AUDIENCE: "Slingshote." NORVIN RICHARDS: "Slingshote"?
No! [LAUGHS] Shame, for shame. AUDIENCE: [INAUDIBLE] "Slingshut." NORVIN RICHARDS: "Slung," "slung," "shoot"-- no. Also no. There are some examples of verbs or morphemes that if you combine them-- AUDIENCE: [INAUDIBLE] NORVIN RICHARDS: Sorry, if you combine them with other morphemes that no one knows, this is kind of like "slingshot." So take the verb "to stride." So the past tense is "strode." And then, the participle was, he has-- AUDIENCE: [INAUDIBLE] "stride"-- NORVIN RICHARDS: "Stridden." AUDIENCE: [INAUDIBLE] NORVIN RICHARDS: "Strode," "strod." [LAUGHS] Who thinks it's-- AUDIENCE: [INAUDIBLE] NORVIN RICHARDS: Who thinks it's "stridden"? Who thinks it's "strode"? Who has something else they'd like it to be? AUDIENCE: "Strud." NORVIN RICHARDS: "Strud?" Any votes for "strud"? No, you're all alone, I'm sorry. So this is a classic example of a word where English speakers just are not sure what the participle of this is. AUDIENCE: "Smite" as well. NORVIN RICHARDS: Sorry? "Smite" is another good example, yes. So that one-- AUDIENCE: [INAUDIBLE]. NORVIN RICHARDS: That one, yeah, so for me, so "smitten" has this other use, right? You can be "smitten" with someone-- which I think is interfering. Yep, OK, so lots of fun things to think about allomorphy. OK, so I'm going through all this partly-- so I want to go back to the stuff about morphemes combining with things and creating other things because so far, we have talked about various cases where morphemes combined with other morphemes, and we've also talked about cases where morphemes combined with more than one morpheme. And I kind of want to talk about that a little bit carefully, because it's going to be something that's going to be useful as we go forward in the class. So let's talk about these particular morphemes. Think about the suffix "-ment." What does "-ment" attach to, and what does it create? So you can say things like "government" and "treatment." You cannot say things like "bodyment" or "powerment."
What does "-ment" attach to? What are "govern" and "treat"? Verbs. Verbs, OK. So "-ment"-- erase some stuff here-- put "-ment," it attaches to a verb, and it creates what? AUDIENCE: Noun. NORVIN RICHARDS: A noun, yeah. "Treatment" and "government" are both nouns. And I'll put that again over here. Do-do-do, "-ment" attaches to verbs and creates nouns. OK, cool. Now, how about the prefix "em-," as in "embody" and "empower." What does "em-" attach to, and what does it create? AUDIENCE: [INAUDIBLE] NORVIN RICHARDS: Attaches to nouns and makes them into verbs, aha. I'll just say that again over here. [LAUGHS] There's got to be a better way to do this. Attaches to nouns and makes them into verbs. OK, cool. All right, so we just said, we know why you can't say "bodiment" or "powerment." It's because "-ment" attaches to verbs, and "body" and "power" are not verbs. How come you can say "embodiment" and "empowerment"? AUDIENCE: Because [INAUDIBLE]. NORVIN RICHARDS: All of you are saying the right thing, sort of in chorus. As the semester goes along, eventually, you'll start saying these things in unison, so it would be really cool. Maybe with harmony. So yeah, you can't attach "-ment" to "body" because "body" is a noun. But you can attach "-ment" to "embody." So "embody" is a verb. "Empower" is a verb. "Em-" attaches to the noun "body" or the noun "power" and makes it into a verb. And then, "-ment" attaches to the resulting verb and makes it into a noun again. So it's useful to think of these morphemes, so words like "embodiment" and "empowerment," not just as consisting of three morphemes-- although they do, a prefix, a stem, and then a suffix-- but as consisting of three morphemes that are assembled in a particular order. You first attach the prefix, and then you attach the suffix. People see why I'm saying it that way? That make sense? So here's a way of representing that fact, which is going to come in handy as we go forward.
We're going to want to say, yeah, you start with "power," and you glom "power" together with this prefix. I've given it the label "aff," which is just supposed to stand for "affix." So the affix "em-," the prefix "em-," attaches to the noun "power" and creates a verb. That's what the lower left-hand side of that tree is meant to show you. Does that make sense? And then, that verb is combined with this other affix, "-ment," and the result is a noun, "empowerment." That make sense? So this tree is just a way of representing what I just said, which is it's useful to think of this as not just a word consisting of three morphemes, but a word consisting of three morphemes which are attached in pairs. That is, you attach "em-" to "power," and then you attach "-ment" to "empower." Yeah? AUDIENCE: Are the placements of the affixes on this tree because of whether they're prefixes or suffixes? NORVIN RICHARDS: Yes, yeah. So you're asking, why did they put "em-" before "power," and why did they put "-ment" after "empower"? And yes, it's just whether they're prefixes or suffixes. That's all I was trying to do, right? So the tree is just a way of representing that kind of derivational history of this word. So what's the order of operations that you used to create this word? Yeah? AUDIENCE: So they're ordered-- so it goes from the bottom up? NORVIN RICHARDS: Yes, that's a way to talk about it. Yeah, OK. Yes? AUDIENCE: What [INAUDIBLE]? NORVIN RICHARDS: Oh, well, infixes would be where I would fail at what I'm trying to do. This was your point from just now. I'm trying to represent whether things are prefixes or suffixes by using the tree. If we were doing Tagalog, and I wanted to do the past tense of the verb "baba," which involves an infix "um," I'd probably put together, here's the affix, and this was a verb, and I'm going to put these two things together as a single thing. And then, yes, somewhere else, I would have to say, oh, by the way, this is an infix.
So try not to take too seriously which things are to the left and right of which other things. What I'm really trying to get across with these trees is the order in which you are putting things together. The fact that some things are prefixes, and others are suffixes, and others are infixes, or tones, or whatever all else, that's somebody else's job for today, OK? Right, OK. There's a standard way of talking about trees like this. Trees like this are very popular. Try to get comfortable with them. We will spend lots of time looking at trees as the semester goes along. There's a standard way of talking about them, which involves feminine kinship terms. So we'll say that the sister of "em-" is a noun because those two things are next to each other on the tree. They are two daughters of the same thing, that verb node that's above them both. Another way to say it is that that verb is the mother of both "em-" and "power." I think that's as far as people go with the feminine kinship terms. People don't talk about "aunts," or whatever. So it's "sisters," and "daughters," and "mothers." Nobody even talks about grandmothers. It's nuclear families. Yeah, so there's just a terminology for talking about this. So "-ment" has a verb as its sister, and its mother is an N. When we say "-ment" has a verb as its sister and a noun as its mother, that's a way of saying things like what I have on the board here twice-- that "-ment" is something that takes a verb and converts it into a noun. That's how that kind of fact is represented in these trees. OK, now there will be times-- big surprise, this is like day three. I've been looking at comparatively simple cases. There are going to be examples where we'll want to distinguish multiple affixes that are maybe similar to each other. So think about the prefix "un-." The prefix "un-" shows up in words like "unwrap" and "untie," and also in words like "unlikely" and "unhappy."
Yeah, these two "un-"s arguably mean different things, and they're attaching to different things. So somebody help me be less vague. What's the first "un-" doing? What does it attach to? AUDIENCE: Verbs. NORVIN RICHARDS: Verbs. OK, so there's an "un-" that attaches to verbs. What does the "un-" that attaches to verbs mean? AUDIENCE: An undoing of that verb. NORVIN RICHARDS: Yeah, you undo-- wait, I just used-- yeah, you reverse. There we go. You take the result state of the verb, and you cause it to go back to the way it was before the verb was done, or something like that. Yeah, so if you unwrap something, you take something which is wrapped, and you cause it to not be wrapped anymore, or something like that. Same with "untie." So there's an "un-" that attaches to verbs and means something like "reverse." Or, to put it another way, there's an "un-" that attaches to verbs and means something like "reverse." That's actually putting it the same way, but in a different place, yeah? And then, there's another "un-" that attaches to what? What's the other "un-"? What does it attach to? AUDIENCE: Adjectives. NORVIN RICHARDS: Adjectives, and what does it mean? AUDIENCE: "Not." NORVIN RICHARDS: Yeah, like "not," or something like that. I guess we get into debates about whether "unhappy" and "sad" mean the same thing, or "unlikely" and whatever the opposite of "likely" is. What am I putting here? Adjectives, "not." So two "un-"s. Yes? AUDIENCE: If you have a word like "unwrapping," is the "wrapping" a participle and "un-" a third "un-," or is it "unwrap," and it's "-ing" modifying the "unwrap"? NORVIN RICHARDS: Really good question. What kind of tree should we draw? Should we draw a tree? So the question was for "unwrapping," are we going to attach "un-" to "wrap," giving a verb "unwrap," and add "-ing" to that? Or are we going to add "-ing" to "wrap," giving you "wrapping," and then add "un-" to that, a third "un-"? What do people think? What should we do?
AUDIENCE: It depends on what you're using the word as, yeah. NORVIN RICHARDS: Yeah? Do you have-- AUDIENCE: If you're using the word as the verb, "I am unwrapping this thing," then you would do the first, where you attach the "-ing" to "wrap" and you undo the wrapping. NORVIN RICHARDS: Yeah, so well, we have an "un-"-- maybe this is a way to think about it. We have an "un-" that we can attach to verbs. And so in the case of "unwrapping," I guess we could hope that we can survive with just that "un-" until something forces us to do something else. But you have an idea, too. AUDIENCE: [INAUDIBLE] verb, like "unwrapping"? NORVIN RICHARDS: Yeah. AUDIENCE: Then, then "unwrap" [INAUDIBLE] the "ing" part of it, [INAUDIBLE]? NORVIN RICHARDS: Yeah. AUDIENCE: [INAUDIBLE] something [INAUDIBLE] of the work? NORVIN RICHARDS: Yes. AUDIENCE: [INAUDIBLE] to now, and [INAUDIBLE]?? NORVIN RICHARDS: So I guess there are two words. Unwrapping, this was your point, too. There are two words, "unwrapping," that we should think about. One is, "He is unwrapping the presents," and the other is, "Unwrapping presents is fun," where "Unwrapping presents is fun," hopefully, "unwrapping" is some kind of noun. It's what people sometimes call a gerund, where you convert a verb into a noun that means the process of doing the verb, or something like that. Maybe in both cases, we would hope, yeah, that you would first create the "un-" verb, and then you would add "-ing" to it, because "-ing" attaches to verbs and does other things with them sometimes. They're still verbs. Sometimes they're nouns now. That would be my first hope until somebody made me hope for something else. Maybe one way to say this is, these are cases where we know that there have to be two "un-"s, both because these two "un-"s seem to mean slightly different things, and because, well, they're just-- they are attaching to things of different kinds, verbs and adjectives. 
Your example, where we're adding "un-" to-- so suppose we were thinking about the "unwrapping is fun" case, the gerund case. If you wanted to add "un-" after you added "ing," after you made it into a noun, you'd be invoking an "un-" that attaches to nouns. And then the question, the next question people would ask you, would be, OK, so can you show me an example of "un-" attaching to a noun where the noun isn't created from a verb with "ing," just a plain noun? That would be the strongest kind of evidence that we needed a third "un-." Does that make any sense? Good question. Other questions? OK, all right, so we need-- what does it [INAUDIBLE] to, what does it create? OK, cool, so we need two "un-"s. Here's another affix to think about: "-able." "Drinkable," "breakable," "watchable," what does "-able" attach to? AUDIENCE: Verbs. NORVIN RICHARDS: Verbs, and what do you get? AUDIENCE: Adjectives. NORVIN RICHARDS: Adjectives, yeah. So "-able," "-able" converts verbs into adjectives, and it also converts verbs into adjectives. "Un-" doesn't convert the things it attaches to at all so far, yeah? So "un-"-- we'll go back to "un-." Sure, we will-- there we go. "Un-" attaches to verbs, and what you get is a verb. "Un-" attaches to adjectives, and what you get is an adjective. So it changes the meaning, but it doesn't change the category of the word. The thing is still the same. Yeah, OK. OK, so we have "un-"s, one that attaches to verbs and makes new verbs-- yes, thank you, I just said that-- and then another one that attaches to adjectives and makes new adjectives. And then we have an "-able" that attaches to verbs and makes adjectives, yeah? OK? So now let's think about a case where we're adding "un-" and also "-able," a word like "unlockable." What should that be able to mean? AUDIENCE: [INAUDIBLE]. NORVIN RICHARDS: Yes, it's the answer. So if we're going to start with "lock"-- let me find an eraser. So we'll do this over here first.
If we're going to start with "lock," what shall we add first? AUDIENCE: "Un-." NORVIN RICHARDS: "Un-," let's say, OK, so "lock" is a verb. We'll add "un-." We'll get-- so here's an affix. Put these two things together, what will the result of adding "un-" to "lock" be? AUDIENCE: A verb. NORVIN RICHARDS: Another verb, yeah. And then we'll take "-able," which we know attaches to verbs and creates adjectives. Yeah, so we'll take that affix. We'll attach another verb. And what we'll get will be an adjective. So we'll have an adjective which we created by first putting "un-" on "lock," and then putting "-able" on "unlock." I'm going to draw the other option over here. And where am I going to put it? Here. So over there, we attached "un-" first, so this time, we're going to take the verb "lock", and we're going to add "-able" to it. What's the result going to be when we add the affix "-able" to the verb "lock"? An adjective. And we're going to add the affix "un-" to that adjective, and the result of that will be an adjective. Both of these will be pronounced "unlockable." But what will they mean? What will that one mean? AUDIENCE: Not able to be locked. NORVIN RICHARDS: It cannot be locked. Yeah, so this is a broken lock. Yeah, it is unlockable. Yeah, right, because when you add "-able" to "lock," you get an adjective-- possible to lock this thing. And then, when you add "un" to it, you say not possible to lock this thing. How about that one, that "unlockable" over there, the one where we first added "un-" to "lock" and then we added "-able?" What does this one mean? AUDIENCE: "Can be unlocked." NORVIN RICHARDS: "Can be unlocked." This is not a broken lock, a lock that's working properly. Yeah, and that's our intuition about the word "unlockable." It is ambiguous. If you say, this door is unlockable, because I have the key, or this door is unlockable, somebody calls a locksmith. Does that sound right? It can be either of those things. Yes? 
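[The two derivations of "unlockable" just worked through on the board can be sketched computationally. This is my own illustration, not part of the lecture: the rule table encodes the board's statements ("un-" attaches to a verb giving a verb, or to an adjective giving an adjective; "-able" attaches to a verb giving an adjective), and the category labels V/A are assumptions.]

```python
# A sketch of merging affixes with category checking.
# Each affix maps the category it attaches to onto the category it creates.
RULES = {
    "un-": {"V": "V", "A": "A"},   # reverses a verb, or negates an adjective
    "-able": {"V": "A"},           # turns a verb into an adjective
}

def merge(affix, tree):
    """Attach an affix to a (category, structure) tree; fail if it can't attach."""
    category, structure = tree
    new_category = RULES[affix].get(category)
    if new_category is None:
        raise ValueError(f"{affix} cannot attach to a {category}")
    return (new_category, (affix, structure))

lock = ("V", "lock")
# Order 1: "un-" first, then "-able" -> "able to be unlocked"
tree1 = merge("-able", merge("un-", lock))
# Order 2: "-able" first, then "un-" -> "not able to be locked"
tree2 = merge("un-", merge("-able", lock))
print(tree1)  # ('A', ('-able', ('un-', 'lock')))
print(tree2)  # ('A', ('un-', ('-able', 'lock')))
```

Both orders category-check, so the word comes out ambiguous, exactly as in the two trees on the board; "powerment," by contrast, would raise an error, since "-ment"-style rules would find no entry for a noun stem.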
AUDIENCE: I [INAUDIBLE], I don't think there is a particular [INAUDIBLE] on [INAUDIBLE]. If we looked at "unsinkable," for instance-- NORVIN RICHARDS: Yeah? AUDIENCE: I don't tend to hear "unsinkable" as meaning "able to be unsunk." NORVIN RICHARDS: Right, do you think there is a verb "unsink"? AUDIENCE: I don't know. NORVIN RICHARDS: Yeah, so actually, you're raising-- oh, yes. AUDIENCE: Does that have to do with the transitivity of the verb? NORVIN RICHARDS: Well, you can sink a ship, so the verb's transitive. And the ship can also sink, so it can be intransitive. You can lock a door, and a door can lock. So we have a bunch of verbs that can go back and forth between being transitive and intransitive. And we'll talk more about transitivity later. But for people who are wondering what the heck we're talking about, verbs are said to be transitive if they have both a subject and an object. So if you sink a ship, "I sank the ship," then the verb is transitive. If you say, "the ship sank," then there's only a subject. There's no object. So the verb is intransitive. English has a lot of verbs that can be either. And a lot of the groups that are up here can be either. Your question raises a really good point, though, which is, I just casually said, you can add "un-" to verbs, and it means undo the effect of the verb-- yeah, look, "unwrap," "untie," yeah. But you can't unsink a ship. Even if you go down with divers, and find the ship, and bring it back to the surface, you're still not unsinking the ship, I don't think. It doesn't matter how good you are at this. So we have to say some other things about that "un-." You're raising a good point. There are some verbs that cannot be reversed, and "sink" maybe is one of them. [INAUDIBLE], Joseph, yeah?
AUDIENCE: This might just be me, but I feel like because the word is ambiguous, at least in the context of that word, I feel like in my mind, my brain has decided that "unlockable" means something that generally this one, like it cannot [INAUDIBLE]. NORVIN RICHARDS: The other one? Yeah, yeah. If your brain is like my brain, then it really likes that one, the one you can't see. It's not your fault. Yeah, it's over there under the slide, yep. Yep, yep. AUDIENCE: It's like something that can be unlocked, yeah. NORVIN RICHARDS: Yeah, yeah, it's unlockable. The door is unlockable. Yeah. AUDIENCE: It's unlockable? NORVIN RICHARDS: Yep. AUDIENCE: So you can say the lock is broken. NORVIN RICHARDS: Yeah, yeah, I might be more likely to say that. I think you're right. I have that feeling too. And actually, I cheated a second ago when I said "unlockable." It's ambiguous. You can say either one. But for me, at least, if I want to mean that one-- the one that means it's broken, you can't lock it-- I kind of have to say it's "un-lockable," like as opposed to this one, which you say, it's "unlockable." I kind of have to put another stress on "un" or something like that. That's my feeling. But maybe some of you are looking at me as though I've grown two heads, which is a kind of look that I get a lot just kind of walking around, so, you know, I'm used to it. Does anybody else have that feeling? I don't think I pronounce these two verbs the same way, these two adjectives in the same way. Yes? AUDIENCE: I think [INAUDIBLE] put a stress in the word [INAUDIBLE]. AUDIENCE: Yeah. [INAUDIBLE] not [INAUDIBLE] NORVIN RICHARDS: Yep. AUDIENCE: [INAUDIBLE] NORVIN RICHARDS: Yes, now this is 24.900. We will surely not get to the bottom of this. But this is a really interesting sort of question. And maybe one of the things we're learning is that if we want to have a full understanding of how stress works, we need to be willing to think about these structures.
These structures are apparently informing our intuitions about where stress goes and how it's treated, which is kind of interesting. You had a hand up a while ago, and I never got to you. I'm sorry. AUDIENCE: In the sentence, you can't "unsink a ship," NORVIN RICHARDS: Yeah? AUDIENCE: Sink does sound correct. NORVIN RICHARDS: Oh. AUDIENCE: That's the previous-- NORVIN RICHARDS: You can't unsink a ship? Oh, boy. In general, you can't unsink a ship, but I think this ship might be unsinkable. Maybe, I don't know. You can't unburn a letter. You can't unbreak a teapot. Yeah. [LAUGHS] So maybe I was too fast to say there are verbs that can't be reversed. And to the extent that you can reverse them, I think you then get both of these adjectives. So-- AUDIENCE: [INAUDIBLE] certain affects. NORVIN RICHARDS: Yeah, yeah, yeah. Whoo, these are deep waters. Fortunately for me, other people are raising their hands. Let me see what they have to say. Yes? AUDIENCE: I think it's like you're pointing out in the sentence, the fact that it's not the same. NORVIN RICHARDS: Oh, that's true. AUDIENCE: You're pointing out in the sentence like, almost like that sounds ridiculous. You can't unsink a ship. That's not going to work. NORVIN RICHARDS: Yeah, yeah, yeah. "We have this crazy scientist at MIT. He's doing this weird diving research." He's spending all his time trying to unsink ships. [LAUGHTER] I don't know. Maybe it's just a matter of the number of times you say it, so that eventually-- [LAUGHTER] It's just unlocking a door, this is the kind of thing that happens all the time, whereas unwrapping a present, whereas unsinking a ship, people generally don't. Yeah, maybe that's all that's happening. Did you have a question? AUDIENCE: Well, it was I was essentially going to say that it might have to do with whether or not the reversion is temporary. If the reversion of the verb is a permanent thing, and you won't be able to restore it to the previous state, [INAUDIBLE]?
NORVIN RICHARDS: Ah, so if you unlock a door, you can lock it again. Of course, if you can unsink ships, you can probably sink them again, right? [LAUGHTER] This is probably going to get expensive and wet. But yeah, so it's presumably something you can do. Raquel? AUDIENCE: I have two thoughts. And the first thought is that "unlock" is an actual word, but I don't think "unsink" is an actual word. So maybe we think of "unlock" as its own specific word that we can add "able" to. And then, the second thought is that "unbearable", that breaks this, because we don't think "unbear." NORVIN RICHARDS: Oh, that's a very-- that's-- AUDIENCE: You can't unbear a bear. [LAUGHS] NORVIN RICHARDS: Well, yeah. So but OK, those are both really interesting thoughts. I mean, the first thought is what we're arguing about, is whether "unsink" is a word or not. And I guess the answer that's emerging is, sort of. It's just not something you do all the time. And so our intuitions about it are kind of fuzzy. "Unbearable" is a really nice example. Basically, what that's telling us is that-- what the heck is it telling us? AUDIENCE: It's unbeared? NORVIN RICHARDS: So "bearable," right, the adjective "bearable," it has a couple of meanings. The meaning that you have in mind, the meaning that you have in mind-- let me remind my computer not to do that, sorry. The meaning that you have in mind is one where "bearable," it doesn't just mean "can be carried," right? It means something more emotional than that. And no, but it can have that kind of emotional meaning when you're bearing-- you know. Yeah, what? "She bore the loss of her stock holdings well." Yeah, so I don't know. So maybe, OK. 
So I guess what we're learning is that although you cannot unbear something-- maybe because there's no sense in which you can reverse the bearing of something-- there is an adjective "bearable," and you can add the other "un-" to that, the "un-" that attaches to "bearable," to adjectives, that means "not bearable." There, yes. OK, good. I got distracted by emotions, as so often. But yeah, that just has to do with the fact that "un-"-- we're now debating about whether you can unsink a ship, but you surely can't unbear a loss. I don't even know what that would mean. Yeah, yeah. Wow, lots of questions. Yes? AUDIENCE: I think it's about the number of times [INAUDIBLE]. Like we used to say "follow," and now we can say "unfollow." NORVIN RICHARDS: Oh, yeah. Yeah, yeah, "follow" and "unfollow" is a really good example, actually. Yeah, so it used to not be possible to unfollow things. One crucial difference, I guess, is that if you wrap something or tie something, you change it so that it's different now as a result of what you've done. So if you tie something, it used to be just a string, and now it's a knot. Or if you wrap something, it used to be a thing, and now it's a present. If you sink a ship, this is what we've all been talking about. It goes from being on top of the water to being underneath it. As opposed to "follow," if you follow someone in real life, you don't do anything to them. They're still the same as they were. We kind of have this intuition. The "follow" that you can unfollow someone is something that changes them from a person who has n number of followers to a person who has n plus 1 number of followers. So there's something about whether the verb changes the state of the object. I think that's got something to do with all this, which is why these examples about unsinking ships and unbreaking pots are kind of interesting to think about. Yes? AUDIENCE: It turns out I just searched [INAUDIBLE] "unsink" on the Scrabble dictionary, and it says no.
NORVIN RICHARDS: Well, Scrabble thinks it's not a word. So I mean, Scrabble, they have to have rules. They have to draw a line somewhere, yeah. Yes? AUDIENCE: [INAUDIBLE] they make [INAUDIBLE]. NORVIN RICHARDS: So Scrabble, we should have a Scrabble version of 24.900 where we try to figure out what's going on together. But Scrabble is not English, right? It's something related to English, but not the same. We're allowed to have our own intuitions about what things are words and what things aren't, yeah? AUDIENCE: Yeah, I'm not sure how to phrase this question, but it's like you started off the lecture by saying that meaning comes from morphology. NORVIN RICHARDS: Yes. AUDIENCE: And now we're trying to, I guess, figure out what [INAUDIBLE] means, right? NORVIN RICHARDS: So one way to think about it would be to think, so this stuff about which verbs can you attach "un-" to, where I just said-- well, so we started by saying "un-" can attach to verbs. And it gives you a new verb that means, "take the thing that is in the state resulting from the action of the verb and put it back in the state that it was in before you did the verb." That's one way of defining what "un-" means, yeah? And then, did that make sense? So if you untie something, what does that do? It takes something which is tied, which is in the state that you're in as a result of the verb "to tie," and converts it into something which is not tied anymore. That's what it is to "untie" something. So if that's what "un-" means, then yes, you should be able to attach it to verbs. But you should only be able to attach it to verbs that change the state of their objects because it makes reference to this change of the state of the object. This is me making stuff up right here in front of all of you, but this seems reasonable to me. I think it might be something like that. Yes? AUDIENCE: Wouldn't that make "unthink" valid? NORVIN RICHARDS: It should. Well, so it should be valid, yes.
AUDIENCE: When you think something, doesn't it change the state? NORVIN RICHARDS: It certainly does change the state, yes. Gosh, is that the time? Yeah. [LAUGHTER] So look, this is-- so for me, at least, this is what is cool and fun about linguistics. Here it is, day 3, and we are right on the edge. I'm a professional linguist, this is what I do for a living, and I'm not quite sure what the answer to this question is. So you guys could be the ones who figure out what the heck is going on with "unsink," to which the answer could be, yeah, "unsink" is more or less OK. It's just not something people do all the time, and so we're not used to hearing it. Maybe that's the easy way of getting out of where I am. Or maybe there's more to say about what exactly "un-" does to the meaning of the verb. But that's the kind of answer to your question that we would get-- that if we define everything about the meaning of "un-," that will help us understand why it can attach to some verbs but not others, yeah. And as you can see, we're not quite there, maybe. Joseph? AUDIENCE: So well, I guess if you put it together-- if you have the verb "help," could you help somebody-- NORVIN RICHARDS: Yes? AUDIENCE: [INAUDIBLE] NORVIN RICHARDS: Well, you can be unhelpful. AUDIENCE: Unhelpful. NORVIN RICHARDS: Oh, god. [LAUGHTER] But there, you're adding it to an adjective, yeah. Yeah, yeah, so you cannot unhelp someone. [INTERPOSING VOICES] So I mean this is a good example of something somebody should have said to me when I first said, well, some verbs change the state of the object, and others don't. So if I burn something, it goes from being a thing to being a pile of carbon. And if I break something, it goes from being a thing to being-- if I sink a ship, it goes from being on top of the water to being under the water. If I help someone, maybe they're the same as they were before, just in a better position?
This is the place where you're entitled to ask, what do you mean by changing the state? What counts as the state? And we really must go on. Yes? AUDIENCE: [INAUDIBLE] reversible change in state? NORVIN RICHARDS: I guess yeah, maybe that's-- AUDIENCE: So you can't exactly revert your help. Like you can hurt someone. NORVIN RICHARDS: Yeah. AUDIENCE: Following that, but you can't exactly undo the help that you've already done. NORVIN RICHARDS: I don't know. I mean, look, suppose I write you a recommendation letter for a job, right? That's helpful to you. I helped him. And then, suppose I write another letter saying, you know what? I take it all back. In fact, it's the opposite of everything I just said. AUDIENCE: [INAUDIBLE] person [INAUDIBLE].. [INTERPOSING VOICES] NORVIN RICHARDS: Hold on, guys. Yeah, yeah, yeah, no, I see what you mean. All right, so moving on, then, from this slide, which has generated so much cool discussion. This is great. So trees. Yeah, there, why is "unlockable" ambiguous? Because it has two trees-- or to put it another way, there are these two morphemes you're adding to "lock," and you can add them in either order, with the result that the word is ambiguous in the way that we suspect, we expect. So here's a model for how we're going to make words. We're going to take morphemes, pairs of morphemes, and we're going to glom them together. And we feel silly saying the word "glom" over and over again, so we've made up another word for that. We call it "merge." So "merge" is this process that allows you to take two things and put them together and make a new thing. So you take "un-" and "lock," and you merge them together to make "unlock." Merge is recursive, which means only that it can reapply to its own output. So you take "un-" and "lock" and merge them to make "unlock," and now you have a new thing, "unlock." And that can also undergo merge. So you can take "unlock" and merge it with "able" to get you "unlockable". 
That's what's going on in the tree on the left. That's all linguists mean when we say that merge is recursive. You may sometimes hear people have arguments about what it means. That's what it means. So it means once you've merged two things to create a new thing, you can take that new thing and apply merge to it as well and make new things that way. Yeah, that's all it means. OK, and then there are statements which we have to put somewhere, like "-able" needs to merge with a verb and the result is an adjective-- things like that. Yes? AUDIENCE: Does the tree have to be binary? NORVIN RICHARDS: Ah, so I said we're going to take two things, and we're going to merge them and make a new thing. A really good question about which people have arguments, yeah? Actually, it seems to be the case that the trees are binary, which is interesting. So to put it another way, there are lots of places where you can't tell whether the tree is binary, or ternary, or whatever. But in every place where you can tell, nobody has come up with a convincing example of a tree that is anything other than binary, which is itself kind of interesting. I want to try to understand why that is. We're going to be doing lots of merging in morphology. And then, not so much in phonology. And then, when we get to syntax, there will be much merging, many merging of things-- words, putting them together to make phrases of various kinds. So try to get comfortable with the concept of merge. It should be easy. It's not a hard concept. OK, this is great. I just want to talk-- ooh, I'm going to talk a little more systematically about allomorphs. So I mentioned before, you take morphemes, you merge them together, you create new things, and what often happens is that one or another or both of the morphemes will change its form. So the past tense of "leap" is "leaped" [pronounced like "lept"]. The verb changes from "leap" to "leap" [pronounced like "lep"]. 
The past tense suffix can either be "t," or "d," or some other things. So I want to talk a little bit systematically about these allomorphs. I'll start by saying that there are cases where there just isn't anything interesting, there isn't anything helpful to say. The past tense of "go" is "went," and it just is. If you want to learn English, you've got to learn that. There's nothing else to say. But there are other places where there are general laws about which allomorph you're going to get under which circumstance. We alluded to that a little bit when we were talking about cats and dogs, that "s" and "z." Those allomorphs of the plural are conditioned by the sound that's before them. And we'll talk more about that. So I'm going to give you an example where there's more to say about the allomorphs. This is an example from Polish. I should say I've spelled Polish here in a strange way. If anybody here speaks Polish, this isn't the way you're used to seeing Polish spelled. So I've spelled it in a way that makes it easier for people who don't speak Polish to see how the words are pronounced, more or less. So here are some Polish words. They mean "language," "pot," "juice," and "bow," and you've got some plurals there. So does anybody here speak Polish? Excellent, I can just make anything up that I want. [LAUGHTER] Believe me, this is how Polish works. It is, really. I'm not making things up. Does anybody find any morphemes in here? I mean, there are morphemes that mean "language," "pot," "juice," and "bow." What's the plural morpheme? "Ee," yeah? So it's a suffix, it's spelled with the letter "i," and it's pronounced "ee." No allomorphy here so far. Here are some more words-- "bank of a river," "debt," and "lye," which have plurals. The idea of a plural of "lye" is a little strange, if you ask me, but nobody asked me. If you add the plural morpheme-- the plural morpheme there is "ee"-- you get those plurals. Anybody seeing any allomorphs?
So the plural suffix is still the same, but "debt," for example, has two allomorphs. What are they? Somebody attempt to pronounce them. It's OK, none of us speak Polish. Yeah? AUDIENCE: "Duk" and "dug?" NORVIN RICHARDS: Yeah, we've got "duk" and "dug," yeah. So "debt" has one allomorph that ends in a "k" and another allomorph that ends in a "g." What's the rule, do you think, that determines when you get "k" and when you get "g"? Yeah? AUDIENCE: Would it have to do with the consonant that comes before the "i," where it [INAUDIBLE] like sound like [INAUDIBLE], so it's like "ch," "k"? NORVIN RICHARDS: Oh, I see. That's a neat idea. So maybe we could work out a way-- so all of you are realizing the problem. The problem is there are the words we started with, where we have all these words that end in "k," and when you make the plural, there's still a "k" before the "ee." And then, we have these other words, where the word ends in "k," and then, when you add the plural, you get a "g." Yeah? AUDIENCE: I feel like we need more information, because even if you look at the words for "bow" and "lye," they're the same word in the singular form. NORVIN RICHARDS: Very nice point. So this slide was carefully constructed to have what's called a minimal pair. Yeah, "bow" and "lye" are both pronounced "wuk" in the singular, but in the plural, the plural of "bow" is "wuki" and the plural of "lye" is "wugi," yeah? This is meant to make us despair-- [LAUGHTER] --about the option that you were raising, right, which was maybe if we look at everything about the word, we'll be able to predict which "k"s become "g"s and which "k"s stay "k"s. No, give up. Despair, yeah? Cannot be done. If we start with a "k," we cannot predict which "k"s will become "g"s and which ones will not. Yeah? AUDIENCE: [INAUDIBLE] one of those examples [INAUDIBLE] not [INAUDIBLE] things [INAUDIBLE], but [INAUDIBLE]. NORVIN RICHARDS: I think that's a neat idea. So can you say more about that?
AUDIENCE: I mean, my first guess might just be that if they end in "g-i," is it [INAUDIBLE] into a "k"? NORVIN RICHARDS: OK, yeah, so I'll just say again what you just said, just to make sure everybody heard it. I framed the problem as, which "k"s change into "g"s, and which "k"s don't change into "g"s? And then we convinced ourselves, thanks to your point, that that problem is impossible. We will just have to list: well, sometimes "k" becomes "g." Sometimes "k" doesn't become "g." But that's because I phrased the problem as, which "k" becomes "g" and which one doesn't? If we look at this problem in the other direction-- this is your suggestion-- if we start with the plural and think about the singular, then we can say to ourselves, OK, there are some nouns that end in "k" and some nouns that end in "g." In the singular, what's the rule? In the singular, do you get "k," or do you get "g"? AUDIENCE: "k." NORVIN RICHARDS: "k," always. Yeah, so what we need for Polish is the willingness to say, yeah, there are nouns that end in "k" and nouns that end in "g." The plural suffix is this letter "i," this "ee." And there's a very general rule: if you have a "g" at the end of a word, turn it into a "k." So if we're making the lexicon for Polish, what we want is for the word for "bow" to be in the lexicon as "wuk," and for the word for "lye" to be in the lexicon as "wug." Yeah, and then the plural suffix is "ee," and there's a general rule, "g" at the end of a word becomes "k," right? This means the lexicon is a little bit abstract. If you ask a Polish speaker, what's the word for "lye," they're going to say "wuk." And if you say, no, it's not, it's "wug," they're going to be like, who's the Polish speaker around here, you or me? [LAUGHTER] But in fact, you are right and the Polish speaker is wrong. The word for "lye" is "wug," and then there's this rule-- "g" becomes "k" at the ends of words.
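The analysis just sketched-- underlying forms in the lexicon plus one general final-devoicing rule-- can be written out as a short program. This is a minimal illustration, not anything from the lecture itself: the names `LEXICON`, `devoice_final`, and so on are hypothetical, and the spellings follow the lecture's informal romanization, not Polish orthography.

```python
# A sketch of the analysis of Polish: list underlying forms in the
# lexicon (some ending in "g"), then apply one general rule, final
# devoicing ("g" becomes "k" at the end of a word). All names here are
# hypothetical; spellings follow the lecture's informal romanization.

LEXICON = {"bow": "wuk", "lye": "wug", "debt": "dug"}  # underlying forms
PLURAL = "i"  # the plural suffix, pronounced "ee"

def devoice_final(word):
    """Final devoicing: a word-final 'g' surfaces as 'k'."""
    return word[:-1] + "k" if word.endswith("g") else word

def singular(noun):
    return devoice_final(LEXICON[noun])

def plural(noun):
    # The suffix keeps the "g" away from the end of the word,
    # so devoicing never applies to it.
    return devoice_final(LEXICON[noun] + PLURAL)

print(singular("bow"), singular("lye"))  # wuk wuk  -- the minimal pair
print(plural("bow"), plural("lye"))      # wuki wugi
```

Running it shows why the prediction only works in one direction: both nouns surface as "wuk" in the singular, so you cannot recover the underlying "g" from the singular alone, but the underlying forms plus the rule correctly generate every singular and plural.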
What that means is that the morpheme for "lye," if we want to know, what's the morpheme for "lye," well, it isn't necessarily what the speaker would say it is. It doesn't have to be a morpheme that you ever hear by itself. It only shows up in the plural, not in the singular. That's the most economical way of talking about Polish. Because the fact is, I can tell you this from my vast knowledge of Polish, you will never, no matter how many Polish words you look at, find one that ends with a "g." They never do. Plenty that end in "k," but they never end with a "g." So this rule is one that we can rely on. Yes? AUDIENCE: Does that make "wug" a bound morpheme? [LAUGHTER] NORVIN RICHARDS: Well, this morpheme is a free morpheme because it doesn't need to combine with anything. It has an allomorph, "wuk," which is the result of this general sound change, that "g" becomes "k" at the ends of words. I guess that's the easiest way to say it. That's a good point, yeah. Yeah? OK. So this is a place-- I told you, "go" has the past tense "went," and give up on coming up with sound changes that will get you that. That's just our ancestors being quite peculiar. Yeah, but Polish has this general rule that changes final "g" to "k." And so for Polish, we don't in the end need to list allomorphs. "Lye" has allomorphs "wuk" and "wug." We can, if we want-- if we need a hobby. It'll keep us off the streets. But it's OK to just say, no, there's a word for "lye," "wug," and then there's this general sound change law that changes the "g" to a "k." Yes? AUDIENCE: Is that just transliteration, or is it the actual "wug"? NORVIN RICHARDS: So this is not Polish orthography, yeah. So this is not an ordinary Polish spelling. I think I started by saying that. Yeah? AUDIENCE: I was just [INAUDIBLE], I don't know Polish, but I know Ukrainian and Russian. NORVIN RICHARDS: Yeah. AUDIENCE: So I [INAUDIBLE]. NORVIN RICHARDS: Yes. AUDIENCE: And like I was just checking the actual Polish word.
NORVIN RICHARDS: Yes. AUDIENCE: And [INAUDIBLE] that actual Polish word for "wuk," "wohg," actually has a "g" in there. NORVIN RICHARDS: Yes, yes. AUDIENCE: [INAUDIBLE] check the one for "lye"? No, no, no-- NORVIN RICHARDS: No, that's "lye," yeah. AUDIENCE: [INAUDIBLE] NORVIN RICHARDS: Yes. AUDIENCE: And if you check the one for "lye," that [INAUDIBLE]. NORVIN RICHARDS: Yeah. AUDIENCE: [INAUDIBLE] NORVIN RICHARDS: So I think I started this by saying, I'm going to spell these Polish words in an unusual way. And I tried to make it sound like that was for your benefit, so that you would know how the words were pronounced. But I was actually obscuring the fact-- which you were bringing up-- which is that Poles spell as though this rule had not happened. So they pronounce the word for "lye" and the word for "bow" the same way. But you're right, they spell the word for "lye" ending with the "g." They also don't use the letter W. This is their barred L. Yeah, yes? AUDIENCE: So for language acquisition, I've been studying Polish? Because it would almost seem like speakers of Polish gain the plural form of the word before the singular. NORVIN RICHARDS: Oh, I see what you mean. There's a lot of work on the acquisition of morphological and phonological rules like this. I don't know, I can't swear that people have worked on this in Polish. My understanding is that children-- this is a very common kind of sound change, and we'll talk more about it as the class goes along. And my understanding is that children who are learning languages that have this kind of sound change-- it's called final devoicing-- acquire it kind of immediately. It's not something that takes them a while. Other questions? Children are very smart. This is one of the big results of language acquisition research. They know things before you would think they do. OK, let's see where we are. Yeah, that's where we are. Yep, so we'll never be able to predict which "k"s change to "g" in the plural.
So what we'll do is posit these underlying forms over there on the right, some ending in "k," others ending in "g," and then we'll have a rule: "g" becomes "k" at the end of a word. And given the time, I think this is a good place to stop. Are there any questions? Yes? AUDIENCE: [INAUDIBLE] is there any [INAUDIBLE]? Or like [INAUDIBLE]? NORVIN RICHARDS: You know what? Let's talk about that when we do phonology a little more carefully. Because yes, this change from "g" to "k," you're right, is part of a more general change. There are lots of things like this that happen. And it's a very common change. So we'll try to talk more about it as we get closer to that. Good question. So again, the problem set is due today, interpreted generously. It's due by dawn. And I'll try to put up a new problem set before today is over.